DoTheEvolution 2020-04-10 00:52:11 +02:00
parent 2b1b47cba6
commit abd2c15287
14 changed files with 1043 additions and 0 deletions


@ -1,2 +1,15 @@
# selfhosted-apps-docker
Guide by Example
---------
* bitwarden_rs - password manager
* bookstack - notes and documentation
* borg_backup - backup utility
* homer - home page
* nextcloud - file share & sync
* portainer - docker management
How to get each of them running is described in its own folder.
Containers don't publish ports directly, since a caddy reverse proxy handles access.

bitwarden_rs/readme.md Normal file

@ -0,0 +1,180 @@
# Bitwarden_rs in docker
###### guide by example
![logo](https://i.imgur.com/BQ9Ec6f.png)
### Purpose
Password manager. The rs version is simpler and lighter than the official Bitwarden server.
* [Official site](https://bitwarden.com/)
* [Github](https://github.com/dani-garcia/bitwarden_rs)
* [DockerHub image used](https://hub.docker.com/r/bitwardenrs/server)
### Files and directory structure
```
/home
└── ~
└── docker
└── bitwarden
├── 🗁 bitwarden-backup
├── 🗁 bitwarden-data
├── 🗋 .env
├── 🗋 docker-compose.yml
└── 🗋 bitwarden-backup-script.sh
```
### docker-compose
[Documentation](https://github.com/dani-garcia/bitwarden_rs/wiki/Using-Docker-Compose) on compose.
`docker-compose.yml`
```
version: "3"
services:
bitwarden:
image: bitwardenrs/server
hostname: bitwarden
container_name: bitwarden
restart: unless-stopped
volumes:
- ./bitwarden-data/:/data/
environment:
- TZ
- ADMIN_TOKEN
- DOMAIN
- SIGNUPS_ALLOWED
- SMTP_SSL
- SMTP_EXPLICIT_TLS
- SMTP_HOST
- SMTP_PORT
- SMTP_USERNAME
- SMTP_PASSWORD
- SMTP_FROM
networks:
default:
external:
name: $DEFAULT_NETWORK
```
`.env`
```
# GENERAL
MY_DOMAIN=blabla.org
DEFAULT_NETWORK=caddy_net
TZ=Europe/Prague
# BITWARDEN
ADMIN_TOKEN=YdLo1TM4MYEQ948GOVZ29IF4fABSrZMpk9
DOMAIN=https://passwd.blabla.org
SIGNUPS_ALLOWED=true
# USING SENDGRID FOR SENDING EMAILS
SMTP_SSL=true
SMTP_EXPLICIT_TLS=true
SMTP_HOST=smtp.sendgrid.net
SMTP_PORT=465
SMTP_USERNAME=apikey
SMTP_PASSWORD=SG.MOQQegA3bgfodRN4IG2Wqwe.s23Ld4odqhOQQegf4466A4
SMTP_FROM=admin@blabla.org
```
### Reverse proxy
Caddy v2 is used, details [here.](https://github.com/DoTheEvo/Caddy-v2-examples)</br>
Bitwarden_rs documentation has a [section on reverse proxy.](https://github.com/dani-garcia/bitwarden_rs/wiki/Proxy-examples)
`Caddyfile`
```
{
# acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
passwd.{$MY_DOMAIN} {
encode gzip
reverse_proxy /notifications/hub/negotiate bitwarden:80
reverse_proxy /notifications/hub bitwarden:3012
reverse_proxy bitwarden:80
}
```
### Forward port 3012 on your router
- the WebSocket protocol on port 3012 is used by clients for live-sync notifications
### Extra info
* **bitwarden can be managed** at `passwd.blabla.org/admin` and entering `ADMIN_TOKEN` set in the `.env` file
![interface-pic](https://i.imgur.com/5LxEUsA.png)
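The `ADMIN_TOKEN` should be long and random. One way to generate one (any random-string generator works):

```
openssl rand -base64 48
```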
### Update
* [watchtower](https://github.com/DoTheEvo/docker-selfhosted-projects/tree/master/watchtower) updates the image automatically
* manual image update</br>
`docker-compose pull`</br>
`docker-compose up -d`</br>
`docker image prune`
### Backup and restore
* **backup** using the [borgbackup setup](https://github.com/DoTheEvo/docker-selfhosted-projects/tree/master/borg_backup)
  that makes a daily backup of the entire directory
* **restore**</br>
  down the bitwarden container `docker-compose down`</br>
  delete the entire bitwarden directory</br>
  copy the bitwarden directory back from the backup</br>
  start the container `docker-compose up -d` (a command sketch follows below)
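A rough sketch of those steps using the commands from the borg_backup section; the archive name, mount point, and paths are just examples and depend on the actual repo:

```
cd ~/docker/bitwarden && docker-compose down
cd ~/docker && rm -rf bitwarden
# pick an archive from the borg repo and mount it
borg list ~/borg_backup/docker_backup
borg mount ~/borg_backup/docker_backup::1584472836 ~/temp
# copy the bitwarden directory back, paths inside the archive mirror what was backed up
cp -a ~/temp/home/bastard/docker/bitwarden ~/docker/
borg umount ~/temp
cd ~/docker/bitwarden && docker-compose up -d
```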
### Backup of just user data
For additional peace of mind.
User data is exported daily using the [official procedure.](https://github.com/dani-garcia/bitwarden_rs/wiki/Backing-up-your-vault)</br>
For bitwarden_rs that means a sqlite database dump and the content of the `attachments` folder.
The backup files are overwritten on every run of the script,
but borg backs up the entire directory into daily snapshots, so there is no need to keep the last X copies around.
* **install sqlite on the host system**
* **create backup script**</br>
  placed inside the `bitwarden` directory on the host
  `bitwarden-backup-script.sh`
```
#!/bin/sh
# GO INTO THE DIRECTORY WHERE THIS SCRIPT RESIDES
cd "${0%/*}"
# CREATE BACKUP DIRECTORY IF IT DOES NOT EXIST
mkdir -p ./bitwarden-backup
# CREATE SQLITE BACKUP
sqlite3 ./bitwarden-data/db.sqlite3 ".backup './bitwarden-backup/bitwarden-BACKUP.db.sqlite3'"
# BACKUP ATTACHMENTS
tar -czvf ./bitwarden-backup/attachments.tar.gz ./bitwarden-data/attachments
```
the script must be executable - `chmod +x bitwarden-backup-script.sh`
* **cronjob** on the host</br>
`crontab -e` - add new cron job</br>
`0 2 * * * /home/bastard/docker/bitwarden/bitwarden-backup-script.sh` - run it [at 02:00](https://crontab.guru/#0_2_*_*_*)</br>
`crontab -l` - list cronjobs
### Restore the user data
- down the container `docker-compose down`</br>
- replace `db.sqlite3` with the one from the backup
- replace attachments folder with the one from the backup
- start the container `docker-compose up -d`
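A minimal sketch of that restore, assuming the backup files created by the script above and the default paths:

```
cd ~/docker/bitwarden
docker-compose down
# put the sqlite database back
cp ./bitwarden-backup/bitwarden-BACKUP.db.sqlite3 ./bitwarden-data/db.sqlite3
# put the attachments back, the archive was created with ./bitwarden-data/... paths
rm -rf ./bitwarden-data/attachments
tar -xzvf ./bitwarden-backup/attachments.tar.gz
docker-compose up -d
```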

bookstack/readme.md Normal file

@ -0,0 +1,246 @@
# Bookstack in docker
###### guide by example
![logo](https://i.imgur.com/qDXwqaU.png)
### Purpose
Documentation and notes.
* [Official site](https://www.bookstackapp.com/)
* [Github](https://github.com/BookStackApp/BookStack)
* [DockerHub image used](https://hub.docker.com/r/linuxserver/bookstack)
### Files and directory structure
```
/home
└── ~
└── docker
└── bookstack
├── 🗁 bookstack-data
├── 🗁 bookstack-data-db
├── 🗁 bookstack-backup
├── 🗋 .env
├── 🗋 docker-compose.yml
└── 🗋 bookstack-backup-script.sh
```
### docker-compose
Dockerhub linuxserver/bookstack [example compose.](https://hub.docker.com/r/linuxserver/bookstack)
`docker-compose.yml`
```
version: "2"
services:
bookstack:
image: linuxserver/bookstack
container_name: bookstack
hostname: bookstack
environment:
- PUID
- PGID
- DB_HOST
- DB_USER
- DB_PASS
- DB_DATABASE
- APP_URL
volumes:
- ./bookstack-data:/config
restart: unless-stopped
depends_on:
- bookstack_db
bookstack_db:
image: linuxserver/mariadb
container_name: bookstack_db
hostname: bookstack_db
environment:
- PUID
- PGID
- MYSQL_ROOT_PASSWORD
- TZ
- MYSQL_DATABASE
- MYSQL_USER
- MYSQL_PASSWORD
volumes:
- ./bookstack-data-db:/config
restart: unless-stopped
networks:
default:
external:
name: $DEFAULT_NETWORK
```
`.env`
```
# GENERAL
MY_DOMAIN=blabla.org
DEFAULT_NETWORK=caddy_net
TZ=Europe/Prague
# BOOKSTACK
PUID=1000
PGID=1000
DB_HOST=bookstack_db
DB_USER=bookstack
DB_PASS=bookstack
DB_DATABASE=bookstackapp
APP_URL=https://book.blabla.org
# BOOKSTACK-MARIADB
PUID=1000
PGID=1000
MYSQL_ROOT_PASSWORD=bookstack
MYSQL_DATABASE=bookstackapp
MYSQL_USER=bookstack
MYSQL_PASSWORD=bookstack
```
### reverse proxy
caddy v2 is used,
details [here](https://github.com/DoTheEvo/Caddy-v2-examples)
`Caddyfile`
```
{
# acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
book.{$MY_DOMAIN} {
reverse_proxy {
to bookstack:80
}
}
```
### update
* [watchtower](https://github.com/DoTheEvo/docker-selfhosted-projects/tree/master/watchtower) updates the image automatically
* manual image update</br>
`docker-compose pull`</br>
`docker-compose up -d`</br>
`docker image prune`
### backup and restore
* **backup** using the [borgbackup setup](https://github.com/DoTheEvo/docker-selfhosted-projects/tree/master/borg_backup)
  that makes a daily backup of the entire directory
* **restore**</br>
  down the bookstack containers `docker-compose down`</br>
  delete the entire bookstack directory</br>
  copy the bookstack directory back from the backup</br>
  start the containers `docker-compose up -d`
### Backup of just user data
For additional peace of mind.
User data is exported daily using the [official procedure.](https://www.bookstackapp.com/docs/admin/backup-restore/)</br>
For bookstack that means a database dump and the content of several directories
containing user-uploaded files.
The backup files are overwritten on every run of the script,
but borg backs up the entire directory into snapshots, so there is no need to keep the last X copies around.
* **database backup**</br>
  script `make_backup.sh` placed into the `bookstack_db` container's `/config` directory,
  which is bind mounted to the host machine (see the placement sketch after the second script).</br>
  made executable with `chmod +x make_backup.sh`
  - the script creates the path `/config/backups-db`</br>
  - deletes all files in the backup path except the 30 newest</br>
  - creates a new mysqldump using env variables passed from the `.env` file</br>
`make_backup.sh`
```
#!/bin/bash
# -----------------------------------------------
NUMB_BACKUPS_TO_KEEP=30
BACKUP_PATH=/config/backups-db
BACKUP_NAME=$(date +"%s").bookstack.database.backup.sql
# -----------------------------------------------
mkdir -p $BACKUP_PATH
cd $BACKUP_PATH
ls -tr | head -n -$NUMB_BACKUPS_TO_KEEP | xargs --no-run-if-empty rm
mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE > $BACKUP_PATH/$BACKUP_NAME
```
* **files backup**</br>
  script `make_backup.sh` placed into the `bookstack` container's `/config` directory,
  which is bind mounted to the host machine.</br>
  made executable with `chmod +x make_backup.sh`
  - the script creates the path `/config/backups-files`</br>
  - deletes all files in the backup path except the 30 newest</br>
  - creates a new archive containing the uploaded files</br>
`make_backup.sh`
```
#!/bin/bash
# -----------------------------------------------
NUMB_BACKUPS_TO_KEEP=30
BACKUP_PATH=/config/backups-files
BACKUP_NAME=$(date +"%s").bookstack.files.backup.tar.gz
# -----------------------------------------------
mkdir -p $BACKUP_PATH
cd $BACKUP_PATH
ls -tr | head -n -$NUMB_BACKUPS_TO_KEEP | xargs --no-run-if-empty rm
cd /config/www
tar -czvf $BACKUP_PATH/$BACKUP_NAME .env uploads files images
```
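Since `/config` of each container is bind mounted to the host, both scripts above can be put in place and tested from `~/docker/bookstack` without entering the containers; a sketch (the local file names here are arbitrary):

```
# the database script goes to the bookstack_db container's /config
cp make_db_backup.sh ./bookstack-data-db/make_backup.sh
# the files script goes to the bookstack container's /config
cp make_files_backup.sh ./bookstack-data/make_backup.sh
chmod +x ./bookstack-data-db/make_backup.sh ./bookstack-data/make_backup.sh
# test run both
docker exec bookstack_db /config/make_backup.sh
docker exec bookstack /config/make_backup.sh
```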
* **automatic periodic execution of the backup scripts**
Cron running on the host machine executes the scripts inside the containers.
script `cron_job_do_backups.sh` inside `~/docker/bookstack`
`cron_job_do_backups.sh`
```
#!/bin/bash
docker exec bookstack /config/make_backup.sh
docker exec bookstack_db /config/make_backup.sh
```
`chmod +x cron_job_do_backups.sh` on the host machine
`crontab -e`
`0 2 * * * /home/bastard/docker/bookstack/cron_job_do_backups.sh`
### restore official way
* restore of the database
copy the backup sql dump file into the bind-mounted `bookstack-data-db` directory
exec into the container and have mariadb restore the data from the copied file
`docker exec -it bookstack_db /bin/bash`</br>
`cd /config`</br>
`mysql -u $MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DATABASE < 1584566634.bookstack.database.backup.sql`
* restore of the files
copy the backup tar.gz archive into the bind-mounted `bookstack-data/www/` directory</br>
`docker exec -it bookstack /bin/bash`</br>
`cd /config/www`</br>
`tar -xvzf 1584566633.bookstack.files.backup.tar.gz`

borg_backup/readme.md Normal file

@ -0,0 +1,99 @@
# BorgBackup in docker
###### guide by example
### purpose
Backup terminal utility.
* [Official site](https://www.borgbackup.org/)
* [Github](https://github.com/borgbackup/borg)
### files and directory structure
```
/home
└── ~
├── borg_backup
│ ├── 🗁 docker_backup
│ ├── 🗋 borg-backup.sh
│ └── 🗋 borg_backup.log
└── docker
├── container #1
├── container #2
└── ...
```
### borg-backup.sh
`borg-backup.sh`
```
#!/bin/bash
# INITIALIZE THE REPO WITH THE COMMAND:
# borg init --encryption=none /mnt/C1/backup_borg/
# THEN RUN THIS SCRIPT
# -----------------------------------------------
BACKUP_THIS='/home/bastard/docker /etc'
REPOSITORY='/home/bastard/borg_backup/docker_backup'
LOGFILE='/home/bastard/borg_backup/borg_backup.log'
# -----------------------------------------------
NOW=$(date +"%Y-%m-%d | %H:%M | ")
echo "$NOW Starting Backup and Prune" >> $LOGFILE
# CREATES NEW ARCHIVE IN PRESET REPOSITORY
borg create \
    $REPOSITORY::'{now:%s}' \
    $BACKUP_THIS \
    --compression zstd \
    --one-file-system \
    --exclude-caches \
    --exclude-if-present '.nobackup' \
    --exclude '/home/*/Downloads/'
# DELETES ARCHIVES NOT FITTING KEEP-RULES
borg prune -v --list $REPOSITORY \
    --keep-daily=7 \
    --keep-weekly=4 \
    --keep-monthly=6 \
    --keep-yearly=0
echo "$NOW Done" >> $LOGFILE
borg list $REPOSITORY >> $LOGFILE
echo '------------------------------' >> $LOGFILE
# --- USEFUL STUFF ---
# the setup above ignores directories containing a '.nobackup' file
# make '.nobackup' immutable using chattr to prevent accidental removal
# touch .nobackup
# chattr +i .nobackup
# in the repo folder, to list available backups:
# borg list .
# to mount one of them:
# borg mount .::1584472836 ~/temp
# to umount:
# borg umount ~/temp
# to delete single backup in a repo:
# borg delete .::1584472836
```
### automatic execution
* make the script executable `chmod +x borg-backup.sh`
* cron job, every day at 3:00
`crontab -e`
`0 3 * * * /home/bastard/borg_backup/borg-backup.sh`

ddclient/readme.md Normal file

@ -0,0 +1,61 @@
# ddclient
###### guide by example
### purpose
Automatic update of DNS entries. Useful when the ISP does not provide a static IP.
* [Github](https://github.com/ddclient/ddclient)
### files and directory structure
```
/etc
└── ddclient
└── 🗋 ddclient.conf
```
### configuration
Example is for cloudflare managed DNS.
`ddclient.conf`
```
daemon=300
syslog=yes
mail=root
mail-failure=root
pid=/var/run/ddclient.pid
ssl=yes
use=web, web=checkip.dyndns.org/, web-skip='IP Address'
wildcard=yes
##
## CloudFlare (www.cloudflare.com)
##
protocol=cloudflare, \
zone=blabla.org, \
ttl=1, \
login=bastard.blabla@gmail.com, \
password=global-api-key-goes-here \
blabla.org,*.blabla.org
##
protocol=cloudflare, \
zone=blabla.tech, \
ttl=1, \
login=bastard.blabla@gmail.com, \
password=global-api-key-goes-here \
blabla.tech,*.blabla.tech
```
### reverse proxy
no web interface
### update
ddclient runs directly on the host, so it gets updated during regular host Linux package updates
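For example on an Arch based host (package and service names may differ per distro):

```
pacman -S ddclient
systemctl enable --now ddclient
# watch the log to confirm updates go through
journalctl -u ddclient -f
```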

5 binary image files added, not shown.

homer/readme.md Normal file

@ -0,0 +1,83 @@
# Homer in docker
###### guide by example
### purpose
Homepage.
* [Github](https://github.com/bastienwirtz/homer)
* [DockerHub image used](https://hub.docker.com/r/b4bz/homer)
### files and directory structure
```
/home
└── ~
└── docker
└── homer
├── 🗁 assets
├── 🗋 .config.yml
├── 🗋 .env
└── 🗋 docker-compose.yml
```
### docker-compose
`docker-compose.yml`
```
version: "2"
services:
homer:
image: b4bz/homer:latest
container_name: homer
hostname: homer
volumes:
- .config.yml:/www/config.yml
- ./assets/:/www/assets
restart: unless-stopped
expose:
- "8080"
networks:
default:
external:
name: $DEFAULT_NETWORK
```
`.env`
```
# GENERAL
MY_DOMAIN=blabla.org
DEFAULT_NETWORK=caddy_net
```
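The compose file above mounts `.config.yml` as homer's configuration; it has to be created by hand. A minimal sketch of the format (the dashboard title and the listed service are placeholders, see the homer repo for all options):

```
cat > .config.yml << 'EOF'
title: "Homepage"
subtitle: "Homer"
columns: "3"
services:
  - name: "Applications"
    icon: "fas fa-code-branch"
    items:
      - name: "Portainer"
        url: "https://portainer.blabla.org"
EOF
```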
### reverse proxy
caddy v2 is used,
details [here](https://github.com/DoTheEvo/Caddy-v2-examples)
`Caddyfile`
```
{
# acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
homer.{$MY_DOMAIN} {
    reverse_proxy {
        to homer:8080
}
}
```
### update
* image update using docker compose
`docker-compose pull`</br>
`docker-compose up -d`</br>
`docker image prune`

nextcloud/readme.md Normal file

@ -0,0 +1,226 @@
# Nextcloud in docker
###### guide by example
chapters
1. [Docker compose](#1-docker-compose)
2. [Reverse proxy using caddy v2](#2-Reverse-proxy-using-caddy-v2)
3. [Some stuff afterwards](#3-Some-stuff-afterwards)
4. [Update Nextcloud](#4-Update-Nextcloud)
5. [Backup and restore](#5-Backup-and-restore)
# #1 Docker compose
Official examples [here](https://github.com/nextcloud/docker/tree/master/.examples/docker-compose)
There are several options; the default recommendation is apache.
The alternative is php-fpm as a standalone container, fronted by either apache or nginx.</br>
The default apache image with php as a module is used in this setup.
- **Create a new docker network**</br> `docker network create caddy_net`</br>
All nextcloud containers must be on the same network.
- **Create a directory structure**
Where the nextcloud docker stuff will be organized.</br>
Here it will be `~/docker/nextcloud`.</br>
```
/home
└── ~
└── docker
└── nextcloud
├── nextcloud-data
├── .env
└── docker-compose.yml
```
- `nextcloud-data` - the directory to which the container's `/var/www/html` is bind mounted
- `.env` the env file with the variables
- `docker-compose.yml` the compose file
- **Create `.env` file**</br>
`.env`
```
# GENERAL
MY_DOMAIN=blabla.org
DEFAULT_NETWORK=caddy_net
# NEXTCLOUD
NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.blabla.org
# NEXTCLOUD-MARIADB
MYSQL_ROOT_PASSWORD=nextcloud
MYSQL_PASSWORD=nextcloud
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
```
- **Create `docker-compose.yml` file**</br>
Four containers are spun up
- nextcloud-db - mariadb database where file and user metadata are stored
- nextcloud-redis - in-memory data store for a more responsive interface
- nextcloud-app - nextcloud itself
- nextcloud-cron - for running maintenance cron jobs
Two persistent storages
- nextcloud-db named volume - nextcloud-db:/var/lib/mysql
- nextcloud-app bind mount - ./nextcloud-data/:/var/www/html
`docker-compose.yml`
```
version: '3'
services:
nextcloud-db:
image: mariadb
container_name: nextcloud-db
hostname: nextcloud-db
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
restart: unless-stopped
volumes:
- nextcloud-db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD
- MYSQL_PASSWORD
- MYSQL_DATABASE
- MYSQL_USER
nextcloud-redis:
image: redis:alpine
container_name: nextcloud-redis
hostname: nextcloud-redis
restart: unless-stopped
nextcloud-app:
image: nextcloud:apache
container_name: nextcloud
hostname: nextcloud
restart: unless-stopped
depends_on:
- nextcloud-db
- nextcloud-redis
links:
- nextcloud-db
ports:
- 8080:80
volumes:
- ./nextcloud-data/:/var/www/html
environment:
- MYSQL_HOST=nextcloud-db
- REDIS_HOST=nextcloud-redis
- NEXTCLOUD_TRUSTED_DOMAINS
nextcloud-cron:
image: nextcloud:apache
container_name: nextcloud-cron
hostname: nextcloud-cron
restart: unless-stopped
volumes:
- ./nextcloud-data/:/var/www/html
entrypoint: /cron.sh
depends_on:
- nextcloud-db
- nextcloud-redis
volumes:
nextcloud-db:
networks:
default:
external:
name: $DEFAULT_NETWORK
```
- **Run docker compose**
`docker-compose -f docker-compose.yml up -d`
# #2 Reverse proxy using caddy v2
Provides reverse proxy so that more services can run on this docker host,</br>
and also provides https.</br>
This is a basic setup, for more details here is
[Caddy v2 tutorial + examples](https://github.com/DoTheEvo/Caddy-v2-examples)
- **Add nextcloud to the Caddyfile**</br>
`Caddyfile`
```
{
# acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
nextcloud.{$MY_DOMAIN} {
reverse_proxy {
to nextcloud:80
}
}
```
- **Create docker-compose.yml**</br>
`docker-compose.yml`
```
version: "3.7"
services:
caddy:
image: "caddy/caddy:alpine"
container_name: "caddy"
hostname: "caddy"
ports:
- "80:80"
- "443:443"
volumes:
- "./Caddyfile:/etc/caddy/Caddyfile:ro"
- caddy_lets_encrypt_storage:/data
- caddy_config_storage:/config
environment:
- MY_DOMAIN
networks:
default:
external:
name: $DEFAULT_NETWORK
volumes:
caddy_lets_encrypt_storage:
caddy_config_storage:
```
Make sure the caddy `docker-compose.yml` also has a `.env` file next to it
with the same $DEFAULT_NETWORK and $MY_DOMAIN variables.
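A minimal sketch of that `.env`, mirroring the values used in the other sections:

```
MY_DOMAIN=blabla.org
DEFAULT_NETWORK=caddy_net
```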
- **Run it**
`docker-compose -f docker-compose.yml up -d`
If something is off, use `docker logs caddy` to see what is happening.
Restarting the container can help with getting the certificates if it is stuck there.
Or investigate inside with `docker container exec -it caddy /bin/sh`,
for example trying to ping hosts that are supposed to be reachable.
# #3 Some stuff afterwards
- in settings > overview, nextcloud complains about missing indexes or big int,
  fixed by running occ commands as the www-data user (or use the non-interactive sketch after this list)
- `docker exec -it nextcloud /bin/sh`
- `chsh -s /bin/sh www-data`
- `su www-data`
- `cd /var/www/html`
- `php occ db:add-missing-indices`
- `php occ db:convert-filecache-bigint`
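The same can be done non-interactively from the host; a sketch assuming the `www-data` user of the official image:

```
docker exec -u www-data nextcloud php occ db:add-missing-indices
docker exec -u www-data nextcloud php occ db:convert-filecache-bigint
```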
# #4 Update Nextcloud
`docker-compose pull`
`docker-compose up -d`
`docker image prune`
# #5 Backup and restore
Most likely a container running borg or borgmatic plus cron, similar to the borg_backup setup in this repo.

portainer/readme.md Normal file

@ -0,0 +1,80 @@
# Portainer in docker
###### guide by example
### purpose
User-friendly overview of running containers.
### files and directory structure
```
/home
└── ~
└── docker
└── portainer
├── 🗁 portainer_data
├── 🗋 .env
└── 🗋 docker-compose.yml
```
### docker-compose
`docker-compose.yml`
```
version: '2'
services:
portainer:
image: portainer/portainer
container_name: portainer
hostname: portainer
command: -H unix:///var/run/docker.sock
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./portainer_data:/data
environment:
- TZ
networks:
default:
external:
name: $DEFAULT_NETWORK
```
`.env`
```
# GENERAL
MY_DOMAIN=blabla.org
DEFAULT_NETWORK=caddy_net
TZ=Europe/Prague
```
### reverse proxy
caddy v2 is used,
details [here](https://github.com/DoTheEvo/Caddy-v2-examples)
`Caddyfile`
```
{
# acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
portainer.{$MY_DOMAIN} {
reverse_proxy {
to portainer:9000
}
}
```
### update
* image update using docker compose
`docker-compose pull`</br>
`docker-compose up -d`</br>
`docker image prune`

watchtower/readme.md Normal file

@ -0,0 +1,55 @@
# Watchtower in docker
###### guide by example
### purpose
Automatic updates of containers.
* [Github](https://github.com/containrrr/watchtower)
* [DockerHub image used](https://hub.docker.com/r/containrrr/watchtower)
### files and directory structure
```
/home
└── ~
└── docker
└── watchtower
└── 🗋 docker-compose.yml
```
### docker-compose
[scheduled](https://pkg.go.dev/github.com/robfig/cron@v1.2.0?tab=doc#hdr-CRON_Expression_Format)
to run every Saturday at midnight</br>
it does not need to be on the same network as the other containers and needs no `.env` file</br>
`docker-compose.yml`
```
version: '3'
services:
watchtower:
image: containrrr/watchtower:latest
container_name: watchtower
hostname: watchtower
restart: unless-stopped
environment:
- TZ=Europe/Prague
- WATCHTOWER_SCHEDULE=0 0 0 * * SAT
- WATCHTOWER_CLEANUP=true
- WATCHTOWER_TIMEOUT=30s
- WATCHTOWER_DEBUG=false
- WATCHTOWER_INCLUDE_STOPPED=false
volumes:
- /var/run/docker.sock:/var/run/docker.sock
```
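To see whether watchtower ran and what it updated, the container log is enough:

```
docker logs watchtower
```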
### reverse proxy
no web interface
### update
it updates itself