DoTheEvo 2023-02-05 19:26:14 +01:00
parent a2f7e6cf6f
commit d973916d56
7 changed files with 200 additions and 132 deletions

README.md

@ -14,57 +14,58 @@
* [borg_backup](borg_backup/) - backup utility
* [ddclient](ddclient/) - automatic DNS update
* [dnsmasq](dnsmasq/) - DNS and DHCP server
* [gotify / ntfy / signal](gotify-ntfy-signal/) - instant notification apps
* [homer](homer/) - homepage
* [jellyfin](jellyfin/) - video and music streaming
* [kopia](kopia_backup/) - backup utility replacing borg
* [minecraft](minecraft/) - game server
* [meshcentral](meshcrentral/) - web based remote desktop, like teamviewer or anydesk
* [rustdesk](rustdesk/) - remote desktop, like teamviewer or anydesk
* [nextcloud](nextcloud/) - file share & sync
* [opnsense](opnsense/) - a firewall, enterprise level
* [qbittorrent](qbittorrent/) - torrent client
* [portainer](portainer/) - docker management
* [prometheus_grafana](prometheus_grafana/) - monitoring
* [unifi](unifi/) - management utility for ubiquiti devices
* [snipeit](snipeit/) - IT inventory management
* [trueNAS scale](trueNASscale/) - network file sharing
* [watchtower](watchtower/) - automatic docker images update
* [wireguard](wireguard/) - the one and only VPN to ever consider
* [zammad](zammad/) - ticketing system
* [arch_linux_host_install](arch_linux_host_install)
You can also just check the directories listed at the top for work in progress.
Check also [StarWhiz / docker_deployment_notes](https://github.com/StarWhiz/docker_deployment_notes/blob/master/README.md)<br>
That repo documents self hosted apps in a similar format and also uses caddy for reverse proxy.
# How to self host various services
# Core concepts
- `docker-compose.yml` does not need any editing to get started,
changes are to be done in the `.env` file.
- Not using the `ports` directive if there's only web traffic in a container.<br>
There's an expectation of running a reverse proxy, which makes mapping ports
on the docker host unnecessary. Instead `expose` is used, which is basically
just documentation.<br>
- For persistent storage a bind mount `./whatever_data` is used.
No volumes, no static path somewhere... just a relative path next to the compose file.
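These conventions can be sketched in a compose file, here with a hypothetical service just to illustrate:

```yml
services:
  whoami:
    image: traefik/whoami
    container_name: whoami
    hostname: whoami
    # no ports mapped on the docker host,
    # the reverse proxy reaches the container over the docker network
    expose:
      - 80
    volumes:
      # persistent storage, relative bind mount next to the compose file
      - ./whoami_data:/data
```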
# Requirements
**Basic linux and basic docker-compose knowledge.**
The shit here is pretty hand holding and detailed, but it still should not be
your first time running a docker container.
A certain format is followed on the services' pages:
* **Purpose & Overview** - basic overview and intended use
* **Files and directory structure** - lists all the files/folder involved
and their placement
* **docker-compose** - the recipe file for how to build a container, with the .env file too
* **Reverse proxy** - reverse proxy specific settings, if a container has
a webserver providing a web interface
* **Update** - how to update the container, usually just running Watchtower
* **Backup and restore** - of the entire container using borg backup
* **Backup of just user data** - steps to backup databases and other user data
* **Restore the user data** - steps to restore user data in a brand new setup
# Some extra info
Kinda the core of the setup is the Caddy reverse proxy.<br>
It's described in the most detail; it's really amazingly simple but robust software.
### Compose
When making changes use `docker-compose down` and `docker-compose up -d`,
not just restart or stop/start.
* you **do not** need to fuck with `docker-compose.yml` to get something up,
simple copy paste should suffice
* you **do** need to fuck with `.env` file, that's where all the variables are
Often the `.env` file is used as `env_file`,
which can be a difficult concept at first glance.
@ -73,9 +74,9 @@ which can be a bit difficult concept at a first glance.
* `.env` - the actual name of a file that is used only by compose.<br>
It is used automatically just by being in the directory
with the `docker-compose.yml`.<br>
Variables in it are available during the building of a container,
but unless named in the `environment:` option, they are not available
once the container is running.
* `env_file` - an option in compose that defines an existing external file.<br>
Variables in this file will be available in the running container,
but not during building of the container.
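A minimal sketch of the difference, with hypothetical variable names:

```yml
# .env sits next to docker-compose.yml and is picked up automatically,
# its variables work for ${} substitution at deploy time:
#
#   MY_DOMAIN=example.com
#   DB_PASSWORD=changeme

services:
  app:
    image: nginx:alpine
    env_file: .env                # DB_PASSWORD now exists inside the container
    environment:
      - MY_DOMAIN=${MY_DOMAIN}    # substituted by compose, passed in explicitly
```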
@ -89,8 +90,8 @@ looks much cleaner, less cramped.
Only issue is that **all** variables from the `.env` file are available in
all containers that use this `env_file: .env` method.<br>
That can lead to potential issues if a container picks up an environment
variable that is intended for a different container of the stack.
In the setups here it works and is tested, but if you start to use this
everywhere without understanding it, you can encounter issues.
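A sketch of the more precise approach, where each container only gets the variables it wants (hypothetical names):

```yml
services:
  app:
    image: nginx:alpine
    environment:                  # only what this container actually needs
      - DB_HOST=${DB_HOST}
      - DB_PASSWORD=${DB_PASSWORD}
  cache:
    image: redis:alpine
    # no env_file here, so it cannot pick up DB_PASSWORD by accident
```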
@ -101,39 +102,16 @@ the variables directly in the compose file only under containers that want them.
### Docker images latest tag
Most of the time the images are without any tag,
which defaults to `latest` tag being used.<br>
This is [frowned upon](https://vsupalov.com/docker-latest-tag/),
but feel free to pin the current version to lower the chance of a fuckup.
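For example, pinning grafana to the version used later in this repo, instead of the implicit latest:

```yml
services:
  grafana:
    # a specific version tag instead of the implicit :latest
    image: grafana/grafana:9.3.6
```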
---
### Bind mount
No docker volumes are used. Directories and files from the host
are bind mounted into containers.<br>
I don't feel like I know all the aspects of this,
but I know it's easier to edit a random file on the host,
or back up a directory, when it's just there, sitting on the host.
---
### Cloudflare
For managing DNS records. The free tier provides a lot of management options and
benefits. Like a proxy between your domain and your server, so no one
can get your public IP just from your domain name. Or 5 firewall rules that allow
you to geoblock the whole world except your country.
@ -148,17 +126,49 @@ you to geoblock whole world except your country.
![ctop-look](https://i.imgur.com/nGAd1MQ.png)
htop like utility for quick containers management.
It is absofuckinglutely amazing in how simple yet effective it is.
* hardware use overview, so you know which container uses how much cpu, ram, bandwidth, IO,...
* detailed info on a container, its IP, published and exposed ports, when it was created,..
* quick management, quick exec in to a container, check logs, stop it,...
Written in Go, so it's super fast, and installation is trivial since it's a single binary,
as your distro likely does not have it in its repos. If you use Arch, like I do, it's on the AUR.
---
### Archlinux as a docker host
My go-to is archlinux as I know it the best.
Usually in a virtual machine with snapshots before updates.
For Arch installation I had [these notes](arch_linux_host_install/)
on how to install and what to do afterwards.<br>
But after [archinstall script](https://wiki.archlinux.org/title/archinstall)
started to be included with arch ISO I switched to that.<br>
For after the install setup I created
[Ansible-Arch repo](https://github.com/DoTheEvo/ansible-arch) that gets shit
done in a few minutes without the danger of forgetting something.<br>
Ansible is really easy to use, and its playbooks are very easy to read and understand,
so it might be worth the time to check out the concept and set up your own ansible scripts.
The best aspect of having such a repo is that it is a dedicated place where
one can write solutions to encountered issues,
or enable a freshly discovered feature for all deployments.
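A playbook in such a repo can be as simple as this sketch (hypothetical task list, not the actual Ansible-Arch content):

```yml
- hosts: all
  become: true
  tasks:
    - name: install the usual packages
      package:
        name: [git, docker, docker-compose, vim]
        state: present

    - name: enable and start docker
      systemd:
        name: docker
        enabled: true
        state: started
```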
---
### SendGrid and Sendinblue
Services often need the ability to send emails, for registration, password reset and such...
I got a free SendGrid account, which provides 100 free emails a day.
But I've heard complaints lately that registering on SendGrid is not as easy as it used to be.
I also use Sendinblue; I guess it was easy, cuz I don't remember anything about it.
It works and gives 300 mails a day.
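The SMTP settings in an `.env` file then usually look something like this; on SendGrid the literal string `apikey` is the username and the generated key is the password (variable names are hypothetical, they differ per app):

```
SMTP_HOST=smtp.sendgrid.net
SMTP_PORT=587
SMTP_USERNAME=apikey
SMTP_PASSWORD=<generated-api-key>
SMTP_FROM=admin@example.com
```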
---


@ -4,6 +4,7 @@
![logo](https://i.imgur.com/xmSY5qu.png)
[update with this in mind](https://www.reddit.com/r/selfhosted/comments/10r9o4d/reverse_proxies_with_nginx_proxy_manager/)
1. [Purpose & Overview](#Purpose--Overview)
2. [Caddy as a reverse proxy in docker](#Caddy-as-a-reverse-proxy-in-docker)
@ -56,6 +57,8 @@ and only some special cases with extra functionality need extra work.
Caddy will be running as a docker container and will route traffic to other containers,
or machines on the network.
[gurucomputing caddy guide](https://blog.gurucomputing.com.au/reverse-proxies-with-caddy/)
### - Requirements
* have some basic linux knowledge, create folders, create files, edit files, run scripts,...


@ -121,7 +121,7 @@ Type=simple
ExecStart=/bin/curl -d "%i | %H" https://ntfy.example.com/systemd
```
Example of a service using the above defined service to send notifications.
`borg.service`
```


@ -15,9 +15,9 @@ Backups.
* [Official site](https://kopia.io/)
* [Github](https://github.com/kopia/kopia)
Kopia is a new open source backup utility with basically **all** modern features.<br>
Cross-platform, deduplication, encryption, compression, multithreaded speed,
native cloud storage support, GUI versions, snapshots mounting,...
Written in golang.
@ -163,7 +163,7 @@ Persistent=true
WantedBy=timers.target
```
# Mounting network storage using systemd
* files are placed in `/etc/systemd/system`
* the name of mount and automount files MUST correspond with the path<br>


@ -43,7 +43,7 @@ Tested with wireshark. Pinging a nonexisting hostname means an LLMNR
broadcast is sent to every device on the network, asking who that hostname is.
Works the same when pinging from archlinux or from win8.1.
[TCP vs UDP](https://youtu.be/jE_FcgpQ7Co)


@ -233,8 +233,9 @@ checkboxes about NAT reflection, also called hairpin NAT or a NAT loopback.
Many consider NAT reflection to be a hack that should not be used.<br>
That the correct way is split DNS, where you maintain separate DNS records for
LAN side so that `a.example.com` points directly to some local ip.
Reason being that this way machines on the LAN side that use FQDN (a.example.com)
to access another machine on the LAN are not hitting the firewall with the traffic
that goes between them.
But IMO in a small scale selfhosted setup it's perfectly fine
and it requires far less management.
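Split DNS is just one override record in whatever DNS server the LAN uses; in dnsmasq syntax it's a single line (hypothetical IP), and in opnsense's Unbound the same is done under Services: Unbound DNS: Overrides:

```
address=/a.example.com/192.168.1.20
```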
@ -370,9 +371,25 @@ Assuming you are not in the country from which these run their test.
---
---
<details>
<summary><h1>DNS - Unbound</h1></summary>
Built-in DNS server, enabled by default, listening on port 53.<br>
Services: Unbound DNS: General
</details>
---
---
<details>
<summary><h1>Monitoring</h1></summary>
### ARP table
Interfaces: Diagnostics: ARP Table<br>
### live view of connections
Firewall: Log Files: Live View<br>
@ -412,5 +429,13 @@ and autorefresh on/off and up to 20k last entries
### Extra info and encountered issues
* Health check - `System: Firmware` Run an audit button, Health
* got error notice:<br>
*opnsense and PHP Startup: Unable to load dynamic library 'mongodb.so'*<br>
seems it's some remnant of zenarmor.
[Heres](https://forum.opnsense.org/index.php?topic=29721.0) the talk on it.<br>
`pkg list | grep mongo` to get exact package name.<br>
`pkg remove php74-pecl-mongodb` to remove the package


@ -8,29 +8,36 @@
Monitoring of the host and the running containers.
* [Official Prometheus site](https://prometheus.io/)
* [Github](https://github.com/prometheus)
* [DockerHub](https://hub.docker.com/r/prom/prometheus/)
Most of the stuff here is based off the magnificent
[stefanprodan/dockprom.](https://github.com/stefanprodan/dockprom)</br>
So maybe just go play with that.
Here's my [veeam-prometheus-grafana](https://github.com/DoTheEvo/veeam-prometheus-grafana)
on how to set up a pushgateway, send it info on finished backups,
and visualize the history of that in grafana.<br>
Also soon to be added: [Loki](https://youtu.be/h_GGd7HfKQ8) for logs,
to get that ntfy alarm when something happens in a log of a docker container.
# Chapters
---
The setup here starts off with the basics, and then there are chapters on how to add features:
* **[Core prometheus+grafana](#overview)** - to get nice dashboards with metrics from the docker host and containers
* **Pushgateway** - how to use it to allow pushing metrics in to prometheus from anywhere
* **Alertmanager** - how to use it for notifications
* **Loki** - how to do the above things but for logs, not just metrics
* **Caddy** - adding dashboard for reverse proxy info
# Overview
[Good youtube overview](https://youtu.be/h4Sl21AKiDg) of Prometheus.</br>
Prometheus is an open source system for monitoring and alerting,
written in golang.<br>
It periodically collects metrics from configured targets,
makes these metrics available for visualization, and can trigger alerts.<br>
Prometheus is a relatively young project; it is a **pull type** monitoring system.
[Glossary.](https://prometheus.io/docs/introduction/glossary/)
* **Prometheus Server** is the core of the system, responsible for
* pulling new metrics
@ -41,7 +48,7 @@ and consists of several components.
* **exporter** - a script or a service that gathers metrics on the target,
converts them to prometheus server format,
and exposes them at an endpoint so they can be pulled
* **Alertmanager** - responsible for handling alerts from Prometheus Server,
and sending notifications through email, slack, pushover,..
In this setup [ntfy](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/gotify-ntfy-signal) webhook will be used.<br>
Grafana comes with its own alerts, but grafana kinda feels... b-tier
@ -49,11 +56,8 @@ and consists of several components.
Should not be overused as it goes against the pull philosophy of prometheus.
Most commonly it is used to collect data from batch jobs, or from services
that have short execution time. Like a backup script.<br>
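For example, a backup script can push a metric with plain curl when it finishes. The metric name here is made up, and the pushgateway is assumed to be reachable at push.example.com, as in the caddy section below:

```shell
#!/bin/bash
# one metric in prometheus text format: <name> <value>
metric="backup_last_run_seconds $(date +%s)"

# push it under the job name "backup",
# pushgateway keeps the value until prometheus scrapes it
echo "$metric" | curl --max-time 10 --data-binary @- \
    https://push.example.com/metrics/job/backup || echo "push failed"
```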
* **Grafana** - for web UI visualization of the collected metrics
![prometheus components](https://i.imgur.com/AxJCg8C.png)
@ -64,19 +68,15 @@ and consists of several components.
└── ~/
└── docker/
└── prometheus/
├── grafana/
├── grafana_data/
├── prometheus_data/
├── docker-compose.yml
├── .env
└── prometheus.yml
```
* `grafana/` - a directory containing grafanas configs and dashboards
* `grafana_data/` - a directory where grafana stores its data
* `prometheus_data/` - a directory where prometheus stores its database and data
* `.env` - a file containing environment variables for docker compose
* `docker-compose.yml` - a docker compose file, telling docker how to run the containers
* `prometheus.yml` - a configuration file for prometheus
@ -86,15 +86,22 @@ The directories are created by docker compose on the first run.
# docker-compose
* **Prometheus** - container with some extra commands run at startup,
setting stuff like storage and data retention (500 hours ~ 20 days)...
Bind mounted `prometheus_data` for persistent storage
and `prometheus.yml` for some basic configuration.
* **Grafana** - container, bind mounted directory for persistent data storage
* **NodeExporter** - an exporter for linux machines,
in this case gathering the metrics of the linux machine running docker,
like uptime, cpu load, memory use, network bandwidth use, disk space,...<br>
Also bind mounts some system directories to have access to the required info.
* **cAdvisor** - an exporter for gathering docker **containers** metrics,
showing cpu, memory, network use of each container<br>
Runs in `privileged` mode and has some bind mounts of system directories
to have access to required info.
*Note* - ports are only `expose`d, since there's an expectation of using a reverse proxy
and accessing the services by hostname, not by IP and port.
`docker-compose.yml`
```yml
@ -106,7 +113,6 @@ services:
container_name: prometheus
hostname: prometheus
restart: unless-stopped
depends_on:
- cadvisor
command:
@ -114,17 +120,17 @@ services:
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
- '--storage.tsdb.retention.time=500h'
- '--web.enable-lifecycle'
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- ./prometheus_data:/prometheus
expose:
- 9090
labels:
org.label-schema.group: "monitoring"
# WEB BASED UI VISUALISATION OF METRICS
grafana:
image: grafana/grafana:9.3.6
container_name: grafana
@ -134,14 +140,12 @@ services:
user: root
volumes:
- ./grafana_data:/var/lib/grafana
- ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
- ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
expose:
- 3000
labels:
org.label-schema.group: "monitoring"
# HOST LINUX MACHINE METRICS EXPORTER
nodeexporter:
image: prom/node-exporter:v1.5.0
container_name: nodeexporter
@ -181,22 +185,6 @@ services:
labels:
org.label-schema.group: "monitoring"
networks:
default:
name: $DOCKER_MY_NETWORK
@ -238,6 +226,9 @@ global:
scrape_interval: 15s
evaluation_interval: 15s
rule_files:
- "/etc/prometheus/rules/*.rules"
# A scrape configuration containing exactly one endpoint to scrape.
scrape_configs:
- job_name: 'nodeexporter'
@ -279,7 +270,7 @@ push.{$MY_DOMAIN} {
* add Prometheus as a Data source in configuration<br>
set URL to `http://prometheus:9090`<br>
* import dashboards from [json files in this repo](dashboards/)<br>
These dashboards are the preconfigured ones from
[stefanprodan/dockprom](https://github.com/stefanprodan/dockprom)
with a few changes.<br>
@ -298,6 +289,45 @@ the time interval is set to show last 1h instead of last 15m
![interface-pic](https://i.imgur.com/wzwgBkp.png)
# Alertmanager
[ntfy](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/gotify-ntfy-signal)
will be used to notify about prometheus alerts.
`alertmanager.yml`
```yml
route:
receiver: 'ntfy'
receivers:
- name: "ntfy"
webhook_configs:
- url: 'https://ntfy.example.com/alertmanager'
send_resolved: true
```
test:<br>
`curl -H 'Content-Type: application/json' -d '[{"labels":{"alertname":"blabla"}}]' https://alert.example.com/api/v1/alerts`
reload rules:<br>
`curl -X POST http://admin:admin@<host-ip>:9090/-/reload`
`alert.rules`
```yml
groups:
- name: host
rules:
- alert: DiskspaceLow
# '>' means this fires basically always, since free bytes are above 88.2,
# which is handy for testing notifications; a real rule would use '<'
expr: sum(node_filesystem_free_bytes{fstype="ext4"}) > 88.2
for: 10s
labels:
severity: critical
annotations:
description: "Diskspace is low!"
```
# Update
Manual image update:
@ -312,7 +342,7 @@ Manual image update:
Using [borg](https://github.com/DoTheEvo/selfhosted-apps-docker/tree/master/borg_backup)
that makes daily snapshot of the entire directory.
#### Restore
* down the prometheus containers `docker-compose down`<br>