⚠️ This documentation is under development. Expect changes!

Monitoring

Create a folder in which to place the monitoring stack's data.
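For example, assuming you call the folder monitoring (the name itself is up to you):

Terminal window
mkdir monitoring
cd monitoring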

In that folder, create a first file named compose.yaml and put the following content in it:

compose.yaml
# Monitoring stack
# Web UI: http://grafana:3000
volumes:
  prometheus-data:
    driver: local
  grafana-data:
    driver: local

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    networks:
      - monitor-net
      - proxy-net
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus-data:/prometheus
    restart: always
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.retention.time=30d"

  pushgateway:
    image: prom/pushgateway
    container_name: pushgateway
    restart: always
    networks:
      - monitor-net
    ports:
      - "9091:9091"

  grafana:
    image: grafana/grafana-oss
    container_name: grafana
    volumes:
      - grafana-data:/var/lib/grafana
    restart: always
    networks:
      - proxy-net
      - monitor-net
    labels:
      traefik.enable: true
      traefik.http.routers.monitor.entrypoints: web,websecure
      traefik.http.routers.monitor.tls: true
      traefik.http.routers.monitor.tls.certresolver: production
      traefik.http.routers.monitor.rule: Host(`monitor.example.com`)
      traefik.http.services.monitor.loadbalancer.server.port: 3000
      traefik.docker.network: proxy-network

  cadvisor:
    image: zcube/cadvisor:latest
    container_name: cadvisor
    restart: always
    networks:
      - monitor-net
    volumes:
      - '/:/rootfs:ro'
      - '/var/run:/var/run:rw'
      - '/sys:/sys:ro'
      - '/var/lib/docker/:/var/lib/docker:ro'

  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    pid: host
    restart: always
    networks:
      - monitor-net

networks:
  monitor-net:
  proxy-net:
    name: proxy-network
    external: true
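Note that proxy-net is declared as external and points to an existing network named proxy-network, which is assumed to be the network your Traefik reverse proxy is attached to. If that network does not exist yet, create it before starting the stack:

Terminal window
docker network create proxy-network

Also adapt the Host rule (monitor.example.com) and the certresolver name (production) in the Grafana labels to match your own Traefik configuration.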

You also need to create a folder named prometheus next to compose.yaml; it is mounted into the Prometheus container as /etc/prometheus and will contain the file shown below.
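Assuming you are still in the stack folder created earlier:

Terminal window
mkdir prometheus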

prometheus.yml
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  # external_labels:
  #   monitor: 'codelab-monitor'

# The scrape configuration: one job per service to monitor.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: [ 'localhost:9090' ]

  # Example job for node_exporter
  - job_name: 'node_exporter'
    static_configs:
      - targets: [ 'node_exporter:9100' ]

  - job_name: 'pushgateway'
    scrape_interval: 5s
    static_configs:
      - targets: [ 'pushgateway:9091' ]

  - job_name: cadvisor
    scrape_interval: 5s
    static_configs:
      - targets:
          - cadvisor:8080

  - job_name: "ntfy"
    static_configs:
      - targets: [ "ntfy.example.com" ]

  - job_name: "traefik"
    static_configs:
      - targets: [ "traefik:8082" ]

Once the configuration is complete, you can start the stack with the following command:

Terminal window
docker compose up -d
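You can then check that the containers are running and follow the Prometheus logs if something looks wrong:

Terminal window
docker compose ps
docker compose logs -f prometheus

Once everything is up, Grafana should be reachable through Traefik at the host configured in the labels (monitor.example.com in this example), and Grafana can reach Prometheus inside the monitor-net network on its default port at http://prometheus:9090.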