DNS Black-holing w/ DNS over TLS - Personal Privacy Part 1
So DNS black-holing is not new, obviously, and what stands out as the go-to solution? Pi-hole, probably... and yeah, that's what I'm using, because hey, it's a popular choice, though I'm running it in Docker. I'm combining that with Unbound (also in Docker) and configuring outbound DNS to use DNS over TLS, with a few additional minor tweaks, but otherwise it's mostly standard configuration on both.
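For context, the DoT piece lives in Unbound rather than Pi-hole. The forwarding section of my unbound.conf looks roughly like the sketch below (from memory, assuming Quad9 as the upstream and the usual Debian/Ubuntu CA bundle path; swap in whatever resolver and cert bundle you actually use):

server:
  # CA bundle used to verify the upstream resolver's TLS certificate
  tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"

forward-zone:
  # forward everything upstream over TLS instead of recursing directly
  name: "."
  forward-tls-upstream: yes
  # address@port#auth-name, so the certificate hostname gets verified
  forward-addr: 9.9.9.9@853#dns.quad9.net
  forward-addr: 149.112.112.112@853#dns.quad9.net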
Wondering what you guys might be using, and if you're running Pi-hole and/or Unbound, whether you have any tips on configuration.
I've got two Pi-holes running on my network via Docker Compose. I tried setting up Unbound in Docker Compose as well and that fell flat; from my understanding, DNSSEC was preventing DNS resolution outright.
Also tried OPNsense + Unbound, which led to the same thing.
Eventually I got tired of my network cutting in and out over minor changes, so I just stuck with Quad9 for my upstream needs.
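For what it's worth, my understanding after the fact is that when Unbound can't complete DNSSEC validation (for example, a missing or stale trust anchor or root hints file inside the container), it returns SERVFAIL for everything rather than falling back, which would explain the "no resolution at all" behaviour. The config bits involved are roughly these (a sketch only; the paths are examples and have to match whatever actually gets mounted into the container):

server:
  # root server addresses for recursion; a copy of https://www.internic.net/domain/named.root
  root-hints: "/var/lib/unbound/root.hints"
  # DNSSEC trust anchor; the file must exist and stay writable so Unbound can keep it updated
  auto-trust-anchor-file: "/var/lib/unbound/root.key"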
Happy to share my docker-compose with Pi-hole and Unbound. I'm not the original author; it's a compilation of a few people's setups. No issues: normal DNS inside the house, DoT outside.
If you don't mind DM'ing me or dropping it in a comment here, it would be greatly appreciated! The Docker engine isn't entirely new to me, so I'm a bit skeptical that I missed something, but I'm always happy to compare with others. Actually, Docker is what pushed me to switch fully to Linux on my personal computers.
Snippet from my docker-compose.yml:
pihole:
container_name: pihole
hostname: pihole
image: pihole/pihole:latest
networks:
main:
ipv4_address: 172.18.0.25
# For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
ports:
- "53:53/tcp"
- "53:53/udp"
- "127.0.0.1:67:67/udp" # Only required if you are using Pi-hole as your DHCP server
- "127.0.0.1:85:80/tcp"
- "127.0.0.1:7643:443"
environment:
TZ: 'America/Vancouver'
FTLCONF_webserver_api_password: 'insert-password-here'
FTLCONF_dns_listeningMode: 'all'
# Volumes store your data between container upgrades
volumes:
- './config/pihole/etc-pihole:/etc/pihole'
- './config/pihole/etc-dnsmasq.d:/etc/dnsmasq.d'
- '/etc/hosts:/etc/hosts:ro'
# https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
cap_add:
- NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed
- CAP_SYS_TIME
- CAP_SYS_NICE
- CAP_CHOWN
- CAP_NET_BIND_SERVICE
- CAP_NET_RAW
- CAP_NET_ADMIN
restart: unless-stopped
labels:
- "traefik.enable=true"
- "traefik.http.routers.pihole.rule=Host(`pihole.my.domain`)"
- "traefik.http.routers.pihole.entrypoints=https"
- "traefik.http.routers.pihole.tls=true"
- "traefik.http.services.pihole.loadbalancer.server.port=80"
- "traefik.http.routers.pihole.middlewares=fail2ban@file"
unbound:
image: alpinelinux/unbound
container_name: unbound
hostname: unbound
networks:
main:
ipv4_address: 172.18.0.26
ports:
- "127.0.0.1:5334:5335"
volumes:
- ./config/unbound/:/var/lib/unbound/
- ./config/unbound/unbound.conf:/etc/unbound/unbound.conf
- ./config/unbound/unbound.conf.d/:/etc/unbound/unbound.conf.d/
- ./config/unbound/log/unbound.log:/var/log/unbound/unbound.log
restart: unless-stopped
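One thing I notice re-reading this: the compose itself never points Pi-hole at the Unbound container (I may have set that through the web UI instead). If you want it in the compose, I believe it would be something like the following in the pihole environment block, using the static address and in-container port from above; treat it as an untested sketch on my part:

  environment:
    # point Pi-hole's FTL at the Unbound container on the shared Docker network
    FTLCONF_dns_upstreams: '172.18.0.26#5335'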
Edit: After re-reading the Unbound GitHub and their documentation, it seems I may have missed some volume mounts that are key to the function of Unbound. I'll definitely have to dive deeper into it.
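Specifically, I think the mounted ./config/unbound/ directory needs the root hints and trust anchor files to actually exist, something like the layout below (my assumption from the docs, not yet verified):

config/unbound/
  unbound.conf   # main config (mounted to /etc/unbound/unbound.conf above)
  root.hints     # copy of https://www.internic.net/domain/named.root
  root.key       # DNSSEC trust anchor, can be generated/updated with unbound-anchor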
services:
pihole:
container_name: pihole
image: pihole/pihole:latest
ports:
# DNS Ports
- "53:53/tcp"
- "53:53/udp"
# Default HTTP Port
- "8082:80/tcp"
# Default HTTPS Port. FTL will generate a self-signed certificate
- "8443:443/tcp"
# Uncomment the below if using Pi-hole as your DHCP Server
#- "67:67/udp"
# Uncomment the line below if you are using Pi-hole as your NTP server
#- "123:123/udp"
environment:
# Set the appropriate timezone for your location from
# https://en.wikipedia.org/wiki/List_of_tz_database_time_zones, e.g:
TZ: 'America/New_York'
# Set a password to access the web interface. Not setting one will result in a random password being assigned
FTLCONF_webserver_api_password: 'false cat call cup'
# If using Docker's default `bridge` network, the DNS listening mode should be set to 'all'
FTLCONF_dns_listeningMode: 'all'
FTLCONF_dns_upstreams: '127.0.0.1#5335' # Unbound
# Volumes store your data between container upgrades
volumes:
# For persisting Pi-hole's databases and common configuration file
- './etc-pihole:/etc/pihole'
# Uncomment the below if you have custom dnsmasq config files that you want to persist. Not needed for most users starting fresh with Pi-hole v6. If you're upgrading from v5 and have used this directory before, you should keep it enabled for the first v6 container start to allow for a complete migration; it can be removed afterwards. Needs environment variable FTLCONF_misc_etc_dnsmasq_d: 'true'
#- './etc-dnsmasq.d:/etc/dnsmasq.d'
cap_add:
# See https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
# Required if you are using Pi-hole as your DHCP server, else not needed
- NET_ADMIN
# Required if you are using Pi-hole as your NTP client to be able to set the host's system time
- SYS_TIME
# Optional, if Pi-hole should get some more processing time
- SYS_NICE
restart: unless-stopped
unbound:
container_name: unbound
image: mvance/unbound:latest # Change to use 'mvance/unbound-rpi:latest' on raspberry pi
# use pihole network stack
network_mode: service:pihole
volumes:
# main config
- ./unbound-config/unbound.conf:/opt/unbound/etc/unbound/unbound.conf:ro
# custom config (unbound.conf.d/your-config.conf). unbound.conf includes these via wildcard include
- ./unbound-config/unbound.conf.d:/opt/unbound/etc/unbound/unbound.conf.d:ro
# log file
- /srv/docker/pihole-unbound/unbound/etc-unbound/unbound.log:/opt/unbound/etc/unbound/unbound.log
restart: unless-stopped
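Once both containers are up, a quick sanity check from the host goes through Pi-hole on the published port 53, since 5335 isn't exposed outside the shared network namespace in this compose. dnssec-failed.org is a deliberately broken test domain, so a SERVFAIL there means DNSSEC validation is actually doing its job:

dig pi-hole.net @127.0.0.1          # should resolve via Pi-hole -> Unbound
dig dnssec-failed.org @127.0.0.1    # should come back SERVFAIL if validation is working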
I am relatively new to Docker as well, tbh. I did a lot with virtualization and a lot with Linux and never bothered with it, but I totally get the use case now, ha. Just an FYI: if you use Docker on Windows it runs slower, since it has to go through the Windows Subsystem for Linux (WSL) and a slightly different Docker engine (I forget which one). So Linux is your best bet. If you do want to use a full VM, I found QEMU to be the best option for the least resource usage.