I currently have a nice, long Docker Compose file that hosts my PiHole v6 container (along with a bunch of other containers). The reason I ask this question is that whenever I pull an updated image and recreate the container, I experience about 20 minutes of no DNS resolution, which to my knowledge is due to the NTP clock being out of sync.
What’s the best way to host a DNS sinkhole/resolver that can mitigate this issue?
Was thinking of utilizing Proxmox & LXC but I suspect I’ll get the same experience.
Update: Turns out PiHole doesn't have built-in support for running two synced instances. I put them on separate devices anyway and set the second DNS server in my router's WAN & LAN DNS settings, which did in fact split DNS between both instances. However, I lost access to my router's web UI, my Traefik instance and reverse proxies died, and I lost all internet access.
So, don’t do what I did.
Update 2: Disregard everything I said in my first update. It turns out my router was forcing all DNS to PiHole server 1, which caused the issues mentioned above.
For a critical service like DNS, I decided to set it up bare metal on a Raspberry Pi 2 (even a Pi Zero should work). It's been working fine for years; I just update it from time to time. That way I can mess with my homelab without worrying about DNS issues.
I am running AdGuard Home DNS, not PiHole, but it's the same idea.
I have AGH running in two LXC containers on Proxmox.
I have all DHCP zones configured to point to both instances, and I never reboot both at the same time. Additionally, I watch the status of the service to make sure it’s running before I reboot the other instance.
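For that pre-reboot check, a quick query against the other instance is enough. A minimal sketch, assuming dig is installed and 192.168.1.11 is the other instance's address (a placeholder):

```sh
# Confirm the other instance still answers DNS before rebooting this one.
dig @192.168.1.11 example.com +short \
  && echo "other instance healthy, safe to reboot" \
  || echo "other instance down, do NOT reboot this one"
```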
Outside of that, there’s really no other approach.
You would still need at least two DNS servers, but you could set up some sort of virtual IP or load-balancing IP and configure DHCP to point to that IP, so when one instance goes down, traffic fails over to the other instance.
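keepalived is one common way to do this with a VRRP floating IP. A minimal sketch of /etc/keepalived/keepalived.conf, assuming two DNS hosts on eth0 and 192.168.1.53 as the shared virtual IP (both placeholders):

```
vrrp_instance DNS_VIP {
    state MASTER              # set to BACKUP on the second host
    interface eth0
    virtual_router_id 53
    priority 150              # use a lower priority (e.g. 100) on the backup
    advert_int 1
    virtual_ipaddress {
        192.168.1.53/24       # the IP your DHCP server hands out for DNS
    }
}
```

DHCP then only ever advertises 192.168.1.53; whichever host currently holds the VIP answers the queries.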
Spin up a second PiHole container and upgrade them separately so clients can fail over to the other one while one is upgrading. I don't have an issue with a 20-minute loss of DNS after updating my Pi-hole container, but I did spin up a second one when I wanted to try Unbound + Pi-hole, and I just kept them both up and running.
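A minimal Compose sketch of that idea, assuming the host owns two LAN addresses (192.168.1.2 and 192.168.1.3 are placeholders) so each instance can bind port 53 on its own IP:

```yaml
services:
  pihole-a:
    image: pihole/pihole:latest
    environment:
      TZ: "Etc/UTC"
    ports:
      - "192.168.1.2:53:53/udp"
      - "192.168.1.2:53:53/tcp"
    volumes:
      - ./pihole-a:/etc/pihole    # separate state per instance
    restart: unless-stopped

  pihole-b:
    image: pihole/pihole:latest
    environment:
      TZ: "Etc/UTC"
    ports:
      - "192.168.1.3:53:53/udp"
      - "192.168.1.3:53:53/tcp"
    volumes:
      - ./pihole-b:/etc/pihole
    restart: unless-stopped
```

Hand out both IPs as DNS servers via DHCP, then update one service at a time: docker compose pull, docker compose up -d pihole-a, verify it resolves, then do pihole-b.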
I think something else may be wrong if it breaks for 20 minutes.
When I originally set up my PiHole many, many, many months ago, when I was still learning the Docker engine, I had little to no issue.
I don't know what caused it, either a power outage or network loss, but ever since, I've been experiencing DNS-related issues (I suspect it's NTP not syncing). Some days I'll wake up before work realizing "oh shit, I have no internet access" and frantically try to fix it.
I think I might take the advice of other commenters here and host two PiHole servers on separate devices/stacks; I just have to hope my router supports it.
I run my pi-hole on a dedicated Pi, and I pull the updated image first without any trouble. Then after the updated image is pulled, recreating the container only takes a few seconds.
Dunno what's broken about your setup, but it definitely sounds like something unusual to me.
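For what it's worth, that pull-first workflow is just two commands; a sketch, assuming the Compose service is named pihole:

```sh
# Download the new image while the old container keeps serving DNS...
docker compose pull pihole
# ...then recreate; downtime is only the few seconds of the restart itself.
docker compose up -d pihole
```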
Where do you do DHCP? I had a primary PiHole with DHCP enabled and a secondary with a cron job that enabled DHCP if the primary was down and disabled it if the primary was working. The cron job did sync DHCP leases from one to the other, but it was a bit janky. I tried to update the secondary to PiHole v6 and hosed it, so I have no backup for now. I'd like to re-image the secondary and get a better setup when I have time.
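For anyone curious, that cron job was along these lines. A sketch only, assuming Pi-hole v5's pihole -a enabledhcp / disabledhcp admin helpers (removed in v6); the primary's address, DHCP range, gateway, and domain are all placeholders:

```sh
#!/bin/sh
PRIMARY=192.168.1.2

# If the primary still answers DNS, keep our DHCP server off;
# otherwise take over DHCP (args: start, end, gateway, lease hours, domain).
if dig +time=2 +tries=1 @"$PRIMARY" pi.hole >/dev/null 2>&1; then
    pihole -a disabledhcp
else
    pihole -a enabledhcp 192.168.1.100 192.168.1.200 192.168.1.1 24 lan
fi
```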
Edit to say I really wanted to try keepalived: failing over without clients even noticing is really cool.
I have a dedicated Raspberry Pi for PiHole, plus two VMs running PowerDNS in master/slave mode. The PDNS servers use the PiHole as their primary recursive lookup, followed by some other internet privacy DNS server that I can't recall right now.
If I need to do maintenance on the PiHole, PowerDNS can fall back to the internet DNS server. If I need to do updates on the PowerDNS cluster, I can do it one node at a time to reduce the outage window.
EDIT: I should have phrased the first sentence: "My setup is overkill" rather than "This is overkill" - the Op is asking a very valid question and the passive phrasing of my post's first sentence could be taken multiple ways.
Sorry, I wasn't clear - I use PowerDNS so that I can more easily deploy services that can be resolved by my internal networks (deployed via Kubernetes or Terraform). In my case, the secondary PowerDNS server does regular zone transfers from the primary to ensure it has a copy of all A, PTR, CNAME, etc. records.
But PowerDNS (and all DNS servers, really) can act as either an authoritative server or a recursor. In my case, the PDNS servers are authoritative for my homelab zone/domain, and they perform recursive lookups (with caching) for non-authoritative domains like google.com, infosec.pub, etc. By pointing my PDNS servers at PiHole for recursive lookups, I get ad blocking while still allowing my automation to handle the homelab records.
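A minimal sketch of what that split looks like in a pre-5.x PowerDNS Recursor recursor.conf, assuming home.lab is the homelab zone, the local authoritative server listens on 127.0.0.1:5300, and the PiHole sits at 192.168.1.53 (all placeholders):

```
# Queries for the homelab zone go to the local authoritative server:
forward-zones=home.lab=127.0.0.1:5300

# Everything else is forwarded through PiHole for ad blocking:
forward-zones-recurse=.=192.168.1.53

# Listen on all interfaces, but only answer the homelab networks:
local-address=0.0.0.0
allow-from=192.168.0.0/16
```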
I'm looking into Technitium, which doesn't get a ton of attention here. It looks to be much more feature-packed than PiHole (DNS over HTTPS, for example), and similar to AdGuard Home.
Man, I was excited about Technitium, but I've had a hell of a time trying to get it to work. I'm not sure if it's intended to sit on a DMZ in order to get TLS working or something, but I haven't been able to get it to acknowledge a single DNS request, even when I think I've shut off DNSSEC entirely.
How do I use NixOS for Docker? I've tried before, but what I want is to be able to pull a Docker Compose file from a Git repo and deploy it. I haven't been able to find an easy way to do that on NixOS.
If you have the docker-compose.yml locally, you can run nix run github:aksiksi/compose2nix to translate it into a Nix file for inclusion in your NixOS system config. I think that could be done in the config itself with a Git URL, but I'm not that great at Nix. You will surely still need some manual config, e.g. to set environment variables for paths and secrets.
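A rough sketch of that workflow, with a placeholder repo URL; the generated filename and any compose2nix options are from memory, so double-check its README:

```sh
# Clone the repo that holds the compose file and translate it to Nix.
git clone https://example.com/you/homelab.git && cd homelab
nix run github:aksiksi/compose2nix      # reads the local docker-compose.yml

# Then import the generated file from your NixOS configuration, e.g.:
#   imports = [ ./docker-compose.nix ];  # filename may differ
# and rebuild:
sudo nixos-rebuild switch
```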