Currently I have a nice long Docker Compose file that hosts my PiHole v6 container (along with a bunch of other containers). The reason I ask this question is that whenever I pull an updated image and recreate the container, I experience about 20 minutes of no DNS resolution, which to my knowledge is due to the NTP clock being out of sync.
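For context, the update that triggers the outage is just the usual pull-and-recreate, roughly like this (the service name pihole is simply what it's called in my compose file):
docker compose pull pihole
docker compose up -d pihole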
What’s the best way to host a DNS sinkhole/resolver that can mitigate this issue?
Was thinking of utilizing Proxmox & LXC but I suspect I’ll get the same experience.
Update: Turns out PiHole doesn't support two instances. I got both of them on separate devices and also set the 2nd DNS server in my router's WAN & LAN DNS settings, which did in fact split DNS between both instances. However, I lost access to my router's web UI, my Traefik instance & reverse proxies died, and I lost all internet access.
So, don’t do what I did.
Update 2: Disregard everything I said in my first update. Turns out I had my router forcing all DNS to PiHole server 1, which caused the issues mentioned above.
Two servers appear to work!
I’m looking into Technitium, which doesn’t get a ton of attention here. It looks to be much more feature-packed than PiHole (DNS over HTTPS, for example), and similar to AdGuard Home.
Man, I was excited about Technitium, but I’ve had a hell of a time trying to get it to work. I’m not sure if it’s intended to be on a DMZ in order to get TLS working or something, but I’ve not been able to get it to acknowledge a single DNS request, even when I think I’ve shut down DNSSEC entirely.
This is overkill.
I have a dedicated raspberry pi for pihole, then two VMs running PowerDNS in Master/Slave mode. The PDNS servers use the Pihole as their primary recursive lookup, followed by some other Internet privacy DNS server that I can’t recall right now.
If I need to do maintenance on the pihole, PowerDNS can fall back to the internet DNS server. If I need to do updates on the PowerDNS cluster, I can do them one at a time to reduce the outage window (rough config sketch below).
EDIT: I should have phrased the first sentence as “My setup is overkill” rather than “This is overkill” - the OP is asking a very valid question and the ambiguous phrasing of my post’s first sentence could be taken multiple ways.
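For anyone curious, the forwarding piece of that setup is basically one line in the Recursor's recursor.conf (IPs are examples, and I'm assuming the classic .conf-style settings rather than the newer YAML config):
# Forward all queries (recursion desired) to the PiHole, plus a public resolver as a fallback
forward-zones-recurse=.=192.168.30.31;9.9.9.9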
The **ONLY** DNS server you should have set on your network is a/the PiHole(s).
2 PiHole instances: one Pi 5, one Pi 4. Keepalived provides VRRP at a set address.
Instances kept in sync via Orbital Sync.
If one goes down, the other takes over.
Quite elegantly.
Where do you do DHCP? I had a primary pihole with DHCP enabled and a secondary with a cron job that enabled DHCP if the primary was down, or disabled it if the primary was working. The cron job did sync DHCP leases from one to the other, but it was a bit janky (rough sketch below). I tried to update the secondary to pihole v6 and hosed it, so I have no backup for now. I’d like to re-image the secondary and get a better setup - when I have time.
Edit to say I really wanted to try keepalived - that’s really cool to fail over without clients noticing.
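For reference, the janky cron check on the secondary was roughly this shape (the enable/disable scripts are placeholders; swap in however your PiHole version actually toggles DHCP):
#!/bin/bash
# Runs from cron on the secondary PiHole every minute or so.
PRIMARY=192.168.30.31   # primary PiHole (example IP)
if dig +time=2 +tries=1 @"$PRIMARY" pi.hole > /dev/null 2>&1; then
    # Primary answers DNS: keep our DHCP server disabled (placeholder script)
    /usr/local/bin/disable-dhcp.sh
else
    # Primary is down: take over DHCP (placeholder script)
    /usr/local/bin/enable-dhcp.sh
fi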
Debian & Ubuntu
sudo apt install keepalived
sudo apt install libipset13
Configuration
Find your IP
ip a
Edit your config
sudo nano /etc/keepalived/keepalived.conf
First node
vrrp_instance VI_1 {
    state MASTER
    interface ens18
    virtual_router_id 55
    priority 150
    advert_int 1
    unicast_src_ip 192.168.30.31
    unicast_peer {
        192.168.30.32
    }
    authentication {
        auth_type PASS
        auth_pass C3P9K9gc
    }
    virtual_ipaddress {
        192.168.30.100/24
    }
}
Second node
vrrp_instance VI_1 {
    state BACKUP
    interface ens18
    virtual_router_id 55
    priority 100
    advert_int 1
    unicast_src_ip 192.168.30.32
    unicast_peer {
        192.168.30.31
    }
    authentication {
        auth_type PASS
        auth_pass C3P9K9gc
    }
    virtual_ipaddress {
        192.168.30.100/24
    }
}
Start and enable the service
sudo systemctl enable --now keepalived.service
Stop the service
sudo systemctl stop keepalived.service
Get the status
sudo systemctl status keepalived.service
Make sure to change the IPs and the auth pass.
Enjoy
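To sanity check it, see which node currently holds the virtual IP and query DNS through it (interface name and addresses match the example config above):
ip a show ens18 | grep 192.168.30.100
dig @192.168.30.100 pi.hole
Stop keepalived on the master and the backup should pick up the VIP within a few seconds.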
How do you host your DNS sinkhole/resolver?
Like this, baby:
services.adguardhome = {
  enable = true;
  mutableSettings = false;
  openFirewall = true;
  settings = {
    dns = {
      # Web Interface
      bootstrap_dns = ["9.9.9.9" "149.112.112.112"];
      upstream_dns = ["https://dns.quad9.net/dns-query"];
      fallback_dns = ["tls://dns.quad9.net"];
    };
    filters = [
      {
        name = "AdGuard DNS filter";
        url = "https://adguardteam.github.io/HostlistsRegistry/assets/filter_1.txt";
        enabled = true;
      }
    ];
    filtering = {
      blocked_services = { ids = [ ]; };
      protection_enabled = true;
      filtering_enabled = true;
      rewrites = [ ];
    };
  };
};
Deploy it to the main home server and to the backup instance. NixOS is fucking awesome. No sync tool needed.
How do I use NixOS for Docker? I’ve tried before, but what I want is to be able to pull a docker compose file from a git repo and deploy it. I haven’t been able to find an easy way to do that with Docker on NixOS.
If you have the docker-compose.yml locally, you can run
nix run github:aksiksi/compose2nix
to translate it into a Nix file for inclusion in your NixOS system config. I think that could be done in the config itself with a git URL, but I’m not that great at Nix. You will surely still need some manual config to e.g. set environment variables for paths and secrets.
If you run a single DNS server, you will always have downtime when it’s restarted.
The only way to mitigate that is to run two DNS servers.
I set up my network to use PiHole as the first DNS and the router as the second. Most of the time PiHole is used, unless it’s down.
How do you set up clients so they will always use the first one? I thought if a client knows two servers, it will switch between them.
I plan to add a second Pihole at some point and keep them synced
Yeah, you can’t. There is no guarantee that clients will use DNS servers in any particular order.
Not that it particularly matters for just queries. The problem is that DHCP can only be enabled on one host. If that one fails, devices can’t get onto the network at all. I’d like to know a good way to have a failover DHCP server - my janky cronjob isn’t great.
You can just run two DHCP servers. Give them non-overlapping ranges, or give them the same MAC-to-IP mappings.
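With PiHole's dnsmasq-based DHCP, the two approaches look roughly like this in dnsmasq terms (example subnet and MAC; on PiHole you would normally set this through the web UI or a dnsmasq drop-in, so treat it as a sketch):
# Option 1: give each server its own non-overlapping pool
# server 1:
dhcp-range=192.168.40.100,192.168.40.149,24h
# server 2:
dhcp-range=192.168.40.150,192.168.40.199,24h
# Option 2: pin the same static MAC-to-IP mapping on both servers
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.40.50,nas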
How do the DNS servers resolve local hostnames then? The pihole DHCP integration adds local hostnames to DNS when they are assigned an address. If there are two DHCP servers handing out leases, presumably only one lease would be accepted; how then would the DNS servers sync those names?
I think I had my secondary pihole resolve local names from the primary, and leases were copied over on a cronjob in case the secondary DHCP server had to be enabled.
Use the second option, a static MAC-to-IP map, and add the relevant records to each PiHole’s local DNS.
The **ONLY** DNS server you should have set on your network is a/the PiHole(s).
Just be sure that the second server in the list is also a black hole. If you don’t, all black-holed requests will fall back to the second DNS… which, if it doesn’t also black-hole them, will wind up serving you ads and defeating the point!
Personally I find a single Pi is just fine for DNS. It only takes like 10 seconds to reboot. Less, if you use M.2 storage via a HAT or boot from USB! That’s pretty fine downtime. But if you’re afraid you’ll knock over the network and get yelled at by your family or housemates, best to use a backup :)
For a critical service like DNS, I decided to set it up bare metal on a Raspberry Pi 2 (even a Pi Zero should work). It’s been working fine for years, I just update it from time to time. That way I can mess with my homelab without worrying about DNS issues.