Setting up piHole in Docker Swarm

At home I have set up three Raspberry Pis in a Docker cluster. You know, just for the fun of it and to keep learning new things. Because how could I ever love doing IT stuff if I didn’t set myself up for failure at every point?

I wanted to try out piHole at home using Docker, which was easy enough. Just install the Pis and make sure they have their own DNS (Google's 8.8.8.8) configured, so that the hosts never loop back to the piHole Docker container.
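For reference, here is a minimal sketch of what that looks like on Raspbian when dhcpcd manages the network. The interface name, the Pi's address and the router address (192.168.1.1) are assumptions; adjust them for your own network.

/etc/dhcpcd.conf (excerpt):

interface eth0
# Fixed address for the Pi and its own upstream resolvers,
# so the host never depends on the piHole container it is about to run.
static ip_address=192.168.1.113/24
static routers=192.168.1.1
static domain_name_servers=8.8.8.8 8.8.4.4

Restart dhcpcd (or simply reboot) for the change to take effect:

$ sudo systemctl restart dhcpcd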

Running a standalone container was easy enough, but I wanted high availability: if one server went down, another one would take up the slack, which is where Docker Swarm comes into play. But as these things usually play out, I had trouble getting piHole to start in the swarm. It wasn’t until I stumbled across this error report that I got it working. Yeah, I can’t take the credit for figuring this problem out, I’m just winging it until I make it. The problem may well be solved in the future, but for now (March 2020) the workaround matters.

docker-stack.yml:

version: "3"
services:

  pihole:
    image: pihole/pihole:latest
    deploy:
      # Three replicas across the swarm; restart a failed task up to three times.
      replicas: 3
      restart_policy:
        condition: on-failure
        max_attempts: 3
    volumes:
      # These host paths must exist on every node a task can land on.
      - "/my/host/docker/config/pihole/etc-pihole:/etc/pihole"
      - "/my/host/docker/config/pihole/etc-dnsmasq.d:/etc/dnsmasq.d"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
      - "443:443/tcp"
    environment:
      # Note: don't quote values in list-form environment entries,
      # or the quotes become part of the value.
      - TZ=Europe/Oslo
      - WEBPASSWORD=thats_just_personal_you_know
      - FTL_CMD=debug
      - DNSMASQ_LISTENING=all
    dns:
      - 127.0.0.1
      - 1.1.1.1

If you try to use the “cap-add” option, you get a message that it has been deprecated and is therefore ignored. The solution is to add these two options to the environment instead:

  • FTL_CMD=debug
  • DNSMASQ_LISTENING=all
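For context, this is the kind of block that gets ignored when you deploy the stack; the piHole compose examples typically include something like it under the service to grant the NET_ADMIN capability (mainly needed when piHole also acts as the DHCP server):

    cap_add:
      - NET_ADMIN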

Yeah, I can’t even pretend to understand why these options work, but they do. I love that there are so many smarter people than me out there.

Remember to set at least two of the node IPs as DNS servers in your router, or the whole point of running piHole in a swarm is lost.

All that needs to be done now is to run the thing:

$ docker stack deploy -c docker-stack.yml pihole
Creating network pihole_default
Creating service pihole_pihole
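To see that all three replicas actually started, the usual swarm status commands do the job (pihole is just the stack name from the deploy command above):

$ docker service ls
$ docker stack ps pihole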

To check that it works, go to http://your.ip/admin/ and you should get the piHole admin GUI. Tip: there are probably some sites you want to whitelist.
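Whitelisting can be done in the GUI, but as a sketch you can also run piHole's own pihole -w command inside one of the containers. The name filter assumes the stack/service name pihole_pihole from above, and s.youtube.com is just an example domain:

$ docker exec -it $(docker ps -qf name=pihole_pihole | head -n 1) pihole -w s.youtube.com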

Docker's ingress routing mesh lets every node in the swarm accept connections on a service's published ports, even if that node isn't running a task for the service. The translation of this is that you can use the IP address of any Docker node to access the admin GUI.
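A quick way to see the mesh in action is to request the admin page from each node in turn (these are my node IPs from below, swap in your own):

$ curl -I http://192.168.1.113/admin/
$ curl -I http://192.168.1.117/admin/
$ curl -I http://192.168.1.119/admin/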

So I wanted to test this out, since taking piHole down means I don’t have internet access. I like having internet access, and I don’t want to piss my son off any more than I already do by taking away all the lovely game commercials he so loves to click on.

This can easily be tested with the dig command.

I have one manager (192.168.1.113) and two workers (…117 & 119). When all nodes are up and working, all requests go to 113. Notice how I query 119 and the request ends up with the manager (113).

$ dig 192.168.1.119 vg.no

; <<>> DiG 9.11.5-P4-5.1-Raspbian <<>> 192.168.1.119 vg.no
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 56378
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;192.168.1.119.			IN	A

;; Query time: 13 msec
;; SERVER: 192.168.1.113#53(192.168.1.113)
;; WHEN: Tue Mar 17 16:01:00 CET 2020
;; MSG SIZE  rcvd: 42

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13178
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;vg.no.				IN	A

;; ANSWER SECTION:
vg.no.			216	IN	A	195.88.54.16
vg.no.			216	IN	A	195.88.55.16

;; Query time: 14 msec
;; SERVER: 192.168.1.113#53(192.168.1.113)
;; WHEN: Tue Mar 17 16:01:00 CET 2020
;; MSG SIZE  rcvd: 66

So when the manager was taken offline, it still worked as expected and the query went to 117:

$ dig 192.168.1.119 vg.no

; <<>> DiG 9.11.5-P4-5.1-Raspbian <<>> 192.168.1.119 vg.no
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 26125
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;192.168.1.119.			IN	A

;; Query time: 131 msec
;; SERVER: 192.168.1.117#53(192.168.1.117)
;; WHEN: Tue Mar 17 16:01:09 CET 2020
;; MSG SIZE  rcvd: 42

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62281
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;vg.no.				IN	A

;; ANSWER SECTION:
vg.no.			206	IN	A	195.88.54.16
vg.no.			206	IN	A	195.88.55.16

;; Query time: 36 msec
;; SERVER: 192.168.1.117#53(192.168.1.117)
;; WHEN: Tue Mar 17 16:01:10 CET 2020
;; MSG SIZE  rcvd: 66
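As a side note, dig can also be pointed at a specific server with the @ syntax, which is handy if you want to test one node directly instead of whichever resolver your host happens to have configured:

$ dig @192.168.1.117 vg.no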

So yeah. I now have piHole running in a swarm with no downtime when a node goes down, and I can update the services without downtime too. Yeah, I like this setup 🙂


2 Responses to Setting up piHole in Docker Swarm

  1. BrainVirus says:

    This is awesome. I am going to work on getting this setup, I do have a few questions.

    In this configuration,

    1. are the requests showing the correct ip of the computer/device that is making the request?
    2. are you doing anything to sync the whitelist/blacklists between the docker containers?
    3. what happens if you want to disable the filter for a few minutes? Do you have to go disable them on each instance of pi-hole, or do you have something else setup to “sync” those actions?

    Thanks!!!

  2. Haridasi says:

    Hi there,

    1. I’m not sure what you mean here. In the dig output you get the correct IP of the host that makes the request.

    2. I have cheated a bit since I use a shared volume for the containers, so there is no need to sync between them. But it’s a great question because it addresses one of the headaches of Docker: sharing data between containers. So far my experience is that piHole works great for a few weeks, then all the piHole containers fail because they have problems opening the database. When that happens, I delete the containers and start them up again (see the one-liner sketched after this reply).

    3. If I want to disable piHole, I just delete the stack: docker stack rm pihole
    The best way to disable the filter is to change the DNS IP addresses in the router.
    Or just manually configure the DNS on your device so it doesn’t use piHole. No matter how you look at it, piHole adds another layer of complexity to your network.
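    As a sketch: if you only need to recreate the piHole tasks (assuming the default service name pihole_pihole), a forced service update does that without removing the whole stack:

    $ docker service update --force pihole_pihole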
