This server was overdue for a migration to new hardware, and I used the opportunity to make its setup reproducible by basing it on Docker containers. This allowed me to test everything locally; setting it up on the real server then took only about an hour. Some issues didn’t show up locally, however, most importantly Docker’s weird IPv6 support. Everything worked just fine when the server was accessed via IPv4, but accessing it via an IPv6 address caused connections to hang. I originally hit this issue with Docker 1.13.1, and updating to Docker 17.12 didn’t change anything. Figuring this out took me quite a while, so I want to sum up my findings here.
First, it is important to know that Docker currently has two entirely different mechanisms for implementing published ports. The default is the userland proxy, an application that listens on a port and forwards any incoming traffic to the respective container. The downside of this solution: the proxy has to open a new connection to the container, so the container no longer sees the remote address of the real client but merely the proxy’s address. This might be acceptable for some applications, but if, for example, a web server runs inside a container, it needs to log real remote addresses.
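To illustrate why the client address gets lost, here is a minimal sketch of the userland-proxy pattern using plain TCP sockets. This is not Docker’s actual `docker-proxy` code, just the same connection-forwarding idea reduced to its core:

```python
import socket

# A "backend" socket stands in for the service inside the container.
backend = socket.socket()
backend.bind(("127.0.0.1", 0))
backend.listen(1)

# The "proxy" socket stands in for the userland proxy listening on the
# published port on the host.
proxy = socket.socket()
proxy.bind(("127.0.0.1", 0))
proxy.listen(1)

# A client connects to the published port...
client = socket.socket()
client.connect(proxy.getsockname())
conn_from_client, _ = proxy.accept()

# ...and the proxy opens a *new* connection to the backend to forward
# the traffic.
upstream = socket.socket()
upstream.connect(backend.getsockname())
conn_from_proxy, seen_addr = backend.accept()

proxy_addr = upstream.getsockname()      # the proxy's side of the new connection
real_client_addr = client.getsockname()  # the real client's address

# The backend sees the proxy's address, not the real client's.
print(seen_addr == proxy_addr)        # True
print(seen_addr == real_client_addr)  # False

for s in (client, conn_from_client, upstream, conn_from_proxy, proxy, backend):
    s.close()
```

The second connection is exactly what erases the client’s address: as far as the backend is concerned, its peer is the proxy.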
So you will often see recommendations to disable the userland proxy, which was even supposed to become the default setting (that hasn’t happened yet because of stability issues). In this mode, Docker (at least on Linux) uses iptables to forward incoming traffic to the container, the way a router would do it. You will still see published ports being held open on the host by `dockerd`, but that’s merely a placeholder meant to prevent other applications from listening on the same port. In reality, traffic destined for the published ports should never reach `dockerd`. Except that for IPv6 traffic it does, because Docker only sets up forwarding rules in iptables for IPv4 traffic.
You can see the IPv4 rules created by Docker by running `iptables -nL`; running `ip6tables -nL`, on the other hand, will show no rules for IPv6 traffic. My understanding is that this isn’t due to implementation complexity; adding the same set of rules for IPv6 would be rather trivial. The official reason for handling IPv6 traffic differently is rather that IPv6 addresses aren’t supposed to be used behind a NAT. So instead of routing all traffic through the host’s external IP address, one is supposed to give containers public IPv6 addresses and direct the traffic to those directly. Needless to say, this inconsistency between IPv4 and IPv6 complicates the setup quite significantly when a single host runs multiple containers, not to mention that it potentially exposes container internals to the outside world. The official documentation is hopelessly unhelpful here and merely confuses matters.
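If you want to dig deeper, the actual port-forwarding (DNAT) rules live in the nat table rather than the default filter table that plain `iptables -nL` shows:

```shell
# IPv4: Docker's DNAT rules for published ports live in the nat table
iptables -t nat -nL DOCKER

# IPv6: out of the box there is nothing Docker-created here; this will
# typically report that the chain does not even exist
ip6tables -t nat -nL DOCKER
```

Both commands need to be run as root on the host.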
Luckily, community members have stepped in and devised a solution that makes published ports just work with IPv6. First of all, you need to make sure that IPv6 is enabled on the network used by your containers. If you are using the default network, you would do it like this in your docker-compose.yml file:
```yaml
version: "2.1"

networks:
  default:
    driver: bridge
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.20.0.0/16
        - subnet: fd00:dead:beef::/48
```
And then you need to add `ipv6nat` as a privileged container that will take care of setting up the IPv6 forwarding rules:
```yaml
services:
  ipv6nat:
    container_name: ipv6nat
    restart: always
    image: robbertkl/ipv6nat
    privileged: true
    network_mode: host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /lib/modules:/lib/modules:ro
```
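If you prefer not to use Compose for this, the same container can be started directly; the following is a sketch translating the settings above into a `docker run` invocation (check the robbertkl/ipv6nat documentation for the currently recommended command):

```shell
docker run -d --name ipv6nat \
  --privileged \
  --network host \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /lib/modules:/lib/modules:ro \
  robbertkl/ipv6nat
```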
There you go, it just works. Except that there is one more catch: don’t test your IPv6 setup via the `::1` address, it won’t work. The container will see a request coming from `::1` and will try sending a reply to that address, meaning that it will reply to itself rather than to the host. Testing with your external IPv6 address will work fine.