Category Archives: docker

Quirks of DNS traffic with Docker Compose

Recently I had a scenario where I wanted to restrict the network traffic coming out of certain processes I started inside a container, so they could only do the minimum required for them to work and not reach anything else outside the container. To explain what I found, let’s imagine that my process only wants to make an HTTP HEAD request to http://www.google.com (on port 80, not 443).

It will obviously need to send packets with destination port 80, as well as packets with destination port 53 so it can make DNS requests to resolve www.google.com. So let’s implement a quick setup with iptables to accomplish this. We’ll use the following Dockerfile, which installs curl, iptables, and dnsutils on top of the default Ubuntu image, so we can test our scenario.

Dockerfile

FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl iptables dnsutils

And the following docker-compose.yml file to help us build and run our container.

docker-compose.yml

version: "3.4"
services:
  my-container:
    build:
      context: .
    image: my-custom-image
    cap_add:
      - NET_ADMIN
    command: tail -f /dev/null

The scenario I want to talk about only happens when starting services with Docker Compose, not when starting containers directly with docker run, so using a docker-compose.yml file is necessary even if it feels a bit overkill. Note that we specify the NET_ADMIN capability for the container, which we need in order to use iptables, and a command that keeps the container running so we can connect to it after Docker Compose starts it.

Now we run docker-compose -p test up -d in the folder that contains both files; Docker Compose builds the image and starts a container. We can then get a shell inside that container with docker exec -it test_my-container_1 bash.
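Putting it together, the whole sequence looks like this (Compose names the container following its project_service_1 convention, hence test_my-container_1):

# Build the image (if needed) and start the service in the background
docker-compose -p test up -d
# Open an interactive shell inside the running container
docker exec -it test_my-container_1 bash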

Let’s start by verifying that we can make our HEAD request to www.google.com:


[Screenshot: HEAD request works]
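If you want to reproduce that check outside of a screenshot, it amounts to running the following; a successful request prints the response headers (for example HTTP/1.1 200 OK):

# Send only a HEAD request and print the response headers
curl --head http://www.google.com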

Great. Now let’s set up the iptables rules discussed above and make sure they look right.

# Allow traffic to the loopback address, so connections to the machine itself keep working
iptables --append OUTPUT --destination 127.0.0.1 --jump ACCEPT
# Allow outgoing HTTP traffic
iptables --append OUTPUT --protocol tcp --dport 80 --jump ACCEPT
# Allow outgoing DNS requests
iptables --append OUTPUT --protocol udp --dport 53 --jump ACCEPT
# Drop everything else
iptables --append OUTPUT --jump DROP
# Review the rules and their packet counters
iptables -L -v
[Screenshot: Set up iptables rules]

We add the rule for localhost just to make sure that we don’t break anything that’s connecting to the machine itself (without it, the rest of this scenario won’t work as expected).

Now we test curl --head www.google.com again to make sure everything’s fine… but it says it cannot resolve the host! Furthermore, nslookup www.google.com times out. And checking the iptables rules we see 5 packets dropped by the last rule, but none accepted by the rule for UDP port 53. How come?

[Screenshot: curl does not resolve the host, nslookup times out]
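These are the checks behind that screenshot, if you want to reproduce them:

# Fails with a "could not resolve host" error
curl --head www.google.com
# Times out, because the DNS request never gets an answer
nslookup www.google.com
# The packet counters show hits on the final DROP rule but none on the UDP 53 rule
iptables -L OUTPUT -v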

Well, it turns out that when Docker Compose creates a service, it adds iptables rules in another table (the nat table) to reroute certain traffic through Docker’s own infrastructure. In particular, it changes the destination port of DNS requests from 53 to something else. You can see this by running iptables -L -v -t nat:

[Screenshot: iptables rules in the nat table]

Here we can see that there’s a rule mapping UDP port 53 to port 53789 for requests going to IP 127.0.0.11 (where Docker hosts its DNS resolver). So if we now add another iptables rule allowing that port to our setup, our curl command works again!
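A minimal sketch of such a rule, assuming the mapped port is 53789 as in the screenshot (it has to go before the final DROP rule, so we insert it rather than append it):

# Allow DNS requests after Docker's NAT has rewritten the destination port
# (53789 here; the port will be different on your machine)
iptables --insert OUTPUT 1 --protocol udp --dport 53789 --jump ACCEPT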

[Screenshot: curl works again after adding the new iptables rule]

However, that port is not static, so the approach I ended up taking was to create a rule allowing any packet with destination IP 127.0.0.11, which is where Docker hosts its DNS server and the only address for which it remaps ports.
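That rule looks something like this, again inserted ahead of the final DROP rule:

# Allow anything going to Docker's embedded DNS server, whatever port it gets mapped to
iptables --insert OUTPUT 1 --destination 127.0.0.11 --jump ACCEPT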

Conclusion

If you plan to mess with DNS network traffic in your containers and you use Docker Compose to start them, be aware that Docker sets up rules to change the destination port for DNS requests.

Fixing error code 137 when building a Docker image

A few days ago I was containerizing an Angular web application and ran into an issue that I think is worth documenting for future reference.

Implementing the application itself went without a hitch, and everything looked good when hitting F5 in Visual Studio. But when I ran docker build, I got the following error from the step that ran dotnet publish:

> client-app@0.0.0 build /src/MyApp/ClientApp
> ng build "--prod"

Killed
npm ERR! code ELIFECYCLE
npm ERR! errno 137
npm ERR! client-app@0.0.0 build: `ng build "--prod"`
npm ERR! Exit status 137
npm ERR!
npm ERR! Failed at the client-app@0.0.0 build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2018-11-03T09_21_34_260Z-debug.log

The first thing to know is that the Dockerfile was building and publishing the application in Release mode, not the Debug mode I had been using to run it in Visual Studio. So I first tried publishing it in Release mode on its own (outside of the Dockerfile build) with dotnet publish -c Release MyApp.csproj… and that worked fine. The issue therefore had to do with the fact that the app was being built and published inside a container.
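For context, a Dockerfile for this kind of project usually follows the multi-stage pattern sketched below; this is illustrative rather than the exact file (image tags, project names and paths are placeholders, and the build stage also needs Node.js available so that ng build can run):

# Build stage: publishing in Release mode also runs "npm run build" (ng build --prod) for the ClientApp
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
# This is the step that was getting killed with exit status 137
RUN dotnet publish MyApp/MyApp.csproj -c Release -o /app/publish

# Runtime stage: copy only the published output into a smaller runtime image
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]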

With a bit of googling I found out that exit code 137 means the process was killed with SIGKILL (137 = 128 + 9, the signal number), which usually happens when the Linux kernel’s OOM killer terminates the process because the system is running out of memory.
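A quick way to see where that number comes from: shells report a process killed by signal N as exit code 128 + N, and SIGKILL is signal 9, so 128 + 9 = 137.

# Kill a shell with SIGKILL and print the exit code the parent sees: 137
bash -c 'kill -9 $$'; echo $?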

So I looked at my Docker configuration (right-click the docker icon in the system tray, go to Settings, then Advanced) and saw that the Linux VM was configured with only 2GB of RAM. I’m surprised that isn’t enough, but I bumped it to 4GB to see if it made any difference… and it did! docker build now ran successfully!
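If you want to double-check how much memory the Docker engine actually has available (before and after changing the setting), docker info reports it:

# Prints the total memory available to the Docker engine, in bytes
docker info --format "{{.MemTotal}}"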

At some point I’ll figure out why my application requires so much RAM to build… but at least now I’m able to create the docker image successfully.