
Giving Docker containers routable networking

Docker is pretty sweet. If you’re like me, the concept of commoditizing your infrastructure artifacts is very intriguing. However, since I’m still a fan of clarity and have no need for immense service density, I generally like my various nodes to be distinguished by IP. I also like them to be routable so I can control access from a central firewall.

The TASK:
Allow Docker containers to get routable, non-NAT IP addresses from DHCP.

The CAVEAT:
The containers need to have their traffic tagged for a specific VLAN, so that access can be controlled by the central network firewall. The solution also can’t require manually configuring IPTables or dummy interfaces that might persist after any given container is destroyed.

The SOLUTION:
Those familiar with Pipework are probably thinking the solution is pretty easy. Pipework does make up a significant *part* of the final solution, but there’s more to it than Pipework alone.
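
If you don’t already have Pipework on the Docker host, it’s a single shell script. Assuming the upstream project is still at jpetazzo/pipework on GitHub, grabbing it looks roughly like this:

sudo bash -c "curl -sSL https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework > /usr/local/bin/pipework"
sudo chmod +x /usr/local/bin/pipework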

First things first: if we want our containers to operate on a VLAN distinct from the host they run on, we have to prep our host. I’m assuming this host will be reused across multiple projects, so the configuration should persist across reboots.

I’m using Ubuntu Server 14.04 as the Docker host, so we need to install the prerequisite packages and do the basic setup for VLANs (installing Docker itself is well covered elsewhere, so I won’t repeat it here):

apt-get install vlan bridge-utils
echo "8021q" >> /etc/modules
modprobe 8021q
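
To double-check that the 802.1q module actually loaded and is set to load again at boot:

lsmod | grep 8021q       # module is loaded right now
grep 8021q /etc/modules  # module will be loaded on reboot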

Next, we need to set up our networking. My host only has one physical NIC, and the switch is already pulling it into VLAN 10 natively (untagged). I want the containers to run in VLAN 500. To that end, let’s create a VLAN interface and dump it into a bridge with a static IP.

vconfig add eth1 500
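
If you want to confirm the kernel actually created the tagged sub-interface:

cat /proc/net/vlan/config   # should list eth1.500 as VLAN 500 on eth1
ip link show eth1.500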

Edit /etc/network/interfaces

auto eth1
iface eth1 inet manual

# Interface for VLAN 500; the pre-up/post-down hooks add and remove the
# VLAN sub-interface automatically, in case you forgot :P
auto eth1.500
iface eth1.500 inet manual
   vlan-raw-device eth1
   pre-up vconfig add eth1 500
   post-down vconfig rem eth1.500

# Bridge for VLAN 10
auto br10
iface br10 inet static
# Pick a free host address in the subnet (the .0 network address won't do).
   address 192.168.10.2
   netmask 255.255.255.0
   gateway 192.168.10.1
   dns-nameservers 192.168.10.1 8.8.8.8
# VLAN 10 is the native (untagged) VLAN on this port, so we can use the base interface.
   bridge_ports eth1
   bridge_stp off
   bridge_fd 0
   bridge_maxwait 0

# Bridge for VLAN 500
auto br500
iface br500 inet static
   address 10.0.10.2
   netmask 255.255.255.0
   gateway 10.0.10.254
   dns-nameservers 8.8.8.8
# Using the VLAN-specific sub-interface this time.
   bridge_ports eth1.500
   bridge_stp off
   bridge_fd 0
   bridge_maxwait 0
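
With the configuration in place, bring everything up and make sure the bridges grabbed the right ports (a clean reboot is the simplest way to prove the config persists; ifup works too if the interfaces aren’t already up):

ifup eth1.500 br10 br500
brctl show          # br10 should contain eth1, br500 should contain eth1.500
ip addr show br500  # the static address should be sitting on the bridge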

Before I go any further, let me explain the issues I encountered while pursuing this solution. By default, new Docker containers are attached to the Docker bridge network (docker0: 172.17.0.0/16), which NATs the container’s eth0 and also acts as its default gateway. Pipework can easily add an eth1 attached to the right bridge and subnet, which lets the container talk to other devices in that subnet. However, the container isn’t allowed to change its core routes unless it’s launched with additional privileges, which means the default gateway stays on docker0 and traffic still goes out on VLAN 10. I’m a fan of least privilege, so this wasn’t acceptable to me. I discovered that if, instead of launching the container into the default network, I launch it with NO networking at all, the interface that Pipework adds is allowed to define the default gateway.
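
If you want to see that restriction for yourself, try changing routes inside a stock container (I’m using busybox here purely as a convenient throwaway image); without extra capabilities the kernel refuses, and it only succeeds if you hand the container NET_ADMIN:

docker run --rm busybox ip route del default
# fails with something along the lines of: RTNETLINK answers: Operation not permitted
docker run --rm --cap-add NET_ADMIN busybox ip route del default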

Understanding this, launch a new container making sure to set ‘--net=none’, then let Pipework plumb it together. This example connects the container to br500 (VLAN 500), with the IP determined by DHCP (‘U:asterisk’ is optional; it just helps the container stick to the same IP in case you’re using lease reservations):

CID=$(docker run -dit --name asterisk --net=none asterisk:latest)
pipework br500 $CID dhclient-f U:asterisk
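
If you want to verify the result (this assumes your image ships iproute2 and your Docker version has docker exec; by default Pipework names its interface eth1 inside the container):

docker exec asterisk ip addr show eth1   # should show a 10.0.10.x address from DHCP
docker exec asterisk ip route            # the default route should point at the VLAN 500 gateway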

No need to define ports to forward or anything else. It just works!