Author Archives: Brad

Giving Docker containers routable networking

Docker is pretty sweet. If you’re like me, the concept of commoditizing your infrastructure artifacts is very intriguing. However, since I’m still a fan of clarity and have no need for immense service density, I generally like my various nodes to be distinguished by IP. I also like them to be routable so I can control access from a central firewall.

The TASK:
Allow Docker containers to get routable, non-NAT IP addresses from DHCP.

The CAVEAT:
The containers need to have their traffic tagged for a specific VLAN, so that access can be controlled by the central network firewall. The solution also can’t require manually configuring IPTables or dummy interfaces that might persist after any given container is destroyed.

The SOLUTION:
Those familiar with Pipework are probably thinking the solution is pretty easy. While that does represent a significant *part* of the final solution, there’s more to it than just Pipework itself.

First things first, if we want our containers to operate on a VLAN distinct from the host they are running on, we have to prep our host. I’m assuming that this host will be reused constantly through multiple projects, so the configuration should persist.

I’m using Ubuntu Server 14.04 as the Docker host, so we need to install the prerequisite packages and perform the basic VLAN setup (you can find a great walk-through for Docker itself here):

apt-get install vlan bridge-utils
echo "8021q" >> /etc/modules
modprobe 8021q
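
A quick sanity check that the module actually loaded:

lsmod | grep 8021q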

Next, we need to set up our networking. My host only has one physical NIC, and the switch is already pulling it into VLAN 10 natively. I want the containers to run in VLAN 500. To that end, let’s create a VLAN interface and dump it into a bridge with a static IP.

vconfig add eth1 500

Edit /etc/network/interfaces

auto eth1
iface eth1 inet manual
 
# Interface for vlan 500, which automatically adds/removes the vlans
# in case you forgot :P
auto eth1.500
iface eth1.500 inet manual
   vlan-raw-interface eth1
   pre-up vconfig add eth1 500
   post-down vconfig rem eth1.500
 
# Bridge for VLAN 10
auto br10
iface br10 inet static
   address 192.168.10.2
   netmask 255.255.255.0
   gateway 192.168.10.1
   dns-nameservers 192.168.10.1 8.8.8.8
   # VLAN 10 is the native (untagged) VLAN on this port, so the base interface works here.
   bridge_ports eth1
   bridge_stp off
   bridge_fd 0
   bridge_maxwait 0
 
# Bridge for VLAN 500
auto br500
iface br500 inet static
   address 10.0.10.2
   netmask 255.255.255.0
   gateway 10.0.10.254
   dns-nameservers 8.8.8.8
   # Using the VLAN-tagged sub-interface this time.
   bridge_ports eth1.500
   bridge_stp off
   bridge_fd 0
   bridge_maxwait 0
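
With that in place, something like this should bring it all up without a reboot (assuming the stock ifupdown tooling; a reboot works too):

ifup eth1 eth1.500 br10 br500
brctl show   # both bridges should list their member interfaces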

Before I go any further, let me explain the issues I encountered while pursuing this solution. By default, new Docker containers are deployed on the Docker bridge network (docker0, 172.17.0.0/16), which NATs the container’s eth0 and also provides its default gateway. Pipework can easily create an eth1 on the right bridge and subnet, which lets the container communicate with other devices in that subnet. However, the container isn’t allowed to change its core routes without being launched with additional privileges, which means the default gateway will still send traffic out on VLAN 10. I’m a fan of least privilege, so this wasn’t acceptable to me.

The discovery: if, instead of launching the container into the default network, I launch it with NO networking, the interface that Pipework adds is allowed to define the default gateway.

Understanding this, launch a new container ensuring that you set ‘--net=none’, then we’ll let Pipework plumb it together. This example creates a connection between the container and br500 (VLAN 500), with the IP determined by DHCP (‘U:asterisk’ is optional; it just helps the container stick to the same IP in case you’re using lease reservations):

CID=$(docker run -dit --name asterisk --net=none asterisk:latest)
pipework br500 $CID dhclient-f U:asterisk

No need to define ports to forward or anything else. It just works!
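
If you want proof, peek inside the container (this assumes the image ships iproute2, and uses the names from the example above):

docker exec asterisk ip addr show eth1   # the DHCP-assigned address
docker exec asterisk ip route            # default route should point at the VLAN 500 gateway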

CasperJS: Learn, and become mighty.

Time and time again I find myself needing to automate some time-consuming task in a cumbersome web-UI. Time and time again, I dig out what has become one of my most trusted automation tools:

CasperJS


Now, if there is one thing that separates a good SysAdmin from a bad one, it’s ambitious laziness. Minimizing (or eliminating) the time necessary to do repetitive tasks is the hallmark of a good SA, and nothing fits the ‘automate web shit’ rung in the tool belt quite as well as CasperJS.

Headless WebKit with JavaScript interaction

I first came across CasperJS back in my days of heavy web-scraping. At the time I had developed a reliance on a horrible amalgamation of cURL and Regex to accomplish what I needed, but as websites got more sophisticated (and as their maintainers developed a diminishing tolerance for my shit), it became clear that I needed something better.

CasperJS is effectively an otherwise full, headless browser with an external interface for interacting with the pages via JavaScript – imagine GreaseMonkey without the browser. With CasperJS, you can automate endlessly complex interactions with dynamic web content directly from a command line.

I won’t bore you with the beginning parts (their website has fantastic documentation), but here are a couple of idiosyncrasies, as well as a few cool tricks I use often when dealing with web-UI automation.

Problem: Variables generally don’t persist
I discovered this almost immediately. Often, I’ll need to retrieve complex datasets from the page in a way that would typically necessitate a for() loop (think tables full of data). The problem is, you can only return content from the page as a result of an evaluate()’d function, and variables defined in evaluate() scope don’t inherently persist on the page. With that being the case, how can I get complex datasets out of the page, without writing dozens of evaluate() statements?

Solution: jQuery.map(), Array.prototype.map(), or Array.prototype.map.call().
Imagine that we want to retrieve a set of key:value pairs from some HTML-formatted data on the page. For the sake of argument, we’ll say it’s names and phone numbers. We inspect the relevant HTML, and find this:

<table>
<tr>
   <td class="name">John Smith</td>
   <td class="number">310-555-2811</td>
</tr>
<tr>
   <td class="name">Jason Rogers</td>
   <td class="number">404-555-8437</td>
</tr>
</table>

Well, this is pretty handy. There’s one name:number pair within each <tr>, and the values are classed with ‘name’ and ‘number’ respectively. Check this out:

var casper = require('casper').create();
 
casper.start('whatever_url.html', function() {
   console.log(this.evaluate(function() {
      return JSON.stringify(Array.prototype.map.call($('tr'), function(e) {
         var name = $(e).find('td.name').html();
         var number = $(e).find('td.number').html();
         return { name: name, number: number };
      }));
   }));
});

This might look complex, but it’s mostly just the wrapping necessary for CasperJS to pass it into the page. The important part is the Array.prototype.map.call: it loops over each element matched by $('tr'), passing it as ‘e’ into the supplied function. (Note that this assumes the target page already loads jQuery; if it doesn’t, CasperJS can inject it for you via the clientScripts option.) Boom. With a single, self-regulating command, we have just pulled the following result:

[{"name": "John Smith", "number": "310-555-2811"},
 {"name": "Jason Rogers", "number": "404-555-8437"}]

Damn. Doesn’t get much easier than that, does it? This resultset is now free from the confines of its original HTML and is universally transportable as JSON. Glorious.


Problem: Logging in, even with --cookies-file defined, doesn’t seem to persist across multiple scripts!
Yeah, this is a bummer. However, I’ve found a workaround.

Solution: ‘fs’
Yup. Those two letters hold the key to your salvation. First off, let’s create a cookie file (casperjs_cookies.txt) containing an empty JSON array (PhantomJS’ cookie jar is an array, so starting with [] keeps the read below happy):

[]

Now, we’ll drop the following code segment into the beginning of our script:

var fs = require('fs');
 
Array.prototype.forEach.call(JSON.parse(fs.read('casperjs_cookies.txt')), function(x){
   phantom.addCookie(x);
});

This will read the file into PhantomJS’ cookie jar, making the cookies available to the page. However, since we created the file empty, we need to make sure that the cookie jar gets written at an appropriate time. Inject this into your script after verifying a login:

fs.write('casperjs_cookies.txt', JSON.stringify(phantom.cookies), "w");
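
Where that write happens matters: only save the jar once you know the login actually stuck. A sketch, where ‘#logout-link’ is a hypothetical selector for something only logged-in users see:

// Hypothetical selector: anything that only renders for authenticated users.
casper.waitForSelector('#logout-link', function() {
   fs.write('casperjs_cookies.txt', JSON.stringify(phantom.cookies), "w");
});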

Once again: Boom. Now when you run this script in the future, it will reuse the cookies from the last script and retain any session states you created in the browser.

Doing dirty, dirty things with SSH

The TASK:
Allow SSH from ServerA on one network to directly connect with ServerB on a separate network.

The CAVEAT:
Both networks are isolated behind separate ‘jump’ servers (edge servers with only SSH enabled). No port forwarding, no routing, just a jump server.

The SOLUTION:
For those familiar with some of the more advanced (read: gross) features of SSH, tunnelling is not a difficult concept. Nevertheless, I think this is worth sharing:

Step one: from your workstation, tunnel to ServerA through JumpA:

ssh -L [localhost:]2202:ServerA:22 JumpA

Step two: still from the workstation, connect through the first tunnel (landing you on ServerA) and create a reverse tunnel from ServerA back to an unused port on the connecting workstation (I used 20052 in this example):

ssh -R 20052:localhost:20052 -p 2202 localhost

Step three: from the workstation again, tunnel the port used in step two to ServerB port 22 through JumpB:

ssh -L 20052:ServerB:22 JumpB

There you have it. From the ServerA shell you opened in step two, ServerA is now capable of SSHing into ServerB like this:

ssh -p 20052 localhost

The trick here is that we’re receiving from one inbound tunnel on port 20052, and forwarding that same port through another outbound tunnel. It’s gross, and unlikely to ever come in handy in a sane environment, but there it is. Food for thought.
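
If you ever need to rebuild this mess quickly, the whole dance can be scripted from the workstation: background the two plain tunnels with -f -N (go to background, run no remote command), then take the interactive hop onto ServerA. A sketch, using the same hosts and ports as above:

ssh -f -N -L 2202:ServerA:22 JumpA                # tunnel 1: workstation -> ServerA
ssh -f -N -L 20052:ServerB:22 JumpB               # tunnel 3: workstation -> ServerB
ssh -R 20052:localhost:20052 -p 2202 localhost    # lands you on ServerA
# ...then, from that ServerA shell:
ssh -p 20052 localhost                            # hello, ServerB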

Nagios: Could not complete SSL handshake.

I run into this from time to time.

The easiest solution is to ensure that your /etc/xinetd.d/nrpe ‘only_from’ and your nrpe.cfg ‘allowed_hosts’ are configured properly. That doesn’t always cut it, though.

If you are attempting to add a new server to an existing Nagios architecture, have verified everything is solid, and still get this error:

Check your NRPE version

Assuming you have other servers where NRPE is working fine, try using the same version they have. I ran into this today; it should NOT have taken two hours to figure out. Just another day in the life.
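
The fastest check I know of: run check_nrpe from the Nagios server with no command argument, and the daemon answers with its version string (the path varies by install method; below is the usual source-install location):

/usr/local/nagios/libexec/check_nrpe -H new-server          # e.g. "NRPE v2.15"
/usr/local/nagios/libexec/check_nrpe -H known-good-server   # compare the two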

Injecting variables into anonymous functions in PHP

I was coding up some tools yesterday, and decided to write a function in PHP that sorted an array numerically by a provided subkey name.

Assume an array with the following structure:

$sale_items = array(
   "first" => array(
      "value" => "100",
      "color" => "blue"
   ),
   "second" => array(
      "value" => "105",
      "color" => "red"
   )
);

uasort can easily handle this if you already know which subkey you intend to sort by. Note that it sorts the array in place (returning a bool), and the callback must return a negative/zero/positive integer, not a boolean:

uasort($sale_items, function($a, $b) {
   return $a['value'] - $b['value'];
});

In this case, though, I wanted to make the function reusable. Unfortunately, uasort passes the arguments to the provided callback on its own, so how do we inject an outside variable into that anonymous function? Turns out that it’s insanely easy. Check it:

$field = 'value'; // whichever subkey you want to sort by

uasort($sale_items, function($a, $b) use ($field) {
   return $a[$field] - $b[$field];
});
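
And wrapped up as the reusable helper I was after (the function name is my own invention, a sketch):

// Sort an array in place by a numeric subkey.
function sort_by_subkey(array &$items, $field) {
   uasort($items, function($a, $b) use ($field) {
      return $a[$field] - $b[$field];
   });
}

sort_by_subkey($sale_items, 'value');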

Done.

mod_rewrite

The TASK:
Add a new static content server running Apache

The CAVEAT:
Direct links to static content must vend the content
Content might exist on any (or multiple) servers
Old links pointing directly at the old server must still work
SSL Certificates must still validate

The SOLUTION:
Took me a couple hours to decide how to do this. Historically, when faced with diminishing storage capacity, we would just buy a new server with a larger capacity, clone everything, and plop it in to replace the old one. Needless to say, this was neither efficient nor scalable. The solution came in the night: mod_rewrite.

For those not familiar with what Apache’s mod_rewrite does, it’s intended to allow you to use RegEx on URL requests and ‘rewrite’ them to something different. In this case, direct URLs to static content would look something like this:

/folder/sub/item.ext

What mod_rewrite will let us do is take that request, and mangle it into something infinitely more dynamic – like this:

/get_content.php?item=/folder/sub/item.ext

Yes, I understand that the mere idea of directly vending a file by filename is terrifying to those with some appreciation for security. DO NOT just copy/paste this into your public-facing server and call it a day; the hardening has been stripped for brevity. First, let’s cover how we accomplish this rewrite.

In your Apache config (I suggest within a <Directory> definition), add the following directive:

   RewriteEngine On
   # Don't rewrite requests for the script itself, or the rule will loop.
   RewriteCond   %{REQUEST_URI} !get_content\.php
   RewriteRule   "^/?(.*)$" "get_content.php?item=$1"

Once Apache is reloaded, this rule will take any incoming request and instead interpret it as a parameter to our PHP script. This empowers us to use the capabilities of PHP when vending our content. Once this is working, we just connect some hoses.

mkdir /server1; mount server1:/content /server1
mkdir /server2; mount server2:/content /server2

(it’s advisable to add these to /etc/fstab; fewer fires after an unexpected reboot)
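
Something like this should do it (a sketch; tune the NFS options to your environment):

server1:/content   /server1   nfs   defaults,ro   0   0
server2:/content   /server2   nfs   defaults,ro   0   0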

Now, let’s create get_content.php

/*
Seriously, filter your $_GET['item'].
If you let someone pull ../../../etc/shadow
or your wordpress config.php, you deserve whatever happens to you.
*/
 
# Set your content directories (the mount points from above):
$dirs = array(
   0 => '/server1',
   1 => '/server2'
);
 
foreach ($dirs as $id => $path) {
   $file = $path . '/' . $_GET['item'];
   if (file_exists($file)) {
      // vend the file
      header('Content-Length: ' . filesize($file));
      readfile($file);
      exit;
   }
}
 
// No directory had it.
header('HTTP/1.0 404 Not Found');

And there we have it. All we have to do at this point is update the DNS record for our original server, and we’re golden!

Additional Notes:

If you are concerned about the file security of the get_content.php file, there are a few things you can do to mitigate the risk. The most obvious would be to hard-code accessible subfolders, or to enforce a file naming convention that is easy to validate (a-z, forward-slash, and ONE period for the file extension?). Another option would be to resolve the full path and ensure it’s a legal path before vending content from it, as sketched below. There are multiple solutions; use your head, and hack against it yourself. If you let someone type ‘&item=../../../etc/passwd’ and get away with it, you deserve whatever happens.
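
One shape for that full-path check, using realpath() (a sketch meant to slot into the loop in get_content.php, before the file is vended):

// Resolve dot-segments and symlinks, then require the result to
// still live under the content root before vending anything.
$base = realpath($path);
$real = realpath($path . '/' . $_GET['item']);
if ($real === false || strpos($real, $base . '/') !== 0) {
   continue; // outside the root (or nonexistent): try the next directory
}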

RewriteRule: I learned quickly that the ‘/?’ is important in the RewriteRule to account for the original forward-slash. If your rule isn’t catching (going to 404 instead), make sure that you are accounting for the *possible* inclusion of that leading forward-slash. After that, make sure your RegEx isn’t too rusty 😛
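
A quick way to confirm the rule is catching at all: a header-only request from any client (the hostname here is made up):

curl -sI 'http://static.example.com/folder/sub/item.ext'   # expect a 200, not a 404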

Blog the First

Howdy all,

I’m not a blogger, so I hope you can put up with my crap. I created this blog because I realized that, as a Systems Administrator, I am faced with daily challenges that are both annoying to encounter and exciting to resolve. My goal here is to document my experiences with new technologies and relive my battles with existing ones.

Here’s to new adventures. Brevity.