January 31, 2014

Deploying Open Source Platform Docker: Part One

Since I started my career at Backupify, one thing that has pained me is the length of time it takes to run the master test suite. Despite working on the latest quad-core MacBook Pro, the process has taken well over 45 minutes to run 6,600 tests. It became clear that the bloated test suite could use some tender loving care. For our company’s recent half-week hackathon, I decided to take action. I committed to improving the master test suite using Docker, a lightweight virtualization solution for creating small, portable Linux containers. Thanks to the powerful magic of Docker, I planned to build an image of my entire development environment, spin up multiple instances from that image, and use them to run chunks of the test suite in parallel. Unfortunately, I am a Linux novice and the plan was not as simple as I’d hoped. In the end, I had to build the image manually instead. Below is a recap of the issues and observations I encountered while completing the manual setup.

Step 1:

Since Docker has yet to release a version compatible with Mac OS, I needed to spin up a VM to get access to a Linux OS. I grabbed Vagrant (1.4.3) with VirtualBox (4.3.6). Docker requires Linux kernel version 3.8 or higher because of the advanced feature set it uses. Unfortunately, most VM boxes ship with Linux 3.2 or 3.5, which means you have to upgrade them manually. From what I’ve experienced, that usually devolves into losing your synced folder option and/or opening up a mess of incompatible guest additions. But the friendly guys over at Phusion have released “Docker-friendly” boxes that come with a 3.8 kernel. So I headed over there, grabbed the Ubuntu Precise (12.04.3) box and added it to Vagrant with vagrant box add ubuntu-12.04.3-amd64 ubuntu-12.04.3-amd64-vbox.box.

From my home directory I created a new directory named test-docker, cd’d into it and ran vagrant init, which created a new Vagrantfile. I made sure the synced folder option was on and then I created my data directory and ran vagrant up.
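
For reference, the relevant bit of my Vagrantfile looked roughly like this; the box name matches the one I added above, and the data directory mapping is just the one I happened to pick:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu-12.04.3-amd64"
  # keep the host's data directory visible inside the VM
  config.vm.synced_folder "data", "/vagrant_data"
end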

As a heads up, the default VM user is ‘vagrant’ with the password ‘vagrant’. If you later find that your synced folder gets out of, uh… sync, then you can use ssh to copy files to and from the VM. The VM’s ssh port 22 is forwarded to port 2222 on your machine. So, using scp as an example, you would do something like scp -P 2222 vagrant@127.0.0.1:<path_to_file> <local_file_path>. Ain’t no non-syncing folders gonna keep us down!

Step 2:

I ssh’d in with vagrant ssh and installed Docker by following these instructions. When I installed Docker (version 0.7.6), the docker group was created. I added the vagrant user to the docker group and then restarted the Docker service with service docker restart.
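
In concrete terms, that boiled down to something like:

sudo usermod -aG docker vagrant
sudo service docker restart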

Step 3:

Presto! The Docker daemon was up and running. For the uninitiated, the usual path of least resistance with Docker is to create a Dockerfile, use the provided commands to write your build instructions and use docker build to make an image. From there, you can spin up containers from said image. As I mentioned before, I did not use a Dockerfile. I tried, but I found it exceptionally hard to translate our development setup instructions into something that Docker could run. Instead I used docker run to start a bash session interactively (docker run -i -t -name kevin ubuntu:precise /bin/bash) with a base Ubuntu Precise image I got with docker pull ubuntu. I also named the container Kevin. This is totally fine with Docker. I can make my changes while inside the container and save them with docker commit (not docker save!) to create my finalized image. Not to mention I can create incremental commits to fall back on in case I mess up later steps. In Dockerland, everything is sunshine and ponies!
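
Condensed, the whole interactive workflow looked roughly like this; the final image name here is just an example:

docker pull ubuntu
docker run -i -t -name kevin ubuntu:precise /bin/bash
# ...make your changes inside the container, then exit...
docker commit kevin backupify-dev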

Step 4:

I had to apt-get install a whole lot of packages. It went pretty smoothly, except I ran into an issue installing the openjdk-7-jdk package that’s needed for Cassandra. Apt-get kept telling me it didn’t exist! I was hitting my head against my desk for a while before I realized that the container’s /etc/apt/sources.list didn’t contain the universe source, just main. I popped it right in and was good to go until I ran into the error “mknod: `fuse-’: Operation not permitted.” Luckily, I found a related problem on GitHub’s issue tracker. It turned out that my JDK install was also trying to create a fuse device, an operation that Docker locks down for security reasons. Some of the workarounds shown were pretty esoteric, but some digging around revealed that I could bypass that obstacle with more forceful installation options. Voila!

apt-get -y -f install --no-install-recommends openjdk-7-jdk
update-alternatives --config java

Why did that work? I’m not certain. With only a few days to hack, I took the good luck and ran with it.
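
For completeness, the earlier universe fix was a one-liner along these lines (the exact mirror URL inside the container may differ):

echo "deb http://archive.ubuntu.com/ubuntu precise universe" >> /etc/apt/sources.list
apt-get update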

Random observation 1:

My Docker Ubuntu image came with only the root user and no sudo command. So I had to run apt-get install -y sudo to get sudo and useradd -m -G sudo backupify to create a user named “backupify” with sudo privileges.

I wish someone had told me about that new user: its default shell is going to be sh, not bash. If you su backupify, you’ll be greeted with a pretty barebones prompt, and if you try to use any of the advanced bash features, you’ll get weird character artifacts like ^[[A^[[A. You’ll need to run chsh -s /bin/bash and enter your user’s password, then exit and su again for the change to take effect. Or I could have skipped all that sh business by adding an -s /bin/bash option to the useradd command to specify the user’s shell up front (see the sketch below).
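
Putting that together, the shorter path would have been something like:

apt-get install -y sudo
useradd -m -G sudo -s /bin/bash backupify
passwd backupify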

Step 5 + Random observation 2:

I needed to install PostgreSQL. For some reason my Docker container didn’t have its locale configured correctly. When I installed PostgreSQL, it refused to start and repeatedly parroted this error:

PG::Error: ERROR: encoding "UTF8" does not match locale "en_US"
DETAIL: The chosen LC_CTYPE setting requires encoding "LATIN1".
: CREATE DATABASE "the_development_database_name_goes_here" ENCODING = 'unicode'

Before it could rear its ugly head again, I googled the error and skimmed through a bunch of dated forum posts until I stumbled upon these commands:

export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
sudo dpkg-reconfigure locales

I ran the commands dutifully and, eventually, that seemed to fix the issue. Next, in order to create the PostgreSQL cluster I sorely needed, I was able to run the following commands:

sudo -u postgres pg_dropcluster --stop 9.2 main
sudo -u postgres pg_createcluster --start --locale en_US.UTF-8 9.2 main

Step 6:

At this point, I created a set of ssh keys specifically for Docker so that I could clone our private repositories over ssh and avoid a long chain of ssh agent forwarding.
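
It’s the usual ssh-keygen dance; the comment string here is arbitrary:

ssh-keygen -t rsa -C "docker@backupify"

Then you register the resulting public key with your Git host so the container can clone over ssh.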

Step 7:

Next, I ran through the usual setup steps of our development environment, which include installing rbenv and its plugins, installing Ruby 1.9.3, and cloning our main repositories.
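
The rbenv portion goes roughly like this; the exact Ruby patch level is an assumption on my part (1.9.3-p484 was current at the time), so use whatever your project calls for:

git clone https://github.com/sstephenson/rbenv.git ~/.rbenv
git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
source ~/.bashrc
rbenv install 1.9.3-p484
rbenv global 1.9.3-p484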

Step 8 + Random observation 3:

And the penultimate step: running our in-house installation/configuration scripts for the local PostgreSQL, Cassandra and Redis instances. Per policy, I had to run them as a non-root user. Just as I thought I had made it through, Cassandra didn’t take and started complaining about hostname issues. That’s when I remembered that Step 1 of our development setup is to add new host info to /etc/hosts. I popped open my favorite editor, made the change and then… nothing. Or rather, nothing changed. I kept getting read-only file errors. Not even sudo helped me. I went back and found this on the GitHub issue tracker. Wow. Such a showstopper. The bright side is that the ability to edit /etc/hosts is coming in version 0.8! Unfortunately, that doesn’t help me right now. I learned that /etc/hosts is off limits because it lives in a read-only section of the filesystem. If I just unmount it with umount, then I can freely edit it. This might have dire repercussions that I am unaware of, but for the moment, it worked fine. Later, I learned that docker run has a new privileged option that would allow editing. However, the docker build command has no such option, so if I were using a Dockerfile I would still run into the issue.
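
Condensed, the workaround was along these lines (the hostname is a stand-in for our real dev host entries):

sudo umount /etc/hosts
echo "127.0.0.1 our-dev-hostname" | sudo tee -a /etc/hosts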

Step 9:

Finally, I got everything in place. The final hurdle with running our test suite is that it’s test-unit and not rspec (says the rspec fanboy). Aside from all the shallow reasons I don’t like test-unit, one thing I really miss from rspec is the granularity I had when it came to running only certain groups of files from the test suite. With test-unit, the options are one file or everything, which isn’t conducive to this exercise. I fiddled around with different ways to get the desired result, then promptly gave up, searched Stack Overflow, and copied this suggested snippet: find test/unit/ -name "*_test.rb" | xargs -t ruby -I.:test -e "ARGV.each{|f| require f}". By itself, it runs all the test files under the test/unit directory, but I can change it to run only the tests under whatever directory I choose. With some additional nudging, I can get it to run only certain groups of files. Next, I ran the test/unit tests. Lo and behold, they all passed. Sweet victory at last!
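
For example, scoping the find down to a subdirectory (a hypothetical models directory here) runs just that slice of the suite:

find test/unit/models/ -name "*_test.rb" | xargs -t ruby -I.:test -e "ARGV.each{|f| require f}"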

What a long and exciting adventure, right? Join me again for Part 2 to hear how I actually create that Dockerfile to reproduce all the steps (minus the headaches) and automate the chunking and parallelizing of the test suite.
