Docker, OSX and remote Ubuntu!
It was about time to post something... I've been lazy, and it's been a very long time with little activity. I'm not sure if it was a lack of productivity or a lack of interest. My excuse is that I moved places and absolutely hated the workspace at the new apartment, and if you're not happy with your work environment at home, you just don't work as much off-hours. Anyway, enough with the excuses. Let's talk about something real: docker.
As most of you might know, docker is hot right now. It's nothing new at its root, really: containers (LXC) have been around in linux for over a decade, and I believe even before that in the BSD unices as jails. Both probably came to fruition as an evolution of the typical Unix chroot segregation. I guess the turning point for containers came as a by-product of virtualization. All of a sudden everybody was, and still is, virtualizing their infrastructure... But if we look a little closer we realize that a huge percentage of all server applications run on linux anyway. So virtualizing an entire machine seems a bit expensive in certain contexts, and that's where containers sound like a badass idea. Containers do have some security issues that hypervisors have dialed in, i.e. resource isolation, but they provide an elegant, efficient, flexible solution in many scenarios. Anyway, enough with the container intro. So why docker, and why this post?
Docker defines itself as an open platform for building, shipping and running distributed applications. It gives programmers, development teams and operations engineers the common toolbox they need to take advantage of the distributed and networked nature of modern applications. What it really does is provide a set of tools that let you define the software that each container must include - no more, no less. It also lets us architect an application as a set of microservices, each running in its own container. By defining our applications in this manner we immediately benefit from the isolation and "packaging": we can scale and deploy almost seamlessly, and definitely in a very convenient way. The added flexibility also lets us spin up fresh environments very quickly and, what's more, take copies of live environments to run on a new endpoint as a developer ecosystem - think about the benefits of that in terms of speeding up development and improving time-to-bugfix.

Ok, so you probably already knew all of that... So what's new? Well, basically every tutorial out there, if you run OSX, wants you to install virtualbox and then use boot2docker. That works great and it's probably the right route for 90% of people. But what if you have a lightweight rig, a macbook air for instance? Or say you actually have a linux VPS, or access to the Google Cloud Platform - wouldn't you want to spare your machine the burden of containerization? I did. In my case I run (and underutilize) an Ubuntu VPS, so I figured it made the perfect box for my containers, and since docker follows a client-server model, it had to be mad easy. It was; here are the steps.
If you don't already run Homebrew, please do, thank me later.
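If you need to install it, the official one-liner from brew.sh looks like this at the time of writing (double-check brew.sh for the current command, since it has changed over the years):
user@macbook:~$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"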
First, install docker on your OSX machine - this is just the client:
user@macbook:~$ brew install docker
Then install docker on your Ubuntu VPS - this is where the daemon (the server) will actually run your containers:
user@docker_vps:~$ curl -sSL https://get.docker.com/ | sh
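Optionally, you can add your user to the docker group on the VPS so you don't need sudo for every docker command - purely a convenience, and you'll need to log out and back in for it to take effect:
user@docker_vps:~$ sudo usermod -aG docker $USER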
At this stage I'd recommend you set up key-based SSH authentication to your linux box from your OSX machine, but that's really up to you. Now let's modify docker's default configuration at /etc/default/docker. We need to do this so that docker not only listens on the local unix socket, but also on a TCP port, which we'll reach remotely through an SSH tunnel (if you wish you can disable local connections altogether; I didn't):
DOCKER_OPTS="-H tcp://localhost:2375 -H unix:///var/run/docker.sock"
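If you'd rather not open an editor, one way to get that line in place - assuming /etc/default/docker exists on your Ubuntu box and doesn't already define DOCKER_OPTS - is to append it with tee:
user@docker_vps:~$ echo 'DOCKER_OPTS="-H tcp://localhost:2375 -H unix:///var/run/docker.sock"' | sudo tee -a /etc/default/docker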
Restart the docker daemon on the VPS so the new options take effect:
user@docker_vps:~$ sudo service docker restart
Back on the OSX machine, open an SSH tunnel that forwards local port 2375 to port 2375 on the VPS (run it in a separate terminal, since -N keeps the session open without executing a remote command):
user@macbook:~$ ssh -N -L 2375:localhost:2375 user@docker_vps > /dev/null 2>&1
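If you'd rather not keep a terminal tied up by the tunnel, ssh's -f flag sends it to the background after authentication - same forwarding, just detached:
user@macbook:~$ ssh -f -N -L 2375:localhost:2375 user@docker_vps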
Now the docker client on OSX can talk to the remote daemon through the tunnel. Point it at the forwarded port with -H and run a container to test it:
user@macbook:~$ docker -H tcp://localhost:2375 run -i -t ubuntu /bin/bash
Typing -H on every command gets old fast, so export DOCKER_HOST and the client will use the tunnel by default:
user@macbook:~$ export DOCKER_HOST="tcp://localhost:2375"
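With DOCKER_HOST exported, a plain docker command is enough to confirm everything is wired up - docker version should report your local client and the remote server, and docker ps should list the containers running on the VPS:
user@macbook:~$ docker version
user@macbook:~$ docker ps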