First impressions of Docker in anger

So I've just started my first foray into Docker in anger. Nothing has made it to production yet but I wanted to share my experience over the last couple of weeks.

A little bit of context about what I am doing right now. I'm starting down the path towards small, independently deployable services instead of a monolithic application. The services I am developing will replicate some existing functionality within the monolith, but talk to a brand new iOS app. The architecture pattern I am looking to apply is Command Query Responsibility Segregation (or CQRS), and the first small services I am developing are the “Query” side of things, as that allows us to focus on one thing at a time.

These “Query” services won't be truly independent, in that they will still talk to the same database as the monolith, but this will let us solve a whole bunch of problems before we start work on the “Command” services, which will actually update data. Again, some of those services will be tied to the monolith's database, but not all of them. The “Command” services will also embrace an eventually consistent approach, publishing "Commands" in such a way that any authorised service can listen to them. That will eventually allow us to come back to our “Query” services and turn some of them into fully independent services.

This is the journey I have been on over the last two weeks, building the first stages of those "Query" services.


Building and deploying

This was the first challenge I had to face: we need to get our services into test and production environments as quickly and easily as possible. I'm a huge fan of traditional build and deploy pipelines; they've been my bread and butter for the last few years, and not much has changed for me with Docker. The major thing that has changed is what I do with build artefacts. Typically, I would have relied on my build tool to provide some mechanism for storing and retrieving artefacts. Now I no longer need to do this: I can push images to Docker Hub and pull them back down when I need them. I'm a sucker for being able to deploy specific versions, so I've been tagging images with the build number before they get pushed. That way I can still pull down that specific version when it comes time to run it.
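As a rough sketch, the tagging step in the pipeline looks something like this. The image name and the `BUILD_NUMBER` variable are placeholders for whatever your registry repo and CI server provide:

```shell
# Build the image and tag it with the CI build number, then push it
# to Docker Hub. "myorg/query-service" and $BUILD_NUMBER are
# placeholders for your own repo and build-tool variable.
docker build -t myorg/query-service:$BUILD_NUMBER .
docker push myorg/query-service:$BUILD_NUMBER

# Later, on the target host, pull and run that exact version:
docker pull myorg/query-service:$BUILD_NUMBER
docker run -d --name query-service myorg/query-service:$BUILD_NUMBER
```

Because every build gets a unique tag, rolling back is just running the previous build number.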

Where to deploy my container

I started out deploying these containers to Amazon ECS. I didn't spend a huge amount of time playing with it, but it didn't quite give me everything I wanted right now. So instead I went for a single EC2 host and I'm deploying everything there. This isn't great: I don't have any form of scaling right now, so I will see how this changes before it makes its way to production.

Composing Containers

So now I've got that first service in a test environment I can repeatedly deploy to. Now I need a second… and CI/CD is just as easy to set up a second time. The problem is that I'm running on a single host, and I don't want each of my services to run on a different port and have to manage that somehow. From the outside looking in, I want it to appear as if I am only running a single service (on port 80). That calls for some form of reverse proxy; nginx is a good candidate, and I can run it in a container too. Using Docker Compose this becomes relatively easy to set up: for each link the nginx container has, Docker Compose creates an entry in its hosts file, so I don't need to know where each container is running. Running multiple containers like this with Docker Compose is pretty straightforward.
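A minimal docker-compose file along these lines might look like the following. The service and image names are made up for illustration; the nginx config would then `proxy_pass` each route to the linked hostname:

```yaml
# nginx is the only container publishing a port to the outside world.
# The linked query services are reachable from nginx via the hostnames
# Docker Compose writes into its hosts file.
nginx:
  image: nginx:latest
  ports:
    - "80:80"
  links:
    - orders-query
    - customers-query
orders-query:
  image: myorg/orders-query:42      # pinned build-number tag
customers-query:
  image: myorg/customers-query:17   # pinned build-number tag
```

From the outside there is only port 80; everything behind it is nginx's problem.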

Of course, now I run into a new problem. The docker-compose file for nginx now controls which version of each container is running, so I can no longer deploy each service through its own pipeline; I can only deploy the docker-compose file and load that. Right now I don't have a solution to this problem: I'm manually specifying tags in my docker-compose file, which is version controlled and then pushed to environments.
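One small mitigation I've been considering, though I haven't adopted it yet: Docker Compose supports environment-variable substitution in the compose file, so the tag itself can live outside the file. The variable name here is hypothetical:

```yaml
# The tag is supplied by the environment at deploy time, so a service's
# pipeline can change "its" version without editing the shared file.
orders-query:
  image: myorg/orders-query:${ORDERS_QUERY_TAG}
```

Then a pipeline could run something like `ORDERS_QUERY_TAG=42 docker-compose up -d orders-query`. It doesn't remove the shared file, but it does stop every deploy being an edit to it.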

There are a few alternatives I can see:
- In the build tool, the GUI asks me which version of each service I want to deploy. I don't think this is any better: the deployment for the reverse proxy still has intimate knowledge of each and every service.
- Each service maintains its own portion of the nginx config and docker-compose files. This would be far more difficult to implement, and there is still the problem of collisions: how do services make sure they're not trying to do the same thing?
- There might be a tool out there that handles this problem for me?
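On the second alternative, one thing worth noting: docker-compose can merge multiple `-f` files, with later files extending or overriding earlier ones. That might let each service own its own fragment without a new tool, though I haven't tried this in anger. The file names here are hypothetical:

```shell
# The base file holds nginx; each service repo contributes its own
# compose fragment. Later -f files extend/override earlier ones.
docker-compose \
  -f docker-compose.base.yml \
  -f orders-query.compose.yml \
  -f customers-query.compose.yml \
  up -d
```

It wouldn't solve the nginx config ownership question, but it would at least split the compose file up.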

My feelings so far

So far I've been pretty happy with Docker. I don't have a deep understanding of what's going on under the hood, nor an understanding of all the tooling surrounding Docker, but I'm pretty excited to dig deeper and see where it takes me.