Recently I made some enhancements to my private network. One of them was the introduction of a Docker server. The main driver was the ability to isolate services on a single machine. Full virtualization wasn’t possible because of the limited hardware resources on my server. While searching for a solution, I was reminded of Docker. I had already played around with Docker in the past but tended to use full virtualization instead because of its flexibility.
Docker is a really cool solution, but it requires a change in the way of thinking when it comes to deploying services. First of all, Docker images stand on their own. It is not best practice to deploy a container, connect to it, and do administrative work inside it to bring a service online. Instead, you write e.g. a Dockerfile that contains everything necessary to bring the service online. This means you have to make sure the configuration is already correct and in place when you start the container. You also have to think about persistent storage, as the filesystem of a container is discarded when the container is replaced during an upgrade.
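As a minimal sketch of this workflow, here is what such a Dockerfile could look like. The service (nginx), the config file, and the paths are stand-in assumptions for illustration, not something from an actual setup:

```dockerfile
# Everything the service needs is baked into the image at build time,
# so no manual work inside the running container is required.
FROM debian:bookworm-slim

# Install the service (nginx serves as a stand-in example).
RUN apt-get update && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*

# Ship the finished configuration with the image instead of editing it later.
COPY nginx.conf /etc/nginx/nginx.conf

# Declare persistent data: without a volume mounted here, the contents of
# this directory are lost when the container is replaced during an upgrade.
VOLUME /var/www/html

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Started with something like `docker run -d -p 80:80 -v webdata:/var/www/html my-service`, the named volume `webdata` survives container upgrades while the rest of the filesystem is rebuilt from the image.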
I’ll try to write some posts over the next months that cover Docker and give some hints on how you can optimize your usage of it. In the meantime you can, if you want, have a look at my personal Docker Hub account, where I provide two Docker images that should include all the best practices I’ve discovered so far.