
The Sad State of Docker

I have always been a big fan of Docker, as is very visible if you regularly read this blog. However, I have been very disappointed lately with how Docker handled the 1.12 release. I like to think of version 1.12 as a great proof of concept that should not have received the amount of attention it did. Let’s dive deep into what I found wrong.

First, I do not think a company should market and promote exciting new features that have not been tested well. Every time Docker makes an announcement, the news spreads like a virus to blogs and news sites all over the globe. Tech blogs basically copy and paste the exact procedure Docker discussed into a new post as if they were creating original content. This cycle repeats over and over and becomes annoying because I end up seeing the same story a million times. What I hate most about these redundant articles is that the features do not work as well as advertised.

I was really excited to hear about the new Swarm mode feature and wanted it to work as described, because that would mean I could one day easily build a Swarm cluster out of my four Raspberry Pis and get container orchestration, load balancing, automatic failover, multi-host networking, and mesh networking without any effort. Swarm in v1.12 is far easier to set up than its predecessor, and I wanted to put it into production at home (homeduction). To test Swarm, I set up a few virtual machines with docker-machine on my laptop, went through the Swarm creation process, and then began to run into issues when deploying my applications.
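For reference, the throwaway test cluster looked roughly like this. This is only a sketch using the VirtualBox driver; the machine names are examples, not my exact setup:

# Create three VirtualBox VMs with docker-machine
docker-machine create -d virtualbox manager
docker-machine create -d virtualbox worker1
docker-machine create -d virtualbox worker2

# Initialize Swarm mode on the manager, then join the workers with its token
docker-machine ssh manager "docker swarm init --advertise-addr $(docker-machine ip manager)"
TOKEN=$(docker-machine ssh manager "docker swarm join-token -q worker")
docker-machine ssh worker1 "docker swarm join --token $TOKEN $(docker-machine ip manager):2377"
docker-machine ssh worker2 "docker swarm join --token $TOKEN $(docker-machine ip manager):2377"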

An important feature to have in a Swarm cluster is multi-host networking for containers. This lets containers talk to each other over a virtual network that spans many hosts running the Docker engine, which matters whenever containers need to communicate, such as a web application connecting to another container running MySQL. The problem I faced is that none of my containers could communicate across hosts. On the occasions when it did work, the mesh networking would not route traffic properly to the host running my container. This meant none of my applications worked properly. I went to the Docker forums and found many people sharing my pain.
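For context, this is the kind of workflow that is supposed to just work in Swarm mode; the network and service names below are only illustrative:

# Create an overlay network that spans the Swarm nodes
docker network create --driver overlay appnet

# Run a database and a web service on that network; they should reach each other by name
docker service create --name db --network appnet -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker service create --name web --network appnet -p 80:80 nginx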

It is not wise to flood the Internet and conventions with marketing material about exciting new features that do not work as presented. There are still many bugs in Swarm that need to be fixed before releasing it to the general public and having them beta test for you. What is the rush to release? Would it hurt that much to wait a few more weeks or months to do it right and ship a product that is properly working and tested? Yes, we all know Docker is awesome and is trying to play catch-up with competitors such as Apcera and Kubernetes, but please take it slow and make Docker great again!

[Edit 8/31/2016]

Tweaked the paragraphs to make it clearer that my testing was not done on the Raspberry Pis but with docker-machine on a laptop.

Moving from a single machine with Docker to a cluster of Pi’s

I decided to finally make use of my four Raspberry Pi 3s and take on the challenge of moving all of my home services to them. Previously, I ran an x86 desktop as a server in my living room, and the loud noise coming from it sometimes made the room uncomfortable to be in. That noisy box is home to this website and many other applications such as Plex, Transmission, OpenVPN, Jenkins, Samba, and various Node.js projects, all running in Docker. Having all of those applications on a single box is a single point of failure and makes system administration harder when reboots are required.


Using Docker Swarm in Production

[Introduction]

I have always been fascinated with Docker Swarm and how it can cluster multiple computers together to run containers. I had mainly used Swarm via docker-machine with the VirtualBox provider for testing, and I felt it was now time to try running it in production. This blog post explains how to create a simple Swarm cluster and secure it with a firewall. Docker officially recommends enabling TLS on each node, but I wanted to keep things simpler and use firewall rules to prevent unauthorized access.

[Setup]

Docker v1.10 has been installed on each of these machines running Ubuntu 15.10:

node_0 – The Swarm Master.
node_1 – A Swarm node.
node_2 – Another Swarm node.

[Installation]

1. Set up each node so that Docker listens on its own host IP address, with Docker's automatic iptables rules disabled:

First, stop the Docker daemon so we can make configuration changes:

systemctl stop docker

Edit /etc/default/docker. Uncomment the DOCKER_OPTS line if needed and modify it as follows:

DOCKER_OPTS="-H tcp://node_0_ip:2375 --iptables=false"

Start the Docker daemon again:

systemctl start docker

(Repeat this process on every node, substituting that node's own IP address in DOCKER_OPTS.)
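To sanity-check that an engine is reachable over TCP before continuing, a quick test like the following should print the engine details (substitute whichever node you are testing):

docker -H tcp://node_1_ip:2375 info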

2. On the Swarm Master node, create a cluster token. Each Swarm client will need the token to form a cluster. The output of this command will be a long token that you will need in the next steps.

docker run swarm create

3. On the Swarm Master node, create a Swarm Manager using the token from step 2. The Swarm manager will listen on port 5000.

docker run -d -p 5000:2375 -t swarm manage token://6b11f566db288878e16e56f37c58599f

4. Run the following commands from the master node to join each node to the cluster, again using the token from step 2.

docker run -d swarm join --addr=node_0_ip:2375 token://6b11f566db288878e16e56f37c58599f
docker run -d swarm join --addr=node_1_ip:2375 token://6b11f566db288878e16e56f37c58599f
docker run -d swarm join --addr=node_2_ip:2375 token://6b11f566db288878e16e56f37c58599f
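To verify that all three nodes registered with the discovery service, the swarm image's list command can be run from anywhere using the same token:

docker run --rm swarm list token://6b11f566db288878e16e56f37c58599f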

5. Since the Swarm manager is running on port 5000 on node_0, point your Docker client (for example, on your laptop) at that host and port to use the cluster. The following command shows the status of the Swarm cluster.

docker -H tcp://node_0_ip:5000 ps

[Securing]

6. Finally, we need to secure the Swarm cluster with firewall rules so that only the nodes in the cluster can talk to the Docker engines. The following rules deny all incoming traffic by default, then allow SSH, allow Docker access from the other nodes, and on node_0 leave the Swarm manager port open for clients.

Node_0:

ufw allow 22
ufw allow 5000
ufw default deny incoming
ufw allow from node_1_ip
ufw allow from node_2_ip
ufw enable

Node_1:

ufw allow 22
ufw default deny incoming
ufw allow from node_0_ip
ufw allow from node_2_ip
ufw enable

Node_2:

ufw allow 22
ufw default deny incoming
ufw allow from node_0_ip
ufw allow from node_1_ip
ufw enable

[Conclusion]

Now you should have a three-node Docker Swarm cluster that is locked down. If you need to expose an external port for a container, the firewall rules will need to be adjusted manually.
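For example, if a web container publishes port 80, the adjustment would look something like this sketch (check which node the container was scheduled on first):

docker -H tcp://node_0_ip:5000 run -d -p 80:80 --name web nginx
docker -H tcp://node_0_ip:5000 ps    # note which node the container landed on
ufw allow 80                         # run this on that node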


Goodbye Docker on CentOS. Hello Ubuntu!

I have been a hardcore CentOS user for many years. I enjoyed its minimal install for creating a light environment, its intuitive installation process, and its package manager. Docker is the most popular container format today and gives developers and enthusiasts an easy way to run workloads in containerized environments. I have been running Docker in production at home for about a year now for services such as Plex Media Server, the web server for this blog, ZNC, Minecraft, and MySQL, to name a few. A Dockerfile is a set of instructions used to create a Docker image, and I invested many hours creating perfect Dockerfiles based on CentOS and Fedora to make deployments simple on any operating system. However, a personal revolution was brewing.


Betting the farm on Docker

The Challenge

I wanted to try out Docker in production to really understand it. I believe that to fully understand or master something, you must make it part of your life. Containers are the next buzzword in IT, and future employment opportunities are likely to favor candidates with container experience. My current home environment consists of a single server running OpenVPN, email, Plex Media Server, Nginx, a torrent client, an IRC bouncer, and a Samba server. The challenge is to move each of these to Docker for easier deployment.

The Journey

I noticed that many Docker images are built from multiple supporting files that get added to the container at build time. I decided to take a simpler approach and have the Dockerfile modify most of the configuration files with the sed utility and also generate the startup script. I think that in most cases, having a single file to keep track of and build your system from is much easier than editing multiple files. In short, the Dockerfile should be the focal point of the container creation process for simpler administration.
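As a rough sketch of that pattern (not my actual Dockerfile; the base image, sed expression, and paths are only illustrative), a Samba container might look like this:

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y samba

# Tweak the stock configuration in place with sed instead of shipping a modified copy
RUN sed -i 's/^\[global\]/[global]\n   map to guest = Bad User/' /etc/samba/smb.conf

# Generate the startup script from inside the Dockerfile as well
RUN printf '#!/bin/sh\nexec smbd --foreground --no-process-group\n' > /start.sh && chmod +x /start.sh

CMD ["/start.sh"]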

Sometimes it is easier to upload prebuilt configuration files and additional support files for an application. For example, the Nginx configuration file is simpler to edit by hand and have Docker copy into the correct location at build time. Finally, there is no way around copying existing SSL certificates for Nginx into the image from the Dockerfile.
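In Dockerfile terms, those exceptions end up as plain COPY instructions; the filenames here are placeholders:

# Prebuilt config and existing certificates are copied in rather than generated at build time
COPY nginx.conf /etc/nginx/nginx.conf
COPY certs/example.com.crt /etc/nginx/ssl/example.com.crt
COPY certs/example.com.key /etc/nginx/ssl/example.com.key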

The email server was the most difficult container to create because I had to modify configuration files for many components (SASL, Postfix, Dovecot), create the mail users, and set up the aliases for the system. Docker also needed to be cleaned up often because it consumes large amounts of disk space for every container and image built. Dockerfiles with many commands took a very long time to execute, so testing a few changes and rebuilding a container was slow. Multiply that by the many containers I made and the many typos, and you can see how my weekend disappeared.
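Cleanup in that era generally came down to something like the two commands below (newer Docker releases roll this into docker system prune, which did not exist yet at the time):

# Remove stopped containers left over from failed builds and test runs
docker rm $(docker ps -a -q -f status=exited)

# Remove dangling intermediate images that pile up between rebuilds
docker rmi $(docker images -q -f dangling=true)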

After my home production environment moved to Docker and was working well, I created a Bitbucket account to store all of my Dockerfiles and MySQL backups. A cron job inside the container does a database dump to a shared external folder. If my system ever dies, I can set up a new one more easily by cloning the Git repository and building each container with a single command.
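The moving parts are roughly these; the schedule, paths, and repository URL are placeholders rather than my real setup:

# Crontab entry inside the MySQL container: nightly dump to a folder shared with the host
# (credential handling, e.g. via ~/.my.cnf, is elided here)
0 3 * * * mysqldump --all-databases > /backup/all-databases.sql

# On a fresh system: clone the repository and rebuild a container with a single command
git clone https://bitbucket.org/username/dockerfiles.git
docker build -t blog dockerfiles/blog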

Conclusion

In conclusion, Docker was hard to deploy initially, but it will save time in the future when disasters such as a system failure happen. A Dockerfile can serve as a well-documented blueprint of a perfect application or environment. For continuity purposes in organizations, Dockerfiles can be shared, and administrators should easily be able to understand what needs to be done. Even if you do not like Docker, you can basically copy and paste the contents of a Dockerfile, strip a little text, and build the perfect system by hand.