The Sad State of Docker

I have always been a big fan of Docker, as anyone who regularly reads this blog can tell. However, I have been very disappointed lately with how Docker handled the 1.12 release. I like to think of version 1.12 as a great proof of concept that should not have received the amount of attention it did. Let's dive into what I found wrong.

First, I do not think a company should market and promote exciting new features that have not been tested well. Every time Docker makes an announcement, the news spreads like a virus to blogs and news sites all over the globe. Tech blogs will basically copy and paste the exact same procedure that Docker discussed into a new blog post as if they were creating original content. This cycle repeats over and over and becomes annoying because I end up seeing the same story a million times. What I hate most about these redundant articles is that the features do not work as well as advertised.

Let's start with the exciting new Swarm mode feature. I really wanted it to work as described because it would mean I could easily make a Swarm cluster with my four Raspberry Pi's and get container orchestration, load balancing, automatic failover, multi-host networking, and mesh networking without any effort. Swarm in v1.12 is much easier to set up than its predecessor, and I really wanted to put it into production at home (homeduction). The hype sounds good, right? What could go wrong?

Everything went wrong. An important feature to have in a Swarm cluster is multi-host networking for containers, which allows containers to talk to each other on a virtual network spanning many hosts running the Docker engine. That matters for containers that need to communicate with each other, such as a web application connecting to another container running MySQL. The problem I faced is that none of my containers could communicate across hosts. Even when the overlay network did work, the mesh routing often failed to send traffic to the host actually running my container. This meant none of my applications worked properly. I went to the Docker forums and found many people who shared my pain.

Labels and constraints are a neat Docker feature that lets me schedule containers to run on hosts with a specific characteristic. For example, I can set an arbitrary label on a Docker node, such as disk type, node name, foo, or bar, and reference it later when running a container. In theory, when I run a container with a constraint, it should only run on a host that has the specified label and fail if the request cannot be fulfilled. With Swarm 1.12, my labels rarely work. If a Docker node dies, the container gets rescheduled on a random host, completely ignoring my labels!
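For reference, this is roughly how it is supposed to work with the 1.12 swarm mode commands; the node name, label, and image below are just placeholders:

# Tag a node with an arbitrary label (run from a manager node)
docker node update --label-add disk=ssd pi-node-1

# Ask for the service to land only on nodes carrying that label
docker service create --name web --constraint 'node.labels.disk == ssd' nginx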

It is not wise to flood the Internet and conventions with marketing material about exciting new features that do not work as presented. There are still many bugs in Swarm that need to be fixed before releasing it to the general public and having them beta test for you. What is the rush to release? Would it hurt that much to wait a few more weeks or months to do it right and ship a properly working, tested product? Yes, we all know Docker is awesome and is trying to play catch up with competitors such as Apcera and Kubernetes, but please take it slow and make Docker great again!

Moving from a single machine with Docker to a cluster of Pi’s

I decided to finally make use of my four Raspberry Pi model 3's and take on the challenge of moving all of my home services to them. Previously, I ran an x86 desktop as a server in my living room, and the loud noises coming from it sometimes made the room uncomfortable to be in. That loud, noisy box is home to this website and many other applications such as Plex, Transmission, OpenVPN, Jenkins, Samba, and various Node.js projects, all running in Docker. Having all of those applications on a single box is a single point of failure and makes system administration harder when reboots are required.

To make administration easier, I decided that one Pi should be a load balancer for as many applications as possible. Yes, I know that a single Pi as a load balancer is also a single point of failure, but it makes administering the other Pi's easier. I researched how to do HTTP and TCP load balancing with NGINX and made a Docker container for it, which runs on one Pi.
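As a rough sketch of the approach (not my exact configuration; the addresses and ports are placeholders), the NGINX side of it looks something like this, with the http block handling web traffic and the stream module handling plain TCP services:

events {}

http {
    upstream web_backend {
        server 192.168.1.11:8080;    # placeholder Pi addresses
        server 192.168.1.12:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://web_backend;
        }
    }
}

stream {
    upstream tcp_backend {
        server 192.168.1.11:3306;    # example TCP service
        server 192.168.1.12:3306;
    }
    server {
        listen 3306;
        proxy_pass tcp_backend;
    }
}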

Now I needed to think about where to run all of these containers, so I made a mental map of the layout. I decided the best way to deploy containers would be through a private local registry, so I created a Docker registry on one of the Pi's and pushed all of the images to it. Let's take a look at the application architecture to see what each Raspberry Pi is doing.
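Standing up the registry itself only takes a few commands; here is a minimal sketch, assuming a plain HTTP registry and made-up host and image names:

# On the registry Pi, run the official registry image on port 5000
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# On a build machine, tag an image for the private registry and push it
docker tag my-app registry-pi.local:5000/my-app
docker push registry-pi.local:5000/my-app

# Each node may also need --insecure-registry registry-pi.local:5000
# added to its Docker daemon options to pull over plain HTTP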

Now I have all of my applications from the loud server running on the quiet Pi's, and many of the containers are load balanced. The next task was to figure out a way to manage the containers and automate the image building process. I wrote a Bash script that manages the little cluster of Pi's with a series of SSH and Git commands. I found Git to be the easiest way to manage my HTML and configuration files, and since Bitbucket offers private Git repositories for free, I used one for this. If I need to make any changes, my Bash script does a simple pull from the repository and each node ends up in sync. It was a tough journey, but I learned a lot.
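The heart of that script is just a loop over the nodes; here is a simplified, hypothetical sketch (host names, paths, and the container name are made up for illustration):

#!/bin/bash
# Pull the latest configs and HTML on every Pi, then restart the affected container
NODES="pi1 pi2 pi3 pi4"
REPO_DIR="/home/pi/site-config"    # clone of the private Bitbucket repository

for node in $NODES; do
    echo "Syncing $node..."
    ssh "pi@$node" "cd $REPO_DIR && git pull"
    ssh "pi@$node" "docker restart my-app"    # pick up the new files
done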

Running an SSH server in a container on Apcera

SSH is the Swiss Army Knife of system administration and the easiest way to manage a system remotely. When running containers, there is typically some way to connect to a container's shell from a client, either through an API (as Docker does it) or with an SSH solution (which is how Apcera does it). Some applications that run in containers may also require SSH access to communicate with other containers or services. For example, Hadoop is a popular cluster application that spreads a distributed filesystem across many nodes which communicate with each other via SSH. Let's take a look at how to set up an SSH server running inside a capsule (a minimal OS container) on the Apcera Platform.

1. Create a capsule.

2. Connect to the capsule so it can be configured.

3. Run the following commands inside the capsule.
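A minimal sketch of what those commands look like on an Ubuntu-based capsule (assuming apt packaging; port 2222 is only an example):

apt-get update
apt-get install -y openssh-server

# Move sshd off port 22, which Apcera reserves for apc access
sed -i 's/^#\?Port 22/Port 2222/' /etc/ssh/sshd_config
service ssh restart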

I had to change the port that the SSH server listens on because Apcera uses the default port 22 to provide access to the capsule through its command-line utility, APC. Please note that once SSH is installed in a capsule, you will not be able to use "apc app connect" any longer.

4. Expose the SSH port so an external route can be added.

5. Add a route to the application for external access.

Now it is time to add an external route so I can remotely log in to the capsule with SSH. On Apcera, traffic goes through the central host, which contains the router for the entire platform. I need to know the IP address of the central host so I can add the route. If you are using Apcera Community Edition, this can be acquired by running:

The port I am connecting to is 55540, which is a random free port on the system. When I use SSH to log in remotely, I will need to specify that port.

6. Connect to the SSH server using the public routing port.
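With the route in place, the login is a normal SSH session against the central host, specifying the routed port and whatever user exists in the capsule (root is an assumption here):

ssh -p 55540 root@central_host_ip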

I hope you found this post useful for installing SSH into a capsule on Apcera. The same capsule configuration from step 3 can be used in your Dockerfile if you want to run Docker images on the platform. If you have time, check out the Apcera Community Edition, which can be installed on your laptop or in the cloud for free:

https://www.apcera.com/community-edition

Examining the Apcera Cloud Platform

I would like to take a break from my usual Docker blog posts and discuss the Apcera Cloud Platform, which runs containerized workloads such as Docker images or applications built from source code in a clustered environment. For the past several weeks, I have been playing with Docker Swarm and researching how to put this blog into production on it. The migration has been difficult because Swarm requires a lot of handholding and lacks the failover automation that I need. So I began researching the Apcera Platform and tried out the community edition, which users can try for free. The areas I focused on for my needs were ease of use and workload portability.

 

Workload mobility is important to me because I will eventually need to perform maintenance or upgrades on a server that will require downtime. Unlike Docker Swarm, Apcera features a built-in job and health manager that actually works and is not beta code. These managers ensure that my applications stay healthy with the desired number of instances. If a node running this website dies, the site is automatically scheduled to run on another machine in the cluster. This makes my life easier because I do not have to worry about Swarm failing to fail over my applications properly, or about rogue containers running when the failed node comes back online.

 

The Apcera Platform makes it easier for me to host this site because routing is handled automatically by the platform. For example, I can create a cluster at home on multiple VMware ESXi servers and the built-in router will direct web traffic to wherever this site is running. Apcera also features a built-in persistent storage provider called APCFS that I use to store the database for this site. Another container running MySQL can use the persistent storage so data will not be lost when the container stops running. If the website or MySQL container fails or moves machines, routing adjustments are made automatically so the application can keep using the database.

 

Scaling the website for performance has also never been easier. From the Apcera Web Console, I can click on an application and increase or decrease the number of instances on the fly, and the job manager will make the necessary adjustments on the cluster. The same thing is easy to accomplish with the APC command-line utility that comes with Apcera. I hope you enjoyed this blog post and find running containers in production easier than ever!

 

 

Getting started with the many ways to Docker

This is a follow-up on how to use Docker after building a Swarm cluster. I think it is important for people to understand the different ways to create containers and choose the one that best fits their needs. This blog post will cover Docker Compose, the Docker Engine, and how to do persistent storage.

 

[Docker Compose]

Let’s begin with docker-compose. This utility allows a user to create a manifest file of all the containers needed and how they communicate with each other. This example will show you how to create a MySQL container and connect it to a web application called nodechat.

 

Download the sample docker-compose.yml into a new directory. The contents of the file are shown below for reference. Since YAML files are whitespace sensitive and not easy to share in a blog post, please do not copy and paste the contents below.

 

docker-compose.yml:

mysql:
  image: rusher81572/mysql
  restart: always

nodechat:
  image: rusher81572/nodechat
  restart: always
  ports:
    - "8080:8080"
  links:
    - mysql:mysql

 

Type the following command to create the containers.

docker-compose up

A lot of output will be displayed on the screen and it may not bring you back to a terminal prompt. It is safe to press ctrl+c when you see the following:

nodechat_1 |
nodechat_1 | listening on *:8080
nodechat_1 | Creating Database…..
nodechat_1 | Creating Table……

Now that the containers have been created, it is time to start them.

docker-compose start

Run the following command to find out which host is running nodechat:

docker ps

 

Use your web browser to navigate to the host running nodechat on port 8080. Feel free to chat with yourself =)
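If you prefer the terminal, a quick check with curl against the published port should also return the nodechat page (substitute the Docker host's address if it is not local):

curl http://localhost:8080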

 

This is how you can stop and remove your running containers built with the compose file:

docker-compose stop

docker-compose rm -f

 

[Docker Engine]

Now let's run the equivalent Docker engine commands to accomplish the same result as the docker-compose file so you will have a better understanding of how Docker works.

 

Pull the image from the repository:

docker pull rusher81572/mysql
docker pull rusher81572/nodechat

Run the containers in daemon mode (in the background) with -d. The -p argument exposes a port for outside access; its format is outside_port:inside_port. The --name argument specifies a container name, which lets us link the nodechat application to the MySQL container simply by referencing that name. The --link argument links the MySQL container to nodechat using the container name, allowing nodechat to reach MySQL and store the chat data. The format for --link is container_name:link_name.

docker run -d --name mysql rusher81572/mysql

docker run -d --link mysql:mysql -p 8080:8080 rusher81572/nodechat


Find out which host is running nodechat with "docker ps" and use your web browser to navigate to that host on port 8080.

 

[Dockerfiles]

Dockerfiles contain all of the steps needed to create an image, such as adding files, defining volumes, installing software, and setting environment variables. The following steps will show how to create persistent storage for containers by building a container that shares volumes with other containers.

 

Create a directory called “fileserver” with a file called “Dockerfile” with the following contents:

FROM ubuntu
VOLUME /share
CMD sleep infinity

Build the fileserver container image and create a local data directory. The -t argument specifies the tag for the image, which is basically a name for it.

docker build -t fileserver .
mkdir data

Run the container in daemon mode. The -v argument allows you to share a local directory inside the container as a volume. Replace location_to_data_dir with the full path to the data directory created in the previous step.

docker run -d -v location_to_data_dir:/share --name fileserver fileserver


 

Now we have a container named fileserver that can share volumes with other containers, and the files will be stored locally in the data directory. To create a client, create a directory called "fileserver-client" with a file called "Dockerfile" containing the following:

FROM ubuntu
CMD sleep infinity

Build the fileserver-client container image.

docker build -t fileserver-client .

Now let's run the fileserver-client container in interactive mode to create a test file. Interactive mode runs a container in the foreground so you can see what is happening and even interact with the shell. The --volumes-from argument mounts all of the volumes from the specified container. Please note that the container will stop and return you to the shell after running the command.

docker run -it --volumes-from fileserver fileserver-client touch /share/foo.txt


 

Run another fileserver-client container to see the list of files on the fileserver.

docker run -it --volumes-from fileserver fileserver-client ls /share

Check to ensure that the files are being stored locally.

ls location_to_data_dir

The file should be displayed in the terminal. Feel free to play around with this more. I hope that you learned something new today.

Cloud Explorer is back with v7.2

Introducing Cloud Explorer 7.2!

Cloud Explorer is an open-source Amazon S3 client that works on any operating system. The program features both a graphical and a command-line interface. Today I released version 7.2 and hope that you give it a test drive. Feedback and use cases are always encouraged.

 

What’s new in this release?

To start, this release of Cloud Explorer was compiled with Java 1.8.0_72 and version 1.10.56 of the Amazon S3 software development kit (SDK) for Java. The major improvements in this release concern file synchronization, which was mostly rewritten. The effort helped reduce technical debt and improve consistency between the command-line and graphical versions of Cloud Explorer.

 

How do I get it?

Cloud Explorer v7.2 is available under the “Downloads” section of the Release page on GitHub. Simply click on “cloudExplorer-7.2.zip” and the download will begin. When the download is finished, extract the zip file and double click on “CloudExplorer.jar”.
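If double clicking the jar does not launch it (this depends on how Java is associated on your system), running it from a terminal works as well:

java -jar CloudExplorer.jar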

 

Where do we go from here?

I know it has been a while since Cloud Explorer has been touched. It is hard to handle a project all by yourself and keep innovating. I feel that with this release, Cloud Explorer has reached a stable point. I am always looking for new ideas and help from the community. If you are interested in contributing, please contact me or open an issue on the GitHub page.

 

Using Docker Swarm in Production

[Introduction]

I have always been fascinated with Docker Swarm and how it can cluster multiple computers together to run containers. I had mainly used Swarm via docker-machine with the VirtualBox provider for testing, and I felt it was now time to try running it in production. This blog post will explain how to create a simple Swarm cluster and secure it with a firewall. Docker officially recommends that you enable TLS on each node, but I wanted to keep things simpler and use firewall rules to prevent unauthorized access.

[Setup]

Docker v1.10 has been installed on each of these machines running Ubuntu 15.10:

node_0 – The Swarm Master.
node_1 – A Swarm node.
node_2 – Another Swarm node.

[Installation]

1. Set up each node to have Docker listen on its own host IP address and disable Docker's iptables rules:

First, stop the Docker daemon so we can make configuration changes:

systemctl stop docker

Edit /etc/default/docker. Uncomment DOCKER_OPTS if needed and modify it as follows:

DOCKER_OPTS="-H tcp://node_0_ip:2375 --iptables=false"

Start the Docker daemon again:

systemctl start docker

(Repeat this process on all of the nodes, substituting each node's own IP address.)

2. On the Swarm Master node, create a cluster token. Each Swarm client will need the token to form a cluster. The output of this command will be a long token that you will need in the next steps.

docker run swarm create

3. On the Swarm Master node, create a Swarm Manager using the token from step 2. The Swarm manager will listen on port 5000.

docker run -d -p 5000:2375 -t swarm manage token://6b11f566db288878e16e56f37c58599f

4. Type the following commands from the master node to join the slave nodes to the cluster using the token from step 2.

docker run -d swarm join --addr=node_0_ip:2375 token://6b11f566db288878e16e56f37c58599f
docker run -d swarm join --addr=node_1_ip:2375 token://6b11f566db288878e16e56f37c58599f
docker run -d swarm join --addr=node_2_ip:2375 token://6b11f566db288878e16e56f37c58599f

5. Since the Swarm manager is running on port 5000 on node_0, we need to tell the Docker client (such as one on a laptop) to connect to that host and port to use the cluster. The following command will show the status of the Swarm cluster.

docker -H tcp://node_0_ip:5000 ps
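To confirm that the cluster will actually schedule work, you can point any regular Docker command at the manager and launch a throwaway container (the nginx image here is just an example):

docker -H tcp://node_0_ip:5000 run -d -p 80:80 --name swarm-test nginx

Running the earlier ps command again should show which node it landed on.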

[Securing]

6. Finally, we need to secure the Swarm cluster with firewall rules so that only the nodes in the cluster can talk to the Docker engine. The following rules deny all incoming traffic and only allow Docker access from the other nodes.

Node_0:

ufw allow 22
ufw allow 5000
ufw default deny incoming
ufw allow from node_1_ip
ufw allow from node_2_ip
ufw enable

Node_1:

ufw allow 22
ufw default deny incoming
ufw allow from node_0_ip
ufw allow from node_2_ip
ufw enable

Node_2:

ufw allow 22
ufw default deny incoming
ufw allow from node_0_ip
ufw allow from node_1_ip
ufw enable

[Conclusion]

Now you should have a three node Docker Swarm cluster that is locked down. If you need to expose an external port for a container, the firewall rules will need to be adjusted manually.
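For example, if a container publishes port 80 on node_1, something along these lines on that node would open it up (adjust the port number to whatever you expose):

ufw allow 80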

 

Goodbye Docker on CentOS. Hello Ubuntu!

I have been a hardcore CentOS user for many years now. I enjoyed its minimal install for creating a light environment, its intuitive installation process, and its package manager. Docker is the most popular container format today and provides developers and enthusiasts with an easy way to run workloads in containerized environments. I have been using Docker in production at home for about a year now for services such as Plex Media Server, the web server for this blog, ZNC, Minecraft, and MySQL, to name a few. A Dockerfile is a set of instructions used to create a Docker image, and I invested many hours creating perfect Dockerfiles using CentOS and Fedora to make deployments simple on any operating system. However, a personal revolution was brewing.


Using LVM cache on Linux with a RAM disk

The Challenge

This is a follow-up to my article on using a USB drive for an LVM cache. I decided to test things further by using a RAM disk instead of a USB drive.

 

The Journey

1. Create a RAM disk:

modprobe brd rd_nr=1 rd_size=4585760 max_part=0

2. Create the cache

pvcreate /dev/ram0
vgextend vg /dev/ram0
lvcreate -L 300M -n cache_meta vg /dev/ram0
lvcreate -L 4G -n cache_vol vg /dev/ram0
lvconvert --type cache-pool --poolmetadata vg/cache_meta --cachemode=writeback vg/cache_vol -y
lvconvert --type cache --cachepool vg/cache_vol vg/docker-pool

3. Run the DD test again

[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.89586 s, 553 MB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.79864 s, 583 MB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 0.922467 s, 1.1 GB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.33757 s, 784 MB/s

Average Speed: 736 MB/s

 

Conclusion

In conclusion, my average write speed is 736 MB/s using LVM caching with a RAM disk. With a USB thumb drive, my average speed was 411.25 MB/s, and with no cache it was 256.5 MB/s.

 

 

Using LVM cache on Linux

The Challenge

My home server uses a RAID 1 configuration. I was very disappointed in the performance and wanted to find a way to make it faster. After browsing the Internet one day, I came across news headlines saying that CentOS 7 supports LVM cache. I found an old USB thumb drive and decided to take the cache challenge and see how it performs.

The Journey

Here is a simple DD test prior to enabling cache:

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 6.27698 s, 167 MB/s

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 5.04032 s, 208 MB/s

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 3.41007 s, 307 MB/s

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 2.94413 s, 356 MB/s

Average write speed: 256.5 MB/s

Time to enable caching and try to make the system perform better:

vgextend vg /dev/sdc

lvcreate -L 1G -n cache_metadata vg /dev/sdc

lvcreate -L 8G -n cache_vol vg /dev/sdc

lvconvert --type cache-pool --poolmetadata vg/cache_metadata vg/cache_vol

lvconvert --type cache --cachepool vg/cache_vol vg/original_volume_name

 

The write results with caching enabled:

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 3.73197 s, 281 MB/s

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 1.70449 s, 615 MB/s

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 3.91247 s, 268 MB/s

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 2.18025 s, 481 MB/s

Average write speed: 411.25 MB/s

Conclusion:

When I originally built this machine from used parts on Amazon, I decided to reuse two old Western Digital Green drives, which offer low power usage but also low performance. I had no idea that they would perform so poorly in RAID 1. I was surprised and glad that a cheap USB flash drive got me a significant increase in write performance, an average of about 155 MB/s more. I find it fascinating how the Linux ecosystem helps people recycle old junk and put it to good use. Hooray.

 
