Getting started with the many ways to Docker

This is a follow-up on how to use Docker after building a Swarm cluster. I think it is important for people to understand the different ways to create containers and choose the best one for their needs. This blog post will explain docker-compose, the Docker Engine command line, and how to set up persistent storage.

 

[Docker Compose]

Let’s begin with docker-compose. This utility allows a user to create a manifest file of all the containers needed and how they communicate with each other. This example will show you how to create a MySQL container and connect it to a web application called nodechat.

 

Download the sample docker-compose.yml into a new directory. The contents of the file are shown below for reference. Since YAML files are whitespace sensitive and do not survive a blog post well, please do not copy and paste the contents below.

 

docker-compose.yml:

mysql:
  image: rusher81572/mysql
  restart: always

nodechat:
  image: rusher81572/nodechat
  restart: always
  ports:
    - 8080:8080
  links:
    - mysql:mysql

 

Type the following command to create the containers.

docker-compose up

This command produces a lot of output and may not return you to a terminal prompt. It is safe to press Ctrl+C once you see the following:

nodechat_1 |
nodechat_1 | listening on *:8080
nodechat_1 | Creating Database…..
nodechat_1 | Creating Table……

Now that the containers have been created, it is time to start them.

docker-compose start

Run the following command to find out which host is running nodechat:

docker ps

 

Use your web browser to navigate to the host running nodechat on port 8080. Feel free to chat with yourself =)

 

This is how you can stop and remove your running containers built with the compose file:

docker-compose stop

docker-compose rm -f

 

[Docker Engine]

Now let’s run the equivalent Docker Engine commands to accomplish the same result as the docker-compose file, so you will have a better understanding of how Docker works.

 

Pull the images from the repository:

docker pull rusher81572/mysql
docker pull rusher81572/nodechat

Run the containers in daemon mode (in the background) with -d. The -p argument exposes a port for outside access; its format is outside_port:inside_port. The --name argument assigns the container a name, which lets us link the nodechat application to the MySQL container simply by name. The --link argument links the MySQL container to nodechat so that nodechat can reach MySQL to store the chat data; its format is container_name:link_name.

docker run -d --name mysql rusher81572/mysql

docker run -d --link mysql:mysql -p 8080:8080 rusher81572/nodechat


Find out which host is running nodechat with “docker ps” and use your web browser to navigate to that host on port 8080.

 

[Dockerfiles]

Dockerfiles contain all of the steps needed to create a container, such as adding files, defining volumes, installing software, and setting environment variables. The following steps explain how to set up persistent storage by creating one container that shares its volumes with other containers.

 

Create a directory called “fileserver” containing a file called “Dockerfile” with the following contents:

FROM ubuntu
VOLUME /share
CMD sleep infinity

Build the fileserver container image and create a local data directory to back the volume. The -t argument specifies the tag for the image, which is basically a name for it.

docker build -t fileserver .
mkdir data

Run the container in daemon mode. The -v argument shares a local directory into the container as a volume; replace location_to_data_dir with the full path to the data directory created in the previous step.

docker run -d -v location_to_data_dir:/share --name fileserver fileserver


 

Now we have a container named fileserver that can share volumes with other containers; the files will be stored locally in the data directory. To create a client, create a directory called “fileserver-client” containing a file called “Dockerfile” with the following contents:

FROM ubuntu
CMD sleep infinity

Build the fileserver-client container image.

docker build -t fileserver-client .

Now let’s run the fileserver-client container in interactive mode to create a test file. Interactive mode runs a container in the foreground so you can see what is happening and even interact with its shell. The --volumes-from argument mounts all of the volumes from the specified container. Please note that the container will stop and return you to the shell after running the command.

docker run -it --volumes-from fileserver fileserver-client touch /share/foo.txt


 

Run another fileserver-client container to list the files on the fileserver.

docker run -it --volumes-from fileserver fileserver-client ls /share

Check to ensure that the files are being stored locally.

ls location_to_data_dir

The file should be displayed in the terminal. Feel free to play around with this more. I hope that you learned something new today.

Cloud Explorer is back with v7.2

Introducing Cloud Explorer 7.2!

Cloud Explorer is an open-source Amazon S3 client that works on any operating system. The program features both a graphical and a command-line interface. Today I released version 7.2 and hope that you give it a test drive. Feedback and use cases are always encouraged.

 


 

What’s new in this release?

To start, this release of Cloud Explorer was compiled with Java 1.8.0_72 and version 1.10.56 of the AWS SDK for Java. The major improvements in this release concern file synchronization, which was mostly rewritten. The effort reduced technical debt and improved consistency between the command-line and graphical versions of Cloud Explorer.

 

How do I get it?

Cloud Explorer v7.2 is available under the “Downloads” section of the Release page on GitHub. Simply click on “cloudExplorer-7.2.zip” and the download will begin. When the download is finished, extract the zip file and double click on “CloudExplorer.jar”.

 

Where do we go from here?

I know it has been a while since Cloud Explorer has been touched. It is hard to handle a project all by yourself and keep innovating. I feel that with this release, Cloud Explorer has reached a stable point. I am always looking for new ideas and help from the community. If you are interested in contributing, please contact me or open an issue on the GitHub page.

 

Using Docker Swarm in Production

[Introduction]

I have always been fascinated with Docker Swarm and how it can cluster multiple computers together to run containers. I had mainly used Swarm via docker-machine with the VirtualBox provider for testing, and I felt it was now time to try running it in production. This blog post will explain how to create a simple Swarm cluster and secure it with a firewall. Docker officially recommends that you enable TLS on each node, but I wanted to keep things simpler and use firewall rules to prevent unauthorized access.

[Setup]

Docker v1.10 has been installed on each of these machines running Ubuntu 15.10:

node_0 – The Swarm Master.
node_1 – A Swarm node.
node_2 – Another Swarm node.

[Installation]

1. Set up each node to have Docker listen on its own host IP address and disable Docker’s firewall rules:

First, stop the Docker daemon so we can make configuration changes:

systemctl stop docker

Edit /etc/default/docker. Uncomment if needed and modify DOCKER_OPTS as follows:

DOCKER_OPTS="-H tcp://node_0_ip:2375 --iptables=false"

Start the Docker daemon again:

systemctl start docker

(Repeat this process on every node, substituting that node’s own IP address.)

2. On the Swarm Master node, create a cluster token. Each Swarm node will need the token to join the cluster. The output of this command will be a long token that you will use in the next steps.

docker run swarm create

3. On the Swarm Master node, create a Swarm Manager using the token from step 2. The Swarm manager will listen on port 5000.

docker run -d -p 5000:2375 -t swarm manage token://6b11f566db288878e16e56f37c58599f

4. Type the following commands from the master node to join each node to the cluster, using the token from step 2.

docker run -d swarm join --addr=node_0_ip:2375 token://6b11f566db288878e16e56f37c58599f
docker run -d swarm join --addr=node_1_ip:2375 token://6b11f566db288878e16e56f37c58599f
docker run -d swarm join --addr=node_2_ip:2375 token://6b11f566db288878e16e56f37c58599f

5. Since the Swarm manager is running on port 5000 on node_0, point your Docker client (for example, your laptop) at that host and port to use the cluster. The following command shows the status of the Swarm cluster:

docker -H tcp://node_0_ip:5000 ps
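Rather than passing -H on every command, you can also export the manager address once per shell session (a small convenience sketch, using the same node_0_ip placeholder as above):

```shell
# Point the Docker client at the Swarm manager for this shell session.
# node_0_ip is the same placeholder used in the steps above.
export DOCKER_HOST=tcp://node_0_ip:5000
```

After this, plain docker ps, docker info, and docker run commands in that shell all go through the Swarm manager, which schedules containers across the nodes.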

[Securing]

6. Finally, secure the Swarm cluster with firewall rules so that only the nodes in the cluster can talk to the Docker Engine. The following rules deny all incoming traffic and then allow Docker access only from the other nodes.

Node_0:

ufw allow 22
ufw allow 5000
ufw default deny incoming
ufw allow from node_1_ip
ufw allow from node_2_ip
ufw enable

Node_1:

ufw allow 22
ufw default deny incoming
ufw allow from node_0_ip
ufw allow from node_2_ip
ufw enable

Node_2:

ufw allow 22
ufw default deny incoming
ufw allow from node_0_ip
ufw allow from node_1_ip
ufw enable

[Conclusion]

Now you should have a three-node Docker Swarm cluster that is locked down. If you need to expose an external port for a container, the firewall rules will need to be adjusted manually.

 

Goodbye Docker on CentOS. Hello Ubuntu!

I have been a hardcore CentOS user for many years now. I enjoyed its minimal install for creating a light environment, its intuitive installation process, and its package manager. Docker is the most popular container format today and gives developers and enthusiasts an easy way to run workloads in containerized environments. I have been using Docker in production at home for about a year now for services such as Plex Media Server, the web server for this blog, ZNC, Minecraft, and MySQL, to name a few. A Dockerfile is a set of instructions used to create a Docker image, and I invested many hours creating perfect Dockerfiles using CentOS and Fedora to make deployments simple on any operating system. However, a personal revolution was brewing.


Using LVM cache on Linux with a RAM disk

The Challenge

This is a follow-up to my article on using a USB drive as an LVM cache. I decided to test things further by using a RAM disk instead of a USB drive.

 

The Journey

1. Create a RAM disk:

modprobe brd rd_nr=1 rd_size=4585760 max_part=0
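As a sanity check on that number: brd’s rd_size parameter is given in kilobytes, so the value above allocates roughly a 4.4 GB RAM disk, enough to hold the 4 GB cache volume plus metadata created below:

```shell
# rd_size is in 1 KiB units; convert the value above to GiB
echo 4585760 | awk '{ printf "%.2f GiB\n", $1 / 1024 / 1024 }'
# prints "4.37 GiB"
```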

2. Create the cache

pvcreate /dev/ram0
vgextend vg /dev/ram0
lvcreate -L 300M -n cache_meta vg /dev/ram0
lvcreate -L 4G -n cache_vol vg /dev/ram0
lvconvert --type cache-pool --poolmetadata vg/cache_meta --cachemode=writeback vg/cache_vol -y
lvconvert --type cache --cachepool vg/cache_vol vg/docker-pool

3. Run the DD test again

[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.89586 s, 553 MB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.79864 s, 583 MB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 0.922467 s, 1.1 GB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.33757 s, 784 MB/s

Average Speed: 755 MB/s

 

Conclusion

My average write speed using LVM caching with a RAM disk is 755 MB/s, compared to 411.25 MB/s with a USB thumb drive and roughly 260 MB/s with no cache. One caveat: a writeback cache backed by a RAM disk is volatile, so any dirty blocks not yet flushed to the real disk are lost on a crash or power failure. Flush or detach the cache before shutting down.

 

 

Using LVM cache on Linux

The Challenge

My home server uses a RAID 1 configuration. I was very disappointed in its performance and wanted to find a way to make it faster. Browsing the Internet one day, I came across news that CentOS 7 supports LVM cache. I found an old USB thumb drive and decided to take the cache challenge and see how it performs.

The Journey

Here is a simple DD test prior to enabling cache:

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 6.27698 s, 167 MB/s

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 5.04032 s, 208 MB/s

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 3.41007 s, 307 MB/s

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 2.94413 s, 356 MB/s

Average write speed: 259.5 MB/s

Time to enable caching and try to make the system perform better:

vgextend vg /dev/sdc

lvcreate -L 1G -n cache_metadata vg /dev/sdc

lvcreate -L 8G -n cache_vol vg /dev/sdc

lvconvert --type cache-pool --poolmetadata vg/cache_metadata vg/cache_vol

lvconvert --type cache --cachepool vg/cache_vol vg/original_volume_name

 

The write results with caching enabled:

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 3.73197 s, 281 MB/s

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 1.70449 s, 615 MB/s

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 3.91247 s, 268 MB/s

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 2.18025 s, 481 MB/s

Average write speed: 411.25 MB/s
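That average is straightforward to verify from the four runs above with a quick one-liner:

```shell
# Average the four cached write speeds from the dd runs above (MB/s)
echo "281 615 268 481" | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.2f MB/s\n", s / NF }'
# prints "411.25 MB/s"
```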

Conclusion:

When I originally built this machine from used parts on Amazon, I decided to reuse two old Western Digital Green drives which offer low performance and power usage.  I had no idea that they would perform poorly in RAID 1.  I was surprised and glad that a cheap USB flash drive helped me get a significant increase in write performance by an average of 155 MB/s. I find it fascinating how the Linux ecosystem helps people recycle old junk and put it to good use. Hooray.

 

Betting the farm on Docker

The Challenge

I wanted to try out Docker in production to really understand it. I believe that to fully understand or master something, you must make it part of your life. Containers are the next buzz word in IT and future employment opportunities are likely to prefer candidates with container experience. My current home environment consists of a single server running: OpenVPN, Email, Plex Media Server, Nginx, Torrent client, IRC bouncer, and a Samba server. The challenge is to move each of these to Docker for easier deployment.

The Journey

I noticed that many Docker images add multiple configuration files to the container at build time. I decided to take a simpler approach: have the Dockerfile modify most of the configuration files with the sed utility and generate the startup script as well. In most cases, having a single file to keep track of and build your system from is much easier than editing multiple files. In short, the Dockerfile should be the focal point of the container creation process for simpler administration.
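As a sketch of that approach (the image, package, and sed edit below are hypothetical, not one of my actual Dockerfiles):

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y nginx
# Patch the stock config in place with sed instead of shipping a full copy
RUN sed -i 's/worker_processes .*/worker_processes 2;/' /etc/nginx/nginx.conf
# Generate the startup script from within the Dockerfile as well
RUN printf '#!/bin/sh\nexec nginx -g "daemon off;"\n' > /start.sh && chmod +x /start.sh
CMD ["/start.sh"]
```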

Sometimes it is easier to upload prebuilt configuration files and additional support files for an application. For example, the Nginx configuration file is simpler to edit manually and have Docker put it in the correct location upon creation. Finally, there is no way around importing existing SSL certificates for Nginx in the Dockerfile.

The email server was the most difficult container to create because I had to modify configuration files for many components such as SASL, Postfix, and Dovecot, create the mail users, and set up the aliases for the system. Docker also needed to be cleaned often because it consumes a large amount of disk space for each container built. Dockerfiles with many commands took a long time to execute, so testing a few changes and rebuilding a container was slow. Amplify that by the many containers I made, and the many typos, and you can see how my weekend disappeared.

After my home production environment had moved to Docker and was working well, I created a BitBucket account to store all of my Dockerfiles and MySQL backups. A cron job inside the container does a database dump to a shared external folder. If my system ever dies, I can set up a new one easily by cloning the Git repository and building the containers with a single command.
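The cron entry for the dump is a one-liner along these lines (the schedule, credentials, and paths here are made up for illustration; /share stands for the external folder mounted into the container):

```
# m h dom mon dow  command -- nightly dump of all databases to the shared volume
0 3 * * * mysqldump --all-databases -u root -pEXAMPLE > /share/backup/all-databases.sql
```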

Conclusion

In conclusion, Docker was hard to deploy initially but will save time when disasters such as a system failure happen. A Dockerfile is basically a well-documented blueprint of an application or environment. For continuity purposes, Dockerfiles can be shared within an organization, and administrators should easily be able to understand what needs to be done. Even if you do not like Docker, you can copy the contents of a Dockerfile, remove a little text, and build the perfect system.

Cloud Explorer 6.0 Released

I have been working hard on new features and GUI enhancements for Cloud Explorer 6.0. I think it is important for an application to improve over time, and this project has seen its share of visual evolution. If you look back at the 1.0 release, the changes are night and day. I enjoy the challenge of making this program look good for each major release and hope that users agree. This release uses many freely-licensed images, including GPL images from the KDE Desktop Environment.

Getting Cloud Explorer

Cloud Explorer is available for download from here.

Upgrading to 6.0

Starting with Cloud Explorer 5.0 or later, you can upgrade by clicking on Help -> Check for updates. After the update is complete, restart Cloud Explorer.

 

6.0 Changes:

Bug Fixes:

1. Fixed a bug when switching accounts.
2. If no region is specified, it will default to "defaultAWS" to avoid a null pointer exception. To resolve the issue, delete and re-add your account.
3. If no image is selected when the image button is pressed, nothing happens.
4. If no music is selected when the play button is pressed, nothing happens.
5. Support for blank lines in s3.config.
6. Versioning display fix.

Improvements:
1. Snapshots and Migration are now located under the new "Snapshots and Migration" menu.
2. New icons for the GUI.
3. Maximize window support.
4. Many actions now avoid reloading buckets, for improved GUI responsiveness.

New Features:
1. Create Bucket snapshots.
2. Restore Bucket snapshots.

Cloud Explorer 6.0 Sneak Preview

I have been working hard on new features and GUI enhancements for Cloud Explorer 6.0. I think it is important for an application to improve over time, and this project has seen its share of visual evolution. If you look back at the 1.0 release, the changes are night and day. I enjoy the challenge of making this program look good for each major release and hope that users agree. This release uses many freely-licensed images, including GPL images from the KDE Desktop Environment.

If you would like to test this early release, please create an issue on GitHub and I will provide a release candidate. I can always use help with testing. You can also build the latest version of Cloud Explorer yourself. The expected release date for 6.0 is July 22, 2015.

6.0 Changes:

Bug Fixes:

1. Fixed a bug when switching accounts.
2. If no region is specified, it will default to "defaultAWS" to avoid a null pointer exception. To resolve the issue, delete and re-add your account.
3. If no image is selected when the image button is pressed, nothing happens.
4. If no music is selected when the play button is pressed, nothing happens.
5. Support for blank lines in s3.config.
6. Versioning display fix.

Improvements:
1. Snapshots and Migration are now located under the new "Snapshots and Migration" menu.
2. New icons for the GUI.
3. Maximize window support.
4. Many actions now avoid reloading buckets, for improved GUI responsiveness.

New Features:
1. Create Bucket snapshots.
2. Restore Bucket snapshots.

Introducing Cloud Explorer 5.6

Introduction

I am pleased to announce v5.6 of Cloud Explorer. The biggest new feature of this release lets users record audio messages that are saved into their selected bucket. Cloud Explorer provides a great way to share audio messages because the S3 API allows users to share files via a public URL. Syncing is now more stable and intelligent: when syncing to or from a bucket, sync compares the last-modified times, and if the destination has a newer version, the file is not copied. The text editor now has a folder list box so users can save notes into a specific folder. This is an excellent way to stay organized and use Cloud Explorer for note taking.
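The timestamp rule behaves like the shell's -nt ("newer than") test; a minimal sketch of the logic (the file names below are hypothetical):

```shell
# Copy only when the source is newer than the destination -- the same
# last-modified comparison the sync feature performs.
mkdir -p /tmp/syncdemo/src /tmp/syncdemo/dst
echo "new contents" > /tmp/syncdemo/src/note.txt
echo "old contents" > /tmp/syncdemo/dst/note.txt
touch -d "2020-01-01" /tmp/syncdemo/dst/note.txt  # make the destination older

if [ /tmp/syncdemo/src/note.txt -nt /tmp/syncdemo/dst/note.txt ]; then
  cp /tmp/syncdemo/src/note.txt /tmp/syncdemo/dst/note.txt
fi
```

Here the destination is older, so the file is overwritten; had the destination been newer, it would have been left alone.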

Getting Cloud Explorer

Cloud Explorer is available for download from here.  After downloading, please upgrade to 5.6 if you are running an earlier release.

 

Upgrading to 5.6

Starting with Cloud Explorer 5.0 or later, you can upgrade by clicking on Help -> Check for updates. After the update is complete, restart Cloud Explorer.

 

Here is a complete list of changes in v5.6:

Bug Fixes:
1. Syncing (GUI and CLI): Prevent duplicate transfers of synced files in folders, and items now go into the correct folder when syncing to the local machine.

2. Syncing from CLI: Syncing from S3 saves to the appropriate directory level.

Improvements:
1. Music Player: plays WAV files, the stop button was renamed "Stop/Close" for clarity, and MP3 file extensions are no longer case sensitive.

2. Syncing GUI and CLI: Overwrite button removed.

New Features:
1. Audio recorder.
2. Sync GUI and CLI: Timestamps are compared and the outdated files are overwritten.
3. Folder support for saving files in the Text Editor.
