Many people will say that 2016 was a terrible year and that they can’t wait for 2017. I agree that 2016 was not perfect for many people, but it was a great year for linux-toys. I made this blog with the goal of influencing people and driving that creative spark that we all have inside. In this post I will go over the website statistics and discuss a few of the blog entries that I thought were the most influential of the year.
I switched to GlusterFS a while ago to make it easier to move containers across my Raspberry Pi Docker cluster. Gluster worked great and was easy to get up and running, but performance was very poor. The average write speed was about 1 MB/s, which is unacceptable for a filesystem that will undergo a lot of writes. I decided it was time to take action and started looking at kernel parameters that could be changed.
From researching performance over the years, I often came across the following settings and thought they were the best place to start. Since the Raspberry Pi is not really made for performance, I did not think it was worth the time to investigate additional settings.
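As a starting point, it helps to look at the kernel's dirty-page writeback tunables before changing anything; they control when cached writes are flushed out to the SD card:

```shell
# Show the current writeback tunables. These are percentages of RAM:
# the kernel starts background flushing at dirty_background_ratio and
# throttles writers once dirty_ratio is reached.
cat /proc/sys/vm/dirty_background_ratio
cat /proc/sys/vm/dirty_ratio
```

Values can then be changed with `sudo sysctl -w vm.dirty_background_ratio=5` (the number is purely illustrative, not the value I settled on) and made permanent in /etc/sysctl.conf.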
I released PiCluster last week and wanted to show how to run the Scality S3 server with it using Docker. Scality S3 is an open-source object storage server. PiCluster is a simple, lightweight container management and orchestration framework that I wrote in Node.js. Besides running containers, PiCluster can also perform health checks on applications to ensure that a service is actually running. Before we begin, I am assuming that you already have Docker installed. Let’s get started by downloading PiCluster.
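As a rough sketch, downloading and preparing PiCluster looks something like this (the repository path shown is an assumption, so adjust it to wherever the project actually lives; Node.js and npm are assumed to be installed):

```shell
# Clone the PiCluster repository (URL is a placeholder assumption)
git clone https://github.com/picluster/picluster.git
cd picluster

# Install the Node.js dependencies for the project
npm install
```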
I was always fascinated with distributed filesystems and wanted to learn more about Gluster since it is becoming more popular in larger open-source projects. Since I have a few Raspberry Pi’s, I thought that now was the best time to learn. This blog post will explain how to run Gluster on a two-node Raspberry Pi cluster from a Docker container.
- Two Raspberry Pi’s (rpi-1 and rpi-2)
- A Gluster image running from a local Docker registry
- Hostnames resolvable in /etc/hosts on both Pi’s
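To make that concrete, here is a minimal sketch of the host resolution and a basic two-brick replicated volume (the IP addresses and brick paths are placeholders for your own network and disks):

```shell
# /etc/hosts entries on BOTH Pi's (addresses are placeholders)
# 192.168.1.11  rpi-1
# 192.168.1.12  rpi-2

# From rpi-1: join the second node, then create and start a
# replica-2 volume with one brick per Pi (paths are placeholders)
gluster peer probe rpi-2
gluster volume create gv0 replica 2 rpi-1:/data/brick1 rpi-2:/data/brick1
gluster volume start gv0
```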
My previous post was about using Cloud Explorer with the Scality S3 server. After I published that post, I thought it would be informative to go one step further and explain how I use the S3 server in homeduction (applications run at home in production). My homeduction environment consists of four Raspberry Pi’s running Docker that power this WordPress blog and many other applications. My goal is to add an S3 server to store the images for this blog and anything else that I can come up with.
I spent a few weeks searching for an open-source S3 server that I could run at home to test Cloud Explorer. I first came across Minio, an open-source S3 server, but I could not get it to work with Cloud Explorer because it had issues resolving bucket names via DNS, which is a requirement when using the AWS SDK. I then read an article about Scality releasing an open-source S3 server that you can run as a Docker image. I was able to get Scality up and running quickly with little effort. In this post, I will explain how I got the Scality S3 server set up and how to use it with Cloud Explorer.
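Getting the server running is essentially a single `docker run`; the image name and port below are the ones I recall from the time, but check the project's documentation in case they have changed:

```shell
# Pull and run the Scality S3 server, exposing its API on port 8000
# (image name and port may have changed since this was written)
docker run -d --name s3server -p 8000:8000 scality/s3server
```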
I recently started checking out Kubernetes and wanted to share with everyone how I got WordPress running on it as a three-tier application. I made the decision to learn Kubernetes because Docker Swarm was not working well for me. To start, I downloaded and installed Minikube on my laptop.
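With Minikube installed, bringing up a local single-node cluster is a one-liner (the VM driver it picks depends on your platform):

```shell
# Start a local Kubernetes cluster and confirm the node is ready
minikube start
kubectl get nodes
```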
I then created three Docker images and pushed them to my Docker Hub registry:
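Each image goes through the same build-and-push loop; the registry namespace, image name, and Dockerfile path below are hypothetical placeholders, not the actual names from the post:

```shell
# Build, tag, and push one tier's image; repeat for the other two tiers.
# "myhub" and "wordpress-web" are placeholder names.
docker build -t myhub/wordpress-web:latest ./wordpress-web
docker push myhub/wordpress-web:latest
```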
If you have been keeping up with Docker lately, you may have come across my blog post about the sad state of Docker. In that post, I go over how the 1.12 release appeared interesting from all the marketing announcements and the constant copying and pasting of the same Docker content into blogs all over the world. However, many others and I expressed our opinions on Hacker News about how Docker failed to deliver a quality release. The New Stack then summarized the weekend’s discussions in a new blog post and raised the possibility that a fork of Docker may arise. Is a fork really the best answer? Let’s take a look.
I have always been a big fan of Docker, as anyone who regularly reads this blog can see. However, I am very disappointed in how Docker handled the 1.12 release. I like to think of version 1.12 as a great proof of concept that should not have received the amount of attention it did. Let’s dive deep into what I found wrong.
First, I do not think a company should market and promote exciting new features that have not been well tested. Every time Docker makes an announcement, the news spreads like a virus to blogs and news sites all over the globe. Tech blogs basically copy and paste the exact procedure Docker described into a new post as if it were original content. This cycle repeats over and over and becomes annoying because I see the same story a million times. What I hate most about these redundant articles is that the features do not work as well as what is written about them.
I was really excited hearing about the new Swarm mode feature and wanted it to work as described, because that would mean I could one day easily make a Swarm cluster with my four Raspberry Pi’s and have container orchestration, load balancing, automatic failover, multi-host networking, and mesh networking without any effort. Swarm in v1.12 is much easier to set up than its predecessor, and I wanted to put it in production at home (homeduction). To test Swarm, I set up a few virtual machines using docker-machine on my laptop, went through the Swarm creation process, and then began to run into issues when deploying my applications.
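Roughly, the setup I ran looked like the following sketch; the VM names are mine, and the VirtualBox driver is an assumption about the laptop environment:

```shell
# Create two VMs with docker-machine (VirtualBox driver assumed)
docker-machine create -d virtualbox manager
docker-machine create -d virtualbox worker1

# Initialize Swarm mode on the manager
docker-machine ssh manager \
  "docker swarm init --advertise-addr $(docker-machine ip manager)"

# Fetch the worker join token and join the second VM to the cluster
TOKEN=$(docker-machine ssh manager "docker swarm join-token -q worker")
docker-machine ssh worker1 \
  "docker swarm join --token $TOKEN $(docker-machine ip manager):2377"
```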
An important feature to have in a Swarm cluster is multi-host networking for containers. This allows containers to talk to each other on a virtual network that spans many hosts running the Docker engine, which matters whenever containers need to communicate, such as a web application connecting to another container running MySQL. The problem I faced is that none of my containers could communicate across hosts. On the occasions when it did work, the mesh networking would not route traffic properly to the host running my container. This means none of my applications worked properly. I went to the Docker forums, and many people shared my pain.
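For reference, this is the kind of overlay setup that was failing for me; the service names are illustrative, and a real WordPress/MySQL pair would also need environment variables for credentials:

```shell
# On the manager: create an overlay network spanning the Swarm nodes
docker network create -d overlay appnet

# Attach services to it so "web" can reach "db" by service name
# (omitting the MYSQL_*/WORDPRESS_* env vars a real deployment needs)
docker service create --name db --network appnet mysql:5.7
docker service create --name web --network appnet --publish 80:80 wordpress
```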
It is not wise to blanket the Internet and conventions with marketing material about exciting new features that do not work as presented. There are still many bugs in Swarm that need to be fixed before releasing it to the general public and having them beta test for you. What is the rush to release? Would it hurt that much to wait a few more weeks or months to do it right and ship a properly working, tested product? Yes, we all know Docker is awesome and is trying to play catch-up with competitors such as Apcera and Kubernetes, but please take it slow and make Docker great again!
Update: tweaked the paragraphs to make it clearer that my testing was not done on the Raspberry Pi but with docker-machine on a laptop.
I decided to finally make use of my four Raspberry Pi Model 3’s and take on the challenge of moving all of my home services to them. Previously, I ran an x86 desktop as a server in my living room. The loud noise coming from the server sometimes made the room uncomfortable to be in. That noisy box is home to this website and many other applications such as Plex, Transmission, OpenVPN, Jenkins, Samba, and various Node.js projects, all running in Docker. Having all of those applications on a single box is a single point of failure and makes system administration harder when reboots are required.