I was very happy to have been interviewed on CoderRadio! We had a great discussion about how PiCluster started and about programming in general. I liked being able to talk about my life's work in open source. We discussed how I started my first project, MephistoBackup, and how hard it is to get community involvement in an open source project. This blog was also featured, and I explained what linux-toys.com is all about. Finally, I talked about my other main project, Cloud Explorer, and the exciting features it has.
I embedded the interview in this post so you can watch it easily here or on YouTube.
Thank you, JupiterBroadcasting, for having me on CoderRadio. I have always been a big fan of the network. In its early days, its wonderful content, especially the Linux Action Show, helped me get back into Linux. Their shows are very inspirational, and I hope that you will also learn a lot about the Linux ecosystem and become a fan.
Web applications typically read and write data to a database as they process information for the user. Organizations need to build applications that can scale with their business. While it is easy to scale web applications with containers and cloud platforms, the last thing an IT administrator wants is a bottleneck at the database, because it affects application performance and availability at scale. One way to address these concerns is with a clustered database such as ScyllaDB. This blog post will demonstrate how to use Node.js with ScyllaDB running in Docker.
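As a rough sketch of the pieces involved (the keyspace and table names here are made up for illustration), ScyllaDB speaks the CQL protocol, so the cassandra-driver package on npm can talk to a Scylla container directly:

```javascript
// Minimal sketch: connect Node.js to a single-node ScyllaDB container.
// Assumes: docker run --name scylla -p 9042:9042 -d scylladb/scylla
// Install the driver with: npm install cassandra-driver
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],     // the port published by the container
  localDataCenter: 'datacenter1'    // Scylla's default data center name
});

async function run() {
  await client.connect();
  await client.execute(
    "CREATE KEYSPACE IF NOT EXISTS demo WITH replication = " +
    "{'class': 'SimpleStrategy', 'replication_factor': 1}");
  await client.execute(
    'CREATE TABLE IF NOT EXISTS demo.users (id int PRIMARY KEY, name text)');
  await client.execute('INSERT INTO demo.users (id, name) VALUES (?, ?)',
    [1, 'alice'], { prepare: true });  // prepared statement with bound values
  const result = await client.execute('SELECT id, name FROM demo.users');
  console.log(result.rows);
  await client.shutdown();
}

run().catch(console.error);
```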
Cloud Explorer is a powerful GUI and CLI Amazon S3 client. This release contains many code improvements that help users sync their data to an S3 bucket and migrate data between different S3 providers. There is also better support for S3-compatible servers such as Scality S3 Server and Minio.
Syncing, bucket migrations, and snapshots were completely rewritten for optimal performance. Now five sync tasks can run at the same time. Each task checks the file metadata and performs the necessary upload and download operations concurrently instead of one operation at a time.
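To illustrate the idea, and this is only a sketch rather than the actual Cloud Explorer code, bounded concurrency in Node.js can be as simple as a fixed pool of workers draining a shared task queue:

```javascript
// Illustrative sketch only: run sync tasks with a fixed concurrency limit.
async function runWithLimit(tasks, limit = 5) {
  const queue = tasks.slice();          // pending sync tasks
  async function worker() {
    while (queue.length > 0) {
      const task = queue.shift();       // take the next task off the queue
      await task();                     // e.g. compare metadata, then upload/download
    }
  }
  // Start five workers; each drains the queue concurrently.
  await Promise.all(Array.from({ length: limit }, worker));
}
```

Each worker picks up the next pending task as soon as its current one finishes, so up to five transfers are always in flight.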
The background sync feature lets users perform a bidirectional sync on a folder, Dropbox-style, from the GUI or CLI every five minutes. This feature was also rewritten and takes advantage of the improved syncing algorithms discussed earlier. Since it now runs in its own thread with a separate configuration file, users can keep using Cloud Explorer while the sync tasks run in the background.
Path-style access is now enabled for non-AWS accounts, providing better support for private S3-compatible servers like Scality and Minio. Users can now also connect to these servers by IP address or DNS name.
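For reference, here is roughly what path-style access looks like with the AWS SDK for JavaScript v2; the endpoint address is a placeholder for your own server, and accessKey1/verySecretKey1 happen to be Scality S3 Server's default development credentials:

```javascript
// Sketch: connect to a private S3-compatible server by IP with path-style URLs.
const AWS = require('aws-sdk'); // npm install aws-sdk

const s3 = new AWS.S3({
  endpoint: 'http://192.168.1.50:8000', // placeholder: your Scality/Minio address
  accessKeyId: 'accessKey1',            // Scality's default dev credentials
  secretAccessKey: 'verySecretKey1',
  s3ForcePathStyle: true,               // http://host/bucket instead of http://bucket.host
  signatureVersion: 'v4'
});

s3.listBuckets((err, data) => {
  if (err) return console.error(err);
  console.log(data.Buckets.map(b => b.Name));
});
```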
Regions have been removed from the code and configuration file. Cloud Explorer now retrieves the appropriate region from the S3 account, resulting in better functionality and easier use. This means that previous Cloud Explorer configuration files will not work in the new release; the accounts will have to be added again.
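A minimal sketch of how a client can discover a bucket's region at runtime with the AWS SDK, rather than hard-coding it (this is an illustration, not the exact Cloud Explorer code):

```javascript
// Sketch: look up a bucket's region instead of storing it in a config file.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.getBucketLocation({ Bucket: 'my-bucket' }, (err, data) => {
  if (err) return console.error(err);
  // An empty LocationConstraint means the classic us-east-1 region.
  console.log(data.LocationConstraint || 'us-east-1');
});
```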
The CLI now supports bucket snapshots and migrations, with the ability to use environment variables instead of a configuration file. This functionality makes it easier to run in a container runtime such as Docker or Rocket, as sketched below.
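As an illustration only, the environment variable names and image name below are hypothetical placeholders rather than Cloud Explorer's real ones, a containerized snapshot run might look like this:

```shell
# Hypothetical example: variable names and image name are placeholders,
# not the real Cloud Explorer ones.
docker run --rm \
  -e S3_ACCESS_KEY=accessKey1 \
  -e S3_SECRET_KEY=verySecretKey1 \
  -e S3_ENDPOINT=http://192.168.1.50:8000 \
  my-cloud-explorer-image --snapshot my-bucket
```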
I hope that you enjoy this exciting new release. Please provide feedback on the GitHub page or directly to me on Twitter.
In my previous blog posts, I always talked about my setup but never showed a complete diagram. In the image above, you can see everything that I am running on my six-node Raspberry Pi cluster. All of the applications shown are running in Docker containers managed by PiCluster, except for Gluster.
I am pleased to announce PiCluster v1.7. In this release, I wanted to make PiCluster easier to use by having the Web Console handle most of the common configuration file changes. Not everyone enjoys editing JSON files, myself included. Now let's go over what is new in this release.
I released PiCluster last week and wanted to show how to run the Scality S3 server with it using Docker. Scality S3 is an open-source object storage server. PiCluster is a simple and lightweight container management and orchestration framework that I wrote in Node.js. Besides running containers, PiCluster can also perform health checks on applications to ensure that a service is actually running. Before we begin, I am assuming that you already have Docker installed. Let's get started by downloading PiCluster.
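Assuming you have git and Node.js on hand, cloning the project and installing its dependencies looks roughly like this (the URL below points at the project's GitHub home; adjust it if you grabbed a release archive instead):

```shell
# Grab PiCluster and install its Node.js dependencies.
git clone https://github.com/picluster/picluster.git
cd picluster
npm install
```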
I was always fascinated with distributed filesystems and wanted to learn more about Gluster since it is becoming more popular in larger open-source projects. Since I have a few Raspberry Pis, I thought that now is the best time to learn. This blog post explains how to run Gluster on a two-node Raspberry Pi cluster from a Docker container. Here is what you need:
- Two Raspberry Pis (rpi-1 and rpi-2)
- A Gluster image in a local Docker registry
- Hostnames resolvable via /etc/hosts on both Pis
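As a rough sketch of the flow, with the image name and brick paths as assumptions you should adjust for your own registry and disks, the setup boils down to starting a privileged Gluster container on each Pi and then creating a replicated volume:

```shell
# On each Pi: start Gluster from the local registry (image name is an
# assumption; use whatever you pushed to your own registry).
docker run -d --name gluster --privileged --net=host \
  -v /srv/gluster:/srv/gluster \
  registry.local:5000/gluster

# On rpi-1: form the trusted pool and create a two-way replicated volume.
docker exec gluster gluster peer probe rpi-2
docker exec gluster gluster volume create gv0 replica 2 \
  rpi-1:/srv/gluster/brick rpi-2:/srv/gluster/brick force
docker exec gluster gluster volume start gv0
```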
I spent a few weeks searching for an open-source S3 server that I could run at home to test Cloud Explorer. I first came across Minio, an open-source S3 server, but I could not get it to work with Cloud Explorer because it had issues resolving bucket names via DNS, which the AWS SDK requires. I then read an article about Scality releasing an open-source S3 server that runs inside a Docker image. I was able to get Scality up and running quickly with little effort. In this post, I will explain how I set up the Scality S3 server and how to use it with Cloud Explorer.
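Getting the server running really is a one-liner; the image below is Scality's published Docker Hub image:

```shell
# Start the Scality S3 server on port 8000.
docker run -d --name s3server -p 8000:8000 scality/s3server

# Default development credentials: accessKey1 / verySecretKey1.
```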
If you have been keeping up with Docker lately, you may have come across my blog post about the sad state of Docker. In that post, I describe how the 1.12 release looked interesting from all the marketing announcements and the constant copying and pasting of the same Docker content into blogs around the world. However, many others and I expressed our opinions on Hacker News about how Docker failed to deliver a quality release. The New Stack then summarized the weekend's discussions in a new blog post and suggested that a fork of Docker may arise. Is a fork really the best answer? Let's take a look.
The nice thing about open source software is that anyone can take the software and modify it as needed or even create their own version of the software for redistribution. Software repositories like GitHub make it really easy for developers to fork a project and begin making their own changes and improvements. A recent example was the fork of OwnCloud into NextCloud. My problem with forking is that it leads to fragmentation. I personally like one or two ways of doing something well versus many different ways to partially achieve the same goal.
The container space is already growing rapidly in terms of building and orchestration. The biggest container format is, of course, Docker, which has Swarm for container orchestration. CoreOS also has its own container runtime called Rocket, which is starting to gain traction and uses Kubernetes for orchestration. There are many other companies sprouting up in the container management area with their own unique solutions. However, Kubernetes appears to be becoming the standard orchestration layer that many products now use. To help standardize containers, the Open Container Initiative (OCI) was formed to define how containers work.
The OCI was created by members of CoreOS, Red Hat, Docker, and a few others on June 22, 2015, and gained support from companies like Apcera, Google, Apprenda, Amazon, and many more. Collaboration between companies for the greater good is terrific, and we need more of it. Docker made strides toward OCI compliance in their v1.11 release. Progress is being made to standardize this space, but it takes time. Instead of forking Docker, the community should continue to raise their concerns in a civil manner and wait a little longer for change to happen.
Creating more fragmentation would be counterproductive because the community's attention would be split among projects. How will companies new to containers and microservices ever adopt this great new way of doing things if they can never decide what to use? Anyone can fork Docker, but we need to ask ourselves whether another container solution is really needed when we already have many to choose from. If the answer is yes, we must ask: who will maintain it? How long can the fork last? How much time will be wasted? Do the forkers have enough resources to build a quality project? How will they secure their product and address vulnerabilities?
How about instead we stay positive and keep containing?