Tag Archives: linux

Introducing Cloud Explorer 10

Cloud Explorer is a powerful GUI and CLI Amazon S3 client. This release includes many code improvements that help users sync their data to an S3 bucket and migrate data between different S3 providers. There is also better support for S3-compatible servers such as Scality S3 Server and Minio.

Syncing, bucket migrations, and snapshots were completely rewritten for optimal performance. Five sync tasks can now run at the same time, and each task checks file metadata and performs the necessary upload and download operations concurrently instead of one operation at a time.

The background sync feature lets users perform a Dropbox-style bidirectional sync on a folder every five minutes, from either the GUI or the CLI. This feature was also rewritten and takes advantage of the improved syncing algorithms discussed earlier. Since it now runs in its own thread with a separate configuration file, users can keep using Cloud Explorer while the sync tasks run in the background.

Path-style access is now enabled for non-AWS accounts, providing better support for private S3-compatible servers like Scality and Minio. Users can now also connect to these servers by IP address or DNS name.
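Cloud Explorer handles this for you, but as a rough illustration of what path-style access means, the same addressing can be exercised against a private endpoint with the AWS CLI (the IP address and port below are just examples):

# Tell the AWS CLI to use path-style requests (http://host/bucket instead of http://bucket.host)
aws configure set default.s3.addressing_style path
# List buckets on a private S3-compatible server reached by IP address
aws --endpoint-url http://192.168.1.20:9000 s3 ls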

Regions have been removed from the code and configuration file; Cloud Explorer now retrieves the appropriate region from the S3 account, which results in better functionality and easier use. This means that previous Cloud Explorer configuration files will not work in the new release and accounts will have to be added again.

The CLI now supports bucket snapshots and migrations, with the ability to use environment variables instead of a configuration file. This makes it easier to run Cloud Explorer in a container such as Docker or Rocket.
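As a sketch of the pattern only (the variable names and image name below are placeholders, not Cloud Explorer’s documented ones), credentials and bucket details can come from the container environment instead of a mounted configuration file:

# Hypothetical sketch: credentials and buckets come from -e flags instead of a config file
docker run --rm \
  -e S3_ACCESS_KEY=myaccesskey \
  -e S3_SECRET_KEY=mysecretkey \
  -e SOURCE_BUCKET=my-bucket \
  -e DEST_BUCKET=my-bucket-backup \
  cloud-explorer-cli migrate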

I hope you enjoy this exciting new release. Please provide feedback on the GitHub page or directly to me on Twitter.

How to use Cloud Explorer with Minio

I recently updated Cloud Explorer to make it work with Minio. Minio is one of many open-source S3 servers available today that people can run on-premises for their personal cloud storage needs. With the added support, Minio users can take advantage of Cloud Explorer’s unique features such as performance testing, note taking, music playback, image viewing, and search.
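For anyone who wants to try it while reading, the quickest way to get a local Minio instance is its official Docker image (the /data path is simply where objects are stored inside the container):

# Start a local Minio server on port 9000, storing objects under /data
docker run -p 9000:9000 minio/minio server /data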

Continue reading How to use Cloud Explorer with Minio

PiCluster 1.6 – Move your Containers to Different Hosts

I am pleased to announce v1.6 of PiCluster. This release fixes a few usability bugs and adds a new feature that lets you change the host of a running container. Being able to easily change where a container runs is a standard and crucial feature for any container management platform. I am glad that it is finally here, so let’s explore how it works!

Continue reading PiCluster 1.6 – Move your Containers to Different Hosts

Announcing PiCluster 1.4

I am pleased to announce the new version of PiCluster. In this release, users can connect to a host running an rsyslog server and the PiCluster agent to view the log drain in the PiCluster web console and run searches. This integration provides a single pane of glass for easily monitoring physical hosts and Docker containers. Let’s take a look at how to enable this functionality.
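The PiCluster-specific configuration is covered in the full post; the rsyslog side of such a setup is typically just each host forwarding its logs to the machine running the rsyslog server and the PiCluster agent, along these lines (the address is an example):

# Forward all logs over TCP to the host running rsyslog and the PiCluster agent
echo '*.* @@192.168.1.50:514' > /etc/rsyslog.d/50-forward.conf
systemctl restart rsyslog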

Continue reading Announcing PiCluster 1.4

My Journey to Improve Disk Performance on the Raspberry Pi

I switched to GlusterFS a while ago to provide easier container mobility across my Raspberry Pi Docker cluster. Gluster worked great and was easy to get up and running, but I had very poor performance. The average write speed was about 1 MB/s, which is unacceptable for a filesystem that will undergo a lot of writes. I decided that it was time to take action and started looking at kernel parameters that could be changed.
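The details are in the full post, but the general approach was to inspect and adjust the kernel’s VM writeback settings with sysctl, along these lines (the values here are illustrative, not my final numbers):

# Check the current writeback thresholds
sysctl vm.dirty_background_ratio vm.dirty_ratio
# Allow more dirty pages to accumulate in RAM before writeback kicks in (example values)
sudo sysctl -w vm.dirty_background_ratio=10
sudo sysctl -w vm.dirty_ratio=60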

Continue reading My Journey to Improve Disk Performance on the Raspberry Pi

How to use Cloud Explorer with Scality S3 server

I spent a few weeks searching for an open-source S3 server that I could run at home to test Cloud Explorer. I first came across Minio, an open-source S3 server, but I could not get it to work with Cloud Explorer because it had issues resolving bucket names via DNS, which is a requirement when using the AWS SDK. I then read an article about Scality releasing an open-source S3 server that you can run inside a Docker image. I was able to get Scality up and running quickly with little effort. In this post, I will explain how I set up the Scality S3 server and how to use it with Cloud Explorer.
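The full setup is described in the rest of the post, but getting the server itself running is essentially a single Docker command (image name and port as published by Scality at the time):

# Run the Scality S3 server in the background, listening on port 8000
docker run -d --name s3server -p 8000:8000 scality/s3server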

Continue reading How to use Cloud Explorer with Scality S3 server

Goodbye Docker on CentOS. Hello Ubuntu!

I have been a hardcore CentOS user for many years now. I enjoyed its minimal install for creating a light environment, its intuitive installation process, and its package manager. Docker is the most popular container format today and provides developers and enthusiasts with an easy way to run workloads in containerized environments. I have been using Docker in production at home for about a year now for services such as Plex Media Server, the web server for this blog, ZNC, Minecraft, and MySQL, to name a few. A Dockerfile is a set of instructions used to create a Docker image. I invested many hours creating perfect Dockerfiles using CentOS and Fedora to make deployments simple on any operating system. However, a personal revolution was brewing.
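For readers new to the term, a minimal CentOS-based Dockerfile looks something like the sketch below; this is an illustrative example, not one of the actual files from my setup:

# Write a minimal Dockerfile that builds a small Apache image on a CentOS base
cat > Dockerfile <<'EOF'
FROM centos:7
RUN yum -y install httpd && yum clean all
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
EOF
# Build the image from the Dockerfile in the current directory
docker build -t centos-httpd .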

Continue reading Goodbye Docker on CentOS. Hello Ubuntu!

Using LVM cache on Linux with a RAM disk

The Challenge

This is a follow-up to my previous article on using a USB drive as an LVM cache. I decided to test things further by using a RAM disk instead of a USB drive.

The Journey

1. Create a RAM disk:

# brd module parameters: rd_nr is the number of RAM disks to create; rd_size is in KiB (~4.4 GiB here)
modprobe brd rd_nr=1 rd_size=4585760 max_part=0

2. Create the cache

# Add the RAM disk to the volume group and carve out the cache data and metadata volumes
pvcreate /dev/ram0
vgextend vg /dev/ram0
lvcreate -L 300M -n cache_meta vg /dev/ram0
lvcreate -L 4G -n cache_vol vg /dev/ram0
# Combine them into a writeback cache pool and attach it to the existing docker-pool volume
lvconvert --type cache-pool --poolmetadata vg/cache_meta --cachemode writeback vg/cache_vol -y
lvconvert --type cache --cachepool vg/cache_vol vg/docker-pool
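Before rerunning the benchmark, it is worth confirming that the cache pool is attached to docker-pool; listing all logical volumes in the volume group (including the hidden cache volumes) is enough for that:

# Show every logical volume in the vg volume group, including hidden cache volumes
lvs -a vg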

3. Run the DD test again

[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.89586 s, 553 MB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.79864 s, 583 MB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 0.922467 s, 1.1 GB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.33757 s, 784 MB/s

Average Speed: 736 MB/s

Conclusion

In conclusion, my average write speed using LVM caching with a RAM disk is 736 MB/s. With a USB thumb drive as the cache, my average speed was 411.25 MB/s, and with no cache it was 256 MB/s.