Using LVM cache on Linux with a RAM disk

The Challenge

This is a follow-up to my article on using a USB drive as an LVM cache. I decided to take things further by using a RAM disk instead of a USB drive.

 

The Journey

1. Create a RAM disk:

modprobe brd rd_nr=1 rd_size=4585760 max_part=0
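
Note that rd_size is given in KiB, so 4585760 KiB works out to roughly 4.4 GiB. A quick sanity check that the device exists with the expected size (assuming the module created /dev/ram0):

lsblk -b /dev/ram0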

2. Create the cache

pvcreate /dev/ram0
vgextend vg /dev/ram0
lvcreate -L 300M -n cache_meta vg /dev/ram0
lvcreate -L 4G -n cache_vol vg /dev/ram0
lvconvert --type cache-pool --poolmetadata vg/cache_meta --cachemode writeback vg/cache_vol -y
lvconvert --type cache --cachepool vg/cache_vol vg/docker-pool
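
A quick way to confirm that the cache is attached before benchmarking (exact output varies by LVM version):

lvs -a -o name,size,pool_lv vg

Keep in mind that writeback mode on a volatile RAM disk means any dirty blocks not yet flushed to the origin are lost on a crash or power failure, so this setup is for experimentation, not for data you care about.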

3. Run the DD test again

[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.89586 s, 553 MB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.79864 s, 583 MB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 0.922467 s, 1.1 GB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.33757 s, 784 MB/s

Average write speed: 755 MB/s
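
One caveat: a dd run like this is flattered by the Linux page cache, since the writes land in RAM before being flushed to the volume. Forcing a flush at the end gives a more conservative number; a variant worth trying (not part of the original test, and assuming /tmp lives on the cached volume):

dd if=/dev/zero of=/tmp/1G bs=1M count=1000 conv=fdatasync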

 

Conclusion

In conclusion, my average write speed was roughly 755 MB/s using LVM caching with a RAM disk, versus 411.25 MB/s with a USB thumb drive and 259.5 MB/s with no cache at all.

 

 

Using LVM cache on Linux

The Challenge

My home server uses a RAID 1 configuration. I was very disappointed in its performance and wanted to find a way to make it faster. While browsing the Internet one day, I came across headlines announcing that CentOS 7 supports LVM cache. I found an old USB thumb drive and decided to take the cache challenge and see how it performs.

The Journey

Here is a simple DD test prior to enabling cache:

dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 6.27698 s, 167 MB/s
dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 5.04032 s, 208 MB/s
dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 3.41007 s, 307 MB/s
dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 2.94413 s, 356 MB/s

Average write speed: 259.5 MB/s

Time to enable caching and try to make the system perform better:

pvcreate /dev/sdc
vgextend vg /dev/sdc
lvcreate -L 1G -n cache_metadata vg /dev/sdc
lvcreate -L 8G -n cache_vol vg /dev/sdc
lvconvert --type cache-pool --poolmetadata vg/cache_metadata vg/cache_vol
lvconvert --type cache --cachepool vg/cache_vol vg/original_volume_name
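
Incidentally, if the thumb drive ever needs to be retired, newer LVM releases can flush and detach the cache in one step; a sketch using the names above:

lvconvert --uncache vg/original_volume_name
vgreduce vg /dev/sdc
pvremove /dev/sdc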

 

The write results with caching enabled:

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 3.73197 s, 281 MB/s
# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.70449 s, 615 MB/s
# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 3.91247 s, 268 MB/s
# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 2.18025 s, 481 MB/s

Average write speed: 411.25 MB/s

Conclusion

When I originally built this machine from used parts found on Amazon, I decided to reuse two old Western Digital Green drives, which trade performance for low power usage. I had not expected them to perform quite so poorly in RAID 1. I was surprised and glad that a cheap USB flash drive gave me a significant increase in write performance, raising the average by roughly 152 MB/s. I find it fascinating how the Linux ecosystem helps people recycle old junk and put it to good use. Hooray.

 

Betting the farm on Docker

The Challenge

I wanted to try out Docker in production to really understand it. I believe that to fully understand or master something, you must make it part of your life. Containers are the next buzzword in IT, and future employers are likely to prefer candidates with container experience. My current home environment consists of a single server running OpenVPN, email, Plex Media Server, Nginx, a torrent client, an IRC bouncer, and a Samba server. The challenge is to move each of these to Docker for easier deployment.

The Journey

I noticed that many Docker images are built from multiple support files that get copied into the container at build time. I decided to take a simpler approach and have the Dockerfile itself modify most of the configuration files with the sed utility and also generate the startup script, as sketched below. In most cases, having a single file to track and build your system from is much easier than editing multiple files. In short, the Dockerfile should be the focal point of the container creation process for simpler administration.
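
A minimal sketch of that approach (the base image, package, and paths here are illustrative, not taken from my actual containers):

# Single-file build: sed edits the stock config in place and the
# startup script is generated inline, so only this Dockerfile needs
# to be tracked.
FROM centos:7
RUN yum -y install openssh-server && yum clean all
# Adjust the stock configuration with sed instead of copying a file in.
RUN sed -i 's/^#Port 22/Port 2222/' /etc/ssh/sshd_config
# Generate the startup script from within the Dockerfile as well.
RUN printf '#!/bin/bash\nssh-keygen -A\nexec /usr/sbin/sshd -D\n' > /start.sh && chmod +x /start.sh
CMD ["/start.sh"]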

Sometimes it is easier to upload prebuilt configuration files and additional support files for an application. For example, the Nginx configuration file is simpler to edit by hand and have Docker copy into the correct location at build time. And some things cannot be generated at all: existing SSL certificates for Nginx have to be copied into the image by the Dockerfile.

The email server was the most difficult container to create because I had to modify configuration files for many components, such as SASL, Postfix, and Dovecot, as well as create the mail users and set up the aliases for the system. Docker also needed to be cleaned often because it consumes a large amount of disk space for each container built. Dockerfiles with many commands took a long time to execute, so testing a few changes and rebuilding a container was slow. Multiply that by the many containers I made, and the many typos, and you can see how my weekend disappeared.

Once my home production environment had moved to Docker and was working well, I created a BitBucket account to store all of my Dockerfiles and MySQL backups. A cron job inside the container does a database dump to a shared external folder (see the sketch below). If my system ever dies, I can set up a new one more easily by cloning the Git repository and building the containers with a single command.
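
A crontab entry along these lines would implement the dump; the schedule, credentials, and mount point are placeholders rather than my actual setup:

# Dump all databases at 03:00 to a folder shared with the host.
# Note that cron requires % to be escaped as \%.
0 3 * * * mysqldump --all-databases -u root -pSECRET > /backup/all-$(date +\%F).sql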

Conclusion

In conclusion, Docker was hard to deploy initially, but it will save time in the future when disasters such as a system failure happen. A Dockerfile can serve as a well-documented blueprint of a perfect application or environment. For continuity purposes in organizations, Dockerfiles can be shared, and administrators should easily be able to understand what needs to be done. Even if you do not like Docker, you can copy the contents of a Dockerfile, remove a little text, and build the same system by hand.

Cloud Explorer 6.0 Released

I have been working hard on new features and GUI enhancements for Cloud Explorer 6.0. I think it is important for an application to improve over time, and this project has seen its share of visual evolution. If you look back at the 1.0 release, the changes are night and day. I enjoy the challenge of making this program look pretty for each major release and hope that users like the result. This release uses many freely licensed images and GPL images found in the KDE Desktop Environment.

Getting Cloud Explorer

Cloud Explorer is available for download from here.

Upgrading to 6.0

Starting with Cloud Explorer 5.0 or later, you can upgrade by clicking on Help -> Check for updates. After the update is complete, restart Cloud Explorer.

 

6.0 Changes:

Bug Fixes:

1. Fixed a bug when switching accounts.
2. If no region is specified, it now defaults to “defaultAWS” to avoid a null pointer exception. To resolve the issue, delete your account and add it back.
3. If no image is selected when the image button is pressed, nothing happens.
4. If no music is selected when the play button is pressed, nothing happens.
5. Support for blank lines in s3.config.
6. Versioning display fix.

Improvements:
1. The Snapshots and Migration features are now located under the new “Snapshots and Migration” menu.
2. New GUI icons.
3. The main window can now be maximized.
4. Many actions now avoid reloading buckets, for improved GUI responsiveness.

New Features:
1. Create Bucket snapshots.
2. Restore Bucket snapshots.

Screen shots of the GUI:

 

Cloud Explorer 6.0 Sneak Preview

I have been working hard on new features and GUI enhancements for Cloud Explorer 6.0. I think it is important for an application to improve over time, and this project has seen its share of visual evolution. If you look back at the 1.0 release, the changes are night and day. I enjoy the challenge of making this program look pretty for each major release and hope that users like the result. This release uses many freely licensed images and GPL images found in the KDE Desktop Environment.

If you would like to test this early release, please create an issue on GitHub and I will provide a release candidate; I can always use help with testing. You can also build the latest version of Cloud Explorer yourself. The expected release date for 6.0 is July 22, 2015.

6.0 Changes:

Bug Fixes:

1. Fixed a bug when switching accounts.
2. If no region is specified, it now defaults to “defaultAWS” to avoid a null pointer exception. To resolve the issue, delete your account and add it back.
3. If no image is selected when the image button is pressed, nothing happens.
4. If no music is selected when the play button is pressed, nothing happens.
5. Support for blank lines in s3.config.
6. Versioning display fix.

Improvements:
1. The Snapshots and Migration features are now located under the new “Snapshots and Migration” menu.
2. New GUI icons.
3. The main window can now be maximized.
4. Many actions now avoid reloading buckets, for improved GUI responsiveness.

New Features:
1. Create Bucket snapshots.
2. Restore Bucket snapshots.

Screen shots of the GUI:

 

 

 

Introducing Cloud Explorer 5.6

Introduction

I am pleased to announce v5.6 of Cloud Explorer. The biggest new feature in this release lets users record audio messages that are saved into their selected bucket. Cloud Explorer provides a great way to share audio messages because the S3 API allows users to share files via a public URL. Syncing is now more stable and intelligent: when syncing to or from a bucket, sync compares the last-modified times, and if the destination has a newer version, the file is not copied (the rule is sketched below). The text editor now has a folder list box so users can save notes into a specific folder. This is an excellent way to stay organized and use Cloud Explorer for note taking.
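
In shell terms, the sync rule behaves roughly like this (file names are placeholders; Cloud Explorer implements the comparison in Java against S3 metadata):

# Copy only when the source is newer than the destination,
# or when the destination does not exist yet.
if [ src/notes.txt -nt dst/notes.txt ]; then
    cp src/notes.txt dst/notes.txt
fi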


Getting Cloud Explorer

Cloud Explorer is available for download from here.  After downloading, please upgrade to 5.6 if you are running an earlier release.

 

Upgrading to 5.6

Starting with Cloud Explorer 5.0 or later, you can upgrade by clicking on Help -> Check for updates. After the update is complete, restart Cloud Explorer.

 

Here is a complete list of changes in v5.6:

Bug Fixes:
1. Syncing GUI and CLI: Prevented duplicate transfers of synced files in folders; items now go into the correct folder when syncing to the local machine.

2. Syncing from CLI: Syncing from S3 saves to the appropriate directory level.

Improvements:
1. Music Player: Plays WAV files, the stop button was renamed to “Stop/Close” for clarity, and MP3 file extensions are no longer case sensitive.

2. Syncing GUI and CLI: Overwrite button removed.

New Features:
1. Audio recorder.
2. Sync GUI and CLI: Timestamps are compared and the outdated files are overwritten.
3. Folder support for saving files in the Text Editor.

What is new in Cloud Explorer 5.2 ?

What is Cloud Explorer?

Cloud Explorer is an open-source S3 client. It works on Windows, Linux, and Mac, and offers both a graphical and a command-line interface on each supported operating system.

Features:

  • Search
  • Performance testing
  • Music player
  • Transition buckets to Amazon Glacier
  • Amazon RRS (Reduced Redundancy Storage)
  • Migrate buckets between S3 accounts
  • Compress files prior to upload
  • Take screen shots and save them to S3
  • Simple text editor
  • IRC client
  • Share buckets with users
  • Access shared buckets
  • View images
  • Sync folders
  • Graph CSV files and save them to a bucket

 

What is new in 5.2?

The main new feature in 5.2 is the ability to graph a CSV file from a bucket. In the screenshot below, you can see how to configure the settings for your graph.

[Screenshot: graph configuration settings]

 

Next, click “Graph” and we get the output below. The graph is saved to both the S3 bucket and the local hard drive.

 

[Screenshot: the resulting graph]

 

The text editor now has the ability to find and replace text.

 

[Screenshot: the text editor’s find-and-replace feature]

 

There are also a few minor bug fixes. Please see the release notes for more information.

 

Getting Cloud Explorer

Cloud Explorer is available for download from here.  After downloading, please upgrade to 5.2.

 

Upgrading to 5.2

Starting with Cloud Explorer 5.0 or later, you can upgrade by clicking on Help -> Check for updates. After the update is complete, restart Cloud Explorer.

Using Cloud Explorer in a build system for Cloud Explorer

I thought hard about how to make the build process easier for Cloud Explorer by using Cloud Explorer itself. Currently, a bash script puts files into place locally and compresses the program directory into a zip file. Then I have to manually upload Cloud Explorer to the S3 account for sharing. This process involves multiple steps and is tedious to do from different locations. For example, if I want to build a copy at another location, I have to manually copy the file over and then upload it with a client. There had to be an easier way.

For a more efficient solution, I added a command-line argument to Cloud Explorer that uploads a given file, with an object name of my choosing, to a specified bucket. After the upload completes, permissions are set automatically and configured for sharing.

Example:

java -jar CloudExplorer.jar build $BUILD_NAME $ZIP $BUCKET

The above command is contained in a bash script that runs Cloud Explorer to do the upload. The build argument, followed by the remaining arguments, runs the program in “Build Mode”. The $BUILD_NAME argument specifies the name of the file when stored on S3, $ZIP contains the location of the Cloud Explorer zip file, and $BUCKET specifies the bucket to use on the S3 server. The account used for the upload is the first account listed in the ~/s3.config file. A wrapper script along these lines is sketched below.
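
As a rough illustration, the wrapper might look like the following (the file names and bucket are placeholders, not my actual build script):

#!/bin/bash
set -e

BUILD_NAME="CloudExplorer-$(date +%Y%m%d).zip"
ZIP="/tmp/CloudExplorer.zip"
BUCKET="my-build-bucket"

# Package the program directory into a zip file.
zip -r "$ZIP" CloudExplorer/

# Let Cloud Explorer itself upload the build and print the signed URL.
java -jar CloudExplorer.jar build "$BUILD_NAME" "$ZIP" "$BUCKET"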

When the build script is run and the arguments are accepted, Cloud Explorer performs a parallel multipart upload of the zip file. Upon completion, the zip file is given public access along with a signed URL for simplified sharing. The signed URL is displayed in the terminal, so I can copy and paste it to my peers for download.

By adding this support to Cloud Explorer, I can run my build script and then wait a few minutes to share the build with anybody.

Cloud Explorer is located on GitHub: https://github.com/rusher81572/cloudExplorer

Installing the official NVIDIA driver on CentOS

I found it very hard to find a guide on properly disabling the nouveau driver so that I could install the Nvidia driver on CentOS 6. This should help simplify the installation.

1. Install the kernel-devel and development packages for your running kernel.

yum -y install kernel-devel-`uname -r` kernel-headers-`uname -r`
yum groupinstall "Development Tools"

2. Blacklist the nouveau driver. This must be done for the Nvidia driver to load properly: first the module is blacklisted on the system, and then the nouveau driver is omitted from the kernel's initial RAM disk image.

echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf

dracut --omit-drivers nouveau -f

3. Boot into runlevel 3 with “nomodeset” on the kernel line in grub.conf, or add it to the kernel line at the grub boot menu, as shown below. If you skip this step, you will see a black screen when you reboot.
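
For reference, a grub.conf kernel line with both options appended might look like this (kernel version and root device are illustrative):

kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_sys-lv_root rhgb quiet nomodeset 3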

4. Run the Nvidia installer.
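
The installer is the .run file downloaded from Nvidia; the exact file name depends on the driver version:

sh ./NVIDIA-Linux-x86_64-*.run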

5. Reboot.

Installing the Nvidia driver on the latest Fedora

I found it very hard to find a guide on properly disabling the nouveau driver so that I could install the Nvidia driver on Fedora 20. This should help simplify the installation.

1. Install the kernel-devel and development packages for your running kernel.

yum -y install kernel-devel-`uname -r` kernel-headers-`uname -r`
yum groups mark-install "Development Tools"
yum groups install "Development Tools"

2. Blacklist the nouveau driver. This must be done for the Nvidia driver to load properly: first the module is blacklisted on the system, and then the nouveau driver is omitted from the kernel's initial RAM disk image.

echo "blacklist nouveau" >> /usr/lib/modprobe.d/dist-blacklist.conf

dracut --omit-drivers nouveau -f

3. Boot into runlevel 3 with “nomodeset” on the kernel line: either edit the GRUB 2 configuration or add the options at the boot menu, as shown below. If you skip this step, you will see a black screen when you reboot.
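
On Fedora 20's GRUB 2 menu, press “e”, find the line beginning with linux, and append the options for a one-time boot; the edited line might look like this (kernel version and root device are illustrative):

linux /vmlinuz-3.11.10-301.fc20.x86_64 root=/dev/mapper/fedora-root ro rhgb quiet nomodeset 3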

4. Run the Nvidia installer.

5. Reboot.
