Tag Archives: disk

Thanks for the support in 2016

Most people will say that 2016 was a terrible year and that they can’t wait for 2017. I agree that 2016 was not perfect for many people, but it was a great year for linux-toys. I started this blog with the goal of inspiring people and driving that creative spark that we all have inside. In this blog post I will go over the website statistics and discuss a few of the blog entries that I thought were most influential this year.

Continue reading Thanks for the support in 2016

My Journey to Improve Disk Performance on the Raspberry Pi

I switched to GlusterFS a while ago to make container mobility easier across my Raspberry Pi Docker cluster. Gluster worked great and was easy to get up and running, but the performance was very poor: the average write speed was about 1 MB/s, which is unacceptable for a filesystem that will see a lot of writes. I decided it was time to take action and started looking at kernel parameters that could be changed.
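
For reference, a measurement like that can be reproduced with a plain dd write that syncs before reporting, so the page cache does not inflate the number (/mnt/gluster is a placeholder for your Gluster mount point):

dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=100 conv=fdatasync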

Continue reading My Journey to Improve Disk Performance on the Raspberry Pi

Running a Gluster Cluster on the Raspberry Pi with Docker

I have always been fascinated with distributed filesystems and wanted to learn more about Gluster, since it is becoming more popular in larger open-source projects. Since I have a few Raspberry Pis, I thought this was the best time to learn. This blog post will explain how to run Gluster on a two-node Raspberry Pi cluster from a Docker container; a rough sketch of the container launch follows the architecture list below.

Architecture

  1. Two Raspberry Pis (rpi-1 and rpi-2)
  2. Running a Gluster image from a local Docker registry
  3. Hostnames are resolvable via /etc/hosts on both Pis
  4. Docker 1.12.x installed
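
For context, here is a minimal sketch of how such a container might be launched. The image name and brick path are placeholders rather than the exact values from the full post; Gluster needs host networking plus extra privileges to manage its bricks from inside a container:

docker run -d --name gluster \
  --net=host --privileged \
  -v /srv/gluster/brick:/srv/gluster/brick \
  -v /etc/hosts:/etc/hosts:ro \
  registry.local:5000/gluster-armhf

Once the container is running on both Pis, the nodes can be joined from either side:

docker exec gluster gluster peer probe rpi-2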

Continue reading Running a Gluster Cluster on the Raspberry Pi with Docker

Using LVM cache on Linux

The Challenge

My home server uses a RAID 1 configuration. I was very disappointed in the performance and wanted to find a way to make it faster. Browsing the Internet one day, I came across headlines saying that CentOS 7 supports LVM cache. I found an old USB thumb drive and decided to take the cache challenge and see how it performs.

The Journey

Here is a simple dd test prior to enabling the cache:

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 6.27698 s, 167 MB/s

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 5.04032 s, 208 MB/s

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 3.41007 s, 307 MB/s

dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 2.94413 s, 356 MB/s

Average write speed: 259.5 MB/s
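
As an aside, writing to /tmp like this exercises the page cache as much as the disk; a variant that flushes the data before dd reports gives a more conservative number:

dd if=/dev/zero of=/tmp/1G bs=1M count=1000 conv=fdatasync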

Time to enable caching and try to make the system perform better:

vgextend vg /dev/sdc

lvcreate -L 1G -n cache_metadata vg /dev/sdc

lvcreate -L 8G -n cache_vol vg /dev/sdc

lvconvert --type cache-pool --poolmetadata vg/cache_metadata vg/cache_vol

lvconvert --type cache --cachepool vg/cache_vol vg/original_volume_name
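
To confirm the cache actually attached, lvs can show the cached volume with its pool and usage (a quick check, with field names from the lvs man page):

lvs -a -o name,size,pool_lv,data_percent vg

If the cache ever needs to come back out, lvconvert can flush and detach it again:

lvconvert --splitcache vg/original_volume_name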

The write results with caching enabled:

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 3.73197 s, 281 MB/s

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 1.70449 s, 615 MB/s

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 3.91247 s, 268 MB/s

# dd if=/dev/zero of=/tmp/1G bs=1M count=1000

1048576000 bytes (1.0 GB) copied, 2.18025 s, 481 MB/s

Average write speed: 411.25 MB/s

Conclusion

When I originally built this machine from used parts on Amazon, I decided to reuse two old Western Digital Green drives, which trade performance for low power usage. I had no idea they would perform so poorly in RAID 1. I was surprised and glad that a cheap USB flash drive bought me a significant increase in write performance, roughly 152 MB/s on average. I find it fascinating how the Linux ecosystem helps people recycle old junk and put it to good use. Hooray.

Improving IO performance on Linux

I purchased a new server and have been struggling with IO performance on its RAID 1 setup. I first tried RAID 5, but it was horrible. Through all my struggles, I found a few kernel and filesystem tunables that helped me out. My virtual machines, which were barely usable before, improved as well.

Filesystem Tweaks: /etc/fstab

Change your mount options to:

defaults,noatime,data=writeback,barrier=0,nobh
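
For example, a root entry with those options would look something like this (assuming the LVM root volume used later in this post):

/dev/mapper/vg-root / ext4 defaults,noatime,data=writeback,barrier=0,nobh 1 1

One catch: ext4 will not switch the data= mode on a remount of /, so for the root filesystem the journal mode has to be flagged on the filesystem itself:

tune2fs -o journal_data_writeback /dev/mapper/vg-root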

Linux kernel tunables:

sysctl -w vm.swappiness=0
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=5
sysctl -w vm.vfs_cache_pressure=200
The dirty tunables control what portion of memory may hold dirty pages before the kernel starts writing them out. I decreased the value to 5 so that less memory sits dirty and more stays free. The vfs_cache_pressure tunable tells the kernel how aggressively to reclaim the directory and inode caches; values above 100 make it reclaim them more quickly.
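
To make these settings survive a reboot, the same values can go in /etc/sysctl.conf:

vm.swappiness = 0
vm.dirty_background_ratio = 5
vm.dirty_ratio = 5
vm.vfs_cache_pressure = 200

They can then be loaded with sysctl -p.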

Convert root partition to LVM and mirror the root disk

1. Create a tar backup of your filesystem.

# tar czpf /root/redhat.tar --exclude=/var/tmp/portage/* --exclude=/root/* --exclude=/usr/portage/* --exclude=*.deb --exclude=/tmp/* --exclude=*.rpm --exclude=/sys/* --exclude=/proc/* --exclude=/dev/* --exclude=/mnt/* --exclude=/media/* --exclude=/home/* --exclude=/var/lib/libvirt/images/* --exclude=/oracle/* --exclude=redhat.tar /
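
A quick sanity check that the archive is readable before moving on:

# tar tzf /root/redhat.tar | head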

2. Use fdisk to create a /boot partition and an LVM partition on the new disk.

Device Boot Start End Blocks Id System
/dev/sda1 * 1 100 803218+ 83 Linux
/dev/sda2 101 121601 975956782+ 8e Linux LVM
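
If you would rather script this step than answer fdisk prompts, a rough parted equivalent of that layout looks like this (the boundaries approximate the sizes above):

# parted /dev/sda mklabel msdos
# parted /dev/sda mkpart primary ext4 1MiB 800MiB
# parted /dev/sda mkpart primary 800MiB 100%
# parted /dev/sda set 2 lvm on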

3. Set /dev/sda1 to be bootable.

# parted /dev/sda set 1 boot on

4. Create the new LVM volumes and filesystems:

# pvcreate /dev/sda2
# vgcreate vg /dev/sda2
# lvcreate -L 200G -n root vg
# mkfs.ext4 /dev/vg/root
# mkfs.ext4 /dev/sda1
# mount /dev/vg/root /mnt
# mount /dev/sda1 /mnt/boot

5. Extract the tar file to /mnt

# tar xpf /root/redhat.tar -C /mnt/

6. Modify the following files:

/mnt/boot/grub/menu.lst

Modify the kernel line to support LVM by adding the following LVM details:

rd_LVM_VG=vg rd_LVM_LV=root

Also ensure that the kernel and initrd paths do not include /boot/; with a separate /boot partition, GRUB resolves those paths relative to that partition.

Example:

kernel /vmlinuz-2.6.32-279.2.1.el6.x86_64 ro root=/dev/mapper/vg-root LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=128M rd_LVM_VG=vg rd_LVM_LV=root rhgb quiet

initrd /initramfs-2.6.32-279.2.1.el6.x86_64.img

/mnt/etc/fstab:

Update both entries for the new disk; only / moves to LVM, while /boot stays on the plain partition:

/dev/sda1 /boot ext4 defaults 0 0
/dev/mapper/vg-root / ext4 defaults 1 1

7. Mount and configure the new environment:

# mount /dev/vg/root /mnt
# mount /dev/sda1 /mnt/boot
# mount -o bind /sys /mnt/sys
# mount -o bind /dev /mnt/dev
# mount -o bind /proc /mnt/proc
# grep -v rootfs /proc/mounts > /mnt/etc/mtab

Modify /mnt/etc/mtab and add:

/dev/sda1 /boot ext4 rw 0 0

# chroot /mnt

8. Install GRUB and reconfigure the ram disk image:

# grub-install --recheck /dev/sda
# dracut --force
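
To double-check that the rebuilt image picked up LVM support, dracut ships an lsinitrd tool that lists the initramfs contents:

# lsinitrd /boot/initramfs-2.6.32-279.2.1.el6.x86_64.img | grep -i lvm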

9. Unmount and reboot:

Type exit to leave the chroot environment, then:
# cd /
# umount /mnt/*
# umount /mnt
# reboot

Set your system to boot from the new disk (/dev/sda).

10. Initialize and format your original boot disk.

Partition it just like we did /dev/sda: one bootable partition for /boot and one Linux LVM partition.

Device Boot Start End Blocks Id System
/dev/sdc1 * 1 100 803218+ 83 Linux
/dev/sdc2 101 121601 975956782+ 8e Linux LVM

11. Add /dev/sdc2 to the volume group.
# pvcreate /dev/sdc2
# vgextend vg /dev/sdc2

12. Format the boot partition on the drive and set it bootable:
# mkfs.ext4 /dev/sdc1
# parted /dev/sdc set 1 boot on

13. Mirror the root logical volume:

# lvconvert -m1 /dev/vg/root
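
One gap worth closing: lvconvert mirrors only the LVM root, so /boot on the second disk still has to be populated by hand and GRUB installed there, or the machine cannot boot if /dev/sda dies. A sketch of that last step:

# mount /dev/sdc1 /mnt
# cp -a /boot/* /mnt/
# umount /mnt
# grub-install --recheck /dev/sdc

Mirror sync progress can be watched with:

# lvs -o name,copy_percent vg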