ZFS compression and latency

Since I'm using ZFS as storage via NFS for some of my VMware environments, I need to ensure that latency on my disks is reduced wherever possible.

There is a lot of talk about ZFS compression being “faster” than a non-compressed pool due to less physical data being pulled off the drives. This of course depends on the system powering ZFS, but I wanted to run some tests focusing specifically on latency. Throughput is fine in some situations, but latency is a killer when it comes to lots of small reads and writes (as is the case when hosting virtual machines).

I recently completed some basic tests focusing on the differences in latency when ZFS compression (lzjb) is enabled or disabled. IOMeter was my tool of choice, and I hit my ZFS box via a mapped drive.
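
For anyone wanting to repeat this, compression is just a per-dataset property, so it's easy to toggle between runs. The dataset name here is only an example from my own setup:

zfs set compression=lzjb tank/nfs
zfs set compression=off tank/nfs
zfs get compression tank/nfs

Keep in mind the property only applies to newly written blocks, so the test files need to be rewritten after toggling it.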

I'm not concerned with the actual figures, but with the difference between them.

I have run the test multiple times (to eliminate caching as a factor) and can validate that compression (on my system anyhow) increases latency.

Basic results from an “All in one” test suite… (similar results across all my tests)

ZFS uncompressed:

IOps : 2376.68
Read MBps : 15.14
Write MBps : 15.36
Average Response Time (ms) : 0.42
Average Read Response Time (ms) : 0.42
Average Write Response Time (ms) : 0.43
Average Transaction Time (ms) : 0.42

ZFS compressed: (lzjb)

IOps : 1901.82
Read MBps : 12.09
Write MBps : 12.28
Average Response Time (ms) : 0.53
Average Read Response Time (ms) : 0.44
Average Write Response Time (ms) : 0.61
Average Transaction Time (ms) : 0.53

As you can see from the results, the average write response time in particular is much higher with compression enabled (roughly 40% higher, while IOps drop by about 20%). I wouldn't recommend using ZFS compression where latency is a large factor (virtual machines).

Note: under all the tests performed, the CPU (dual core) on the ZFS box was never at 100% – eliminating that as a bottleneck.
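
If you do decide the latency trade-off is worth it, it's worth checking what compression is actually buying you. Again, the dataset name is just an example:

zfs get compressratio tank/nfs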

ESX 4 (vSphere) – installation issues

I've had a couple of installation issues with ESX 4.

One was particularly strange: the install process would freeze every now and again until I pressed a key on the keyboard. This was related to AMD's C1 power-saving mode – disable it in the BIOS to fix the problem.

The second was a problem with my NIC. Essentially I didn't have a supported card (Realtek); the installer throws an ambiguous error (an lvmdriver error) and cans the whole install. After inserting an Intel card all was well.

It was nice to see that my PATA drive was supported – I've been using it for quick tests but was unsure if it was supported in vSphere. Note: I've got a flash card enclosure that I'm planning on using in the future, just awaiting the card (hopefully this saves me some power).

Citrix XenServer 5.5 – First impressions

I had decided to try out Citrix XenServer at home, since I work a lot with VMware during my working week and felt like a change. It all seemed well… that is, until I had to deal with snapshots. I suppose I have taken for granted that almost all other virtual host software provides a simple “revert to snapshot” option. From what I can tell this is totally absent from Citrix XenServer 5.5.

There are comments from within Citrix that they are working on this as a feature, but it has yet to come to fruition. Unfortunately, with the type of work I do (testing / proof of concept etc.) this is a deal breaker. Looks like I'm going to have to try out vSphere at home (currently only using 3.5 at work).

At least vSphere has thin provisioning now, so there's nothing (feature-wise) I'll be missing from Citrix's XenServer.

Updated: 28/07/2009

I've actually got no choice but to stay with Citrix XenServer for now – it looks like the SATA controller and network chip on my motherboard are not supported by either ESX 3.5 U4 or vSphere. Doh! (I should have checked the HCL, but sometimes I just like to try my luck.)

OpenSolaris: Citrix XenServer / ESX – Hooking into ZFS

To share your ZFS pool via NFS (in a way that works with Citrix XenServer / ESX) to a host called “esxhost”:

zfs set sharenfs=rw,nosuid,root=esxhost tank/nfs
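
To confirm the share is active you can check the property and the list of current shares on the OpenSolaris box (same example dataset as above):

zfs get sharenfs tank/nfs
share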

Note: you MUST have a resolvable name from the OpenSolaris box, i.e. you should be able to ping the host by name. I have tried with IPs only and it will fail. I have edited the /etc/hosts file to include the following line for my config:

# Copyright 2007 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# ident "%Z%%M% %I% %E% SMI"
#
# Internet host table
#
192.168.9.120 esxhost
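
A quick sanity check that the name actually resolves from the OpenSolaris box (hostname and IP are just my example values above):

getent hosts esxhost
ping esxhost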

This also requires that you are using both files and DNS in your /etc/nsswitch.conf file. You should have lines like the following:

# You must also set up the /etc/resolv.conf file for DNS name
# server lookup. See resolv.conf(4). For lookup via mdns
# svc:/network/dns/multicast:default must also be enabled. See mdnsd(1M)
hosts: files dns mdns

# Note that IPv4 addresses are searched for in all of the ipnodes databases
# before searching the hosts databases.
ipnodes: files dns mdns

I've also run this beforehand (to allow full access):

chmod -R 777 /tank/nfs

Update: check this guide: http://blog.laspina.ca/ubiquitous/running-zfs-over-nfs-as-a-vmware-store

Update 2: there are known issues with waiting for syncs when using NFS and ZFS together… There are reasons why you shouldn't do this, but in a test environment disabling sync at the ZFS level may help performance (zfs set sync=disabled).
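
For a test dataset that might look like the following (the dataset name is only an example, and since disabling sync risks losing recent writes on power failure, don't do this in production):

zfs set sync=disabled tank/nfs
zfs get sync tank/nfs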

I like this idea of splitting up your SSD too… again, in a test environment no problems; in production I would dedicate the entire drive to a single task: https://blogs.oracle.com/ds/entry/make_the_most_of_your
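
The rough idea from that post is to carve the SSD into slices and hand one to the pool as a separate log device (ZIL) and another as cache (L2ARC). A sketch with made-up pool and slice names would be:

zpool add tank log c4t1d0s0
zpool add tank cache c4t1d0s1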

Using W2k3 R2 / W2k8 Server as an NFS share for VMware

This is something I do in the lab so all of my VMs are able to access ISOs etc. (very handy for quick builds). Doing it through Windows is convenient since it's easy enough to set up a Windows network share to the same location and update the various files via that.

This site has some good, clear instructions: http://vmetc.com/2008/02/19/create-a-nfs-share-for-vm-iso-files-with-windows-2003-server-r2/

  1. On the Windows 2003 Server make sure “Microsoft Services for NFS” is installed. If not, you need to add it under Add/Remove Programs, Windows Components, Other Network File and Print Services
  2. Next go to the folder you want to share, right-click on it and select Properties
  3. Click on the NFS Sharing tab and select “Share this Folder”
  4. Enter a Share Name and check “Anonymous Access”
  5. In VirtualCenter, select your ESX server, click the “Configuration” tab and then select “Storage”
  6. Click on “Add Storage” and select “Network File System” as the storage type
  7. Enter the Windows Server name, the folder (share) name and a descriptive Datastore Name
  8. Done. Now you can map CD ISOs to your various VMs. (If you prefer the command line, there's a sketch of the equivalent after this list.)
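
Steps 5–7 can also be done from the ESX service console using esxcfg-nas; the server, share and datastore names below are just placeholders:

esxcfg-nas -a -o w2k3server -s /ISOs ISO_Datastore
esxcfg-nas -l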

 

The setup is similar in Windows 2008 – screenshots of the settings are below…