EMC Celerra Optimizations for VMware on NFS

Original post: http://blog.scottlowe.org/2010/01/31/emc-celerra-optimizations-for-vmware-on-nfs/

Turn on the uncached write mechanism for NFS file systems used as VMware datastores. This can significantly improve performance for VMDKs on NFS, but it isn’t the default setting. From the Control Station, you can use this command to turn on the uncached write mechanism:
server_mount <data mover name> -option <options>,uncached <file system name> <mount point>
Be sure to review pages 99 through 101 of the VMware on Celerra best practices document for more information on the uncached write mechanism and any considerations for its use.
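For example, a minimal sketch assuming a Data Mover named server_2 and a file system fs_vmware01 mounted read/write at /fs_vmware01 (the names are illustrative; keep your existing mount options and just add uncached):

server_mount server_2 -option rw,uncached fs_vmware01 /fs_vmware01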

VMware and load balancing NFS trunks

Straight from the post linked below, this is currently the best way to load balance your NFS datastores… no MPIO magic here, unfortunately.

http://communities.vmware.com/message/1466595#1466595

Basically, you can set up IP aliases on the NFS side and then set up multiple connections (one unique IP per datastore) on each ESX host. This works well if you are using a team of NICs running IP-hash load balancing…

Static EtherChannel.

My setup is as follows:

ESXi 4.0 U1, Cisco 3750 Switches, and NetApp NFS on the storage side.

I have a total of 8 NICs. I divided the NICs into 3 groups.

2 NICs on vSwitch0 for Mgmt & vMotion
3 NICs on vSwitch1 for VMs (multiple port groups, 3 VLANs)
3 NICs on vSwitch2 for IP Storage (mostly NFS, a little iSCSI)
(On vSwitch3 I also have a VM port group for iSCSI access from within the VMs)

Since I have 3 NICs on my IP storage vSwitch, I needed a way to utilize all three NICs rather than have the server use just one for ingress and egress traffic. This was done by:

Setting up a static EtherChannel on the Cisco switch (Port Channel).
Configuring the Cisco switch for IP hash load balancing.
Configuring the vSwitch to “Route based on IP Hash” as well.
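A rough sketch of the switch-side config, assuming a Cisco 3750 with the three IP storage uplinks on Gi1/0/10 to Gi1/0/12 in VLAN 100 (port numbers and VLAN are illustrative):

port-channel load-balance src-dst-ip
!
interface range GigabitEthernet1/0/10 - 12
 switchport mode access
 switchport access vlan 100
 channel-group 1 mode on

The matching “Route based on IP hash” policy is then set in the vSphere Client under the vSwitch NIC Teaming properties.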

The next part is to create multiple datastores on the NFS device. Each of my NFS datastores is about 500GB in size. The reason for this is that my larger LUNs are iSCSI and are accessed directly from the VMs using the Microsoft iSCSI initiator inside the VM itself.
My NetApp NAS has an address of, let’s say, 192.168.1.50, so all my datastores are accessible using the address “\\192.168.1.50\NFS-Store#”. This is not ideal, as the ESX box and the Cisco switch will always use the same NIC/port to access the NAS device; the algorithm (IP hash) decides which link the traffic goes over based on the source and destination IPs. To resolve the issue, I added IP aliases to the NFS box. NetApp allows multiple IP addresses to point to the same NFS export, and I suspect EMC does the same. So I added 2 aliases, .51 and .52, and now my NFS datastores are accessible via IP addresses 192.168.1.50, .51, and .52.
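On a NetApp filer running Data ONTAP 7-mode, the aliases can be added with ifconfig; a sketch, assuming the storage interface is e0a (the interface name is illustrative):

ifconfig e0a alias 192.168.1.51 netmask 255.255.255.0
ifconfig e0a alias 192.168.1.52 netmask 255.255.255.0

Add the same lines to /etc/rc on the filer so the aliases survive a reboot.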

So I went ahead and added the datastores to the ESX box using the multiple IP addresses:

Datastore1 = \\192.168.1.50\NFS-Store1
Datastore2 = \\192.168.1.51\NFS-Store2
Datastore3 = \\192.168.1.52\NFS-Store3

If you have more datastores, the pattern just repeats: Datastore4 = \\192.168.1.50\NFS-Store4, and so on…
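The same mounts can also be added from the ESX(i) command line with esxcfg-nas; a sketch, assuming the exports are published as /vol/NFS-Store1 and so on (the export paths are illustrative):

esxcfg-nas -a -o 192.168.1.50 -s /vol/NFS-Store1 Datastore1
esxcfg-nas -a -o 192.168.1.51 -s /vol/NFS-Store2 Datastore2
esxcfg-nas -a -o 192.168.1.52 -s /vol/NFS-Store3 Datastore3
esxcfg-nas -l

The last command lists the mounted NFS datastores so you can confirm each one is using a different server IP.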

With multiple datastores, each with its own address, the 3 NICs on the ESX box dedicated to IP storage all get utilized. This does not aggregate bandwidth for a single connection, but it does use all three links to send and receive packets. So the fastest speed you will get is, theoretically, 1 Gbit each way per datastore, but it is better than trying to cram all the traffic over 1 NIC.

I also enabled jumbo frames on the vSwitch as well as the VMkernel NIC for IP storage (need the best performance!).
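On ESX(i) 4.0 jumbo frames are enabled from the command line; a minimal sketch, assuming the IP storage vSwitch is vSwitch2 with a VMkernel port group called IPStorage and an illustrative VMkernel IP:

esxcfg-vswitch -m 9000 vSwitch2
esxcfg-vmknic -a -i 192.168.1.60 -n 255.255.255.0 -m 9000 IPStorage
esxcfg-vmknic -l

On this release the MTU generally has to be set when the VMkernel NIC is created, and jumbo frames also need to be enabled end to end on the physical switch ports.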
I should mention that your NFS storage device should have EtherChannel set up on it as well. Otherwise, you’ll be in the same boat, just on the other end of it.

Hope it helps!

Larry B.

I should also mention that you should not use different addresses to access the same NFS share (datastore). It is not supported and may cause issues.

ZFS compression and latency

Since I’m using ZFS as storage via NFS for some of my VMware environments, I need to ensure that latency on my disks is reduced wherever possible.

There is a lot of talk about ZFS compression being “faster” than an uncompressed pool due to less physical data being pulled off the drives. This of course depends on the system powering ZFS, but I wanted to run some tests focused specifically on latency. Throughput is fine in some situations, but latency is a killer when it comes to lots of small reads and writes (as is the case when hosting virtual machines).

I recently completed some basic tests focusing on the differences in latency when ZFS compression (lzjb) is enabled or disabled. IOMeter was my tool of choice, and I hit my ZFS box via a mapped drive.
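For reference, ZFS compression is toggled per dataset, e.g. (the dataset name tank/vmstore is illustrative):

zfs set compression=lzjb tank/vmstore
zfs set compression=off tank/vmstore
zfs get compression,compressratio tank/vmstore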

I’m not concerned with the actual figures, but with the difference between them.

I have run the test multiple times (to eliminate caching as a factor) and can confirm that compression (on my system, anyhow) increases latency.

Basic results from an “All in one” test suite (similar results across all my tests):

ZFS uncompressed:

IOps : 2376.68
Read MBps : 15.14
Write MBps : 15.36
Average Response Time : 0.42
Average Read Response Time : 0.42
Average Write Response Time : 0.43
Average Transaction Time : 0.42

ZFS compressed: (lzjb)

IOps : 1901.82
Read MBps : 12.09
Write MBps : 12.28
Average Response Time : 0.53
Average Read Response Time : 0.44
Average Write Response Time : 0.61
Average Transaction Time : 0.53

As you can see from the results, the average write response time especially is much higher with compression enabled. I wouldn’t recommend using ZFS compression where latency is a large factor (such as hosting virtual machines).

Note: under all the tests performed, the CPU (dual core) on the ZFS box was never at 100%, eliminating that as a bottleneck.

OpenSolaris: Citrix XenServer / ESX – Hooking into ZFS

To share your ZFS pool via NFS (in a way that works with Citrix XenServer / ESX) to a host called “esxhost”:

zfs set sharenfs=rw,nosuid,root=esxhost tank/nfs

Note: the host name MUST be resolvable from the OpenSolaris box, i.e. you should be able to ping it by name. I have tried with IPs only and it will fail. I edited the /etc/hosts file to include the following line for my config:

# Copyright 2007 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# ident "%Z%%M% %I% %E% SMI"
#
# Internet host table
#
192.168.9.120 esxhost

This also requires that you are using both files and DNS in your /etc/nsswitch.conf file. You should have lines like so:

# You must also set up the /etc/resolv.conf file for DNS name
# server lookup. See resolv.conf(4). For lookup via mdns
# svc:/network/dns/multicast:default must also be enabled. See mdnsd(1M)
hosts: files dns mdns

# Note that IPv4 addresses are searched for in all of the ipnodes databases
# before searching the hosts databases.
ipnodes: files dns mdns

I’ve also run this beforehand (to allow full access to the export):

chmod -R 777 /tank/nfs
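A quick way to confirm the export is visible (run on the OpenSolaris box):

zfs get sharenfs tank/nfs
showmount -e localhost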

Update: check this guide: http://blog.laspina.ca/ubiquitous/running-zfs-over-nfs-as-a-vmware-store

Update 2: there are known issues with waiting for sync when using NFS and ZFS together… There are reasons why you shouldn’t do this, but in a test environment, disabling sync at the ZFS level may help performance (zfs set sync=disabled).
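For example, on the dataset used above (test environments only; this trades data safety on power loss for lower latency):

zfs set sync=disabled tank/nfs
zfs get sync tank/nfs

Reverting is just zfs set sync=standard tank/nfs, which is the default.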

I like this idea of splitting up your SSD too… again, no problems in a test environment, but in production I would dedicate the entire drive to the task: https://blogs.oracle.com/ds/entry/make_the_most_of_your

Using W2k3 R2 / W2k8 Server as an NFS share for VMware

This is something I do in the lab so all of my VMs are able to access ISOs etc. (very handy for quick builds). Doing it through Windows is quite handy, since it’s easy enough to set up a Windows network share to the same location and update the various files via that.

This site has some good clear instructions : http://vmetc.com/2008/02/19/create-a-nfs-share-for-vm-iso-files-with-windows-2003-server-r2/

  1. On the Windows 2003 Server make sure “Microsoft Services for NFS” is installed. If not, you need to add it under Add/Remove Programs, Windows Components, Other Network File and Print Services
  2. Next, go to the folder you want to share, right-click on it, and select Properties
  3. Click on the NFS Sharing tab and select “Share this Folder”
  4. Enter a Share Name and check “Anonymous Access” (a command-line alternative for steps 3 and 4 is sketched after this list)
  5. In VirtualCenter, select your ESX server and click the “Configuration” tab and then select “Storage”
  6. Click on “Add Storage” and select “Network File System” as the storage type
  7. Enter the Windows Server name, the folder (share) name and a descriptive Datastore Name
  8. Done. Now you can map CD ISOs to your various VMs.
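Steps 3 and 4 can also be done with the nfsshare utility that ships with Microsoft Services for NFS; a rough sketch, assuming the ISOs live in C:\ISO and the share is called ISO (both names are illustrative, and the exact option syntax may vary by Windows version):

nfsshare ISO=C:\ISO -o anon=yes
nfsshare

Running nfsshare with no arguments lists the existing NFS shares so you can confirm the share took.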

 

The process is similar in Windows Server 2008; screenshots of the settings are below.