P4000 / Lefthand and Windows DSM (MPIO)

Steps to set up MPIO round robin and the DSM with P4000 / Lefthand nodes

This example uses Windows 2008 with 2 x 1Gbit NICs.

  1. Install the HP DSM driver onto Windows 2008 (this should also install the MPIO feature on Windows 2008)
  2. Set up 2 x IPs on the Windows host on the storage subnet (used for MPIO)
    1. Confirm you can ping the target from both (enable one at a time, and ensure there is no routing on your other NICs)
  3. Provision storage and allow the Windows server read / write access (via initiator name)
  4. Open the iSCSI initiator on Windows 2008
  5. Enter the IP of the target on the Discovery tab
  6. On the Targets tab confirm that the iSCSI drive is presented to the host
  7. Click "Connect" and tick both the auto-connect and MPIO options
    1. Click Advanced, choose the MS iSCSI initiator, select the first local IP and the target IP, then click OK / OK
    2. Click Advanced, choose the MS iSCSI initiator, select the second local IP and the target IP, then click OK / OK
    3. Repeat the above for each additional NIC
  8. Confirm via "Devices" that there are as many targets per disk as you have NICs
    1. Within Devices choose MPIO and change the policy from vendor specific to "round robin" (see the CLI sketch after this list for a way to check the result)
      1. Note: read / write access to the LUN is required when using the "round robin" MPIO policy, as opposed to the default "vendor specific" policy which also works with read-only access. Otherwise you will get a "not supported" error
    2. Repeat the above for each "device"
  9. Confirm on the Lefthand / P4000 CMC that the LUN has one connection per initiator NIC, and that each connection also has its DSM children (visible in the CMC if working).
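
If you prefer to double-check the MPIO side from the Windows command line rather than the GUI, something like the following should work (a rough sketch only; mpclaim should be available once the MPIO feature is installed and iscsicli is built in, but the exact output depends on the HP DSM):

mpclaim -s -d          (lists the MPIO disks and the load balance policy in use)
mpclaim -s -d 0        (shows the individual paths for MPIO disk 0; expect one per initiator NIC)
iscsicli SessionList   (shows the active iSCSI sessions)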

Confirm that the connections are as expected…

Run some disk benchmark utilities (Iometer, for example) and check that traffic is travelling over all the NICs you set up above. You can just use the built-in Windows Task Manager to do this.
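
If you want actual numbers rather than the Task Manager graphs, the built-in typeperf tool will show per-NIC throughput while the benchmark runs (a quick sketch; the wildcard picks up every interface, so just watch the two storage NICs):

typeperf "\Network Interface(*)\Bytes Total/sec" -si 1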

Check that the right number of connections show up in the CMC for that particular LUN and initiator. So if you had 3 nodes and 2 initiator NICs you would actually have 8 active connections in total: 1 per NIC (2) plus an additional connection from every NIC to each node (2 x 3 = 6).

Note: there are some reported issues with the DSM and data corruption. Although I have not seen this myself, please be diligent when it comes to data backup, especially when production data is involved.

VMware and load balancing NFS trunks

Straight from the post below, this is currently the best way to load balance your NFS datastores… no MPIO magic here, unfortunately.

http://communities.vmware.com/message/1466595#1466595

Basically you can set up IP aliases on the NFS side and then set up multiple connections (each using a unique IP) per datastore on each ESX host. This works well if you are using a team of NICs running IP-hash load balancing…

Static EtherChannel.

My setup is as follows:

ESXi 4.0 U1, Cisco 3750 Switches, and NetApp NFS on the storage side.

I have a total of 8 nics. I divided the nics into 3 groups.

2 nics on vSwitch0 for Mgmt & vMotion
3 nics on vSwitch1 for VM’s (Multiple port groups (3 VLANS))
3 nics on vSwitch2 for IP Storage (Mostly NFS, a little iSCSI)
(On vSwitch3 I also have a VM port group for iSCSI access from within the VMs)

Since I have 3 nics on my IPStorage port group I needed a way to utilize all three nics and not have the server just use one for ingress and egress traffic. This was done by the following (a rough switch-side sketch follows the list):

Setting up a static EtherChannel on the Cisco switch (Port Channel).
Configuring the Cisco switch to load balance using IP hash.
Configuring the vSwitch to "Route based on IP hash" as well.
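
On the switch side that works out to something like this (a sketch only; the port-channel number and the interface range are made up, so adjust for your own 3750 port layout):

! hash on source and destination IP, to match the vSwitch setting
port-channel load-balance src-dst-ip
!
! bundle the three ESX-facing ports into a static EtherChannel (mode on = no PAgP/LACP)
interface range GigabitEthernet1/0/1 - 3
 channel-group 1 mode on

The matching "Route based on IP hash" setting on the vSwitch is what keeps both ends hashing traffic the same way.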

The next part is to create multiple datastores on the NFS device. Each of my NFS datastores is about 500GB in size. The reason for this is that my larger LUNs are iSCSI and are accessed directly from the VM using the MS iSCSI initiator inside the VM itself.
My NetApp NAS has an address of, let's say, 192.168.1.50, so all my datastores are accessible using the address "\\192.168.1.50\NFS-Store#". On its own this is not very useful, because the ESX box and the Cisco switch will always use the same nic/port to reach the NAS device; that comes down to the algorithm (IP hash) used to decide which link traffic goes over. So to resolve the issue I added IP aliases to the NFS box. NetApp allows you to have multiple IP addresses pointing to the same NFS export, and I suspect EMC would do the same. I added 2 aliases, .51 and .52, so now my NFS datastores are accessible via 192.168.1.50, .51, and .52.
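
For reference, on a 7-mode NetApp filer the aliases are a one-liner each (the interface name e0a is an assumption; use whichever interface serves NFS, and add the same lines to /etc/rc so they survive a reboot):

ifconfig e0a alias 192.168.1.51 netmask 255.255.255.0
ifconfig e0a alias 192.168.1.52 netmask 255.255.255.0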

So I went ahead and added the datastores to the ESX box using the multiple IP addresses (a command-line equivalent is sketched after the list):

Datastore1 = \\192.168.1.50\NFS-Store1
Datastore2 = \\192.168.1.51\NFS-Store2
Datastore3 = \\192.168.1.52\NFS-Store3

If you have more datastores it’ll just repeat: Datastore4 = \\192.168.1.50\NFS-Store4 and so on…
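
From the ESX(i) 4 console the same mounts can be added with esxcfg-nas; this is just a sketch, and the /vol/... export paths are assumptions, so use whatever your filer actually exports:

esxcfg-nas -a -o 192.168.1.50 -s /vol/NFS-Store1 NFS-Store1
esxcfg-nas -a -o 192.168.1.51 -s /vol/NFS-Store2 NFS-Store2
esxcfg-nas -a -o 192.168.1.52 -s /vol/NFS-Store3 NFS-Store3
esxcfg-nas -l          (lists the mounts so you can confirm each datastore uses a different address)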

With multiple datastores and a separate address for each, the 3 nics on the ESX box dedicated to IP storage all get utilized. It does not aggregate the bandwidth, but it does use all three to send and receive packets. So the fastest speed you will get for any one traffic flow is, theoretically, 1Gbit each way, but it is better than trying to cram all the traffic over 1 nic.

I also enabled Jumbo Frames on the vSwitch as well as the vmNic for IP-Storage (need the best performance!).
I should mention that your NFS storage device should have EtherChannel set up on it as well. Otherwise, you'll be in the same boat, just on the other end of it.

Hope it helps!

Larry B.

I should mention that you should not use different addresses to access the same NFS share (datastore). It is not supported and may cause you issues.

vSphere and Multipathing iSCSI

This is just a quick reference for creating a multipathing iSCSI setup…

Create two VMkernel port groups for iSCSI, one called "iSCSI-1" and the other called "iSCSI-2" (and so on if you have more NICs)

Then, per VMkernel port group, ensure that only one of the NICs is active. For the "iSCSI-1" port group, configure it to override the vSwitch settings and move nic 0 to active and nic 1 to unused. For the "iSCSI-2" port group, configure it to override the vSwitch settings and move nic 1 to active and nic 0 to unused.
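
For reference, the port groups and their VMkernel interfaces can also be created from the console with the esxcfg commands below; this is a sketch only, and the vSwitch name, IPs and netmask are assumptions (the per-port-group active/unused override still has to be set in the vSphere Client as described above):

esxcfg-vswitch -A iSCSI-1 vSwitch2          (add the iSCSI-1 port group to the iSCSI vSwitch)
esxcfg-vswitch -A iSCSI-2 vSwitch2
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 iSCSI-1          (VMkernel interface for iSCSI-1)
esxcfg-vmknic -a -i 10.0.0.12 -n 255.255.255.0 iSCSI-2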

Now you have to run some esxcli commands to tie things together…. Alt-F1 on the ESXi console and type "unsupported" followed by your root password. The following are the commands I had to run to get both my port groups working together on the iSCSI HBA…  (you can check your vmk numbers from the networking config screen)

esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
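
To confirm the bindings took, you can list the NICs attached to the adapter (vmhba33 is just my software iSCSI adapter; yours may be numbered differently):

esxcli swiscsi nic list -d vmhba33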

Then, for each target, change the path selection policy to "Round Robin".
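
If you would rather set the policy per device from the command line, something along these lines should work on ESX(i) 4 (the naa. identifier is a placeholder; get the real one from the device list):

esxcli nmp device list          (shows each device and its current path selection policy)
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR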

Go back to storage adapters and click “rescan”

If you want all future iSCSI targets to automatically use round robin you must also run the following from the command line…  (this is for our HP Lefthand; your "storage array type" may be different, and it's listed under your target details). This basically sets round robin as the default for this type of array. In general you should do this before presenting any LUNs etc, otherwise you may have to bounce your box.

esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR

Update: Hardware iSCSI NICs

The process for hardware iSCSI initiators is similar to the above, but you assign a single VMkernel port per NIC. To find which NIC belongs to which iSCSI initiator you must run this command from the CLI:

esxcli swiscsi vmnic list -d vmhba#

vmhba# is the name of the iSCSI adapter.