VMware – measuring iSCSI write performance

I picked this trick up from VMware support. If you’ve got your iSCSI all set up, you can drop to the shell (either SSH or the console) and do this to measure your average write throughput.

time vmkfstools -c 10G /vmfs/volumes/san_vmfs/my_vm/fat_disk.vmdk -d eagerzeroedthick

Try a larger disk if this is too quick (free space permitting).

Essentially this tells the host to create an eager-zeroed (fat) disk at the location above. The time command records how long it takes to execute, and then you can use your maths skills to work out the transfer rate…
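As a rough worked example (the duration here is an assumption, not a measured figure): if time reports that creating the 10G disk took 1m40s of “real” time, then 10 × 1024 MB ÷ 100 s ≈ 102 MB/s average write throughput.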

While this is happening you can open another SSH session, run esxtop, then press “d” and watch the (d)isk throughput on the console. Pressing “v” will show you stats per (v)irtual machine.
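Once you’re done testing, it’s worth deleting the test disk so it doesn’t keep taking up datastore space. Assuming the same path as above, vmkfstools can remove it:

vmkfstools -U /vmfs/volumes/san_vmfs/my_vm/fat_disk.vmdk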

Hyper-V to ESX conversion

V2V cold clone process for SBS / Exchange / SQL / DCs etc. (works in vSphere)

Steps to convert a Hyper-V guest to a VMware guest

  • Remove the Hyper-V Integration Services while the Hyper-V guest is running (if possible).
  • Note down the NIC networking details first.
  • Cleanly shut down the Hyper-V guest.
  • Use StarWind Converter (free) to convert the VHD to a “dynamically growing VMware” image and use “IDE” as the type. Note that “SCSI” does NOT work if the source image does not have the LSI / BusLogic drivers, so just use the default options – dynamic VMware and IDE.
  • Create the VMware guest shell (a virtual machine equivalent to the Hyper-V specs, but without the disks) – remove all hard drives; the SCSI controller should also disappear.
  • Upload the VMDK(s) to the same datastore as the virtual machine and attach them to the guest in the right order (see the vmkfstools note after this list). They should be detected as “IDE”.
  • Start the new VMware guest.
  • If you could not remove Integration Services (i.e. a hot clone), then you MUST disable the Hyper-V services immediately.
  • Install VMware Tools – reboot.
  • Remove the hidden NIC and other orphaned devices, and reconfigure the network card as per the original.
  • Windows may need to be re-activated – do this.
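If the uploaded VMDK is still in a hosted (sparse) format that the host refuses to attach or power on, one option is to clone it into a VMFS-native format with vmkfstools on the host and attach the cloned copy instead. This is only a sketch – the paths and file names below are hypothetical:

vmkfstools -i /vmfs/volumes/datastore1/my_vm/uploaded.vmdk /vmfs/volumes/datastore1/my_vm/converted.vmdk -d zeroedthick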

Note: converting a machine to use an IDE controller will limit its performance!
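If that IDE performance hit becomes a problem later, one commonly described workaround (not part of the process above, so treat it as a sketch and back up the VM first) is to switch the disk to SCSI once the LSI Logic driver is present in the guest: shut the VM down, edit the small .vmdk descriptor file, then reattach the disk to an LSI Logic SCSI controller in the VM’s settings.

# in the disk’s .vmdk descriptor file (not the -flat.vmdk), change:
ddb.adapterType = "ide"
# to:
ddb.adapterType = "lsilogic"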

VMware “Another task is already in progress” error

Straight from: http://community.spiceworks.com/how_to/show/662

I’m using ESXi 4.0 and I was facing the “Another task is already in progress” error – practically, the VM could not be used at all (power on, restart or even force off).

In the ESXi 4.0 SSH console, using the command “service mgmt-vmware restart” will do no good :-(

The solution is therefore to use the services.sh restart command. It’s quite simple and it doesn’t kill the VM processes currently running in production; the only thing affected is the VCB backup, which failed when that command was issued. Finally, in your vCenter console, right-click the ESXi host on which you ran the command and then click Reconnect.

So, on the ESXi box hosting the troubled VM, run:

services.sh restart

Then reconnect to that ESX box via the VI Client. It shouldn’t affect any of the other virtual machines that are already running.
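On later ESXi builds (4.1 and newer – I haven’t confirmed these init scripts exist on 4.0) a lighter-touch alternative can be to restart just the two management agents rather than every service:

/etc/init.d/hostd restart

/etc/init.d/vpxa restart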

Jumbo Frames on your vSphere ESXi box

Continuing on from this https://sigtar.com/2010/02/04/vsphere-and-multipathing-iscsi/

you may want to implement jumbo frames on your iSCSI backend…

Enable jumbo frames on your iSCSI target and switches, then complete the following on the ESXi hosts (iSCSI initiators)…

This lists your current vSwitch details and port group names…

esxcfg-vswitch -l

The following allows jumbo frames on your vSwitch (insert your own vSwitch name below):

esxcfg-vswitch -m 9000 vSwitch0

Then create the kernel port groups you want to use for iSCSI – my commands looked like this (repeat for each iSCSI kernel port you have; there is a second-port example further below). Note: DO NOT ADD ANY VMkernel port(s) from the GUI – instead, use the steps below.

esxcfg-vswitch -A iSCSI vSwitch0

If you are using a tagged VLAN you will also need to add the VLAN tag to the above port group:

esxcfg-vswitch -v 192 -p iSCSI vSwitch0

esxcfg-vmknic -a iSCSI -i 10.0.0.101 -n 255.255.0.0 -m 9000
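As noted above, repeat this for each iSCSI kernel port. For a second path the commands might look like the following (the port group name and IP address are just examples – adjust them to match your own multipathing setup, and add the VLAN tag line again if you are tagging):

esxcfg-vswitch -A iSCSI2 vSwitch0

esxcfg-vmknic -a iSCSI2 -i 10.0.0.102 -n 255.255.0.0 -m 9000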

To confirm you have set the MTU (frame size) correctly, run the following…

esxcfg-vmknic -l

All going well, you should see your iSCSI kernel ports with an MTU of 9000. Confirm connectivity using:

vmkping -s 9000 10.0.0.10
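One caveat: a 9000-byte ICMP payload plus its headers is slightly larger than a 9000 MTU, so the ping above can succeed by fragmenting. To test the jumbo path more strictly, set the don’t-fragment flag and subtract the 28 bytes of IP/ICMP headers (flags as per vmkping on recent ESX/ESXi builds – check yours):

vmkping -d -s 8972 10.0.0.10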