windows w2k8 R2 64bit – 32bit ODBC for virtual center server

You must create your ODBC connection using the 32-bit ODBC administrator. It is not as simple as running odbcad32.exe from the run / search bar, because on 64-bit Windows that launches the 64-bit version.

You must run the following to get the actual 32-bit version:

%systemdrive%\Windows\SysWoW64\Odbcad32.exe

Populate this with your database details and you should then see it in the Virtual Center installer.
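
Confusingly, both versions of the tool have the same file name. On 64-bit Windows the System32 directory holds the 64-bit binaries and SysWoW64 holds the 32-bit ones, so the two versions live side by side:

%systemdrive%\Windows\System32\odbcad32.exe    (64-bit – what the run bar gives you)
%systemdrive%\Windows\SysWoW64\Odbcad32.exe    (32-bit – the one you need here)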

4k sector hard drives and zfs

I hit this as a problem recently. One of the disks in my raidz died, so I ran down to the store and grabbed a replacement WD10EARS (Western Digital 1TB Green) drive.

BUT…

The one thing the store didn't mention to me is the new 4K sector size on the drive. I guess they assume most people run Windows (though the issues are also present in XP). See these posts…

http://blog.temeletry.co.uk/2010/05/wd-green-wd10ears/

Unfortunately they really don’t work as well as you’d like in a server :(

  • They come with a 5 second idle timer that parks the heads if the drive has been idle for more than 5 seconds. As it takes a second or two to unpark, this can result in a very laggy experience during interactive sessions (see the workaround note at the end of this post).
  • They do not have NCQ or any other form of command queuing/optimisation. This means that (on FreeBSD at least) you are stuck with the LOOK elevator. In particular this was noticeable when doing sequential reads & writes (think dump|restore, tar|untar etc.) and interactive tasks simultaneously.
  • They really suck with FreeBSD and ZFS…

http://community.wdc.com/t5/Desktop/Poor-performace-in-OpenSolaris-with-4K-sector-drive-WD10EARS-in/m-p/21132

While the other 512-byte sector HDDs were reading/writing at 30MB/s sustained, this EARS model did not exceed the 1MB/s barrier.

I know for sure that this is related to the 512-byte sector firmware emulation, because the disk works perfectly well if I partition it in a 4k-sector alignment.

The thing is that even then, using it in a ZFS RAIDZ configuration the performance is very poor, because RAIDZ uses a dynamic stripe size.

The bottom line here is that folks like me, who use different versions of Unix, need the firmware to present the disk as a 4K-sector disk to unleash the full potential of the technology. The OS is already prepared to support that sector size; no need for emulation here.

http://opensolaris.org/jive/thread.jspa?threadID=125702

Some preliminary testing that I have done… the WD20EARS (2TB advanced format drive) actually presents emulated 512-byte sectors to the host o/s.

The drive documentation indicates that jumpers 7-8 should be enabled if the o/s does not support advanced format drives – yet the drive still presents 512-byte sectors.

I have attempted to raise a support ticket querying this and asking how one can disable 512-byte sector emulation on the drive (perhaps through a firmware upgrade), but I have not received any response to date.

Hopefully if enough people raise support tickets, WD may release firmware that allows the drive to natively present 4K blocks. Other doco lists several other jumper combinations – none of them seems to make the drive present 4K blocks.

Perhaps someone internal to Sun who has a relationship with WD may be able to shed some light on this? It would be fantastic to find out that I was just doing something wrong -> then I can get the drives to be seen on 32bit systems (ie – our embedded kit for osol, velitium)

Tested using b133 (64bit intel).

Try to avoid the green drives with ZFS for now, and remember to do your research before you buy a bunch of disks. I was caught off guard by this small change (the drives work fine in Win7 etc.), and it kills performance in ZFS. Ouch.
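
For what it's worth, there are workarounds for a couple of these issues. The head parking can reportedly be tamed with WD's wdidle3 DOS utility, which extends or disables the idle timer. For the ZFS alignment problem on FreeBSD, the usual trick is to create the pool on top of a gnop provider that advertises 4K sectors, so ZFS records ashift=12 for the vdev. A rough sketch, assuming a three-disk raidz on ada0–ada2 (device and pool names are illustrative):

# fake a 4K-sector provider on top of the first disk
gnop create -S 4096 ada0
# create the pool against the .nop node so the vdev is created with ashift=12
zpool create tank raidz ada0.nop ada1 ada2
# the ashift is stored in the pool, so the nop layer can be dropped again
zpool export tank
gnop destroy ada0.nop
zpool import tank

You can confirm the result with zdb | grep ashift – it should report 12 rather than 9.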

opensolaris – pkg verify

I ran through a pkg verify the other day and came across a lot of errors. Running the subsequent pkg fix command presented me with this…

pkg: Requested “install” operation would affect files that cannot be modified in live image.

Please retry this operation on an alternate boot environment.

This is the fix…

# create a mount point for the new boot environment
mkdir /mnt/osol-134fix
# clone the current BE and mount the clone
beadm create osol-134fix
beadm mount osol-134fix /mnt/osol-134fix
# run the repair against the mounted BE rather than the live image
pkg -R /mnt/osol-134fix fix --accept
# make the repaired BE the default for the next boot
beadm activate osol-134fix

Then reboot your machine into the new boot environment.
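
If you want to sanity-check things before the reboot, beadm list shows which environment is active now (N) and which will be active on reboot (R):

beadm list

After the reboot, a second pkg verify should come back clean.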

vmware – measuring iscsi write performance

I picked this trick up from VMware support. If you've got your iSCSI all set up, you can drop to the shell (either SSH or the console) and do this to measure your average write throughput.

time vmkfstools -c 10G /vmfs/volumes/san_vmfs/my_vm/fat_disk.vmdk -d eagerzeroedthick

Try a larger disk if this completes too quickly (free space permitting)

Essentially this tells the host to create an eager-zeroed thick disk at the location above, which writes zeroes to every block. The time command records how long that takes, and from there you can use your maths skills to work out the transfer rate…
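
For example (numbers made up purely to show the arithmetic): if the 10G disk takes 120 seconds to create, that is 10240MB / 120s ≈ 85MB/s average write throughput. Don't forget to remove the test disk when you are done:

vmkfstools -U /vmfs/volumes/san_vmfs/my_vm/fat_disk.vmdk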

While this is happening you can open another SSH session, run esxtop and press "d" to watch the (d)isk throughput on the console. Pressing "v" will show you stats per (v)irtual machine.