Encoding Vids – Atomic advice on mencoder

One of the writers for Atomic magazine spent some time upscaling his DVD collection to 720p. This is the command line he used in the end to get a great, clear result.

Command line:

mencoder -sws 9 -vf scale=1280:720,hqdn3d=6:5:8,unsharp=l7x5:1,pp=ha:128:7/va/dr/al -ovc x264 -x264encopts subq=9:b_pyramid:weight_b:8x8dct:me=umh:cabac:partitions=all:trellis=2:mixed_refs:direct_pred=auto:psy-rd=1.0,0.2:qp=16:threads=2:nodct_decimate:nofast_pskip:bframes=6:frameref=6 -oac copy dvd://1 -o output.avi

Of course it may change depending on what you are encoding, but it's a good place to start. Check the Atomic forums (AU) for more detail.

ESX – network utilization

One of the best articles I have found on this subject is here: http://blog.scottlowe.org/2008/07/16/understanding-nic-utilization-in-vmware-esx/

There is some additional information here on setting up an EtherChannel on the Cisco side: http://blog.scottlowe.org/2006/12/04/esx-server-nic-teaming-and-vlan-trunking/

This can be handy if you need a single VM to use both physical NICs in a load-balanced manner – both outbound and inbound. Of course, it's not really that simple. This will only add a benefit if the VM is communicating with multiple destinations – with IP hash, a single destination from a single VM with one IP will always be pinned to the same physical NIC.
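The "single destination" caveat can be illustrated with a toy version of the hash. This is a simplified XOR-and-modulo sketch for illustration only, not VMware's exact algorithm, and the IP addresses are made up:

```shell
#!/bin/sh
# Illustrative IP-hash uplink selection: combine the source and destination
# addresses and take the result modulo the number of physical NICs. A fixed
# src/dst pair therefore always maps to the same NIC; only traffic to
# multiple destinations spreads across both uplinks.

ip_to_int() {
  # Convert a dotted-quad IPv4 address to a 32-bit integer.
  echo "$1" | awk -F. '{ print $1 * 16777216 + $2 * 65536 + $3 * 256 + $4 }'
}

pick_uplink() {
  # Usage: pick_uplink <src-ip> <dst-ip> <nic-count>
  src=$(ip_to_int "$1")
  dst=$(ip_to_int "$2")
  echo $(( (src ^ dst) % $3 ))
}

pick_uplink 10.0.0.5 10.0.0.50 2   # -> 1 (this pair always hashes to the same NIC)
pick_uplink 10.0.0.5 10.0.0.51 2   # -> 0 (a different destination can use the other NIC)
```

Running the same pair through the hash any number of times always yields the same uplink index, which is exactly why a one-to-one traffic flow never exceeds the bandwidth of a single physical NIC.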

switch(config)#int port-channel 1
switch(config-if)#description NIC team for ESX server
switch(config-if)#int gi0/1
switch(config-if)#channel-group 1 mode on
switch(config-if)#int gi0/2
switch(config-if)#channel-group 1 mode on

As per the article, ensure you are using the same EtherChannel method on both sides. The first command shows your current load-balancing method; the second command changes it to IP hash.

show etherchannel load-balance
port-channel load-balance src-dst-ip

Another solution is to use multiple iSCSI paths. This is newly supported in vSphere; see this post on setting up multiple paths: http://goingvirtual.wordpress.com/2009/07/17/vsphere-4-0-with-software-iscsi-and-2-paths/

Here is another good article on iSCSI within vSphere: http://www.delltechcenter.com/page/A+“Multivendor+Post”+on+using+iSCSI+with+VMware+vSphere

Some important points on using EMC Clariion with vSphere: http://virtualgeek.typepad.com/virtual_geek/2009/08/important-note-for-all-emc-clariion-customers-using-iscsi-and-vsphere.html

ZFS compression and latency

Since I'm using ZFS as storage via NFS for some of my VMware environments, I need to ensure that latency on my disks is reduced wherever possible.

There is a lot of talk about ZFS compression being “faster” than a non-compressed pool due to less physical data being pulled off the drives. This of course depends on the system powering ZFS, but I wanted to run some tests focusing specifically on latency. Throughput is fine in some situations, but latency is a killer when it comes to lots of small reads and writes (as is the case when hosting virtual machines).

I recently completed some basic tests focusing on the differences in latency when ZFS compression (lzjb) is enabled or disabled. IOMeter was my tool of choice, and I hit my ZFS box via a mapped drive.
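For reference, toggling compression between runs is a single `zfs` command per run. This is a sketch only – the pool/dataset name `tank/vmstore` is a made-up example, and note that the property only affects blocks written after the change:

```shell
# Enable lzjb compression on the dataset backing the share
# ("tank/vmstore" is a hypothetical dataset name).
zfs set compression=lzjb tank/vmstore

# Disable it again for the uncompressed test run.
zfs set compression=off tank/vmstore

# Check the current setting and the achieved compression ratio.
zfs get compression,compressratio tank/vmstore
```

Because existing blocks keep the state they were written with, the test file should be rewritten after each toggle so the runs actually compare compressed against uncompressed data.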

I'm not concerned with the actual figures, but with the difference between them.

I have run the test multiple times (to eliminate caching as a factor) and can validate that compression (on my system, anyhow) increases latency.

Basic results from an “All in one” test suite (similar results across all my tests):

ZFS uncompressed:

IOps : 2376.68
Read MBps : 15.14
Write MBps : 15.36
Average Response Time : 0.42
Average Read Response Time : 0.42
Average Write Response Time : 0.43
Average Transaction Time : 0.42

ZFS compressed: (lzjb)

IOps : 1901.82
Read MBps : 12.09
Write MBps : 12.28
Average Response Time : 0.53
Average Read Response Time : 0.44
Average Write Response Time : 0.61
Average Transaction Time : 0.53

As you can see from the results, the average write response time especially is much higher with compression enabled. I wouldn't recommend using ZFS compression where latency is a significant factor (e.g. virtual machines).
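To put numbers on that gap, the percentage deltas between the two runs above work out as follows (plain awk arithmetic on the figures already listed, nothing more):

```shell
#!/bin/sh
# Percentage change from the uncompressed figure (first argument)
# to the compressed figure (second argument).
pct_change() {
  awk -v a="$1" -v b="$2" 'BEGIN { printf "%.1f\n", (b - a) / a * 100 }'
}

pct_change 2376.68 1901.82   # IOps:                        prints -20.0
pct_change 0.43 0.61         # average write response time: prints 41.9
```

Roughly a 20% drop in IOps and a 42% jump in write response time – consistent with the conclusion that lzjb was not free on this box.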

Note: under all the tests performed, the CPU (dual core) on the ZFS box never hit 100% – eliminating it as a bottleneck.

Windows 2000 – ITMU / Windows Update Issues

You get this error when running Windows Update – Error number: 0xC8000408

First check the basics: ensure that Windows Update is a trusted site and the security level is set correctly within the zone – test with a lower security level to see if that resolves the problem.

If that doesn't work, try the following:
1) Add the user SYSTEM with full privileges to C:\ and its subfolders
2) Stop the “Automatic Updates” service
3) Rename the C:\WINNT\SoftwareDistribution folder
4) Start the “Automatic Updates” service again

ESX 4 (vSphere) – installation issues

I've had a couple of installation issues with ESX 4.

One was particularly strange: the install process would freeze every now and again until I pressed a key on the keyboard. This was related to AMD's power-saving C1 mode – disable this in the BIOS to fix the problem.

Secondly, there was a problem with my NIC: essentially I didn't have a card that was supported (Realtek). This fires up an ambiguous error (lvmdriver) and cans the whole install. After inserting an Intel card, all was well.

It was nice to see that my PATA drive was supported – I've been using it for quick tests but was unsure whether it was supported in vSphere. Note: I've got a flash card enclosure that I'm planning to use in the future, just awaiting the card (hopefully this saves me some power).