vmware : IDE to SCSI

I’ve found that VMware Converter (this may be fixed in newer versions) creates VMware guests with an IDE controller. There can be performance issues if you choose to remain with this particular controller… Best bet is to change it to one of the various VMware SCSI controllers…

Which controller you should use depends on which Windows operating system you are running….  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006621

Guest Operating System          Adapter Type
Windows 2003, 2008, Vista       lsilogic
Windows NT, 2000, XP            buslogic
Linux                           lsilogic


http://sanbarrow.com/vmdk/vmx-ide2scsi.html

You can easily change the type of the virtual controller for a given disk.
Let’s have a look at an example.

# Disk DescriptorFile

version=1
CID=fffffffe
parentCID=ffffffff
createType="twoGbMaxExtentFlat"
# Extent description
RW 4193792 FLAT "diskname-f001.vmdk" 0
RW 2097664 FLAT "diskname-f002.vmdk" 0
# The Disk Data Base
#DDB
ddb.adapterType = "ide"
ddb.virtualHWVersion = "3"
ddb.geometry.cylinders = "6241"
ddb.geometry.heads = "16"
ddb.geometry.sectors = "63"

The disk above uses a virtual IDE controller.

ddb.adapterType = "buslogic"   This entry converts the disk into a SCSI disk with a BusLogic controller.
ddb.adapterType = "lsilogic"   This entry converts the disk into a SCSI disk with an LSILogic controller.
ddb.adapterType = "ide"        This entry converts the disk into an IDE disk with an Intel IDE controller.
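
If you’d rather script that edit, here’s a minimal PowerShell sketch (diskname.vmdk is a placeholder, and this assumes the descriptor is a standalone text file, as in the twoGbMaxExtentFlat example above; take a copy of the file first):

$vmdk = "diskname.vmdk"   # placeholder path to the descriptor file
# Swap the adapter type in place
(Get-Content $vmdk) -replace 'ddb\.adapterType = "ide"', 'ddb.adapterType = "lsilogic"' | Set-Content $vmdk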

This changes the hard disk – but doesn’t change the controller itself.

ide0.present = "TRUE"
ide1.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi1.virtualDev = "buslogic"

Use entries like this in your *.vmx file; note that each controller takes a single virtualDev value. By the way, you can have LSILogic and BusLogic controllers in one VM (as scsi0 and scsi1 show above).

Think twice before you make changes like this with a boot-disk.

Bluescreen 0x7B – mass-storage driver:
Activate the appropriate driver in the registry: intelide.sys, vmscsi.sys or symmpi.sys. You may have to add the driver files as well.

If you get the above issue on a w2k8 box, you might be able to enable the LSI_SAS driver before you convert the machine to a SCSI controller.

  1. Boot the machine with the IDE controller
  2. Take a snapshot (for failback)
  3. Open regedit and find the key HKLM\SYSTEM\ControlSet001\Services\LSI_SAS
  4. Change the "Start" dword from 4 to 0 (see the sketch after this list)
  5. Shut down the machine
  6. Remove all the virtual disks (do not delete the disks, just remove them)
  7. Create copies of each .vmdk file (cp) (for failback)
  8. Edit the .vmdk descriptor file for each disk (vi)
  9. Change the ddb.adapterType to "lsilogic" (if w2k8)
  10. Re-add the existing disks (this should also bring in an LSI SAS controller)
  11. Boot the machine
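
For step 4, a minimal PowerShell sketch run inside the guest (regedit works just as well):

Set-ItemProperty -Path "HKLM:\SYSTEM\ControlSet001\Services\LSI_SAS" -Name Start -Value 0   # 0 = boot start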

Black screen with a cursor blinking in the top left of the screen:
Write a new partition boot sector.
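
For example, assuming a Vista/w2k8-era guest booted into the Windows recovery environment (on NT/2000/XP the Recovery Console’s fixboot command does the same job):

bootrec /fixboot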

vmware – finding orphaned files

Use the following script to find orphaned .vmdk files in your VMware environment… Run it from a PowerCLI session and replace "server" in $arrayVC with your vCenter server name(s).


#
# Purpose : List all orphaned vmdk on all datastores in all VC's
# Version: 1.1
# Author : HJA van Bokhoven
# Modifications: LucD

$arrayVC = @("server")
$report = @()

foreach ($strVC in $arrayVC) {
    Connect-VIServer $strVC

    # All disk files referenced by registered VMs
    $arrUsedDisks = Get-View -ViewType VirtualMachine | % {$_.Layout} | % {$_.Disk} | % {$_.DiskFile}
    $arrDS = Get-Datastore | Sort-Object -Property Name

    foreach ($strDatastore in $arrDS) {
        Write-Host $strDatastore.Name
        $ds = Get-Datastore -Name $strDatastore.Name | % {Get-View $_.Id}
        $fileQueryFlags = New-Object VMware.Vim.FileQueryFlags
        $fileQueryFlags.FileSize = $true
        $fileQueryFlags.FileType = $true
        $fileQueryFlags.Modification = $true
        $searchSpec = New-Object VMware.Vim.HostDatastoreBrowserSearchSpec
        $searchSpec.details = $fileQueryFlags
        $searchSpec.matchPattern = "*.vmdk"
        $searchSpec.sortFoldersFirst = $true
        $dsBrowser = Get-View $ds.browser
        $rootPath = "[" + $ds.summary.Name + "]"

        # Workaround for vSphere 4 fileOwner bug
        if ($dsBrowser.Client.Version -eq "Vim4") {
            $searchSpec = [VMware.Vim.VIConvert]::ToVim4($searchSpec)
            $searchSpec.details.fileOwnerSpecified = $true
            $dsBrowserMoRef = [VMware.Vim.VIConvert]::ToVim4($dsBrowser.MoRef)
            $searchTaskMoRef = $dsBrowser.Client.VimService.SearchDatastoreSubFolders_Task($dsBrowserMoRef, $rootPath, $searchSpec)
            $searchResult = [VMware.Vim.VIConvert]::ToVim($dsBrowser.WaitForTask([VMware.Vim.VIConvert]::ToVim($searchTaskMoRef)))
        } else {
            $searchResult = $dsBrowser.SearchDatastoreSubFolders($rootPath, $searchSpec)
        }

        foreach ($folder in $searchResult) {
            foreach ($fileResult in $folder.File) {
                if ($fileResult.Path) {
                    # Report any .vmdk that no VM's layout references
                    if (-not ($arrUsedDisks -contains ($folder.FolderPath + $fileResult.Path))) {
                        $row = "" | Select DS, Path, File, Size, ModDate, Host
                        $row.DS = $strDatastore.Name
                        $row.Path = $folder.FolderPath
                        $row.File = $fileResult.Path
                        $row.Size = $fileResult.FileSize
                        $row.ModDate = $fileResult.Modification
                        $row.Host = (Get-View $ds.Host[0].Key).Name
                        $report += $row
                    }
                }
            }
        }
    }

    # Disconnect session from VC
    Disconnect-VIServer -Confirm:$false
}

$report | Export-Csv "C:\VMDK-orphaned.csv" -noTypeInformation

vmware and NLB (unicast vs multicast)

Thought I’d get a bit of clarification around this one. VMware states:

VMware recommends that you use multicast mode, because unicast mode forces the physical switches on the LAN to broadcast all Network Load Balancing traffic to every machine on the LAN.

There are some issues which come to the surface if you are using unicast. Where possible we should always try to use multicast NLB, as unicast can cause some complications around vSwitch configurations (the requirement to set switch notifications to "No", which seemingly breaks port-ID-based teaming; see the sketch after the list below). Details below:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1556

• All members of the NLB cluster must be running on the same ESX host.
• All members of the NLB cluster must be connected to a single portgroup on the virtual switch.
• VMotion of unicast NLB virtual machines is not supported (unless you migrate ALL NLB members to the other ESX host).
• The Forged Transmits security policy on the portgroup is set to Accept.
• The transmission of RARP packets is prevented on the portgroup/virtual switch, as explained in the later part of the KB article.
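
The switch-notification and forged-transmit changes can be sketched from PowerCLI as below; the portgroup name "NLB" is a placeholder, and you should verify the cmdlets against your PowerCLI version:

# Stop the vSwitch sending RARP notifications for the NLB portgroup
Get-VirtualPortGroup -Name "NLB" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -NotifySwitches $false
# Allow forged transmits, per the KB requirement
Get-VirtualPortGroup -Name "NLB" | Get-SecurityPolicy | Set-SecurityPolicy -ForgedTransmits $true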

Here is a bit about each type… (from vmware docs)

Multicast

Multicast mode allows communication among hosts because it adds a Layer 2 multicast address to the cluster instead of changing the cluster’s MAC address. Communication among hosts is possible because the hosts retain their original unique media access control (MAC) addresses and already have unique dedicated IP addresses. However, the address resolution protocol (ARP) reply that is sent by a host in the cluster (in response to an ARP request) maps the cluster’s unicast IP address to its multicast MAC address.

Some routers do not support the resolution of unicast IP addresses to multicast MAC addresses, and they discard the ARP reply. As a result, an administrator must add a static ARP entry in the router, mapping the cluster IP address to its MAC address.

  • Can be a single NIC.
  • Add a static ARP entry to the default gateway (example below).
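
As an example, on a Cisco IOS router (hypothetical addresses; NLB in multicast mode builds the cluster MAC as 03-bf followed by the cluster IP in hex, so 192.168.1.100 becomes 03bf.c0a8.0164):

arp 192.168.1.100 03bf.c0a8.0164 arpa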

Unicast

Unicast mode works seamlessly with all routers and Layer 2 switches. However, this mode induces switch flooding, a condition in which all switch ports are flooded with Network Load Balancing traffic, even ports to which servers not involved in Network Load Balancing are attached. To communicate among hosts, you must have a second virtual adapter for each host.

Normally, switched environments avoid port flooding when a switch learns the MAC addresses of the hosts that are sending network traffic through it. The Network Load Balancing cluster masks the cluster’s MAC address for all outgoing traffic to prevent the switch from learning the MAC address.

On an ESX host, the VMkernel sends a reverse address resolution protocol (RARP) packet each time certain actions occur: for example, when a virtual machine is powered on, when there is a teaming failover, or when certain VMotion operations occur. The RARP packet gives physical switches the MAC address of the virtual machine involved in the action. In a Network Load Balancing cluster environment, after a Network Load Balancing node is powered on, the notification in the RARP packet exposes the MAC address of the cluster NIC. As a result, switches might begin to send all inbound traffic destined for the Network Load Balancing cluster through one switch port to a single node of the cluster.

Because the virtual switch operates with complete data about the underlying MAC addresses of the virtual NICs inside each virtual machine, it always correctly forwards packets containing a MAC address matching that of a running virtual machine. As a result of this behavior, the virtual switch does not forward traffic destined for the Network Load Balancing MAC address outside the virtual environment into the physical network, because it is able to forward it to a local virtual machine.

  • Requires 2 NICs if you want host-to-host communication.

vmware – cpu ready (what to look for) and performance

Keeping things simple: below are the numbers to look for when you’re investigating CPU scheduling issues within a VMware environment…

Look for the cpu “ready” counter which is a percentage of time that the virtual machine was ready, but could not get scheduled to run on the physical CPU. CPU ready time is dependent on the number of virtual machines on the host and their CPU loads.

These are the peak/average figures to watch for in the VMware performance graphs. Anything above 5% CPU ready time is worth fixing. (Note: the graphs report CPU ready as a summation in milliseconds, and the sample interval differs depending on which view you are looking at.)

Realtime : 1000 ms/s x 20 (20 second sample) x 5% = 1000 ms

Daily : 1000 ms/s x 300 (5 min sample) x 5% = 15000 ms
Weekly : 1000 ms/s x 1800 (30 min sample) x 5% = 90000 ms
Monthly : 1000 ms/s x 7200 (2 hr sample) x 5% = 360000 ms
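
To turn a raw summation value back into a percentage, here’s a quick PowerCLI sketch (the VM name "myvm" is a placeholder):

# Realtime CPU ready comes back as a summation in milliseconds per 20-second sample
$stats = Get-Stat -Entity (Get-VM "myvm") -Stat cpu.ready.summation -Realtime
# ready ms / sample length in ms * 100 = CPU ready percentage
$stats | Select Timestamp, Instance, @{N="ReadyPct"; E={[math]::Round($_.Value / ($_.IntervalSecs * 1000) * 100, 2)}}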

Note: the above is for 1 vCPU only; see this reference for some more detail and some nice charts!

http://vmtoday.com/2013/01/cpu-ready-revisted-quick-reference-charts/