HP ProCurve 5400zl Switch Series – How to Enable Jumbo Frames?

I couldn’t find a way to do this via the web GUI (I’m not a fan of the new 5400 web console – what happened to the performance graphs??) – anyhow, here are the commands to do it from the command line…

5406zl-A# show vlan 200
Status and Counters - VLAN Information - Ports - VLAN 200

  VLAN ID : 200
  Name : VLAN200
  Status : Port-based
  Voice : No
  Jumbo : No

5406zl-A# configure
5406zl-A(config)# vlan 200 jumbo
5406zl-A(config)# end
5406zl-A# sho vlan 200

Status and Counters - VLAN Information - Ports - VLAN 200

  VLAN ID : 200
  Name : VLAN200
  Status : Port-based
  Voice : No
  Jumbo : Yes

Then test it from a Windows box…

ping 10.10.9.1 -f -l 8972
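The -f flag sets the Don’t Fragment bit and -l 8972 sets the ICMP payload size: 8972 bytes of payload plus the 20-byte IP header and 8-byte ICMP header add up to the full 9000-byte jumbo frame, so the ping only succeeds if jumbos pass end to end. (The Windows NIC itself must also have jumbo frames / MTU 9000 enabled in its driver settings.) The rough Linux equivalent, assuming the same test address, is:

ping -M do -s 8972 10.10.9.1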

VMware and NLB (unicast vs multicast)

Thought I’d get a bit of clarification around this one. VMware states:

VMware recommends that you use multicast mode, because unicast mode forces the physical switches on the LAN to broadcast all Network Load Balancing traffic to every machine on the LAN.

Some issues come to the surface if you are using unicast. Where possible, we should always try to use multicast NLB, as unicast can cause complications around vSwitch configuration (it requires changing switch notifications to “No”, which seemingly breaks port ID-based teaming). Details below:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1556

• All members of the NLB cluster must be running on the same ESX host.
• All members of the NLB cluster must be connected to a single portgroup on the virtual switch
• VMotion for unicast NLB virtual machines is not supported (unless you want to migrate ALL NLB members to a different ESX host)
• The Security Policy Forged Transmit on the Portgroup is set to Accept
• The transmission of RARP packets is prevented on the Portgroup/Virtual Switch, as explained later in the article.
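On the vSwitch side, the Notify Switches setting can be changed in the vSphere Client (portgroup properties > NIC Teaming), or scripted. Here is a rough PowerCLI sketch – the portgroup name “NLB” is hypothetical, and it assumes the Get-/Set-NicTeamingPolicy cmdlets are available in your PowerCLI version:

# Disable switch notifications on the NLB portgroup (name is hypothetical)
Get-VirtualPortGroup -Name "NLB" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -NotifySwitches $false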

Here is a bit about each type… (from the VMware docs)

Multicast

Multicast mode allows communication among hosts because it adds a Layer 2 multicast address to the cluster instead of changing the cluster’s MAC address. Communication among hosts is possible because the hosts retain their original unique media access control (MAC) addresses and already have unique dedicated IP addresses. However, the address resolution protocol (ARP) reply that is sent by a host in the cluster (in response to an ARP request) maps the cluster’s unicast IP address to its multicast MAC address.

Some routers do not support the resolution of unicast IP addresses to multicast MAC addresses, and they discard the ARP reply. As a result, an administrator must add a static ARP entry in the router, mapping the cluster IP address to its MAC address.

  • Can be a single NIC
  • Add a static ARP entry on the default gateway (example below)
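For example, on a Cisco IOS router (a sketch – the 10.10.9.50 cluster IP is hypothetical; in multicast mode NLB builds the cluster MAC as 03bf followed by the cluster IP in hex, so 10.10.9.50 gives 03bf.0a0a.0932):

! Map the NLB cluster IP to its multicast MAC (addresses are hypothetical)
arp 10.10.9.50 03bf.0a0a.0932 ARPA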

Unicast

Unicast mode works seamlessly with all routers and Layer 2 switches. However, this mode induces switch flooding, a condition in which all switch ports are flooded with Network Load Balancing
traffic, even ports to which servers not involved in Network Load Balancing are attached. To communicate among hosts, you must have a second virtual adapter for each host.

Normally, switched environments avoid port flooding when a switch learns the MAC addresses of the hosts that are sending network traffic through it. The Network Load Balancing cluster masks the cluster’s MAC address for all outgoing traffic to prevent the switch from learning the MAC address.

On an ESX host, the VMkernel sends a reverse address resolution protocol (RARP) packet each time
certain actions occur—for example, when a virtual machine is powered on, when there is a teaming failover, or when certain VMotion operations occur. The RARP packet gives physical switches the MAC
address of the virtual machine involved in the action. In a Network Load Balancing cluster environment, after a Network Load Balancing node is powered on, the notification in the RARP packet exposes the MAC address of the cluster NIC. As a result, switches might begin to send all inbound traffic destined for the Network Load Balancing cluster through one switch port to a single node of the cluster.

Because the virtual switch operates with complete data about the underlying MAC addresses of the virtual NICs inside each virtual machine, it always correctly forwards packets containing a MAC address
matching that of a running virtual machine. As a result of this behavior, the virtual switch does not forward traffic destined for the Network Load Balancing MAC address outside the virtual environment
into the physical network, because it is able to forward it to a local virtual machine.

  • Requires 2 NICs if you want host-to-host communication.

Celerra – Unable to create, refresh, or delete checkpoint of a replicated destination file system

I’ve been stuck with this error recently in Celerra Manager: DpRequest_Max_VS_SuspendedCheckpointsReached

This can sometimes happen when a filesystem is held by a replication task. To check which filesystems are currently being replicated, run the following command:

nas_replicate -info -all

Either wait until the replication has completed (keep checking its status), or temporarily stop the replication task (stop it from the source side if possible).
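For example, to stop a session from the source side on Replicator V2 (rep_fs01 is a hypothetical session name – substitute one from the -info output above):

nas_replicate -stop rep_fs01 -mode source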


EMC Celerra Optimizations for VMware on NFS

Go here: http://blog.scottlowe.org/2010/01/31/emc-celerra-optimizations-for-vmware-on-nfs/

Turn on the uncached write mechanism for NFS file systems used as VMware datastores. This can provide a significant performance improvement for VMDKs on NFS, but it isn’t the default setting. From the Control Station, you can use this command to turn on the uncached write mechanism:
server_mount <data mover name> -option <options>,uncached <file system name> <mount point>
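As a concrete sketch (the Data Mover server_2, filesystem name nfs_datastore1, and mount point are hypothetical – substitute your own):

server_mount server_2 -option rw,uncached nfs_datastore1 /nfs_datastore1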
Be sure to review pages 99 through 101 of the VMware on Celerra best practices document for more information on the uncached write mechanism and any considerations for its use.