HP ProCurve 5400zl Switch Series – How to Enable Jumbo Frames?

I couldn’t find a way to do this via the web GUI (I hate the new 5400 web console; what happened to the performance graphs?), so here are the commands to do it via the command line…

5406zl-A# show vlan 200
Status and Counters - VLAN Information - Ports - VLAN 200

  VLAN ID : 200
  Name : VLAN200
  Status : Port-based
  Voice : No
  Jumbo : No

5406zl-A# configure
5406zl-A(config)# vlan 200 jumbo
5406zl-A(config)# end
5406zl-A# sho vlan 200

Status and Counters - VLAN Information - Ports - VLAN 200

  VLAN ID : 200
  Name : VLAN200
  Status : Port-based
  Voice : No
  Jumbo : Yes
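If memory serves, the 5400zl also has a command to list every jumbo-enabled VLAN in one go (check your firmware’s command reference if it isn’t there):

5406zl-A# show jumbos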

From a Windows box…

ping 10.10.9.1 -f -l 8972
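The 8972-byte payload isn’t arbitrary: with -f (don’t fragment) set, the largest ICMP payload that fits a 9000-byte MTU is 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes. If that ping comes back OK, jumbo frames are making it end to end; if anything in the path is still at 1500 you’ll get a “needs to be fragmented” error instead.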

VMware – HP ProCurve LACP / Trunk

Cisco’s EtherChannel and HP’s LACP are very similar, which is probably why I assumed both are supported by VMware. But as per the KB article below, that is not the case.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004048

From a ProCurve perspective, the difference between a “trunk” trunk and a “static LACP” trunk is the bandwidth available per connection. A typical endpoint on a “trunk” trunk can transmit different connections down both pipes but only receive down one, sometimes referred to as TLB (transmit load balancing). In VMware’s case, when you are using a “trunk” trunk you will have IP hash set as the load-balancing policy, effectively meaning a different NIC will be used on the VMware side for each connection a virtual machine makes.
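For reference, a rough sketch of the switch side of a “trunk” trunk for a pair of ESX-facing ports (A1-A2, trk1 and VLAN 200 are just example values, swap in your own):

5406zl-A(config)# trunk A1-A2 trk1 trunk
5406zl-A(config)# vlan 200 tagged trk1

On the VMware side the matching vSwitch load-balancing policy is “Route based on IP hash”, as per the KB article above.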

This most probably explains why we hit a 1Gbit limit per connection over VMware, since a switch “trunk” can only receive back down a single interface. Whereas with LACP the interface is the team itself (i.e. it is treated as one larger interface) and can load balance in both directions. This matches up with what I’ve seen in the live switch statistics.

HP’s LACP should be used where possible, between switches and servers that support it. LACP is a negotiation protocol that runs between both ends of the link (which is why both ends need to support it).
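As a sketch, the static LACP equivalent on the ProCurve side, for a switch-to-switch (or switch-to-server) link where both ends speak LACP, looks like this (A3-A4 and trk2 again just example values):

5406zl-A(config)# trunk A3-A4 trk2 lacp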

“Trunk” trunks have to be used with VMware at the moment, and they limit each connection’s bandwidth to a single interface (i.e. you will never get more than 1Gbit per connection if your NICs are all 1Gbit).
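Either way, you can sanity-check the trunk and watch how traffic is being shared from the CLI; on the 5400zl the usual suspects are:

5406zl-A# show trunks
5406zl-A# show lacp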