Great article on setting up a connection between HP & Cisco….
http://evilrouters.net/2011/08/12/configure-lacp-between-hp-procurve-and-cisco-catalyst-switches/
Cisco’s EtherChannel and HP’s LACP are very similar – probably why I assumed both are supported by VMware. But as explained below, that is not the case.
From a ProCurve perspective, the difference between a “trunk” trunk and a “static LACP” trunk is the bandwidth available per connection. A typical endpoint on a “trunk” trunk can transmit different connections down both pipes, but only receives down one – sometimes referred to as TLB (transmit load balancing). In VMware’s case, when you are using a “trunk” trunk you will have IP hash set as the load-balancing policy, effectively meaning a different NIC is used on the VMware side for each connection a virtual machine makes.
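For reference, a rough sketch of the two styles on the ProCurve side (the port numbers and the trk1 name are only examples – check the syntax against your own model):

trunk 21-22 trk1 trunk

gives you the plain “trunk” trunk (no protocol, transmit-only load balancing), while

trunk 21-22 trk1 lacp

gives you a static LACP trunk. “show trunks” and “show lacp” will confirm which one a port group is actually running.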
This most probably explains why we hit a 1 Gbit limit per connection over VMware, since a switch “trunk” can only receive back down a single interface. Whereas with LACP the interface is the team itself (i.e. it is treated as one larger single interface) and can load balance in both directions. That matches up with what I’ve seen in the live switch statistics.
HP’s LACP should be used wherever possible between switches and servers that support it. LACP is a protocol that is wrapped around the packets, which is why both ends need to support it.
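Going from the linked article, the matching Cisco side is an EtherChannel in LACP active mode – something along these lines (interface numbers and the channel-group number are just examples):

interface range GigabitEthernet1/0/23 - 24
 channel-group 1 mode active

with “trunk 23-24 trk1 lacp” on the ProCurve end, and “show etherchannel summary” on the Cisco to check the bundle actually came up.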
“Trunk” trunks have to be used with VMware at the moment, which limits each connection’s bandwidth to a single interface (i.e. you will never get more than 1 Gbit per connection if your NICs are all 1 Gbit).
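For completeness, the vSwitch side has to be set to “Route based on IP hash” for the “trunk” trunk to behave as described – in the vSphere Client that lives under the vSwitch properties, NIC Teaming tab. On ESXi 5.x, something like the following esxcli call should do the same thing (syntax from memory, so verify it against your version, and swap vSwitch0 for your own vSwitch name):

esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash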