Great article on setting up a connection between HP & Cisco:
http://evilrouters.net/2011/08/12/configure-lacp-between-hp-procurve-and-cisco-catalyst-switches/
Cisco’s EtherChannel and HP’s LACP are very similar – probably why I assumed both were supported by VMware. But as explained below, that is not the case.
From a ProCurve perspective, the difference between a “trunk” trunk and a “static LACP” trunk is the total bandwidth available per connection. A typical endpoint on a “trunk” trunk can transmit different connections down both pipes, but only receive down one – sometimes referred to as TLB (transmit load balancing). In VMware’s case, when you are using a “trunk” trunk you will have IP hash set as the load-balancing policy, effectively meaning a different NIC will be used on the VMware side for each connection a virtual machine makes.
This most probably explains why we hit a 1Gbit limit per connection over VMware, since a switch “trunk” can only receive back down a single interface. Whereas with LACP the interface is the team itself (i.e. it is treated as a single larger interface) and can load balance in both directions. This matches up with what I’ve seen in the live switch statistics.
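For reference, on the ProCurve side the two trunk types are configured with almost the same command. A rough sketch, assuming ports 1 and 2 are the uplink ports and trk1 is the trunk group – the first line creates a plain “trunk” trunk, the second a static LACP trunk (use one or the other, not both):
procurve(config)# trunk 1-2 trk1 trunk
procurve(config)# trunk 1-2 trk1 lacp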
HP’s LACP – should be used where possible, between switches and servers that support it. LACP is a protocol that is wrapped around the packets (which is why both ends need to support it).
“Trunk” trunks – have to be used with VMware at the moment. This limits each connection’s bandwidth to a single interface (i.e. you will never get more than 1Gbit per connection if your NICs are all 1Gbit).
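On the VMware side, the matching setting is the vSwitch load-balancing policy “Route based on IP hash”, set under the vSwitch’s NIC teaming properties in the vSphere Client. On later ESXi builds it can also be set from the command line – a sketch only, assuming the vSwitch is vSwitch0:
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash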
One of the best articles I have found on this subject is here: http://blog.scottlowe.org/2008/07/16/understanding-nic-utilization-in-vmware-esx/
There is some additional information here on setting up an EtherChannel on the Cisco side: http://blog.scottlowe.org/2006/12/04/esx-server-nic-teaming-and-vlan-trunking/
This can be handy if you need a single VM to use both physical NICs in a load-balanced manner, both outbound and inbound. Of course, it’s not really that simple. This will only really add a benefit if the VM is communicating with multiple destinations – with IP hash, a single destination from a single VM with one IP will always be limited to the same physical NIC.
switch(config)#int port-channel 1
switch(config-if)#description NIC team for ESX server
switch(config-if)#int gi0/1
switch(config-if)#channel-group 1 mode on
switch(config-if)#int gi0/2
switch(config-if)#channel-group 1 mode on
As per the article, ensure you are using the same EtherChannel load-balancing method on both sides. The first command below shows your current load-balancing method; the second (run in global configuration mode) changes it to IP hash.
show etherchannel load-balance
port-channel load-balance src-dst-ip
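To see why a single source/destination pair always lands on the same NIC, the IP hash is roughly an XOR of the two IP addresses modulo the number of uplinks (simplified – the real implementation hashes the full addresses, but the effect is the same). Say a VM at 10.0.0.10 sits behind two uplinks:
10.0.0.10 -> 10.0.0.50 : 0x0A XOR 0x32 = 56, 56 mod 2 = 0 -> uplink 0
10.0.0.10 -> 10.0.0.51 : 0x0A XOR 0x33 = 57, 57 mod 2 = 1 -> uplink 1
The same pair always hashes to the same uplink, while connections to different destinations can spread across both.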
Another solution is to use multiple iSCSI paths. This is newly supported within vSphere; see this post on setting up multiple paths: http://goingvirtual.wordpress.com/2009/07/17/vsphere-4-0-with-software-iscsi-and-2-paths/
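At a very high level, the vSphere 4 software iSCSI multipath setup in that post comes down to creating two VMkernel ports (each bound to a single physical NIC) and binding both to the software iSCSI adapter – a rough sketch, assuming the VMkernel ports are vmk1 and vmk2 and the software iSCSI adapter is vmhba33:
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33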
Here is another good article on iSCSI within vSphere: http://www.delltechcenter.com/page/A+“Multivendor+Post”+on+using+iSCSI+with+VMware+vSphere
Some important points on using EMC Clariion with vSphere: http://virtualgeek.typepad.com/virtual_geek/2009/08/important-note-for-all-emc-clariion-customers-using-iscsi-and-vsphere.html