opensolaris – network teaming

Otherwise known as trunking or link aggregation. I believe it is the best way to get an additional boost out of your network server while providing a bit of redundancy on link failure. Here is how to do it…

Official docs on the process here… http://docs.sun.com/app/docs/doc/819-6990/gdysn?a=view and some good bits here http://blogs.sun.com/nickyv/entry/link_aggregation_jumpstart_post_install

dladm (data link admin) is the tool for the job. List the links you currently have…

dladm show-link

First, shut down the links you are currently using (you will have to do this on the console):

ifconfig e1000g1 unplumb
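
The same goes for any other interface that is going to be part of the aggregate (an aggregated link cannot be plumbed), in this case rge0:

ifconfig rge0 unplumb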

Now join the two NICs into one aggregate connection via…

dladm create-aggr -l e1000g1 -l rge0 aggr1
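
If your build only knows the older key-based syntax, the rough equivalent (using key 1, purely as an example) should be something like:

dladm create-aggr -d e1000g1 -d rge0 1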

Then bring up the new aggregate link:

ifconfig aggr1 plumb IP-address up
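
For example, with a made-up address of 192.168.1.10 it would look something like this:

ifconfig aggr1 plumb 192.168.1.10 netmask 255.255.255.0 up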

Show the aggregation:

dladm show-aggr

(Optional) Make the IP configuration of the link aggregation persist across reboots.

  1. Create the /etc/hostname file for the aggregation’s interface.

    If the aggregation contains IPv4 addresses, the corresponding hostname file is /etc/hostname.aggr1. For IPv6-based link aggregations, the corresponding hostname file is /etc/hostname6.aggr1.

  2. Type the IPv4 or IPv6 address of the link aggregation into the file (see the sketch after this list).

  3. Perform a reconfiguration boot.
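
As a rough sketch for the IPv4 case (the address is just a placeholder):

echo "192.168.1.10 netmask 255.255.255.0" > /etc/hostname.aggr1

reboot -- -r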

I have teamed an Intel NIC (e1000g) and an rge NIC together without any issues… the rge driver by itself had issues, but I have not come across them again since I trunked both interfaces together. Perhaps the e1000g takes the load while the other NIC dies off.

Updated: 4/08/2009

To test the throughput / load balancing, run these commands (in two terminal sessions):

dladm show-link -s -i 5 rge0

dladm show-link -s -i 5 e1000g1

They will report the packets going over each NIC. Copy some files back and forth over the interface and watch the numbers; RBYTES and OBYTES (received and outbound bytes) are the fields to watch.
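
To push some traffic over the link you can simply copy a big file to or from another box; the host and path below are only placeholders:

scp /export/bigfile.iso user@otherhost:/tmp/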

7 Replies to “opensolaris – network teaming”

  1. I can say that I haven’t had any problems while they are teamed together; I’m unsure if the Intel card (e1000g) is covering for it. For the most part they seem to work well together.

    I haven’t had the chance to re-test the rge driver by itself in 2009.06.

  2. Has anyone had an issue booting back into Gnome after creating an aggr and setting a manual IP? On my second time around I also added the default route and restored nsswitch.conf to its original configuration (nsswitch.dns values).

    I got the same result, stuck on the OpenSolaris Loading screen with the bar moving across the bottom.

  3. I have 2 e1000g devices, both set in the /kernel/drv/e1000g.conf file for the maximum frame size (MaxFrameSize=3, i.e. an MTU range of 1500-16298).

    One NIC plays nice and allows the larger MTU. The other NIC, no matter what I do, will not go above an MTU of 1500:

    Any ideas? Thank you!

    root@keeper:/volumes# dladm show-linkprop -p mtu e1000g0
    LINK     PROPERTY  PERM  VALUE  DEFAULT  POSSIBLE
    e1000g0  mtu       rw    9000   1500     1500-16298
    root@keeper:/volumes# dladm show-linkprop -p mtu e1000g1
    LINK     PROPERTY  PERM  VALUE  DEFAULT  POSSIBLE
    e1000g1  mtu       rw    1500   1500     1500
    root@keeper:/volumes#

  4. With 3 e1000g interfaces I am still experiencing this problem on 2 out of the 3 interfaces. My add-on PCI-E Intel card displays the correct MTU range of 1500-16298.

    The 2 built-in Intel interfaces show only MTU of 1500.

    My /kernel/drv/e1000g.conf and /var/lib/dpkg/alien/sunwintgige/reloc/kernel/drv/e1000g.conf (if I do not set this one too, the first conf file gets overwritten) are both set to: MaxFrameSize=3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3;

    And dladm show-link reports:

    root@keeper:/kernel/drv# dladm show-link
    LINK     CLASS  MTU   STATE  BRIDGE  OVER
    e1000g2  phys   1500  up     --      --
    e1000g0  phys   9000  up     --      --
    e1000g1  phys   1500  up     --      --

    How do I get the e1000g driver to allow max MTU on all 3 e1000g interfaces?

    Thank you, Matthew
