
Sunday, September 16, 2012

Bonding with Linux

Before you begin with bonding you need the ifenslave command. Depending on your distribution you may have to compile it yourself. I am using Slackware as usual, where ifenslave ships as source only. You can find the source of ifenslave in the kernel sources. To install the kernel sources, run installpkg with the kernel source package (if not done already):

# installpkg kernel-source-2.6.37.6_smp-noarch-2.txz
...

Then change into /usr/src/linux/Documentation/networking/ and compile ifenslave. Finally, check that ifenslave is available in /sbin/:

# cd /usr/src/linux/Documentation/networking/
# gcc -Wall -O -I/usr/src/linux/include ifenslave.c -o /sbin/ifenslave
# ls -la /sbin/ifenslave
-rwxr-xr-x 1 root root 19108 Sep 16 12:11 /sbin/ifenslave*

Bonding with Linux is real fun, and it has a lot of options to play with. In the following I will show you how to set up round-robin, active-backup, and XOR configurations.

Round-Robin:

Round-robin means that all defined interfaces are grouped into one bonding device. The bonding interface acts as a failover, and the network traffic is shared across all interfaces (load balancing). Round-robin is the default when you set up bonding without any options. Round-robin is a good fit for multiple paths to a single switch.


To activate bonding in round-robin mode, load the bonding module:

# modprobe bonding
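
Loading the module without any options gives you mode 0, i.e. round-robin. To make the setup survive a reboot you can put the options into a modprobe configuration file instead of typing them at the prompt. A minimal sketch, assuming your distribution reads /etc/modprobe.d/ (the file name bonding.conf is my choice; miimon=100 enables link monitoring every 100 ms, more on that further below):

alias bond0 bonding
options bonding mode=0 miimon=100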

Then define an IP address for your new bond0 device:

# ifconfig bond0 192.168.1.98 netmask 255.255.255.192 up
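
If you prefer iproute2 over the legacy ifconfig, the following should be equivalent (255.255.255.192 is a /26 prefix):

# ip addr add 192.168.1.98/26 dev bond0
# ip link set bond0 up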

And attach the physical network devices to it:

# ifenslave bond0 eth0 eth1
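
If your kernel was built with sysfs support for bonding, you can enslave the devices without ifenslave as well; this is just an alternative sketch, not needed for the rest of this article:

# echo +eth0 > /sys/class/net/bond0/bonding/slaves
# echo +eth1 > /sys/class/net/bond0/bonding/slaves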

Next take a look at /proc/net/bonding/bond0:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.0 (June 2, 2010)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:11:22:33:44:55
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:11:22:33:44:56
Slave queue ID: 0

The first passage shows general information about your bonding device itself, e.g. the mode. The last two passages are about the attached devices. To test your bonding device, take a look at your network devices with netstat:

# netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0  1500   0     239      0      0      0     136      0      0      0 BMmRU
eth0   1500   0     126      0      0      0      72      0      0      0 BMsRU
eth1   1500   0     113      0      0      0      64      0      0      0 BMsRU
lo    16436   0       8      0      0      0       8      0      0      0 LRU

Then copy a reasonably large file over your bonding device. Finally, run netstat again:

# netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0  1500   0   45181      0      0      0   21676      0      0      0 BMmRU
eth0   1500   0   24665      0      0      0   10842      0      0      0 BMsRU
eth1   1500   0   20516      0      0      0   10834      0      0      0 BMsRU
lo    16436   0       8      0      0      0       8      0      0      0 LRU

Above you can see that the traffic was shared almost evenly across eth0 and eth1, while the bond0 device shows the sum of the traffic on eth0 and eth1.

Active-Backup:

Active-backup means that one device is always in use while the other operates in backup mode. When the first device fails or loses its link, the second device takes over. With active-backup you get failover only, no load balancing. To use bonding in active-backup mode, load the bonding module:

# modprobe bonding mode=1
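
Note that modprobe does not reload a module that is already loaded. If bonding is still active from the round-robin example, take the bond down and remove the module first; the mode can also be given by name instead of by number:

# ifconfig bond0 down
# rmmod bonding
# modprobe bonding mode=active-backup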

Then define an IP address for your new bond0 device:

# ifconfig bond0 192.168.1.98 netmask 255.255.255.192 up

And attach the physical network devices to it:

# ifenslave bond0 eth0 eth1

Next take a look at /proc/net/bonding/bond0:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.0 (June 2, 2010)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:11:22:33:44:55
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:11:22:33:44:56
Slave queue ID: 0

The first passage shows general information about your bonding device itself, e.g. the mode and the currently active slave (eth0). The last two passages are about the attached devices. To test your bonding device, take a look at your network devices with netstat:

# netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0  1500   0   48140      0     54      0   22285      0      0      0 BMmRU
eth0   1500   0   26142      0      0      0   11164      0      0      0 BMsRU
eth1   1500   0   21998      0     54      0   11121      0      0      0 BMsRU

Then copy a reasonably large file over your bonding device. Finally, run netstat again:

# netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0  1500   0   92583      0     82      0   44455      0      0      0 BMmRU
eth0   1500   0   70557      0      0      0   33334      0      0      0 BMsRU
eth1   1500   0   22026      0     82      0   11121      0      0      0 BMsRU

As you can see, only the traffic counters for eth0 have changed (and those of bond0, of course); eth1 is nearly untouched.

XOR:

The XOR mode also balances traffic across all slaves, but unlike round-robin it selects the outgoing slave by a hash, by default (source MAC XOR destination MAC) modulo the number of slaves, so all traffic to a given peer leaves through the same interface. This makes XOR capable of handling multiple switches, which is very useful in e.g. HA environments.

To activate bonding in XOR mode, load the bonding module:

# modprobe bonding mode=2
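
The transmit hash policy decides which slave a packet leaves through. If you want the hash to take IP addresses into account as well as MAC addresses, the driver accepts an xmit_hash_policy option, e.g. (assuming your driver version supports the layer2+3 policy):

# modprobe bonding mode=2 xmit_hash_policy=layer2+3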

Then define an IP address for your new bond0 device:

# ifconfig bond0 192.168.1.98 netmask 255.255.255.192 up

And attach the physical network devices to it:

# ifenslave bond0 eth0 eth1

Next take a look at /proc/net/bonding/bond0:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.0 (June 2, 2010)

Bonding Mode: load balancing (xor)
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 10 Mbps
Duplex: half
Link Failure Count: 0
Permanent HW addr: 00:40:95:30:32:dc
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:90:27:3a:bc:3a
Slave queue ID: 0

The first passage shows general information about your bonding device itself, e.g. the mode and the transmit hash policy. The last two passages are about the attached devices. To test your bonding device, take a look at your network devices with netstat:

# netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0  1500   0     239      0      0      0     136      0      0      0 BMmRU
eth0   1500   0     126      0      0      0      72      0      0      0 BMsRU
eth1   1500   0     113      0      0      0      64      0      0      0 BMsRU
lo    16436   0       8      0      0      0       8      0      0      0 LRU

Then copy a reasonably large file over your bonding device. Finally, run netstat again:

# netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0  1500   0   45181      0      0      0   21676      0      0      0 BMmRU
eth0   1500   0   24665      0      0      0   10842      0      0      0 BMsRU
eth1   1500   0   20516      0      0      0   10834      0      0      0 BMsRU
lo    16436   0       8      0      0      0       8      0      0      0 LRU

Above you can see that the traffic was shared almost evenly across eth0 and eth1, while the bond0 device shows the sum of the traffic on eth0 and eth1.

Sample failover situation:

I have configured my bonding device the following way:

# modprobe bonding mode=1 primary=eth1 miimon=1

That means I am running bonding in active-backup mode, the primary interface is eth1, and the bonding driver checks every millisecond whether the active NIC still has a link (miimon is given in milliseconds). To remove the link I just unplugged the network cable; dmesg then shows something like this:

# dmesg
...
[ 4266.004306] e100 0000:00:0b.0: eth1: NIC Link is Down
[ 4266.004849] bonding: bond0: link status definitely down for interface eth1, disabling it
[ 4266.004857] bonding: bond0: making interface eth0 the new active one.
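
You do not have to watch dmesg to find the active slave; /proc/net/bonding/bond0 lists it, and if your kernel exposes the bonding sysfs attributes, in the situation above this should report eth0:

# cat /sys/class/net/bond0/bonding/active_slave
eth0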

When I reconnect the cable to eth1, the bonding device uses eth1 again:

# dmesg
...
[ 4292.004269] e100 0000:00:0b.0: eth1: NIC Link is Up 100 Mbps Full Duplex
[ 4292.004995] bonding: bond0: link status definitely up for interface eth1, 100 Mbps full duplex.
[ 4292.005019] bonding: bond0: making interface eth1 the new active one.

You can avoid reselecting the primary device by using the primary_reselect option when you load the module, e.g.:

# modprobe bonding mode=1 primary=eth1 miimon=1 primary_reselect=2
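
primary_reselect=2 stands for the failure policy: the primary slave is only reselected when the currently active slave itself fails (0 means always, 1 means only if the primary is better). If your kernel exposes the bonding sysfs attributes, the policy can also be changed at runtime:

# echo failure > /sys/class/net/bond0/bonding/primary_reselect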

Detach and (re)attach a device:

Sometimes you need to detach a device from a bonding configuration. For that, use ifenslave with the -d option:

# ifenslave -d bond0 eth1
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.0 (June 2, 2010)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:11:22:33:44:55
Slave queue ID: 0
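
If you use the sysfs interface mentioned above, detaching should also work by writing the device name with a minus prefix, e.g.:

# echo -eth1 > /sys/class/net/bond0/bonding/slaves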

To attach another device, run ifenslave with the device you want to attach:

# ifenslave bond0 eth2
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.0 (June 2, 2010)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:11:22:33:44:55
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:11:22:33:44:57
Slave queue ID: 0
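
To tear the whole bonding configuration down again, detach the slaves, take the bond device down and unload the module:

# ifenslave -d bond0 eth0 eth2
# ifconfig bond0 down
# rmmod bonding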
