> Are these switches stackable? Looking to set up LAGs between switches for Linux network bonding.

Not in a true switch sense for stackable: you can drop LAGs between them and trunk VLANs, but no MC-LAG or anything like that is possible with this or the LB8.
> Hello tjk - thank you for the fast response. You wrote "LB8M"; we have the LB6M, which I assume is the same - not stackable.

Correct, sorry - the LB6M (24x10G) and the LB8 (48x10G): neither supports switch stacking.
> I assume then a Linux bond is not possible using two non-stacked switches. Hopefully I am wrong?

Depends on the bond type. I've created a LAG between the switches and have used bonding active/backup and round-robin without problems. I have not done an LACP bond across switches; not sure that'll work, since that signaling isn't carried across the switches like it would be in a true stacked config.
With active/backup and non-stacking switches, does a patch cable need to connect the two switches?
From the Linux kernel bonding documentation (Documentation/networking/bonding.txt):

11.2 High Availability in a Multiple Switch Topology
----------------------------------------------------
With multiple switches, the configuration of bonding and the
network changes dramatically. In multiple switch topologies, there is
a trade off between network availability and usable bandwidth.
Below is a sample network, configured to maximize the
availability of the network:
          |                                     |
          |port3                           port3|
    +-----+----+                          +-----+----+
    |          |port2       ISL      port2|          |
    | switch A +--------------------------+ switch B |
    |          |                          |          |
    +-----+----+                          +-----+----+
          |port1                     port1|
          |             +-------+         |
          +-------------+ host1 +---------+
                   eth0 +-------+ eth1
In this configuration, there is a link between the two
switches (ISL, or inter switch link), and multiple ports connecting to
the outside world ("port3" on each switch). There is no technical
reason that this could not be extended to a third switch.
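For a setup like the diagram above, host1's bond can be written in Debian-style /etc/network/interfaces syntax. This is a minimal sketch, assuming ifupdown with the ifenslave package; the address and netmask are placeholders to adjust for your subnet:

```text
# Hypothetical host1 config for the two-switch diagram above
# (Debian ifupdown + ifenslave; address/netmask are placeholders).
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1     # eth0 -> switch A port1, eth1 -> switch B port1
    bond-mode active-backup   # safest choice across non-stacked switches
    bond-miimon 100           # check link state every 100 ms
```

Active-backup needs no coordination between the switches, which is why it works here where LACP would not.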
If you had issues with basic L2 traffic with the stock firmware, that sounds like a configuration issue - I wouldn't expect the Brocade firmware to solve it, but worth a try. The stock FASTPATH should at least do L2 no problem.

Thank you for the reply.
#iface enp129s0f0 inet manual
#iface enp129s0f1 inet manual
#auto bond2
#iface bond2 inet static
# address 10.11.12.2
# netmask 255.255.255.0
# slaves enp129s0f0 enp129s0f1
# bond_miimon 100
# bond_mode active-backup
# mtu 9000
## Intel 10G in use now:
auto bond1
iface bond1 inet static
    address 10.11.12.2
    netmask 255.255.255.0
    bond-slaves enp7s0f0 enp7s0f1
    bond-miimon 100
    bond-mode active-backup
    mtu 9000
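Once bond1 is up, the kernel reports the mode and the currently active slave in /proc/net/bonding/bond1. A small sketch of pulling those fields out with awk - the sample text is inlined here so it runs anywhere, since the real file only exists on a host with the bond configured:

```shell
# Inlined sample of the /proc/net/bonding/bond1 format.
sample='Bonding Mode: fault-tolerance (active-backup)
MII Status: up
Currently Active Slave: enp7s0f0'

# On a real host, point awk at /proc/net/bonding/bond1 instead.
mode=$(printf '%s\n' "$sample" | awk -F': ' '/^Bonding Mode/ {print $2}')
active=$(printf '%s\n' "$sample" | awk -F': ' '/^Currently Active Slave/ {print $2}')
echo "$mode"    # fault-tolerance (active-backup)
echo "$active"  # enp7s0f0
```

Watching "Currently Active Slave" while unplugging a cable is a quick way to confirm failover between the two switches actually works.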
Well, I tried using the stock firmware - cluster communications issue I could not solve.
Note: after setting up the Brocade firmware - as you predicted - we have the same issue.
telnet@quanta-1#configure terminal
telnet@quanta-1(config)#jumbo
Jumbo mode setting requires a reload to take effect!
telnet@quanta-1(config)#write memory
Write startup-config done.
telnet@quanta-1(config)#end
telnet@quanta-1#reload
Are you sure? (enter 'y' or 'n'): y
Halt and reboot
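After the reload, one way to confirm jumbo frames actually pass end to end is a do-not-fragment ping sized to the full MTU. The largest ICMP payload that fits is the MTU minus 28 bytes (20-byte IP header plus 8-byte ICMP header); the peer address below is a placeholder:

```shell
# Largest ICMP payload that fits in a 9000-byte MTU:
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
mtu=9000
payload=$((mtu - 28))
echo "$payload"   # 8972

# On a real host (peer address is a placeholder):
#   ping -M do -s "$payload" 10.11.12.1
# If this fails while smaller pings succeed, something in the
# path is still at a 1500-byte MTU.
```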