Quanta LB6M (10GbE) -- Discussion


slimshizn

New Member
Oct 26, 2018
My entire network is at 9000 MTU, so I set the switch to 9000. That didn't work, and the highest supported value didn't work either, so I'll just put my PC back to 1500.
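
Before giving up on jumbo entirely, a quick way to see whether 9000-byte frames actually pass end to end is a ping with the don't-fragment bit set and a payload just under the MTU. This is a generic Linux (iputils) sketch; the address is only a placeholder for another host on the same segment.
Code:
# 8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header;
# -M do sets the don't-fragment bit, so oversized packets fail instead of fragmenting
ping -M do -s 8972 192.168.1.20    # replace with a host on your network

# if that fails while a standard-size ping works, something in the path
# (NIC, switch port, or the far host) is still limited to a smaller MTU
ping -M do -s 1472 192.168.1.20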
 

tjk

Active Member
Mar 3, 2013
Are these switches stackable?

Looking to set up LAGs between switches for Linux network bonding.
Not stackable in the true switch sense. You can drop LAGs between them and trunk VLANs, but no MC-LAG or anything like that is possible with this or the LB8.
 

fbcadmin

New Member
Nov 18, 2018
Hello tjk - thank you for the fast response.

You wrote 'LB8M'; we have the LB6M, which I assume is the same - not stackable.
 

tjk

Active Member
Mar 3, 2013
Correct, sorry - the LB6M (24x10G) and the LB8 (48x10G); neither supports switch stacking.
 

tjk

Active Member
Mar 3, 2013
I assume then a Linux bond is not possible using two non-stacked switches. Hopefully I am wrong?
Depends on the bond type. I've created a LAG between the switches and have used active/backup and RR bonding without problems. I have not done an LACP bond across switches; not sure that'll work, since that signaling isn't carried across the switches like it would be in a true stacked config.
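
For reference, a rough sketch of that active/backup arrangement using iproute2 - the interface names and address below are just examples, not taken from an actual config:
Code:
# create the bond in active-backup mode with MII link monitoring
ip link add bond0 type bond mode active-backup miimon 100

# interfaces must be down before they can be enslaved
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# bring the bond up and give it an address on the storage network
ip link set bond0 up
ip addr add 10.11.12.2/24 dev bond0

With eth0 cabled to one switch and eth1 to the other, only the active slave carries traffic; the standby takes over when the active link fails.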
 

fbcadmin

New Member
Nov 18, 2018
With active/backup and non-stacking switches, does a patch cable need to connect the two switches?

The two switches are for an isolated storage network.
 

fbcadmin

New Member
Nov 18, 2018
Searched and found a connection is needed. We've done this before - it was at the end of one of those really long days when I asked. So, for anyone reading my question in the future:


https://www.kernel.org/doc/Documentation/networking/bonding.txt

Code:
11.2 High Availability in a Multiple Switch Topology
----------------------------------------------------

   With multiple switches, the configuration of bonding and the
network changes dramatically.  In multiple switch topologies, there is
a trade off between network availability and usable bandwidth.

   Below is a sample network, configured to maximize the
availability of the network:

               |                                     |
               |port3                           port3|
         +-----+----+                          +-----+----+
         |          |port2       ISL      port2|          |
         | switch A +--------------------------+ switch B |
         |          |                          |          |
         +-----+----+                          +-----++---+
               |port1                           port1|
               |             +-------+               |
               +-------------+ host1 +---------------+
                        eth0 +-------+ eth1

   In this configuration, there is a link between the two
switches (ISL, or inter switch link), and multiple ports connecting to
the outside world ("port3" on each switch).  There is no technical
reason that this could not be extended to a third switch.
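
A simple way to confirm that failover actually works in this topology is the bonding driver's status file - the slave listed as active should flip when the link to its switch is pulled. bond0 here is a placeholder for whatever the bond is named.
Code:
# shows the bonding mode, the MII status of each slave, and which one is active
cat /proc/net/bonding/bond0

# pull the cable to switch A and "Currently Active Slave" should change to the other NIC
watch -n1 'grep "Currently Active Slave" /proc/net/bonding/bond0'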
 

fbcadmin

New Member
Nov 18, 2018
New question - we just received two Quanta LB6M switches that we will use for a Ceph storage network. The network does not need VLANs, routing, etc.

I've read that a lot of people change the firmware to something else.

For our use, are there speed, reliability, or other good reasons to change the firmware / operating system?
 

fbcadmin

New Member
Nov 18, 2018
I tried using the native firmware with the Ceph cluster. There were cluster communication issues; after an hour I gave up.

Next I will install the Brocade TurboIron firmware.
 

fohdeesha

Kaini Industries
Nov 20, 2016
fohdeesha.com
If you had issues with basic L2 traffic on the stock firmware, that sounds like a configuration issue - I wouldn't expect the Brocade firmware to solve it, but it's worth a try. The stock FASTPATH should at least do L2 with no problem.
 

fbcadmin

New Member
Nov 18, 2018
Thank you for the reply.

It could very well have been a configuration issue. It was almost working: the multicast tests worked (very fast) and the Ceph drives mounted. I may give it another try; I'll check the logs and try to debug.

I did have an active/backup bond set up, with a patch cord connecting the two switches. I had tested that prior to attempting to run the Ceph cluster over the switches. If I can't find a bug in how I configured things, I might try a non-bonded connection.
 

fbcadmin

New Member
Nov 18, 2018
Also, this was the network config; the only difference from the normal one used is the address:
Code:
#iface enp129s0f0 inet manual
#iface enp129s0f1 inet manual
#auto bond2
#iface bond2 inet static
#      address 10.11.12.2
#      netmask  255.255.255.0
#      slaves enp129s0f0 enp129s0f1
#      bond_miimon 100
#      bond_mode active-backup
#      mtu 9000

## Intel 10G in use now:
auto bond1
iface bond1 inet static
        address  10.11.12.2
        netmask  255.255.255.0
        bond-slaves enp7s0f0 enp7s0f1
        bond-miimon 100
        bond-mode active-backup
        mtu 9000
If something stands out as possibly wrong with that, let me know.
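
One generic sanity check once bond1 is up (a general suggestion, not tied to this particular problem): confirm the mode and the 9000 MTU actually landed on the bond and propagated to both slaves.
Code:
# -d prints bond details (mode, miimon) along with the MTU
ip -d link show bond1

# the 9000 MTU set on the bond should also show on both enslaved ports
ip link show enp7s0f0
ip link show enp7s0f1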
 

fbcadmin

New Member
Nov 18, 2018
Well, I tried using the stock firmware - there was a cluster communications issue I could not solve.
I tried setting up IGMP snooping and multicast, searched, tried settings, etc.

Anyway, I'll change to Brocade, as I have documentation for setting up IGMP, etc.

I knew you were just dying to know that.
 

fbcadmin

New Member
Nov 18, 2018
Note: after setting up the Brocade firmware - as you predicted - we have the same issue.

Before doing that we'd tried a lot of things, so we will of course continue to debug until it's fixed.

Also, thank you for the Brocade TurboIron firmware. These are great switches, now with excellent software and manuals to follow.
 

fbcadmin

New Member
Nov 18, 2018
With 'mtu 9000' set in /etc/network/interfaces, jumbo frames needed to be enabled on the switch. That fixed all our issues.

Note this was not a multicast issue; in addition, NFS clients could not access NFS mounts.

So, to enable MTU 9000 on the switch:
Code:
telnet@quanta-1#configure terminal
telnet@quanta-1(config)#jumbo
Jumbo mode setting requires a reload to take effect!
telnet@quanta-1(config)#write memory
Write startup-config done.
telnet@quanta-1(config)#end
telnet@quanta-1#reload
Are you sure? (enter 'y' or 'n'): y
Halt and reboot
 

fohdeesha

Kaini Industries
Nov 20, 2016
fohdeesha.com
hehe yes, if you have configured your NICs to send up to 9000 byte packets, the switch will definitely need to be configured to accept them :p