Quanta LB6M (10GbE) -- Discussion

aero

Active Member
Apr 27, 2016
348
80
28
52
If you were super serious about security you would not dual-home a server to the DMZ + internal LAN, regardless of what it is running (hypervisor or bare metal, etc.). However, that could mean needing another server dedicated to the DMZ, which maybe isn't worth it. Security is always a compromise against cost, complexity, and ease of use... until that security saves your proverbial bacon, and then it was well worth it!
 

PGlover

Active Member
Nov 8, 2014
498
63
28
56
That is how I would connect it.

The risk of having your LAN compromised because you bridge your hypervisor between your DMZ and your LAN is low, in my opinion.

I believe most sysadmins/network admins connect their systems the same way; at least, that is how I connect mine, and how my friends do as well.

Good luck! And most importantly, have fun!
I plan to start the configuration this weekend.
 

tjk

Active Member
Mar 3, 2013
397
131
43
www.servercentral.com
Anyone know what the latest software version is for these switches and how to get it? I just bought a few of these from Natex and want to make sure I put the newest code on them that I can.

Thanks!
 

patchate

Member
Jul 3, 2016
60
17
8
I got these switches from Natex a month ago, and finally got to put them in service in my homelab last night. I was transferring data at 700 megabytes per second from my NAS within the hour, and that's not even close to the theoretical limit of a 10GbE interface.

I love 10Gb networking. I don't know if I can go back to gigabit interfaces...
 
  • Like
Reactions: Sleyk and fvanlint
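For a rough sense of how that 700 MB/s figure compares with the 10GbE line rate, a simple unit conversion (ignoring protocol overhead):

\[
700~\text{MB/s} \times 8~\tfrac{\text{bits}}{\text{byte}} = 5.6~\text{Gb/s} \approx 56\%~\text{of}~10~\text{Gb/s}, \qquad \frac{10~\text{Gb/s}}{8} = 1.25~\text{GB/s (line-rate ceiling)}.
\]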

PGlover

Active Member
Nov 8, 2014
498
63
28
56
MathieuP... Here is an updated diagram. So the hypervisor server crosses the DMZ and internal LAN zones. Is there any risk that my internal network could be breached by having the hypervisor server connected to both the DMZ and the internal network?

View attachment 3129
Can someone confirm that the LB6M is capable of inter-VLAN routing? If yes, per my attached diagram, should I perform inter-VLAN routing on both the LB6M and the Juniper EX3300 switch?
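For what it's worth, here is a rough sketch of how inter-VLAN routing is usually enabled in FASTPATH-based firmware like the LB6M's. The VLAN ID, the addresses, and the logical interface number 4/1 are placeholders, and command names can differ between firmware builds, so treat this as an outline to check against your own CLI rather than a verified LB6M config:

(FASTPATH Routing) #vlan database
(FASTPATH Routing) (Vlan)#vlan 10
(FASTPATH Routing) (Vlan)#vlan routing 10
(FASTPATH Routing) (Vlan)#exit
(FASTPATH Routing) #configure
(FASTPATH Routing) (Config)#ip routing
(FASTPATH Routing) (Config)#exit
(FASTPATH Routing) #show ip vlan
(FASTPATH Routing) #configure
(FASTPATH Routing) (Config)#interface 4/1
(FASTPATH Routing) (Interface 4/1)#routing
(FASTPATH Routing) (Interface 4/1)#ip address 10.0.10.1 255.255.255.0
(FASTPATH Routing) (Interface 4/1)#exit

The idea: "vlan routing 10" creates a logical routing interface for the VLAN, "ip routing" turns on routing globally, "show ip vlan" tells you which slot/port number that logical interface was assigned (4/1 above is only an example), and that is the interface that gets "routing" and an IP address. Repeat per VLAN you want the switch to route.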
 

patchate

Member
Jul 3, 2016
60
17
8
Has anyone been successful in passing tagged VLAN traffic on the LB6M? I've been fiddling around with the switch all weekend, but no matter what I do, I can't assign VLANs to ports as included or excluded, or actually pass tagged traffic through any of the ports.

Anyone? Bueller? Bueller?
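In case it helps anyone stuck at the same point, here is a minimal sketch of the per-port VLAN commands that FASTPATH-style firmware generally expects. VLAN 20 and port 0/1 are placeholders, and exact command names may vary by firmware build:

(FASTPATH Routing) #vlan database
(FASTPATH Routing) (Vlan)#vlan 20
(FASTPATH Routing) (Vlan)#exit
(FASTPATH Routing) #configure
(FASTPATH Routing) (Config)#interface 0/1
(FASTPATH Routing) (Interface 0/1)#vlan participation include 20
(FASTPATH Routing) (Interface 0/1)#vlan tagging 20
(FASTPATH Routing) (Interface 0/1)#vlan pvid 1
(FASTPATH Routing) (Interface 0/1)#exit
(FASTPATH Routing) (Config)#exit
(FASTPATH Routing) #show vlan 20

"vlan participation include" adds the port to the VLAN, "vlan tagging" makes that VLAN egress tagged on the port (leave it off and set "vlan pvid" instead for an untagged/access port), and "show vlan 20" should then list the port with its tagging status.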
 

Sleyk

Your Friendly Knowledgable Helper and Techlover!
Mar 25, 2016
1,346
687
113
Stamford, CT
I'm still running the .14 firmware. People were reporting getting the .18 firmware earlier in the thread, I think.
 

Denos Christofi

New Member
Aug 8, 2016
1
1
1
56
Can you please share the Quanta LB6M manual? Unfortunately, I cannot find it anywhere. Also, if you have access to the firmware for the LB6M and LB4M switches, that would be great!
 
Last edited:
  • Like
Reactions: hknet

_Adrian_

Member
Jun 25, 2012
45
5
8
Leduc, AB
Gave up on this switch... Under sustained load it turns into a hair dryer and slows to a crawl (sub-gigabit speeds).
Unloaded it this morning and ordered a more suitable switch... this one's an HP ProCurve JC772A.

Hopefully it will do better with multiple aggregated links than the LB6M.
 

tjk

Active Member
Mar 3, 2013
397
131
43
www.servercentral.com
Gave up on this switch... Under sustained load it turns into a hair dryer and slows to a crawl (sub-gigabit speeds).
Unloaded it this morning and ordered a more suitable switch... this one's an HP ProCurve JC772A.

Hopefully it will do better with multiple aggregated links than the LB6M.
I've got them running in a data center, so I don't care much about the noise.

But I've had zero problems with load. I'm using them as straight layer 2 for storage, and I'm able to do 5 to 6 Gb/s sustained from VMware nodes to NFS storage nodes (FreeNAS). That's on a per-port basis to the NFS servers.

What sort of performance problems are you seeing, and what are you doing?
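If it helps with comparing notes, a memory-to-memory test between two hosts on the switch takes disks, NFS, and iSCSI out of the equation. A quick sketch with iperf3 (the address is a placeholder, and iperf3 has to be installed on both ends):

# on one host, start a listener
iperf3 -s

# on the other host, run 4 parallel streams for 60 seconds
iperf3 -c 10.0.0.10 -P 4 -t 60

If that stays near line rate for the full run while the storage traffic still tails off, the switch probably isn't the bottleneck.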
 

_Adrian_

Member
Jun 25, 2012
45
5
8
Leduc, AB
It's set up as follows:
- 8x 10Gb links into the switch (2x 4-link LACP groups) from my HP C7000 via the Virtual Connect 10/10D
- 2x 10Gb links to the MPX200 (EVA/SAN to iSCSI) that acts as a bridge into the network
- 2x 10Gb links (living room and office PC / management console)

I first thought it was a throughput issue from the MPX, as it's the newest link, and assumed it would be the problem...
Pulled the array offline (complete shutdown) and ran tests...
It maintains 7.3 Gbps for about 30 seconds, and then the connection speed drops by about 100 Mbps per second after that.

Having a 24-fiber drop between the network cabinet and the C7000 makes things a bit simpler... a quick reconfig of the VC 10/10D and patching everything through the blade server, and now I can maintain 8.5 to 9.2 Gbps transfer rates over the network and 4.4 to 7.2 Gbps to the MPX200, which is a slight drop for iSCSI versus a dedicated connection (about 3% to 5% slower), but it should work for now.

Peak temp under heavy load, measured with a laser thermometer from about 30 cm (almost a foot) away: I have observed chassis temps in the low 40°C range (104°F to 111°F).

To me it sounds like a mix of things, but it's the last thing I want to deal with after a 12-hour work day...
 

tjk

Active Member
Mar 3, 2013
397
131
43
www.servercentral.com
It's set up as follows:
- 8x 10Gb links into the switch (2x 4-link LACP groups) from my HP C7000 via the Virtual Connect 10/10D
- 2x 10Gb links to the MPX200 (EVA/SAN to iSCSI) that acts as a bridge into the network
- 2x 10Gb links (living room and office PC / management console)
Way more complex of a setup than I am running; I let VMware handle the A/P on the 10GbE links. I'm happy with the performance, especially for what I paid for these.

I'd been using IPoIB to do this before, but it was such a PITA with drivers on the VMware side; 10GbE just works, and is plenty fast enough for me.
 

_Adrian_

Member
Jun 25, 2012
45
5
8
Leduc, AB
Way more complex of a setup than I am running; I let VMware handle the A/P on the 10GbE links. I'm happy with the performance, especially for what I paid for these.

I'd been using IPoIB to do this before, but it was such a PITA with drivers on the VMware side; 10GbE just works, and is plenty fast enough for me.
I opted for the C7000 chassis because of its flexibility and ease of use, and it's fairly cheap as well.
My main "work horses" are four BL680 G5s, each loaded with 4x 2.4 GHz Xeon hexa-core CPUs and 128 GB of DDR2, running 2012 Hyper-V Core.
Mezzanine slot 1 is occupied by an NC532m, mezzanine slot 2 by a QMH2562, and mezzanine slot 3 by a 4X QDR card.

Mezzanine cards are under $50 apiece, and the C7000 switches aren't too terrible either if you have the patience to hunt a bit...

Slots 1 and 2 are taken up by the VC 1/10F
Slots 2 and 3 are taken up by the VC 10/10D
Slots 4 and 5 are taken up by the VC 8Gb
Slots 6 and 7 are taken up by the 4X QDR switch

I use IB internally to move data from server to server rather than Ethernet (40 Gb/s vs 10 Gb/s).
 
Last edited:

wildchild

Active Member
Feb 4, 2014
394
57
28
It's set up as follows:
- 8x 10Gb links into the switch (2x 4-link LACP groups) from my HP C7000 via the Virtual Connect 10/10D
- 2x 10Gb links to the MPX200 (EVA/SAN to iSCSI) that acts as a bridge into the network
- 2x 10Gb links (living room and office PC / management console)

I first thought it was a throughput issue from the MPX, as it's the newest link, and assumed it would be the problem...
Pulled the array offline (complete shutdown) and ran tests...
It maintains 7.3 Gbps for about 30 seconds, and then the connection speed drops by about 100 Mbps per second after that.

Having a 24-fiber drop between the network cabinet and the C7000 makes things a bit simpler... a quick reconfig of the VC 10/10D and patching everything through the blade server, and now I can maintain 8.5 to 9.2 Gbps transfer rates over the network and 4.4 to 7.2 Gbps to the MPX200, which is a slight drop for iSCSI versus a dedicated connection (about 3% to 5% slower), but it should work for now.

Peak temp under heavy load, measured with a laser thermometer from about 30 cm (almost a foot) away: I have observed chassis temps in the low 40°C range (104°F to 111°F).

To me it sounds like a mix of things, but it's the last thing I want to deal with after a 12-hour work day...
LACP on iSCSI is not smart.
You'd be much better off using separate paths.
I believe virtually all iSCSI manuals warn against a setup like this.
 
  • Like
Reactions: fvanlint
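As a rough illustration of the "separate paths" approach on the switch side (not a verified config; VLANs 101/102 and ports 0/5, 0/6 are placeholders, using the same FASTPATH-style commands as earlier in the thread): each iSCSI NIC stays a plain access port in its own VLAN/subnet, with no LAG, and the initiator's multipathing (MPIO) handles load balancing and failover across the two sessions.

(FASTPATH Routing) #vlan database
(FASTPATH Routing) (Vlan)#vlan 101
(FASTPATH Routing) (Vlan)#vlan 102
(FASTPATH Routing) (Vlan)#exit
(FASTPATH Routing) #configure
(FASTPATH Routing) (Config)#interface 0/5
(FASTPATH Routing) (Interface 0/5)#vlan participation include 101
(FASTPATH Routing) (Interface 0/5)#vlan pvid 101
(FASTPATH Routing) (Interface 0/5)#exit
(FASTPATH Routing) (Config)#interface 0/6
(FASTPATH Routing) (Interface 0/6)#vlan participation include 102
(FASTPATH Routing) (Interface 0/6)#vlan pvid 102
(FASTPATH Routing) (Interface 0/6)#exit

The target's two ports get the matching treatment, one per VLAN, so each path is its own subnet and failure domain.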

Idar Lund

New Member
Aug 11, 2016
2
1
3
38
Below you can find a snippet of an LB6M running Indigo on 2.6.25-bcm-ntsw
Have you tested all the commands that you wanted to test?

At this moment I don't plan to make an uploadable image, but if demand is high I might do that as well.
Yes, I would love to see a "production" version of Indigo for this switch!

Now if anyone understands OpenFlow's rc.soc, please let me know, as I think we might fix the port issues, but I am not in a hurry to understand rc.soc. I have tried the one inside switchdrvr, but it seems switchdrvr does more than just use this file to boot up its internal (or infernal) OpenFlow. For now, just have fun with this.
Got this one sorted out, or should I use tape and a pen to re-label the ports on my switch? :)
 
  • Like
Reactions: fvanlint

PGlover

Active Member
Nov 8, 2014
498
63
28
56
MathieuP... Here is an updated diagram. So the hypervisor server crosses the DMZ and internal LAN zones. Is there any risk that my internal network could be breached by having the hypervisor server connected to both the DMZ and the internal network?

View attachment 3129
All,

I am working on implementing the network design I posted in this thread. The first thing is to create a 2-port trunk/LACP uplink to the pfSense firewall PC. I have set up the LAGG interface in pfSense as well as the port channel/LAG on the LB6M switch. Here is the status of the LAG port on the LB6M. Can someone please confirm that the port channel is active and working on the LB6M switch? Are there any other commands to verify the port channel is up and running?

(FASTPATH Routing) #
(FASTPATH Routing) #show port 1/1

Admin Physical Physical Link Link LACP Actor
Intf Type Mode Mode Status Status Trap Mode Timeout
--------- ------ --------- ---------- ---------- ------ ------- ------ --------
1/1 Enable Up Disable N/A N/A



(FASTPATH Routing) #show port-channel 1/1


Local Interface................................ 1/1
Channel Name................................... ch1
Link State..................................... Up
Admin Mode..................................... Enabled
Type........................................... Dynamic
Port-channel Min-links......................... 1
Load Balance Option............................ 3
(Src/Dest MAC, VLAN, EType, incoming port)

Mbr     Device/        Port      Port
Ports   Timeout        Speed     Active
------  -------------  --------  -------
0/25    actor/long     Auto      True
        partner/long
0/26    actor/long     Auto      True
        partner/long
 
Last edited:
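On the "any other commands" question, two checks that are commonly available; the switch-side command may be named slightly differently depending on the FASTPATH build, and lagg0 is just the usual name pfSense/FreeBSD gives the first LAGG interface:

(FASTPATH Routing) #show port-channel brief

# from a pfSense shell, the LACP view of the same bundle
ifconfig lagg0

"show port-channel brief" summarizes each channel with its member ports and their active state, and on the pfSense side the laggport lines should show both members flagged ACTIVE,COLLECTING,DISTRIBUTING once LACP has converged. With both members already showing Active = True in the output above, the bundle looks to be up on the switch side.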