Bonded 10GbE or single 56GbE

Quartzeye

New Member
Jul 29, 2013
16
0
1
I am planning to rework my cluster and networking. I currently have a (4) server Proxmox cluster, and I am running multiple 1GbE and 10GbE flat subnets. I plan on using a virtualized TrueNAS VM to support iSCSI targets and PXE booting for VMs. All said, my networking is the only concern I have.

I don't want to run iSCSI over the 1GbE networks. 10GbE should be fine, but as more VMs spin up and run I could see congestion. I believe that the theoretical 10GbE speed is sufficient for iSCSI and will perform well. However, if I could bond a 4-port 10GbE NIC, I could get a theoretical 40Gb of total throughput at 10GbE per-connection speeds. That seems sufficient, but I also want to consider whether a single 56GbE NIC would be better all around for performance.

Another thing to consider: I have a 10GbE managed switch, but with 56GbE I would have to direct-connect (3) of my servers instead of going through a switch, as a 56/100GbE switch is just too pricey. Also, 56GbE limits the expandability of my cluster without purchasing a switch.

So my questions:
Is bonded 10GbE's throughput comparable to using 56GbE?
If so, I would suspect bonded 56GbE would be even better?
Until 56/100GbE switches become more reasonable, is bonded 10GbE a reasonable alternative for total throughput, even if limited to 10GbE speed and throughput per connection?
 

Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
I would say you have a bunch of reading to do;)
56GbE is a Mellanox-only implementation built on 40GbE (FDR signalling at 14Gb/s per lane instead of QDR at 10Gb/s per lane). QDR (40GbE) is actually 4x10Gbit connections 'hard-bonded' together (but they can be split into 4x10 on the switch side).

For a single connection it's quite rare to be able to exceed the 10 to 14G that a single 10G, QDR or FDR channel actually provides.
For multiple processes the total aggregated bandwidth should be comparable, regardless of whether you bond four 10G interfaces or use a QDR connection (assuming there is not too much overhead from the bond and traffic is properly distributed) [but I never tested this].
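To make the 'properly distributed' part concrete, here is a rough Python sketch of how per-flow transmit hashing (layer3+4 style) behaves in a 4-port bond. It is not the actual Linux kernel hash, and the addresses and ports are made up - it just shows why one TCP flow is pinned to a single ~10G link while many flows can fill all four:

```python
# Toy model of per-flow slave selection in a 4-port bond
# (layer3+4 style). NOT the real kernel hash - just shows why a
# single TCP flow is capped at one link while many flows can
# spread across all four. Addresses/ports below are made up.
import random
from collections import Counter

NUM_SLAVES = 4        # 4 x 10GbE ports in the bond
LINK_SPEED_GBPS = 10

def pick_slave(src_ip, dst_ip, src_port, dst_port):
    # hash the flow tuple, modulo slave count; Python's hash() is
    # salted per run, but within one run a given flow always maps
    # to the same slave - which is the point
    return hash((src_ip, dst_ip, src_port, dst_port)) % NUM_SLAVES

# one iSCSI session = one TCP flow -> same slave every time (~10G ceiling)
flow = ("10.0.0.10", "10.0.0.20", 51234, 3260)
print("single flow always uses slave", pick_slave(*flow))

# many VMs / many flows -> spread across slaves (~40G aggregate)
flows = [("10.0.0.%d" % random.randint(2, 250), "10.0.0.20",
          random.randint(1024, 65535), 3260) for _ in range(1000)]
usage = Counter(pick_slave(*f) for f in flows)
for slave, count in sorted(usage.items()):
    print(f"slave {slave}: {count} flows, {LINK_SPEED_GBPS} Gbit/s max each")
```

So a single iSCSI session still tops out around one link's speed; you only see the full ~40G aggregate when enough separate flows are in flight.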

If you want more bandwidth per channel then you have to move to 25GbE (SFP28) or its 'hard-bonded' variant, QSFP28 (100GbE).
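For rough context on the per-lane and per-port numbers, here is a back-of-the-envelope Python calculation. My assumptions: lane signalling rates and encodings as I understand the specs (QDR uses 8b/10b, FDR and the 25G family use 64b/66b), and it ignores Ethernet/IB protocol overhead, so real-world numbers will be a bit lower:

```python
# Back-of-the-envelope usable data rates: lane signalling rate (Gbaud)
# times encoding efficiency times lane count. Protocol overhead ignored.
links = [
    # name             lanes  Gbaud      encoding efficiency
    ("10GbE SFP+",       1,   10.3125,   64 / 66),  # 64b/66b
    ("QDR InfiniBand",   4,   10.0,       8 / 10),  # 8b/10b
    ("FDR ('56GbE')",    4,   14.0625,   64 / 66),
    ("25GbE SFP28",      1,   25.78125,  64 / 66),
    ("100GbE QSFP28",    4,   25.78125,  64 / 66),
]

for name, lanes, baud, eff in links:
    per_lane = baud * eff
    print(f"{name:16} ~{per_lane:5.1f} Gbit/s per lane, ~{per_lane * lanes:6.1f} Gbit/s per port")
```

That is roughly where the usable per-lane figures come from, before any iSCSI/TCP overhead on top.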

Now, regarding affordability - 56GbE is only available with Mellanox switches, which in the STH world usually means a (converted) SX6012 or an SX6036; both have been known to be available for <$200 at times.

If you could make do with 40GbE then there are many more switches to choose from, as that's a standard, although they are not necessarily cheaper.
Head over to the Brocade thread for some options (with various numbers of 40G ports), or browse a bit more for further options listed here in the networking area.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
FWIW, when I was using these (a single connection in each) between two servers I was able to get 25-30Gbps on single file transfers. I'm trying to find some screenshots I had taken when doing that testing. If I find any I will post them.

 

Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
With a single thread? Don't think I ever managed that, even with NVMe to NVMe. Or I just never realized ;)