Mellanox MCX354A Question


saverio

New Member
Jun 24, 2020
Hello!
This is my first post; I hope it will be useful for other users in the future.
We are planning a hyperconverged infrastructure using Proxmox + Ceph. We will use MCX354A cards as NICs, with 2x 40GbE uplinks to two different Arista switches.

1) I read that the -QCBT can be flashed to -FCBT. After that, are the NICs really the same, or does the -QCBT still have performance penalties?
2) There are a lot of iperf tests using a single port, but has anyone tested it using bonding?

Thanks!
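For what it's worth, the cross-flash procedure I've seen referenced uses mstflint. A rough sketch of the steps (the PCI address 04:00.0 and the firmware file name are just placeholders, so please double-check against the mstflint documentation before burning anything):

    # check the current firmware version and PSID
    mstflint -d 04:00.0 query

    # burn the -FCBT image onto a -QCBT card; the PSID differs,
    # so the override flag is needed (this is the risky step)
    mstflint -d 04:00.0 -i fw-ConnectX3-FCBT.bin -allow_psid_change burn

    # reboot, then verify
    mstflint -d 04:00.0 query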
 

Fallen Kell

Member
Mar 10, 2020
There is not enough bandwidth to the card through the PCIe bus to fully support both 40GbE ports. A PCIe 3.0 x8 link only supports about 63Gbps in theory (and in reality it will be a little less, probably 10-15% lower, so more like 50-55Gbps). The dual ports on these cards are mainly meant for redundancy (i.e. connecting to two switches, or to two ports on the same switch) to give you failover for a port/cable/switch.
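Rough math, in case anyone wants to check: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so an x8 link gives about 8 x 8 x 128/130 ≈ 63Gbps before TLP and protocol overhead. Two 40GbE ports would need 80Gbps, so the slot is the bottleneck no matter how you bond the ports.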
 

saverio

New Member
Jun 24, 2020
Thanks. I will probably put two NICs in each server; that way it should reach 80GbE, I think.
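For future readers, this is roughly the bond we have in mind on the Proxmox side (/etc/network/interfaces, ifupdown2 syntax). The interface names and address are placeholders, and an LACP bond across two separate Arista switches also needs MLAG configured on the switch side:

    auto bond0
    iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves enp5s0 enp5s0d1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        bond-miimon 100

Keep in mind that with 802.3ad a single TCP stream still hashes onto one link, so an iperf test will only show the aggregate bandwidth with multiple parallel streams (e.g. iperf3 -P 4 or more).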