Intel XL710-QDA2 in PCIe 4.0 x 2 Slot?


jtabc2 · New Member · Joined Dec 24, 2023
I want to directly connect two computers using two Intel XL710-QDA2 40Gbps PCIe 3.0 x8 cards. However, the only slot I have available on my MSI X670-P Wi-Fi motherboard is a PCIe 4.0 x2 slot (the physical slot size is x16). See "PCI_E4" in the diagram below:

[Image: MSI X670-P block diagram showing the PCI_E4 slot]

If the card in the other computer is in a PCIe 3.0 x16 slot, does this mean I will get 10 Gbps (since the Intel card is x8 but would be limited to x2 on the MSI motherboard)? Or would it not function at all, or run at a different speed?
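Two ways I can think of to estimate the cap (per-lane rates taken from the PCIe spec, so treat this as back-of-the-envelope only):

    # Two estimates of the bottleneck when an x8 gen3 card runs in an x2 slot:
    naive = 40e9 * 2 / 8          # scale the card's 40 Gbps by 2 of its 8 lanes
    gen3_lane = 8e9 * 128 / 130   # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding
    print(f"{naive / 1e9:.1f} Gbps (naive lane scaling)")                  # 10.0 Gbps
    print(f"{2 * gen3_lane / 1e9:.1f} Gbps raw for gen3 x2 (~1.97 GB/s)")  # 15.8 Gbps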

I did a similar test on these computers using Mellanox ConnectX-2 10G SFP+ cards, and I was getting about 500 MB/sec in the PCIe 4.0 x2 slot, which was higher than the 250 MB/sec I expected, given those cards are PCIe 2.0 x8 (I think).
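The lane math for that older card, assuming it actually trained at gen2 x2 (again, just spec numbers):

    # PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> 500 MB/s usable per lane
    gen2_x2 = 2 * 5e9 * (8 / 10) / 8          # ~1000 MB/s raw for an x2 link
    print(f"{gen2_x2 / 1e6:.0f} MB/s raw")    # the ~500 MB/sec measured fits under this,
                                              # and 10GbE itself tops out at ~1250 MB/sec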
 

UhClem · just another Bozo on the bus · Joined Jun 26, 2012 · NH, USA
jtabc2 said:
... If the card in the other computer is in a PCIe 3.0 x16 slot, does this mean I will get 10 Gbps (since the Intel card is x8 but would be limited to x2 on the MSI motherboard)? Or would it not function at all, or run at a different speed? ...
With both cards configured for (and linked at) 40GbE, your throughput will be limited to that of PCIe gen3 x2: ~1600-1700 MB/s [~13-14 Gbps] (real-world numbers). Whether you actually achieve that data rate will depend on CPU/NIC-driver performance.
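Roughly where those real-world numbers come from (illustrative overhead figures, assuming a typical 256-byte max TLP payload and ~24 bytes of per-packet overhead):

    # Raw gen3 x2 rate, reduced by transaction-layer packet overhead:
    raw = 2 * 8e9 * (128 / 130) / 8    # ~1969 MB/s raw for gen3 x2
    efficiency = 256 / (256 + 24)      # ~91% payload efficiency at 256-byte TLPs
    print(f"{raw * efficiency / 1e6:.0f} MB/s")  # ~1800 MB/s ideal; flow control and
                                                 # NIC/driver losses land nearer 1600-1700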

A worthwhile experiment would be to then test the performance with the card in your PCIe x4 slot. If those numbers are appealing, and your case accommodates full-height (FH) cards (and you are sufficiently creative/motivated), you could use an M.2 riser-cable adapter to give the card x4 lanes, positioning/attaching it (with its half-height bracket) using the bracket/case opening of that M.2 slot.
 

jtabc2 · New Member · Joined Dec 24, 2023
UhClem said:
With both cards configured for (and linked at) 40GbE, your throughput will be limited to that of PCIe gen3 x2: ~1600-1700 MB/s [~13-14 Gbps] (real-world numbers). ... you could use an M.2 riser-cable adapter to give the card x4 lanes ...
Thanks. I bought the cards and tried the card in two different slots on this motherboard. Here are the results I observed:

Setup | iperf3 -c 192.168.XXX.XXX -P 8 | iperf3 -c 192.168.XXX.XXX -P 8 -R
PC1: PCIe 4.0 x4 slot; PC2: PCIe 3.0 x8 slot | 2.36 GB/sec | 1.98 GB/sec
PC1: PCIe 4.0 x2 slot; PC2: PCIe 3.0 x8 slot | 772 MB/sec | 748 MB/sec
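For context, here is how those measurements compare against the rough raw link rates (my arithmetic; gen3 x4 because the gen4 slot drops to the card's gen3):

    # Measured throughput vs. raw link ceiling (gen3 x4 ~3.94 GB/s, gen3 x2 ~1.97 GB/s)
    print(f"x4 run: {2.36 / 3.94:.0%} of ceiling")   # ~60%
    print(f"x2 run: {0.772 / 1.97:.0%} of ceiling")  # ~39%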

I expected it to be a little better than it was in the x2 slot, so I am interested in your suggestion to use an M.2 riser-cable adapter. Any suggestions? Unfortunately, I have two GPUs in the system, so I think it will be difficult to get any riser-cable setup to work. See this image:
[Image: case interior showing both GPUs installed]
Let me know if you have any ideas.
 

UhClem · just another Bozo on the bus · Joined Jun 26, 2012 · NH, USA
jtabc2 said:
I expected it to be a little better than it was in the x2 slot, so I am interested in your suggestion to use an M.2 riser-cable adapter. Any suggestions?
Yes, it could've/should've been ~1500+ MB/sec [@ x2], especially given your x4 numbers. And, given your two CPUs, I'd "think" ~3000+ for the x4 numbers. However, I don't use Windows (for anything "performance-related"), and I've found ntttcp a better testing tool than iperf2/iperf3. Also, ntttcp is written by a person at Microsoft, so it should perform well on Windows.

Given that using x4 adds $/effort/mess, it might be wise to repeat the tests with ntttcp before embarking. (Your call.)
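If it helps, the usual NTttcp invocation looks something like this (thread count mirrors your -P 8; the IP and duration are placeholders, so check ntttcp.exe -h for your build):

    ntttcp.exe -r -m 8,*,192.168.XXX.XXX -t 15   (on the receiver)
    ntttcp.exe -s -m 8,*,192.168.XXX.XXX -t 15   (on the sender)

Run it in both directions, like your iperf3 -R pass, by swapping which box sends.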

I'll (try to) address the mechanical/logistical riser-cable issues in your other thread [Link].