Best 2 x 10G performance on older system: 2 x single vs 1 x dual-port card?


TheBloke

Active Member
Hi all

I want to add 2 x 10G ports to my workstation and my server. My question is: is there any practical difference, positive or negative, between doing this via one dual-port card (in my case, an X520-DA2) versus 2 x single-port cards (X520-DA1)?

The cards will go in PCIe 2.0 x8 slots, so I have sufficient bandwidth for 20G on one card.
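For anyone checking my maths, my back-of-envelope numbers (as I understand the PCIe spec, so do correct me if I'm wrong) are:

  PCIe 2.0  = 5GT/s per lane with 8b/10b encoding -> ~500MB/s usable per lane, per direction
  x8 slot   = 8 x 500MB/s                         -> ~4GB/s per direction
  2 x 10GbE = 20Gbit/s                            -> ~2.5GB/s raw, a little less after Ethernet/TCP overhead

So ~4GB/s of slot bandwidth against ~2.5GB/s of network traffic, which suggests the slot itself shouldn't be the bottleneck for a dual-port card.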

I have no experience of 10G Ethernet yet, but I have read a number of posts on this forum where people report getting rather less than 10G when they test with iperf, and they often put that down to motherboard/CPU bottlenecks. This made me wonder whether those bottlenecks might change depending on whether one or two PCIe slots are involved.
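When I do get around to testing, my plan is to run something like the following (assuming iperf3 is available on both ends; the hostname is just a placeholder):

  # on the server
  iperf3 -s

  # on the workstation: 4 parallel streams for 30 seconds
  iperf3 -c nas.local -P 4 -t 30

From what I've read here, single-stream results can understate what the link can actually do, hence the parallel streams.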

I'm upgrading both my Solaris server/NAS and Windows 10 workstation, and hope to buy a Quanta LB6M switch to go between them. But my question relates only to the workstation, as I only have one PCIe slot free in the server and so must use a dual-port card there.

I want 2 x 10G ports because my NAS tops out at 1.8-2.0GB/s sequential writes and about 1.5GB/s sequential reads. While I don't expect to see that full performance day-to-day very often, if at all, having spent so much time and money upgrading my server to that level, and adding a proper 10G network, I may as well give myself the best chance of using it all. Especially as I hope to have a decent amount of SSD cache, which should accelerate some requests.

The server will be providing both iSCSI and SMB shares to the workstation, as well as SMB and NFS shares to other (1G) hosts around the network. I am going to try to use iSCSI to make my workstation diskless: moving its current 500GB SSD into the server as ZFS cache, and booting and running Windows 10 from iSCSI.
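For reference, my rough plan on the Solaris side looks something like the sketch below (COMSTAR commands written from memory, so treat this as a sketch rather than gospel; the pool, zvol and device names are placeholders):

  # add the old workstation SSD as L2ARC
  zpool add tank cache c2t1d0

  # create a zvol to back the workstation's iSCSI boot disk
  zfs create -V 500G tank/ws-boot

  # expose it over iSCSI via COMSTAR
  svcadm enable stmf
  svcadm enable -r svc:/network/iscsi/target:default
  sbdadm create-lu /dev/zvol/rdsk/tank/ws-boot
  stmfadm add-view <lu-guid>
  itadm create-target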

Here's the hardware in the Windows 10 workstation:
  1. Intel i7 920; 4 cores, 8 threads; overclocked to 3.6GHz with 7.2GT/s QPI; Nehalem architecture
  2. 24GB 1600MHz triple-channel DDR3 RAM
  3. PCIe 2.0: two x16 slots + two x8 slots
    1. one x16 is used by the Nvidia GPU, leaving 3 x PCIe 2.0 x8 slots available

Can anyone comment on whether there's likely to be any practical difference between using 2 x single-port 10G cards versus 1 x dual-port in this hardware?

Is there a better chance of achieving full bandwidth with two cards? Or is it the opposite: does using two cards, and therefore more IRQs, increase the chance of problems?
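On the Windows side, my understanding is that RSS (receive side scaling) is what spreads the receive/interrupt load across cores, and I gather it can be checked with PowerShell along these lines (the cmdlets are standard Windows ones, but the adapter name is a placeholder):

  # show RSS state and CPU assignments for all adapters
  Get-NetAdapterRss

  # make sure it's enabled on the 10G port
  Enable-NetAdapterRss -Name "Ethernet 3"

Presumably whatever IRQ/RSS behaviour applies would apply per port either way, whether the two ports sit on one card or two, but that's exactly the sort of thing I'm hoping someone here can confirm.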

My preferred option is to get two cards, as this gives me more flexibility: if I find I don't end up using 20G in the workstation, I have the option of moving one card to a third machine to bring it up to 10G. Two cards also spread the heat better in my case, and as a tiny extra bonus they're £5 cheaper.

So if it makes no difference either way I'll get two cards, but I wanted to check first to see whether it changes anything at all.

Thanks very much in advance
 

TheBloke

Active Member
I'm hoping to finally buy my NICs by the end of this week, so I'm just wondering if anyone has any thoughts?

Any reason at all to favour a dual-port 10G card over 2 x single-port cards, or vice versa?

Thanks in advance
 

Rand__

Well-Known Member
In your case I'd go with 2 cards.
Usually 1 card is better since it leaves a slot free for future use, but that doesn't look like your primary concern :)
Throughput is ~4GB/s on a PCIe 2.0 x8 slot, and each 10GbE port tops out at roughly 1GB/s in practice, so no need to worry with a single card (if your CPU/drives can even push that much)
 

TheBloke

Active Member
OK, thanks a lot Rand, that makes sense. I do need to save slots in the server, so I'll go dual-port there, but in the workstation it's not an issue.