ConnectX EN 10GbE Card $70


cactus

Moderator
Jan 25, 2011
830
75
28
CA
Here
From my experience with CX EN, they are slightly slower than Intel 10GbE cards with more than a few writers. These will mix with standard 10GbE cards.
 

cafcwest

Member
Feb 15, 2013
136
14
18
Richmond, VA
I'd think these would be great for the users here for point-to-point use. But since there are no cheap switching options, a widespread deployment of them isn't particularly feasible.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
At $70 each I think I'll give a couple a try... not much exposure if they don't work well. Just too bad they don't have low-profile brackets.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
It would get expensive quickly, but I wonder if you could build a "switch" with four of these in a box. I'm guessing 80Gbps would be too much for an x86-based switch, but it's interesting.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
It would get expensive quickly, but I wonder if you could build a "switch" with four of these in a box. I'm guessing 80Gbps would be too much for an x86-based switch, but it's interesting.
It's not completely inconceivable (nor would it be very hard). You would have to find a motherboard with more than the average number of PCIe x8 slots. It really wouldn't take much CPU to keep up with 80Gbps - an E3 could do it pretty easily, though you are probably looking at E5-series CPUs to get more than two x8 slots without using a PCIe switch chip (or maybe that could be a use for the X8SIA board and X5550 chips I am trying to sell...). You could build it with readily available software (RouterOS) if the drivers recognize this card.

The problem with this idea would be latency. While you could keep up with the raw throughput, you'd never do it without adding some packet latency. Without going into detail, you would really be building a layer-3 bridge (a router with the same subnets on more than one interface) rather than a switch. You'd be adding a millisecond or more of latency to every packet. Doing this for a GigE "switch" might work pretty well, but adding extra inter-packet latency to a 10GbE application is unlikely to have satisfying results :(
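
For what it's worth, a rough back-of-envelope sketch (my own figures, assuming PCIe 2.0 x8 slots and standard Ethernet framing overhead) of what 80Gbps actually asks of the box - the bus has the headroom; it is the per-packet work, plus the latency each hop adds, that hurts:

# Rough feasibility math for the "four cards in one box" idea.
# My assumptions, not from the thread: the cards sit in PCIe 2.0 x8 slots
# (~32 Gbit/s usable per direction per slot) and Ethernet adds ~38 bytes
# per frame on the wire (preamble + header + FCS + inter-frame gap).

SLOT_GBPS = 32        # approx. usable PCIe 2.0 x8 bandwidth, per direction
CARD_GBPS = 2 * 10    # one dual-port 10GbE card running flat out
TARGET_GBPS = 80      # four such cards
FRAME_OVERHEAD = 38   # extra bytes per Ethernet frame on the wire

def packets_per_second(gbps: float, mtu: int) -> float:
    """Frames/sec the box must forward to sustain `gbps` at frame size `mtu`."""
    return gbps * 1e9 / ((mtu + FRAME_OVERHEAD) * 8)

print(f"Per-slot load: {CARD_GBPS} of ~{SLOT_GBPS} Gbit/s available")    # fits
print(f"Frames/sec at MTU 1500: {packets_per_second(TARGET_GBPS, 1500):,.0f}")
print(f"Frames/sec at MTU 9000: {packets_per_second(TARGET_GBPS, 9000):,.0f}")

So roughly 6.5M frames/sec at a 1500-byte MTU, or about 1.1M frames/sec with jumbo frames - the kind of load where the software forwarding path, not the hardware, sets the ceiling.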
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
It's not completely inconceivable (nor would it be very hard). You would have to find a motherboard with more than the average number of PCIe x8 slots. It really wouldn't take much CPU to keep up with 80Gbps - an E3 could do it pretty easily, though you are probably looking at E5-series CPUs to get more than two x8 slots without using a PCIe switch chip (or maybe that could be a use for the X8SIA board and X5550 chips I am trying to sell...). You could build it with readily available software (RouterOS) if the drivers recognize this card.

The problem with this idea would be latency. While you could keep up with the raw throughput, you'd never do it without adding some packet latency. Without going into detail, you would really be building a layer-3 bridge (a router with the same subnets on more than one interface) rather than a switch. You'd be adding a millisecond or more of latency to every packet. Doing this for a GigE "switch" might work pretty well, but adding extra inter-packet latency to a 10GbE application is unlikely to have satisfying results :(
Congratulations on post #300! Would just be interesting to see how bad it is :)
 

cactus

Moderator
Jan 25, 2011
830
75
28
CA
I was looking into using a server as a switch for InfiniBand. I came to the conclusion it would make more sense to get a $1k 36-port IB switch and put an IB card in my firewall to act as an L3 IP-to-IPoIB gateway.

I estimated on the low side: E5-2603 ~$100; motherboard ~$300; memory ~$30; QDR dual-port ~$250; QDR single-port ~$150.
So a four-port "swirver" (:cool:) using single-port cards, so as not to limit throughput, comes to ~$1,050. And, as PL pointed out, with more latency than a real switch.
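
The arithmetic behind that figure, for anyone following along (my rounding; the gap to ~$1,050 covers cables and odds and ends):

# Low-side parts estimate for a four-port "swirver" with single-port QDR cards.
parts = {
    "E5-2603 CPU": 100,
    "motherboard": 300,
    "memory": 30,
    "QDR single-port HCA x4": 4 * 150,
}
print(sum(parts.values()))  # 1030 -- call it ~$1,050 with cables and shipping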

For you Patrick, it might make more sense if you can repurpose some hardware.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
I could give it a quick try. I do have a motherboard with lots of x8 slots (X8SIA), CPUs (X5550s) and memory. It's all sitting ready to sell. I just ordered a couple of these ConnectX EN cards to put into my C6100 (too bad they don't have low-profile brackets, but I'll make do). I guess I could throw something together before I put everything where it is intended to go.

Of course, the project is just not all that interesting when you have a Juniper EX-2500 in your rack...

Given enough time, I guess I could do some benchmarks comparing something like what Patrick suggests to a true wire-speed switch like the EX (since I have them side by side). Unfortunately, as Khan said to Kirk, time is a luxury I don't have.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Yes. That's what I do with most of my 10GbE: SFP+ copper DAC cables. Just a couple of links use actual optics.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
This guy's listing is still active, and this is a great price for 10GbE SFP+ cards. I got mine and installed it into one sled of a C6100 running Proxmox VE (basically Debian). My Solaris-based ZFS server has an Intel X520 NIC, and everything is connected through a Juniper switch. Without doing any tuning (9000-byte jumbo frames on):

60 second netperf from client to server: 9891.16 Mbits/sec (pretty darn close to 10G)
60 second netperf from server to client: 9877.13 Mbits/sec (negligible difference)
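
A quick sanity check on those numbers (my math, assuming a 9000-byte MTU, standard Ethernet framing, and plain 20-byte IP + 20-byte TCP headers) suggests that is essentially line rate for TCP:

# How close is 9891 Mbit/s to the practical TCP ceiling on a 10GbE link?
LINE_RATE_MBPS = 10_000   # 10GbE
MTU = 9000                # jumbo frames, as configured above
ETH_OVERHEAD = 38         # preamble + Ethernet header + FCS + inter-frame gap
IP_TCP_HEADERS = 40       # 20-byte IP + 20-byte TCP, assuming no TCP options

ceiling = LINE_RATE_MBPS * (MTU - IP_TCP_HEADERS) / (MTU + ETH_OVERHEAD)
print(f"TCP goodput ceiling: {ceiling:.0f} Mbit/s")        # ~9914
print(f"Measured vs ceiling: {9891.16 / ceiling:.1%}")     # ~99.8%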

Bonnie++ from the Proxmox box using the NFS mount:
Version 1.03d ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
proxmox-node 96576M 68845 98 551492 34 153044 15 68486 94 424345 19 213.3 0

Kinda hard to read in the forum font... but basically 551.5 MBytes/sec sequential write and 424.3 MBytes/sec sequential read (the pool benches at 600 MBytes/sec write and 1,000 MBytes/sec read directly on the server).

I'll post comparisons to Intel-based 10GbE cards later.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
This guy's listing is still active, and this is a great price for 10GbE SFP+ cards. I got mine and installed it into one sled of a C6100 running Proxmox VE (basically Debian). My Solaris-based ZFS server has an Intel X520 NIC, and everything is connected through a Juniper switch. Without doing any tuning (9000-byte jumbo frames on):

60 second netperf from client to server: 9891.16 Mbits/sec (pretty darn close to 10G)
60 second netperf from server to client: 9877.13 Mbits/sec (negligible difference)

Bonnie++ from the Proxmox box using the NFS mount:
Version 1.03d ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
proxmox-node 96576M 68845 98 551492 34 153044 15 68486 94 424345 19 213.3 0

Kinda hard to read in the forum font... but basically 551.5 MBytes/sec sequential write and 424.3 MBytes/sec sequential read (the pool benches at 600 MBytes/sec write and 1,000 MBytes/sec read directly on the server).

I'll post comparisons to Intel-based 10GbE cards later.
Very impressive. Maybe not full 10GbE on bonnie++, but certainly better than you'd get even with multipathed 1GbE ports.
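
For a rough comparison (my numbers: the 551.5 / 424.3 MByte/s figures quoted above against an idealized 4 x 1GbE bond):

# Comparing the NFS results above with what bonded gigabit could deliver.
# Assumes perfect load balancing across a 4 x 1GbE bond; in practice LACP
# hashing usually pins a single NFS/TCP flow to one 1 Gbit/s link anyway.
seq_write_MBps, seq_read_MBps = 551.5, 424.3
bond_ideal_MBps = 4 * 1000 / 8            # 4 x 1GbE in decimal units = 500 MB/s

print(f"seq write on the wire: {seq_write_MBps * 8 / 1000:.1f} Gbit/s")  # ~4.4
print(f"seq read on the wire:  {seq_read_MBps * 8 / 1000:.1f} Gbit/s")   # ~3.4
print(f"ideal 4x1GbE bond ceiling: ~{bond_ideal_MBps:.0f} MByte/s")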