$30.29 OBO - HP 10GB Mellanox ConnectX-2 10GBe SFP+


h0tw1r3

Member
Oct 30, 2014
44
21
8
United States
zaplabs.com
New 671798 001 HP 10GB Mellanox Connectx 2 PCIe 10GbE Ethernet NIC | eBay

I have two of these. Work really well in my Linux boxen.

I've used HP DACs, Cisco DACs, and Fiberstore SFP+ modules (flashed to Aruba spec) connected to a Quanta LB4M and an Aruba S2500.

Comes with HP firmware, but can be reflashed to stock Mellanox firmware. You can also customize the onboard PXE ROM. It uses iPXE (open source) by default. Best part (for me) is you can flash it right in Linux.
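For reference, the in-Linux reflash is usually done with Mellanox's open-source mstflint tool; a rough sketch under assumed names (the PCI address and firmware filename below are placeholders, not from the listing):

```shell
# Query the firmware currently on the card (PCI address is an example)
mstflint -d 02:00.0 query

# Back up the existing HP image before touching anything
mstflint -d 02:00.0 ri hp_backup.bin

# Burn a stock Mellanox image; --allow_psid_change is required when
# moving from the HP OEM PSID to the stock Mellanox one
mstflint -d 02:00.0 -i fw-ConnectX2-stock.bin --allow_psid_change burn
```

Flashing the wrong image can brick the card, so keep that backup around.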

I have been reading up on their OEM firmware on the Mellanox website. Apparently you can change some of the features by passing an .ini file when you build a custom PXE ROM paired with their firmware.
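That .ini step goes through Mellanox's mlxburn wrapper, which composes a released firmware source with a board-specific configuration; a hedged sketch (both filenames are placeholders):

```shell
# Generate a flashable image from a released .mlx firmware plus a
# board .ini, writing to a file instead of burning a card directly
mlxburn -fw fw-ConnectX2-rel.mlx -conf custom_board.ini -wrimage custom.bin

# The resulting custom.bin can then be burned with mstflint as usual
```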

If you make an offer, let me know what the seller accepts. I may pick up another one (or two).
 

Biren78

Active Member
Jan 16, 2013
550
94
28
Great deal. Aren't the CX-2s coming off of support for new OSes?
 

Lance Joseph

Member
Oct 5, 2014
82
40
18
Just got one! The seller accepted an offer for $28 + free shipping. They just relisted for $34.

Thanks!
 

legopc

Active Member
Nov 2, 2014
227
38
28
28
The Netherlands
What kind of SFP+ module would one use with this, and how much do they cost? Do you also need the exact same SFP modules in the switch, or can they be different?
 

PigLover

Moderator
Jan 26, 2011
3,186
1,546
113
1. Almost any brand of SFP+ module works with this card. I've used Intel, Cisco, Juniper, and cheap Chinese knockoffs with success in ConnectX-2/3 EN.
2. Cost can vary. I've seen lows of $10 and highs that are ridiculous. Shop eBay.
3. You probably want SR (short reach) SFP+ modules.
4. No; if you are using optics they do not have to match brand on each end. They must match optics type (SR-SR, etc.).
5. Consider copper SFP+-to-SFP+ (DAC) cables if your distances are short. Unless you need to go through a wall, they are much easier.

Good luck!
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,826
113
Are these better than the Brocade 1020s?
Personally, I have found Mellanox ConnectX-2s to be much easier to work with. I have the Mellanox cards installed and the Brocade 1020s on the shelf.

Now the ConnectX-2 series is getting older, but it is fairly well supported and much easier to get cables for.
 

james23

Active Member
Nov 18, 2014
450
124
43
52
Is the reason people buy these (home users / non-enterprise) to connect their main system to a NAS/SAN (like a cheap box running FreeNAS with several HDDs, for example)?

2 (3 really) questions:

1) I'm trying to get an idea of why non-enterprise people need more than full-duplex gigabit (which just about every motherboard has). I'm assuming it's so they can access file shares from a NAS? (Who needs more than ~120 MB/s in a home setup, even on the uplink, unless of course you are accessing a disk box directly?)

2) Would two of these (plus 2x GBICs) work in this scenario? I have a Windows 2008 R2 server and a 12-bay FreeNAS server with several disks. Put one in the Win 2008 box and assign the adapter an internal/private IP (192.168.2.1); put one in the FreeNAS box and assign it an IP on the same subnet as the 2008 box; then connect to the FreeNAS storage via IP in Windows. Would that work, or am I missing something / some part?
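That point-to-point setup does work with no switch at all; a rough sketch of the addressing (the interface names below are assumptions, not from the post; the 192.168.2.x addresses are from the question):

```shell
# On the FreeNAS (FreeBSD) box, assuming the card shows up as mlxen0:
ifconfig mlxen0 192.168.2.2 netmask 255.255.255.0 up

# On the Windows 2008 R2 box, from an elevated prompt; "10GbE" is
# whatever the new adapter is named under Network Connections:
netsh interface ip set address "10GbE" static 192.168.2.1 255.255.255.0

# Then reach the FreeNAS shares over the fast link from Windows:
#   \\192.168.2.2\share
```

Windows will route traffic to 192.168.2.x over the 10G adapter automatically, since that subnet only exists on that link.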

3) Please see Mr. F's question above mine, as he needs an answer as well.

THANKooouuuuuuu!
 

PigLover

Moderator
Jan 26, 2011
3,186
1,546
113
@james23, it is clear that none of us need to run 10GbE. It's more like "because we can" or "because it's fun". Remember that you are dealing with a group of extreme hobbyists here... we like to test limits, see how much mileage we can get out of used/eBay equipment, and try things that are not 'average'. We also like to learn, explore, and share for the benefit of others. And sometimes just brag a bit.

Really we are just like overgrown children.
 

james23

Active Member
Nov 18, 2014
450
124
43
52
I get that totally, but what is the source of the data going across the 10-gig link, such that it is capable of more than 120 MB/s? Is this really going into an $800+ 10-gig switch, or are the connections generally PC-to-PC at 10 Gbit?
tks!
 

PigLover

Moderator
Jan 26, 2011
3,186
1,546
113
I get that totally, but what is the source of the data going across the 10-gig link, such that it is capable of more than 120 MB/s? Is this really going into an $800+ 10-gig switch, or are the connections generally PC-to-PC at 10 Gbit?
tks!
It's not really that hard to saturate 10GbE. Reading from an array of 12 hard disks will do it. So will writing to 2 or 3 SSDs. And once you are used to it, 1GbE never feels the same again.
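The back-of-envelope math bears that out (the per-disk figure is a rough assumption for modern spinning disks, and framing overhead is ignored):

```shell
# 10GbE line rate: 10,000 Mbit/s over 8 bits per byte
echo $(( 10000 / 8 ))   # 1250 MB/s

# 12 spinning disks at a conservative ~150 MB/s sequential each
echo $(( 12 * 150 ))    # 1800 MB/s -- more than enough to fill the pipe

# ...while a single gigabit link tops out around
echo $(( 1000 / 8 ))    # 125 MB/s
```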
 

chinesestunna

Active Member
Jan 23, 2015
622
195
43
56
Man, just like PigLover said, we do it because we're curious and can :) Boys with toys. At this price I'm tempted to pick up a pair for a direct link between my server RAID6 array and workstation array; both should be good for 400+ MB/s read and write for large sequential transfers.
I was thinking about GbE NIC bonding, but this seems cheaper and faster in the end.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
So, an update on pricing. I couldn't find anyone selling them that cheap anymore, but Esiso has them (or similar) for $34.95 with offers. Just ordered 40 for a project ;) Got them at $34 for 2, $32 for 40. Wasn't in the mood to be too aggressive; needed to move the project ahead. The first 2 worked great.

Also bought a few QSFP-to-SFP+ adapters (MAM1Q00A-QSA) for some existing VPI cards; they cost about the same, in case you already have Mellanox QSFP cards.
 

Rain

Active Member
May 13, 2013
276
124
43
These "EN"/"GbE" VPI ConnectX cards are presented to the OS as a standard network interface, right? And they could be used with "real" 10GbE switches and NICs for future expansion? Do they do checksum offloading, or does the CPU do everything (and thus deal with a lot of interrupts)?

Currently I have some ConnectX-1 IB cards hacked into an IPoIB-type configuration (point-to-point, no switches). Latency is rather high... higher than I thought it would be, given the low-latency claims regarding InfiniBand. CPU usage during high-bandwidth transfers is definitely limiting the speed as well. I'm curious whether upgrading to ConnectX-2 cards will reduce the latency between the few machines I have directly attached to each other.

Also, has anyone caught word of whether Mellanox is going to extend support for ConnectX-2 into vSphere/ESXi 6? I'm probably going to shoot them an email; I just figured I'd ask here first.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
I can answer the first part quickly. Yes, they are presented as eth# devices. We are using them against Gnodal GS7200 switches.

Mellanox may release drivers themselves as well. They have been good at that.
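On Linux, a quick way to confirm the card came up as a plain Ethernet interface is ethtool (the interface name below is an example; mlx4_en is the usual in-kernel driver for ConnectX-2 in Ethernet mode):

```shell
# Driver behind the interface should report mlx4_en for EN mode
ethtool -i eth2

# Link state and negotiated speed; expect "Speed: 10000Mb/s" when up
ethtool eth2 | grep -E 'Speed|Link detected'
```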
 

Entz

Active Member
Apr 25, 2013
269
62
28
Canada Eh?
Also bought a few of the QSFP -> SFP+ adapters (MAM1Q00A-QSA) for some existing VPI cards which cost the same price in case you already had Mellanox QSFP cards.
Hopefully you have better luck than I did. I couldn't get them to work in ConnectX-2 VPI cards (they would not bring up a link).
 

Entz

Active Member
Apr 25, 2013
269
62
28
Canada Eh?
I can answer the first part quickly. Yes, they are presented as eth# devices. We are using them against Gnodal GS7200 switches.

Mellanox may release drivers themselves as well. They have been good at that.
Yeah, support for the EN cards is usually done quite quickly, if not already there at launch. InfiniBand can take a bit, though; I seem to remember the ESXi 5 OFED drivers took several months.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Yep, we bought a pair and tested with our cards and switches; worked great. So we bought another 50... so far everything seems to work in our new design. Thanks for the comment, though; we will be sure to watch carefully.

Hopefully you have better luck than I did. I couldn't get them to work in ConnectX-2 VPI cards (they would not bring up a link).
 

Rain

Active Member
May 13, 2013
276
124
43
I can answer the first part quickly. Yes, they are presented as eth# devices. We are using them against Gnlodal GS7200 switches.
Thanks for the quick response, Chuckleb! The temptation of buying some of these (or Mellanox's dual-port variant) is quickly growing...!

If anyone knows whether these ConnectX-2 EN cards offload, or whether the CPU is left to do everything (requiring fast CPUs on both ends of the transmission to see anything close to 10GbE speeds), I'd appreciate it! Edit: I read Mellanox's spec sheet; it seems they do offload!
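For anyone else checking, ethtool shows which offloads the driver has enabled on Linux (interface name is an example):

```shell
# List offload features; checksum offload, TCP segmentation offload,
# and scatter-gather should all show "on" for these cards
ethtool -k eth2 | grep -E 'checksum|tcp-segmentation|scatter-gather'
```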

What voodoo magic does Mellanox do "under the hood" when compared to similar offerings from Intel? Mellanox cards (judging by PCB layout & heatsink, nothing more) look way less complex despite supporting the same feature set.
 