Mellanox Infiniband (20 Gbps) HBA for 90 bucks


Patrick

Administrator
Staff member
Dec 21, 2010
Nice deal. I have two of those. They do not have onboard memory so you cannot use them in Solaris environments. For Windows, very simple installation IIRC.
 

ehorn

Active Member
Jun 21, 2012
Nice deal. I have two of those. They do not have onboard memory so you cannot use them in Solaris environments. For Windows, very simple installation IIRC.
Might explain the firesale... :)

Nice site and thanks for all your contributions. I have enjoyed lurking for a bit.

peace,
 

vv111y

Member
May 6, 2011
Niagara Falls, Canada
Thanks ehorn & Patrick,

Just ordered 2. At checkout it said 2 were left, so I may have the last ones. If someone out there *really*, *really* needed them, I just want cost & shipping. Otherwise I've got plans.
 

ehorn

Active Member
Jun 21, 2012
The adapters arrived....

Here are some quick stats from an old box set up as an SRP initiator...

(Benchmark screenshots: Anvil, ATTO, CrystalDiskMark, HD Tune)
Target is Ubuntu with an 8GB RAM disk...
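(A minimal sketch of the RAM-backed store on the target side, assuming an 8GB tmpfs mounted at /mnt/ramdisk and an illustrative file name; the SRP target's fileio handler would then export this file as a block device:)

[CODE]
# Hypothetical sketch: create an 8GB tmpfs-backed file for an SRP target to
# export as a block device. Mount point, size, and file name are assumptions.
import os
import subprocess

os.makedirs("/mnt/ramdisk", exist_ok=True)
subprocess.run(["mount", "-t", "tmpfs", "-o", "size=8g",
                "tmpfs", "/mnt/ramdisk"], check=True)

# Preallocate the backing file; a vdisk_fileio-style handler then points at it.
subprocess.run(["dd", "if=/dev/zero", "of=/mnt/ramdisk/srp_disk.img",
                "bs=1M", "count=8192"], check=True)
[/CODE]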

Pretty good headroom on these adapters...

peace,
 

ehorn

Active Member
Jun 21, 2012
Very nice benchies, they obviously work very well.
Agreed.... And this was pretty much plug and play (i.e. no tuning, etc...)

I am quite impressed with the latency of these 'older' adapters. The cards are spec'd to give 16Gbps of usable bandwidth; I am seeing ~10 Gbps here.
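(For reference, the 16 Gbps figure comes from the 20 Gbps DDR 4X signaling rate less the 8b/10b line-encoding overhead:)

$$4\ \text{lanes} \times 5\,\text{Gbps (DDR)} = 20\,\text{Gbps signaling}, \qquad 20\,\text{Gbps} \times \tfrac{8}{10} = 16\,\text{Gbps of data}$$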

But I suspect something other than the HBA/driver combo is clipping the pipe a bit here...

I hope to play some more this weekend to see if they can get closer to full wire speed.

peace,
 

_Adrian_

Member
Jun 25, 2012
Leduc, AB
If I recall correctly, a full duplex mode has to be selected... or was it 10GbE per link?

Anyways...
I have 6 coming for my servers and am waiting for the serial cable for my Topspin 120 so I can set it up and update the firmware on the switch.
Currently I'm on a mission to find a PCI-X one for my firewall.

The guys on the pfSense forum will be starting development on 2.2, which is going to be based on FreeBSD 9.0 and has OFED support.
For now they are working the bugs out of 2.1-BETA, which is currently "under construction".
 

cactus

Moderator
Jan 25, 2011
CA
I have a few different IB cards coming: three are 20Gbps, including one with RAM, and the other is a 10Gbps card. I'll do a write-up once I get stuff working.
 

ehorn

Active Member
Jun 21, 2012
I suspect some (more) modern gear will do wonders for reaching wire speed here.

As I clock up ol' Betsy's bus and her 4GB of DDR2 RAM, IB wants to climb further, but the host keeps clipping it off.

This platform has been a very solid performer for me for many years, but it is showing its age. I am reminded of Scotty: "I'm givin' 'er all she's got, cap'n, an' I cain't give 'er no more!" hehe...

She did manage to touch 1600 MB/s reads in ATTO. But all metrics continue to climb.

(Benchmark screenshots: Anvil, ATTO, CrystalDiskMark, HD Tune)

IMHO, I am simply hardware bound on this host. Time to modernize the lab I suppose. Technology refreshes are a fun (albeit costly) hobby.

All in all, I am well pleased. Seeing ~12.5 Gbps out of 16 Gbps effective is fine by me, particularly given the aging gear this test was conducted on.
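(That estimate lines up with the ATTO peak quoted above, converting MB/s to line rate:)

$$1600\,\text{MB/s} \times 8\,\text{bits/byte} = 12.8\,\text{Gbps} \approx 12.5\,\text{Gbps out of the } 16\,\text{Gbps effective rate}$$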

IMHO, at their performance/value point, they would serve a data-hungry consumer (who requires a high-speed fabric) pretty darn well.

Perhaps that technology refresh will have to include a couple ConnectX 40Gb cards for braggin' rights :)

P.S. I would enjoy seeing you guys' numbers when you get them set up. Maybe better served in a more purpose-built thread.

peace,
 

ehorn

Active Member
Jun 21, 2012
Have you tried the older Myricom 10G network cards ??
They are running for about $75 on fleabay...
No, I have not.

But whats that good for if your hardware is the limiting factor ???
Just remember... the chain is as strong as its weakest link :)
MS was marketing SMB Direct with ConnectX-3s at very near wire speed at Interop in Vegas. The setup was the Romley platform with dual CPUs and Fusion-IO II drives. They demo'd 5.8 GB/s point to point.

http://blogs.technet.com/b/josebda/...ver-mellanox-connectx-3-network-adapters.aspx
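(Assuming that demo ran a single FDR 56 Gbps port, which uses 64b/66b encoding, the 5.8 GB/s figure works out to roughly 85% of wire speed:)

$$56\,\text{Gbps} \times \tfrac{64}{66} \approx 54.3\,\text{Gbps} \approx 6.8\,\text{GB/s}, \qquad 5.8 / 6.8 \approx 0.85$$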

EDIT: A nice showing for both companies IMHO. But this setup has quickly gone from "Great Deals" to ludicrously fat wallets. :)

Nevertheless, the modern IB HCAs are smokin' fast.

peace,
 

cactus

Moderator
Jan 25, 2011
CA
I just got two Cisco-branded MHEA28-XTC set up with SRP from my Win7 computer to my main desktop (Linux Mint with kernel 3.0) as the target. My target was set up using the vdisk_fileio handler with a file on a tmpfs mount. The target was then formatted with NTFS from the Win7 box. The Win7 box is a 2600K at 4.6GHz with 16GB of DDR3-1600. The Linux target is a G630 (with SpeedStep, running at 1.6GHz during testing except for the QD32 4K test with CrystalDiskMark 3) and 16GB of DDR3-1066 ECC.

Max I saw on sequential with CDM3 was 890MB/s when using an MTU of 65K and connected mode. With the default MTU and datagram (non-connected) mode, I saw ~850MB/s. I hope I am limited by the memory on my target. I have an i7-930 and a G34 6128 system to use for more in-depth testing, so I can see if I am limited by memory throughput. I have an MHGA28-1TC (dual 20Gbps with memory) coming in the next few days along with a Voltaire 400 EX (10Gbps with mem). I will do better testing and a write-up of them later this week or this coming weekend. I also have two Intel 10GbE cards I will try to compare with. On first impression, IB is much cheaper, at least theoretically faster than 10GbE, uses less power (compared to gen1 copper 10GbE), and is only a little harder to set up.
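(For anyone following along, the large-MTU/connected-mode switch is an IPoIB setting; a minimal sketch, assuming the port shows up as ib0 and this runs as root:)

[CODE]
# Hypothetical sketch: put an IPoIB interface into connected mode and raise
# its MTU, per the "mtu of 65K and connected mode" test above.
# Assumes the IB port is exposed as ib0 and the script runs as root.
from pathlib import Path
import subprocess

IFACE = "ib0"

# IPoIB exposes its mode via sysfs: "datagram" (default) or "connected".
Path(f"/sys/class/net/{IFACE}/mode").write_text("connected\n")

# Connected mode allows an MTU of up to 65520 vs. ~2044 in datagram mode.
subprocess.run(["ip", "link", "set", IFACE, "mtu", "65520"], check=True)
[/CODE]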

@ehorn, how do you have your target set up? Also, from what I have read, the ConnectX-2 and -3 cards do a much better job of TCP/IP checksum offloading, which will help SMB and IPoIB throughput. They are also much more expensive.

Edit: Getting low performance because the cards are MHEA and not MHGA...
 

ehorn

Active Member
Jun 21, 2012
...

@ehorn, how do you have your target set up? Also, from what I have read, the ConnectX-2 and -3 cards do a much better job of TCP/IP checksum offloading, which will help SMB and IPoIB throughput. They are also much more expensive.
Thanks for sharing your prelim results.

I ran the same configuration as you (SRP).

I did run two tests.

The first was IPoIB using Server 2008 with the iSCSI software target. Performance was mid-to-upper 800s (MB/s) sequential, and latency was poor (no surprise).

The second was SRP using Ubuntu 12.04 (same basic setup as David Hunt's article). Those are the numbers I have posted here.

IMHO, your sequentials are right around the mark for the 10Gb HCAs. The MHGA should provide 2X the bandwidth (theoretically). I was hoping to see that and came up short in my tests.
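(A back-of-envelope check of that 10Gb figure, assuming SDR cards and again allowing for 8b/10b encoding:)

$$10\,\text{Gbps} \times \tfrac{8}{10} = 8\,\text{Gbps} = 1000\,\text{MB/s theoretical}, \qquad 890\,\text{MB/s} \approx 89\%\ \text{of that}$$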

Nevertheless, I am looking forward to your remarks on the MHGA.

peace,
 

33_viper_33

Member
Aug 3, 2013
Sorry for the noob questions...

Can these adapters be used as a storage solution? I see InfiniBand-to-SAS breakout cables and am wondering how that works. If so, can they interact with a switch? How does one address drives?
 

mrkrad

Well-Known Member
Oct 13, 2012
Literally, InfiniBand is just a connection-bonding system, so you can take that $999 eBay Voltaire 36-port switch and use splitters to break out quad/double/fourteen data rate (QDR/DDR/FDR) links into 2, 4, ?? ports.

Remember, the original CX4 10GbE Ethernet was actually 4 x 2.5Gbps links. Same theory: it is easy to get to XX speed per lane, then you have to go wide.
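(The lane math behind that: CX4/XAUI runs four lanes at 3.125 Gbps signaling with 8b/10b encoding, so each lane carries 2.5 Gbps of data:)

$$4 \times 3.125\,\text{Gbps} \times \tfrac{8}{10} = 4 \times 2.5\,\text{Gbps} = 10\,\text{Gbps}$$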

You plug the breakout into a switch; I am not sure how you keep it tidy and neat though!
 

33_viper_33

Member
Aug 3, 2013
So, I'm still trying to wrap my head around InfiniBand, and to that end I have been doing more research but am getting nowhere. From what I've read, InfiniBand can talk to InfiniBand targets. There have been experiments/prototypes that have used InfiniBand targets to connect to a large volume of disks, but I have yet to find an enterprise solution. From my understanding, InfiniBand advertises itself to the operating system as a virtual NIC and a virtual storage adapter. I guess I expected to find a solution similar to Fibre Channel storage, allowing one to simply plug an external storage chassis into the adapter on the server, instead of requiring a second computer running Solaris or the like acting as an iSCSI target.
Am I getting anything wrong?