Mellanox VPI FDR IB - 40GbE Switch


mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
IBM does the same! Remember, FLR is what you want: it's the standard function level reset used for SR-IOV/MR-IOV. You want it if you are using virtual functions! Also note the HP cards (one of them says it can work in a regular slot, but only at x4 PCIe 3.0?).

The IBM Virtual Fabric cards do something similar: the first NIC is cheap, using PCIe x9 (the x1 is for IBM's version of iLO), then it's big $$ for more. lol.

If you don't use virtual functions, then a regular older card works great. FLR/SR-IOV/MR-IOV is where all of the bugs come out, but with complex protocols (FCoE, iSCSI) or virtual functions you can see that per-port flow control just doesn't work so hot. So then you have to use DCBX (data center bridging) to do per-priority flow control.
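If you want to poke at this on a Linux box, here is a minimal sketch (Python, assuming a Linux host; the PCI address and VF count are hypothetical) of checking whether a port actually advertises FLR and then carving out a few SR-IOV VFs through sysfs:

import subprocess
from pathlib import Path

PF = Path("/sys/bus/pci/devices/0000:04:00.0")  # hypothetical PCI address of the physical function

# lspci -vv prints "FLReset+" in the Device Capabilities block when the function supports FLR
devcap = subprocess.run(["lspci", "-s", "04:00.0", "-vv"], capture_output=True, text=True).stdout
print("FLR advertised" if "FLReset+" in devcap else "no FLR on this function")

# sriov_totalvfs is the hardware limit; writing sriov_numvfs creates the VFs
# (needs root, plus SR-IOV/VT-d enabled in the BIOS)
total = int((PF / "sriov_totalvfs").read_text())
print(f"up to {total} VFs supported on this port")
(PF / "sriov_numvfs").write_text("4")  # create 4 virtual functions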

I've got an old Solarflare card that can do 1,024 VFs per port (2,048 per card), which would be awesome for web hosting since each VM could have its own dedicated NIC with hardware vSwitch offload (each virtual function can talk to the others without having to go out to a real switch).
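For the one-VF-per-VM setup, a quick sketch of the Linux side (same hypothetical PCI address): each VF shows up as its own PCI device under the physical function, and that address is what you would hand to a guest, e.g. via vfio-pci passthrough.

import os
from pathlib import Path

PF = Path("/sys/bus/pci/devices/0000:04:00.0")  # hypothetical physical function

# every VF created on the PF appears as a virtfnN symlink pointing at the VF's own PCI device
for link in sorted(PF.glob("virtfn*")):
    vf_addr = os.path.basename(os.readlink(link))
    print(f"{link.name}: {vf_addr}  <- assign this device to one VM")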

Amazing deals you guys find!!
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
IBM does the same! Remember, FLR is what you want: it's the standard function level reset used for SR-IOV/MR-IOV. You want it if you are using virtual functions! Also note the HP cards (one of them says it can work in a regular slot, but only at x4 PCIe 3.0?).

...
I came to the conclusion that the FLR in the part number meant something else in this case: the card is a "FlexibleLOM" (Flexible LAN on Motherboard) card, i.e. a tiny PCIe card that doesn't look like a standard PCIe card and generally plugs in down low where the motherboard ports would usually be. Here are the two versions of the card; the one without FLR in the name is a standard PCIe card:

HP InfiniBand FDR/EN 10/40Gb Dual Port 544QSFP Adapter 649281-B21
HP InfiniBand FDR/EN 10/40Gb Dual Port 544FLR-QSFP Adapter 649282-B21
 
Last edited:

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
You have a 580 and a 585? Sick.

The problem I find is that the Intel box only has two PCIe buses, and for that I could just get a DL380p Gen8 or ML370.

Can you fit a few Quadro 6000s in those big 5-series boxes? Like a video card (Quadro 6000) plus a pair of Tesla cards? I want to try out VDI, but the GRID K1 [VGX] cards cost a lot more than, say, a hacked GTX 480 or a Quadro/Tesla [API-intercept/pass-through].

The Mellanox ConnectX-3 is the only card Tesla can do GPUDirect bus-master mode with, where the GPU pushes its data stream directly to the NIC. I'm guessing that's aimed at HPC applications, but it could also be used to speed up VDI. 10GbE full duplex might be near PCIe x2, which is fine for business 3D; 40-56Gb would be fast enough to handle more than one video card in theory.
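Rough numbers behind that guess, per direction and ignoring encoding overhead on the network side (PCIe 2.0 moves roughly 500 MB/s per lane and PCIe 3.0 roughly 985 MB/s per lane after encoding); a little Python sanity check:

GBPS_TO_MBPS = 1000 / 8  # Gb/s -> MB/s, decimal units

links = {"10GbE": 10, "40GbE": 40, "FDR IB 56G": 56}   # raw link rates in Gb/s
lanes = {"PCIe 2.0 lane": 500, "PCIe 3.0 lane": 985}   # approx MB/s per lane, per direction

for name, gbps in links.items():
    mbps = gbps * GBPS_TO_MBPS
    for lane_name, lane_mbps in lanes.items():
        print(f"{name}: ~{mbps:.0f} MB/s per direction, about {mbps / lane_mbps:.1f} {lane_name}s")

So a single 10GbE port really is in the ballpark of a couple of Gen3 lanes, while 40-56Gb wants most of an x8 Gen3 link to keep up.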
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
You have a 580 and a 585? Sick.

The problem I find is that the Intel box only has two PCIe buses, and for that I could just get a DL380p Gen8 or ML370.

Can you fit a few Quadro 6000s in those big 5-series boxes? Like a video card (Quadro 6000) plus a pair of Tesla cards? I want to try out VDI, but the GRID K1 [VGX] cards cost a lot more than, say, a hacked GTX 480 or a Quadro/Tesla [API-intercept/pass-through].

The Mellanox ConnectX-3 is the only card Tesla can do GPUDirect bus-master mode with, where the GPU pushes its data stream directly to the NIC. I'm guessing that's aimed at HPC applications, but it could also be used to speed up VDI. 10GbE full duplex might be near PCIe x2, which is fine for business 3D; 40-56Gb would be fast enough to handle more than one video card in theory.
I had a DL580 G7, and while I loved having 64 DIMM slots and getting Xeon X7560 ES CPUs for the price of gumballs, the IO on the thing was just miserable. It had 11 PCIe slots but only two IO chips, and just four of the slots were x8 with none at x16. I sold it.

My DL585 G7, on the other hand, has the same eleven PCIe slots, but there are four IO chips and ALL slots are wide - seven are x8 and four are x16. Much better for my needs.

Both DLs have a cool feature: built-in support for four 225W GPUs or three 300W GPUs!

Now the Xeon E5 series of CPUs blows the doors off the AMD 6xxx IO advantage and even matches the AMD memory bandwidth, while of course having far more CPU grunt. Once the prices get reasonable, a quad Xeon E5-46xx box will be my next database server, either an HP DL560 G8 or something like this Supermicro.
 

Beaubeshore

New Member
Apr 14, 2013
5
0
0
Did anyone get a web UI working on this switch? I am thinking I am going to return it because I can't find any good documentation on the OS that is currently on it. Would love to know if anyone was successful with this switch.
 

jtreble

Member
Apr 16, 2013
93
10
8
Ottawa, Canada
Would these HP cards work with that? Maybe mrkrad would know?

HP 649282 B21 Infiniband FDR En 10 40GB2P 544FLR QSFP Adapter Dual Port Spare | eBay

$220 for dual-port cards. FDR and 40Gb Ethernet.

Edit: Here's the HP page for them that says they are ConnectX-3 based: HP Infiniband FDR/Ethernet 10Gb/40Gb 2-port 544FLR-QSFP Adapter - InfiniBand adapters - HP: 649282-B21

So maybe they would?
I would think carefully before purchasing custom cards that have no "Mellanox family" equivalent (http://www.mellanox.com/pdf/products/oem/RG_HP.pdf). Just my 2 cents.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Did anyone get a web UI working on this switch? I am thinking I am going to return it because I can't find any good documentation on the OS that is currently on it. Would love to know if anyone was successful with this switch.
I still have not had the time to hook mine up. Does it pass traffic?