$67 DDR Infiniband on Windows - 1,920MB/S and 43K IOPS


mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
You can get iperf for Windows. What you are getting is not bad. The older first-generation Intel cards with max buffers (switched or not) would do about 2.5Gbps per thread, peaking at around 3 threads (a 4th thread would not generate more load). These are XR997 10GBase-T cards, direct connect or not, flow control enabled or not.
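
For anyone who wants to run the same kind of per-thread scaling test, here is a minimal sketch (assumptions: iperf 2.x is on the PATH, "iperf -s" is already running on the far end, and the server address below is just a placeholder) that steps up the parallel stream count:

import subprocess

SERVER = "192.168.1.10"  # placeholder: the box running "iperf -s"

for streams in (1, 2, 4, 8):
    # -c = client mode, -t = seconds to run, -P = number of parallel streams
    result = subprocess.run(
        ["iperf", "-c", SERVER, "-t", "10", "-P", str(streams)],
        capture_output=True, text=True, check=True,
    )
    lines = result.stdout.splitlines()
    # With -P > 1 iperf prints an aggregate [SUM] line; otherwise use the last report line
    summary = [l for l in lines if "[SUM]" in l] or lines[-1:]
    print(f"{streams} stream(s): {summary[-1].strip()}")

If the aggregate stops climbing after 3-4 streams, you are looking at the same per-thread ceiling described above rather than a lack of offered load.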

So if you are getting 15Gbps for one thread, that's pretty awesome. We ran ATTO on the XR997 10GBase-T and it was about 600-700MB/s, which is faster than the Samsung 840 Pros by a good bit: 62-63% of 10Gbit over 10GBase-T. CPU was spiking 20-30% on each core, but not at the same time (i.e. some sort of round-robin queue was rotating around the single-socket quad core on both send and receive).

What RAM drive are you using? StarWind free is only moving 4 million on ATTO here, on an older Core 2 Duo machine with dual-channel 1333 memory.

You have to remember that 10GbE was designed around 2.5Gbps x 4 serialized streams, like CX4. SFP+ uses a single 10.3GHz serializer, and I have no idea how they scramble 10GBase-T, but the latency is enormous.

I think Ethernet was never meant to go over 2.5Gbps per thread. That is why they came up with VLAN-assisted port multiplication: two ports on a NIC are spread out over eight NICs presented to the machine, with the NIC adding VLAN tags toward the switch (or DCBX lossless Ethernet).

I need to find mobilenvidia and get some hacked XR997 (Intel 82598EB AT) drivers for Windows 7 / Server 2008 R2.
 

renderfarmer

Member
Feb 22, 2013
249
1
18
New Jersey
Thanks, Mrkrad.

I agree that 15Gbps is pretty nice for a single thread, but why won't it scale to 40Gbps if I throw 4 or more threads at it?

I'm using a StarWind RAM disk just like dba did, and he managed to get 2,000MB/s from it using IOMeter. At this point I'd be happy with that.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Thanks, Mrkrad.

I agree that 15Gbps is pretty nice for a single thread, but why won't it scale to 40Gbps if I throw 4 or more threads at it?

I'm using a StarWind RAM disk just like dba did, and he managed to get 2,000MB/s from it using IOMeter. At this point I'd be happy with that.
Don't settle for 2,000MB/s! I got my 1,920MB/s with an old DDR Infiniband card. With IPoIB, DDR Infiniband links at a mere 16Gbit/s. My QDR cards, which link at 32Gbit/s with IPoIB, get 3,280MB/s tested via IOMeter, and that's probably limited by the PCIe2 bus, a limitation you do not have. You paid for PCIe3 and 40Gbit, so make sure you get them, or return those cards.
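
For reference, the link rates quoted here fall straight out of the per-lane signaling rates and the 8b/10b encoding that SDR/DDR/QDR Infiniband uses; a quick back-of-the-envelope (standard 4x ports assumed):

LANE_SIGNALING_GBIT = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}  # per-lane signaling rate
ENCODING = 8 / 10  # SDR/DDR/QDR all use 8b/10b line coding
LANES = 4          # standard 4x port

for rate, per_lane in LANE_SIGNALING_GBIT.items():
    data = per_lane * ENCODING * LANES
    print(f"{rate} 4x: {per_lane * LANES:.0f} Gbit/s signaling -> {data:.0f} Gbit/s data")

# SDR 4x: 10 Gbit/s signaling -> 8 Gbit/s data
# DDR 4x: 20 Gbit/s signaling -> 16 Gbit/s data
# QDR 4x: 40 Gbit/s signaling -> 32 Gbit/s data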
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Go to Chelsio or IBM and check out their Ethernet versus IB numbers. The figures in many of their Redpapers are very consistent.

All based on 2.5, 6.5, etc., with the commonalities coming down to block size.
 

renderfarmer

Member
Feb 22, 2013
249
1
18
New Jersey
Go to Chelsio or IBM and check out their Ethernet versus IB numbers. The figures in many of their Redpapers are very consistent.

All based on 2.5, 6.5, etc., with the commonalities coming down to block size.
Holy crap, they got 36Gbps using SMB in Windows 2012. http://www.chelsio.com/wp-content/uploads/2011/05/Microsoft-Word-T5-Brief-SNW-Spring-2013-docx.pdf

I just ordered a 1m copper Mellanox 40GbE-specific cable to eliminate or confirm my fiber optic FDR cable as the bottleneck. Will post what I find when I get it.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Holy crap, they got 36Gbps using SMB in Windows 2012. http://www.chelsio.com/wp-content/uploads/2011/05/Microsoft-Word-T5-Brief-SNW-Spring-2013-docx.pdf

I just ordered a 1m copper Mellanox 40GbE-specific cable to eliminate or confirm my fiber optic FDR cable as the bottleneck. Will post what I find when I get it.
I'm not surprised at their results. They report 36 Gbits per second. Scale up my result of 1,920MB/s over a DDR connection with a 16 Gbit link speed to its equivalent over a 40 Gigabit connection and you get about 38 Gbits. I saw 3,280MB/s from a QDR card with a 32Gbit link in another test, which works out to about 26 Gbits, or roughly 33 Gbits if scaled to a 40Gbit link, but that was on PCIe2 x8, which I believe is a bottleneck at that level of throughput. So to me, 36Gb/s over a 40Gb link looks reasonable. Excellent results, but not surprising.
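
Spelled out, with 1MB taken as 10^6 bytes:

def to_gbit(mb_per_s):
    return mb_per_s * 8 / 1000.0  # MB/s -> Gbit/s

def scaled(mb_per_s, link_gbit, target_gbit=40):
    return to_gbit(mb_per_s) / link_gbit * target_gbit

print(to_gbit(1920))     # 15.36 Gbit/s of payload on a 16 Gbit DDR link (~96% efficiency)
print(scaled(1920, 16))  # 38.4 Gbit/s if that efficiency held on a 40 Gbit link
print(to_gbit(3280))     # 26.24 Gbit/s on a 32 Gbit QDR link (PCIe2 x8 in the way)
print(scaled(3280, 32))  # 32.8 Gbit/s scaled to a 40 Gbit link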
 
Last edited:

renderfarmer

Member
Feb 22, 2013
249
1
18
New Jersey
Yes. There is a free software package called SIV (system information viewer) at SIV - System Information Viewer

The UI is awful, but look for the "PCI Bus" button, which will show every PCIe device, its address, and its actual link width.
4,322MB/s using a StarWind RAM disk as a network share!!! That's nearly 4% better than Chelsio was able to get from iSCSI in their testing.

As it turns out, the StarTech PCIe riser I was using in my render nodes was limiting the HCA to PCIe2 x8, which was having a disproportionate effect on performance.

I had to jury-rig the mobo out of the chassis so that I could install the 40GbE card upright temporarily for testing. Once it was running at PCIe3 x8 (confirmed using SIV), all was good.
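
That riser finding lines up with the raw PCIe arithmetic; a quick sketch of the per-direction link rates (line-coding overhead only; real-world protocol overhead shaves off roughly another 15-20%, which is why the usable figures land lower):

GT_PER_LANE = {"PCIe2": 5.0, "PCIe3": 8.0}           # transfer rate per lane in GT/s
ENCODING    = {"PCIe2": 8 / 10, "PCIe3": 128 / 130}  # line-code efficiency

def pcie_gbit(gen, lanes):
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes

for gen in ("PCIe2", "PCIe3"):
    gbit = pcie_gbit(gen, 8)
    print(f"{gen} x8: {gbit:.1f} Gbit/s ~= {gbit / 8:.2f} GB/s per direction")

# PCIe2 x8: 32.0 Gbit/s ~= 4.00 GB/s -> the riser-limited slot, well short of a 40Gb link
# PCIe3 x8: 63.0 Gbit/s ~= 7.88 GB/s -> plenty of headroom for 4,322MB/s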

Thanks to everyone for hanging in there and helping me out until the end, especially dba!
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Nice! Now can you make us feel even worse about our networks by running a test with both ports connected?

4,322MB/s using a StarWind RAM disk as a network share!!! That's nearly 4% better than Chelsio was able to get from iSCSI in their testing.

As it turns out, the StarTech PCIe riser I was using in my render nodes was limiting the HCA to PCIe2 x8, which was having a disproportionate effect on performance.

I had to jury-rig the mobo out of the chassis so that I could install the 40GbE card upright temporarily for testing. Once it was running at PCIe3 x8 (confirmed using SIV), all was good.

Thanks to everyone for hanging in there and helping me out until the end, especially dba!
 

renderfarmer

Member
Feb 22, 2013
249
1
18
New Jersey
Thanks again, dba.

I would, but my cards are single-port. My RAID array will only push 2,000MB/s at the best of times, so I didn't see the point of spending another $500 for dual ports plus $470 for a second 20m fiber optic cable!
 
Last edited:

cactus

Moderator
Jan 25, 2011
830
75
28
CA
Here is my iperf testing of a lot of cards in different modes and at multiple thread counts. I haven't had time to do a write-up for it, and I am going to be out of town for a month or so.

Linky.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,804
113
Here is my iperf testing of a lot of cards in different modes and at multiple thread counts. I haven't had time to do a write-up for it, and I am going to be out of town for a month or so.

Linky.
Main site post? Can help with graphs.
 

renderfarmer

Member
Feb 22, 2013
249
1
18
New Jersey
Nice! Now can you make us feel even worse about our networks by running a test with both ports connected?
It occurred to me that PCIe3 x8 has a max bandwidth of around 56Gb/s, so that second port wouldn't really give that big of a boost in my case.

Mellanox does make a PCIe3 x16 monster dual-port FDR card called the Connect-IB, which would open a hole in the fabric of space-time.
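
As a rough sanity check, taking ~56Gb/s as the usable PCIe3 x8 figure after protocol overhead (an assumption, not a measured number):

PCIE3_X8_USABLE_GBIT = 56  # assumed effective per-direction bandwidth
LINK_GBIT = 40             # per 40GbE port

for ports in (1, 2):
    on_wire = ports * LINK_GBIT
    through_slot = min(on_wire, PCIE3_X8_USABLE_GBIT)
    print(f"{ports} port(s): {on_wire} Gbit/s on the wire, ~{through_slot} Gbit/s past the slot")

# 1 port(s): 40 Gbit/s on the wire, ~40 Gbit/s past the slot
# 2 port(s): 80 Gbit/s on the wire, ~56 Gbit/s past the slot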
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
That monster Mellanox PCIe3 x16 FDR card is supposed to be good for 12.5 Gigabytes per second! Gotta love Infiniband.

It occurred to me that PCIe3 x8 has a max bandwidth of around 56Gb/s, so that second port wouldn't really give that big of a boost in my case.

Mellanox does make a PCIe3 x16 monster dual-port FDR card called the Connect-IB, which would open a hole in the fabric of space-time.
 

cactus

Moderator
Jan 25, 2011
830
75
28
CA
Main site post? Can help with graphs.
That was the plan, but I have been busy the last few weeks and I am going out of the country for a month. If I have some time at the airport before my flight, I will throw something together.

Edit: My busy day is nothing compared to what you have hinted at a few times. I have not reached superhuman status.
 
Last edited: