88GB/s will have to wait. She just can't take any more


Naeblis

Active Member
Oct 22, 2015
168
123
43
Folsom, CA
In a reply to a previous post of mine, @Rain alluded to the fact that at 88GB/s you can read data from the future. @tare55 pointed out that I would need 1.21 gigawatts. Well, as it turns out, I will not be able to test either theory anytime soon, much to the dismay of the power company's billing department.

My two SuperMicro cards (AOC-SLG3-2E4) showed up today. I only had one more 1.2TB drive sitting around, so first up was a try at a new sequential record in a 2U computer. I did achieve a new high of 22.4GB/s with 10 drives, but that is not scaling well compared to 22.1GB/s with 9 drives.

My first thought was that the SuperMicro card was somehow slowing down the party. Turns out it was not.

I guess I’ll just have to live with a paltry 22.1GB a second. I have much more work to do to break 2 million IOPS.
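A quick back-of-envelope check of the numbers above shows why the 10th drive isn't pulling its weight; the only inputs are the drive counts and totals from this post.

```python
# Per-drive scaling check for the 9-drive vs 10-drive sequential runs.
# Inputs are the totals reported above; everything else is arithmetic.

def per_drive(total_gbs: float, drives: int) -> float:
    """Average throughput contributed by each drive, in GB/s."""
    return total_gbs / drives

nine = per_drive(22.1, 9)    # roughly 2.46 GB/s per drive
ten = per_drive(22.4, 10)    # roughly 2.24 GB/s per drive

# The 10th drive only added 0.3 GB/s to the total:
marginal = 22.4 - 22.1

print(f"9 drives : {nine:.2f} GB/s per drive")
print(f"10 drives: {ten:.2f} GB/s per drive")
print(f"marginal gain of 10th drive: {marginal:.1f} GB/s")
```

If the array scaled linearly, 10 drives at the 9-drive rate would land around 24.6GB/s, so something upstream of the drives (lanes, interleave, or memory) is leaving bandwidth on the table.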

upload_2015-11-28_15-48-8.png

9-drive NVMe array with the SuperMicro card controlling 1 drive; the max is pretty much the same as the other day. The writes are more consistent, which I attribute to a different interleave (I didn't document it the first time around).

upload_2015-11-28_16-8-21.png
 

Chuntzu

Active Member
Jun 30, 2013
383
98
28
I have played with interleave a bit and can drastically increase IOPS using a smaller interleave. I still end up with a slowdown in max IOPS compared to individual drives, but it helps a bunch. What interleave did you use?
 

Naeblis

Active Member
Oct 22, 2015
168
123
43
Folsom, CA

The interleave I used there was 2MB, with 64K block sizes; I put it on the picture so I would not forget. This setup is for max sequential and the failed attempt at 25GB/s. I agree with you 100% that a smaller interleave is better for IOPS.
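The interleave trade-off can be sketched with a toy striping model. This assumes simple round-robin striping at interleave granularity, which is an illustration rather than the exact Storage Spaces layout; the `drives_touched` helper and all parameter values are hypothetical.

```python
# Toy model: which drives a single request lands on, given an interleave
# (stripe size). Assumes plain round-robin striping -- real Storage Spaces
# layout details may differ.

def drives_touched(offset: int, size: int, interleave: int, n_drives: int) -> set:
    """Drive indices covered by a request [offset, offset+size),
    striped round-robin at `interleave` granularity."""
    first_stripe = offset // interleave
    last_stripe = (offset + size - 1) // interleave
    return {s % n_drives for s in range(first_stripe, last_stripe + 1)}

KB, MB = 1024, 1024 * 1024

# With a 2MB interleave, 32 consecutive 64K requests all fall inside one
# stripe, so they queue up on a single drive:
for i in range(3):
    print(drives_touched(i * 64 * KB, 64 * KB, 2 * MB, 9))  # same drive each time

# With a 64K interleave, consecutive 64K requests rotate across all 9
# drives, which is why the smaller interleave lifts small-block IOPS:
for i in range(3):
    print(drives_touched(i * 64 * KB, 64 * KB, 64 * KB, 9))  # a new drive each time
```

For max sequential the big interleave is the friendlier choice, since each large transfer streams from one drive without extra seam-crossing overhead, which matches the 2MB choice for the 25GB/s attempt.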
 

iq100

Member
Jun 5, 2012
68
3
8
Isn't memory bandwidth of, say, 50GB/sec going to limit these attempts at NVMe transfer records?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
50GB/s is 2x what he's at now, and that's not even the limit for 1P, let alone 2.

Intel & Anand show 2P Xeon v3s getting up to around 100GB/s.

It will be interesting to see how network transfer adds to RAM utilization.