NVMe: 2.5" SFF drives working in a normal desktop


ATS

Member
Mar 9, 2015
96
32
18
48
My sense is that the biggest issue is the efficiency of the load-generation engines, not the drives/cards.
It's highly likely that the actual load generators have received very little optimization work over the years because they've never really been an issue. Now that we have devices that can be driven at 128 QD / 64 threads to 4GB/s+ of bandwidth, it's probably starting to be a bit of an issue. I wouldn't be surprised if someone could drastically reduce the load with a profiler and a little bit of hacking.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
It's highly likely that the actual load generators have received very little optimization work over the years because they've never really been an issue. Now that we have devices that can be driven at 128 QD / 64 threads to 4GB/s+ of bandwidth, it's probably starting to be a bit of an issue. I wouldn't be surprised if someone could drastically reduce the load with a profiler and a little bit of hacking.
I think that is true of several of these tools. Then again, it is hard when you have people with 80-120MB/s hard disks and others with multiple devices pushing over 2GB/s.
 
Jun 24, 2015
140
13
18
75
I'm actually happy to see these anomalies emerge.
When I worked with super minicomputers back in the early 1980s, we were tasked with setting time-sharing prices that were close to a large CDC mainframe's.

We ran a math loop with and without writing a large matrix to disk. There was such a HUGE difference in CPU use when we added disk I/O that it helped us isolate a major inefficiency in the FORTRAN compiler's binary I/O routines: WRITE (LUN) ARRAY.

We wrote our own I/O routines and they were 20 TIMES faster.

So, sometimes these surprises go a long way toward real scientific progress. Case in point: we've now isolated a real need for a much larger PCIe lane pool, e.g. to support large RAID-5 and RAID-6 arrays of 2.5" NVMe devices.
 

Chuntzu

Active Member
Jun 30, 2013
383
98
28
It's highly likely that the actual load generators have received very little optimization work over the years because they've never really been an issue. Now that we have devices that can be driven at 128 QD / 64 threads to 4GB/s+ of bandwidth, it's probably starting to be a bit of an issue. I wouldn't be surprised if someone could drastically reduce the load with a profiler and a little bit of hacking.
There was a post on the main NVMe website making note of this, and the current release of CrystalDiskMark (if I am recalling the name correctly) was recently optimized to handle NVMe better. Also, in regards to benchmarks, my new favorite is DiskSpd. It is the replacement for SQLIO; it is open source but built by Microsoft, and it's on GitHub. It is extremely small and easy to use, and I have been able to get great utilization out of it. Microsoft likes using it in their demos, and it's scriptable, so reproducing tests across multiple different arrangements is really easy. I actually like it better than Iometer.
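Since DiskSpd is scriptable, a queue-depth sweep can be batched up in a few lines. A minimal sketch (the file name, drive letter, sizes, and durations are assumptions; it expects diskspd.exe from Microsoft's GitHub release on the PATH):

```shell
:: qd_sweep.bat - sketch of a DiskSpd queue-depth sweep
:: -b4K  4KiB blocks     -d30   run each test for 30 seconds
:: -t8   8 threads       -o%%q  outstanding I/Os per thread
:: -r    random access   -w0    100% reads
:: -Sh   disable software and hardware caching
:: -L    collect latency stats   -c8G  create an 8GiB test file
for %%q in (1 8 32 128) do (
  diskspd.exe -b4K -d30 -o%%q -t8 -r -w0 -Sh -L -c8G E:\testfile.dat > qd%%q.txt
)
```

Each run lands in its own qd*.txt file, so results for different drives or arrangements stay easy to diff.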
 

kdotcarter

New Member
Oct 8, 2013
21
13
3
There was a post on the main NVMe website making note of this, and the current release of CrystalDiskMark (if I am recalling the name correctly) was recently optimized to handle NVMe better. Also, in regards to benchmarks, my new favorite is DiskSpd. It is the replacement for SQLIO; it is open source but built by Microsoft, and it's on GitHub. It is extremely small and easy to use, and I have been able to get great utilization out of it. Microsoft likes using it in their demos, and it's scriptable, so reproducing tests across multiple different arrangements is really easy. I actually like it better than Iometer.
I prefer DiskSpd as well. It's a handy tool for benchmarking the Intel NVMe drives, and I've used it for InfiniBand testing as well.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
@Chuntzu and to be clear, that's the software I can hit 80% CPU with; the other tools DO NOT stress it like you can with CDM at high threads and high QD. As I've said 3 times now, that's how you hit 80% usage on your CPU; the other tests don't stress it enough to hit those numbers... sure, they're not optimized, but they're also not able to tax it to 80% CPU. Could it still be software causing the CPU issues? Sure, but I'm using the "NVMe" one... so we'll see if they can fix the CPU load :)

I'll check out that other benchmarking tool and report back :D
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
There was a post on the main NVMe website making note of this, and the current release of CrystalDiskMark (if I am recalling the name correctly) was recently optimized to handle NVMe better. Also, in regards to benchmarks, my new favorite is DiskSpd. It is the replacement for SQLIO; it is open source but built by Microsoft, and it's on GitHub. It is extremely small and easy to use, and I have been able to get great utilization out of it. Microsoft likes using it in their demos, and it's scriptable, so reproducing tests across multiple different arrangements is really easy. I actually like it better than Iometer.
Mind sharing your settings? Perhaps it's new-thread worthy? I know I'd like to see what you're doing, and I can add data when I get it.

@kdotcarter maybe same?
 

Chuntzu

Active Member
Jun 30, 2013
383
98
28
@T_Minus when using DiskSpd with my E5-1620 I have maxed out my CPU on 4K random read IOPS. It actually caps at about 400,000 IOPS, and then the CPU is fully utilized. So I have run into the CPU being the bottleneck. Sequential speeds are never capped by the CPU... well, up to 18GB/s is what I have hit with no issues yet (18GB/s was with regular SSDs). Kind of cool to think that the CPU is now the bottleneck for the storage. Still migrating over to a dual E5-2680 rig; just too much work recently to make any progress. I will try to post how I use the tool, and I'm working with a buddy of mine to parse data from it.
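A quick back-of-the-envelope check on that 400,000 IOPS ceiling (the core count and clock are assumptions for an E5-1620, 4 cores at roughly 3.6 GHz):

```python
# Sanity-check the reported 4K random read ceiling and the implied CPU cost.
iops = 400_000
block_bytes = 4 * 1024                 # 4KiB random reads
bandwidth_gbs = iops * block_bytes / 1e9

cores, clock_hz = 4, 3.6e9             # assumed E5-1620 figures
cycles_per_io = cores * clock_hz / iops

print(f"{bandwidth_gbs:.1f} GB/s")     # ~1.6 GB/s of 4K random reads
print(f"{cycles_per_io:.0f} cycles/IO")  # ~36000 CPU cycles per I/O
```

So at the cap the CPU is spending on the order of 36k cycles per I/O across the whole stack, which is why random 4K hits the CPU wall long before sequential transfers do.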
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
@T_Minus when using DiskSpd with my E5-1620 I have maxed out my CPU on 4K random read IOPS. It actually caps at about 400,000 IOPS, and then the CPU is fully utilized. So I have run into the CPU being the bottleneck. Sequential speeds are never capped by the CPU... well, up to 18GB/s is what I have hit with no issues yet (18GB/s was with regular SSDs). Kind of cool to think that the CPU is now the bottleneck for the storage. Still migrating over to a dual E5-2680 rig; just too much work recently to make any progress. I will try to post how I use the tool, and I'm working with a buddy of mine to parse data from it.
I look forward to your info and input @Chuntzu :)
 

ggg

Member
Jul 2, 2015
35
1
8
44
Hi,

I put an AOC-SLG3-2E4R into my Z97-WS, trying in turn each of the 4 PCIe Gen 3 x16 slots attached to the PLX 8747. I get this ASUS/AMI BIOS code (with no other cards plugged in):

D4 – PCI resource allocation error. Out of Resources.

Same in an ASRock Extreme 6 with the same PLX chip.

Anyone have a clue as to what those jumpers (JP1, JP2, JP3) do? :)
 

ggg

Member
Jul 2, 2015
35
1
8
44
The ASUS Hyper Kit with the Intel 750 U.2 (nice and succinct) didn't work in any of:

ASRock Extreme 6
* Gen 3 x4 M.2
* Gen 2 x2 M.2

ASUS Z97-WS
* Gen 2 x2 M.2
* Gen 3 x4 via Hyper Kit -> Gen 3 x4 (via M.2-to-PCIe adapter card) -> x16 Gen 3 slot -> PLX 8747

The last combination I thought would certainly work, but it seems not. :( No PCIe device has ever appeared.

I'm using the latest BIOS with NVMe support listed for both boards. Perhaps there is still some kind of support missing in the BIOS.
 

ggg

Member
Jul 2, 2015
35
1
8
44
I've ordered an AOC-SLG3-2E4. I will use the drive as a secondary drive. I'm unsure what kind of BIOS support is required for just that...
 

cn9amt100up

Member
Feb 11, 2015
43
9
8
38
Does the Supermicro X9DAI support NVMe?

Or do I just plug an AOC-SLG3-2E4 into the mainboard, connect all the NVMe drives to the card, and the drives will work perfectly?

Last question: how can I attach the NVMe drives to a RAID card and create a RAID 5 or 6 array?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
@cn9amt100up good luck :)

As you've probably seen my thread and posts here, even with APPROVED motherboards sometimes they don't work how they should.

I've used the AOC card on boards that were not "approved" with luck so you just need to try and test :) if it will work for your setup.

Be sure you have latest BIOS.
 

cn9amt100up

Member
Feb 11, 2015
43
9
8
38
@cn9amt100up good luck :)

As you've probably seen my thread and posts here, even with APPROVED motherboards sometimes they don't work how they should.

I've used the AOC card on boards that were not "approved" with luck so you just need to try and test :) if it will work for your setup.

Be sure you have latest BIOS.
Currently I have a RAID 6 and a RAID 10 array built on 12Gb/s SAS SSDs with an LSI 9361.
I'm still looking into how I can build a RAID 5 or RAID 6 array out of NVMe drives.
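One thing to keep in mind: NVMe drives sit directly on PCIe, so a SAS controller like the 9361 won't see them at all. On Linux the usual route is software RAID with mdadm. A minimal sketch (device names are assumptions; check yours with lsblk first):

```shell
# Sketch: software RAID 5 across four NVMe drives with mdadm (run as root).
# Device names are assumptions; list yours with: lsblk -d -o NAME,MODEL
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
mkfs.ext4 /dev/md0                         # put a filesystem on the array
mdadm --detail --scan >> /etc/mdadm.conf   # persist the array config across reboots
```

The parity math for RAID 5/6 then runs on the host CPU rather than a RAID card's ASIC, which ties back to the CPU-bottleneck discussion earlier in the thread.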
 

ISRV

Member
Jul 11, 2015
72
8
8
42
@neeyuese if you think the 750 is hot try some of the Samsung/ Micron drives!
If I get that right, he said he bought the 750 only for the cable,
and that the P3700 2TB is too hot.
Actually, Intel says that the more capacious 2.5" P3700s need more airflow,
so no wonder.

I wonder, are you all still just buying the 750 just for the cable and then what, throwing it away? :) Because how else would you use the 750 without the cable?
That's definitely not an option.
Same with that $250 AOC.
For those who only want one NVMe drive, the ASUS Hyper Kit looks like the only option.

The question about the cables is still open.

I just still can't believe they're selling those drives and not selling the cables.