NVMe: 2.5" SFF drives working in a normal desktop

Patrick

Administrator
Staff member
Dec 21, 2010
12,382
5,533
113
Yea, just hoping people share that article. It's very expensive to produce something like that, but it's a totally unique piece of content.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,350
1,798
113
CA
I know I'll be linking to it from numerous articles I'm working on, so you got that :)
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,350
1,798
113
CA

neo

Well-Known Member
Mar 18, 2015
672
362
63
Just an update on this, I ordered cables from the company on June 7th, no updates, no cables.

Anyone else order and/or get any?

Hoping NEO's come in soon :)
Just cancel your order from them. Mine should be in soon.
 
  • Like
Reactions: T_Minus

neeyuese

Member
Feb 28, 2015
35
34
18
38
1.jpg

I have installed the ASUS Hyper Kit on my new ASUS X99-Pro testbed. Same as Patrick, I bought the 750 2.5-inch just for that cable...

2.jpg

You can enable Hyper Kit mode in the ASUS BIOS.

3.jpg

After that you can find the P3700 in the boot device list.

4.jpg

5.png

6.png

Some desktop benchmarks here.

7.png

8.png

After running the Iometer QD32 128KB sequential write test for about 3 minutes, the drive throttled its write speed from 2.1GB/s to 1.4GB/s due to high temperature. It's too HOT! I couldn't even touch the SSD case...

9.jpg

Finally I put a 12cm fan on it like this when running the benchmark.
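The throttling behavior above can be sketched as a simple threshold model. The 2.1GB/s and 1.4GB/s figures are the ones measured in the test; the 70°C trip point is purely an assumed placeholder for illustration, not an Intel datasheet value:

```python
def effective_seq_write_gbs(temp_c,
                            full_gbs=2.1,       # measured full-speed seq. write (from the test above)
                            throttled_gbs=1.4,  # measured throttled speed (from the test above)
                            trip_temp_c=70):    # assumed trip point, NOT a datasheet value
    """Toy model: the drive sustains full sequential write speed until the
    controller temperature crosses the trip point, then clamps the rate
    until it cools back down."""
    return full_gbs if temp_c < trip_temp_c else throttled_gbs
```

In this picture the 12cm fan works by keeping the controller below the trip point, so the drive never drops off its full-speed rate.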
 
  • Like
Reactions: Patrick

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,350
1,798
113
CA
I have Windows 8.1 with NVMe running well now... not how it should, but well, on the 2011-v3 platform. A guy on [H] has it on a 2011 v2 platform; I'll test those boards soon.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,350
1,798
113
CA
Just hit 80% CPU utilization out of a total of 30GHz of Intel R3 E5 CPU with a P3700 800GB 2.5" & SM AOC card. Ramp up the QD & threads and these things pretty much open up to spec!
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,350
1,798
113
CA
And FWIW, even with very minimal airflow the P3700 is not at all "hot"; maybe warm, but not on the high side of warm. I think the fan blowing across it is 900rpm, and it's dead silent.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,382
5,533
113
One thing to remember - the CPU utilization for benchmark applications is largely due to load generation.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,350
1,798
113
CA
Honestly not too sure what you're getting at.

I'm not clear on what your definition of "load generation" is, other than having a lot of threads with high QD, which I specifically said was why the CPU utilization was so high. If there are other reasons why benchmarking software causes false CPU spikes/utilization, please let me know. I saw almost consistently steady utilization during the entire read/write tests, 4K or sequential, whenever the QD and thread counts were higher.

Some benchmarking software where you can't change the number of threads or the QD doesn't tax the CPU at all.

For a single user on a desktop I think there's nothing to worry about ;) but you're also never going to max out the drive's performance.
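As a rough illustration of why thread count multiplies the cost of load generation, here is a minimal sketch. It is pure Python with ordinary buffered file writes, so it only mimics the *structure* of a multi-worker benchmark run, not Iometer's performance or its direct-I/O path; each worker thread issues its own stream of 128KB writes, analogous to one benchmark worker with its own queue:

```python
import os
import tempfile
import threading

def worker(path, block_size, blocks):
    # One load-generation worker: issues its own stream of writes,
    # analogous to a single benchmark worker with its own queue.
    buf = b"\0" * block_size
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(buf)

def run_load(threads, block_size=128 * 1024, blocks=8):
    # Spawn `threads` workers; the total work issued (and the CPU time
    # spent generating it) scales with the thread count.
    tmpdir = tempfile.mkdtemp()
    paths = [os.path.join(tmpdir, f"worker{i}.bin") for i in range(threads)]
    ts = [threading.Thread(target=worker, args=(p, block_size, blocks))
          for p in paths]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return sum(os.path.getsize(p) for p in paths)  # total bytes written
```

Doubling `threads` doubles the bytes issued per run, which is the multiplicative effect being described: tools that don't let you raise the thread count or QD simply never generate enough outstanding work to load the CPU.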
 
Jun 24, 2015
140
13
18
74
If I'm reading this correctly ...
"Just hit 80% CPU utilization of total of 30ghz Intel R3 E5 CPU"
it took 30 x 0.80 = 24 GHz of CPU to drive a single NVMe SSD?

Am I reading that correctly?

I did also read the following comment about "load generation", but 24 GHz seems quite excessive.

Why are so many cores involved in doing I/O with a single SSD??

This is the sort of CPU utilization we usually see doing routine benchmarks with a fast ramdisk, but even then there are usually only 2 cores involved: one to do computation, and the other to do the I/O.
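The arithmetic in the question, written out. The core count and per-core clock below are illustrative assumptions (any split that multiplies out to 30 GHz aggregate behaves the same):

```python
def busy_ghz(total_ghz, utilization):
    # Aggregate CPU cycles consumed per second: the summed clock across
    # all cores, scaled by the observed utilization fraction.
    return total_ghz * utilization

# e.g. an assumed 10 cores x 3.0 GHz = 30 GHz aggregate,
# at the reported 80% utilization:
# busy_ghz(30.0, 0.80) -> 24.0 (GHz)
```

So the "24 GHz" figure is not one core running at 24 GHz; it is 80% occupancy spread across every core in the box while the benchmark generates and completes I/O.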
 
Jun 24, 2015
140
13
18
74
As Allyn Malventano observed at pcper.com, the Supermicro AOCs are "pass-thru" cards, not RAID cards with their own input-output controllers.

One suspect, then, is the efficiency of your Supermicro AOC; e.g. the driver may need optimization.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,350
1,798
113
CA
As I said, high QD and high thread count.
QD 128, Threads 10 - 79%
QD 128, Threads 64 - 81%
QD 128, Threads 1 - Low %
QD 1, Threads 1 - Low %
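One way to read those numbers: the load generator keeps roughly QD × threads I/Os in flight at once, so the same QD with more threads means far more outstanding requests, and far more CPU spent dispatching and completing them. A quick sketch of that multiplication:

```python
def outstanding_ios(qd, threads):
    # Each worker thread maintains its own queue of depth `qd`,
    # so total in-flight I/Os scale multiplicatively.
    return qd * threads

# The configurations from the list above:
# outstanding_ios(128, 10) -> 1280
# outstanding_ios(128, 64) -> 8192
# outstanding_ios(128, 1)  -> 128
# outstanding_ios(1, 1)    -> 1
```

Which lines up with the results: only the configurations with thousands of in-flight I/Os pushed CPU utilization toward 80%.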

Like @Patrick said, this is a very unlikely workload for almost anyone. The drive utilization was 98% most of the time during benchmarking; how often are people driving an SSD at 100%? During certain business operations, but rarely in most other cases.

I agree, it could be the SM card.

I have some AIC NVMe drives to test next, and I'll see what they do to the CPU.

My other thread mentioned 20% utilization on approx. 50GHz (Intel too) via ESXi Windows VM testing.

Soon I'll have my file server set up with NVMe drives, and we'll see what real-world CPU usage is when it's being used by a handful of VMs :D
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,382
5,533
113
My sense is that the biggest issue is the efficiency of the load generation engines and is not the drives/ cards.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,350
1,798
113
CA
My sense is that the biggest issue is the efficiency of the load generation engines and is not the drives/ cards.
So you're thinking the benchmarking software is putting on a load that would be impossible to generate normally?

Along the lines of how Prime95 wasn't accurate for Haswell, overloading it to a degree impossible under any real workload, even at 100%? (So I heard; it would definitely overheat if not properly cooled!! LOL)

These are all things I plan to address with additional "real world" benchmarks in the next 2 weeks.


Good info to share.

I have notes someplace too on which benchmark utils caused high load and which didn't. The ones that didn't weren't high-QD or threaded, though.