NVMe: 2.5" SFF drives working in a normal desktop


RchGrav

Member
Aug 21, 2015
I just set up a machine for a friend on Saturday night using a 750 2.5" as the boot device, and there are some advantages, so let me play devil's advocate for a moment. With a single NVMe system drive you no longer get the slowdown during disk-intensive tasks that would normally make your system crawl because the OS drive was too busy. Installing 150+ Windows updates didn't use more than 5-10% of the NVMe drive's performance, so Windows apps and web browsing still functioned as if the drive were idle; a regular SSD wouldn't have handled that nearly as well. So on one hand, it's not going to be much faster than your system at its best on a standard SATA SSD. However, during those times when your system would be slow because the drive was busy with disk-intensive work, an NVMe system drive is going to be many times faster. The fact of the matter is that NVMe is only faster during the periods when a standard SSD would be slow. NVMe is really hard to slow down; things you would normally avoid, like multiple simultaneous file copies, aren't a problem at all for NVMe.
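If anyone wants to reproduce that "busy drive" scenario, here is a minimal, hypothetical sketch (standard library only; the path, file size, and copy count are arbitrary assumptions, not anything from the build above) that hammers a volume with parallel file copies while timing small reads from the same volume:

```python
# Minimal sketch: stress a volume with parallel file copies while timing
# small random reads from it. On a busy SATA SSD the read latencies climb;
# an NVMe drive barely notices. Note the OS page cache can mask latency,
# so treat this as illustrative -- use fio with direct I/O for real numbers.
import os
import random
import shutil
import time
from concurrent.futures import ThreadPoolExecutor

TARGET = "stress_test"                      # directory on the drive under test
SOURCE = os.path.join(TARGET, "source.bin")
COPIES = 4                                  # simultaneous file copies
SIZE_MB = 512                               # size of the file being copied

os.makedirs(TARGET, exist_ok=True)
with open(SOURCE, "wb") as f:               # create the source file once
    for _ in range(SIZE_MB):
        f.write(os.urandom(1024 * 1024))

def copy(i: int) -> None:
    shutil.copyfile(SOURCE, os.path.join(TARGET, f"copy_{i}.bin"))

with ThreadPoolExecutor(max_workers=COPIES) as pool:
    futures = [pool.submit(copy, i) for i in range(COPIES)]
    with open(SOURCE, "rb") as f:           # "foreground" reads while copies run
        while not all(fut.done() for fut in futures):
            t0 = time.perf_counter()
            f.seek(random.randrange(SIZE_MB * 1024 * 1024 - 4096))
            f.read(4096)
            print(f"4 KiB read took {(time.perf_counter() - t0) * 1e3:.2f} ms")
            time.sleep(0.25)
```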

Using NVMe as a boot + system device is still a little non-intuitive during the install process. You have to make sure everything is in EFI mode, but you can't use Windows Secure Boot during the install itself: it needs to be set to EFI "Other OS", and you may even need to clear your Secure Boot keys. Once the install is done, you can switch Secure Boot back to Windows mode in your EFI BIOS and reload the default Secure Boot keys.
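As a quick way to confirm afterwards that the install really did end up in EFI mode, here is a small sketch using a well-known Win32 trick (an assumption on my part that it behaves the same on your Windows version): ask the firmware for a nonexistent variable and infer the boot mode from the error code.

```python
# Sketch: detect UEFI vs. legacy BIOS boot on Windows. Requesting a
# nonexistent firmware variable fails with ERROR_INVALID_FUNCTION (1)
# on legacy BIOS; on UEFI it fails with some other error code.
import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
ERROR_INVALID_FUNCTION = 1

def booted_via_uefi() -> bool:
    kernel32.GetFirmwareEnvironmentVariableW(
        "", "{00000000-0000-0000-0000-000000000000}", None, 0)
    return ctypes.get_last_error() != ERROR_INVALID_FUNCTION

if __name__ == "__main__":
    print("UEFI boot" if booted_via_uefi() else "Legacy BIOS boot")
```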

Anyway, if this discussion is "Using NVMe in a Normal Desktop" but we aren't using NVMe as a boot device, then how ARE we using it?! Certainly not as plain old bulk storage. Sure, we could redirect some of our program folders there, or maybe boot from SATA but locate %systemroot% on NVMe. Or you could use it as a level-2 SSD cache with something like PrimoCache (love that program, by the way).

So let me put this out there for discussion: if you ARE using 2.5" NVMe SSDs in a normal desktop with a desktop operating system, how are you making the best use of this high-speed, high-IOPS performance? I mean besides storage benchmark scores. :p
 

Patrick

Administrator
Staff member
Dec 21, 2010
So let me put this out there for discussion: if you ARE using 2.5" NVMe SSDs in a normal desktop with a desktop operating system, how are you making the best use of this high-speed, high-IOPS performance? I mean besides storage benchmark scores. :p
Well, my "normal" desktop has 2x Xeon E5 V3 processors, 256GB of RAM, and 12 SATA/SAS SSDs right now. NVMe is being used for Hyper-V VMs. Basically a silent box that can even run VMs!
 

Keljian

Active Member
Sep 9, 2015
Melbourne Australia
Well, my "normal" desktop has 2x Xeon E5 V3 processors, 256GB of RAM, and 12 SATA/SAS SSDs right now. NVMe is being used for Hyper-V VMs. Basically a silent box that can even run VMs!
Geez! I'd be really curious to know how it is networked to everything. You could be saturating 10GbE easily.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I just set up a machine for a friend on Saturday night using a 750 2.5" as the boot device, and there are some advantages, so let me play devil's advocate for a moment. With a single NVMe system drive you no longer get the slowdown during disk-intensive tasks that would normally make your system crawl because the OS drive was too busy. Installing 150+ Windows updates didn't use more than 5-10% of the NVMe drive's performance, so Windows apps and web browsing still functioned as if the drive were idle; a regular SSD wouldn't have handled that nearly as well. So on one hand, it's not going to be much faster than your system at its best on a standard SATA SSD. However, during those times when your system would be slow because the drive was busy with disk-intensive work, an NVMe system drive is going to be many times faster. The fact of the matter is that NVMe is only faster during the periods when a standard SSD would be slow. NVMe is really hard to slow down; things you would normally avoid, like multiple simultaneous file copies, aren't a problem at all for NVMe.

Using NVMe as a boot + system device is still a little non-intuitive during the install process. You have to make sure everything is in EFI mode, but you can't use Windows Secure Boot during the install itself: it needs to be set to EFI "Other OS", and you may even need to clear your Secure Boot keys. Once the install is done, you can switch Secure Boot back to Windows mode in your EFI BIOS and reload the default Secure Boot keys.

Anyway, if this discussion is "Using NVMe in a Normal Desktop" but we aren't using NVMe as a boot device, then how ARE we using it?! Certainly not as plain old bulk storage. Sure, we could redirect some of our program folders there, or maybe boot from SATA but locate %systemroot% on NVMe. Or you could use it as a level-2 SSD cache with something like PrimoCache (love that program, by the way).

So let me put this out there for discussion: if you ARE using 2.5" NVMe SSDs in a normal desktop with a desktop operating system, how are you making the best use of this high-speed, high-IOPS performance? I mean besides storage benchmark scores. :p
I wasn't questioning using NVMe for a boot drive on a desktop, or NVMe in a desktop in general... just saying that using 4x NVMe drives in RAID for a desktop seems a bit excessive :)
 

Patrick

Administrator
Staff member
Dec 21, 2010
Geez! I'd be really curious to know how it is networked to everything. You could be saturating 10GbE easily.
Or, the other perspective is that I can run a lot in a single (virtually silent) box and not need lots of external networking. But yes, it does use 10GbE.
 
Jun 24, 2015
Here are a few considerations that you may be overlooking. Please do not take these points as criticisms, but as my own personal and professional contributions to this ongoing debate:

(1) U.2 ports and cables are single points of failure.

(2) We don't really know yet how reliable JBOD NVMe SSDs will be. I have seen a lot of marketing promotions and very positive speed measurements, but I have STILL not seen any discussion of MTBF (mean time between failures) or infant mortality rates. Based on 44 YEARS of IT experience, it is not prudent or wise to expect that 100% of Intel's 2.5" NVMe SSDs will work perfectly.

(3) RAID modes and the supporting software are already very reliable and robust: they have been standard features of modern motherboards for many YEARS.

(4) With so many different RAID modes now available on modern chipsets and modern RAID controllers, multiple 2.5" NVMe drives can be configured in any of several redundant modes, e.g. RAID-1, RAID-10, etc., for the sake of redundancy and reliability if nothing else (see the sketch after this list).

(5) Please also note that reportedly only a limited selection of 2.5" NVMe backplanes supports up to 4 drives; as such, there is a present need to cable all 4 such drives to a host chipset and/or host controller that can operate all 4 with full NVMe protocol support.
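To put a number on point (4), here is a tiny sketch of the capacity/redundancy trade-off (simple textbook formulas; the 4 x 1.2TB drive configuration is just an assumed example):

```python
# Tiny sketch: usable capacity and tolerated drive failures for a few
# common RAID modes, given N identical drives of a given size (TB).
def raid_summary(mode: str, drives: int, size_tb: float):
    if mode == "RAID-0":        # striping: all capacity, no redundancy
        return drives * size_tb, 0
    if mode == "RAID-1":        # mirroring: one drive's worth of capacity
        return size_tb, drives - 1
    if mode == "RAID-10":       # striped mirrors: half the raw capacity,
        assert drives % 2 == 0  # guaranteed to survive at least 1 failure
        return drives / 2 * size_tb, 1
    raise ValueError(f"unknown mode: {mode}")

for mode in ("RAID-0", "RAID-1", "RAID-10"):
    usable, failures = raid_summary(mode, drives=4, size_tb=1.2)
    print(f"{mode:7s}: {usable:.1f} TB usable, survives {failures} failure(s)")
```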

BTW, if you happen to come across any backplanes with slots for more than 4 x 2.5" NVMe SSDs, please post that information here for everyone's benefit.

Just my 2 cents.

p.s. Despite the fact that Hewlett-Packard got started with those two guys in their garage, I'm beginning to detect a distinct bias in the computer storage literature, specifically: the truly fast storage is so expensive that it is beyond the reach of many individual desktop users, with the exception of a few JBOD U.2 drives. Even then, new motherboards are frequently required, which directly abandons the original purpose of PCI-Express, i.e. expansion options.

It won't be long before patents are only awarded to corporations, and individual inventors (like myself) will NOT be granted any more IT patents. At that point America, if not other parts of this planet, will be suffering under a vicious form of corporate fascism, even though thousands of American soldiers gave their lives opposing exactly that kind of fascism when they defeated the "Axis" in World War II.

So, I am honestly concerned that the clock speed of SATA-III is "stuck" at 6 Gb/s, while the USB 3.1 spec both increased the clock speed AND switched to a jumbo frame.

/s/ Paul
 

takao.nakagawa

New Member
Jul 10, 2015
Tokyo, Japan
d.hatena.ne.jp
If someone wants to organize another group buy of at least 25 or so cables, I can order it.
There are people here who wanted to buy 1 or 4 cables, right? I actually need just 6, but I am willing to buy the rest needed to make up a purchase order of 25. In other words, I don't need that many, but I will buy the remaining ~20 cables, or however many are necessary, so could you place the order, please?
Also, could you tell me how long the ordering, delivery, etc. will take?
I will send you my contact info via PM.
 

Continuum

Member
Jun 5, 2015
Virginia
There are people here who wanted to buy 1 or 4 cables, right? I actually need just 6, but I am willing to buy the rest needed to make up a purchase order of 25. In other words, I don't need that many, but I will buy the remaining ~20 cables, or however many are necessary, so could you place the order, please?
Also, could you tell me how long the ordering, delivery, etc. will take?
I will send you my contact info via PM.
Add me as another member interested in a couple of U.2 cables.
 
Jun 24, 2015
Please allow me to elaborate a little more on my "want ad" above:

I don't expect I will receive too many arguments from PC enthusiasts when I observe that multiple video cards have been all the rage for at least 10 years now. Most multi-video-card setups use cards with x16 edge connectors, with or without chipset support for the full 16 PCIe lanes.

The proposal we made to the Storage Developer Conference back in 2012 included a goal to "sync" PCIe lanes with SATA cables:
http://supremelaw.org/patents/BayRAMFive/SATA-IV.Presentation.pdf
http://supremelaw.org/patents/BayRAMFive/overclocking.storage.subsystems.version.3.pdf

That idea was not limited to SATA as such, but sought to stimulate debate about the advantages to be gained by giving a flexible data cable the same operational characteristics as a fixed motherboard trace.

Thus, we now see that SAS cables have reached 12 Gb/s and USB 3.1 cables have reached 10 Gb/s, and on the visible horizon is PCIe 4.0's 16 Gb/s clock rate. The overall trend is to increase transmission clock rates across the board, and SATA-III is obviously bucking this trend.

So, please envision the following:

As of PCIe 4.0, each x1 PCIe lane will oscillate at 16 Gb/s. Using the 128b/130b "jumbo frame" during data transmission, 130 transmitted bits carry 128 data bits (16 bytes), giving a divisor of 8.125 bits per byte (130 bits / 16 bytes). For the sake of simplicity, we round that divisor to 8 bits per byte. As such, 16 Gb/s divided by 8 equals 2 GB/s for each x1 PCIe lane.

Therefore, an x16 edge connector on a PCIe 4.0 RAID controller should have a raw bandwidth (MAX HEADROOM) of 32 GB/s in one direction, e.g. READs.

If we distribute that bandwidth across 4 x U.2 cables,
then each cable has a MAX HEADROOM of 8 GB/s:
http://supremelaw.org/systems/intel/4-port.fan-out.cabling.topology.JPG

That raw bandwidth of 8 GB/s should be plenty of headroom
for 2.5" NVMe devices that are cabled to NVMe RAID controllers
with x16 edge connectors.
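To make that arithmetic easy to replay, here is the same back-of-the-envelope calculation as a small script (it only restates the numbers above, without the rounding to 8 bits per byte):

```python
# Back-of-the-envelope PCIe 4.0 bandwidth, following the post above.
LANE_RATE_GBPS = 16.0        # Gb/s per PCIe 4.0 lane (16 GT/s)
ENCODING = 128 / 130         # 128b/130b line-coding efficiency
LANES = 16                   # x16 edge connector
CABLES = 4                   # one x4 U.2 cable per drive

per_lane = LANE_RATE_GBPS * ENCODING / 8   # GB/s per lane, one direction
per_slot = per_lane * LANES                # raw x16 slot bandwidth
per_cable = per_slot / CABLES              # MAX HEADROOM per U.2 cable

print(f"per lane : {per_lane:.2f} GB/s")   # ~1.97 (the post rounds to 2)
print(f"x16 slot : {per_slot:.1f} GB/s")   # ~31.5 (the post rounds to 32)
print(f"per U.2  : {per_cable:.2f} GB/s")  # ~7.88 (the post rounds to 8)
```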

Throw into this mix the arrival on the visible horizon of sophisticated non-volatile DRAM, like Intel's Optane (3D XPoint), and we are certainly in the right ballpark to foresee such very high bandwidths.

And, of course, such a RAID controller with 4 x NVMe ports should support all modern RAID modes, particularly RAID-0, which has become very popular for its nearly linear scaling in sequential READ and WRITE operations.

In summary, by "syncing" x16 PCIe 4.0 lanes with 4 x 2.5" NVMe SSDs, the data flow is symmetrical across the 4 x U.2 data cables, resulting in a cabling topology that is perfectly suited to most modern RAID modes.

For RAID configurations that require more than 4 x SSDs, we then postulate driver support for 2 (or more) NVMe PCIe 4.0 RAID controllers, just as multiple video cards are now available, operational, and reliable.

By starting our analysis with the upstream bandwidth and then working our way down the data stream, we can anticipate extraordinary speed AND reliability from RAID arrays built with 4 x 2.5" NVMe SSDs in open enclosures or backplanes, expanding to 5+ NVMe SSDs when 2 or more NVMe PCIe 4.0 controllers are working in tandem.

Hope this helps.

MRFS
 

CableGuy

New Member
Oct 1, 2015
Update 23-June-2015: the main site post, "4 solutions to add SFF 2.5" NVMe to your existing system tested," is now live with the latest information.

We had a thread going on how to add 2.5" SFF NVMe drives to a current-generation desktop system. Some of the review sites used an M.2 to SFF-8643 converter, but that is not practical if you:
  • do not have an open m.2 slot, or
  • want to install more NVMe drives
So we set about fixing that. Here is what you need:
  • Supermicro AOC-SLG3-2E4R - $149 (Amazon or eBay; I got mine from eBay). It came with a backplane indicator cable (CBL-CDAT-0674) and both full-height and half-height brackets. NOTE: this card works in some motherboards only. The Supermicro AOC-SLG3-2E4 has a PLX chip that allows bifurcation down to x4 devices; apparently that is the version we should be looking at, and the PLX chip is also what adds to the cost.
  • Cable from the Intel 750 series 2.5" retail kit - the label says "SFF PCIe SSD Cable" and it carries the part-number markings H73691-001 and AST 1513. I could not find this cable for sale on its own, so I just bought the $400 retail kit.
If you want to use 2x Intel 750 SSDs, this is an incremental $150-$250 cost, since the cables come in the retail kits.

Normally I would not spend $400 for a cable, but alas, I have plenty of use for the rest of the kit:
View attachment 480

Just to verify how well this worked, I took the Supermicro AOC, put it in a Supermicro X10SDV-TN4F motherboard, and used a spare Windows Server 2012 R2 Essentials boot disk. I had previously used this image to get numbers off add-in-card SSDs like the P3600 and P3700, so I knew it worked with other NVMe SSDs. I then attached a Dell-branded Samsung XS1715 and it fired right up!
View attachment 479

Still validating performance, but the connection works on at least some motherboards. Stay tuned.
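If you are repeating this at home, a quick way to confirm Windows actually enumerated the drive is to dump the disk list; here is a minimal sketch (it assumes the stock wmic tool that ships with Windows Server 2012 R2):

```python
# Minimal sketch: list the disks Windows enumerated, so a newly cabled
# 2.5" NVMe drive (e.g. the XS1715) can be spotted by model and size.
import subprocess

output = subprocess.check_output(
    ["wmic", "diskdrive", "get", "Model,InterfaceType,Size"],
    text=True,
)
print(output)
```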

Here are the specs for the two cards, the Supermicro AOC-SLG3-2E4R and the Supermicro AOC-SLG3-2E4. I think we would want to use the AOC-SLG3-2E4 in most cases.

View attachment 483 and View attachment 484
 

CableGuy

New Member
Oct 1, 2015
I have developed what I believe is a drop-in (plug-in) replacement for the cable assembly shipped with the Intel 750 Series 2.5" PCIe NVMe SSDs, currently the 1.2TB SSDPE2MW012T4RS and the 400GB SSDPE2MW012G4R5 drives.
I understand that the Supermicro AOC-SLG3-2E4R, as well as ASUS Z97 and X99 motherboards, support these add-on SSDs via an SFF-8643 connection on the board. I expect our solution to provide a connection to these devices.

Because it is quite cumbersome to build the SFF-8639 (U.2) drive connector onto a cable, with its requirements for signal pairs, system-management lines, and power wires, we designed a PCB that integrates all of the above onto an adapter board. The board plugs onto the drive and carries a SAS high-density (12G-rated) SFF-8643 connector to supply the signals and system management from the host to the drive.
The board takes a standard 4-pin power plug (5V-GND-GND-12V), and we incorporated 3.3V regulator circuitry (derived from the 5V input) to supply 3.3V to the drive.
To connect the drive adapter to the host, the kit includes a cable with SFF-8643 connectors on each end.

As this is a brand-new design, I would like to get a BETA unit or two out to someone who has an existing system running the Intel 2.5" drive(s) so we can test the new design.

If you are interested, please contact me.

Chris Schwartz
CS Electronics - Irvine, CA
Home - CS Electronics
(949) 475-9100
 

Patrick

Administrator
Staff member
Dec 21, 2010
I have developed what I believe is a drop-in (plug-in) replacement for the cable assembly shipped with the Intel 750 Series 2.5" PCIe NVMe SSDs, currently the 1.2TB SSDPE2MW012T4RS and the 400GB SSDPE2MW012G4R5 drives.
I understand that the Supermicro AOC-SLG3-2E4R, as well as ASUS Z97 and X99 motherboards, support these add-on SSDs via an SFF-8643 connection on the board. I expect our solution to provide a connection to these devices.

Because it is quite cumbersome to build the SFF-8639 (U.2) drive connector onto a cable, with its requirements for signal pairs, system-management lines, and power wires, we designed a PCB that integrates all of the above onto an adapter board. The board plugs onto the drive and carries a SAS high-density (12G-rated) SFF-8643 connector to supply the signals and system management from the host to the drive.
The board takes a standard 4-pin power plug (5V-GND-GND-12V), and we incorporated 3.3V regulator circuitry (derived from the 5V input) to supply 3.3V to the drive.
To connect the drive adapter to the host, the kit includes a cable with SFF-8643 connectors on each end.

As this is a brand-new design, I would like to get a BETA unit or two out to someone who has an existing system running the Intel 2.5" drive(s) so we can test the new design.

If you are interested, please contact me.

Chris Schwartz
CS Electronics - Irvine, CA
Home - CS Electronics
(949) 475-9100
Chris, happy to give it a test. Most motherboards will require the AOC-SLG3-2E4, though.
 
Jun 24, 2015
Take a look at this Gigabyte Z170X-Gaming 3 motherboard, and note all of the room allocated to the 3 x SATA Express ports:

http://supremelaw.org/systems/gigabyte/3xSATA-Express.jpg

There is certainly enough room to accommodate 4 x U.2 ports on similar motherboards.

In the past, I've installed RAID controllers in the primary x16 PCIe slot, and those PCs have worked just fine.

For maximum storage speed, x16 PCIe lanes should connect the CPU directly to 4 x integrated U.2 ports, with support for all modern RAID modes.

MRFS
 

Baron

New Member
Sep 24, 2015
Just received my AOC-SLG3-2E4 NVMe card today, and I was wondering: do I need the CBL-CDAT-0674 cable?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I've had it work both with and without the cable; I'm honestly not sure what the cable does.