NVMe: 2.5" SFF drives working in a normal desktop


SpoonNerd

New Member
Aug 25, 2015
Any further thoughts on whether the AOC-SLG3-2E4R will work for a single drive in any motherboard? There was one report in this thread of it not working, but it seems like a single drive shouldn't require any shenanigans on the motherboard's part to bifurcate an x8 port into two x4 ports.
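If anyone does try a single drive on that card, a quick sanity check from Linux would be something like this (just a sketch; the PCI address 01:00.0 is an example, and the nvme-cli package is assumed to be installed):

# Look for the NVMe controller behind the card
lspci | grep -i "Non-Volatile memory controller"

# Confirm the negotiated link is x4 (substitute your device's PCI address)
sudo lspci -s 01:00.0 -vv | grep -i LnkSta

# Confirm the kernel can actually see the drive and its namespace
sudo nvme list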
 

RchGrav

Member
Aug 21, 2015
Wish cables were more readily available to buy; I'd be setting up a test box with 4.
I really need to score a cable as well, but I only need one. I would really like to experiment with this 1.6TB DC S3600 I have.
 

Baron

New Member
Sep 24, 2015
I have read every post, and I think I am going to try this out. I recently purchased four Intel 750 400GB 2.5" drives and want to RAID them all under Red Hat 7.1. I'm sure my server specs are good enough to make this happen:

My System:


SPECS:
Supermicro X9QRi-F+
Supermicro | Products | Motherboards | Xeon® Boards | X9QRi-F+
Supermicro SC748TQ, 1400W Redundant PSU Chassis
Supermicro | Products | Chassis | 4U | SC748TQ-R1400B
x4 Intel Xeon E5-4650
256GB 1600 MHz DDR3 ECC RAM
Adaptec 16 port, 12Gb/s, 81605ZQ SAS/SATA RAID Adapter
PMC Adaptec | Series 8Q RAID Adapters
NVidia Quadro K4200
x2 Supermicro 8-slot 2.5" 12Gb/s Mobile Racks (expect to populate with 12Gb/s SSDs in the future)
Supermicro | Products | Accessories | Mobile Racks | CSE-M28SACB
x8 Samsung 840 Pro RAID-0 (6Gb/s connected to 81605ZQ)
x8 Samsung 850 Pro RAID-0 (6Gb/s connected to 81605ZQ)
x4 Intel 750 400GB (Not used yet, AOC-SLG3-2E4R cards on order)

I have support for bifurcation in the BIOS, and I am looking into getting an expansion board so that I could plug two AOC-SLG3-2E4R cards into one of my PCIe 3.0 x16 slots. At the moment, bifurcation is disabled until an expansion slot is recognized. If anyone has any input, your help would be greatly appreciated. Thanks.
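In case it's useful while waiting on the cards: since nothing here is hardware RAID, an md software-RAID sketch for the four 750s under Red Hat 7.1 would look roughly like this (device names are assumptions that depend on enumeration order, and --create destroys existing data):

# Stripe the four Intel 750s into a single RAID-0 md array
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Record the array so it assembles on boot (RHEL keeps this in /etc/mdadm.conf)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

# Put a filesystem on it and mount
sudo mkfs.xfs /dev/md0
sudo mkdir -p /mnt/nvme-r0
sudo mount /dev/md0 /mnt/nvme-r0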
 
I don't know if this will help you, but I'll share it rather than withhold it, in the hope that it points you in the right direction.

The BIOS on some RAID cards ships with INT13 (interrupt 13) enabled; this is what allows such RAID cards to be bootable.

The Highpoint budget RAID cards we have used often allow INT13 to be DISABLED, using software downloaded from the vendor's website to change the factory default.

If you are planning to host two AOCs (Add-On Cards), you might check with the manufacturer to see whether INT13 will be a problem for your intended configuration.

As a general procedure that works, we found that modifying the BIOS was much easier with the RAID card installed withOUT any drives connected.

Once the INT13 setting was correct, THEN we cabled the drives and configured the RAID arrays.

Hope this helps.

MRFS
 
p.s. One more thought: given the relative maturity of SAS, and the emerging maturity of 12 Gb/s SAS SSDs, you might find it much easier and simpler to configure a robust RAID subsystem using multiple 12G SAS SSDs, like this Toshiba model:

Toshiba PX04S Enterprise SSD Review

Toshiba PX04S Enterprise SSD Review | StorageReview.com - Storage Reviews

Given what you must have already invested in all of that superb hardware, you may want to go with "Plan B," which allows you to wait for the availability of stable and reliable NVMe RAID controllers that utilize a cabling topology like this:

http://supremelaw.org/systems/intel/4-port.fan-out.cabling.topology.JPG

You'll have to decide whether to utilize a backplane or direct cabling (i.e. just another integration issue that should be easily solved).

MRFS
 

Baron

New Member
Sep 24, 2015
Thank you @Paul A. Mitchell. I will look into that INT13 solution in the morning. As for a newer RAID solution, I contacted Adaptec, LSI, Intel, and Supermicro about possible solutions; none of them seemed to know of a RAID card that would be NVMe compatible with these drives. In fact, after talking with a Supermicro tech rep about what I wanted (a solution that could at least see the drives in any shape, form, or fashion), they didn't recommend the AOC-SLG3-2E4R card. Come to think of it, they told me an NVMe card wasn't even available. In retrospect, this kind of pisses me off after finding this forum and reading how you guys are using the very solution I asked them about. I even went as far as asking SM about a newer 2.5" mobile rack that supports NVMe SSDs, and again they couldn't give me any information.

Don't get me started on Intel. I called them just this past Monday to ask about alternative connections for these drives. They stated that they have nothing available that I could use, and that I would be stuck with an M.2 connection for the drive(s). Again, I am pissed off, because Intel does have a solution: the Intel A2U44X25NVMEDK. LSI... forget about it. Sorry for the rant. At the moment, I think the best option is to use Plan A until Plan B becomes a reality. Again, thank you. Your help is much appreciated.
 

neo

Well-Known Member
Mar 18, 2015
As for a newer RAID solution, I contacted Adaptec, LSI, Intel, and Supermicro about possible solutions; none of them seemed to know of a RAID card that would be NVMe compatible with these drives.
The consumer Skylake Z170 chipset does support NVMe RAID. As Xeons always follow, my assumption would be that the next Skylake-based Xeon chipset will also support it.
 
At Newegg, I searched for:

NVMe backplane

These popped up:

SuperMicro BPN-SAS3-116A-N2 Backplane support 8x2.5" SAS3 HDD & 2x2.5" PCIe NVMe - Newegg.com

Supermicro BPN-SAS3-826A-N4 2U Backplane for 3.5" HDD/SSD up to 8 x SAS3/SATA3 and 4 SAS3/SATA3/NVMe devices - Newegg.com

Supermicro BPN-SAS3-216A-N4 2U 24 Ports Hybrid Backplane support up to 20 x 2.5" SAS3/SATA3 HDD/SSD and 4x2.5" NVMe/SAS3/SATA3 Storage Devices - Newegg.com


Intel has similar backplanes, somewhere in their product catalogs (I've seen them, but can't put my finger on them just now).

You might ask Supermicro to escalate your tech-support question, because SM is obviously manufacturing NVMe backplanes (see above).


The other thing about Intel's 2.5" 750 NVMe SSDs is that each requires four PCIe 3.0 lanes; as such, if you are trying to join four of these SSDs into a RAID array (e.g. RAID-0), you will need 16 PCIe lanes just for that one array.

One of the two thrusts of my presentation to the Storage Developer Conference a few years ago was to allow variable transmission clocks, e.g. to "sync" SATA-IV devices with the PCIe 3.0 standard: the 8G clock and 128b/130b jumbo frames:

http://supremelaw.org/patents/SDC/SATA-IV.Presentation.pdf

Likewise, SATA-V would "sync" with the 16G clock planned for PCIe 4.0.

Upping the clock rate and switching to jumbo frames extends the logical evolution of SATA and SAS by mating a single PCIe lane with a single serial data stream. This logic becomes even more compelling with the arrival of PCIe 4.0's 16 GHz transmission clock, continuing with the 128b/130b jumbo frame.

As such, we should reasonably expect future SAS standards to increase their clock rate from 12 Gb/s to 16 Gb/s in stepwise fashion.

You will note that the SATA standards group is STILL stubbornly stuck at 6G, while the USB 3.1 standard adopted BOTH a higher clock rate (10G) AND its own jumbo frame (128b/132b).
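To put rough per-lane numbers on those encodings (raw line rate times encoding efficiency, before protocol overhead):

PCIe 3.0: 8 GT/s x 128/130 ≈ 7.88 Gb/s ≈ 985 MB/s per lane (so an x4 NVMe SSD tops out near 3.9 GB/s)
PCIe 4.0 (planned): 16 GT/s x 128/130 ≈ 15.75 Gb/s ≈ 1.97 GB/s per lane
SATA III: 6 Gb/s x 8b/10b = 4.8 Gb/s = 600 MB/s
USB 3.1 Gen 2: 10 Gb/s x 128/132 ≈ 9.7 Gb/s ≈ 1.2 GB/s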

With a large enough on-board cache, these PCIe 3.0 SAS RAID controllers handle heavy multi-tasking workloads robustly; the speed and capacity of that on-board cache plays a crucial role in the overall performance of such high-performance RAID controllers.

Areca is another very well-developed brand.

I agree with you exactly: the industry should be developing reliable PCIe 3.0 RAID controllers with support for 2.5" NVMe SSDs, even if a "bridge chip" is required to mediate the PCIe lane pool.


MRFS
 

Baron

New Member
Sep 24, 2015
Is Skylake a newer chipset? I was referring to a separate controller, much like my PCIe RAID card.
 

Baron

New Member
Sep 24, 2015
This is crazy! You guys are helping me more than SM, Intel, and Adaptec ever have this week.
 
The Intel DC P3608 uses a PLX bridge chip:

Intel DC P3608 Series 1.6TB NVMe PCIe SSD Review - High Density Enterprise Storage | Internals, Testing Methodology and System Setup

"This is where the magic happens. The PLX PEX8718 is a 16 lane PCIe 3.0 switch that funnels data from both of the PCIe 3.0 x4 controllers to the host over a PCIe 3.0 x8 link."


If we can have multiple video cards, each with x16 edge connectors, why not one PCIe 3.0 NVMe RAID controller with a single x16 edge connector?

I keep asking this same question, but as you also confirmed, none of the majors will admit to anything like this on their published road maps.


MRFS
 
I'm not sure about this, because I don't have any "hands on" experience with these late-model Supermicro motherboards.

Nevertheless, the reviews I've seen on the Internet, with photos, seem to imply that those NVMe backplanes are currently designed to integrate with a limited number of Supermicro server motherboards, like this one:

Supermicro SuperServer 1027R-WC1NRT (NVMe) Review | StorageReview.com - Storage Reviews

Supermicro Expands NVMe Support in Server and Storage Platforms | StorageReview.com - Storage Reviews


Google supermicro NVMe site:storagereview.com


MRFS
 
Am I correct that both Supermicro Add-On Cards are merely "pass-thru" cards, NOT RAID controllers? This is what expert Allyn Malventano at pcper.com has already explained, in comments at that website.

Also, aren't those cards limited to OS "software" RAID arrays? As such, a Windows software RAID would not be bootable.

PLEASE CLARIFY/CONFIRM.

MRFS
 
Here's my idea for a WANT AD:

WANTED FOR HIGH-PERFORMANCE DESKTOP STORAGE:


PCIe 3.0 RAID controller with x16 edge connector
and 4 x NVMe ports (16 x PCIe 3.0 lanes total),
with 4 x cables compatible with direct-connect
2.5" NVMe SSDs. Premium option supports
jumper block or Option ROM setting to increase
transmission clock to 16 Gb/s when PCIe 4.0 arrives.
For preferred cabling topology, see:
http://supremelaw.org/systems/intel/4-port.fan-out.cabling.topology.JPG
Must support all modern RAID modes and be bootable (cf. INT13).

MRFS
 

Patrick

Administrator
Staff member
Dec 21, 2010
So @Paul A. Mitchell, we have a bunch of experience with these cards. TBH, I would not use 2.5" NVMe drives as boot devices; that is what a cheap SATA port is for.
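For anyone who wants to confirm the pass-thru point MRFS raised above: with these AOCs, each SSD should appear to the OS as its own NVMe controller with no RAID device in between, so any striping is the OS's job. A quick check from Linux (just a sketch):

# Show the PCIe tree; pass-thru/bifurcation cards present the SSDs
# as ordinary NVMe controllers under the root port
lspci -tv

# Each drive gets its own controller node and namespace
ls -l /dev/nvme*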
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I think 4x NVMe in anything "desktop", except maybe the most high-end 2P workstation, would go mostly underutilized. Pushing 4x NVMe drives while still having compute power for everything else is going to require a hefty setup!

While I'd love the RAID card you speak of in a desktop, I can't see how you'd notice it at all.

My desktop/workstation experience went something like this: a 10K RPM WD Raptor to an Intel 80GB G2 was OMG awesome; that to a Samsung 840 Pro was noticeable, but not huge. Going from the 840 Pro to a Crucial M4 I couldn't tell; going from the M4 to a PCIe Samsung I could tell the latency was slightly lower, but not enough to make me go "WOW". NVMe is at the point where it's not really perceivable in a desktop compared to SATA or cheaper PCIe drives.



My $0.02
 