PCIe NVMe HBA FYI

MAX HEADROOM says:

8 GHz / 8.125 bits per byte = 984.6 MB/second (x1 PCIe 3.0 lane)

4 x 984.6 = 3,938.4 MB/second (x4 PCIe 3.0 lanes = 1 x U.2 cable)

4 in RAID-0 x 3,938.4 = 15,753.6 MB/second (4 x U.2 cables)
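
For anyone who wants to sanity-check that arithmetic, here is a minimal Python sketch of the same calculation (the 8.125 bits-per-byte figure comes from PCIe 3.0's 128b/130b encoding; the rounding is mine):

```python
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> 8 * 130/128 = 8.125 bits per payload byte
RAW_RATE = 8e9                    # transfers per second, per lane
BITS_PER_BYTE = 8 * 130 / 128     # 8.125 bits on the wire per byte of data

lane_mb_s  = RAW_RATE / BITS_PER_BYTE / 1e6   # one PCIe 3.0 lane
u2_mb_s    = 4 * lane_mb_s                    # x4 lanes = one U.2 cable
raid0_mb_s = 4 * u2_mb_s                      # four U.2 cables striped in RAID-0

print(f"x1 lane       : {lane_mb_s:9,.1f} MB/s")   # ~984.6
print(f"x4 (one U.2)  : {u2_mb_s:9,.1f} MB/s")     # ~3,938.5
print(f"4 x U.2 RAID-0: {raid0_mb_s:9,.1f} MB/s")  # ~15,753.8
```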

Generic cable topology (like the Highpoint 3840A), but using U.2 cables, of course.
 
MAX HEADROOM says:
p.s. I tried to get IcyDock interested in a 5.25" enclosure
with 4 x NVMe trays and a backplane, but I was not successful.

To exploit the widespread experience with 2.5" NAND flash SSDs,
we can connect U.2 cables to this neat 2.5" M.2 NVMe enclosure
made by Syba. It comes with a thermal pad, so cooling/throttling
should never be a problem as long as this 2.5" enclosure is
properly cooled by chassis fans:

SY-ADA40112, 2.5" U.2 (SFF-8639) to M.2 NVMe

I bought one as a gift to Allyn Malventano, and he confirmed that
it worked fine in 2 modes: directly connected to a U.2 port,
and indirectly connected to an adapter installed in an M.2 socket:

SYBA SY-ADA40112 2.5" U.2 (SFF-8639) to M.2 NVMe - Newegg.com

 

AJXCR says:
> It comes with a thermal pad, so cooling/throttling should never be a problem
> as long as this 2.5" enclosure is properly cooled by chassis fans ...

Appreciate the info! The plan is to put these:

[image]

Into these:

[image]

Into 2x these:

[image]

Did you not find the metal case to be an issue for cooling purposes?
 
MAX HEADROOM says:
> Did you not find the metal case to be an issue for cooling purposes?
At the Syba product page, there is a photo gallery;
one of those photos shows the pink-colored thermal pad
that sits between the M.2 and the aluminum enclosure:

 

AJXCR says:
> ... one of those photos shows the pink-colored thermal pad
> that sits between the M.2 and the aluminum enclosure ...


Alright. 8 kits ordered from Amazon @$37 a pop. Hope they work!
 
MAX HEADROOM says:
> Alright. 8 kits ordered from Amazon @ $37 a pop. Hope they work!
I hope you don't mind a few dumb questions:

The trays with blue handles are NVMe, correct?:

http://www.servethehome.com/wp-content/uploads/2015/06/Intel-A2U44X25NVMEDK-hot-swap-cage-front.jpg


You are going to need 2 x AICs to control 8 x NVMe M.2 SSDs, correct?

1 of 2:
1 x AIC --> 1 x NVMe backplane @ 4 x NVMe M.2

2 of 2:
1 x AIC --> 1 x NVMe backplane @ 4 x NVMe M.2

CORRECT?

One more dumb question:

Does your x8 AIC work in pairs?

In case you haven't already considered this,
you'll need to ensure that your chipset assigns
a full x8 PCIe 3.0 link to both AICs.

Sometimes, a chipset will make its own decisions
about lane assignment, and an x8 edge connector
only gets x4 lanes assigned.
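
If it helps, here is a minimal Linux-side sketch (assuming sysfs exposes the standard PCIe link attributes) that flags any device whose negotiated link width is narrower than its maximum, which is exactly the x8-slot-running-at-x4 situation described above:

```python
# List PCIe devices whose negotiated link width is narrower than their maximum,
# using the standard sysfs attributes exposed on Linux.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_w = (dev / "current_link_width").read_text().strip()
        max_w = (dev / "max_link_width").read_text().strip()
        cur_s = (dev / "current_link_speed").read_text().strip()
    except OSError:
        continue  # not every device exposes link attributes
    if max_w != "0" and cur_w != max_w:
        print(f"{dev.name}: running x{cur_w} of x{max_w} at {cur_s}")
```

(The same negotiated width and speed are also visible in the LnkCap / LnkSta lines of lspci -vv.)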


p.s. Maybe a photo of the backplane, for my benefit? (EDIT: see below ...)
Being low on funds, I can't afford to purchase
any of this stuff.

Thanks!!
 

AJXCR says:
> Is this your backplane?

Yes, I believe that is my backplane. Reading the manual for the chassis/motherboard, Intel very clearly specifies two different kits to run 8x NVMe drives with the S2600WTT board; each includes a different card, but I believe the backplane is the same.
 

AJXCR says:
> FYI: Intel's upcoming 900P should "front load" your drive cage
> with no serious problems:
>
> Intel Optane SSD 900P – 3D Xpoint Based Drive For Enthusiasts
Interesting... I wonder what pricing and availability on 2.5" 900P's will be... May have 8 nearly new PM953's up for sale here pretty soon! :D

While waiting for the M.2-to-SFF-8639 caddies, I tested both Intel 750's and a P3700 in the front-load 2.5" trays; both fit perfectly.

About to fire her up for the first time!
 
MAX HEADROOM says:
> Yes, I believe that is my backplane. ...
Many thanks for your patience with me:
because I don't have these parts in hand here,
I have to rely on photos, text, and correspondence
with real users, like yourself!

I believe Intel has used a "color code" for those connectors:

If you scroll UP to the photo of the Intel AIC with x16 edge connector,
those SFF-8643 ports are the same putty color as the four ports
labeled PCIE 0, PCIE 1, PCIE 2 and PCIE 3.

The black ports are labeled PORT 0 - 3 and PORT 4 - 7:
those must be the SAS connectors, and that backplane
simply "breaks out" those 4 channels and routes them
to the matching SAS sockets on the interior of the backplane.
 
MAX HEADROOM says:
> While waiting for the M.2-to-SFF-8639 caddies, I tested both Intel 750's and a P3700 in the front-load 2.5" trays; both fit perfectly.

Congratulations!

I was concerned whether the backplane sockets
are going to mate properly with the Syba caddies.

You're about to find out!

I believe there is a published standard for U.2 connectors,
and I doubt that Syba would have overlooked that key point.

I'm not so much concerned about the layout of the contact pins,
as I am concerned about the alignment of the U.2 connectors
with the Intel backplane.

We know the contact pins are correct, because Allyn Malventano
at www.pcper.com tested a Syba unit that I shipped to him,
for testing purposes, and he confirmed that his U.2 cable
worked with an integrated U.2 port on his motherboard,
and also with a U.2 cable plugged into an M.2 adapter.

I will be truly delighted to read that the Syba caddies
do align perfectly with the Intel backplane:

this is the kind of confirmation that we can share on other
interested discussion forums (e.g. by referring users
back to this thread).

I think we need to recommend that Intel bite the bullet
and design a general-purpose NVMe RAID controller
with x16 edge connector, 4 x U.2 ports, and support for
all modern RAID modes (as in my WANT AD):

Want Ad: PCIe NVMe RAID controller

This approach is really the ONLY way to circumvent
the bandwidth ceiling imposed by Intel's DMI 3.0 link,
which offers exactly the same bandwidth as a single x4 NVMe M.2 drive.

And, a compatible U.2 cable will permit lots of
workstation users to connect such an NVMe RAID controller
directly to 2.5" NVMe SSDs installed in the myriad of
tower and mid-tower chassis that have proliferated
worldwide. Those chassis were already designed and
manufactured with proper cooling for all installed
2.5" SSDs.

There is really no need for Intel to abandon their
huge installed base of PC / workstation users.
Yes, your drive cage is perfect for a large server rack,
but a large percentage of PC users have tower and
mid-tower chassis with plenty of room for 2.5" drives.
 

AJXCR says:
> This approach is really the ONLY way to circumvent
> the bandwidth ceiling imposed by Intel's DMI 3.0 link ...

Tell me more about this DMI 3.0 link limitation. I was under the impression that DMI linked the CPU and the PCH. In the case of a PCIe slot linked directly to the CPU, how does DMI come into play? On the X10DAC, for instance:

[X10DAC chipset block diagram]
As a side note, linking 2 new Intel 750's to the Intel HBA in the DAC board with no preconditioning, etc. yielded the following results (one pass, no tweaking):

[benchmark screenshot]

850 Pro's connected to the on-board LSI 3008:

[benchmark screenshot]

Note that the above numbers were calculated while using a set of temporary test 4655 v3's.
 
MAX HEADROOM says:
> In the case of a pcie slot linked directly to the CPU, how does DMI come into play?

There is no connection: by using one of the x16 or x8 slots on your chipset diagram,
you are bypassing the DMI link completely.

Right dead center in the chipset diagram above, see "DMI2 4GB/s" .

Actually, that's a typo that should read "DMI3" or "DMI 3.0" .

(DMI 2.0 is 5 GHz / 10 bits per byte x 4 lanes ~= 2.0 GB/s)

The "4GB/s" is correct because a DMI 3.0 link uses x4 PCIe 3.0 lanes @ 8 GHz / 8.125 ~= 4.0 GB/second (roughly)
(the exact number is 3,938.4 MB/second = 8 GHz / 8.125 x 4).
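
The same comparison in a few lines of Python (just restating the arithmetic above; the encoding overheads are the standard PCIe 2.0 8b/10b and PCIe 3.0 128b/130b figures):

```python
# DMI 2.0: x4 PCIe 2.0 lanes, 5 GT/s, 8b/10b encoding -> 10 bits per payload byte
dmi2_gb_s = 5e9 / 10 * 4 / 1e9                 # ~2.0 GB/s
# DMI 3.0: x4 PCIe 3.0 lanes, 8 GT/s, 128b/130b encoding -> 8.125 bits per payload byte
dmi3_gb_s = 8e9 / (8 * 130 / 128) * 4 / 1e9    # ~3.94 GB/s

print(f"DMI 2.0 ceiling: {dmi2_gb_s:.2f} GB/s")
print(f"DMI 3.0 ceiling: {dmi3_gb_s:.2f} GB/s")  # same as a single x4 NVMe M.2 drive
```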

The DMI link connects one of the CPUs (the one on the left) with the Platform Controller Hub (PCH).

In contrast, all of the x16 slots are connected directly to the 2 x CPUs: see the yellow "PCI-E X16" and "PCI-E X8"
connected directly to the two CPUs.

The above explains why it's so important to go with an
NVMe RAID controller with x16 or x8 edge connector
that completely bypasses that limited DMI 3.0 link!
 

AJXCR says:
> The above explains why it's so important to go with an
> NVMe RAID controller with x16 or x8 edge connector
> that completely bypasses that limited DMI 3.0 link!

...So I guess what I'm missing is, what's the affected audience? Who's trying to use NVMe downstream of the PCH? ...maybe SATA for someone who doesn't want to fork over the money for an HBA, but NVMe?
 
MAX HEADROOM says:
> Who's trying to use NVMe downstream of the PCH?

Mostly DIY prosumers who buy motherboards with
2 x M.2 slots, or 3 x M.2 slots, and configure those
multiple M.2 SSDs in a RAID-0 array.

ASRock has a motherboard with 3 x M.2 slots:

The ASRock Z170 Extreme7+ Review: When You Need Triple M.2 x4 in RAID

Problem is, those multiple M.2 slots are ALL
downstream of the DMI 3.0 link on ALL
of the latest-model motherboards, e.g. those with Intel Z270 chipsets.

So, in published benchmarks, a RAID-0 array
of 2 x Samsung 960 Pro SSDs topped out at ~3,500 MB/second,
and a single 960 Pro was not far behind,
because both configurations sat downstream of the DMI 3.0 link.
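
A back-of-the-envelope sketch of why those RAID-0 numbers flatten out, assuming the drives sit entirely behind the DMI 3.0 link (the per-drive figure is a rough sequential-read number for a 960 Pro class SSD; real-world results land a bit below the theoretical ceiling because of protocol overhead):

```python
# Theoretical DMI 3.0 ceiling (x4 PCIe 3.0 lanes), as computed earlier
DMI3_CEILING_MB_S = 8e9 / (8 * 130 / 128) * 4 / 1e6   # ~3,938 MB/s
DRIVE_MB_S = 3_500                                     # rough sequential read, one 960 Pro

def raid0_behind_pch(n_drives: int) -> float:
    """Aggregate sequential throughput, capped by the shared DMI 3.0 uplink."""
    return min(n_drives * DRIVE_MB_S, DMI3_CEILING_MB_S)

for n in (1, 2, 3):
    print(f"{n} x 960 Pro behind the PCH: ~{raid0_behind_pch(n):,.0f} MB/s")
```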