Highpoint has done it! RocketRAID 3800A Series NVMe RAID Host Bus Adapter

Jun 24, 2015
I don't know: I've put in a few requests to one of their engineers in Taiwan,
but unfortunately she replied with a rather vague answer (no specific dates).

Allyn Malventano, storage expert at pcper.com, was scheduled to meet her
during a recent trade show in Taiwan: you might try contacting him
at that website to get an update.

Sorry I cannot be more definitive: the specs look very promising,
particularly the PCIe 3.0 x16 edge connector and RAID support for 4 x NVMe SSDs.

FYI: Syba is manufacturing a 2.5" enclosure for M.2 SSDs
which has a U.2 connector: with compatible mini-SAS to U.2 cables,
this enclosure should make it pretty easy to assemble a
RAID-0 array with 4 x M.2 NVMe SSDs:

SY-ADA40112, 2.5" U.2 (SFF-8639) to M.2 NVMe

plus Samsung 960 Pro NVMe SSDs e.g.:

SAMSUNG 960 PRO M.2 512GB NVMe PCI-Express 3.0 x4 Internal Solid State Drive (SSD) MZ-V6P512BW (Newegg.com)
 
Jun 24, 2015
p.s. The one thing I did NOT notice in the specs for the Syba enclosure
is any thermal pad to transfer heat from the SSD to the enclosure housing.

The Samsung 960 Pro is reported to have a copper layer embedded in the
retail label; but if that label does NOT contact the enclosure housing,
thermal throttling could become an issue.
 
Jun 24, 2015
Yes: I believe you are correct.

I haven't (yet) actually assembled and successfully tested such a cabling setup
(I'm still building it in my mind's eye :)

I'm going on my knowledge of RAID controllers and cabling, in general.

The SFF-8087 connector is wider, with two channels on one side
and two channels on the other side of the contact plane.

The SFF-8643 connector split that contact plane down the middle,
and stacked the two "halves" to create a denser connector
(meaning, a narrower housing). This is the cable end
that mates with the corresponding ports on the model 3840A RAID controller.

At the other end, the SFF-8639 got a new name -- U.2 --
which is easier to identify because it mates with a more elaborate connector
on 2.5" NVMe SSDs.

There are plenty of photos on the Internet of this elaborate connector.

(I believe "U.2" was chosen as the companion of "M.2" SSDs,
i.e. much easier to remember.)

Also, U.2 cable assemblies typically include a DC power input,
either SATA-style or Molex-style.

When I don't have the parts in-hand,
I like to search the Internet for photos of each, e.g.:

Google SFF-8087 cable photo
Google SFF-8643 cable photo
Google SFF-8639 cable photo

Highpoint's specs list the SFF-8643 connector here:

http://highpoint-tech.com/PDF/RR3800/RocketRAID_3840A_PR_16_08_09.pdf

see "Four SFF-8643" (on the RocketRAID 3840A NVMe RAID Controller)

But, last time I checked, Highpoint's website does not list any
compatible cables under their "Accessories" category.


For confirmation, also ask Patrick Kennedy, who has much more experience
with server assemblies than I do; above in this thread, Patrick has
already mentioned his meetings with Highpoint staff at trade shows.

Hope this helps.


p.s. If you get your hands on a 3840A, would you please post an update here?
I'm dying to see some empirical measurements of a RAID-0 array with
four Samsung 960 Pro SSDs, assuming the 3840A is installed in a
compatible PCIe 3.0 x16 expansion slot. Intel's model 750 SSD
should also work in this same wiring topology, but the 750 is much
more expensive than the Samsung M.2 models.
 
Jun 24, 2015
Here's my hypothesis about a RAID-0 array with four Samsung 960 Pro SSDs:

EVEN IF a Z170 motherboard has 2 or 3 NVMe M.2 ports,
each such M.2 port is PCIe 3.0 x4 -- exactly the same raw bandwidth
as the Z170's DMI 3.0 link -- and all of them share that single upstream link:
32 Gigabits per second / 8.125 bits per byte = roughly 3,938 MB/second
(PCIe 3.0's 128b/130b encoding sends each 16-byte payload as a 130-bit frame,
hence 130 / 16 = 8.125 bits per byte).

Every M.2 RAID-0 array tested on Z170 chipsets has topped out
somewhere below 4,000 MB/second.
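
To sanity-check that ceiling, here's a quick back-of-the-envelope
Python sketch of the arithmetic above (nothing here is measured;
the constants are just the ones quoted):

# Rough check of the DMI 3.0 ceiling.
# DMI 3.0 is electrically PCIe 3.0 x4: 8 GT/s per lane, 128b/130b encoded.
RAW_BITS_PER_SEC_PER_LANE = 8e9   # PCIe 3.0 signaling rate per lane
ENCODING = 128 / 130              # 128 payload bits per 130-bit frame
DMI_LANES = 4

dmi_mb_per_sec = RAW_BITS_PER_SEC_PER_LANE * ENCODING / 8 * DMI_LANES / 1e6
print(f"DMI 3.0 ceiling: {dmi_mb_per_sec:,.0f} MB/s")   # -> 3,938 MB/s

That ~3,938 MB/s figure lines up nicely with the sub-4,000 MB/s
results reported for Z170 M.2 arrays.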

Let's assume perfect scaling with four Samsung 960 Pro SSDs
advertised at 3,500 MB/second per M.2 port:

3500 MB/sec x 4 = 14,000 MB/sec

Now, also assume 20% aggregate controller overhead
(cumulative, at both ends of the data cable):

14,000 MB/sec x 0.80 = 11,200 MB/second
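
Here is the same estimate in Python; note that the 20% overhead
figure is my own guess, not a published spec:

# RAID-0 read estimate: perfect striping assumed, minus a guessed
# 20% aggregate controller/cable overhead (an assumption, not a spec).
PER_DRIVE_READ_MB = 3500   # Samsung 960 Pro advertised sequential read, MB/s
DRIVES = 4
OVERHEAD = 0.20

estimate = PER_DRIVE_READ_MB * DRIVES * (1 - OVERHEAD)
print(f"Estimated RAID-0 read: {estimate:,.0f} MB/s")   # -> 11,200 MB/s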

As such, the Highpoint RocketRAID 3840A should realize
a measured / empirical READ speed that is almost
THREE TIMES the max upstream bandwidth of
the Z170's DMI 3.0 link.

Nothing we attach downstream of that DMI 3.0 link
can possibly exceed that MAX HEADROOM,
because that ceiling is hard-wired in the Z170 chipset.

Therefore, the ONLY WAY presently to exceed that ceiling
is to exploit the raw bandwidth of a compatible CPU-attached
PCIe 3.0 slot: either a full-length x16 edge connector,
or a full-length x8 edge connector, which runs at half
the raw bandwidth of the comparable x16 edge connector.
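
Running the same per-lane arithmetic for the common slot widths
(again, just a sketch of the encoding math above, not measurements):

# PCIe 3.0 slot ceilings at common widths, vs. the 11,200 MB/s estimate.
LANE_MB = 8e9 * (128 / 130) / 8 / 1e6   # ~984.6 MB/s per PCIe 3.0 lane

for lanes in (4, 8, 16):
    print(f"x{lanes}: {LANE_MB * lanes:,.0f} MB/s")
# x4:  3,938 MB/s  (the DMI 3.0 / M.2 ceiling)
# x8:  7,877 MB/s
# x16: 15,754 MB/s

By this arithmetic, an x8 slot (~7,877 MB/s) would actually cap a
four-drive array below the 11,200 MB/s estimate above, so an x16
slot is the safer home for the 3840A.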

I've been over and over these calculations many times:
if you do find any errors in my arithmetic, RSVP ASAP!

Lastly, time will also tell if AMD's upcoming Zen architecture
will perform at similar raw speeds: AMD certainly knows
about NVMe -- the proof of the pudding is in the ....