Highpoint has done it! RocketRAID 3800A Series NVMe RAID Host Bus Adapters

Notice: Page may contain affiliate links for which we may earn a small commission through services like Amazon Affiliates or Skimlinks.

Patrick

Administrator
Staff member
Dec 21, 2010
12,533
5,854
113
I saw your profile post. I did just have an impromptu meeting with someone from HighPoint a few minutes ago about these products.

They appear to be PCIe switch (PLX) based designs. They also appear to be looking for an OEM partner.
 
Last edited:
Jun 24, 2015
140
13
18
76
Thanks, Patrick: because motherboards do not presently support 4 x U.2 ports,
I'm waiting to build a system with:

1 x Highpoint 3840A
4 x U.2 cable
1 x 5.25" 4-in-1 enclosure
(see Icy Dock's CAD drawing below: but add 2 x 40mm fans):

+ U.2 cables

+ Icy Dock MB998IP-B(CP014)



I now have a PCIe 2.0 workstation working very well with:

13GB ramdisk using RamDisk Plus from www.superspeed.com
-and-
4 x SanDisk Extreme Pro SSDs in RAID-0
-and-
several HDD "spinners" for backups

RamDisk Plus SAVEs and RESTOREs using that RAID-0 array.

Windows is installed in a 50GB primary NTFS partition on the RAID-0
and our 12GB database is stored in the ramdisk
-and-
in the secondary data partition on the remainder of that RAID-0 array.


So, I'd like to "ramp up" to NVMe using U.2 topology (see above).


As you know, Intel's DMI 3.0 link is limited to x4 lanes @ 8 GT/s:
so, Highpoint's x16 edge connector is THE ELEGANT SOLUTION:

Want Ad: PCIe NVMe RAID controller

four @ x4 = x16
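The arithmetic behind that "four @ x4 = x16" point can be sketched numerically. This is a rough back-of-the-envelope calculation of theoretical PCIe 3.0 maxima; real-world throughput is lower due to protocol overhead:

```python
# Back-of-the-envelope PCIe 3.0 bandwidth: 8 GT/s per lane with
# 128b/130b encoding. These are theoretical one-direction maxima.
GT_PER_S = 8.0        # gigatransfers per second, per lane
ENCODING = 128 / 130  # 128b/130b line-code efficiency

def pcie3_gbps(lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    return lanes * GT_PER_S * ENCODING / 8  # bits -> bytes

print(f"DMI 3.0 (x4): {pcie3_gbps(4):.2f} GB/s")   # ~3.94 GB/s
print(f"x16 slot:     {pcie3_gbps(16):.2f} GB/s")  # ~15.75 GB/s
```

So an x16 edge connector has roughly 4x the ceiling of the DMI link, which is why routing four x4 NVMe drives through a single x16 slot is attractive.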

p.s. I'm assuming that Highpoint's 3840A is also bootable:
if not, they need to cure that omission ASAP:
the OS needs to be on a RAID-0 array of 2 or 4 x 2.5" NVMe SSDs, and
a proper enclosure will do the necessary cooling --

to prevent thermal throttling.

This goal is not mere dreaming: I've been corresponding
with May Hwang at Highpoint, and their 3840A satisfies
our WANT AD almost perfectly.

Allyn Malventano at www.pcper.com
has been invited to meet with May Hwang at Highpoint's
FMS booth as we speak, so expect an expert review
from Allyn in the very near future.

Keep up the good work, Patrick!

/s/ Paul (patent pending)
 
Last edited:

ttabbal

Active Member
Mar 10, 2016
766
212
43
47
It looks really, really, cool. But I'm not sure I want to know what it will cost. :eek:
 

RobertFontaine

Active Member
Dec 17, 2015
663
148
43
57
Winterpeg, Canuckistan
I imagine the first will be cruelly expensive.

I'd be happy to see a PCIe x8 adapter card that can handle two independent NVMe drives on-card at the moment.
NVMe hardware RAID is going to be obscenely fast. It will be very interesting to see who the buyers are.
 

ttabbal

Active Member
Mar 10, 2016
766
212
43
47
Absolutely. If only we knew a site admin that has contacts at HighPoint that might get a unit to review, so we can all drool and try in vain to get the companies we work for to get us one for our servers.... Such a person would be very popular...
 
Jun 24, 2015
140
13
18
76
contact May Hwang at Highpoint:[edited please do not post 3rd party e-mail addresses on the forums]
 
Last edited by a moderator:
Jun 24, 2015
140
13
18
76
tell May Hwang at Highpoint what you want: [edited please do not post 3rd party e-mail addresses on the forums]

They are now inviting such suggestions and recommendations.
 
Last edited by a moderator:

ttabbal

Active Member
Mar 10, 2016
766
212
43
47
I was obnoxiously volunteering Patrick. :)

I hesitate to inundate someone with email suggesting they send a review unit out. Or to shamelessly beg for one for myself. :) But if they really are interested in having reviews out there, getting one on this site would be a good place to start, IMO.
 
Jun 24, 2015
140
13
18
76
FYI: Allyn Malventano is supposed to be meeting with May Hwang
at Highpoint's FMS booth this week: one of the reasons for that meeting
is to arrange for Allyn to get a review sample of the 3840A. May Hwang
has known about Allyn's brilliance for a long time already :)
See www.pcper.com for Allyn's genius.

Hope this helps.

p.s. I've already suggested an experimental matrix to Allyn.
Highpoint has announced new PCIe 3.0 SATA, SAS and NVMe controllers,
so a parallel comparison of all 3 would be SUPERB.
 
Jun 24, 2015
140
13
18
76
My suggestion on August 7 to Allyn Malventano for reviewing Highpoint's new controllers:

Allyn,

They need to re-design these to enclose
4 x 2.5" NVMe SSDs like the Intel model 750:

OEM 400GB SSD 750 2.5IN NVME M - Newegg.com

Prosumers will not need dual-porting.

The latter Intel NVMe SSDs would be a logical choice
for your future review of Highpoint's new NVMe RAID controller.

I'm hoping you can test with 4 x SSDs in RAID-0, at a minimum.
I can foresee a test matrix like this:

4 in RAID-0 @ SATA
4 in RAID-0 @ SAS
4 in RAID-0 @ NVMe

Repeat the above with and without backplanes:
the latter permutation has SSDs simply wired
directly to the NVMe RAID HBA.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,752
2,129
113
> Thanks, Patrick: because motherboards do not presently support 4 x U.2 ports, I'm waiting to build a system with 1 x Highpoint 3840A, 4 x U.2 cables, and a 5.25" 4-in-1 enclosure... [Paul's full post quoted above]
Instead of "ramping up" to NVMe RAID-0... why not ramp up to a more appropriate business database solution that has RAM caching built in and doesn't require you to run it on your workstation/desktop as well?

NVMe RAID-0 isn't going to be faster than an in-memory database.

It seems that a high-frequency CPU, coupled with enough RAM to fit your (small, by the sounds of it) database in memory, plus one actual enterprise NVMe drive (P3700) for persistent database storage and one for your OS, will yield better results than NVMe RAID-0.

What database are you using on your desktop/workstation that's not going to benefit from going on an actual server with server grade hardware?
 
Jun 24, 2015
140
13
18
76
> What database are you using on your desktop/workstation that's not going to benefit from going on an actual server with server grade hardware?

Answer:
The source files for our website, which now has 125,000 discrete files
and sub-folders many levels deep.

The client/server paradigm is the wrong approach
for how I want to achieve maximum user productivity
(i.e. my own productivity).

It's also too expensive.

We don't leave this workstation running 24/7, so the ramdisk
needs to be SAVEd and RESTOREd at SHUTDOWN and STARTUP:
as such, the non-volatile subsystem needs to be as fast as possible.

Simply navigating to deep sub-folders goes MUCH faster with a ramdisk:
there is really no comparison with anything else, except a future 32GB ramdisk
using very fast quad-channel DDR4.

Doing simple searches is also a breeze, e.g. even using Command Prompt:
e.g.:

attrib usps.tracking.*.htm /s >log.txt

I would never consider doing these tasks over existing networks == too slow.
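The recursive `attrib` search above can be sketched equivalently in Python; the `site` folder and file name below are hypothetical stand-ins for the real website tree:

```python
# Recursive filename search, equivalent in spirit to
# `attrib usps.tracking.*.htm /s >log.txt` run from the site root.
# The "site" folder and file name are made-up examples.
from pathlib import Path

root = Path("site")  # stand-in for the website source tree
(root / "orders" / "2016").mkdir(parents=True, exist_ok=True)
(root / "orders" / "2016" / "usps.tracking.0001.htm").write_text("<html></html>")

# Walk every sub-folder, however deep, matching on the filename pattern.
matches = sorted(str(p) for p in root.rglob("usps.tracking.*.htm"))
Path("log.txt").write_text("\n".join(matches))
print(matches)
```

On a ramdisk either approach is fast; the point is that the directory walk itself is the cost, which is what the database-index suggestion later in the thread avoids.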

And, our back-ups WRITE to other storage devices directly from that ramdisk,
reducing wear on those other storage devices.

I'm very happy with our current setup, but 4 x U.2 ports in RAID-0
are an experiment I expect to run circles around our current system.

Tasks like drive images of our Windows C: partition should also finish
much faster than with a RAID-0 of 4 x SanDisk 6G SSDs.


I realize that you would have designed a different solution for
our individual needs here.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,752
2,129
113
125,000 flat files with deep nested folders using command prompt / searching, and more for a website... scary!!
Yeah, would have def. used a real database :)

You can still easily migrate to a 'real' database, either by moving your content into the DB or by using the DB as an index of where to find your content... if you're using 125,000 files I'm assuming you have a pretty darn good naming convention, so importing and making sense of the database should be rather simple as well.

The point I'm trying to make is that 14GB of data is small, and database systems (even free ones like MySQL), if configured properly, will keep ALL that data in cache (RAM) for you and persistent on NVMe when the power is out. You eliminate your entire ramdisk / RAID-0 backup and management nightmare/headache/concerns, and likely yield faster performance. Additional benefit: being able to write queries, build summary tables, do complex math calculations, and much more, near-instantly. Really, there's no comparison between using some sort of database and how you're doing it now.
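The "DB as an index of where to find your content" idea might look something like this minimal sketch. SQLite is used purely for illustration, and the table and column names are made up:

```python
# Hypothetical sketch: index a large file tree in SQLite so lookups hit
# an indexed table instead of re-walking deep folder trees every time.
# Table/column names ("files", "path", "name", "size") are invented.
import os
import sqlite3

def build_index(root: str, db_path: str = ":memory:") -> sqlite3.Connection:
    """Walk `root` once and record every file's path, name, and size."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS files
                   (path TEXT PRIMARY KEY, name TEXT, size INTEGER)""")
    con.execute("CREATE INDEX IF NOT EXISTS idx_name ON files(name)")
    with con:  # commit the whole walk as one transaction
        for dirpath, _, names in os.walk(root):
            for n in names:
                p = os.path.join(dirpath, n)
                con.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?)",
                            (p, n, os.path.getsize(p)))
    return con

# An indexed lookup then replaces a recursive `attrib ... /s` scan:
# con.execute("SELECT path FROM files WHERE name GLOB 'usps.tracking.*.htm'")
```

Whether this beats a ramdisk for a single user is an open question, but it makes the search cost independent of folder depth.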

For what you're doing, NVMe RAID-0 is going to be slower than an in-memory (RAM) database, and will be slower than a single NVMe drive without RAID-0.

You mention price, so I'll hit on that as well:
It would be cheaper to set up one NVMe drive for the DB, one NVMe drive for the OS, 32GB ECC, and an E3-1271 vs. 4x NVMe, an NVMe RAID card, and a RAM drive.
You could even step up to an E5-1650 (or greater) if the DB can utilize more cores for your workload, with more room for RAM if need be too; this would be a good system to last many years to come, based on what's still working for you :)


Sorry, didn't mean to get off track but I enjoy solving problems/issues like this to increase performance and in this case likely keep upgrade path/cost cheaper than expected yet performing more :)
 
  • Like
Reactions: tare55
Jun 24, 2015
140
13
18
76
> RAID-0 of NVME ... will be slower than single (1) NVME drive without raid-0

I think your statement is backwards, frankly.

Well, since you're claiming special knowledge of the future,
without having done the requisite experiment,
we'll just have to wait until the reviews of Highpoint's 3840A
become available.

My prediction is that 4 x Intel model 750 NVMe SSDs
in RAID-0 wired via 4 x U.2 cables to Highpoint's 3840A
controller will tell us what we need to know -- no backplane,
but wired directly to the SSDs' connectors.

I doubt very much that Highpoint would have invested
so much engineering, only to produce an NVMe RAID Controller
with which a single JBOD drive is faster than a RAID-0 array
of 2 or 4 NVMe drives.

Yes, I believe you've got it backwards: but, I am willing to await
the experimental results, before claiming victory :)
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,752
2,129
113
> I think your statement is backwards, frankly... I doubt very much that Highpoint would have invested so much engineering, only to produce an NVMe RAID Controller with which a single JBOD drive is faster than a RAID-0 array of 2 or 4 NVMe drives... [Paul's full reply quoted above]
I don't need to have "special knowledge of the future", as you say, to understand that RAID introduces latency. When you're not dealing with a huge number of users simultaneously accessing large chunks of data, you're going to come nowhere near needing more than the >2GB/s that one single NVMe drive can already do. Therefore, for your workload, and many others like it, RAID-0 is absolutely worthless, and as I said before, an appropriate database that handles in-memory caching will be not only easier to use and faster to access, but easier to manage, and will cost less than your proposed solution.

You're dealing with tons of small files in a mostly workstation-type environment, and that means the access itself has to be as fast as possible to actually be faster... therefore, one single Intel enterprise NVMe drive will provide faster access to your data than 4x in RAID-0. Will one be faster transferring/reading 100% large files? Probably not, but that is not your workload... and one single NVMe drive can already do 2GB/s, you have less than 16GB of data, and you're hitting it with a single user... it's pointless to complicate things and spend more on a RAID-0 NVMe setup of "cheap consumer" drives to attempt to reach the performance of a single enterprise drive.


You can get a 2TB P3700 NVMe drive for $1,200-1,800 on eBay (or here from another member) that will absolutely smash 4x Intel 750 NVMe.

The whole point of your use case is irrelevant anyway; it's like putting a 1,000HP engine in a Ford Focus... sure, it could be made to work, and sure, it will go faster than the 4-cylinder that's in it, but it's still a Ford Focus, not meant to utilize 1,000HP, so very much of that HP is wasted spinning tires with no traction.
 
  • Like
Reactions: palm101

Patrick

Administrator
Staff member
Dec 21, 2010
12,533
5,854
113
I did get a pretty good/ broad industry perspective today.

Also - we are starting to see companies at FMS move to OCuLink, pushed by Intel. We do have NVMe systems in DemoEval using OCuLink already.
 
Last edited: