Looking for an 8-bay NVMe U.2 enclosure with either a PCIe 4.0 or Thunderbolt 4/USB4 interface

I've noticed a couple of suppliers are now offering the HighPoint SSD6540, an external 8-bay U.2 (15mm) NVMe PCIe Gen3 enclosure. I'm looking for a vendor offering a comparable enclosure with a PCIe Gen4 x4 or Thunderbolt 4 interface. The problem with M.2 2280 drives is that they don't achieve a high enough density, topping out around 4TB per SSD, whereas U.2 can provide 15.36TB per SSD (e.g. the Intel SSD D5-P4326 Series, 15.36TB, 2.5in PCIe 3.1 x4, 3D2, QLC). Since we have access to a number of U.2 data center SSDs, we're looking for an 8-bay enclosure that can accommodate eight (8) of them, preferably with a Thunderbolt 4 or USB4 interface, though a PCIe Gen4 x4 interface is OK. We want to repurpose these data center U.2 SSDs in our office environments and make them available to laptops and desktops, which all come with Thunderbolt 4/USB4 ports and/or PCIe Gen4 x16 I/O slots.
 

NateS

Active Member
Apr 19, 2021
159
91
28
Sacramento, CA, US
That's a pretty odd set of requirements -- what's your use case for this?

I suspect you won't find the exact product you're looking for, as it's pretty niche, and both PCIe4 and TB4 are new enough that there's not a lot of niche products out yet. I'm not even sure it'll ever get made, since it would be leaving a lot of performance on the table by connecting 8 drives, each with a x4 interface, through a single x4 interface to the host. And most people don't need that much storage connected externally to only a single system.

The more typical way to solve this sort of problem is a NAS -- instead of a single box that can only connect to one system at a time, you put all those drives in a server, and give all the systems network access to it. To get sufficient performance, this might require upgrading your network, but it would very likely be cheaper and better overall than trying to equip all your systems with their own high performance 120TB flash array.

It's also worth thinking through how you'll handle backups with this system. An external thunderbolt box that plugs into a laptop while being used, but is left unplugged otherwise, is very hard to automate backups for, whereas a NAS server can do backups whenever you need to, potentially onto hard drives in the same box (for one of your backup copies).
 
The major problem is that no NAS offers NVMe PCIe Gen4 with direct motherboard connectivity. Even all-flash NAS, which is ultra expensive (something like $25K for 16TB, i.e. 4 x 3.84TB), only connects over a 100G link, and just two of these SSDs running simultaneously would saturate it: 7,000MB/s x 2 = 14,000MB/s, or roughly 112Gb/s, obviously a larger pipe than a flash NAS supplies. So if you wanted to order an external NAS with sustained sequential read capability of 6,000 to 7,000 MB/s, what would you suggest? Exactly. The obvious answer is to put something like the Intel SSD D7-P5510 (3.84TB, 2.5in PCIe 4.0 x4, U.2 15mm) in some kind of external enclosure so it can connect to a PC through a PCIe Gen4 slot; the PC supports both PCIe Gen4 x4 and x16 I/O cards.

The problem is that vendors today (e.g. Dell PowerEdge R7525) only offer these capabilities on their large rackmount server platforms. That's not a problem in itself: I have eight (8) PowerEdge servers hosting client Oracle websites. Unfortunately, the AI deep learning TensorFlow/Keras modeling and training programs for our financial and investment strategy AI models run on a workstation under Ubuntu, not a server OS. The workstation does have dual NVIDIA Tesla 100's to accelerate the AI processing, though.

The intermediate data generated during deep learning (neural network) training has to go onto some type of super-fast media (SSDs with up to 6,200MB/s sequential write are available). This intermediate storage acts like an AI cache: the neural network compute engine keeps referring back to it during the training cycle, so speed is very important! A training model can generate up to 4TB of temporary data in this cache. These data sets are important and can be saved, tweaked, and re-run (and since the data has already been pre-processed, the re-run goes much faster), so the storage media/external cache has to be really fast. I'm not sure what you consider a special-case scenario, but this is very common throughout the huge AI deep learning industry, hardly a special use case. The workstation doesn't support things like Intel Optane SSDs on the motherboard.
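To make the access pattern concrete, here is a rough sketch (paths and dataset are placeholders, not our actual pipeline) of the kind of caching we do: a tf.data pipeline spills its pre-processed elements onto the fast NVMe scratch volume so later epochs and re-runs read the already-processed data instead of recomputing it.

```python
import tensorflow as tf

# Hypothetical mount point for the fast external/PCIe NVMe scratch volume.
SCRATCH_CACHE = "/mnt/nvme_scratch/train_cache"

# Placeholder dataset; in practice this is the real training data.
raw = tf.data.Dataset.from_tensor_slices(tf.random.uniform([1024, 256]))

ds = (raw
      .map(lambda x: x * 2.0, num_parallel_calls=tf.data.AUTOTUNE)  # stand-in for real preprocessing
      .cache(SCRATCH_CACHE)   # spills processed elements to files on the NVMe scratch path
      .shuffle(1024)
      .batch(32)
      .prefetch(tf.data.AUTOTUNE))

# The first full pass populates the cache; subsequent epochs stream from the NVMe cache.
for _ in ds:
    pass
```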
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,089
1,506
113
You do know that Thunderbolt or USB4 is going to get you 4GB/s at most? With 8 drives over the same connection, that's 500MB/s each, which is lower than SATA speeds. It also means only a single workstation can utilize it at any given time.

Why don't you stick all the NVMe SSDs in a server (example) along with a dual 100GbE NIC and use 100GbE NICs in workstations? The hypothetical product you are looking for doesn't exist because there's little demand due to bandwidth constraints. There are far better ways of doing things.
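Rough math, using approximate numbers (a quick sketch, not benchmarks):

```python
# Back-of-envelope only; real-world throughput will be lower due to protocol overhead.
TB4_USABLE_GBPS = 4.0          # ~4 GB/s of usable PCIe bandwidth over one TB4/USB4 link
DRIVES = 8
print(f"Per-drive share over one TB4 link: {TB4_USABLE_GBPS / DRIVES:.2f} GB/s")   # ~0.5 GB/s

HUNDRED_GBE_GBPS = 100 / 8     # ~12.5 GB/s raw for a single 100GbE port
GEN4_U2_SEQ_GBPS = 7.0         # ~7 GB/s sequential from one Gen4 U.2 drive
print(f"Drives needed to saturate one 100GbE link: ~{HUNDRED_GBE_GBPS / GEN4_U2_SEQ_GBPS:.1f}")
```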
 

Patriot

Moderator
Apr 18, 2011
1,451
792
113
Sounds like you need a large PCIe NVMe or Optane HHHL card...

#1 I think you will find that QLC drives are not going to have the endurance you need... and TLC might not either (rough math sketched below).
#2 Scaling will never be linear, especially with NVMe drives: more and more CPU overhead is required, and the more drives you have, the higher the queue depth needed to get their full performance out of them (in a multi-user context).
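Rough endurance math on #1 (all figures illustrative, not from your drives' spec sheets or your real write volume):

```python
# Illustrative numbers only: a 3.84TB read-intensive drive at ~1 DWPD over a
# 5-year warranty, hammered with ~4TB of intermediate data several times a day.
DRIVE_TB = 3.84
DWPD = 1.0                    # read-intensive class; mixed-use parts are closer to 3 DWPD
WARRANTY_YEARS = 5
RUNS_PER_DAY = 4              # assumed training runs per day
WRITES_PER_RUN_TB = 4.0       # temp data generated per run (from the post above)

rated_tbw = DRIVE_TB * DWPD * 365 * WARRANTY_YEARS          # ~7,000 TBW
daily_writes_tb = RUNS_PER_DAY * WRITES_PER_RUN_TB          # 16 TB/day
days_to_rated_limit = rated_tbw / daily_writes_tb
print(f"Rated endurance ~{rated_tbw:.0f} TBW; at {daily_writes_tb:.0f} TB/day "
      f"the rated limit is reached in ~{days_to_rated_limit:.0f} days "
      f"(~{days_to_rated_limit / 365:.1f} years)")
```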

I don't think you need as much performance as you think you do for just a pair of... "Tesla 100s", which is not a model, it's a class without a generation.

Without more info... I would blindly suggest 1-2 PM1735s, which are PCIe Gen4 x8 drives. If cost is not a factor... Optane will have much better low-queue-depth performance than anything NAND based.

The drives you listed are read-optimized; they would die an early death and, in general, would not perform very well as a cache.
 
Exactly, and when I find someone willing to lend me $26,724.02 for that sort of minimally configured server, I'll buy it! Look at the quote I received from Dell, and I already have a VPA with Dell.
Shipping Group Details
Shipping To: DOUGLAS CALL, SYNAPTICHORIZONS LLC
Shipping Method: Standard Delivery
Estimated delivery if purchased today: Jun. 03, 2021

PowerEdge R750 - [amer_r750_14794_vi_vp], Qty 1, Subtotal $24,349.89

Configuration (Description, SKU, Qty):
  • 2.5 Chassis, 379-BDTF, 1
  • SAS/SATA/NVMe Capable Backplane, 379-BDSW, 1
  • No Rear Storage, 379-BDTE, 1
  • No GPU Enablement, 379-BDSR, 1
  • PowerEdge R750 Server, 210-AYCG, 1
  • No Trusted Platform Module, 461-AADZ, 1
  • 2.5" Chassis with up to 24 HDDs (SAS/SATA) including 8 Universal Slots, 321-BGFG, 1
  • Intel Xeon Gold 6354 3G, 18C/36T, 11.2GT/s, 39M Cache, Turbo, HT (205W) DDR4-3200, 338-CBBW, 1
  • Intel Xeon Gold 6354 3G, 18C/36T, 11.2GT/s, 39M Cache, Turbo, HT (205W) DDR4-3200, 338-CBBW, 1
  • Additional Processor Selected, 379-BDCO, 1
  • Heatsink for 2 CPU configuration (CPU greater than or equal to 165W), 412-AAVB, 1
  • Performance Optimized, 370-AAIP, 1
  • 3200MT/s RDIMMs, 370-AEVR, 1
  • Unconfigured RAID, 780-BCDS, 1
  • PERC H745 Controller, Front, 405-AAUZ, 1
  • Front PERC Mechanical Parts, rear load, 750-ACFQ, 1
  • Power Saving Dell Active Power Controller, 750-AABF, 1
  • UEFI BIOS Boot Mode with GPT Partition, 800-BBDM, 1
  • Standard Fan x6 V3, 750-ADGK, 1
  • Dual, Hot-Plug, Power Supply Redundant (1+1), 1400W, Mixed Mode, 450-AJHG, 1
  • Riser Config 2, Half Length, 4x16, 2x8 slots, SW GPU Capable, 330-BBRX, 1
  • R750 Motherboard, 329-BFGT, 1
  • OpenManage Enterprise Advanced, 528-BIYY, 1
  • iDRAC9 Datacenter 15G, 528-CRVW, 1
  • Broadcom 57414 Dual Port 10/25GbE SFP28, OCP NIC 3.0, 540-BCOC, 1
  • PowerEdge 2U Standard Bezel, 325-BCHU, 1
  • Dell EMC Luggage Tag, 350-BCED, 1
  • BOSS-S2 controller card + with 2 M.2 480GB (RAID 1), 403-BCMB, 1
  • BOSS Cables and Bracket for R750 (Riser 1), 470-AERR, 1
  • No Quick Sync, 350-BBYX, 1
  • iDRAC, Factory Generated Password, 379-BCSF, 1
  • iDRAC Group Manager, Disabled, 379-BCQY, 1
  • No Operating System, 611-BBBF, 1
  • No Media Required, 605-BBFN, 1
  • ReadyRails Sliding Rails, 770-BBBQ, 1
  • Cable Management Arm, 2U, 770-BDRQ, 1
  • No Systems Documentation, No OpenManage DVD Kit, 631-AACK, 1
  • PowerEdge R750 Shipping, 340-CULS, 1
  • PowerEdge R750 Shipping Material, 481-BBFG, 1
  • PowerEdge R750 CE Marking, No CCC Marking, 389-DYHE, 1
  • Dell/EMC label (BIS) for 2.5" Chassis, 389-DYHF, 1
  • Custom Configuration, 817-BBBB, 1
  • Basic Next Business Day 36 Months, 709-BBFM, 1
  • ProSupport and Next Business Day Onsite Service Initial, 36 Month(s), 865-BBMY, 1
  • On-Site Installation Declined, 900-9997, 1
  • 32GB RDIMM, 3200MT/s, Dual Rank 16Gb BASE, 370-AGDS, 4
  • 3.84TB SSD vSAS Mixed Use 12Gbps 512e 2.5in Hot-Plug AG Drive, 3DWPD, SED, 345-BCVR, 6
  • 3.84TB Enterprise NVMe Read Intensive AG Drive U.2 Gen4 with carrier, 400-BKGL, 2
  • Jumper Cord - C13/C14, 4M, 250V, 12A (North America, Guam, North Marianas, Philippines, Samoa), 492-BBDG, 2
  • Mellanox ConnectX-6 DX Dual Port 100GbE QSFP56 Network Adapter, Low Profile, 540-BCXN, 1
  • Dell EMC PowerEdge QSFP28 SR4 100GbE 85C Optic, 407-BCEQ, 1

Subtotal: $24,349.89
Shipping: $0.00
Estimated Tax: $2,374.13
Total: $26,724.02
 
Thanks to all of you for taking the time to consider the problem and for making good points and recommendations. Ultimately I decided to outfit the AI research workstation with one (1) Liqid Element LQD4500 PCIe AIC composable storage SSD card, which supports up to 24GB/s for the intermediate storage location while running the AI CNN training and testing model. Thanks. I only wish the Liqid Element LQD4900 PCIe AIC SSD with Optane persistent memory could achieve the same speeds as the LQD4500, but it's only about a third as fast. However, it would work very well for our Oracle RDBMS client data partitions, which can take advantage of App Direct (AD) mode, where the PMem acts like persistent memory or storage.
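For the App Direct use, here's a minimal sketch of how software sees a PMem namespace in that mode: memory-map a file on a DAX-mounted filesystem and store directly into it. The mount point is hypothetical, and real production code would use a dedicated PMem library (e.g. PMDK) for proper flushing rather than plain msync.

```python
import mmap
import os

# Hypothetical fsdax mount point for an App Direct namespace; adjust to the
# actual mount (e.g. a namespace created with ndctl and mounted with -o dax).
PMEM_FILE = "/mnt/pmem0/example_region.bin"
SIZE = 64 * 1024 * 1024  # 64 MiB demo region

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

# On a DAX mount, loads/stores through the mapping hit the persistent media
# directly, without the page cache in between.
with mmap.mmap(fd, SIZE) as buf:
    buf[0:11] = b"hello, pmem"
    buf.flush()  # msync; PMDK-style flushing would be used in real PMem code
os.close(fd)
```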
 

jpmomo

Active Member
Aug 12, 2018
531
192
43
What is the storage size of the liqid card that you got? How much do they cost?
 
Pricing is very specific and would be provided to you through your Liqid partner; each customer may have different arrangements with their specific partner. My partner is Dell. The card we use is the FHFL:
L4500-007T68-040 Element LQD4500 - 7.68TB, NVMe PCIe Gen 4.0 x16 FHFL AIC SSD = $5,925

There are a number of different sizes and models available: LQD4900, LQD4500 & LQD3000

Data Center Selection
L4500-007T68-040 7.68TB, 983, NVMe PCIe Gen 4.0 x16 FHFL AIC SSD
L4500-015T36-040 15.36TB, 983, NVMe PCIe Gen 4.0 x16 FHFL AIC SSD
L4500-030T72-040 30.72TB, 983, NVMe PCIe Gen 4.0 x16 FHFL AIC SSD

Enterprise Selection
L4500-006T40-040 6.40TB, 983, NVMe PCIe Gen 4.0 x16 FHFL AIC SSD
L4500-012T80-040 12.80TB, 983, NVMe PCIe Gen 4.0 x16 FHFL AIC SSD
L4500-025T60-040 25.60TB, 983, NVMe PCIe Gen 4.0 x16 FHFL AIC SSD
 

jpmomo

Active Member
Aug 12, 2018
531
192
43
My poor man's version of the L4500-030T72-040 :) 6 cards with 4 x 2TB M.2 each, for a total of 48TB in RAID 0. With 2 of those cards I was able to get >50GB/s R/W, so I am curious to see how this test goes. Still have the one x16 slot waiting on the RAID card!
 

Attachments

Hi jpmomo,
My poor man's version of the L4500-030T72-040 :) 6 cards with 4 x 2TB M.2 each, for a total of 48TB in RAID 0. With 2 of those cards I was able to get >50GB/s R/W, so I am curious to see how this test goes. Still have the one x16 slot waiting on the RAID card!
Really interested in seeing how this configuration tests out. Probably the main difference is that the Liqid version is optimized so the entire configuration fits in one PCIe Gen4 x16 slot. Unfortunately my R750 only has one (1) full-height, full-length x16 slot for the Liqid card, oh well. There probably is a slight difference in that the Dell PE R750 does have dual Intel® Xeon® Gold 6354 3G (18C/36T, 11.2GT/s, 39M Cache, Turbo, HT, 205W) processors and, of course, eight (8) hot-swap Gold high-performance fan modules to cool the CPUs and the sixteen (16) U.2 PCIe Gen4 NVMe SSDs. It also supports 32 RAM modules and up to sixteen (16) Intel Optane PMem 200 modules. Small difference, but worth noting.
 

jpmomo

Active Member
Aug 12, 2018
531
192
43
PM me if you want to discuss in detail. I also have a Dell R750 and R7525 that I will use for some of these tests. The R750 has riser config #2 with the FH/FL slots. The R750 has the Intel Platinums and the R7525 has the AMD Milans. Both are also for sale after a few of these tests :)
jp
 

jpmomo

Active Member
Aug 12, 2018
531
192
43
330-BBRW Riser Config 2, Full Length, 4x16, 2x8 slots, DW GPU Capable. I specifically ordered that config as it gives me the most flexibility. In addition to the DW aspect, you also get the full length, which can help with some AICs.
 
OWC has launched a 1U 4-bay that takes U.2 drives. You could buy two. OWC Flex 1U4
I checked it out. It's cool; I'm trying to get the datasheet for it, but it appears to be PCIe Gen3 and to support rates up to 2,750MB/sec. Also, I'm not sure if it supports Windows; it does show macOS.
It's Thunderbolt 3 and USB 3.2 Gen 2 Type-C compatible.

  • 40TB Storage Capacity
  • 3 x 8TB 3.5" HDDs | 4 x 4TB NVMe SSDs
  • Thunderbolt 3 | USB 3.2 Gen 2 Type-C
  • USB 3.2 Gen 2 Type-A | DisplayPort 1.4
  • 1 x PCIe 3.0 x16/x4 Slot
  • Supports 8K, 5K, and 4K Displays
  • SoftRAID XT RAID Management Software
  • Data Transfers up to 5000 MB/s
  • Three Cooling Fans
  • macOS 10.15, 11.x, and 12.x Compatible
 

TrumanHW

Active Member
Sep 16, 2018
253
34
28
I checked it out. It's cool; I'm trying to get the datasheet for it, but it appears to be PCIe Gen3 and to support rates up to 2,750MB/sec. Also, I'm not sure if it supports Windows; it does show macOS.
It's Thunderbolt 3 and USB 3.2 Gen 2 Type-C compatible.

  • 40TB Storage Capacity
  • 3 x 8TB 3.5" HDDs | 4 x 4TB NVMe SSDs
  • Thunderbolt 3 | USB 3.2 Gen 2 Type-C
  • USB 3.2 Gen 2 Type-A | DisplayPort 1.4
  • 1 x PCIe 3.0 x16/x4 Slot
  • Supports 8K, 5K, and 4K Displays
  • SoftRAID XT RAID Management Software
  • Data Transfers up to 5000 MB/s
  • Three Cooling Fans
  • macOS 10.15, 11.x, and 12.x Compatible

As far as being "supported" by Windows... in this case, I think that just comes down to whether you're limited to using it as a PCIe device you control yourself, treating it as an external HDD/SSD box behind an HBA, or whether they provide the software to configure it as RAID 0/1/10, etc.

I've spoken to HighPoint about similar subjects before (the SSD7120) which works as an HBA under FreeBSD
(I'd been considering its use via FreeNAS // TrueNAS).

Since your post, they've added another device:

SSD6780A: PCIe Gen4 x16, 8-bay U.2/U.3 NVMe (RAID) enclosure

I don't see any pricing on it yet... and they're not exactly explicit as to whether the "Enclosure" includes the PCIe card.

If your purpose is to use it via TB3... like most things, I'm sure you can just put it in a PCIe-to-TB3 expansion enclosure.

Hope this is helpful and isn't too late or irrelevant.
 

xeonguy

New Member
Aug 29, 2020
23
7
3
My poor man's version of the L4500-030T72-040 :) 6 cards with 4 x 2TB M.2 each, for a total of 48TB in RAID 0. With 2 of those cards I was able to get >50GB/s R/W, so I am curious to see how this test goes. Still have the one x16 slot waiting on the RAID card!
This is awesome. How's it going for you? Do you have a post to the specs of your setup somewhere?
 

gb00s

Well-Known Member
Jul 25, 2018
1,188
599
113
Poland
My poor man's version of the L4500-030T72-040 :) 6 cards with 4 x 2TB M.2 each, for a total of 48TB in RAID 0. With 2 of those cards I was able to get >50GB/s R/W, so I am curious to see how this test goes. Still have the one x16 slot waiting on the RAID card!
What filesystem did you test on? And how was that working if each of the cards is only capable of about 15GB/s, and only delivers PCIe 3.0 performance when filled with 4x M.2 NVMe drives? Just curious.