U.2 to quad M.2 carrier (2.5 inch form factor)?

Ouroboros

New Member
Jul 26, 2012
While ogling assorted NVMe gear, I ran across a bit of an oddball device: a U.2 2.5" form factor device that hosts a PCIe switch and 4 M.2 22110 drives. An interesting potential alternative to an Amfeltec Squid.

Viking Enterprise Solutions
U20040
U.2 NVMe SSD M.2 Carrier High Performance Solid State NVMe Storage Drive
https://www.vikingenterprisesolutions.com/products/ssd-platforms-overview/u20040/

Naturally they want you to populate it with "certified" (read: Viking-brand) NVMe drives.

I kinda doubt Viking will sell these empty... so, does anybody else sell a U.2 bodied carrier like this?
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Massive density with 4 QLC drives that can't saturate an x4 connection anyway?
Or a PCIe x4 host with a switch chip giving each drive a PCIe 3.0 x2 link?
He mentioned NVMe drives, so it wouldn't make sense to use them if you're just going to limit each drive to x1 speed. As for the PCIe x4 host, yes, that I can see being a potential use case.
 

Ouroboros

New Member
Jul 26, 2012
He mentioned NVMe drives, so it wouldn't make sense to use them if you're just going to limit each drive to x1 speed. As for the PCIe x4 host, yes, that I can see being a potential use case.
It's really hard to see the image, but the PCIe switch layout for the carrier is x4 upstream (x2 dual-port when using their custom driver, apparently) and x4 or x2 down to each M.2 drive, so basically the same concept as the older Amfeltec Squid with the short-slot adapter (x4 uplink, x4 to each of 4 M.2 drives).

You're quite correct that sequential IO will bottleneck at the PCIe switch, but depending on how you populate the carrier, random might not. If this carrier were sold empty, it would be handy for adding M.2 drives ad hoc to a system. A simple example would be using one of those M.2-to-U.2 cables Intel sells to stretch a direct motherboard M.2 slot into a fan-out. For systems that already have U.2 hot-swap trays, this is a way of expanding the number of devices seen under the CPU, for situations where you need device count and not just raw capacity, such as ZFS vdevs or Intel VROC. (Notably, a U.2 hot-swap tray is likely backed by a backplane already using a PCIe switch, so this carrier would add a second switch layer, which seems to be the practical limit for some OSes.)
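As a concrete sketch of the device-count angle (hypothetical pool layout and device names, assuming the four M.2 drives enumerate as independent NVMe controllers):

Code:
# Four drives from one carrier can back two mirror vdevs
# instead of a single large disk:
sudo zpool create tank \
  mirror /dev/nvme1n1 /dev/nvme2n1 \
  mirror /dev/nvme3n1 /dev/nvme4n1
sudo zpool status tank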

The somewhat skeezy DIY setup is pairing one of the new Amfeltec hexa M.2 PCIe cards with those M.2/U.2 cables and six of these carriers. You could potentially get 24 M.2 devices (6 slots x 4 drives per carrier) hanging off a single true x8 host uplink PCIe slot (though the hexa card also comes in an x16 uplink flavor) while keeping things to only two layers of PLX switches.
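The tree would look roughly like this (a sketch, assuming the x8 flavor of the hexa card):

Code:
host x8 slot
└── Amfeltec hexa card (PLX switch, layer 1)
    ├── M.2 slot 1 ── M.2-to-U.2 cable ── U.2 carrier (switch, layer 2)
    │                                       ├── M.2 drive (x2/x4)
    │                                       └── ... 3 more drives
    └── M.2 slots 2-6 ── same arrangement ── 5 more carriers
        → 6 slots × 4 drives = 24 NVMe devices, two switch layers deep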
 

Deslok

Well-Known Member
Jul 15, 2015
Kingston was briefly touting their DCU1000, which looks suspiciously similar, before it disappeared from their product lineup...

The AnandTech article shows the PCB, which might provide clues as to the source OEM...

Kingston at CES 2018: A 6.4 TB U.2 Enterprise SSD with Four M.2 Behind a PEX Chip

But it looks like these 2.5-inch carriers are capped at 2280 M.2 drives due to length limitations.
Kingston didn't really build theirs; it came out of a partnership with Liqid. They had an HHHL card Liqid built as well, although that one did hit retail.
Liqid Inc. — The Leaders in Composable Infrastructure
https://www.kingston.com/datasheets/DCP1000_us.pdf
I work with Kingston PR on occasion and they won't acknowledge its existence anymore; I think the partnership fell apart.
 

Ouroboros

New Member
Jul 26, 2012
Kingston didn't really build theirs; it came out of a partnership with Liqid. They had an HHHL card Liqid built as well, although that one did hit retail.
Liqid Inc. — The Leaders in Composable Infrastructure
https://www.kingston.com/datasheets/DCP1000_us.pdf
I work with Kingston PR on occasion and they won't acknowledge its existence anymore; I think the partnership fell apart.
Bingo, there's a Liqid product called the
Element LQD3250 PCIe U.2 SSD
Liqid Inc. — The Leaders in Composable Infrastructure

The hardware looks exactly the same as the DCU1000, including the USB monitoring port and the shell; just the logo badge is different. They describe it as an 8TB U.2 drive, which is what you'd get loading 4x 2TB M.2 SSDs into it.

So the question remains: is Liqid the original OEM, or Viking, or some other OEM further down the food chain? The Kingston DCU1000 interior shots of the PCB show the Liqid logo silkscreened on, but there may be a clue as to the OEM in the code at the bottom edge of the following picture:

https://www.techpowerup.com/live/images/CES_2018/kingston_020.jpg

The code seems to read TTM 1SAM62-0 94V-0 0217 -9.
The 0217 suggests a revision date of February 2017.
The TTM is stylized, suggesting the maker (of the PCB, at least):

TTM Technologies, Inc.

TTM seems to have a matching logo, but they appear to be your basic contract electronics manufacturer / PCB maker, not a designer. So I guess Liqid is the likely OEM.
 

Deslok

Well-Known Member
Jul 15, 2015
Yeah, from what I could tell Liqid was the ODM when Kingston was selling their card. You see it with other vendors that don't do their own in-house drives as well, like OWC using third parties for some models (and pretending they don't; neither side can admit to it because of "reasons", even though I work with both).
 

Mithril

Active Member
Sep 13, 2019
Ran across one of these on eBay. Do they work with any drive of the right length?
 

Sacrilego

Now with more RGB!
Jun 23, 2016
I bought two Viking U20040 U.2 to 4x M.2 carriers, and I just wanted to add some notes for those considering getting these, since I couldn't find much information about them online.
[Images: top and bottom M.2 slots loaded with Inland 2280 1TB drives. Note the single capacitor visible in the top-left shot.]

[Image: size comparison with an HGST HUSMM8040ASS20 on the left and an Intel DC S3520 on the right.]

[Image: PCIe topology.]
  • These carriers can only accept 2280 M.2 drives.
  • It uses a PCIe switch: a Switchtec PM8531 PFX PCIe Gen 3 fanout switch.
  • Being a U.2 carrier, only 4 PCIe Gen 3 lanes are connected to the switch.
  • The switch provides two lanes to each M.2.
  • The entire frame of the drive is metal and acts as a heatsink.
  • The carrier alone uses around 7-8 watts, measured at the wall with a Kill A Watt.
  • The carrier can get very hot. Airflow is required.
  • There's a management interface for the PCIe switch on the carrier, used to monitor statistics, temperature, and configuration. You'll need to install the driver and the user tools to use it in Windows. The driver isn't required for normal operation and isn't that hard to find. I couldn't find precompiled user tools, though. I was able to compile the switchtec-user tools from the source on their GitHub, but they don't seem to work properly (see the build sketch after this list).
  • Apparently, there's a driver to enable dual-port support for HA applications. It partitions the x4 uplink into two x2 ports, allowing two different systems or CPUs to access the carrier. I'm unable to find more information or the driver.
  • The brochure mentions PLP, but I don't think the carrier itself has it. They're very likely referring to the drives they would usually include with these carriers.
  • You don't need to populate all M.2 slots, but you do need at least one; otherwise the management device reports an issue to Windows.
  • Each drive is passed independently as its own device to the system. Makes sense, since this is a switch, not an HBA. No built-in RAID, just to be clear.
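For anyone who wants to try the same build, the rough Linux-side steps look like this (a sketch based on the switchtec-user repo's configure/make flow; exact steps may differ by version, and the temp subcommand is my assumption of the relevant query):

Code:
# Build the switchtec-user CLI from source (check the repo's README
# for your platform; Windows also needs the Microsemi driver):
git clone https://github.com/Microsemi/switchtec-user.git
cd switchtec-user
./configure && make
sudo make install
# Enumerate switches, then try reading the switch die temperature:
switchtec list
switchtec temp /dev/switchtec0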
I did some quick benchmarks using CrystalDiskMark with two different brands and configs.

[Images: bandwidth benchmarks scaling from 1 to 4 Optane P1600X SSDs on the carrier.]

[Images: latency benchmarks scaling from 1 to 4 Optane P1600X SSDs on the carrier.]

(4) Optane P1600X 58GB
With a single drive, sequential reads were very close to the limit of the two PCIe Gen 3 lanes provided to it, and sequential writes hit the limit of the drive itself. Random reads and writes were excellent as well, but that's just Optane flexing its muscle.

Adding a 2nd drive in a stripe doubled sequential reads and writes. Random reads and writes at higher queue depths improved slightly. No doubt the full 4 PCIe lanes are being used in this case.

Adding the 3rd drive didn't do much for sequential reads, as expected given the 4 PCIe lanes on the U.2 interface. Sequential writes saw another decent boost. Random reads and writes saw very little change; the differences are within margin of error.

Adding the 4th drive pretty much maxed out the bandwidth of the x4 interface in sequential reads and writes. Again, random reads and writes saw very little difference. In fact, random Q1T1 barely changed across all 4 tests.
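Back-of-the-envelope, that tracks: PCIe 3.0 runs at 8 GT/s with 128b/130b encoding, roughly 0.98 GB/s per lane, so each drive's x2 link tops out near 1.97 GB/s and the carrier's x4 uplink near 3.94 GB/s before protocol overhead. One P1600X can nearly fill its x2 link, and two striped drives already approach the uplink limit, which is why the 3rd and 4th drives add little to sequential reads.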


(4) Inland (Micro Center brand) 1TB drives using a Phison E12 controller.

I striped all 4.
Sequential reads and writes pretty much maxed out the 4 PCIe lanes.
Random read and write speeds were normal; no anomalies were noticed.
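For reference, the Linux-side equivalent of that 4-drive stripe would be a plain mdadm RAID 0 (a sketch with hypothetical device names; these tests were run with a Windows striped volume):

Code:
# Stripe all four carrier drives into one block device, then format:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
  /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
sudo mkfs.xfs /dev/md0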

Some interesting ideas on what to do with these carriers:
  • Using an M.2-to-U.2 adapter to hang four SSDs off a cheap consumer board without using other slots.
  • Putting four of them on a PCIe x16 card with bifurcation enabled to get 16 drives on a single PCIe slot.
  • Or, like in my case, having eight drives on my Dell Precision T7820 using only the two front NVMe U.2 drive bays, leaving the PCIe slots available for other uses (I had to increase airflow in the HDD zone by 30% in the BIOS).
So far, I think the Viking U20040 is not bad if you're after density rather than performance. Sure, having dedicated PCIe lanes to each SSD would be excellent for performance, but that's not always possible or needed.

Dang, this post ran longer than expected. Hope it helps others who, like me, were looking for more information about this carrier.
 

mrpasc

Well-Known Member
Jan 8, 2022
Cool, that's exactly what I plan to do with the two I've bought: stuff them with all my spare P1600X drives lying around and have some nice scratch space for my T7820. Thanks for posting.
 

Mithril

Active Member
Sep 13, 2019
Awesome info, thanks @Sacrilego! x2 PCIe Gen 3 per drive is unfortunate, but likely not an issue when using drives together in an application where aggregating drives into fewer lanes is desirable (either you have so few PCIe lanes that you're not going to be pushing 100Gb anyway, or so many that you make up for it by having many, many drives).

The latency might be an issue; if you have the time to do benchmarks with the Optane drives in and out of it, that would be interesting.

Bummer about the management only working with a single drive; sounds like they didn't intend for this product to have multiple in the same machine.

Any idea on power/heat of the device itself?
 

Sacrilego

Now with more RGB!
Jun 23, 2016
Awesome info, thanks @Sacrilego! x2 PCIe Gen 3 per drive is unfortunate, but likely not an issue when using drives together in an application where aggregating drives into fewer lanes is desirable (either you have so few PCIe lanes that you're not going to be pushing 100Gb anyway, or so many that you make up for it by having many, many drives).

The latency might be an issue; if you have the time to do benchmarks with the Optane drives in and out of it, that would be interesting.

Bummer about the management only working with a single drive; sounds like they didn't intend for this product to have multiple in the same machine.

Any idea on power/heat of the device itself?
Just did some power measurements: it uses about 7-8 watts alone at the wall.
I'm going to try to build the user tools to see if I can get some temperature information, but it does get a bit hot to the touch.
Actually, not just a bit hot: after an extended benchmarking session with 4 Optane drives, it was uncomfortably hot. They need airflow.
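If the switchtec tools stay uncooperative, the drives' own temperatures (though not the switch's) should be readable on Linux with stock nvme-cli, e.g.:

Code:
# Per-drive composite temperature from the NVMe SMART log
# (hypothetical device name):
sudo nvme smart-log /dev/nvme1 | grep -i temperature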
 

mattventura

Active Member
Nov 9, 2022
You might be able to check if it has PLP by checking the bare PCB for a bunch of capacitors or a small battery.

Do you know how the carrier and drives show up in the PCIe topology? It seems like it probably wouldn't be possible to hot-plug one of these unless it's on the highest bus number, because each downstream port of a switch usually wants its own bus number, and bus numbers need to be sequential.
 

Sacrilego

Now with more RGB!
Jun 23, 2016
You might be able to check if it has PLP by checking the bare PCB for a bunch of capacitors or a small battery.

Do you know how the carrier and drives show up in the PCIe topology? It seems like it probably wouldn't be possible to hot-plug one of these unless it's on the highest bus number, because each downstream port of a switch usually wants its own bus number, and bus numbers need to be sequential.
I'll post some pictures tomorrow morning. I didn't see any bank of capacitors like I usually do on other drives.
I was able to sorta hot-swap them by accident, lol; I had to do a hardware rescan. But I'll also get a screenshot of HWiNFO to check out the topology.
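In the meantime, the Linux-side checks are one-liners (a sketch; the Windows hardware rescan accomplishes the same re-enumeration):

Code:
# Tree view: the carrier should appear as one upstream bridge with
# four downstream ports, one per M.2:
lspci -tv
# Re-enumerate devices after a (cautious) hot swap:
echo 1 | sudo tee /sys/bus/pci/rescan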
 

Sacrilego

Now with more RGB!
Jun 23, 2016
Did a few updates to the post.

I was running into some strange issues with the benchmark numbers; they were inconsistent and out of line with what I was expecting.

Turns out that sometimes one or more of the threads during the tests would run on the CPU that did not have a direct connection to the carrier's PCIe lanes. The second issue was heat: the drives would begin to throttle, causing lower-than-expected numbers.
I had to remove the second CPU and increase airflow in the drive area.
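On Linux, the CPU-locality half of that can likely be handled without pulling a CPU by pinning the benchmark to the node that owns the carrier's lanes, something like this (node 0 and the device name are assumptions; check numactl -H):

Code:
# Pin an fio run to NUMA node 0, the node assumed to own the
# carrier's PCIe root port:
sudo numactl --cpunodebind=0 --membind=0 \
  fio --name=randread --filename=/dev/nvme1n1 --direct=1 \
      --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based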

I'm almost done with my post. I'm going to attempt to build the user CLI tools, since I can't seem to find a compiled version anywhere.
Finished building the tools: no issues building and making the installer for Windows. Running a list command shows the switch, but I can't run anything else.
Code:
C:\Users\Administrator>switchtec list
switchtec0              PFX 24XG3       RevB    1.09 B063       18:00.1
Anything else gives me:
Code:
C:\Users\Administrator>switchtec info switchtec0
switchtec0: A device attached to the system is not functioning.
I'll try in Linux later...

I hope I'm not annoying anyone with the constant editing...
 