Intel NVMe/SAS3 2U Disk Enclosure w/ Oculink interface + SuperMicro AOC-SLG3-4E2P


larryexchange

Active Member
Dec 27, 2016
86
131
33
48
In my previous post, I installed the ICY DOCK NVMe mobile rack (MB994SP-4SB-1) in my Dell T7810 workstation, so that workstation now supports 4 NVMe U.2 drives.

I have another workstation, an HP Z840, which has two 5.25" drive bays. I thought about installing two MB994SP-4SB-1 mobile racks, but I found a better option: the Intel NVMe/SAS3 2U disk enclosure (A2U8X25S3PHS).

IMG_1777 (Custom).jpeg
IMG_1779 (Custom).jpeg
I like the new drive adapter, which looks more modern than the previous version and, more importantly, is screw-free.
IMG_1781 (Custom).jpeg

Here are the two generations of the Intel disk enclosure side by side.
IMG_1782 (Custom).jpeg
The two types of drive adapter are actually compatible with each other.
IMG_1783 (Custom).jpeg

This is the back view. The new enclosure can support up to 8 NVMe U.2 drives or 8 SAS3/SAS2/SATA3 drives, and you can mix and match any drive types. It's awesome! To support 8 PCIe ports, the A2U8X25S3PHS uses OCuLink connectors instead of mini-SAS HD.
IMG_1780 (Custom).jpeg
The OCuLink cable looks like this:
IMG_1834 (Custom).jpeg

I tried to order the following 4x OCuLink cable, but it's out of stock in my local market right now, so I'm using the cable above for the time being.

To support 8 NVMe U.2 drives, I ordered two SuperMicro AOC-SLG3-4E2P cards.
IMG_1817 (Custom).JPG
IMG_1818 (Custom).JPG
IMG_1820 (Custom).JPG
IMG_1821 (Custom).jpg
IMG_1822 (Custom).JPG
IMG_1823 (Custom).JPG

The AOC-SLG3-4E2P has only a PCIe 3.0 x8 host interface, but it supports 4 PCIe 3.0 x4 NVMe drives. That has both pros and cons. For me, ~8 GB/s of throughput per HBA is good enough, and more importantly, I didn't find any other option (with a PLX switch and OCuLink ports).
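To put rough numbers on those pros and cons, here is a quick back-of-the-envelope sketch (assuming ~985 MB/s usable per PCIe 3.0 lane after 128b/130b encoding overhead):

```python
# Back-of-the-envelope bandwidth math for a 4-port x8 PLX card like the AOC-SLG3-4E2P.
MBPS_PER_GEN3_LANE = 985  # approx usable MB/s per PCIe 3.0 lane

host_lanes = 8            # the card's x8 host interface
drives = 4                # four NVMe drives behind the PLX switch
lanes_per_drive = 4       # each U.2 drive is x4

host_bw = host_lanes * MBPS_PER_GEN3_LANE                  # uplink bandwidth
drive_bw = drives * lanes_per_drive * MBPS_PER_GEN3_LANE   # aggregate drive-side bandwidth

print(f"uplink: {host_bw} MB/s, drive side: {drive_bw} MB/s, "
      f"oversubscription: {drive_bw / host_bw:.0f}:1")
```

So the card is 2:1 oversubscribed: any two drives can run flat out, but all four at once would share the ~7.9 GB/s uplink.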

Although the Intel 2U disk enclosure doesn't have regular screw holes like a normal 5.25" optical drive, that's not a problem for the HP Z840. The HP workstation uses a different mechanism to secure the enclosure (or other 5.25" devices).
IMG_1824 (Custom).jpeg

Plug the OCuLink cables into the backplane of the disk enclosure.
IMG_1827 (Custom).jpeg
Then connect the OCuLink cables to the AOC-SLG3-4E2P cards.
IMG_1831 (Custom).jpeg
Finally close all the covers and power on the machine.
IMG_1829 (Custom).jpeg
IMG_1833 (Custom).jpeg
After Windows Server 2016 starts up, I can see the two U.2 drives in Device Manager.
4E2P-Devmgr.png
 

pc-tecky

Active Member
May 1, 2013
202
26
28
:D:D:D Is it Christmas in July? :cool: I can't like that enough.:cool:

Got links for the hardware used? What was the price tag on the hardware parts?
 
  • Like
Reactions: Patriot

Jason Hirsch

Member
Feb 12, 2018
36
6
8
47
So a very stupid question, I know, but... can you RAID-0 those NVMe drives? I ask only because my one (and only) application needs to write as fast as possible, but then repeatedly read the same data, oh, 50 or 60 times in a row. Can you say "software not optimized"?
 
  • Like
Reactions: Tha_14 and cactus

Scott Laird

Active Member
Aug 30, 2014
319
153
43
Out of curiosity, how does that A2U8X25S3PHS mount into the case? I assume you need to fabricate something for the top/bottom screws to screw into, rather than mounting to the sides like most 5.25" devices?
 

Jason Hirsch

Member
Feb 12, 2018
36
6
8
47
I purchased a couple of nodes from ThinkMicro (great support, btw); the SM X11 board came with 2x OCuLink ports onboard marked NVMe. The backplane for the drives has 4 NVMe ports on it, as well as 2 ports that go to the SAS card.

Sadly, I didn't know all of this before I got the hardware, as I might have considered putting some NVMe drives in the first couple of bays.

Is it just me, or is the literature kinda weak on how this hardware is set up? It took me forever to find even the name of the port, much less how SM was using it. I hadn't seen that particular connector before.
 

Scott Laird

Active Member
Aug 30, 2014
319
153
43
Also, interestingly, Intel's AXXP3SWX08080 appears to be an 8-port equivalent of the Supermicro board above. That is, you get 8 OCuLink NVMe ports connected through a PLX chip into an x8 slot. So there's a fair amount of oversubscription, but I don't think it'd really matter for my use. It's the only 8-port NVMe PLX board that I've seen so far, but I'm sure others must exist somewhere.

Interestingly, it's cheaper than the AOC-SLG3-4E2P on eBay. I ordered an A2U8X25S3PHS, an AXXP3SWX08080, and a set of cables; we'll see how long it takes to install once it all arrives.
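For a rough sense of how that oversubscription plays out on the 8-port card, here is a sketch (assuming ~985 MB/s usable per PCIe 3.0 lane):

```python
# Rough per-drive uplink share for an 8-port OCuLink card with an x8 host link,
# such as the AXXP3SWX08080 described above.
MBPS_PER_GEN3_LANE = 985            # approx usable MB/s per PCIe 3.0 lane

uplink = 8 * MBPS_PER_GEN3_LANE     # ~7880 MB/s shared by all ports
drive_cap = 4 * MBPS_PER_GEN3_LANE  # each x4 drive tops out around 3940 MB/s

for active in (1, 2, 4, 8):
    share = uplink / active         # fair share of the uplink per busy drive
    print(f"{active} busy drive(s) -> up to {min(share, drive_cap):.0f} MB/s each")
```

In other words: one or two busy drives still get full x4 speed; only with four or more drives hammering the link at once does the x8 uplink become the limit.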
 

azev

Well-Known Member
Jan 18, 2013
770
251
63
Also, interestingly, Intel's AXXP3SWX08080 appears to be an 8-port equivalent of the Supermicro board above. That is, you get 8 OCuLink NVMe ports connected through a PLX chip into an x8 slot. So there's a fair amount of oversubscription, but I don't think it'd really matter for my use. It's the only 8-port NVMe PLX board that I've seen so far, but I'm sure others must exist somewhere.

Interestingly, it's cheaper than the AOC-SLG3-4E2P on eBay. I ordered an A2U8X25S3PHS, an AXXP3SWX08080, and a set of cables; we'll see how long it takes to install once it all arrives.
Scott, looking forward to hearing back from you on your experience setting this up. My only concern with the Intel card is that they sometimes only work on Intel servers.
 

Scott Laird

Active Member
Aug 30, 2014
319
153
43
True. I've seen HP, Intel, and Dell all play that game before. I figured this was a simple enough card (really just a PLX chip and some plumbing) that it was worth taking the chance. It's not like there's any BIOS integration or anything else weird for them to do.
 

Scott Laird

Active Member
Aug 30, 2014
319
153
43
One more question about the enclosure: what is it expecting for power? Is that just a 4-pin ATX 12v ("P4") power connector, or is it something weird?
 

Rand__

Well-Known Member
Mar 6, 2014
6,642
1,777
113
That is, you get 8 OCuLink NVMe ports connected through a PLX chip into an x8 slot. So there's a fair amount of oversubscription, but I don't think it'd really matter for my use. It's the only 8-port NVMe PLX board that I've seen so far, but I'm sure others must exist somewhere.
Ouch - 8 to 8 is maybe a tad too much. Unless you don't use 8 drives o/c, or don't use them all at the same time - but why get this then? OK, it's cheaper than the 4-port SM card, but do you really need one with a PLX chip?

If you got an X11 with NVMe ports, then you likely have a bifurcation-capable x8/x16 slot that you can use...
 

Scott Laird

Active Member
Aug 30, 2014
319
153
43
In this case, it's for desktop use, so oversubscription isn't *that* big of a problem. I'm more interested in capacity and expansion potential than total throughput.

I'd be happier with an x16 slot into 8 bays, but 8/8 should be good enough.
 

Scott Laird

Active Member
Aug 30, 2014
319
153
43
Huh? It should be able to drive any 2 devices at full speed (x4 PCIe each). How often does your desktop actually need to drive all of the drives at full speed at the same time? I'd expect it to top out at 7-ish GB/s across all drives, but short of that the drives themselves will be the bottleneck.

It'd make a lousy big database server, but for my use patterns I don't really expect to see a difference between x8 and x16. As far as I'm aware, no one makes an 8-port x16 card, and I don't have enough free slots for two 4-port x16 cards, so that's not really an option for me.
 

Rand__

Well-Known Member
Mar 6, 2014
6,642
1,777
113
You are right, the PCIe 3.0 per-lane rate is 985 MB/s - I thought it was 500 MB/s (that's PCIe 2.0). And true, a desktop will not use these concurrently unless you RAID them. In the end, if you think it's sufficient then o/c it is :)
Looking forward to hearing how it's been going :)
 

zack$

Well-Known Member
Aug 16, 2018
735
376
63
Looking forward to hearing the outcome on this, and whether the Intel parts are locked to Intel motherboards only.

Also, interestingly, Intel's AXXP3SWX08080 appears to be an 8-port equivalent of the Supermicro board above. That is, you get 8 OCuLink NVMe ports connected through a PLX chip into an x8 slot. So there's a fair amount of oversubscription, but I don't think it'd really matter for my use. It's the only 8-port NVMe PLX board that I've seen so far, but I'm sure others must exist somewhere.

Interestingly, it's cheaper than the AOC-SLG3-4E2P on eBay. I ordered an A2U8X25S3PHS, an AXXP3SWX08080, and a set of cables; we'll see how long it takes to install once it all arrives.
 

Scott Laird

Active Member
Aug 30, 2014
319
153
43
Quick update: I've been travelling and haven't had time to properly install the A2U8X25S3PHS and AXXP3SWX08080, but I was able to throw them into my system for a quick test and run it for a couple of minutes with the lid off. Everything seemed to work as expected: Windows 10 saw the drive that I had plugged into the Intel drive bay via the Intel NVMe switch card. I didn't do any performance tests or try multiple SSDs at once. It was powered via a 'P4' motherboard power connector.

I'm using an Intel A2U8PSWCXCXK1 cable set. It's not obvious, but that's a full set of all 8 cables, labeled to match the ports on the two devices that I'm using. There are 3 variants of the cable set (IIRC K1, K2, and K3), intended for different mounting locations in Intel's 2U servers. I assume they're just different lengths.

I'll be moving my desktop into a new case this weekend, and I'm printing up a set of brackets for mounting the Intel drive bay. I can share the brackets, but they'll probably only be useful to anyone with a Thermaltake Core W100 or W200 case. The Intel A2U8X25S3PHS is a pretty good fit for 2x 5.25" bays: it's *precisely* the right height (measured at 84.6mm) and only a few mm narrower than the opening (144.1mm vs 148mm).
 
  • Like
Reactions: zack$

Scott Laird

Active Member
Aug 30, 2014
319
153
43
Update: the AXXP3SWX08080 (card), A2U8PSWCXCXK1 (cables), and A2U8X25S3PHS (drive bay) are installed. I have one Intel P4510 (4TB) and one Samsung PM961 (256GB, M.2) in a cheap M.2 adapter. They both show up just fine in Windows 10, with no special work needed. The P4510 benchmarks in CrystalDiskMark 6.0.1 at 2,923 MB/s reading and 2,889 MB/s writing. The Samsung gets 3,263 MB/s reading and 743 MB/s writing. Running both tests concurrently gives me 2,760+3,196 reading and 2,911+663 writing.
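A quick sanity check on those concurrent numbers - the combined throughput stays under the card's x8 uplink, so the PLX uplink wasn't the bottleneck with just two drives:

```python
# Do the combined CrystalDiskMark results fit within the x8 PCIe 3.0 uplink?
uplink_limit = 8 * 985          # approx usable MB/s for a PCIe 3.0 x8 link

concurrent_read = 2760 + 3196   # P4510 + PM961, MB/s, run at the same time
concurrent_write = 2911 + 663

print(f"read: {concurrent_read} MB/s, write: {concurrent_write} MB/s, "
      f"uplink: ~{uplink_limit} MB/s")
```

Roughly 6.0 GB/s of concurrent reads against a ~7.9 GB/s uplink - the drives themselves, not the switch, were the limit here.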

The drive activity LEDs blink on both drives when they're being accessed, but when the drives are idle the Intel LED stays on and the Samsung LED stays off.

Windows 10 doesn't recognize hot-plugged drives. That's not surprising. I suspect that Linux would, and Windows server might.

I didn't test SAS/SATA drives. They should be easier to get right than NVMe.
 

zack$

Well-Known Member
Aug 16, 2018
735
376
63
This is really cool. One PCIe 3.0 x8 slot and 8x NVMe :D

I wonder how performance is with all 8 NVMe drives loaded.

BTW, what motherboard are you using?
 

Scott Laird

Active Member
Aug 30, 2014
319
153
43
I'm using a Gigabyte X399 Aorus Xtreme with a Threadripper 2950X. It's been pleasantly trouble-free so far.