I have had a hell of a time getting the MCP-220-82619-0N NVMe kit (RE 81) working as a replacement for the 2 x SATA drives in my SSG-6049P-E1CR60L+ server, so I thought I would write down what I have and have not gotten working (mainly for my future self).
I also have an MCP-220-82617-0N (ID A0, Rev 3) NVMe kit which I have tried but have *not* managed to get working under any configuration. The 82617 part seems to flash all of the blue, red and green LEDs on power on, while the 82619 part only flashes blue and red (does it have green LEDs?). The other ways to tell the backplanes apart are;
- The white sticker on the Lattice CPLD (which I assume does sideband and LED signalling) says “ID A0 Rev 3” for the 82617 (the 82619 says “RE 81”).
- The backplane rev & part number on the bottom near J4 are printed directly on the PCB, with Rev on the top and BPN on the bottom, for the 82617 (the 82619 has a white sticker with BPN on the top and Rev on the bottom).
The SSG-6049P-E1CR60L+ server has 4 x NVMe OCulink ports on the motherboard (all connected to the PE1 port of CPU1). These are normally connected to the optional NVMe cage (MCP-220-94607-0N - SC946S NVMe kits w/ cage, tray, cable, BPN OCuLink v.91,INT,PCIe NVMe SSD, 55CM,34AWG) via OCulink cables. The NVMe cage however has 6 slots. The final 2 x NVMe slots are normally provided by an AOC-SLG3-2E4R-O, which is a PCIe card with retimers. This card is placed in the 8x PCIe slot (driven by PE1 on CPU2) and connected to the NVMe cage via Mini-SAS HD (SFF-8643) to OCulink cables.
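To make that topology easier to check from the OS side, here is a minimal sketch (assuming a Linux host with sysfs; the actual bus numbers will obviously depend on the machine) that walks each NVMe controller back up its sysfs path so you can see which root port - and therefore which CPU's PE1 - a drive actually came up behind;

```python
#!/usr/bin/env python3
# Minimal sketch, assuming a Linux host with sysfs mounted. Walks each NVMe
# controller's sysfs path back towards the root complex so you can see which
# root port (and therefore which CPU's PE1) the drive hangs off.
import os

def pci_chain(dev_link):
    """Return the PCI addresses from the root port down to the device."""
    chain = []
    for part in os.path.realpath(dev_link).split("/"):
        # PCI addresses in sysfs look like 0000:3b:00.0.
        if len(part) == 12 and part.count(":") == 2 and part.count(".") == 1:
            chain.append(part)
    return chain

for ctrl in sorted(os.listdir("/sys/class/nvme")):
    chain = pci_chain(f"/sys/class/nvme/{ctrl}/device")
    print(f"{ctrl}: " + " -> ".join(chain))
```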
My original plan was as follows;
- Replace the 2 x 2.5 inch SATA drives in the back panel with 2 x NVMe using the MCP-220-82619-0N NVMe kit connected to 2 x OCulink NVMe ports on the motherboard.
- Use the other 2 x OCulink NVMe ports on the motherboard with the NVMe cage.
- Use a PCIe card with a PCIe switch to populate the remaining 4 x NVMe ports on the NVMe cage.
This would give me a total of 8 x NVMe drives rather than the standard 6 x NVMe drives.
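As a quick sanity check on whether the plan actually delivered, something like the sketch below (Linux sysfs assumed; the expected count of 8 is just the number from the plan above) lists every NVMe controller the kernel found along with its model and PCI address;

```python
#!/usr/bin/env python3
# Sketch: count the NVMe controllers that actually enumerated and compare
# against the 8 drives the plan above is aiming for. Linux sysfs assumed.
import glob, os

EXPECTED = 8  # 2 x rear-bay kit + 6 x NVMe cage slots

ctrls = sorted(glob.glob("/sys/class/nvme/nvme*"))
for c in ctrls:
    with open(os.path.join(c, "model")) as f:
        model = f.read().strip()
    addr = os.path.basename(os.path.realpath(os.path.join(c, "device")))
    print(f"{os.path.basename(c)}: {model} at {addr}")

print(f"{len(ctrls)} of {EXPECTED} expected NVMe controllers present")
```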
For the PCIe switch card, I ended up using a “DiLiVing 8 Port MiniSAS HD to PCI Express x16 SFF-8639 NVMe SSD Adapter”, part number LRNV9349-8I, which uses SFF-8643 (Mini SAS HD) connectors and the PLX8749 IC (which is a 48-Lane, 18-Port PCI Express Gen 3 (8 GT/s) Switch).
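Before worrying about drives at all, it is worth confirming the switch itself enumerates. A sketch along these lines (Linux sysfs assumed; 0x10b5 is the PLX/Broadcom vendor ID, and 0x8749 is my assumption for the PEX8749 device ID, so check it against lspci -nn) will do;

```python
#!/usr/bin/env python3
# Sketch: confirm the PLX8749 switch on the LRNV9349-8I card enumerated.
# 0x10b5 is the PLX (Broadcom) vendor ID; 0x8749 is my assumption for the
# PEX8749 device ID, so adjust if lspci -nn reports something different.
import glob, os

PLX_VENDOR = "0x10b5"

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    with open(os.path.join(dev, "vendor")) as f:
        vendor = f.read().strip()
    if vendor != PLX_VENDOR:
        continue
    with open(os.path.join(dev, "device")) as f:
        device = f.read().strip()
    print(f"{os.path.basename(dev)}: PLX device {device}")
```

Note that a PCIe switch shows up as several PCI bridges (one upstream port plus one per downstream port), so expect multiple matching entries if the card is detected.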
The first challenge was getting cables that would work between the NVMe cage and the MiniSAS HD card. It turns out that there are many different types of cables which have MiniSAS HD on one end and OCulink on the other. This mess seems to be documented in the SFF-9402 standard at https://members.snia.org/document/dl/27380. One of the primary issues is that the cables are unidirectional - a MiniSAS HD host to OCulink backplane cable cannot be used to connect an OCulink host to a MiniSAS HD backplane. This makes it super easy to end up with the wrong cable type even after checking the seller's description carefully.
Given that I only ended up using 4 of the 8 connectors on the PCIe card, I probably should have gone with one of the two options below as they use OCulink connectors, which would have simplified the cabling issue (and probably been cheaper).
Once I had the NVMe cage<->PCIe card going, it was time to get the MCP-220-82619-0N NVMe kit working.
I was able to get the PCIe card<->MCP-220-82619-0N NVMe kit working. It was super useful to be able to demonstrate that the PCIe card<->NVMe Cage<->U.2 NVMe SSD pathway was working before moving the same set of cables & U.2 NVMe SSDs to the MCP NVMe kit. This is what eventually led me to discover that the NVMe SSD devices were not fully mating with the U.2 connectors on the backplane. I’m still unsure what is causing this but it might be because of slightly different mechanical dimensions between the various “orange” NVMe caddies.
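For the "prove the pathway works" step, it also helps to go one step past enumeration and actually read from each drive, since a marginal connection can show up as I/O errors rather than just a missing device. A rough sketch (read-only, Linux /dev naming assumed, needs root);

```python
#!/usr/bin/env python3
# Rough sketch: read the first 4 KiB from every NVMe namespace so a marginal
# connection shows up as an I/O error rather than just a missing /dev node.
# Read-only; assumes Linux /dev naming and needs root.
import glob, os

for path in sorted(glob.glob("/dev/nvme*n1")):
    try:
        fd = os.open(path, os.O_RDONLY)
        try:
            data = os.read(fd, 4096)
        finally:
            os.close(fd)
        print(f"{path}: OK ({len(data)} bytes read)")
    except OSError as exc:
        print(f"{path}: FAILED ({exc})")
```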
I have not yet been able to get the motherboard NVMe Oculink connectors working with the MCP-220-82619-0N NVMe kit. I have tried;
- Using the first two NVMe ports and the second two NVMe ports. All these ports work correctly with the same cables and U.2 NVMe SSDs in the NVMe cage.
- Using different brands and types of U.2 NVMe SSDs.
- Dropping the PCIe speed back to Gen1/Gen2 rather than Gen3 in the BIOS (see the sketch after this list for checking what speed a link actually trained at).
- Setting the backplane jumpers to positions 1-2 and 2-3.
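As a companion to the Gen1/Gen2 BIOS experiment above, something like this (Linux sysfs assumed) reports what speed and width each NVMe link actually trained at - only useful once a drive enumerates at all, but handy for spotting degraded links;

```python
#!/usr/bin/env python3
# Sketch: print the negotiated vs. maximum PCIe link speed/width for each
# NVMe controller, to see what rate a link actually trained at after
# changing the Gen1/Gen2/Gen3 setting in the BIOS. Linux sysfs assumed.
import os

def read(path):
    with open(path) as f:
        return f.read().strip()

for ctrl in sorted(os.listdir("/sys/class/nvme")):
    dev = os.path.realpath(f"/sys/class/nvme/{ctrl}/device")
    try:
        cur = f"{read(dev + '/current_link_speed')} x{read(dev + '/current_link_width')}"
        top = f"{read(dev + '/max_link_speed')} x{read(dev + '/max_link_width')}"
    except FileNotFoundError:
        continue
    print(f"{ctrl}: {cur} (max {top})")
```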
The last thing I’m going to try is using the PCIe 4.0 Oculink cables from MicroSATA cables (PCIe Gen4 16GT/s Oculink (SFF-8611) to Oculink 4 Lane Cable) which apparently have an inbuilt PCIe redriver.
If that doesn’t work, I’m probably going to just end up using the following configuration which seems to work but has a lot longer cable runs;
- 4 x NVMe Oculink from motherboard to the NVMe cage.
- 2 x NVMe MiniSAS HD from PCIe card to the NVMe cage.
- 2 x NVMe MiniSAS HD from PCIe card to the MCP NVMe slots.
One thing I’ve yet to understand is what the “NVMe connected to CPU1” and “NVMe connected to CPU2” configuration options actually do. Why does it matter which CPU the NVMe devices are connected to? I assume it has something to do with the sideband / I2C / hotplug type operations? Another possibility is something around the PCIe reference clock? Anyone know?
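What I can at least check from the OS is which socket the kernel thinks each device is attached to, via its NUMA node. This is just a sketch (assuming a Linux host where node 0 is CPU1 and node 1 is CPU2) and it only shows locality - it doesn't explain what the BIOS option itself changes;

```python
#!/usr/bin/env python3
# Sketch: print the NUMA node for each NVMe controller. On this dual-socket
# board I'm assuming node 0 == CPU1 and node 1 == CPU2; -1 means the kernel
# doesn't know. This only shows locality, not what the BIOS option changes.
import os

NODE_TO_CPU = {"0": "CPU1", "1": "CPU2"}

for ctrl in sorted(os.listdir("/sys/class/nvme")):
    dev = os.path.realpath(f"/sys/class/nvme/{ctrl}/device")
    with open(dev + "/numa_node") as f:
        node = f.read().strip()
    print(f"{ctrl}: NUMA node {node} ({NODE_TO_CPU.get(node, 'unknown')})")
```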
I did find this page of documentation at https://static.nix.ru/images/Supermicro-MCP-220-82619-0N-4938022247.jpg which does warn about issues with NVMe compatibility...
Supermicro support shared the following information with me regarding the jumpers on the NVMe cage backplane which might also be useful to some people in the future;
And the following pictures of the MCP-220-82619-0N NVMe kit,
Hope this information helps someone in the future!
Tim 'mithro' Ansell