GIGABYTE HPE R281-N40 CL2200


da.bernde

New Member
Apr 20, 2023
That probably won't work, because the backplane is also different in the newer models that support Tri-Mode. It's the same story with the HP DL380 Gen9 and even Gen10. There is an NVMe enablement kit for U.2, which includes not only the controller and SAS cables (NVMe breakout), but also a new HDD box with different electronics that replaces or expands the boxes in the front.

If I'm correct, the newer cables are only for U.3 (the connector or standard for the cables is called OCuLink), because you can drop a U.3 drive into a U.2 system, but not the other way round (Chris's Wiki :: blog/tech/ServerNVMeU2U3AndOthers2022).

The only way I see to enable front NVMe is to get the mezzanine card for the backplane. From there you can go different routes, as kellenw, for example, describes.


And in my opinion this is entirely HP's fault. They strip these boxes down to the bare minimum to charge extra for options. As far as Gigabyte is concerned, they sell the R281-N40 with all the equipment necessary to get started. I have no problem with that, as long as these things can actually be bought. But unfortunately the server is also on HP's EoL list, so we probably won't see any improvement here... :-(
 
Last edited:

Cruzader

Well-Known Member
Jan 1, 2021
Is it the cable set or the CNV0124 that is the main problem to get hold of? I'm probably pulling a stack of CNV0124s out of servers soon.

Won a 20-unit lot of servers that, by spec, should have "OCP mezzanine slot (Gen3 x16) - Occupied by CNVO124, 4 x NVMe HBA".
Though they did of course manage to send me just one, so back to waiting for more of them as they reship, I assume. The one I did get does have this nice-looking card sitting in it.
[photo of the card attached]
 
Last edited:

kellenw

New Member
Jan 15, 2022
Hi Cruzader,
This does appear to be the CNVO124 OCP card. Can you by chance snap some pictures and describe how the cables coming from it are connecting to the mezzanine card that is attached to the backplane?
 

Cruzader

Well-Known Member
Jan 1, 2021
I have them in these Open Rack nodes:
[photo of the node attached]

Taking off the hot-swap cage above the card, the cables between them look like this:
[photos of the cabling attached]

I can unscrew the card to get pics of both ends of the cable and the labels on them after work tomorrow.
 
Last edited:

Ezy

New Member
Mar 4, 2023
If you want to run this server with 2.5" NVMe drives you need:
- an NVMe HBA card (e.g. CNVO124)
- an additional backplane NVMe bypass card (CEPM080)
- a plug / pull retention module to secure the bypass card onto the backplane
- Slim-SAS 4i cables (orig. "25cfm-850820-a4r / rsl38-0570")
- an extra power cable for the bypass card

You probably also want to switch to the Gigabyte firmware, as the system fans start to run like crazy as soon as you add NVMe drives.
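
If you want to keep an eye on the fan speeds while you experiment, here is a rough sketch of mine (assuming a Linux host with ipmitool installed and a BMC that answers standard IPMI; sensor names will differ between the HPE and Gigabyte firmware):

Code:
#!/usr/bin/env python3
"""Print the current fan sensor readings via ipmitool.

Sketch only: assumes ipmitool is installed and the local BMC responds
to standard IPMI requests (use -H/-U/-P for a remote BMC instead).
"""
import subprocess

def read_fans():
    # 'ipmitool sdr type Fan' lists every fan sensor with its current reading.
    out = subprocess.run(
        ["ipmitool", "sdr", "type", "Fan"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        # Typical line: "FAN1 | 41h | ok | 29.1 | 5880 RPM"
        fields = [f.strip() for f in line.split("|")]
        if len(fields) == 5:
            name, _sensor_id, status, _entity, reading = fields
            print(f"{name:<12} {status:<6} {reading}")

if __name__ == "__main__":
    read_fans()

Running it before and after adding the NVMe drives makes the difference obvious.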

Therefore, I would go with the 9460-16i controller and some SAS3 SSDs. No extras needed.

And if it has to be NVMe, then just take another system with more and faster PCIe lanes ... and the proper backplane, of course.
 

da.bernde

New Member
Apr 20, 2023
kellenw said: "Can you by chance snap some pictures and describe how the cables coming from it are connecting to the mezzanine card that is attached to the backplane?"
This is literally pretty simple. If you google "hp cl2200 G10 cabling guide", you will find a document specifying everything about which cable goes where: https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&docId=a00042396en_us (look for "system cabling, SFF, only NVMe" at the bottom).

I can also take pictures tomorrow, if that helps.

I just saw that I made a mistake in my previous post.
You don't need 8 cables; you only need 4, all of which connect directly to the OCP card (the 8 cables were for the SAS expander setup: 6 from the SAS expander to the backplane and 2 from the expander to the HBA).
There is, however, a slight error in the HP document. If you connect it following the guide (U2A to U2H and U2B to U2J), the first 4 NVMe drives you get in the rightmost HDD box are drives no. 17 to 20 instead of 21 to 24.
So you have to connect U2A (OCP) to U2A (backplane), U2B to U2B, and so forth, if you want it to look like it does on the product page.

I did that first and was wondering why the box did not see the drives in the drive bays with the orange caddies.
If you look at the product page from Gigabyte, the NVMe drives (orange) are the rightmost drives.
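
To check from the OS which NVMe drives actually show up, here is a minimal sketch (assuming a Linux host; which physical bay a controller corresponds to still depends on the cabling described above):

Code:
#!/usr/bin/env python3
"""List the NVMe controllers the kernel sees, with model and PCIe address.

Sketch only: assumes a Linux host with sysfs mounted at /sys.
"""
import os

SYS_NVME = "/sys/class/nvme"

def list_nvme_controllers():
    if not os.path.isdir(SYS_NVME):
        print("no NVMe controllers found")
        return
    for ctrl in sorted(os.listdir(SYS_NVME)):
        # The 'device' symlink resolves to the PCI function, e.g. 0000:5e:00.0
        pci_addr = os.path.basename(
            os.path.realpath(os.path.join(SYS_NVME, ctrl, "device")))
        try:
            with open(os.path.join(SYS_NVME, ctrl, "model")) as f:
                model = f.read().strip()
        except OSError:
            model = "?"
        print(f"{ctrl}: {model} @ {pci_addr}")

if __name__ == "__main__":
    list_nvme_controllers()

If a populated bay does not show up here, check the cable mapping above first.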

Another thing to mention: you have to remember which of the 3- respectively 4-pin headers (if memory serves) the 2 fan cables were plugged into. If you mix them up, you get BMC errors (fan not working), because once the mezzanine card is installed you can no longer connect them to the backplane (the cables are too short), so you have to connect them to the headers on the mezzanine card instead.

Another fun thing: you have 2 OCP card slots in this machine. So technically you can use 2 OCP cards with one mezzanine card to get 8 NVMe drives in one box, or, with 2 mezzanine cards, 4 drives in each HDD box. The mezzanine card also has 8 connectors, so with a decent HBA you could probably run up to 8 drives off each mezzanine card (I did not test the latter, but hopefully I will get another set soon).
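
And if you do hang 8 drives off one mezzanine card, it is worth checking what PCIe link each drive actually negotiated. Again just a sketch, assuming Linux and the standard PCI sysfs attributes:

Code:
#!/usr/bin/env python3
"""Report the negotiated PCIe link speed/width for each NVMe controller.

Sketch only: reads the standard current_link_* / max_link_* sysfs
attributes of the PCI device behind each /sys/class/nvme entry.
"""
import os

SYS_NVME = "/sys/class/nvme"

def pci_attr(pci_dev, attr):
    try:
        with open(os.path.join(pci_dev, attr)) as f:
            return f.read().strip()
    except OSError:
        return "?"

if __name__ == "__main__":
    ctrls = sorted(os.listdir(SYS_NVME)) if os.path.isdir(SYS_NVME) else []
    for ctrl in ctrls:
        pci_dev = os.path.realpath(os.path.join(SYS_NVME, ctrl, "device"))
        cur_width = pci_attr(pci_dev, "current_link_width")
        cur_speed = pci_attr(pci_dev, "current_link_speed")
        max_width = pci_attr(pci_dev, "max_link_width")
        max_speed = pci_attr(pci_dev, "max_link_speed")
        print(f"{ctrl}: x{cur_width} @ {cur_speed} "
              f"(drive max: x{max_width} @ {max_speed})")

If a drive comes up at x2 or x1 instead of x4, a cable is probably in the wrong port or the lanes are being shared somewhere.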
 
Last edited:

da.bernde

New Member
Apr 20, 2023
By the way, at the moment there are some of the Gigabyte NVMe items needed for our boxes on US eBay:

Gigabyte R272 Z32 4-Port Raid Controller Card Low Profile P/N: CNV3024 Tested | eBay (4-port NVMe card, PCIe)
Gigabyte CNVP143 2.0 Mezzaninie HBA 4x NVMe PCIe Gen4 x16 Tested Working | eBay (4-port NVMe OCP card, probably the successor to the CNVO124; mine looks nearly the same, and it is also labeled U2a, U2b, U2c, etc.)
 

Jonasz

New Member
Sep 3, 2023
I had the same unfortunate issue. I bought 4 of the CL2200 units only to find them lacking the NVMe enablement kit. Very frustrating and super hard to find. I found the backplane mezzanine card (CEPM080) on eBay here: Gigabyte CEPM080 8-Port Mezzanine Card Tested Working | eBay. I bought 4 of the 6 available last week, so there are still 2 up for grabs. In theory, the OCP card (CNV0124) is just a 4x4 PCIe-to-NVMe breakout board (just in OCP format). My plan is to try one of these or one of these to feed the CEPM080. Fingers crossed, haha. Hope this might help you as well, Jonasz. Good luck! :)

Thanks! Did they come with the retention module?

Thanks
 

Ezy

New Member
Mar 4, 2023
I guess all Molex connectors are standardized.
This one is quite common on server backplanes (and smaller than those on consumer motherboards).
But just take a look around the corner / fan. The first black 2x2 connector on your server board is what you are looking for.
 

Ezy

New Member
Mar 4, 2023
Please excuse my sloppiness.
What I was trying to say is that Molex builds all its connectors according to certain standards ... and that I am too lazy to look up which one we have here.
 

kellenw

New Member
Jan 15, 2022
Thanks for clarifying, RolloZ170 and Ezy. My gut feeling is that it's likely a Micro-Fit 2x2.

EDIT TO ADD: I measured the pin spacing, and it does appear to be 3 mm, so that should mean it's a Micro-Fit 2x2. I ordered a cable to try out, so I'll update in a few days with the results. :)
 
Last edited:

a.out

New Member
May 3, 2016
Even though the connector is standard, different server vendors assign pin polarity differently. With a straight cable that has matching connectors at both ends it shouldn't be a problem, but with GPU power cables you must check the polarity. I noticed this when installing a P100 in my machine using a Dell cable, where the pinout was completely different. I had to reorder the pins in the connector housing on the motherboard side to get it right. (Just thought it might save someone some hassle.)