Hacking the Intel H10/H20?


heromode

Active Member
May 25, 2020
So I recently bought an Orico USB-C NVMe case off eBay that actually came with a 32+512GB Intel H10. I paid not much more than a new Orico costs, so I figured what the heck, because my mobo has an "Intel Optane ready" M.2 slot.

I already knew from reading about it that I would not be able to expose both the 3D XPoint chip and the NAND chip as two separate block devices, but I still secretly hoped there would be some hidden setting to enable the 3D XPoint chip by adding an NVMe namespace or something. No such luck.
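
If anyone else wants to poke at the same thing on their drive, here's a minimal sketch of how you could ask the controller whether it even advertises namespace management. Everything here is an assumption: it presumes nvme-cli is installed, the H10 enumerates as /dev/nvme0, and you run it as root.

#!/usr/bin/env python3
# Quick check: does the controller advertise multiple namespaces or the
# Namespace Management admin command at all? Assumes nvme-cli is installed,
# the drive enumerates as /dev/nvme0, and the script is run as root.
import json
import subprocess

DEV = "/dev/nvme0"  # hypothetical device path, adjust for your system

# Ask the controller for its Identify Controller data as JSON.
out = subprocess.run(
    ["nvme", "id-ctrl", DEV, "--output-format=json"],
    capture_output=True, text=True, check=True,
).stdout
ctrl = json.loads(out)

nn = ctrl["nn"]      # number of namespaces the controller supports
oacs = ctrl["oacs"]  # Optional Admin Command Support bitfield
ns_mgmt = bool(oacs & 0x08)  # bit 3 = Namespace Management/Attachment

print(f"{DEV}: {nn} namespace(s) supported, "
      f"namespace management {'available' if ns_mgmt else 'not available'}")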

But the point is there are thousands of H10s for sale at around $10-15 each, and while they are slow, they would still make quite handy ZIL/L2ARC or L2ARC/special devices for home use. Especially four of them on a cheap PLX PCIe 3.0 x16 to 4x PCIe 3.0 M.2 adapter.

There are firmware updates for them that you can flash with the Intel/Solidigm tool.

So the question is, is there no way to hack these to expose both block devices in Linux? What is the secret that Intel RST uses? Would it be a fun project for someone who really knows low-level hardware stuff?

Edit: how do they show up in Windows with Intel RST? If they are exposed as two devices, how about running some lightweight Windows VM with the H10 in PCIe passthrough, and then exposing the block devices via iSCSI targets to Proxmox, lol.
 

nasbdh9

Active Member
Aug 4, 2019
The NAND and Optane parts are independent of each other; they each have their own controller.
When connected to a supported PCH, the M.2 slot's link mode is bifurcated x2+x2.
 

heromode

nasbdh9 said:
The NAND and Optane parts are independent of each other; they each have their own controller.
When connected to a supported PCH, the M.2 slot's link mode is bifurcated x2+x2.
So that must then mean my "Optane ready" motherboard M.2 slot supports bifurcation. The question is how to enable it (edit: without RST).
 

nasbdh9

heromode said:
So that must then mean my "Optane ready" motherboard M.2 slot supports bifurcation. The question is how to enable it (edit: without RST).
One of the M.2 slots (of two or more) hanging off the PCH will support this function; the Optane mode may be set in the BIOS. Please check the user manual.
 

heromode

nasbdh9 said:
One of the M.2 slots (of two or more) hanging off the PCH will support this function; the Optane mode may be set in the BIOS. Please check the user manual.
Right. My motherboard is an Asus Pro WS C246-ACE. It has 2x M.2 and 1x U.2. I have two Intel DC P3700s connected, one to the U.2 header and the other to the 4-lane PCIe 3.0 M.2 slot via an M.2 to U.2 adapter. The remaining M.2 slot is, from memory, a 2-lane PCIe 3.0 slot. That's the 'Optane Ready' one, I think.

I've always just kept Intel RST disabled, as I run Linux, but I'll have to check further. Of course, if enabling RST puts both M.2 slots plus the U.2 slot behind Intel RST, then I can't do it. But at the moment the H10 is not connected to the motherboard, and the system is in use, so I'll have to leave this for another day.

Thanks for the input, @nasbdh9.
 

heromode

Ideal would be the possibility of running 4x H10s on a 4x M.2 adapter card in a PCIe 3.0 x16 slot bifurcated x2x2x2x2x2x2x2x2, resulting in four 32GB 3D XPoint and four NAND block devices. Then use that for ZFS in a home server.
 

nasbdh9

The CPUs that motherboard supports only allow 1x16, 2x8, or 1x8+2x4 bifurcation on the CPU lanes (there may be BIOS options to configure this, or it can be modified with AMIBCP).
The PCH supports down to x1 per PCIe slot, but that depends on the vendor's design.
 

heromode

There's still the question about the H10: I haven't read anywhere that someone has been able to expose two block devices on them in Linux, even with bifurcation. If that were possible, I figure it would be well known. Hence the reason for this thread.

To my knowledge the only way to use the 3D XPoint chip is in Windows with the Intel RST drivers. And even then it's exposed as a single block device, just with a write cache.
 

nasbdh9

Out of curiosity, I purchased an H20 to test.
The motherboard is an MSI MPG Z390M GAMING EDGE AC, and the CPU is a 9600K.
The BIOS version is 7B50v1C, with the H20 inserted in the 2280-length M.2 slot.
The BIOS RST module was replaced using UBU with version 18.31.3.5434 (taken from an MSI Z590 BIOS); not sure if that has an impact, as the docs I read say the H20 requires RST version 18+.

This way, the PCIe bifurcation on the PCH is triggered normally, and two NVMe drives can be seen immediately after booting into Ubuntu.

[Attached screenshot: two separate NVMe devices visible in Ubuntu]

I guess the design of the motherboard, and maybe the BIOS or something, is what limits you :oops:
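
For anyone who wants to verify the same thing on their own setup, a rough Python sketch like the one below lists each NVMe controller with its PCI address, model, and negotiated link width. It only assumes the standard Linux sysfs layout; device names and addresses will differ per system. With bifurcation working, the H20 should appear as two controllers, each expected on an x2 link.

#!/usr/bin/env python3
# List every NVMe controller with its PCI address, model string, and the
# negotiated PCIe link width/speed. Paths assume the standard Linux sysfs
# layout; with bifurcation working the H20 should appear as two controllers,
# each on an x2 link.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").iterdir()):
    pci_dev = (ctrl / "device").resolve()           # e.g. .../0000:03:00.0
    model = (ctrl / "model").read_text().strip()
    width = (pci_dev / "current_link_width").read_text().strip()
    speed = (pci_dev / "current_link_speed").read_text().strip()
    print(f"{ctrl.name}: {model} at {pci_dev.name} (x{width}, {speed})")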
 

heromode

nasbdh9 said:
Out of curiosity, I purchased an H20 to test.
The motherboard is an MSI MPG Z390M GAMING EDGE AC, and the CPU is a 9600K.
The BIOS version is 7B50v1C, with the H20 inserted in the 2280-length M.2 slot.
The BIOS RST module was replaced using UBU with version 18.31.3.5434 (taken from an MSI Z590 BIOS); not sure if that has an impact, as the docs I read say the H20 requires RST version 18+.

This way, the PCIe bifurcation on the PCH is triggered normally, and two NVMe drives can be seen immediately after booting into Ubuntu.

[Attached screenshot: two separate NVMe devices visible in Ubuntu]

I guess the design of the motherboard, and maybe the BIOS or something, is what limits you :oops:
Sorry for the delay. Interesting, but what do you mean by "BIOS RST module replaced using UBU to 18.31.3.5434"?

I have never actually enabled RST in the BIOS because I don't know if you can enable it for a selected M.2/U.2 slot, and as I'm running ZFS on NVMe drives attached to the other slots, I can't enable it there. Moreover, I thought RST needed drivers (which, for example, you have to load manually when installing Windows for it to see the drive).

But at the moment the H10 is in use as a cheap backup drive in a USB-C to NVMe portable enclosure, so maybe I'll test sometime when I get the chance.

Either way, your screenshot is the only one I've seen on the internet that shows these drives working with bifurcation on Linux. That's certainly interesting, so thanks for putting in the time.

Edit: this raises the question of whether it's possible to run 4x H20s on a cheap PCIe x16 to 4x M.2 adapter card (not a PLX switch card, but a cheap card that needs bifurcation) in a PCIe slot bifurcated into eight x2 links.

That would be really handy for ZFS, for example 2x mirrored special devices on the NAND plus a 2x mirrored ZIL on the 3D XPoint chips.
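
Just to sketch the layout I have in mind (purely hypothetical device names: assuming the 3D XPoint halves showed up as nvme0n1/nvme1n1, the NAND halves as nvme2n1/nvme3n1, and an existing pool called tank), it would be roughly:

#!/usr/bin/env python3
# Hypothetical sketch of the ZFS layout described above. The device names and
# the pool name "tank" are assumptions; they will differ on a real system.
import subprocess

POOL = "tank"
OPTANE = ["/dev/nvme0n1", "/dev/nvme1n1"]  # the two 32GB 3D XPoint halves
NAND = ["/dev/nvme2n1", "/dev/nvme3n1"]    # the two NAND halves

def zpool(*args: str) -> None:
    """Run a zpool command, echoing it first."""
    cmd = ["zpool", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Mirrored SLOG (ZIL) on the 3D XPoint halves.
zpool("add", POOL, "log", "mirror", *OPTANE)

# Mirrored special (metadata/small-block) vdev on the NAND halves.
zpool("add", POOL, "special", "mirror", *NAND)

The mirrors are the important part: a lost special vdev takes the whole pool with it, while a lost SLOG only risks the in-flight sync writes.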
 

nasbdh9

heromode said:
Sorry for the delay. Interesting, but what do you mean by "BIOS RST module replaced using UBU to 18.31.3.5434"?

I have never actually enabled RST in the BIOS because I don't know if you can enable it for a selected M.2/U.2 slot, and as I'm running ZFS on NVMe drives attached to the other slots, I can't enable it there. Moreover, I thought RST needed drivers (which, for example, you have to load manually when installing Windows for it to see the drive).

But at the moment the H10 is in use as a cheap backup drive in a USB-C to NVMe portable enclosure, so maybe I'll test sometime when I get the chance.

Either way, your screenshot is the only one I've seen on the internet that shows these drives working with bifurcation on Linux. That's certainly interesting, so thanks for putting in the time.

Edit: this raises the question of whether it's possible to run 4x H20s on a cheap PCIe x16 to 4x M.2 adapter card (not a PLX switch card, but a cheap card that needs bifurcation) in a PCIe slot bifurcated into eight x2 links.

That would be really handy for ZFS, for example 2x mirrored special devices on the NAND plus a 2x mirrored ZIL on the 3D XPoint chips.
After changing the BIOS RST module back to the original 15.8 version, the two NVMe devices are still recognized correctly, so it seems that automatic PCIe bifurcation has nothing to do with the RST module version.