What I would like to do is unconventional: an idea to reduce hardware cost, space, and power consumption.
Typically, an M.2 or U.2 drive uses four PCIe lanes. I would like to operate M.2 or U.2 drives with only one or two lanes per drive instead of four, so I can attach 16 NVMe drives to a PCIe x16 slot or to an HBA with 16 channels. I want to do this because PCIe 4.0 delivers about 2 GB/s per lane and PCIe 5.0 about 4 GB/s per lane, net. That is enough bandwidth for a lot of home server use cases and much faster per channel than SATA 6G (net ~600 MB/s) or SAS 12G (net ~1.2 GB/s).
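To make the numbers concrete, here is a quick back-of-the-envelope comparison using the approximate net per-link figures quoted above (rough values, not exact spec numbers). Note that 16 drives at PCIe 4.0 x1 also add up to roughly what the x16 uplink itself can carry, so no bandwidth is left on the table:

```python
# Rough net bandwidth per drive link in GB/s (approximate values from above)
links = {
    "PCIe 4.0 x1": 2.0,
    "PCIe 5.0 x1": 4.0,
    "SATA 6G":     0.6,
    "SAS 12G":     1.2,
}

drives = 16  # one drive per lane/channel
for name, gbps in links.items():
    print(f"{name}: {gbps:.1f} GB/s per drive, "
          f"{gbps * drives:.0f} GB/s aggregate for {drives} drives")
```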
Most (all?) M.2 storage devices with NVMe support x1 connectivity as a fallback mode. Some also support x2 operation. There are also some single-board computers and some cheap boards with x1 or x2 M.2 connectors. I think it should not be a problem for most drives to run in x1 or x2 mode.
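If you want to check how a specific drive actually negotiates behind a narrow link, Linux exposes the link status in sysfs. A minimal sketch, assuming a Linux host and an NVMe drive at an example PCI address (adjust the address for your system):

```python
#!/usr/bin/env python3
# Print the maximum and currently negotiated PCIe link width/speed of a device
# from sysfs (Linux). The PCI address below is only an example placeholder.
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:01:00.0")  # example address of an NVMe SSD

for attr in ("max_link_width", "current_link_width",
             "max_link_speed", "current_link_speed"):
    print(f"{attr}: {(dev / attr).read_text().strip()}")
```

A drive that came up in fallback mode would report, for example, max_link_width 4 but current_link_width 1; `lspci -vv` shows the same information under LnkCap/LnkSta.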
At the moment, I see two possible solutions to reach this goal: CPU/chipset-based bifurcation, or an HBA.
The first idea is bifurcation:
Bifurcation would be a nice way to reach this goal because no additional active hardware (HBA chipset) is needed. But I see these problems:
1. A mainboard is needed that can bifurcate a PCIe x16 slot into sixteen x1 links (x1x1x1x1x1x1x1x1x1x1x1x1x1x1x1x1) or eight x2 links (x2x2x2x2x2x2x2x2).
2. Additional adapters (risers, breakout cables, a wiring harness) are needed to route every single lane or lane pair to its own M.2 or U.2 connector.
I think modern CPUs or chipsets can do this, but have you ever seen a mainboard UEFI that lets you configure it? I think the first problem can only be solved by the mainboard vendor or by someone who can modify the UEFI/BIOS.
The second problem can be solved with available hardware. What you need is:
1x https://de.aliexpress.com/item/1005005237599668.html
4x https://de.aliexpress.com/item/1005004274364890.html
16x https://de.aliexpress.com/item/1005003610538234.html
16x https://de.aliexpress.com/item/4001030510953.html
Please note that this is not the USB protocol; it is PCIe signaling over physical USB connectors.
I know that this parts list is a bit wild, but it is only meant to demonstrate the concept. Maybe some of you have a better idea to solve the connection issue.
I also thought about using a card like this one with a self-made, customized wiring harness, but this will not work because every device needs its own RefClk signal:
DS320-SLIMSAS-EVM Evaluation board | TI.com
The best way to use such a card would be with a backplane that supports 16 U.2 devices with x1 connectivity each.
The second idea is HBA-based:
Use an industry-standard HBA. I see the following problems here:
3. There are a lot of HBAs that support up to 8 or 16 SATA/SAS drives because they have 8 or 16 channels. Officially, it is only possible to attach two or four PCIe-based drives (M.2 or U.2), because each one usually uses four lanes/channels. But what if you connect 16 NVMe drives to such an HBA with only one lane/channel per drive? Will it work to run 16 NVMe drives? Maybe the HBA chipset could do the job, but I guess the firmware is not ready for it.
4. Same problem as point 2: a wiring harness is needed to connect 16 drives to a 16-channel HBA. In this case, you would probably need one with a miniSAS connector on one side and U.2 on the other.
5. Every PCIe device needs its own RefClk connection. I am not sure whether an HBA has 8 or 16 individual reference clock outputs available.
Does anyone have experience with this topic?
