(Help) Dual NVMe M.2 to SFF-8639


nerdalertdk

Fleet Admiral
Mar 9, 2017
228
118
43
::1
Hi everyone.

I’m planning out my next server build, but need some help on the storage side.
I've been looking at the LSI 9400-16i HBA since it will do sata/sas and nvme.

But I need some help with the nvme part.
My plan is to run 4 x NVMe in x2 mode, i.e. 2 x NVMe per port. The controller should support it, but I can't find a cable/adapter to split the SFF-8639 into 2 x "x2 NVMe".



I have found two adapters, but I'm not sure they will work.
Cables that might work?
1632822039049.png



1632821724250.png

I want the NVMe x2 design, but with 8 x SAS/SATA disks.

1632821761894.png



Hope it all makes sense :D
 

NateS

Active Member
Apr 19, 2021
159
91
28
Sacramento, CA, US
The QNAP option should definitely work. The PCIe switch inside means that to the HBA it looks just like a regular x4 connection, and will use a standard x4 U.2 cable. It should work even if the HBA did not support x2 bifurcation, which it looks like yours does.

The Samsung option should likely also work, though that form factor is basically obsolete at this point, since EDSFF ended up winning out. I wouldn't buy into that system at this point if you want any future upgradability. OTOH that may mean you can find great deals on them?

The direct cable attachment option is more of a maybe -- the LSI manual you linked gives the pinouts that such a cable would need to follow, but I didn't find what pinouts the cable is actually using on the product page. But since the cable says it's for dual port SAS, I would bet it's not the pinout you need for dual NVMe.

I'm curious though, what's your use case for needing 4 drives connected at x2 speeds, as opposed to just getting 2 drives of double the capacity each, both connected at x4 speeds? While the overall bandwidth is the same, with the 4 drive setup you're limited to x2 bandwidth to any particular data block, whereas with the two drive setup, all access is x4. And with current M.2 drives (assuming the QNAP unit) topping out at ~8TB, and regular U.2 drives at ~30TB, the two drive option actually allows for higher capacity.
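To make that comparison concrete, here's a back-of-envelope sketch. It assumes PCIe 3.0 at roughly 985 MB/s usable per lane; the 4 TB / 8 TB sizes are illustrative, not specific models:

```python
# Back-of-envelope comparison of the two layouts: 4 drives on x2
# links vs 2 drives on x4 links. Assumes PCIe 3.0 (~985 MB/s usable
# per lane); the 4 TB / 8 TB sizes are illustrative only.

PCIE3_LANE_MBPS = 985

def layout(drives, lanes_per_drive, tb_per_drive):
    return {
        "aggregate_MBps": drives * lanes_per_drive * PCIE3_LANE_MBPS,
        "per_drive_MBps": lanes_per_drive * PCIE3_LANE_MBPS,
        "raw_TB": drives * tb_per_drive,
    }

four_x2 = layout(drives=4, lanes_per_drive=2, tb_per_drive=4)
two_x4 = layout(drives=2, lanes_per_drive=4, tb_per_drive=8)

# Same aggregate bandwidth and raw capacity either way...
assert four_x2["aggregate_MBps"] == two_x4["aggregate_MBps"]
assert four_x2["raw_TB"] == two_x4["raw_TB"]
# ...but access to any single block is capped at that drive's link speed:
print(four_x2["per_drive_MBps"])  # 1970 (x2 link)
print(two_x4["per_drive_MBps"])   # 3940 (x4 link)
```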
 

nerdalertdk

Fleet Admiral
Mar 9, 2017
228
118
43
::1
Well, it's a bit of a compromise.
I only have two PCIe slots in my server (HP DL20 G9).
I need one for 10gbit and one for the HBA.

The reason for the x2 is that I want to run them in RAID 10 or RAID 5; plus, smaller NVMe drives are more affordable, and I only need them to saturate a 10gbit/20gbit connection.

It's purely a storage server for my homelab; the NVMe is for VMs and the SATA/SAS is for Linux ISOs :)
 

NateS

Active Member
Apr 19, 2021
159
91
28
Sacramento, CA, US
The reason for the x2 is that I want to run them in RAID 10 or RAID 5; plus, smaller NVMe drives are more affordable, and I only need them to saturate a 10gbit/20gbit connection.
What I'm trying to get at is that a raid 10 across 4 x2-linked drives is pretty much equivalent to a raid 1 across 2 x4-linked drives -- they'll have the same theoretical bandwidth, though probably the 2-drive raid 1 will be slightly faster because it has less overhead. Keep in mind that if you're using an adapter with a switch chip, that adds both latency and power consumption.

Raid 5 is the one option that this configuration makes possible that wouldn't have an equivalent with 2 drives... but do you really want that? That's trading bandwidth for a bit more usable space, which will make things slightly cheaper per TB, but the performance in some cases will be limited to a single drive's performance, and that single drive will be limited to x2. This is the one configuration that may not be able to saturate a 20gbit connection in all circumstances. But if these tradeoffs are acceptable for you, it's a reasonable option.

And as for cost, I'd encourage you to price it out both ways. You're right that the biggest drives have a price premium, but that tends to be relative to what's available in the form factor. 8tb is the largest M.2 on the market currently, so it does cost extra, but 8tb is right in the middle of the range of what you can get in U.2, so it really doesn't cost much extra (the 30tb U.2s will though).

For example, the cost of 2 of these is actually slightly less than 4 of these plus 2 of these. With different drive choices, the comparison may lean one way or the other, but they appear pretty close in most cases.
 

nerdalertdk

Fleet Admiral
Mar 9, 2017
228
118
43
::1
I think you are focusing too much on speed; my aim is a max of 20gbit of bandwidth, and anything over that is wasted. The RAID 10 is for availability and speed.
My VM storage now is just a 1TB NVMe, and it's only 50% full.

With the x2 mode I'm trying to avoid any PCIe switch chip besides the HBA.
x2 PCIe is about 1500 MBps, which is fine since it's between 10 and 20 gbit.

And 8TB/15TB/30TB NVMe is a bit overkill for a homelab in the basement :)


But thinking about it, 2 x 2TB (x4) in RAID 1 should do it.
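The x2 PCIe figure quoted above can be sanity-checked (assuming PCIe 3.0; real-world NVMe throughput lands somewhat below the theoretical ceiling):

```python
# Quick sanity check of an x2 link's bandwidth. PCIe 3.0 runs at
# 8 GT/s per lane with 128b/130b encoding; practical throughput is
# a bit lower than this theoretical figure.

GT_PER_LANE = 8e9      # transfers per second per lane
ENCODING = 128 / 130   # 128b/130b line-encoding efficiency

lanes = 2
usable_bits_per_s = lanes * GT_PER_LANE * ENCODING
gbit = usable_bits_per_s / 1e9
mb_per_s = usable_bits_per_s / 8 / 1e6

print(f"x{lanes}: {gbit:.1f} Gbit/s ~= {mb_per_s:.0f} MB/s")
# x2: 15.8 Gbit/s ~= 1969 MB/s
```

Either way, an x2 link sits comfortably between 10 and 20 gbit.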
 

NateS

Active Member
Apr 19, 2021
159
91
28
Sacramento, CA, US
EDIT: I was mixing up which of raid 0 and 1 was striping and which was mirroring

I think you are focusing too much on speed; my aim is a max of 20gbit of bandwidth, and anything over that is wasted. The RAID 10 is for availability and speed.
My VM storage now is just a 1TB NVMe, and it's only 50% full.

With the x2 mode I'm trying to avoid any PCIe switch chip besides the HBA.
x2 PCIe is about 1500 MBps, which is fine since it's between 10 and 20 gbit.

And 8TB/15TB/30TB NVMe is a bit overkill for a homelab in the basement :)
Sure, though the argument I'm making scales down to smaller drive sizes and slower drives too.

RAID 10 in this case gets you availability, but it does not get you any extra speed (which you don't need anyway). RAID 10 is just RAID 0 and RAID 1 at the same time: RAID 1 provides the reliability, and RAID 0 normally adds speed, but in this case it doesn't, since it also halves the link bandwidth available to each drive.

If your goal is just storage with >= 20gbit bandwidth and high availability at the lowest cost, I really think the cheapest and best way to achieve that is a raid 1 mirror of 2 drives. Going with a raid 10 of 4 drives of half the size each instead will just add costs of non-standard cabling and adapters, without any performance benefit.
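The equivalence argument above can be sketched numerically. This is a rough model of theoretical sequential reads only, over PCIe 3.0; the drive counts and sizes mirror the thread's example:

```python
# Rough comparison of the RAID options discussed: theoretical
# sequential reads over PCIe 3.0, ignoring controller/filesystem
# overhead. Drive counts and sizes are illustrative.

PCIE3_LANE_MBPS = 985

def raid(level, drives, lanes, tb):
    """Return (usable_TB, theoretical_read_MBps) for a simple model."""
    per_drive = lanes * PCIE3_LANE_MBPS
    if level == "raid1":        # mirror: usable space of one drive
        return tb, drives * per_drive
    elif level == "raid10":     # striped mirrors: half the raw space
        return drives * tb // 2, drives * per_drive
    elif level == "raid5":      # one drive's worth of parity
        return (drives - 1) * tb, (drives - 1) * per_drive
    raise ValueError(level)

print(raid("raid10", 4, 2, 4))  # (8, 7880)  4 x 4TB drives at x2
print(raid("raid1", 2, 4, 8))   # (8, 7880)  2 x 8TB drives at x4
```

Same usable capacity, same theoretical read bandwidth; the 2-drive mirror just gets there with less cabling.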

Bonus question :D

Will I be able to run a SATA M.2 and an NVMe M.2 with one cable with these?


View attachment 19985
That's an interesting question. Since U.2 uses separate pins for NVMe and SATA, and since there are no mode pins that disable one or the other, it might theoretically be possible, and the product page does claim that it allows simultaneous operation. I don't know of any cable that would allow connecting both SATA and NVMe at once, but there are backplanes that do, and if it were inserted in one of those, I think there's actually a pretty good chance it would work.
 

ericloewe

Active Member
Apr 24, 2017
295
129
43
30
I'm really curious to know what something like a Dell Gen 13/14 with U.2 backplane and independent SATA and NVMe upstream connections would do when faced with that thing. Is the management interface going to go crazy? Is SES going to blow up? I'm sure the OS itself wouldn't care, but I'm not feeling adventurous enough to go try this one out.
 

NateS

Active Member
Apr 19, 2021
159
91
28
Sacramento, CA, US
I'm really curious to know what something like a Dell Gen 13/14 with U.2 backplane and independent SATA and NVMe upstream connections would do when faced with that thing. Is the management interface going to go crazy? Is SES going to blow up? I'm sure the OS itself wouldn't care, but I'm not feeling adventurous enough to go try this one out.
I'm curious too. My guess is that the management interface is probably only wired up to one of the two drives, probably the NVMe, and then the other would look to the host like you'd just unplugged the cable from the backplane and plugged it straight into a drive mounted elsewhere. But there are other possibilities as well, depending on how it's all designed. OP, if you end up trying it out, can you report back?