Sanity check - adding enterprise SSD to consumer PC


gggr

New Member
Jul 18, 2025
16
3
3
Looking at options for adding large-capacity SSD(s) to a consumer PC, and hoping to check my understanding and flag any possible issues.

My understanding of the current state of things:
As far as I can tell, what is generally available to consumers on the second-hand/refurbished market for SSDs of 8TB and up is either SAS or PCIe NVMe in U.2 or U.3 form factors.
(I think Nimbus is offering high capacity SATA SSDs but I have not seen any for sale).

To add SAS to a consumer mobo without a backplane the best option is probably a SAS HBA - typically an LSI/Broadcom (or a similar rebadged / counterfeit card).
Among these you generally see lower power consumption / lower heat generated as the model number increases.
E.g. order of preference: 92xx < 9300 < 9305 < 9400 < 9500 etc (acknowledging 9300 16i may be worse than a 92xx)

One thing I haven't seen discussed too much is the availability of cables for the connector on each card and the resulting limits - assuming no backplane is available, so going direct from HBA to drive. Using 8i cards as an example.
9500-8i
Capable of 1024 SAS devices
Connector on card: 1 off x8 SFF-8654
Searching for SFF-8654 to SFF-8482 cables: A lot of 4xSAS w/ SATA power and some 8xSAS w/ molex power.
So realistically 4 off drives?

9400-8i
Capable of 1024 SAS devices
Connector on card: 2 off x4 SFF-8643
Searching for SFF-8643 to SFF-8482 cables: A lot of 4xSAS w/ SATA power
So realistically 8 off drives.

When it comes to NVMe there are even fewer cable options, and looking at some other threads these HBAs may not be the best way to add NVMe drives anyway. So I will ignore this for now and focus on SAS for my own use.

In my particular case I think I am going to look at adding a ~16TB SAS SSD to my system.
I will note here that I am located in Australia so may not have the same products available, but it seems like most ebay shops will ship internationally.
(eBay example from Server Part Deals - White Label OEM 15.36TB SAS - AU $1200)
Side note: This comes to about $80/TB which does not appear to be maintained for 30TB units - go figure.

My system is set up as below
Motherboard - ASRock Model B365M Phantom Gaming 4
From the specs
2 x PCI Express 3.0 x16 Slots (PCIE1/PCIE3: single at x16 (PCIE1); dual at x16 (PCIE1) / x4 (PCIE3))*
1 x PCI Express 3.0 x1 Slot (Flexible PCIe)
6 x SATA3 6.0 Gb/s Connectors
1 x Ultra M.2 Socket (M2_1), supports M Key type 2242/2260/2280 M.2 PCI Express module up to Gen3 x4 (32 Gb/s)**
1 x Ultra M.2 Socket (M2_2), supports M Key type 2242/2260/2280/22110 M.2 SATA3 6.0 Gb/s module and M.2 PCI Express module up to Gen3 x4 (32 Gb/s)**
*If M2_2 is occupied by a SATA-type M.2 device, SATA3_0 will be disabled.

Where I have configured
PCIE1 - occupied by graphics card
PCIE3 - available
M2_1 - occupied by Intel 660p Series SSD M.2 PCIe 2TB (SSDPEKNW020T8X1)
M2_2 - occupied by Lexar NM790 M.2 NVMe PCIe Gen 4×4 4TB (LNM790X004T-RN9NG)

As far as I can tell there is no sharing of PCIe lanes between the M.2 sockets and the PCIe slots, and so the second PCIe slot should have x16 available? The question of bifurcation is the most confusing part to me, but I think the mobo is from a time before bifurcation was a thing and slots could be split up in that way. At the very least there is no mention of bifurcation in the product specs.

So to add the Server Part Deals White Label 15.36TB SAS I could try the following:
LSI HBA 9500-8i - ~AU $150
Cable SFF-8654 to 4xSFF-8482 - AU $30

OR

LSI HBA 9400-8i - ~AU $130
SFF-8643 to 4xSFF-8482 - $20

Are there any issues with either of these options?
My thoughts are that the 9500-8i option is not that much more expensive than the 9400-8i, so it may be worthwhile for the heat/power savings; however, it will most likely be limited to 4 off drives in total. The 9400-8i will offer more flexibility for the future by having 2 off connectors.
 

nexox

Well-Known Member
May 3, 2023
1,825
881
113
Searching for SFF-8654 to SFF-8482 cables: A lot of 4xSAS w/ SATA power and some 8xSAS w/ molex power.
So realistically 4 off drives?
I'm not sure what's unrealistic about the 8 connector cable exactly, do you not have any molex power connectors? I'm also pretty sure you can't fit a SlimSAS 4i connector into an 8i socket, but I haven't tried it.

Ultimately if you only need a single SSD then connecting NVMe will be cheaper, though the drive itself may be more expensive.
 
  • Like
Reactions: gggr

gggr

New Member
Jul 18, 2025
16
3
3
Thanks for responding.

I'm not sure what's unrealistic about the 8 connector cable exactly, do you not have any molex power connectors? I'm also pretty sure you can't fit a SlimSAS 4i connector into an 8i socket, but I haven't tried it.
It was more that 90% of the listings appear to be 4i with SATA power, so it was an availability thing. However I had no idea there was a difference from 4i to 8i - I naively figured SFF-8654 is SFF-8654.
Looking at it more closely all of the 8i showing up on ebay are over $150 - not sure why there is such a jump from 4 to 8.
However Amazon has a reasonable 8i option.

Ultimately if you only need a single SSD then connecting NVMe will be cheaper, though the drive itself may be more expensive.
If this one goes well I would probably look at adding a few more. I honestly can't see myself with more than 4 off at any point, but it will depend on what the market does and what becomes available.
Do you suppose SAS SSDs will be phased out in the future or is that a long way off?
 

nexox

Well-Known Member
May 3, 2023
1,825
881
113
However I had no idea there was a difference from 4i to 8i - I naively figured SFF-8654 is SFF-8654.
The 8i and 4i are very different sizes and at least by eye it doesn't look like the 4i makes up the center pins of the 8i, but I could be wrong: https://img.genuinemodules.com/cach...i8-2x8654i4/CAB-8654i8-2x8654i4-1-800x800.jpg


Do you suppose SAS SSDs will be phased out in the future or is that a long way off?
I don't really have any special insight, but given that SAS4 more or less just came out and manufacturers are making SSDs that support it, I don't think they're going away too soon.
 
  • Like
Reactions: gggr

mattventura

Well-Known Member
Nov 9, 2022
721
387
63
When it comes to NVMe there are even fewer cable options, and looking at some other threads these HBAs may not be the best way to add NVMe drives anyway. So I will ignore this for now and focus on SAS for my own use.
"Tri-mode" HBAs are suboptimal because they present your NVMe drives as SAS drives to the host. This only provides value in very specific circumstances, like being able to hotplug NVMe on hosts that would otherwise not support it.

There are many options for how to connect the drives, but the simplest and least expensive is a riser that holds the drive(s) directly on the card, like this one. The downside is that you need as many PCIe lanes on the host as you want to dedicate to drives (e.g. if you want 4 drives, you need a x16 slot that can bifurcate down to 4x4). However, the advantage is that you'll generally get the fastest speeds, the lowest power draw, and if you go with the simple riser option, no cables, which just add cost. You can also get adapters to let you plug a cable into an M.2 slot to support a single NVMe drive.

Your motherboard splits lanes between the two x16 slots, so you would have x8 available. Assuming it supports bifurcation, that gives you two NVMe drives which will be faster than whatever combination of SAS HBA+drives you could realistically use.
 

gggr

New Member
Jul 18, 2025
16
3
3
Your motherboard splits lanes between the two x16 slots, so you would have x8 available. Assuming it supports bifurcation, that gives you two NVMe drives which will be faster than whatever combination of SAS HBA+drives you could realistically use.
My interpretation of this section of the motherboard specs
2 x PCI Express 3.0 x16 Slots (PCIE1/PCIE3: single at x16 (PCIE1); dual at x16 (PCIE1) / x4 (PCIE3))*
Is that with both slots occupied, one runs at x16 and the other runs at x4? And there is no mention of bifurcation so I assume that is a no go.
So most likely only 1 off U.2 drive could be added with this motherboard with this option.
(I'm assuming that if you put the 4xU.2 riser into a x4 PCI slot it doesn't split down to 4x1 - it just doesn't work).
(I also acknowledge that the 1 off SSD in question is several times more expensive than my motherboard, and you could argue it is throwing good money after bad trying to fit something to it).

SAS SSDs also seem to be available for cheaper than NVMe SSDs, at least in my ebay browsing.
 

mattventura

Well-Known Member
Nov 9, 2022
721
387
63
My interpretation of this section of the motherboard specs
2 x PCI Express 3.0 x16 Slots (PCIE1/PCIE3: single at x16 (PCIE1); dual at x16 (PCIE1) / x4 (PCIE3))*
Is that with both slots occupied, one runs at x16 and the other runs at x4? And there is no mention of bifurcation so I assume that is a no go.
So most likely only 1 off U.2 drive could be added with this motherboard with this option.
(I'm assuming that if you put the 4xU.2 riser into a x4 PCI slot it doesn't split down to 4x1 - it just doesn't work).
(I also acknowledge that the 1 off SSD in question is several times more expensive than my motherboard, and you could argue it is throwing good money after bad trying to fit something to it).

SAS SSDs also seem to be available for cheaper than NVMe SSDs, at least in my ebay browsing.
Yes, you would only get one, though it would perform better than a single (or even a couple) SAS SSDs. However, looking at those AU prices, you might be better off just going the SAS route.
 
  • Like
Reactions: gggr

itronin

Well-Known Member
Nov 24, 2018
1,401
947
113
Denver, Colorado
Looking at options for adding large-capacity SSD(s) to a consumer PC, and hoping to check my understanding and flag any possible issues.

(I think Nimbus is offering high capacity SATA SSDs but I have not seen any for sale).
If you can afford a Nimbus or two then you should seriously think about getting a better motherboard... If this is your daily driver then maybe a workstation board and CPU with a lot more exposed PCIe lanes.

To add SAS to a consumer mobo without a backplane the best option is probably a SAS HBA - typically an LSI/Broadcom (or a similar rebadged / counterfeit card).
Since this hasn't been addressed, I'm going to speak to this because you might have a misunderstanding. Whether you have a SAS capable backplane *or not* you still need a SAS HBA to talk to SAS drives.

One thing I haven't seen discussed too much is the availability of cables for the connector on each card and the resulting limits - assuming no backplane is available, so going direct from HBA to drive. Using 8i cards as an example.
Don't get too hung up on the cables (yet). Consider that the basic unit of connectivity coming off a SAS HBA is a "SAS lane". Cards listed as -4i, -8i, -16i are in general talking about the number of SAS lanes to/from the HBA, and in general the "i" refers to internal. If you are NOT using a SAS expander then think of an HBA lane as a connection for a single "direct connect" SAS or SATA disk.

An aside: You can think of a SAS expander as a switch that multiplexes multiple SAS HBA lanes to many more disks than there are lanes on the HBA.

When talking about direct connect, the SAS cable breaks out SAS lanes to individual disks. A cable like an 8654 may break out to 8 drives; an 8643 may break out to 4. If the cable is broken out to SATA drive connectors then you are either going to a direct connect backplane with SATA drive connectors (like a Supermicro TQ) or to SATA drives. If the cable is broken out to SAS connectors like an SFF-8482, then depending on how the cable is constructed you will hook up SATA power *or* molex - it will be one or the other, and either way you'll have to provide drive power yourself.

I think you'll have a tough time getting a usable quantity of NVMe on this motherboard so I'm not going to go there. And before you ask "what about those two M.2 slots" - yeah, not for serious performance (see my comments below).

My system is set up as below
Motherboard - ASRock Model B365M Phantom Gaming 4
From the specs
2 x PCI Express 3.0 x16 Slots (PCIE1/PCIE3: single at x16 (PCIE1); dual at x16 (PCIE1) / x4 (PCIE3))*
1 x PCI Express 3.0 x1 Slot (Flexible PCIe)
6 x SATA3 6.0 Gb/s Connectors
1 x Ultra M.2 Socket (M2_1), supports M Key type 2242/2260/2280 M.2 PCI Express module up to Gen3 x4 (32 Gb/s)**
1 x Ultra M.2 Socket (M2_2), supports M Key type 2242/2260/2280/22110 M.2 SATA3 6.0 Gb/s module and M.2 PCI Express module up to Gen3 x4 (32 Gb/s)**
*If M2_2 is occupied by a SATA-type M.2 device, SATA3_0 will be disabled.
I pulled the manual - @#@#$@#$#% ASROCK - no block diagram. But looking at one for a similar B365M MSI board, it looks like all your CPU PCIe lanes go to PCIE1.

The x4 (PCIE3) slot, the two M.2 sockets and really everything else on that motherboard seem to come from the PCH, with x4 DMI lanes connecting the PCH to the CPU.

If you were using spinning rust I'd say no problem with an -8i or possibly even a -16i hanging off the PCH x4, as you'd not likely notice significant performance degradation.

Stick 8 SAS3 SSDs off that x4 in a single pool/array etc. and you will likely notice they aren't as fast as you'd expect.
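Rough back-of-the-envelope numbers (mine, so sanity-check them): PCIe 3.0 x4 is good for roughly 4 GB/s, and DMI 3.0 is effectively the same x4 link, shared with everything else hanging off the PCH. Eight SAS3 SSDs capable of ~1 GB/s sequential each is ~8 GB/s of potential throughput, so you'd be about 2x oversubscribed before the M.2 drives and SATA ports even enter the picture.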

Where I have configured
PCIE1 - occupied by graphics card
PCIE3 - available
M2_1 - occupied by Intel 660p Series SSD M.2 PCIe 2TB (SSDPEKNW020T8X1)
M2_2 - occupied by Lexar NM790 M.2 NVMe PCIe Gen 4×4 4TB (LNM790X004T-RN9NG)

As far as I can tell there is no sharing of PCIe lanes between the M.2 sockets and the PCIe slots, and so the second PCIe slot should have x16 available? The question of bifurcation is the most confusing part to me, but I think the mobo is from a time before bifurcation was a thing and slots could be split up in that way. At the very least there is no mention of bifurcation in the product specs.
This CPU class has x16 PCIE lanes total and if the MSI block diagram is representative of this ASROCK board then they all go to PCIE1.
So no, the second x16 slot (PCIE3) only has x4 lanes, all shared through the PCH with all the other I/O on the motherboard.
PCIE3 is physically an x16 slot, but only has data paths for x4. You'll note in your ASRock manual they show a second GPU in that slot using a CrossFire bridge between the two GPUs; that x4 is really for board control and (maybe) slot power.

re. bifurcation. Just because the manual doesn't mention it does not mean it isn't available in the BIOS (esp. a later BIOS), however a quick Google leads me to believe bifurcation isn't there. Which is a shame, because you could probably get away with x8 for your GPU and steal x8 for an HBA.

re. Use Case. Is this consumer PC your daily driver? If it's your daily driver and you want a lot of storage, aren't trying to max performance and simply want the capacity, then an HBA in PCIE3 with SAS 7.68 or 15.36TB (or SATA 7.68TB) SSDs is probably fine. Just watch cooling. Or are you looking to build a hypervisor (prox/xcp/TNS/KVM etc.) box or a NAS with a GPU for transcoding, and happen to have this spare board lying about?

Lanes. The challenge with consumer boards always comes down to PCIe lanes - how to access them and break them out for your use case.
 
  • Like
Reactions: onose and nexox

gggr

New Member
Jul 18, 2025
16
3
3
Thanks for the detailed response, much appreciated.

Since this hasn't been addressed, I'm going to speak to this because you might have a misunderstanding. Whether you have a SAS capable backplane *or not* you still need a SAS HBA to talk to SAS drives.
Yes good point. Not sure what I was on about here.

Don't get too hung up on the cables (yet). Consider that the basic unit of connectivity coming off a SAS HBA is a "SAS lane". Cards listed as -4i, -8i, -16i are in general talking about the number of SAS lanes to/from the HBA, and in general the "i" refers to internal. If you are NOT using a SAS expander then think of an HBA lane as a connection for a single "direct connect" SAS or SATA disk.

An aside: You can think of a SAS expander as a switch that multiplexes multiple SAS HBA lanes to many more disks than there are lanes on the HBA.
Makes sense! It isn't really feasible to add more PCI cards or really that many more HDDs to this build, so for my use case, it will be direct connect and so each SAS lane = 1 device.

I pulled the manual - @#@#$@#$#% ASROCK - no block diagram. But looking at one for a similar B365M MSI board, it looks like all your CPU PCIe lanes go to PCIE1.

The x4 (PCIE3) slot, the two M.2 sockets and really everything else on that motherboard seem to come from the PCH, with x4 DMI lanes connecting the PCH to the CPU.

If you were using spinning rust I'd say no problem with an -8i or possibly even a -16i hanging off the PCH x4, as you'd not likely notice significant performance degradation.

Stick 8 SAS3 SSDs off that x4 in a single pool/array etc. and you will likely notice they aren't as fast as you'd expect.
Again makes sense and well explained! Even if I ended up with 8 SAS SSDs I doubt they would simultaneously be accessed during normal usage.


re. Use Case. Is this consumer PC your daily driver? If it's your daily driver and you want a lot of storage, aren't trying to max performance and simply want the capacity, then an HBA in PCIE3 with SAS 7.68 or 15.36TB (or SATA 7.68TB) SSDs is probably fine. Just watch cooling. Or are you looking to build a hypervisor (prox/xcp/TNS/KVM etc.) box or a NAS with a GPU for transcoding, and happen to have this spare board lying about?
This is primarily my gaming PC and I happen to be a bit of a datahoarder. So I want/need a lot of storage and am of course endlessly running out of space and swapping out drives for larger capacity models. So I definitely don't need blazing fast speeds and tend to prefer lower power / noise. I do have a separate truenas box full of spinning rust in the garage where heat and noise is less of an issue.
This all came about as I keep seeing those headlines about 60-100TB SSDs and wondered when (if ever) they'd become available to consumers or reach $/TB parity with spinning rust. After some searching, this SPD SSD at AU$78/TB seems a reasonable premium vs HDD prices (around $50/TB new or $30/TB used) - at least that's what I'm trying to convince myself!

Related query to the above, I know the HBA will generate heat and may need a fan. But I was (perhaps naively) assuming that an SSD will draw less power and generate less heat than an equivalent size HDD under similar operating loads - does this hold for enterprise SSDs?
 

itronin

Well-Known Member
Nov 24, 2018
1,401
947
113
Denver, Colorado
Related query to the above, I know the HBA will generate heat and may need a fan. But I was (perhaps naively) assuming that an SSD will draw less power and generate less heat than an equivalent size HDD under similar operating loads - does this hold for enterprise SSDs?
For sure the HBA will need cooling. Look here for ideas. There are models floating around for 9400 too.

When in use the SSDs will need cooling. If you have up to 2 x 5.25" bays free you could look at some Icy Dock hot-swap bays; that would allow you to use enterprise 15mm 2.5" SSDs, each bay supporting 4. If you have 1 x 5.25" bay free and can go with SATA enterprise SSDs (7.68TB) then you can look at an 8 x 2.5" 7.5mm bay. Two benefits: simpler cable management & simpler power.

Your SSDs will generate heat and they need to be cooled. If your access model is infrequent and short duration then you can probably get away with some simple cooling.

I'd be lax if I didn't bring this up. If you have a NAS box in the garage, can you run single-mode fiber or < 50m of Cat 6A cable from there to your gaming PC? You could add a single 10GbE NIC in that x4 PCIE3 slot and, assuming you have a slot free on your NAS, add a 10GbE card there for a direct-connect network. I'm naively assuming your garage NAS doesn't have 10GbE already. Obviously 10GbE isn't going to be as fast as local storage, but perhaps it's a better solution for the hardware you have? Of course, perhaps you've already looked at the network side and ruled it out (getting cable fished etc.)
 
  • Like
Reactions: gggr

gggr

New Member
Jul 18, 2025
16
3
3
For sure the HBA will need cooling. Look here for ideas. There are models floating around for 9400 too.
Unfortunately I don't have a 3D printer or know someone with one but will make sure there is something blowing air onto the HBA.
In my truenas box I have 2x80mm fans on a PCI slot fan mount blowing directly at a 9400-16i. I'll have to take some measurements of the case internals and do some checks to figure out what to do.

When in use the SSDs will need cooling. If you have up to 2 x 5.25" bays free you could look at some Icy Dock hot-swap bays; that would allow you to use enterprise 15mm 2.5" SSDs, each bay supporting 4. If you have 1 x 5.25" bay free and can go with SATA enterprise SSDs (7.68TB) then you can look at an 8 x 2.5" 7.5mm bay. Two benefits: simpler cable management & simpler power.
The case is a SilverStone TJ08B-E. It has 2 x 5.25" bays but one is currently occupied by an optical drive. It has a 4-bay 3.5" drive caddy directly behind the front fan, so the best bet is probably to put the SSDs in there. I doubt there would be much airflow over the 5.25" bays without some modifications.
Could you clarify what you mean by
8 x2.5" 7.5mm
8 off 2.5" x 7.5mm high SSDs in a single 5.25" enclosure? But not sure how this helps with cable management and power.

I'd be lax if I didn't bring this up. If you have a NAS box in the garage, can you run single-mode fiber or < 50m of Cat 6A cable from there to your gaming PC? You could add a single 10GbE NIC in that x4 PCIE3 slot and, assuming you have a slot free on your NAS, add a 10GbE card there for a direct-connect network. I'm naively assuming your garage NAS doesn't have 10GbE already. Obviously 10GbE isn't going to be as fast as local storage, but perhaps it's a better solution for the hardware you have? Of course, perhaps you've already looked at the network side and ruled it out (getting cable fished etc.)
Haha, appreciate the idea. I won't go into all the details as it isn't your problem, but long story short, the garage NAS already hosts a 1:1 copy of the gaming PC data. The NAS serves media to a Kodi box elsewhere, and the gaming PC does all the downloading and metadata management. The house is wired for Cat 5 (don't ask!) and it's not feasible to surface-mount Cat 6A or anything else.
 

nexox

Well-Known Member
May 3, 2023
1,825
881
113
I doubt there would be much airflow over the 5.25" bays without some modifications.
The hot swap bays generally include fans, so cooling is easy.

8 off 2.5" x 7.5mm high SSDs in a single 5.25" enclosure? But not sure how this helps with cable management and power.
Depending on the enclosure they usually have a couple power connectors and sometimes multi-lane data connectors like SFF-8087, which can majorly cut down on cable count.
 
  • Like
Reactions: gggr and itronin

itronin

Well-Known Member
Nov 24, 2018
1,401
947
113
Denver, Colorado
What @nexox said! Here's a single 5.25" bay US-linked example that uses SATA connections; in AU YMMV, though maybe ask Ali?

I've seen SFF-8643 connections on some, not really SFF-8087, and if you are looking at a 9300, 9305, 9306, 9400, or 9500-8i (IBM 530 etc.) then an 8643 or 8654 to SATA breakout may be your best bet. Biggest thing in my book though: only 2 SATA power connectors vs 8. I prefer molex myself, but the SATA power should be fine.
 
  • Like
Reactions: gggr and nexox

gggr

New Member
Jul 18, 2025
16
3
3
Hello all!

So the refurbished SAS SSD has arrived from the USA and I am having trouble getting it recognised.

What I have ordered / connected (including pictures in case the eBay links stop working):
SAS SSD (this is a rebadged Water Panther - Server Part Deals told me it would be reformatted etc.) from eBay
LSI 9500-8i HBA from eBay
hba.jpg
SFF-8654 8i to SATA 8 Port cable from eBay
cable.jpg
SFF 8482 SAS To SATA SAS adapter from eBay
adaptor.jpg


I know the HBA is working, as my SATA HDDs connect fine with the same cables I was trying to use for the SAS SSD.
The HBA shows up in Device Manager and in the BIOS.
I got the SATA-style breakout cable so it could be used for SATA HDDs as well, and if I ever get an enclosure/backplane down the line it will most likely have SATA connectors anyway.

So I'm wondering if this should be working, or if I've misunderstood the cross-compatibility between SATA and SAS connectors?
The adapter looks like it doesn't have any metal pins in the bridging 7-pin section of the SAS connector.

If this should be working, are there any other tests I can run? (I've sketched below roughly what I was planning to try.)
If this is incorrect, can you let me know if a different adapter would work?
There are a couple that show up on ebay for me:
Sff-8482 SAS 29 Pin to SATA 22Pin
SAS to SATA board
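
In case it helps, here is roughly what I was planning to run to see whether the HBA enumerates the drive at all. This is just a sketch assuming smartmontools is installed; the device names and types are simply whatever smartctl --scan reports on my machine, so treat it as illustrative rather than definitive.

import subprocess

def run(cmd):
    # Helper: run a command and return its stdout as text.
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# List every device smartmontools can see, including drives behind the HBA.
scan = run(["smartctl", "--scan"])
print(scan)

# Print the identity page of each scanned device; if the SAS SSD is being
# enumerated it should appear here with its vendor/model and a SAS transport.
for line in scan.splitlines():
    parts = line.split()
    if len(parts) >= 3 and parts[1] == "-d":
        device, dev_type = parts[0], parts[2]
        print(run(["smartctl", "-i", "-d", dev_type, device]))

If the SSD doesn't show up in the scan at all, I'm assuming that points at the cabling/adapter side rather than the drive's formatting.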
 

gggr

New Member
Jul 18, 2025
16
3
3
The only thought I've had is that the SSD is dual-port and the adapter is single-port - I assumed it would just function as a single-port SSD, but maybe it needs a dual-port connector?
 

itronin

Well-Known Member
Nov 24, 2018
1,401
947
113
Denver, Colorado
A picture of the top label of the drive, or something that shows the model number, would be helpful.
Do you have any pictures with it cabled up and showing everything connected, including power?