SC846 system gifted to me - A full overview with questions. Replacing DVD drive with SSDs? Ideas for upgrades or keep what I got?


nexox

Well-Known Member
Optane is great for small random writes because it can be overwritten directly, without the slow erase that NAND needs between writing to the same page, and the longevity is fantastic. Still, there are different degrees of random IO, and really a decent NAND SSD will handle most of them (and random reads are pretty fast for any enterprise NVMe drive,) especially if you get them oversized so they have plenty of spare blocks to erase and prepare for new writes.
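The erase-before-program penalty described above can be sketched with a toy latency model. All timings below are illustrative assumptions for the sake of the comparison, not measured figures for any real drive:

```python
# Toy latency model: in-place overwrite (Optane-style media) vs NAND's
# erase-before-program cycle. Timings are illustrative assumptions only.

OPTANE_WRITE_US = 10    # assumed: media that can be overwritten in place
NAND_PROGRAM_US = 200   # assumed: NAND page program time
NAND_ERASE_US = 2000    # assumed: NAND block erase time

def nand_overwrite_cost(pages: int, pages_per_block: int = 64) -> int:
    """Worst-case microseconds to rewrite pages when every dirty block
    must first be erased (no pre-erased spare blocks available)."""
    blocks = -(-pages // pages_per_block)  # ceiling division
    return blocks * NAND_ERASE_US + pages * NAND_PROGRAM_US

def optane_overwrite_cost(pages: int) -> int:
    """Media that overwrites in place pays no erase penalty."""
    return pages * OPTANE_WRITE_US

if __name__ == "__main__":
    for n in (1, 64, 1024):
        print(n, nand_overwrite_cost(n), optane_overwrite_cost(n))
```

The gap shrinks when the drive can hide erases in background garbage collection, which is exactly why oversizing helps: more spare blocks means more pre-erased space ready for incoming writes.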

In my thinking a VM generally doesn't do that many random writes, because Linux and BSD tend to be pretty optimized away from useless writes, and many write workloads are sequential. So you're really looking at a specific application like a high-throughput database in a VM that might benefit from Optane storage, and most of those can still be handled just fine by a write-optimized NAND drive, at a much lower price.

That said I have like 5 Optane drives in use for no good reason at all and I won't be giving them up ever.
 
  • Haha
Reactions: itronin

Koop

Active Member
Optane is great for small random writes because it can be overwritten directly, without the slow erase that NAND needs between writing to the same page, and the longevity is fantastic. Still, there are different degrees of random IO, and really a decent NAND SSD will handle most of them (and random reads are pretty fast for any enterprise NVMe drive,) especially if you get them oversized so they have plenty of spare blocks to erase and prepare for new writes.

In my thinking a VM generally doesn't do that many random writes, because Linux and BSD tend to be pretty optimized away from useless writes, and many write workloads are sequential. So you're really looking at a specific application like a high-throughput database in a VM that might benefit from Optane storage, and most of those can still be handled just fine by a write-optimized NAND drive, at a much lower price.

That said I have like 5 Optane drives in use for no good reason at all and I won't be giving them up ever.
Everything in that TrueNAS resource just made it sound like the bee's knees, hah. I do see there's a bunch of drives that have Optane cache as well, not sure where they fit on the performance scale?

Just exploring my options for a potential future "how do I jam as many SSDs into this thing" idea when I came across that resource. I was also thinking of using a PCIe expansion card to throw M.2 drives in there? Or drives that can be dropped right into PCIe?

Any other options? What could offer me the maximum density of flash storage?

I do also have a crap ton of onboard SATA ports to leverage, but with that single Molex connection I don't know the best way to power enough drives if I were to, say, mount them all on the inside of the chassis wall.

Not something I'm looking to do ASAP but it sounds like fun to eventually get in there.
 

mattventura

Active Member
Everything in that TrueNAS resource just made it sound like the bee's knees, hah. I do see there's a bunch of drives that have Optane cache as well, not sure where they fit on the performance scale?

Just exploring my options for a potential future "how do I jam as many SSDs into this thing" idea when I came across that resource. I was also thinking of using a PCIe expansion card to throw M.2 drives in there? Or drives that can be dropped right into PCIe?

Any other options? What could offer me the maximum density of flash storage?

I do also have a crap ton of onboard SATA ports to leverage, but with that single Molex connection I don't know the best way to power enough drives if I were to, say, mount them all on the inside of the chassis wall.

Not something I'm looking to do ASAP but it sounds like fun to eventually get in there.
There are also 2.5" U.2 NVMe drives. They tend to have a bit better performance than M.2s, and are available in larger capacities. Unfortunately, options are a bit limited on the 846. You can look for a backplane that supports U.2 NVMe drives directly (BPN-SAS3-846EL1-N8), but they're expensive and a bit hard to find. There's also no NVMe option for the rear drive bays. You might be able to use the internal fixed drive bay mounts instead.
 

nexox

Well-Known Member
Optane cache drives are mostly useless outside the platforms designed to use them, since they require x2x2 bifurcation to access the Optane part (Intel half-assed this because what else would they do.)

I forget which sorts of PCIe slots you ended up with. An x4 is probably best used with an AIC-type NVMe drive, but it will run a single M.2 or U.2 just fine. An x8 slot can fit two M.2 drives with a bifurcation card, four with a more expensive (and higher power consumption) PCIe switch card, or a rarer x8 AIC (all the Intel x8 drives are really two x4 SSDs behind a PCIe switch, often a good price, but with some tradeoffs). Multiple U.2 drives mounted directly on a PCIe card seems to result in problems too often, and I don't think you have anywhere to mount drives on cables until you upgrade backplanes again. If you're willing to deal with out-of-tree Linux drivers and utils from some random Dropbox, then perhaps a FusionIO card would make sense. In an x16 slot you can fit four M.2 drives (cooling becomes important here), or there's a fun riser that holds two M.2 drives under an x8 slot for a low-profile card. I wish I could find a link to one, but I can't seem to figure out the correct search terms.
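The slot arithmetic above can be sketched quickly (assuming x4 lanes per NVMe device, and a hypothetical switch card that doubles its slot's drive count, as with the 4-drive cards in an x8 slot mentioned):

```python
# Rough sketch of drives-per-slot under bifurcation vs a switch card.
# This summarizes the post's examples; it is not an exhaustive hardware list.

def max_m2_drives(slot_lanes: int, lanes_per_drive: int = 4,
                  has_switch: bool = False) -> int:
    """Drives a slot can host: one per x4 group with plain bifurcation;
    a switch card is assumed (for this sketch) to double that."""
    direct = slot_lanes // lanes_per_drive
    return direct * 2 if has_switch else direct

print(max_m2_drives(4))                    # x4 slot: one drive
print(max_m2_drives(8))                    # x8 + bifurcation riser: two
print(max_m2_drives(8, has_switch=True))   # x8 + switch card: four
print(max_m2_drives(16))                   # x16 bifurcated x4x4x4x4: four
```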
 

Koop

Active Member
You might be able to use the internal fixed drive bay mounts instead.
Yeah that would be the way for me. There's actually a bunch of 3D printable options for mounting along with the official metal brackets.

Optane cache drives are mostly useless outside the platforms designed to use them, since they require x2x2 bifurcation to access the Optane part (Intel half-assed this because what else would they do.)

I forget which sorts of PCIe slots you ended up with. An x4 is probably best used with an AIC-type NVMe drive, but it will run a single M.2 or U.2 just fine. An x8 slot can fit two M.2 drives with a bifurcation card, four with a more expensive (and higher power consumption) PCIe switch card, or a rarer x8 AIC (all the Intel x8 drives are really two x4 SSDs behind a PCIe switch, often a good price, but with some tradeoffs). Multiple U.2 drives mounted directly on a PCIe card seems to result in problems too often, and I don't think you have anywhere to mount drives on cables until you upgrade backplanes again. If you're willing to deal with out-of-tree Linux drivers and utils from some random Dropbox, then perhaps a FusionIO card would make sense. In an x16 slot you can fit four M.2 drives (cooling becomes important here), or there's a fun riser that holds two M.2 drives under an x8 slot for a low-profile card. I wish I could find a link to one, but I can't seem to figure out the correct search terms.
Just for reference on what I have available on the board:

[screenshot: the board's available PCIe slots]
 

itronin

Well-Known Member
Everything in that TrueNAS resource just made it sound like the bee's knees, hah. I do see there's a bunch of drives that have Optane cache as well, not sure where they fit on the performance scale?
NO. Unless by "cache" you mean a dedicated SLOG device or some other metadata device (special vdev). If you are talking M10 drives: don't. ever. please.

I have a mirror of Optane 905P 960GB drives - about 6 months back, various resellers (looking at you, Newegg) were offloading them at very nice prices.
Specifically, the 900P and 905P are consumer drives. That said, they work fine in homelabs for SLOG, or as larger-capacity VM storage, DB storage, etc.
The P4800X is similar to the 90xP drives but enterprise instead of consumer.

The Optane P1600X is a nice single-sided M.2 2280 drive - SLOG mostly. Not as performant as the 900P, 905P, or P4800X, but very nice and very affordable.

Optane is great, as noted previously, if you have a write-heavy workload. For example, if you were trying to build a high-performance home media server for lots of users, it would IMO make a very nice transcoding storage device.


Just exploring my options for a potential future "how do I jam as many SSDs into this thing" idea when I came across that resource. I was also thinking of using a PCIe expansion card to throw M.2 drives in there? Or drives that can be dropped right into PCIe?
AIC (add-in card) - the typical acronym for an NVMe drive on a card that you drop into a slot.
If your server board supports bifurcation, there are low-cost M.2 carrier boards that rely on it.
There are also cards that let you mount a U.2 drive to the card and slot it. Just make sure you have good static-pressure airflow across the U.2 drive.

IMO, a very nice AIC is the Intel P3605 at 1.6TB. Put in a pair, ZFS-mirror them, and use the mirror for VM storage. Unfortunately it will burn a pair of x4 slots - or go U.2 with a bifurcated x8 carrier board.

Any other options? What could offer me the maximum density of flash storage?
Cost-wise, U.2 drives seem to be the low-cost sweet spot; they've been getting unloaded for about the last year, which keeps driving prices down.
SAS3 SSDs seem to be decreasing in price much more slowly.

You could potentially add a JBOD shelf and an -8e or -16e HBA to your server and go off-board - maybe a 2U 24-bay SFF shelf for SATA or SAS SSDs if you wanted. ZFS pool: maybe an Nx3-way mirror, which would give you very highly available and speedy VM read performance.
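The capacity/redundancy trade-off of that layout can be sketched quickly (the 24-bay shelf is from the post; the 1.92TB drive size is a hypothetical example):

```python
# Sketch of an N x 3-way-mirror ZFS layout for a hypothetical 24-bay
# SFF shelf. Usable space is one drive's worth per mirror vdev.

def pool_layout(bays: int, drive_tb: float, mirror_width: int = 3):
    vdevs = bays // mirror_width
    return {
        "vdevs": vdevs,
        "usable_tb": vdevs * drive_tb,                 # 1/3 of raw capacity
        "drives_survivable_per_vdev": mirror_width - 1,
    }

print(pool_layout(24, 1.92))  # 8 vdevs, ~15.4 TB usable, 2 failures per vdev
```

You trade two-thirds of the raw capacity for the read performance and availability mentioned above: every mirror vdev can lose two drives, and reads can be served from any of the three copies.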

I do also have a crap ton of onboard sata ports to leverage but with that single molex connection I don't know the best way to power enough drives if I were to say, mount them all on the inside of the chassis wall.

Not something I'm looking to do ASAP but it sounds like fun to eventually get in there.
You'll never really use up those onboard SATA ports when you have 24 bays of SAS3 expander backplane.

My suggestion: keep planning and asking, but get your server online first so you can get a feel for how it behaves and how you are going to use it before investing too much in more drives or hitting the fork in the road. I think you've reached that point - time to play, then adjust your path!
 
  • Like
Reactions: Koop

itronin

Well-Known Member
Found the fun riser: PCIe X16 To X8+X4+X4 Expansion Riser Card Extended Card M.2 NVMEx2 Input Ports | eBay

It seems that all the cheap Intel x8 AIC SSDs I looked at last week have dried up, but they would be a nice fit on top.

Edit: Since you can just use two x8 slots, you probably don't need this x16 setup - just two separate cards.
WOW!!! It's been commoditized (sic - made-up word). About 2.5 years ago I had Christian Payne make me a couple of custom risers like that (and they cost a lot more than that eBay listing). They went in my Little Black NAS project builds. By custom I mean the M.2 mount was increased in standoff height to support double-sided M.2 22110 drives, as his original version only supported single-sided drives.

Only thing is, watch your airflow and power draw coming off the PCIe slot you use. A -16i/e HBA and 2x M.2 (or U.2, fed power from the PDB) NVMe drives would be fine.
 
  • Like
Reactions: Koop and nexox

itronin

Well-Known Member
Yeah that would be the way for me. There's actually a bunch of 3D printable options for mounting along with the official metal brackets.

Don't forget to look at the block diagram in the manual to see how those lanes are being pulled, especially if you are going to configure bifurcation.

Edit - the onboard M.2 slot cries out for a dedicated SLOG device for your spinning-rust pool - maybe a 58GB P1600X.
 
  • Like
Reactions: Koop

Koop

Active Member
NO. Unless by "cache" you mean a dedicated SLOG device or some other metadata device (special vdev). If you are talking M10 drives: don't. ever. please.
Nah, I did mean these dookie little guys:
[image: Intel Optane Memory cache modules]

I assumed they were not worth the time to look at, but hey, you never know unless you ask. When it comes to dedicated drive(s) for SLOG, I wasn't thinking that far ahead yet - that seems like something to think about a while out.

I have a mirror of Optane 905P 960GB drives - about 6 months back, various resellers (looking at you, Newegg) were offloading them at very nice prices.
Specifically, the 900P and 905P are consumer drives. That said, they work fine in homelabs for SLOG, or as larger-capacity VM storage, DB storage, etc.
The P4800X is similar to the 90xP drives but enterprise instead of consumer.

The Optane P1600X is a nice single-sided M.2 2280 drive - SLOG mostly. Not as performant as the 900P, 905P, or P4800X, but very nice and very affordable.

Optane is great, as noted previously, if you have a write-heavy workload. For example, if you were trying to build a high-performance home media server for lots of users, it would IMO make a very nice transcoding storage device.
Thanks I appreciate the insight here.

AIC (add-in card) - the typical acronym for an NVMe drive on a card that you drop into a slot.
If your server board supports bifurcation, there are low-cost M.2 carrier boards that rely on it.
There are also cards that let you mount a U.2 drive to the card and slot it. Just make sure you have good static-pressure airflow across the U.2 drive.

IMO, a very nice AIC is the Intel P3605 at 1.6TB. Put in a pair, ZFS-mirror them, and use the mirror for VM storage. Unfortunately it will burn a pair of x4 slots - or go U.2 with a bifurcated x8 carrier board.

Cost-wise, U.2 drives seem to be the low-cost sweet spot; they've been getting unloaded for about the last year, which keeps driving prices down.
SAS3 SSDs seem to be decreasing in price much more slowly.

You could potentially add a JBOD shelf and an -8e or -16e HBA to your server and go off-board - maybe a 2U 24-bay SFF shelf for SATA or SAS SSDs if you wanted. ZFS pool: maybe an Nx3-way mirror, which would give you very highly available and speedy VM read performance.
And all your insights here - thanks for going through everything in so much detail and sharing your opinions.

My suggestion, keep planning, asking, but get your server online first so you can get a feel for how it behaves and how you are going to use it before investing too much in more drives or going hitting the fork in the road. I think you've reached that point, time to play, then adjust your path!
This, 900%, of course. I was only reading and researching while I wait for some last-minute parts. (The person I bought the X11SPi-TF from "forgot" to include the CPU, which they sent out yesterday... so I'm waiting on that.)

I absolutely do not intend to do anything with SSDs in any meaningful capacity any time soon - let alone buy any. I may have some M.2 drives just lying around, but otherwise I'm just being curious.

So no worries- the only creep I let in was jumping onto that X11 board before the X10 haha.
Found the fun riser: PCIe X16 To X8+X4+X4 Expansion Riser Card Extended Card M.2 NVMEx2 Input Ports | eBay

It seems that all the cheap Intel x8 AIC SSDs I looked at last week have dried up, but they would be a nice fit on top.

Edit: Since you can just use two x8 slots, you probably don't need this x16 setup - just two separate cards.
lol that thing looks interesting for sure.
 

Koop

Active Member
Don't forget to look at the block diagram in the manual to see how those lanes are being pulled, especially if you are going to configure bifurcation.

Edit - the onboard M.2 slot cries out for a dedicated SLOG device for your spinning-rust pool - maybe a 58GB P1600X.
Will keep this in mind though, thanks
 

nabsltd

Well-Known Member
Everything in that TrueNAS resource just made it sound like the bee's knees, hah.
TrueNAS uses ZFS. ZFS sync writes directly to spinning disks are slow. So, you use a separate device for the ZIL (ZFS Intent Log). This is called a "SLOG" (separate log). Using flash, the SLOG is faster than the spinning disks, so you get better performance. For an explanation, see here.

However, because ZFS limits the total amount of data in the transaction group (TXG), even a SLOG only helps if your workload is bursty. Long term, you still can only write at the speed of the spinning disk. So, if you are blasting data through a 40Gbps network link, you'd need 4GB/sec write speed on the ZFS pool over the long term. It doesn't matter if your SLOG can write that fast...without changing the defaults, ZFS won't let much data sit in RAM before being committed to the pool.
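The back-of-envelope math works out like this (the 5-second figure is the OpenZFS default `zfs_txg_timeout`; the 80% link-efficiency factor is an assumption to account for protocol overhead):

```python
# Why a saturated 40Gbps link implies ~4 GB/s of long-term pool write
# speed, and how much one transaction-group window can absorb.

def link_gbytes_per_sec(gbits: float, efficiency: float = 0.8) -> float:
    """Payload GB/s from a nominal link rate, after protocol overhead."""
    return gbits / 8 * efficiency

def burst_absorbed_gb(ingest_gb_s: float, txg_timeout_s: float = 5.0) -> float:
    """Roughly how much data one open transaction group can accept
    before ZFS forces it out to the pool disks."""
    return ingest_gb_s * txg_timeout_s

rate = link_gbytes_per_sec(40)  # ~4 GB/s from a 40Gbps link
print(rate)                     # 4.0
print(burst_absorbed_gb(rate))  # 20.0 GB per txg window
```

Past that ~20 GB burst window, sustained ingest is gated by the pool disks no matter how fast the SLOG is.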

Essentially, despite the fact that a SLOG device with power loss protection (PLP) means the data has absolutely been written to a non-volatile location that will eventually end up in the pool, ZFS refuses to use more than a tiny amount of it before it forces the data from RAM to the spinning disk, thus reducing throughput. If you don't care about your data, disable sync writes and ZFS will let the TXG grow much larger in RAM, and you'll get better performance, as long as you don't write more than the size of your system RAM in one long burst.

TL;DR: Optane is great for SLOG, but you don't need more than the 118GB version regardless of the size of your pool until you start thinking about 100Gbps networking. Buy two and put them in a RAID-1 to use as SLOG and you'll get as much performance as ZFS will let you with sync writes.
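A common sizing rule of thumb behind that TL;DR: the SLOG only ever holds a few seconds of inbound sync writes (roughly one txg window's worth, doubled for safety), so its capacity requirement scales with network speed, not pool size. The 2x margin below is a conventional assumption, not an OpenZFS limit:

```python
# SLOG capacity needed to hold ~txg_timeout seconds of sync writes at
# full wire speed, with a 2x safety margin (conventional rule of thumb).

def slog_needed_gb(link_gbps: float, txg_timeout_s: float = 5.0,
                   margin: float = 2.0) -> float:
    ingest_gb_s = link_gbps / 8  # optimistic: assume full wire speed
    return ingest_gb_s * txg_timeout_s * margin

for gbps in (10, 40, 100):
    print(gbps, slog_needed_gb(gbps))
# 10Gbps -> 12.5 GB, 40Gbps -> 50.0 GB, 100Gbps -> 125.0 GB:
# a 118GB P1600X covers anything short of 100Gbps, matching the TL;DR.
```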

Note that if you are going to use mirroring, you want the two drives on separate cards in separate slots. The whole point of "redundant" is making sure a single point of failure doesn't break things. With the drives in separate slots, only a motherboard failure can break the pool.
 
Last edited:
  • Like
Reactions: nexox and Koop

Koop

Active Member
TrueNAS uses ZFS. ZFS sync writes directly to spinning disks are slow. So, you use a separate device for the ZIL (ZFS Intent Log). This is called a "SLOG" (separate log). Using flash, the SLOG is faster than the spinning disks, so you get better performance. For an explanation, see here.

However, because ZFS limits the total amount of data in the transaction group (TXG), even a SLOG only helps if your workload is bursty. Long term, you still can only write at the speed of the spinning disk. So, if you are blasting data through a 40Gbps network link, you'd need 4GB/sec write speed on the ZFS pool over the long term. It doesn't matter if your SLOG can write that fast...without changing the defaults, ZFS won't let much data sit in RAM before being committed to the pool.

Essentially, despite the fact that a SLOG device with power loss protection (PLP) means the data has absolutely been written to a non-volatile location that will eventually end up in the pool, ZFS refuses to use more than a tiny amount of it before it forces the data from RAM to the spinning disk, thus reducing throughput. If you don't care about your data, disable sync writes and ZFS will let the TXG grow much larger in RAM, and you'll get better performance, as long as you don't write more than the size of your system RAM in one long burst.

TL;DR: Optane is great for SLOG, but you don't need more than the 118GB version regardless of the size of your pool until you start thinking about 100Gbps networking. Buy two and put them in a RAID-1 to use as SLOG and you'll get as much performance as ZFS will let you with sync writes.

Note that if you are going to use mirroring, you want the two drives on separate cards in separate slots. The whole point of "redundant" is making sure a single point of failure doesn't break things. With the drives in separate slots, only a motherboard failure can break the pool.
Extremely informative, thank you. I did indeed read a lot of this, as well as about L2ARC, and the conclusion I walked away with was "just buy more RAM" after reading so many discussions on the TrueNAS forums.

From the way you put it, and how itronin described their recommendation, it seems like it would be wasteful to have a large drive dedicated to SLOG, no? Wouldn't it make more sense to get the smaller 58GB drive? Or am I missing something? Also, in terms of mirroring, I can see why that may not be considered necessary - because if the SLOG were to fail it would just fall back to the ZIL, right? I assume that's why itronin mentioned it specifically being used in the single M.2 slot available on my X11 board.

Obviously I do get redundancy though. In my case I would technically have a couple points of failure in that regard (single HBA, for example). But those are sacrifices worth making IMO, unless it's something that may completely jeopardize my whole pool, like a bad vdev layout.
 
  • Like
Reactions: nexox

nexox

Well-Known Member
The 118GB P1600X is often available for almost the same price as the 58GB; if you wait for a deal, the added endurance and performance are probably worth a few extra dollars.
 
  • Like
Reactions: itronin and nabsltd

Koop

Active Member
The 118GB P1600X is often available for almost the same price as the 58GB; if you wait for a deal, the added endurance and performance are probably worth a few extra dollars.
Ah I see. So not about the size, got it.
 

nabsltd

Well-Known Member
Also in terms of mirroring I can see why that may not be considered necessary - because if the SLOG were to fail it would just fall back to the ZIL, right?
Here's a scenario:

Power cuts out suddenly, so you lose what has not been written from RAM to the pool. Since ZFS wrote it to the ZIL, you're fine, right? Wrong - because the ZIL was on a separate drive (the SLOG), and when you power back on, that drive isn't responsive... it died during the power failure. So now you've lost data.

With a RAID-1 SLOG, the same thing could happen, but only if both drives die. ZIL within the pool space has the same failure chance as the pool as a whole.
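A quick way to see the improvement, assuming independent device failures and an arbitrary per-device failure probability:

```python
# Illustration of why a mirrored SLOG narrows the data-loss window:
# losing sync writes requires a crash AND the SLOG being unreadable
# afterwards; a mirror needs both devices to die in the same event.
# The per-device probability below is an arbitrary assumption.

def slog_loss_probability(p_device_fail: float, mirrored: bool) -> float:
    """Chance the ZIL contents are unreadable after a crash, assuming
    independent device failures."""
    return p_device_fail ** 2 if mirrored else p_device_fail

p = 0.01  # assumed chance one SLOG device dies during a power event
single = slog_loss_probability(p, mirrored=False)  # 1 in 100
mirror = slog_loss_probability(p, mirrored=True)   # 1 in 10,000
print(single, mirror)
```

Real-world failures during a power event aren't fully independent (shared PSU, shared surge), which is one more argument for putting the two devices on separate cards in separate slots, as noted above.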
 
  • Like
Reactions: itronin

Koop

Active Member
Jan 24, 2024
149
48
28
Here's a scenario:

Power cuts out suddenly, so you lose what has not been written from RAM to the pool. Since ZFS wrote it to the ZIL, you're fine, right? Wrong - because the ZIL was on a separate drive (the SLOG), and when you power back on, that drive isn't responsive... it died during the power failure. So now you've lost data.

With a RAID-1 SLOG, the same thing could happen, but only if both drives die. ZIL within the pool space has the same failure chance as the pool as a whole.
Fair. Makes complete sense. A power event like that is exactly when that drive would go toast too.
 

Fallen Kell

Member
There is a night and day difference between enterprise and anything below it. Also, wow, that is a good number of chassis you picked up brand new - I'm envious for sure! The versatility is the best part, and there are plenty of novel solutions out there, such as 3D-printed mounts and mods - though I would prefer to keep things as stock as possible. The main things I've entertained so far were trying to get some front fans mounted to push air over my drives, as opposed to pulling air using the stock fans, due to noise. There are also 3D prints to replace the stock internal fans, which I'm interested in - but I have no access to a 3D printer, so I need to do more research into how I can print things to test some of those solutions out.
You don't even really need to 3D print the internal bracket for replacement fans. You can simply remove the fan wall bracket that is in there and use double-sided tape and/or zip/twist ties to connect 3x 120mm fans together directly, then just slot the assembly into the same spot in the chassis, holding it in place with a twist tie (I believe I had to remove the little bracket that holds the internal fan wall, but that was just 1-2 screws). If you want to really seal it up well, you can use some foam weather stripping on the top, bottom, and 2 sides of the 3x fan grouping. I simply put it on one side and the top in my case, but there are people who would do all 4 sides for a better air seal, to ensure the air is being sucked through the drive array and not simply circulating around the sides of the fans. I sealed up the side air vent holes on my chassis to force all the intake air to come in from the front over the disk drives (I only have 12 drives right now though).

Also, if you do this mod, consider using heatsink+fan combos on your CPUs (I added 2x Noctua NH-U9DX i4 heatsinks, as I have a board with 2x narrow LGA2011). If I recall, the 120mm fans I used were Noctua industrial NF-F12 iPPC-3000s, which the chassis didn't seem to have any issues with at all, since they (in theory) move more air than the original 80mm fans (109 CFM vs 72.5 CFM) and are much quieter (43.5 dBA vs 53.5 dBA).

You do lose the ability to easily hot-swap them in case of a fan failure (since you would need to pull all three fans at once, and that would remove all the cooling). You would be in the same situation even with the 3D-printable setup (all the 3D files I have seen are simply small spans that connect the 3 fans, plus a better connection to the chassis sides to hold everything in place).
 
Last edited:

nabsltd

Well-Known Member
You can simply remove the fan wall bracket that is in there and use double-sided tape and/or zip/twist ties to connect 3x 120mm together directly and just slot it into the same spot in the chassis, holding it in place with a twist tie (I believe I had to remove the little bracket that holds the internal fan wall, but that was just 1-2 screws).
3M Dual Lock works really well to secure fans to a sheet-metal chassis. The adhesive is rated to pretty insane temperatures (it's what is distributed with automated toll sensors to stick on your windshield) and is quite sticky.

It makes the fans easy to remove for maintenance, and if you have to replace one, just put Dual Lock on the new fan and you're set. Unlike Velcro, both sides of the Dual Lock are identical, so you don't end up wasting one side if you keep replacing/moving stuff.
 

Fallen Kell

Member
3M Dual Lock works really well to secure fans to a sheet-metal chassis. The adhesive is rated to pretty insane temperatures (it's what is distributed with automated toll sensors to stick on your windshield) and is quite sticky.

It makes the fans easy to remove for maintenance, and if you have to replace one, just put Dual Lock on the new fan and you're set. Unlike Velcro, both sides of the Dual Lock are identical, so you don't end up wasting one side if you keep replacing/moving stuff.
I had not thought of using 3M Dual Lock for this, but once you said it, it makes sense. I use it on my guitar and bass pedalboards to anchor the various effects/pedals, and the stuff works great in that application. You really don't need much, as it locks together much more strongly than Velcro and requires pulling at an angle to remove (just pulling straight apart will usually take more force than even the very strong adhesion of the backing tape can handle, so the adhesive tape comes off before the items connected via the Dual Lock separate).