SC846 system gifted to me - A full overview with questions. Replacing DVD drive with SSDs? Ideas for upgrades or keep what I got?


Koop

Active Member
Jan 24, 2024
165
76
28
I upgraded from dual X5650s (LGA 1366) to dual E5-2695s (LGA 2011) and saw a 100-watt power difference with all cores at 100% doing video rendering: 385 watts down to 285 watts. Performance is better too; Passmark CPU went from 8500 to 16000. I wanted to keep the DDR3 memory I had, and it's an inexpensive way to go. The best you can do is a 2011 v2 CPU; v3 and v4 are 2011-3 and DDR4, which costs a lot more. Watching CPU temps you can see the E5-2695 reduce power when it idles. Hope that helps with your decision to do the cheap upgrade or not. Sometimes I wish I had put the cash out for a bigger jump.
I have actually taken your comment into consideration, time traveled, and got an X11 board because of this. Thank you.

Just wanted to let y'all know that so far I've just been running the full badblocks script against all my drives, and everything is looking solid. With the three front fans I added, the drives are staying cool, and the whole system is actually extremely quiet now. I replaced the fan wall with the lower-RPM green-colored drop-ins, and I modded the two rear fan mounts to take 2x Noctuas.

My next issue is IPMI reporting fan failure: the Noctuas sometimes spin slowly enough that the BMC reads them as 0 RPM. I know from googling that there are ways to fix this, so I'll do that once the badblocks testing completes... I really don't want to mess with the system too much and interrupt it somehow, hah.
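For what it's worth, the fix people usually land on (assuming a Supermicro BMC; sensor names like FAN4 vary by board) is to drop the lower fan thresholds below the Noctuas' idle RPM with ipmitool. A rough sketch:

```shell
# Show the sensor's current reading and thresholds
ipmitool sensor get FAN4

# Set the lower non-recoverable / critical / non-critical thresholds
# (values in RPM) below the slowest speed the Noctuas ever spin at,
# so the BMC stops flagging them as failed and ramping everything up.
ipmitool sensor thresh FAN4 lower 100 200 300
```

On many Supermicro boards fan speeds are reported in steps of roughly 100 RPM, so round the thresholds accordingly.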

Some interesting news. An eBay seller had 3x CSE-848XA-R3240B barebone systems up for $300 as well as the complete system with X10QBi board inside.

What struck me was I noticed the chassis said it had the BPN-SAS3-846EL1 backplane inside it. The SAS3 backplane sells for $245+$20 shipping cheapest on eBay and it seems that seller sells them frequently.

So obviously I was curious whether I could use this chassis. Then one day I got an alert that the barebone chassis had dropped to $200. OK, so now I'm getting the backplane for less than buying it on its own, plus a whole chassis, multiple power supplies, cables, etc., for only $200 shipped?

I bought two of them. I didn't get to dig inside too much (need a lift assist!) but I was able to verify the SAS3 backplane is indeed in there, with 2x Supermicro SFF-8643 to SFF-8643 cables connected.

Now they have the empty chassis back up to $300.

I figured worst case I could find a way to leverage these as JBODs, or just salvage whatever parts I can from them. Maybe the power distribution board could be used in my 846? Certainly the SAS3 backplane. Heck, even just having a bunch of extra drive sleds is nice.

Figured it was too good a deal to pass up.
 

nabsltd

Well-Known Member
Jan 26, 2022
423
288
63
Some interesting news. An eBay seller had 3x CSE-848XA-R3240B barebone systems up for $300 as well as the complete system with X10QBi board inside.

What struck me was I noticed the chassis said it had the BPN-SAS3-846EL1 backplane inside it. The SAS3 backplane sells for $245+$20 shipping cheapest on eBay and it seems that seller sells them frequently.
I'll have to keep an eye out for this kind of deal. I'm going to need to upgrade the backplane in my 846s (all currently SAS2) sometime this year, and having the spare chassis could be nice.
 

Koop

Active Member
Jan 24, 2024
165
76
28
I'll have to keep an eye out for this kind of deal. I'm going to need to upgrade the backplane in my 846s (all currently SAS2) sometime this year, and having the spare chassis could be nice.
Here's a link directly to the listing I got the deal from.

Perhaps in the future they'll drop the price again.

The chassis is huge. I made another topic detailing exactly what was inside. Not really sure what I'll end up doing with them.

In other news, my NAS has been up and running without issue. Things are going well and I'm just copying a bunch of data to it for now. Badblocks took almost a week to finish but came back with a clean bill of health. I still need to settle on a backup solution for the more important data.

I've set up S.M.A.R.T. short tests daily and long tests weekly, with the pool scrub weekly as well, keeping the scrub and long test a few days apart. Maybe I'm being a bit too aggressive with these schedules, though?
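TrueNAS schedules these through its own UI, but for reference the same cadence in raw smartd.conf syntax would look something like this (device path illustrative):

```
# Short self-test daily at 02:00, long self-test Saturdays at 03:00
/dev/sda -a -s (S/../.././02|L/../../6/03)
```

Offsetting the weekly scrub a few days from the long test, as described above, is a common arrangement so the two heavy jobs don't overlap.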

I'd also like to organize the data in a more logical way. Right now I just have a few high-level shares, but I'd prefer a proper filesystem structure rather than a bunch of datasets sitting directly under my pool mount. So far I've been treating datasets 1:1 with shares, but I can see that's not the right way to do it. I'm sure there are good best-practice guides for dataset/share layout. For now I'm just dumping in all the data I can; I can always move things around after the fact. I have about 32TB of data to throw on there now, one drive at a time over SMB.
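For what it's worth, the usual ZFS convention is a shallow purpose-based hierarchy: datasets group data with similar tuning and snapshot needs, and shares map onto whatever level makes sense. A sketch with made-up names, assuming a pool called tank:

```shell
# Children inherit properties, so set them once at the parent
zfs create -o compression=lz4 tank/media
zfs create -o recordsize=1M tank/media/video    # large sequential files
zfs create tank/home                            # one child per user below
zfs create tank/home/koop
zfs create tank/backups                         # snapshot/replication unit
```

Datasets are the unit of snapshots, replication, and property tuning, which is the main reason to split them by purpose rather than strictly 1:1 with shares.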

I created a Windows Server VM to play with, and was thinking of experimenting with an AD domain for SMB authentication. Not sure if there's a different or better way to do that, other than skipping it and just using local user credentials.

I recall back in my Isilon NAS days connecting to a proper Windows AD and LDAP server to set up a usermap correlating an LDAP user with a Windows domain user... That was a while ago. Not sure if that's done in TrueNAS or if it was just an Isilon thing.

I can always just make a separate share for each individual local user; it's not like there are that many.

I did see that you can apparently mount NTFS USB drives. Too late for that, for now anyway, hah. Maybe when I dig through the boxes of data sitting on old drives. Half the fun will be going through those random drives I stored away years and years ago and copying them onto here.

But yeah up and running and looking good! Now to just play around with settings..maybe create one or two more VMs for now.

For apps I set up the TrueCharts catalogue and was following their documentation.

Only thing I need to consider is replication of critical data. Might pick up a small NAS for that purpose to just be a replication target for the important stuff like photos and docs.

If anyone has suggestions for best-practice documentation covering overall configuration beyond setting up your storage pool, feel free to share. I'm down to read or watch.

Cheers!
 

nabsltd

Well-Known Member
Jan 26, 2022
423
288
63
I recall back in my Isilon NAS days connecting to a proper Windows AD and LDAP server to set up a usermap correlating an LDAP user with a Windows domain user... That was a while ago. Not sure if that's done in TrueNAS or if it was just an Isilon thing.
You can do the same sort of thing with TrueNAS, but you'll have to ask how at their forums.

If it's a Windows filesystem, I share it from a Windows machine (usually a VM), because that gives me the most flexibility and compatibility. Also, with Windows Distributed File System (DFS) and remote-to-remote symbolic links enabled, it's really easy to share out data and not have the client know exactly where the data is stored. For DFS, the client just knows to go to \\DOMAIN_NAME\SHARENAME and it finds the data. DFS can be done in Samba, but it's harder to set up and get right.
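For reference, the Samba side of this comes down to two parameters plus specially formatted symlinks. A minimal smb.conf sketch (share and path names made up):

```
[global]
    host msdfs = yes

[dfsroot]
    path = /srv/dfsroot
    msdfs root = yes
```

Inside /srv/dfsroot, each DFS link is a symlink of the form `msdfs:server\share` (e.g. `ln -s 'msdfs:fileserver1\data' data`), and clients browsing the dfsroot share get transparently referred to the real server.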
 

Koop

Active Member
Jan 24, 2024
165
76
28
You can do the same sort of thing with TrueNAS, but you'll have to ask how at their forums.

If it's a Windows filesystem, I share it from a Windows machine (usually a VM), because that gives me the most flexibility and compatibility. Also, with Windows Distributed File System (DFS) and remote-to-remote symbolic links enabled, it's really easy to share out data and not have the client know exactly where the data is stored. For DFS, the client just knows to go to \\DOMAIN_NAME\SHARENAME and it finds the data. DFS can be done in Samba, but it's harder to set up and get right.
Sounds good, I'll have to play around, see, and ask questions.

I posted about this over on the TrueNAS forums so I figured I'd drop what I learned here too.

For a day or so I noticed something interesting: every five seconds, all my disks would light up with activity. It was consistent and constant. Hmm, OK, what happens every five seconds... It must be sync writes being committed from my SLOG device? I don't say that with authority, but from what I understood that happens every five seconds, right? So that would make a lot of sense; I have my Optane SLOG on my spinning-disk pool.

I found a very funny "solution" on the forums: just make it wait longer!

Code:
echo 10 >/sys/module/zfs/parameters/zfs_txg_timeout
Obviously I found this kind of odd; changing how this works seems like something you fundamentally shouldn't mess with. It did raise the question in my mind, though: why every 5 seconds? Why not a shorter or longer period? Genuinely curious if this could be elaborated on.

I did play with it, though, just to see. I set it to 1 second and watched all my hard drives click and flash every single second; set it to 60 and watched them light up every sixty seconds. Then I put it back to the default. Changing something so fundamental didn't seem "wise", and it wasn't really what I was trying to figure out.
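On the "why 5 seconds" question: zfs_txg_timeout is just the maximum age of an open transaction group. Under heavy write load ZFS commits sooner, when dirty-data thresholds fill up, so the timer mostly shows on idle systems, which matches the metronome-like blinking. Also worth noting: writes through /sys are runtime-only and reset at reboot. A sketch of checking and (if you really wanted to) persisting the value; the modprobe.d path is the generic Linux convention, and TrueNAS SCALE's middleware may manage this differently:

```shell
# Read the current transaction-group commit interval (default: 5 seconds)
cat /sys/module/zfs/parameters/zfs_txg_timeout

# Runtime change, lost at reboot
echo 5 > /sys/module/zfs/parameters/zfs_txg_timeout

# Persistent form on a stock Linux install (not recommended on an appliance)
echo "options zfs zfs_txg_timeout=5" >> /etc/modprobe.d/zfs.conf
```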

And it obviously still didn't answer the question: what was actually doing these writes? From more searching I found out about the System Dataset and how it was set to my large spinning-disk pool. I changed it to use my boot pool instead, but no change.

Finally it clicked: it was the apps. Well, it clicked because it was literally the only other thing it could be, haha. Then it made sense. Of course they're constantly writing something; I was playing with the Netdata app, and constant monitoring has got to generate something, right? But when I turned off all my apps it still did it, still activity every 5 seconds on the dot. So I figured this must be something fundamental to how apps work on SCALE, and my next idea was to move where the apps live off my spinning-disk pool.

So my solution was to take an M.2 drive I had lying around, slap it into an M.2 NVMe-to-PCIe adapter, and move the apps over to that instead. Lo and behold, no more activity every five seconds.

Was there anything else I may have missed in figuring this out? Is keeping the System Dataset on my boot pool (2x SATA DOMs) a good idea, or should I move it to the NVMe? I think I'll eventually expand my NVMe pool to an x16 card with bifurcation and four NVMe drives, to accommodate more apps and a fast storage pool in general. I ain't mirroring those things, though; M.2 drives are stupidly expensive.
 

Koop

Active Member
Jan 24, 2024
165
76
28
Another thing. I've noticed my drives can get a bit toasty.

(screenshot: drive temperature report)

Highest temp 68C? hmmm... Is that okay? It must've been when I was writing a lot of data to the system.

All of these drives are model HUH721010ALE600.


(screenshots: HUH721010ALE600 data sheet, environmental/temperature specification tables)

The data sheet says operating conditions are up to 60C, but this second chart shows upward of 70C? I guess 60 to 70C should be considered not good. I should find a way to cool the drives down.
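Separate from the spec-sheet question, if anyone wants to watch these temps from a script rather than the UI, the number is easy to pull out of `smartctl -A` output. A sketch in Python; the sample line mimics the attribute-194 format (values made up):

```python
import re

# Example of the SMART attribute 194 line as printed by `smartctl -A`
# (illustrative values, not from a real drive)
SAMPLE = ("194 Temperature_Celsius     0x0002   162   162   000    "
          "Old_age   Always       -       37 (Min/Max 24/45)")

def drive_temp_c(smart_attrs):
    """Return the current temperature in Celsius, or None if not found."""
    m = re.search(r"Temperature_Celsius.*?-\s+(\d+)", smart_attrs)
    return int(m.group(1)) if m else None

print(drive_temp_c(SAMPLE))  # → 37
```

From there it's a small step to alert (or ramp fans) when any drive crosses a threshold like 55C.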
 
Last edited:

nexox

Well-Known Member
May 3, 2023
678
282
63
It did raise the question in my mind, though: why every 5 seconds? Why not a shorter or longer period? Genuinely curious if this could be elaborated on.
I don't know anything about this particular situation, but my experience suggests this went something like "I dunno, just pick a reasonable default and we'll tune it later/we'll document it so the user can adjust it if they need to." Then nobody ever thought about it again.
 

nabsltd

Well-Known Member
Jan 26, 2022
423
288
63
Another thing. I've noticed my drives can get a bit toasty.

Highest temp 68C? hmmm... Is that okay? It must've been when I was writing a lot of data to the system.
I wouldn't want my drives to run that hot.

Try these scripts. They help solve the problem that most of the fan headers are tied to the CPU temperature, and none know anything about drive temperatures.
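The core of scripts like these is just a mapping from the hottest drive temperature to a PWM duty cycle, re-applied every polling interval. A minimal sketch of that mapping (the thresholds are made-up examples, and the command that actually applies the duty is board-specific):

```python
def fan_duty(temp_c, low=30.0, high=55.0, min_duty=20, max_duty=100):
    """Linear fan curve: min_duty at or below `low` degrees C,
    max_duty at or above `high`, interpolated in between."""
    if temp_c <= low:
        return min_duty
    if temp_c >= high:
        return max_duty
    frac = (temp_c - low) / (high - low)
    return round(min_duty + frac * (max_duty - min_duty))

# The duty cycle would then be pushed to the BMC's fan zone, e.g. on
# many Supermicro boards via a raw command (zone and scaling vary):
#   ipmitool raw 0x30 0x70 0x66 0x01 <zone> <duty>
print(fan_duty(42.5))  # → 60
```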
 
  • Like
Reactions: itronin and nexox

Koop

Active Member
Jan 24, 2024
165
76
28
I wouldn't want my drives to run that hot.

Try these scripts. They help solve the problem that most of the fan headers are tied to the CPU temperature, and none know anything about drive temperatures.
OHHH thank you SO MUCH for the link to these. Definitely going to set this up. Yeah, from reading the motherboard manual I realized most of my fans are tied to the CPU, which isn't working all that hard, so the disks are being ignored.

Thankfully they aren't constantly running that hot, though the range I see is pretty wide. Hopefully implementing these scripts will even things out. Once summer is in full swing I know I'll be in deep trouble, so I'm watching it carefully.
 

ziggygt

Member
Jul 23, 2019
62
10
8
I have actually taken your comment into consideration, time traveled, and got an X11 board because of this. Thank you.

Some interesting news. An eBay seller had 3x CSE-848XA-R3240B barebone systems up for $300 as well as the complete system with X10QBi board inside.
That X10QBi MB is a beast with 4 CPUs and crazy memory capacity. It would sure make the meter spin (165 watts each?), but how fun. I see that package is still available, but my lab area already looks like a hoarder's junk pile and my electric bill is nasty enough. Which X11 board did you choose?
 

Koop

Active Member
Jan 24, 2024
165
76
28
That X10QBi MB is a beast with 4 CPUs and crazy memory capacity. It would sure make the meter spin (165 watts each?), but how fun. I see that package is still available, but my lab area already looks like a hoarder's junk pile and my electric bill is nasty enough. Which X11 board did you choose?
I got a deal on an X11SPI-TF combo (board, memory, and CPU) being sold independently on Facebook Marketplace, so that's what I went for.

Though I do see the value in a dual-CPU board now, regardless of the need for more CPU power in general. I was looking at the X11DPH boards and saw they have dual M.2 slots, more PCIe lanes, etc., and thought: oh, duh, sure, with two CPUs of course you can do that. Having that would be pretty nice for slapping in more NVMe if the price difference between boards were small. Some of those first-gen Scalable CPUs are dumb cheap.

I think if I were ever to 'upgrade' or build a second NAS I'd want something with PCIe 4.0 to leverage faster NVMe drives, maybe? Not sure how much those X12 boards go for, and CPU prices are still kind of a pain in the ass to track. (Although, thinking about it, I'm not sure all the power connections off the PDB would be there for newer boards? I'd have to double-check that.)
 
Last edited:

nexox

Well-Known Member
May 3, 2023
678
282
63
Not sure how much them X12 boards are.
Pretty expensive still: rarely below $500 for a used single-socket board, then probably around $400 minimum for a Xeon Silver, unless you want to figure out the whole ES/QS thing. If you want a whole lot of lanes for NVMe then Epyc is probably the way to go, and if you really need serious SSD performance I'd say more 3.0 drives are a better choice than 4.0, which is not only more expensive but much pickier about adapters and cables.
 

Koop

Active Member
Jan 24, 2024
165
76
28
unless you want to figure out the whole ES/QS thing.
It's a whole can of worms. I've seen the thread on here and it's got a lot of info, but across like ten zillion pages it's kinda tough to follow. I was trying to home in on a possible gen-2 Scalable 'upgrade' for my X11SPI-TF, but it's been a bit difficult to find and figure out.

If you want a whole lot of lanes for NVMe then Epyc is probably the way to go, and if you really need serious SSD performance I'd say more 3.0 drives are a better choice than 4.0, which is not only more expensive but much pickier about adapters and cables.
Appreciate the advice. I was thinking about what I would build a hypervisor on, and it seems Epyc may be a great way to go for that. I'm starting to see why people take issue with how TrueNAS handles apps, for example, and why they'd want to keep things "NAS only." VMs work, but then you end up taking away resources like memory that could be used for cache, and that feels bad. From reading the forums here I see a lot of people looking at Epyc builds, so there's a lot of good info to work with just here.

I may get my hands on another 846 chassis locally. I was thinking I might use it for a Proxmox or XCP-ng build and try a virtualized TrueNAS. Wonder how things would go if I just dropped 24 cheap/small SSDs into all the drive bays on a SAS3 backplane. ¯\_(ツ)_/¯
 
  • Like
Reactions: itronin

nabsltd

Well-Known Member
Jan 26, 2022
423
288
63
Though I do see the value in a dual-CPU board now, regardless of the need for more CPU power in general. I was looking at the X11DPH boards and saw they have dual M.2 slots, more PCIe lanes, etc., and thought: oh, duh, sure, with two CPUs of course you can do that.
4U chassis make the DP boards easy, as you can get a wide variety of cheap CPU coolers instead of having to deal with a pair of screaming 60mm fans like in 2U. I swore off DP because my compute nodes are 2U.

I think if I were ever 'upgrade' or build a second NAS I'd want to get something with PCIe 4.0 to leverage faster NVMe drives maybe?
A bunch of RAM and PCIe 3.0 NVMe drives will easily absorb 5GB/sec in the short term, and up to 3GB/sec (striped pair) long term. So unless you have more than 50Gbps of network speed, you don't need PCIe 4.0 NVMe drives. For a compute node, where programs access the drives directly, it could help.
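To make the units in that math explicit (a sanity check, not from the post): storage throughput is usually quoted in gigabytes per second and network links in gigabits, so a factor of 8 connects them.

```python
def gbytes_to_gbits(gb_per_s):
    """Convert GB/s of storage throughput to Gbit/s of network load."""
    return gb_per_s * 8

# A striped pair of PCIe 3.0 NVMe drives at ~3 GB/s sustained:
print(gbytes_to_gbits(3))   # → 24 (Gbit/s, well under a 50Gbps link)

# Short bursts absorbed by RAM and drive caches at ~5 GB/s:
print(gbytes_to_gbits(5))   # → 40
```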
 

Koop

Active Member
Jan 24, 2024
165
76
28
A bunch of RAM and PCIe 3.0 NVMe drives will easily absorb 5GB/sec in the short term, and up to 3GB/sec (striped pair) long term. So unless you have more than 50Gbps of network speed, you don't need PCIe 4.0 NVMe drives. For a compute node, where programs access the drives directly, it could help.
Totally fair and I appreciate the math. Sounds like if I wanted to consider hardware specifically with PCIe 4.0 NVMe in mind it wouldn't be for a dedicated NAS. I think at this point I am good with my single dedicated NAS but I'm looking to do something for a hypervisor and try to virtualize TrueNAS on it as a replication target for a smaller subset of my important data.
 

nabsltd

Well-Known Member
Jan 26, 2022
423
288
63
I'm looking to do something for a hypervisor and try to virtualize TrueNAS on it as a replication target for a smaller subset of my important data.
I can't recall if the backplane you replaced had an expander, but "hypervisor plus virtualized TrueNAS" is a great use case for a backplane without an expander.

Plug two cables from an HBA into the backplane and pass the HBA and the 8 drives connected through to the TrueNAS VM. The hypervisor would use a different storage card (this is where hardware RAID can work nicely, and a 9361-8i with non-volatile cache and super-capacitor is $40) and would connect to the backplane with two cables.
 
  • Like
Reactions: Koop

Koop

Active Member
Jan 24, 2024
165
76
28
I can't recall if the backplane you replaced had an expander, but "hypervisor plus virtualized TrueNAS" is a great use case for a backplane without an expander.

Plug two cables from an HBA into the backplane and pass the HBA and the 8 drives connected through to the TrueNAS VM. The hypervisor would use a different storage card (this is where hardware RAID can work nicely, and a 9361-8i with non-volatile cache and super-capacitor is $40) and would connect to the backplane with two cables.
Ah-ha. The original backplane I had was direct attach. Thanks for the great idea.