SC846 system gifted to me - A full overview with questions. Replacing DVD drive with SSDs? Ideas for upgrades or keep what I got?


nexox

Well-Known Member
May 3, 2023
700
289
63
It was listed for $500 but they were willing to sell for $450 + shipping: X11SPi-TF, Xeon 8153, 4x32GB (128GB) SK Hynix PC4-2666 DDR4 ECC RAM, and a Noctua NH-D9 DX-3647.
That cooler and the memory are each worth somewhere around $100, so that's $250 for the board, pretty good if it meets your needs as far as PCIe slots and the onboard 10G.
 

itronin

Well-Known Member
Nov 24, 2018
1,244
804
113
Denver, Colorado
Will do on checking the rack stats. Amazon says it can go up to 40" depth. I have the yellow label SuperMicro rails I believe- will cross check.
For stability I personally do not take any depth-adjustable flat-pack rack to its max depth - heck, I don't even do that on the full-height assembled ones, as I want to keep the cables inside the doors. Advice I would suggest: in your wildest dreams, what's the max-depth product you might go out and get and install in the rack? Adjust the rack on day 0 for that. You don't want to go down the road and have to re-adjust the depth.

Disclosure: I don't use the Startech ones - I use the sysracks 27U (really 29U) racks.

I've worked remotely with people that have the StarTech racks - they seem nice.
 

Koop

Active Member
Jan 24, 2024
174
85
28
That cooler and the memory are each worth somewhere around $100, so that's $250 for the board, pretty good if it meets your needs as far as PCIe slots and the onboard 10G.
I don't think I necessarily have a use case that requires 10G, at least not right now. PCIe-wise, yeah, the X11SPi is definitely better, but other than an HBA I'm not sure what else I'd connect. I guess more network connectivity possibly? Can't think of much for right now anyway.

As long as I'm not a total fool for not immediately taking that offer, I'll live lol. If you were like "You'd be an idiot not to buy that" I would really have to think about it!

For stability I personally do not take any depth-adjustable flat-pack rack to its max depth - heck, I don't even do that on the full-height assembled ones, as I want to keep the cables inside the doors. Advice I would suggest: in your wildest dreams, what's the max-depth product you might go out and get and install in the rack? Adjust the rack on day 0 for that. You don't want to go down the road and have to re-adjust the depth.

Disclosure: I don't use the Startech ones - I use the sysracks 27U (really 29U) racks.

I've worked remotely with people that have the StarTech racks - they seem nice.
Options for me are definitely limited in my current living space, as I've got to get a rack up a flight of stairs - the garage is the 1st floor. Appreciate the advice on the rack. Measure one rack once.
 

Koop

Active Member
Jan 24, 2024
174
85
28
Yeah a 10, 25, or 100G card and then every NVMe drive you can find for cheap on ebay.
Ah fair on the NVMe actually, I didn't think about that. Do you know what PCIe speeds you need for those dual m.2 adapters?
 

itronin

Well-Known Member
Nov 24, 2018
1,244
804
113
Denver, Colorado
Ah fair on the NVMe actually, I didn't think about that. Do you know what PCIe speeds you need for those dual m.2 adapters?
The "low cost" dual M.2 adapters (SM, others) require PCIe slot bifurcation. Supporting that is a function of the motherboard BIOS, the CPU, and how the PCIe lanes from the CPU are mapped to slots on the motherboard. Example: the X10SRL-F supports PCIe bifurcation. I use this on two x8 slots to give me a total of FOUR NVMe "slots", which are actually U.2 connectors going to an SM 826 TQ hybrid backplane (up to 4 NVMe U.2 and up to 12 SAS/SATA connections - 12 bays total). In my case I have 4 U.2 and 8 SAS wired up.

You want to check the manuals for block diagrams for the motherboards in your pool of MB choices. Search the manual for "bifurcation", and google bifurcation plus the MB choices you have to see who has done what (before you). :)
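As a rough illustration of the lane math behind those adapters (an illustrative sketch, not from any manual): each passive M.2 carrier needs the slot split into x4 links, one per drive.

```python
def max_m2_drives(slot_lanes: int) -> int:
    """Each M.2 NVMe drive on a passive adapter needs its own x4 link,
    so an x8 slot must bifurcate to x4x4, an x16 slot to x4x4x4x4."""
    return slot_lanes // 4

# Dual-M.2 adapter: an x8 slot bifurcated to x4x4
assert max_m2_drives(8) == 2
# Quad-M.2 adapter: an x16 slot bifurcated to x4x4x4x4
assert max_m2_drives(16) == 4
```

If the board can't bifurcate, the alternative is a pricier adapter with an onboard PCIe switch, which presents the drives itself.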
 
  • Like
Reactions: nexox

nexox

Well-Known Member
May 3, 2023
700
289
63
The basic dual m.2 adapters would need an x8 slot, and you're mostly stuck with PCIe 3.0 unless you want to spend somewhat more. If you end up with an x16 slot you can get a quad m.2 adapter as well. Plus there are NVMe SSDs that just fit into a slot, mostly x4 for things like the Intel 750/P3600/905P/etc, but there are some x8 devices out there. If you want to deal with some drivers you may also find some deals on pre-NVMe PCIe SSDs - I just found a 700GB Micron P420h for $30, and back during that brief SSD price crash I scored a 6.4TB FusionIO card for $200 (still not sure what to do with that since I moved my current build from an ATX case to a 2U...)
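For a sanity check on why an x8 Gen3 slot is enough for a dual-M.2 adapter, the arithmetic is simple (a sketch using nominal PCIe 3.0 numbers):

```python
def pcie3_gb_s(lanes: int) -> float:
    """Usable PCIe 3.0 bandwidth in GB/s: 8 GT/s per lane with
    128b/130b encoding, 8 bits per byte."""
    return lanes * 8 * (128 / 130) / 8

# An x8 slot (~7.9 GB/s) comfortably feeds two Gen3 M.2 drives
# that top out around 3.5 GB/s each; x4 (~3.9 GB/s) fits one.
print(round(pcie3_gb_s(8), 1))
print(round(pcie3_gb_s(4), 1))
```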
 

Koop

Active Member
Jan 24, 2024
174
85
28
What do both of you, @itronin and @nexox, use your m.2s for? Just want to understand your use cases. Do either of you also use SATA SSDs? m.2 vs SATA SSD scenarios and uses?
 

nexox

Well-Known Member
May 3, 2023
700
289
63
I pretty much just use m.2 to boot, because my motherboards have 2280 slots and there's nothing I want to put in there more than a P1600X; otherwise I tend to go for U.2 or add-in-card form factors. I do have a pile of SATA drives from back when the price crashed last summer, but aside from dropping a couple of 1.92TB CloudSpeeds into my new fileserver build - to use for stuff that doesn't fit on the boot drive but needs to be faster than the array of spinning drives (Prometheus database, container images and volumes) - the rest are mostly sitting in a drawer. I should really just sell them before they depreciate below the cost of shipping, but that sounds like work.

I have considered using an add in card SSD as a write through cache on my array, with the goal of potentially spinning the disks down when there are no writes and serving reads out of a large cache, but the options to actually make that happen don't feel all that reliable and there's a good chance I'll just use one as a temp volume for downloads instead of messing with my main array.
 

Koop

Active Member
Jan 24, 2024
174
85
28
the rest are mostly sitting in a drawer. I should really just sell them before they depreciate below the cost of shipping, but that sounds like work.
Sounds like you should sell or even better give me some so I can make an SSD pool. ;)
 

itronin

Well-Known Member
Nov 24, 2018
1,244
804
113
Denver, Colorado
For ZFS boot pools I use low-cost used enterprise SATA disks (120GB, under $20 a unit) or Super/SATA DOMs, since those need no physical drive space.
On TNC/TNS servers where I want a dedicated SLOG device in front of spinning rust I use the m.2 P1600X (basically at cost) or an AIC Optane 900P 280GB. For enterprise SAS SSD pools with a dedicated SLOG device I use the AIC or U.2 Optane 900P. NB: I have sync=always on those shares going to hypervisors. IMO, for servers, m.2 isn't much use unless the board supports 22110, due to wear issues. I don't like them for boot unless the server can fail without impact, as you don't see a lot of dual m.2 slots... some of the X11 boards that take consumer procs do have dual m.2, but if you look at the block diagram you'll see - again, boot duty, maybe - and I see other, better, lower-cost options.
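On SLOG sizing, a common rule of thumb (a back-of-envelope sketch with assumed numbers, not a recommendation) is that the device only needs to absorb a few transaction groups of incoming synchronous writes - OpenZFS flushes a txg every 5 seconds by default - which is why even small Optanes are plenty of capacity:

```python
def slog_size_gib(net_gbit: float, txg_seconds: float = 5.0,
                  safety: float = 2.0) -> float:
    """Rough SLOG capacity needed: incoming sync-write rate (bounded by
    network line speed) x ZFS txg flush interval x a safety margin."""
    bytes_per_sec = net_gbit / 8 * 1e9
    return bytes_per_sec * txg_seconds * safety / 2**30

# Even saturating 10GbE with sync writes only needs ~12 GiB of SLOG,
# so the win from Optane is latency and endurance, not capacity.
print(round(slog_size_gib(10), 1))
```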
 

nexox

Well-Known Member
May 3, 2023
700
289
63
Yeah my X11 boards all have the m.2 slot connected through the PCH. I should probably get around to configuring backups for those boot volumes...
 
  • Like
Reactions: itronin

Koop

Active Member
Jan 24, 2024
174
85
28
I was able to snag 2x 128GB SM SuperDOMs off eBay for very cheap (they were cheaper than all the smaller capacities, heh). According to the Supermicro product page this is the performance - not that it should honestly matter all that much for boot drives:
[attached screenshot: SuperDOM performance specs from the Supermicro product page]

That'll use the two powered SATA ports for a boot mirror. Figured that was the best course of action for boot drives so as not to touch any of the PCIe slots, so I can leverage them for whatever I want later.

For the motherboard I was able to get an X10SRi-F for sub-$100, so I said yolo and nabbed it. Next will need to be the CPU. I know the 2667 v4 was previously suggested, however I see the 1680 v4 has the same core/thread count but higher clocks (but it's also harder to source - the cheapest price I saw at a quick eBay glance was $90). I also see the 1650 v4 with fewer cores and a higher clock (6c/12t, 3.6/4.0GHz). Finally, I see the 1630 v4 has the highest clock at 3.7/4.0GHz but drops down to 4 cores / 8 threads. I think all these mentioned CPUs were dirt cheap imo.

If I want to focus on, say, SMB and NFS performance, wouldn't it make more sense to go for a higher-clock / lower-core-count CPU? Or am I missing the mark? I guess being able to play around a bit with virtualization and containers would be nice though, so maybe the previous 2667 v4 suggestion rings true? Most of the CPUs mentioned above were dirt cheap, so I'm OK with buying different ones if something doesn't work out. Also, if there's a compatible CPU for the X10SRi-F that I haven't mentioned, please let me know.

Navigating these Xeons is pretty confusing when you're new to it. Like, why are there 16xx CPUs newer than some 26xx? Were they just a separate line of cheaper chips at release time? I feel like I'm not understanding the fundamental naming principles.

With that said, any further opinions on my thought process here? Am I missing or not understanding anything fundamental that I need to consider?

I just want to start somewhere with a friendly budget so I can take my time to learn and experiment with TrueNAS- I feel like I got to bite the bullet and start somewhere. If I end up wishing I was on newer hardware I'll have a whole system ready to use elsewhere. For example I'm thinking eventually a replication target for backups of critical data. So I see this hardware as something I will keep and use either way.

Thanks again to everyone for their feedback as always.
 
  • Like
Reactions: nexox

nexox

Well-Known Member
May 3, 2023
700
289
63
Navigating these Xeons is pretty confusing when you're new to it. Like, why are there 16xx CPUs newer than some 26xx?
The E5-1600 series are for single socket systems, 2600 for up to dual socket, 4600 for up to quad sockets. The 2600 were way more common initially so they tend to be cheaper on eBay.

As far as CPU requirements, you need to figure out your network speed first. I have a quite old 6-core Opteron that does just fine at 10G for NFS; SMB is probably slower, but I don't use it for anything serious so I haven't really checked to see if it's CPU-limited or what.
 

itronin

Well-Known Member
Nov 24, 2018
1,244
804
113
Denver, Colorado
...it drops down to 4 cores / 8 threads. I think all these mentioned CPUs were dirt cheap imo.

If I want to focus on, say, SMB and NFS performance, wouldn't it make more sense to go for a higher-clock / lower-core-count CPU? Or am I missing the mark? I guess being able to play around a bit with virtualization and containers would be nice though, so maybe the previous 2667 v4 suggestion rings true? Most of the CPUs mentioned above were dirt cheap, so I'm OK with buying different ones if something doesn't work out. Also, if there's a compatible CPU for the X10SRi-F that I haven't mentioned, please let me know.
I think the E5-2667 v4 is your better value over the 16xx series - as @nexox said, there are more 26xx on the market than the workstation-oriented 16xx. You'll still have plenty of compute to mess around with virt. Figure typical oversubscription for active VMs is 5:1 on cpu/core/thread count. That's active VMs... if you have some that spend most of their time idle then you can go higher.

If you want more cores then I think the E5-2680 v4 is the best value for a higher core/thread count with some okay clock speed (2.6ghz). either one is MORE than adequate for an all-in-one server - which you are building.

IMO prices are cheap enough on both those CPUs that you can buy one - don't like it? Buy the other.
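That ~5:1 rule of thumb works out like this (a quick sketch; core/thread counts from Intel's public specs):

```python
def max_active_vcpus(cores: int, threads_per_core: int = 2,
                     ratio: int = 5) -> int:
    """~5:1 oversubscription rule of thumb: roughly five *active*
    vCPUs per physical thread; idle-mostly VMs can go higher."""
    return cores * threads_per_core * ratio

print(max_active_vcpus(8))   # E5-2667 v4, 8c/16t  -> ~80 active vCPUs
print(max_active_vcpus(14))  # E5-2680 v4, 14c/28t -> ~140 active vCPUs
```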

And HAHAHAHA, you're going to learn (US-only reference) that it's like eating Lay's potato chips - you can't eat just one. You get this server finished and you may say "oh, I need a backup server" or "oh, this one is really production now - I need a home lab one" or "oh, I just want to build another server - 'cause I can"... so if you end up with spare parts you will likely put them to use.
 
  • Like
Reactions: nexox

Koop

Active Member
Jan 24, 2024
174
85
28
The E5-1600 series are for single socket systems, 2600 for up to dual socket, 4600 for up to quad sockets. The 2600 were way more common initially so they tend to be cheaper on eBay.

As far as CPU requirements, you need to figure out your network speed first. I have a quite old 6-core Opteron that does just fine at 10G for NFS; SMB is probably slower, but I don't use it for anything serious so I haven't really checked to see if it's CPU-limited or what.
Ahhhh ok, so it has to do with socket count compatibility - now it makes sense. You can see being new to server-grade hardware is poking its holes in me.

As for the networking... I think I'm going to tackle that can of worms once I have a base setup - drop in faster network cards once I've got everything running.

I think the E5-2667 v4 is your better value over the 16xx series - as @nexox said, there are more 26xx on the market than the workstation-oriented 16xx. You'll still have plenty of compute to mess around with virt. Figure typical oversubscription for active VMs is 5:1 on cpu/core/thread count. That's active VMs... if you have some that spend most of their time idle then you can go higher.
Thanks, I think based on all discussed here I'll just grab the E5-2667v4 to start and see how it goes. $30. Easy.

And HAHAHAHA, you're going to learn (US-only reference) that it's like eating Lay's potato chips - you can't eat just one. You get this server finished and you may say "oh, I need a backup server" or "oh, this one is really production now - I need a home lab one" or "oh, I just want to build another server - 'cause I can"... so if you end up with spare parts you will likely put them to use.
lol hey I got a whole rack for a reason. To fill it.

either one is MORE than adequate for an all-in-one server - which you are building.
I just want to point out that no matter how many times I keep saying I am building with a focus on NAS first you CORRECTLY state this, MFer. lmao
 
  • Haha
Reactions: itronin

Fallen Kell

Member
Mar 10, 2020
57
23
8
I was able to get a BPN-SAS2-846EL1 pretty cheap to replace my BPN-SAS-846TQ backplane (still waiting for it in the mail). I'm trying to pick up a new single HBA that I can use now and in the future. To that end I was thinking of getting the AOC-S3008L-L8E. I should be able to cable it with 2x SFF-8643 to SFF-8087 connections to the EL1, and if by chance I come across a SAS3 backplane in the future (BPN-SAS3-846EL1?) I'd already have an HBA ready to go. While I know starting out with all spinning disks means I wouldn't be able to utilize the bandwidth, I'm just thinking ahead: if I ever end up wanting a pool of SSDs, I'd want to consider this futureproofing.

Does the above make sense or am I mistaken? Any caveats I may be missing? There are about 10 zillion HBAs with all different controllers (I literally started making a spreadsheet to track everything because of this lol), so let me know if my choice is trash or if I'm missing something.
Since no one has answered this, I would say that is something of a mistake replacing the BPN-SAS-846TQ with a BPN-SAS2-846EL1. Yes in theory you would think the SAS2 was better, but the BPN-SAS-846TQ is actually direct passthrough, meaning it is capable of SAS, SAS2, and even SAS3 speeds, you just need to cable it up to the appropriate controller. The BPN-SAS2-846EL1 is much more limited in that sense as it will not support SAS3 and is SAS2 only (as it uses a SAS2 expansion chip), with the only benefit being less cable clutter over the BPN-SAS-846TQ (which needs 24x SAS/SATA connectors for all 24 drives).

If you are really trying to cut on cable clutter, you should have been looking for the BPN-SAS-846A (which cuts it down to using 6xSFF-8087 connectors on the board, vs 24 SATA/SAS connectors), or gone directly to the SAS3 backplanes and not lost functionality/capability of the backplane you currently have.

There are some guides that explain this, like this one at: Homelab – Backplane for SUPERMICRO SC846 Chassis, the Buying Guide
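To put rough numbers on the expander tradeoff (my back-of-envelope, assuming ~250 MB/s sequential per modern spinner): the EL1's 8-lane SAS2 uplink only becomes a limit if all 24 drives stream sequentially at once, which is why it's fine for spinning rust and a concern mainly for SSD pools.

```python
def sas_lane_mb_s(gen: int) -> int:
    """Usable per-lane SAS bandwidth: SAS2 is 6 Gb/s with 8b/10b
    encoding (~600 MB/s), SAS3 is 12 Gb/s (~1200 MB/s)."""
    return {2: 600, 3: 1200}[gen]

uplink = 8 * sas_lane_mb_s(2)  # 2x SFF-8087 to the EL1 = 8 lanes: 4800 MB/s
demand = 24 * 250              # 24 HDDs at ~250 MB/s sequential: 6000 MB/s
print(uplink, demand)          # shared uplink vs. one dedicated lane per
                               # drive on the direct-attach TQ/846A boards
```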
 
  • Wow
  • Like
Reactions: nabsltd and itronin

mattventura

Active Member
Nov 9, 2022
449
218
43
Since no one has answered this, I would say that is something of a mistake replacing the BPN-SAS-846TQ with a BPN-SAS2-846EL1. Yes in theory you would think the SAS2 was better, but the BPN-SAS-846TQ is actually direct passthrough, meaning it is capable of SAS, SAS2, and even SAS3 speeds, you just need to cable it up to the appropriate controller. The BPN-SAS2-846EL1 is much more limited in that sense as it will not support SAS3 and is SAS2 only (as it uses a SAS2 expansion chip), with the only benefit being less cable clutter over the BPN-SAS-846TQ (which needs 24x SAS/SATA connectors for all 24 drives).

If you are really trying to cut on cable clutter, you should have been looking for the BPN-SAS-846A (which cuts it down to using 6xSFF-8087 connectors on the board, vs 24 SATA/SAS connectors), or gone directly to the SAS3 backplanes and not lost functionality/capability of the backplane you currently have.

There are some guides that explain this, like this one at: Homelab – Backplane for SUPERMICRO SC846 Chassis, the Buying Guide
Less cable clutter, but also fewer HBAs. It's nice to be able to run the entire thing off one 8i HBA rather than needing 24 lanes.
 
  • Like
Reactions: nexox and itronin

itronin

Well-Known Member
Nov 24, 2018
1,244
804
113
Denver, Colorado
Since no one has answered this, I would say that is something of a mistake replacing the BPN-SAS-846TQ with a BPN-SAS2-846EL1. Yes in theory

There are some guides that explain this, like this one at: Homelab – Backplane for SUPERMICRO SC846 Chassis, the Buying Guide
You perhaps meant to write "answered directly". Indirectly there was much discussion of # of PCIe lanes, usage, HBA sizes & availability, etc. - so indirectly, yep, it was talked about. Also use case: at least for now, spinning rust. Also discussed - a 24i HBA with the TQ too... Discussed the complexity of the build - lots discussed. Did you read all the posts?

But before you think I'm disagreeing with your statement about the TQ - I'm a fan of the TQ, said so earlier. I actually run TQs on my 836s. You only have to cable that once. But you also need an appropriately sized controller to be efficient in both slot and lane consumption.

You may have missed the point about compromise, especially in the face of use cases. While this use case was originally NAS-focused, you'll see it has morphed (as is often the case).

FWIW, if the use case changes, OP gets a bigger HBA, heck, gets a whole 'nother server - they'll still have that 846 TQ backplane and can always swap it back in.

Beauty of the process and all that.

My opinion (which is worth $0 USD): OP made a wise call starting with the SAS2-EL1. To be blunt, OP is doing their homework, articulating the pros and cons, considering their use cases, and adapting their decisions to match what they're learning as their use case transitions a bit.
 
  • Like
Reactions: Koop and nexox