SC846 system gifted to me - A full overview with questions. Replacing DVD drive with SSDs? Ideas for upgrades or keep what I got?


nexox

Well-Known Member
May 3, 2023
I think the single socket boards that make sense are the X10SRL-F, X10SRI-F, X11SPL-F, and X11SPI-F. The difference between the I and L variants is that the I has at least one x16 slot, while the L has all x8 or lower (sometimes in x16 physical slots). X11 boards also usually have a 2280 M.2 slot, which is nice for a boot drive, but it is usually connected through the PCH so performance isn't ideal (Supermicro is good about including block diagrams in the manual to make it super clear how everything is connected).
 

nexox

Well-Known Member
May 3, 2023
Those X11DPH boards do look like a decent deal, though the ones I see with CPUs included aren't a better value, since the price difference is 2-3x what the CPUs cost on their own. You can run a dual socket board with just one CPU; you just have to check the block diagram to see which slots and other components connect to CPU2 and will thus be unusable with only CPU1. Since you have an EATX chassis, that could definitely be more cost effective than buying a single socket board, except for that temptation to just drop in another CPU and 6 more DIMMs and turn up the idle power consumption by 35-50W.
 

Koop

Active Member
Jan 24, 2024
I think the single socket boards that make sense are the X10SRL-F, X10SRI-F, X11SPL-F, and X11SPI-F.
Thank you for your thoughts on these choices. I had looked at the X11SPL-F and saw it was a big price jump from the X10SRL-F, so I was thinking the value wasn't there. Is there something that makes it worth the price difference that I may be missing?

The difference between the I and L variants is that the I has at least one x16 slot, while the L has all x8 or lower (sometimes in x16 physical slots). X11 boards also usually have a 2280 M.2 slot, which is nice for a boot drive, but it is usually connected through the PCH so performance isn't ideal (Supermicro is good about including block diagrams in the manual to make it super clear how everything is connected).
Seems like it would be nice to have the x16 slot just in case? Maybe drop a Quadro in there for some purpose? (Just the first thing that came to my mind, honestly.) Flexibility by just having it.

The M.2 sounds nice, but if I can just use SuperDOMs (I can't get over this name, by the way, lol) for the boot device then I figure that's the way to go. Unless there's a direct comparison to be made between the two? I just assumed, as soon as I read the description for SuperDOMs, that was my easy mirrored boot device.

Those X11DPH boards do look like a decent deal, though the ones I see with CPUs included aren't a better value, since the price difference is 2-3x what the CPUs cost on their own. You can run a dual socket board with just one CPU; you just have to check the block diagram to see which slots and other components connect to CPU2 and will thus be unusable with only CPU1. Since you have an EATX chassis, that could definitely be more cost effective than buying a single socket board, except for that temptation to just drop in another CPU and 6 more DIMMs and turn up the idle power consumption by 35-50W.
Makes sense with just making sure you follow the block diagram. Wasn't sure if there were any other instability gotchas for doing that. I've never in my life thought I would own a board with two CPUs, even though one is sitting right next to me at this moment lol.

And yeah, I do definitely have the space for EATX, that is true. I would be tempted though haha. However, so far the X10SRL-F at only a bit above $100 seems like a no-brainer move. X11 boards don't seem to come close to that price point unless I'm just not finding the right stuff.

Perhaps with the space saved not going EATX I could use that space for something else entirely in the chassis? More low-RPM airflow for quieter operation? A whole separate device sitting in there? I don't know what that could possibly be, but maybe there's a thought?

I honestly saw the X10SRL-F at the price it was at and thought "oh well that's an easy win right there" haha
 

nexox

Well-Known Member
May 3, 2023
I like an x16 slot for a 100G NIC, but that's just me. I think the GPU in my workstation is only x8 electrical even though it's in an x16 slot; for server stuff like transcoding or CUDA computing that's usually fine, even if the GPU can connect at x16.

The X10SRL prices do look pretty good, and an X9SRI is barely cheaper, but if you wait and look for a while there could be other options - I scored an X11DPL in a 1U server whose listing had minimal specs and photos; I had to go through all the Supermicro dual socket board photos to see which IO port layout matched. That seller periodically has more systems like that which go for under $200, with some memory and disks. Still, there are lots of compromises on that board, like only 4 memory slots per socket and the super-uncommon LGA3647 square ILM (square is the more common ILM on LGA2011).

You can certainly use the extra space for more fans, but if you're running just one socket and ambient temperature isn't too hot, you can likely just replace the stock fans with some quieter "pressure optimized" fans and save some noise and power. You may have to pay attention to temperatures on PCIe cards, though, and add a fan or duct or something to get them enough airflow - server hardware like HBAs and 10G+ NICs expects a good amount of air flowing over its heatsinks, and if your fans all spin up or down based on CPU temperature, the cards can get hot while the CPU idles.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
The X11SPL-F is the closest to the X10SRL-F. Provantage has the best price that I have seen for the X11SPL-F, and I've had good luck purchasing from them in the past. You can always run a hypervisor and TrueNAS Core (TNC) as a guest; otherwise TrueNAS Scale (TNS) gets you a hypervisor built-in. True, a pure TNC box will benefit a bit more from single core performance. TNS is a bit of a different beast and I'm not really "up on it," but I suspect multi-core performance will be beneficial, especially if you're using it as a hypervisor/Docker host.

For a long time I ran an X10SRL-F with an E5-2680 v4, a boatload of memory, and ESXi. TNC was a guest with the IT-mode HBA and an NVMe drive passed through. A P620 was passed through to another guest for media services; everything else was just compute-and-memory VMs. I did NOT mount TNC's filesystems back to the ESXi host for VMs. It can be done, and I've done it in a couple of situations, but it's not something I recommend to the novice. For ESXi's storage I used a RAID 1 HBA with a pair of SATA SSDs for boot and VMs.

You could also run Proxmox and ZFS natively. There are other hypervisor options as well.

Your chassis and most X10/X11 single or dual socket boards will work really well for building an all-in-one system - which, from your use cases, sounds like what you are after. I'd get as much memory as you can afford, too.
 

nabsltd

Well-Known Member
Jan 26, 2022
Generally X9 (with a few exceptions) is too old for bifurcation and NVMe boot; the IPMI remote view requires ancient and broken Java with ancient broken https; plus power efficiency is worse and the maximum CPU performance is lower.
I have an X9SRL-F, and the latest BIOS supports bifurcation.

That said, I absolutely agree that if you don't have the X9 already in hand, the X10 is much better. Moving laterally, the X10SRL-F has everything good about the X9 version (lots of PCIe slots, bifurcation, etc.) plus the SuperDOM slots and even more flexible PCIe slot lanes.

I have some X11 boards, too, but feel like the prices for a motherboard/CPU combo are still pretty high, although you do get at least one onboard M.2 socket.
 

nabsltd

Well-Known Member
Jan 26, 2022
CPU for that X10 could be... E5-1650 v4? Or perhaps the E5-1630 v4 or the E5-1680 v4?
As I said in another post, I like the *SRL line of boards.

You can also use E5-2600 series processors in the board. The E5-1600 line gives you higher single core performance (about 10% faster), while the E5-2600 series allows you to get a lot more cores (up to 22 in the E5-2699A v4). So pick based on what you think you need and the price you can afford. As you noted, the E5-1600 series can be had for dirt cheap.
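
To put rough numbers on that tradeoff, here's a quick back-of-the-envelope comparison (clock/core figures as listed on Intel ARK; the "aggregate" number is just cores × base clock, a crude proxy rather than a benchmark):

```python
# Crude comparison of the two extremes mentioned above. Clock/core figures
# as listed on Intel ARK; "aggregate" is just cores * base clock, a rough
# proxy for multi-threaded capacity, not a benchmark.
cpus = {
    "E5-1650 v4":  {"cores": 6,  "base_ghz": 3.6, "turbo_ghz": 4.0},
    "E5-2699A v4": {"cores": 22, "base_ghz": 2.4, "turbo_ghz": 3.6},
}

for name, c in cpus.items():
    print(f"{name}: single-core turbo {c['turbo_ghz']} GHz, "
          f"aggregate ~{c['cores'] * c['base_ghz']:.0f} core-GHz")
```

The 1650 v4 wins single-thread by roughly 10%, while the 2699A v4 has several times the total throughput - hence "pick based on what you think you need."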
 
  • Like
Reactions: nexox

Koop

Active Member
Jan 24, 2024
As I said in another post, I like the *SRL line of boards.

Would you mind elaborating on why you like the *SRL line of boards? Sorry if that's an obvious question. I like to ask obvious questions so I can learn.

You can also use E5-2600 series processors in the board. The E5-1600 line gives you higher single core performance (about 10% faster), while the E5-2600 series allows you to get a lot more cores (up to 22 in the E5-2699A v4). So pick based on what you think you need and the price you can afford. As you noted, the E5-1600 series can be had for dirt cheap.
Noted on CPU choices. I'm thinking fewer overall cores and more single core performance. There are so many CPU choices and things to consider. This being my first time really diving into Xeons and server architecture, you can understand I'm basically swimming in a sea of information while taking as many gulps as I can haha.

I've slept on it, and I'm pretty much determined at this point that I'll be sticking to an upgrade using a single CPU X10 or X11 board, pending thoughts on the best CPU for the job. I'm assuming I want to focus on the highest clocks possible for file sharing, without sacrificing an unnecessary number of cores for minimal clock gain (balancing the tradeoff?). I'll be doubling down on making this a NAS first.

I've concluded that I would, if anything, look to use dual socket and/or higher core count CPU hardware for a Proxmox server, assuming the best approach there is to just throw as many cores as possible at the VMs - but that's another topic entirely, to tackle Proxmox as my next project. Thus I see myself heavily leaning towards the X10SRL-F unless I can get lucky with a much cheaper X11 find. Again, doubling down on "NAS first" and not treating TrueNAS Scale as a jack of all trades. Focusing instead on NAS-specific functionality and performance (snapshotting? replication?), I'm thinking the X10 platform with high speed cores would be the better approach. This is just the impression I've gotten from reading many posts on the TrueNAS forum. Once I come to some definitive part choices I'll propose them both here and on the TrueNAS forums so I can get roasted there.

So again, I'd like to double down on making this box the best NAS it can be while being power conscious. More research on what works best for TrueNAS may be needed beyond what I've gathered from my reading so far, but my understanding is: focus on a single high clock, lower core count CPU for TrueNAS, and throw as much memory as possible at it.
I assume the number of spinning drives I have (and am considering using) may fly in the face of being "power conscious," which leads to a question: are there ways in TrueNAS, or via hardware functionality, to spin down drives after they haven't been accessed for some time? "Sleeping" drives is perhaps the terminology I'm thinking of? I assume this would help with power draw from the large number of drives - if they're spun down while not in use and the NAS is otherwise idle, that should be a good improvement to idle power draw?
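
Back-of-the-envelope, here's the sort of saving I imagine spindown could give; the per-drive wattages are pure assumptions on my part, not datasheet numbers:

```python
# Rough idle-power estimate for a pool of spinning drives, with and without
# a standby (spun-down) state. The wattages are assumptions for a typical
# 3.5" 7200 rpm drive, not datasheet values - check your drive's specs.
DRIVES = 18       # three 6-wide vdevs, per the plan below
IDLE_W = 5.0      # assumed: platters spinning, no I/O
STANDBY_W = 0.8   # assumed: platters stopped

print(f"idle:    ~{DRIVES * IDLE_W:.0f} W")
print(f"standby: ~{DRIVES * STANDBY_W:.0f} W")
print(f"saving:  ~{DRIVES * (IDLE_W - STANDBY_W):.0f} W")
```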

Right now, from my understanding, I may be able to get away with using a 500W PSU, but I understand, and have, the 900W+ PSU(s) available if/when I need more power as I get to 24 drives (which I may do). All my drives are 10TB, so they are large. I know for TrueNAS I'll have to really consider vdev layout, and that is a whole other can of worms I need to dive deeper into. From the preliminary research I have done, I was thinking of picking up two more drives to do three 6-wide RAIDZ2 vdevs, and perhaps picking up 1-3 more 10TB drives to act as cold or hot spares, ready to be used in case of a disk failure. This is just very preliminary planning, and perhaps there are 10 million other ways to do it more efficiently. I do want a decent level of protection, and I know that rebuilding with drives this size must take a long-ass time, hence having spares ready. Maybe overkill?
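
For what it's worth, my rough arithmetic on that layout (simple parity math only; actual usable space will be lower after ZFS overhead and the usual keep-below-80%-full guideline):

```python
# Capacity arithmetic for three 6-wide RAIDZ2 vdevs of 10 TB drives.
# Simple parity math only - real usable space is lower after ZFS
# metadata/padding and the usual keep-it-under-80%-full guideline.
VDEVS, WIDTH, PARITY, DRIVE_TB = 3, 6, 2, 10

data_tb = VDEVS * (WIDTH - PARITY) * DRIVE_TB
print(f"drives used:      {VDEVS * WIDTH}")
print(f"raw capacity:     {VDEVS * WIDTH * DRIVE_TB} TB")
print(f"after parity:     {data_tb} TB")
print(f"80% fill target:  {data_tb * 0.8:.0f} TB")
```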

I know I asked this somewhere in my many walls of text, so I apologize. But let's say I have a scenario where I upgrade to an X10 or X11 board. What HBA + backplane combo could help me cut down on my current HBA spaghetti wiring? My thought/goal would be to remove as much cable as possible to allow for the best airflow, while also reducing the number of HBAs to cut down on their heat and power draw.
My first thought while typing this, though, is "do I truly understand where my power draw is coming from in this theoretical setup?" and the answer is not really - I just assume it's probably the large number of spinning drives. Maybe someone could kindly point me in the right direction to understanding this better?

I understand that with spinning disks I'm going to be limited on max throughput per disk, but let's just say, in theory, I went with all SSDs in the future. Any recommendations there if I wanted to maintain as high IOPS as possible? I ask specifically in the context of using an X10 or X11 board, because I would have access to newer PCIe and, I assume, perhaps a different backplane + HBA setup to consider.
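To sanity-check the all-SSD idea, here's my rough math comparing aggregate drive bandwidth against a single HBA's PCIe uplink (all spec-sheet assumptions, not measurements):

```python
# Where would an all-SATA-SSD SC846 bottleneck? Compare aggregate drive
# bandwidth to one HBA's PCIe 3.0 x8 uplink. Rough spec-sheet assumptions.
SSD_COUNT = 24
SSD_MBPS = 550          # assumed: typical SATA SSD sequential read
HBA_UPLINK_MBPS = 7880  # ~7.9 GB/s usable on PCIe 3.0 x8

aggregate = SSD_COUNT * SSD_MBPS
print(f"{SSD_COUNT} SSDs aggregate: ~{aggregate / 1000:.1f} GB/s")
print(f"PCIe 3.0 x8 HBA:  ~{HBA_UPLINK_MBPS / 1000:.1f} GB/s")
print("bottleneck:", "HBA uplink" if aggregate > HBA_UPLINK_MBPS else "drives")
```

If that math is right, a full shelf of SATA SSDs could out-run a single HBA's uplink, so slot count and lane width would start to matter.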
Any opinions on that? Am I in the right line of thinking? Any corrections, fallacies in my reasoning, or obvious gaps in knowledge?

As I've said in almost every post: to everyone who has contributed to answering my many questions and provided their thoughts and feedback, thank you very much. I truly appreciate all the information and opinions you've shared, which have helped me learn a lot. Please continue to throw thoughts and opinions my way! Thank you @nabsltd, @itronin, @nexox, @mattventura and @NPS for your contributions. Again, really appreciate it. :)
 
  • Like
Reactions: rnavarro

nexox

Well-Known Member
May 3, 2023
There are so many CPU choices and things to consider. Being my first time really diving into Xeons and server architecture ever you can understand I’m basically swimming in a sea of information while taking as many gulps as I can haha.
There are a lot of Xeons, and the SKUs get more and more confusing every generation; fortunately the v4 / Broadwell options aren't so complex. You'll want to get familiar with Intel ARK, which will give you all the basics of every chip and show you comparisons pretty easily: Products formerly Broadwell
 

Koop

Active Member
Jan 24, 2024
There are a lot of Xeons, and the SKUs get more and more confusing every generation; fortunately the v4 / Broadwell options aren't so complex. You'll want to get familiar with Intel ARK, which will give you all the basics of every chip and show you comparisons pretty easily: Products formerly Broadwell
Thanks, that is a great resource.

I've realized I never shared photos of anything up to this point. Just for the sake of it here's a few of how things currently look:
[Three photos of the chassis internals attached]
 

nexox

Well-Known Member
May 3, 2023
Do you have a CPU air shroud that was just left out of the photos? Those 2U heatsinks aren't going to get a lot of air flow without one.
 

Koop

Active Member
Jan 24, 2024
Do you have a CPU air shroud that was just left out of the photos? Those 2U heatsinks aren't going to get a lot of air flow without one.
Yeah, only removed for the photos, no worries! I am wondering if getting larger active coolers when I move up to X10 or X11 would be a good call though?

I was also hoping that showing the power connectivity I have could explain how I could potentially power SSDs or more drives internally.
 

nexox

Well-Known Member
May 3, 2023
4U active coolers would likely let you swap those main fans for quieter ones, but if you aren't so concerned with sound then 2U active coolers would also work and probably cost less. Make sure to match square vs narrow ILM to whatever your next board uses; if you're buying used, you may need to carefully eyeball the mount points in photos, since many sellers don't specify, or even get it backwards in the description.
 

Koop

Active Member
Jan 24, 2024
4U active coolers would likely let you swap those main fans for quieter ones, but if you aren't so concerned with sound then 2U active coolers would also work and probably cost less. Make sure to match square vs narrow ILM to whatever your next board uses; if you're buying used, you may need to carefully eyeball the mount points in photos, since many sellers don't specify, or even get it backwards in the description.
Yeah, ideally I'd want to swap to quieter fans + 4U active coolers to cut down on the noise, as it's a concern in my current living space. I saw I could drop in FAN-0074L4s as direct replacements, which should be much quieter. I was hoping using them plus an active cooler would be a good solution? As for keeping the drives cool, perhaps something to push air through the front of the whole chassis would be good to add as well.

I know people have replaced the whole internal fan wall before with things like Noctua fans, but I wouldn't want to do anything that requires modifying the chassis. Maybe there are some novel 3D-printed solutions for the front. I'd like to be able to swap everything back to stock when I can put my whole rack in a better space (no basement right now). For now my plan is to use a closet in my office bedroom and find some solutions for air circulation in there, or just take the door off.
 

nexox

Well-Known Member
May 3, 2023
I just swapped the fans in my 2U for 80mm pressure optimized fans; it took a little modification of the fan carriers, and I had to wrestle the hot swap fan plugs out of them and just wire them straight. I feel like many Noctuas don't have enough peak static pressure to pull air through the disks, but there are other options that still spin down to low noise levels.
 

nabsltd

Well-Known Member
Jan 26, 2022
Would you mind elaborating on why you like the *SRL line of boards?
  1. Lots of PCIe slots, with bifurcation in the BIOS. This makes it easy to add NVMe storage. The number of slots also make adding Ethernet, storage, etc., really easy.
  2. Enough memory slots that you can buy smaller DIMMs and still get a lot of total memory.
  3. Enough on-board storage (SATA or SAS) that you don't need to add a card for storage in many cases.
  4. Single processor, which means every PCIe slot is active, which isn't the case for dual-processor boards with only one CPU installed. Also, for a storage server, a single processor is enough.
X9 and X10 used SRL, while X11 and X12 used SPL. The "R" or "P" refers to the socket on the board (but the same letter doesn't mean the same socket across board generations), and the "L" means "low cost". That's reason #5: these boards have historically been less expensive, and definitely more bang for the buck than other Supermicro boards.

To me, the SRL and SPL are "jack of all trades" boards. They might not have some things built in that other boards have (10Gbit Ethernet, Oculink, M.2, more than 2x Ethernet, etc.), but the sheer number of slots make it easy to set them up with whatever you want.
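
Just to make those naming fields concrete, here's a toy decoder - entirely my own hypothetical illustration, not any official Supermicro reference:

```python
# Toy decoder for the Supermicro board-name fields discussed above.
# A hypothetical illustration only - note the socket letter's meaning
# changes between generations, so it is reported as-is here.
import re

def decode(model: str) -> dict:
    m = re.match(r"X(\d+)S([RP])(L|I)(-F)?$", model)
    if not m:
        raise ValueError(f"unrecognized model: {model}")
    gen, socket, tier, f = m.groups()
    return {
        "generation": f"X{gen}",
        "socket_letter": socket,  # meaning depends on generation
        "tier": "L = low cost (mostly x8 slots)" if tier == "L"
                else "I = at least one x16 slot",
        "onboard_ipmi": f is not None,  # -F suffix: onboard IPMI/BMC
    }

print(decode("X10SRL-F"))
print(decode("X11SPI-F"))
```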
 
  • Like
Reactions: rnavarro

Koop

Active Member
Jan 24, 2024
  1. Lots of PCIe slots, with bifurcation in the BIOS. This makes it easy to add NVMe storage. The number of slots also make adding Ethernet, storage, etc., really easy.
  2. Enough memory slots that you can buy smaller DIMMs and still get a lot of total memory.
  3. Enough on-board storage (SATA or SAS) that you don't need to add a card for storage in many cases.
  4. Single processor, which means every PCIe slot is active, which isn't the case for dual-processor boards with only one CPU installed. Also, for a storage server, a single processor is enough.
X9 and X10 used SRL, while X11 and X12 used SPL. The "R" or "P" refers to the socket on the board (but the same letter doesn't mean the same socket across board generations), and the "L" means "low cost". That's reason #5: these boards have historically been less expensive, and definitely more bang for the buck than other Supermicro boards.

To me, the SRL and SPL are "jack of all trades" boards. They might not have some things built in that other boards have (10Gbit Ethernet, Oculink, M.2, more than 2x Ethernet, etc.), but the sheer number of slots make it easy to set them up with whatever you want.
Thank you very much for the insight and explanation, really appreciate it. Super helpful to get your take on it, and it makes total sense. I agree it's much easier to get something with flexibility and then figure out what you really need by dropping in the right cards.

This was actually something I was doing a lot during my initial research and motherboard browsing: "oh, I would love to have 10GbE," but then it limited me. Going for boards like these gives me the ultimate flexibility of putting in what I want and need, and even better, changing my mind if it turns out to be a dumb idea.
 

sth

Active Member
Oct 29, 2015
The hot-swap dual 2.5” rear drive caddy only works with the newer 846B chassis. You would need to pretty heavily mod an ‘A’ chassis to fit it.
 

Koop

Active Member
Jan 24, 2024
The hot-swap dual 2.5” rear drive caddy only works with the newer 846B chassis. You would need to pretty heavily mod an ‘A’ chassis to fit it.
Is that so? Is it too thick to fit where the DVD slot would go? I figured it was a pretty close fit from eyeballing the size, but I'm not sure, I suppose.

I think looking at too many product pages got me confused; I thought I saw it listed as compatible with the SC846TQ-R1200B and/or SC846TQ-R900B, but maybe I'm just going crazy.
 