SC846 system gifted to me - a full overview with questions. Replacing the DVD drive with SSDs? Ideas for upgrades, or keep what I've got?


Koop

Active Member
Hello everyone,


I was gifted an SC846 chassis with a system ready to go inside. For the chassis itself, I was curious if there's a part I could use to replace the DVD drive in the back with SSDs. Through many Google searches I found a part, MCP-220-84606-0N, which appears to be just what I'm looking for, but I can't seem to find it anywhere. I see that this part uses a board with part number BPN-SAS-2PT, which I can find, but by itself it's obviously not as useful. Just curious if anyone has tried to do the same? I have already ordered the metal side mount (part MCP-220-84603-0N), but being able to leverage that little bit of extra space would be nice.

As to which EXACT 846 I have, I've googled a lot. It came with two 1200W 80 Plus Gold power supplies, and the backplane is a SAS846TQ, so I believe that narrows it down to the SC846TQ-R900B or SC846TQ-R1200B? According to my Google skills it should be the SC846TQ-R1200B, due to the 1200W Gold PSUs. Not sure if there is much difference between those exact models regardless.

So my next question is: does anyone know if it's possible for me to get a replacement top panel? I've been trying to search for "846 top panel" and such, but have had very little luck. The top panel was dropped some time ago right on the back corner, and even with as much skill as I could muster with my pliers, I can't really get it back into a shape that allows me to close it. The one time I did manage, it got stuck for some time, which was not ideal. Any recommendations or ideas on how I can replace the top panel?

Next I wanted to dive more into the hardware I got. Loaded inside was an X8DTH-i motherboard (there are several variants per the manual, but printed on the board itself is just X8DTH-i) with two Xeon E5620s. The BIOS shows 65536MB of system memory (which is... 65.5GB? 64GB?). There should be 8 sticks of 8GB ECC memory, which should be 64GB, yeah? But TrueNAS shows 62.9GB total, so uh haha, I'm a bit confused on what's up with the memory. It appears to all be DDR3 Samsung memory (M393B1K70CHD-CH9). There are still four open slots.
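Trying to untangle the units myself (assuming the BIOS counts in binary MiB, which I believe it does), the numbers actually seem to line up. A rough sketch of the arithmetic, not TrueNAS code:

Code:
# 65536 in the BIOS is mebibytes (MiB), not decimal megabytes
bios_mib = 65536

gib = bios_mib / 1024                 # 65536 MiB = 64 GiB exactly
gb  = bios_mib * 1024**2 / 1000**3    # the same amount in decimal GB

print(f"{bios_mib} MiB = {gib:.0f} GiB = {gb:.1f} GB (decimal)")

# TrueNAS reports *usable* RAM after the kernel and hardware
# (BMC, video, etc.) reserve their share, so seeing ~62.9 GiB out
# of 64 GiB installed would be normal rather than a missing DIMM.
print(f"~{64 - 62.9:.1f} GiB reserved by kernel/hardware")

So if that holds, all 8 sticks are present and nothing is actually missing.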

As I mentioned, it's using a SAS846TQ Rev 3.1 backplane, which is all wired up to three SAS9211-8i HBAs (flashed to IT mode). So from my understanding there are no concerns with having full bandwidth to all 24 drives, I believe (please correct me if I'm wrong here). Are there any recommendations for keeping the HBAs cool? Adding fans anywhere to help with airflow over them? From what I understand, I don't think I can monitor the temperature of these HBAs unless I use something like an external USB thermometer? Also, I believe I was told there may be a concern with using SSDs through these HBAs? I obviously need to do more research on the HBAs unless anyone would like to shed light.
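For my own sanity I also tried some back-of-the-envelope math on that bandwidth claim. The numbers below are rough assumptions (SAS2 lane rate after encoding, usable PCIe 2.0 x8 throughput, an optimistic spinner speed), so please correct me:

Code:
# Per-drive ceiling with three 9211-8i cards on a TQ backplane
drives_per_hba = 8       # one SAS2 lane per drive, no expander
sas2_lane_mbs  = 600     # 6 Gbps after 8b/10b encoding ~= 600 MB/s
pcie2_x8_mbs   = 4000    # PCIe 2.0 x8 ~= 4 GB/s usable per card
spinner_mbs    = 250     # optimistic sequential speed of a 10TB spinner

per_drive_pcie = pcie2_x8_mbs / drives_per_hba    # 500 MB/s per drive
ceiling = min(sas2_lane_mbs, per_drive_pcie)      # 500 MB/s

print(f"per-drive ceiling {ceiling:.0f} MB/s vs drive peak {spinner_mbs} MB/s")
# 500 > 250, so spinning drives can't saturate this setup

If that's right, then full bandwidth to 24 spinners really isn't a concern here.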

My plan would be to run TrueNAS Scale to present storage to Proxmox, as well as be a general dump for data from my local desktops. I'd like to be able to play with some apps in TrueNAS Scale a bit, but I'd primarily focus on storage presentation to Proxmox so it can handle the VMs, LXC containers, and whatever else I can do on Proxmox. I assume I can just present NFS share storage to Proxmox for this purpose? I know iSCSI is an option as well, but I'm pretty sure from what I've read so far that getting good performance would mean setting up your TrueNAS environment properly and intentionally for that purpose; I assume it would be easier/simpler to just use file shares for everything? I am very new to both TrueNAS and Proxmox, so forgive me if this doesn't make sense or is a bad idea somehow. Please correct me if my line of thinking is wrong. Because I am new to Proxmox, TrueNAS, and setting up a home lab in general, I may not fully grasp all I may want to do, just from lack of experience, so that is a consideration as well.

If anyone has recommendations, I'd love to hear what people think I might want to change or upgrade based on my use case. Anything I could swap out to be more power efficient and/or more powerful for running TrueNAS? Features or functionality I'm missing out on by being on this older hardware that I'm not aware of? I was thinking that perhaps I could keep the LSI cards but go with a newer Supermicro motherboard with a single-CPU setup? Another consideration is whether I would want to change backplanes at all? I don't think there would be any reason to, other than fewer cables. Or perhaps it's not even worth touching, and I should have the current hardware just focus on file sharing? Eventually I would like to use more features within TrueNAS itself, such as snapshotting and replication, and I don't know if there would be any performance issues trying to do that while also handling Proxmox storage and file sharing.

Appreciate any feedback, thoughts, or pointing me to resources to help me out.
 

NPS

Active Member
Do you have 24 drives? How many do you want to use? And how much storage capacity do you want to use? Of course you could get better power efficiency with newer components, but basically this will stay a monster of a system unless you use far fewer drives and change more or less everything but the metal...
So what do you really want in terms of total disk capacity and performance? Do you want to run it 24/7?
 

Koop

Active Member
Do you have 24 drives?
I'm currently at 14 drives total. I think this will be a fine amount for a while to come.

How many do you want to use?
Right now just the number of drives I have available. I like the idea of being able to expand up to 24 if wanted in the future.

And how much storage capacity do you want to use?
Currently I am at 14 drives, 10TB each. I still have to do research on what my approach should be to understanding how much capacity I'll have at the end of the day and whether that's enough (it's enough for sure, though). For example, I know I could do two 6-wide Z2s and keep two drives as hot spares (I think? Correct me if I'm wrong here). Or I could potentially do two 7-wide Z2s. From my understanding, no matter what I do I should stick to RAIDZ2 for my vdevs due to the size of the disks I'm using, the reasoning being that even with two-drive resilience, the rebuild process for drives this size will take a long time, and the chance of an additional failure in that window is an important consideration. My data isn't all critical though; it's again just home lab stuff, media. If VMs die they won't be critical, just annoying to rebuild, if anything. The most important things would probably be pictures I wouldn't want to lose, which I could ensure I'm properly backing up, so I guess I could be more aggressive on layouts and not rely on hot spares.
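Napkin math on the two layouts I mentioned (raw parity math only; I know real usable space comes in lower after ZFS metadata, padding, and the usual advice not to fill past ~80%):

Code:
def raidz2_usable_tb(vdevs, width, drive_tb=10):
    # RAIDZ2 gives up 2 drives' worth of parity per vdev
    return vdevs * (width - 2) * drive_tb

print(raidz2_usable_tb(2, 6))   # two 6-wide Z2: 80 TB, 12 drives + 2 hot spares
print(raidz2_usable_tb(2, 7))   # two 7-wide Z2: 100 TB, all 14 drives, no spares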

So what do you really want in terms of total disk capacity and performance? Do you want to run it 24/7?
Truthfully I'm not sure in terms of total disk capacity, hence why I'm only at 14 drives. I'm going to assume I will be very well off for capacity, but since this is my first time expanding into a NAS like this, I'm not sure what my consumption will actually end up looking like. Starting at this number of disks, I'm hoping for flexibility one way or the other: to be able to add more, or take some away if it seems like overkill over time.

Performance isn't a major objective unless it causes noticeable delay. For example, I wouldn't want the NAS or disks to be an issue for video playback when pulling data for Plex to play a movie or listen to music. I don't think I would encounter such issues with the number of disks I'm working with, but I don't know for sure; again, newbie and all.

Running it 24/7 is something I hadn't thought about. Any idea what my options could be to have it not run 24/7? I would assume I'd actually have a lot of idle time, because I wouldn't need data access 24/7. Things that I would want running 24/7 would be things like my Plex server, which could live on the Proxmox machine. Of course, when Plex needs to fetch data off the NAS for large video files, I'd need to start spinning disks. Thoughts on achieving a middle ground?
 

nexox

Well-Known Member
As far as upgrades, it appears that's a standard-ish EATX board, which means the chassis should fit pretty much any other EATX/ATX/Micro ATX motherboard, and the X8 hardware is so old that almost anything you can find will be an improvement. X9 LGA 2011 socket stuff is quite cheap, and a single socket can provide as much processing power as both of the CPUs you have, at lower power (especially at idle). Beyond that socket you get into DDR4 and higher costs, but if you go new enough you can get Xeons based on consumer-grade CPUs that bring great power efficiency and quite a lot of processing power (plus potentially an integrated GPU that gets you great video transcoding performance).

You also don't really need three HBAs that take up 24 PCIe lanes, a single 8 port 12G SAS HBA plus an expander would provide plenty of bandwidth with a single x8 slot and lower overall power consumption (by a bit.)

I don't know exactly what variety of power supply that chassis uses, but Supermicro 80+ Platinum PSUs are pretty cheap used, and if you don't absolutely need to tolerate a PSU failure without downtime, pull the second one out a bit and leave it unplugged to save more power; if the active one does die, you can just plug in the spare and reboot.
 

itronin

Well-Known Member
So my next question is: does anyone know if it's possible for me to get a replacement top panel? I've been trying to search for "846 top panel" and such, but have had very little luck. The top panel was dropped some time ago right on the back corner, and even with as much skill as I could muster with my pliers, I can't really get it back into a shape that allows me to close it. The one time I did manage, it got stuck for some time, which was not ideal. Any recommendations or ideas on how I can replace the top panel?


Appreciate any feedback, thoughts, or pointing me to resources to help me out.
There is a chance that the top panel from an 826 or 836 may fit. I have a very badly damaged "control ear" 826 in the junk pile that I am probably not going to try to fix, and I have an 846 on the way to rebuild for a friend, so I will test it when it arrives. If it fits, solution found, and I'll post back here when I know.

Another path is to go to the Supermicro e-store and use the chat feature. *Sometimes* the e-store can pull things from Supermicro's build inventory and sell replacement parts; sometimes not. Sometimes they do a custom order through the warranty/repair department. I've experienced all of the above, so it's worth a check with them for that as well as for the rear 2-bay 2.5" hot-swap module you want to acquire. Especially at the Supermicro e-store, it's worth using the chat if an item is listed but has 0 quantity; the folks manning the chat have definitely pulled inventory from the build side to fulfill an order.
 

Koop

Active Member
X9 LGA 2011 socket stuff is quite cheap, and a single socket can provide as much processing power as both of the CPUs you have, at lower power (especially at idle). Beyond that socket you get into DDR4 and higher costs, but if you go new enough you can get Xeons based on consumer-grade CPUs that bring great power efficiency and quite a lot of processing power (plus potentially an integrated GPU that gets you great video transcoding performance).
Would you mind elaborating on options I should consider? I'm very unfamiliar with server-grade hardware. What are some examples of Xeons based on consumer-grade CPUs?

I'm thinking I'd really like to move to something single CPU.

a single 8 port 12G SAS HBA plus an expander
Any particular recommendations for specific HBAs that would fulfill this?

I don't know exactly what variety of power supply that chassis uses, but Supermicro 80+ Platinum PSUs are pretty cheap used, and if you don't absolutely need to tolerate a PSU failure without downtime, pull the second one out a bit and leave it unplugged to save more power; if the active one does die, you can just plug in the spare and reboot.
It came with 2x 1200W Gold PSUs. Right now only one is slotted in. I've actually already ordered a 920W PWS-920P-SQ. But yes, I don't plan to run multiple PSUs.

There is a chance that the top panel from an 826 or 836 may fit. I have a very badly damaged "control ear" 826 in the junk pile that I am probably not going to try to fix, and I have an 846 on the way to rebuild for a friend, so I will test it when it arrives. If it fits, solution found, and I'll post back here when I know.

Another path is to go to the Supermicro e-store and use the chat feature. *Sometimes* the e-store can pull things from Supermicro's build inventory and sell replacement parts; sometimes not. Sometimes they do a custom order through the warranty/repair department. I've experienced all of the above, so it's worth a check with them for that as well as for the rear 2-bay 2.5" hot-swap module you want to acquire. Especially at the Supermicro e-store, it's worth using the chat if an item is listed but has 0 quantity; the folks manning the chat have definitely pulled inventory from the build side to fulfill an order.
Good insight, thank you! Let me know what you find out. I'll try the estore chat as well, good idea.
 

mattventura

Active Member
As has already been said in the thread, if you're not running SSDs in those 24 bays, my recommendation would be to not run 3 HBAs, but instead run one HBA and an expander. You can get a discrete expander or a backplane with a built-in expander (BPN-SAS3-846EL1).

The AOC-S3008L is very cheap at this point, as is the 82885T standalone expander.
 

nexox

Well-Known Member
Would you mind elaborating on options I should consider? I'm very unfamiliar with server-grade hardware.
I'm mostly familiar with Supermicro/Intel server parts. For the big Xeons: X9 gets you LGA2011 (Xeon E5-[12]6xx and E5-[12]6xx v2) with DDR3 and PCIe 3, X10 is LGA2011-3 (v3 and v4) with DDR4 and PCIe 3, X11 is LGA3647 (Xeon Scalable first and second gen) with DDR4 and PCIe 3, and X12 is LGA4189 (Scalable Gen 3) with DDR4 and PCIe 4. The letter after the X number denotes the number of sockets; you want an S for single (so X10S). The next letter mostly indicates the socket (R for 2011, P for 3647 and 4189), the third letter is often form factor (there are a lot of proprietary layouts; you want to avoid, for example, U and W), and then there's a dash followed by more letters (occasionally numbers as well) that identify some features on the board (-F usually means it has IPMI, which is good; T often means on-board 10G; most of the other letters aren't very consistent).
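If it helps, here's that scheme as a toy decoder. The mappings are only the ones I described above, and real Supermicro SKUs have plenty of exceptions, so treat this as a mnemonic rather than a reference:

Code:
import re

SOCKETS     = {"S": "single socket", "D": "dual socket"}
SOCKET_TYPE = {"R": "LGA2011(-3)", "P": "LGA3647/LGA4189"}
FEATURES    = {"F": "IPMI", "T": "on-board 10G"}

def decode(model):                 # e.g. "X10SRL-F"
    m = re.match(r"X(\d+)([SD])([RP])(\w*)(?:-(\w+))?$", model)
    if not m:
        return "doesn't follow the common pattern"
    gen, sockets, stype, form, feats = m.groups()
    parts = [f"X{gen} generation", SOCKETS[sockets], SOCKET_TYPE[stype]]
    if form:
        parts.append(f"form/layout code '{form}'")
    for ch in feats or "":
        parts.append(FEATURES.get(ch, f"'{ch}' (varies)"))
    return ", ".join(parts)

print(decode("X10SRL-F"))
# X10 generation, single socket, LGA2011(-3), form/layout code 'L', IPMI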

The advantages of the big-socket Xeons are that they use RDIMMs, which are usually cheaper used than the UDIMMs used by consumer CPUs, and they mostly have more PCIe lanes, more memory channels, and are available with more cores. Downsides are higher power consumption, lower peak clock speeds, and often more-expensive CPU coolers.

What are some examples of Xeons based on consumer-grade CPUs?
Something like the Xeon E-2300 series on LGA1200, in something like an X12STH board. They have some pretty nice chips up to 8 cores and quite high clock speeds, but they are not cheap (I'm used to $10 for Skylake Scalable Xeons, so $700 for an E-2388G seems very steep).

I've actually already ordered a 920W PWS-920P-SQ.
It may not be quite enough if you run all 24 bays full of spinning drives, but the PWS-501P-1R is the same form factor, similarly quiet to the -SQ, and you're more likely to run it at >20% load, which is where power supplies operate with much better efficiency - I saved ~10W at idle switching from the 901P-SQ to the 501P-1R.
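To put rough numbers on that (the 100W idle figure below is a hypothetical stand-in for a system like yours, not a measurement):

Code:
idle_watts = 100          # hypothetical idle draw, drives mostly idle

for psu_watts in (920, 500):
    load = idle_watts / psu_watts
    print(f"{psu_watts}W PSU: {load:.0%} load at idle")

# 920W -> ~11% load, below the ~20% knee where efficiency falls off
# 500W -> ~20% load, right at the knee; that few percent of extra
# efficiency is roughly where my ~10W idle savings came from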
 

Koop

Active Member
I'm mostly familiar with Supermicro/Intel server parts. For the big Xeons: X9 gets you LGA2011 (Xeon E5-[12]6xx and E5-[12]6xx v2) with DDR3 and PCIe 3, X10 is LGA2011-3 (v3 and v4) with DDR4 and PCIe 3, X11 is LGA3647 (Xeon Scalable first and second gen) with DDR4 and PCIe 3, and X12 is LGA4189 (Scalable Gen 3) with DDR4 and PCIe 4. The letter after the X number denotes the number of sockets; you want an S for single (so X10S). The next letter mostly indicates the socket (R for 2011, P for 3647 and 4189), the third letter is often form factor (there are a lot of proprietary layouts; you want to avoid, for example, U and W), and then there's a dash followed by more letters (occasionally numbers as well) that identify some features on the board (-F usually means it has IPMI, which is good; T often means on-board 10G; most of the other letters aren't very consistent).
Personally I like the idea of sticking with Supermicro. It's a fun space for me to explore, as I have never owned my own enterprise/server-level gear. Thank you very much for the breakdown of the generations and hardware terminology. Thanks for mentioning IPMI as being good lol. Yeah, I'm familiar with general data center tech knowledge (I passed the EMC ISM over a decade ago, for what it's worth lol), but I do have major gaps in knowledge. I've never had to calculate power consumption costs or anything like that. I am very ignorant of what is considered good power consumption with server/enterprise gear, nor can I say I'm familiar with all the capabilities, especially older vs. newer tech. I was never the person making the purchase or selling.

I think the goal I'm looking for is finding a more powerful single-CPU and Supermicro board combo to go with, so I can have more fun and capability within TrueNAS for VMs and Docker stuff ("Apps"? It's just Docker, right?). I just don't know what the "sweet spot" is that I should be looking for: "Oh, look for this X10 board and this CPU, you'll have way less power consumption and X% more CPU power with a single CPU." Not trying to break the bank; just want to upgrade from really old to somewhat old. Would you happen to have any specific recommendations?

Something like the Xeon E-2300 series on LGA1200, in something like an X12STH board. They have some pretty nice chips up to 8 cores and quite high clock speeds, but they are not cheap (I'm used to $10 for Skylake Scalable Xeons, so $700 for an E-2388G seems very steep).
Thanks for the examples. I agree, that's not the kind of pricing I'm looking to spend; I'm over here looking for sub-$50 parts, after all. Hopefully you can get a feel for where I'm at and could suggest some Supermicro board recommendations? I'll continue to do my own research, of course, but I appreciate you sharing your knowledge; maybe you have some go-to "sweet spots"? haha

It may not be quite enough if you run all 24 bays full of spinning drives, but the PWS-501P-1R is the same form factor, similarly quiet to the -SQ, and you're more likely to run it at >20% load, which is where power supplies operate with much better efficiency - I saved ~10W at idle switching from the 901P-SQ to the 501P-1R.
Ah I see, so 500W. Thanks for the heads-up on this. Would I not be able to power all 24 bays even if I have two? I know it's designed for failover, but from my understanding it isn't fully on/off, but more like the main PSU at a high percentage and the failover at a low percentage? (Please correct me if I'm off base.) What is the proper way for me to gauge my system's power utilization, to know what PSU would or wouldn't be enough?

On the topic of power, what's the deal with the Supermicro power distribution boards? Obviously at the consumer level I'm used to much different connectivity, but I wasn't sure if that's just because the hardware I have is old? I've noticed in the system I have, I only have a single Molex available, since the rest of the power is being used by the backplane. There's one connection to the DVD drive (which is... a small 4-pin power connection?) and one 4-pin connection labeled PC. Are there different power distributors that have more connectivity for things like SATA power? Or is the idea that I would buy the appropriate parts/enclosures and use the still-available Molex power? I ask because obviously there's still a ton of direct SATA connections on the motherboard, but how would I power that many drives with what's left/available? I feel very silly asking, but maybe I just don't understand the server board design philosophy haha. I suppose they would be there to use if you were using this board in a different chassis?

I just see it as an opportunity to go beyond 24 drives using side mounting. I was thinking 2x mirrored SSDs for the boot drives, or potentially more SSDs for other things TrueNAS might be able to use (probably don't need it, though, from the reading I've done). Right now I'm doing all my drive burn-in testing off of a USB stick, which I obviously don't want to be the permanent solution.

Thank you so much for sharing your knowledge with me. It's a lot of fun learning all this, actually, even if the part numbers can get really confusing haha.
 

Koop

Active Member
As has already been said in the thread, if you're not running SSDs in those 24 bays, my recommendation would be to not run 3 HBAs, but instead run one HBA and an expander. You can get a discrete expander or a backplane with a built-in expander (BPN-SAS3-846EL1).

The AOC-S3008L is very cheap at this point, as is the 82885T standalone expander.
Yeah, I did not plan to run SSD storage at this point unless I was connecting it directly to the motherboard for boot drive(s). I'm used to running, at most, 2-3 drives for "deep" storage, so having so many spinning disks at my disposal will be enough for me, I think haha. My only goal was just to make a large-ass spinning pool. I can elaborate on disk choices and such if interested, but yeah, just spinning white-label 10TB drives across the board for me.

With that said, what would be the benefits and/or downsides of doing this? I see the AOC-S3008L-L8E for pretty cheap on eBay. I'm just not understanding why I would make the change when what I have works fine as-is? Just trying to understand why you recommend it.
 

itronin

Well-Known Member
Without putting words into @mattventura's mouth, I think they are trying to help you see a direction that minimizes cost and maximizes flexibility.

If you are going to use spinners in the LFF bays, then you are unlikely to ever exceed 2Gbps per drive, or roughly 48Gbps if all bays were populated (the typical transfer rate of spinners when coming off the platters and not cache). Compare that to a SAS3 controller at 12Gbps per channel...
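Rough numbers to make that concrete (the PCIe figure is approximate):

Code:
drives       = 24
spinner_gbps = 2                     # ~2 Gbps per drive off the platters

demand     = drives * spinner_gbps   # 48 Gbps with every bay streaming
hba_uplink = 8 * 12                  # 8 SAS3 lanes at 12 Gbps = 96 Gbps
pcie3_x8   = 63                      # PCIe 3.0 x8 ~= 63 Gbps usable

print(f"demand {demand} Gbps vs host ceiling {min(hba_uplink, pcie3_x8)} Gbps")
# 48 < 63 < 96: the drives bottleneck first, so one x8 SAS3 HBA behind
# an expander still leaves headroom for all 24 spinners.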

More HBAs means higher operational cost (power) and more heat generated inside the chassis (higher fan speeds, so again more power).

TQ cabling can be very flexible, and having different sets of bays on different controllers can be very useful (for multiple use cases), but if your primary use case will be all large-cap spinners - ergo, a large pool to hold your data - then multiple HBAs will burn slots on your mainboard that maybe you won't have (if you use mATX), or that could be put to better use: NVMe, high-speed networking (if not on the mainboard), etc. Maybe even use cases you have not thought of or have not hit yet in your journey through this process.

FWIW, people either love or hate TQ backplanes. Getting them cabled up can be tedious: the more bays you have, the more ports and therefore more cables you need. NB: I'm just making an observation. I personally like TQ backplanes; I have them in half my 836s. I also use a -16i HBA so I don't have to futz with more than one HBA or put an expander mid-span.

RE the DVD-slot-to-2.5" thingy: I can't remember whether it takes the floppy-drive-style power connector (what you described) or needs a Molex. I think Molex, so you'll have to figure out how to get one back there to that bay.

adding additional 2.5" SSDs:

Depending on the age of your CSE-846, there may be some "studs" on the power supply wall. SM makes a bracket which attaches to the studs and allows you to mount 2x 2.5" drives there. Obviously not hot swap, but it can be handy. That bracket is also cheaper than the fancy hot-swap DVD slot thingy, and if you mirror a couple of low-cost SSDs for boot, maybe you can afford the downtime to swap one if it fails.

boot drive mirrors.

Depending on the OS you run (TNC, for example) and depending on the motherboard, you may be able to take advantage of SATA DOMs - what Supermicro calls SuperDOM. If your Supermicro motherboard (say, most X10 and X11 E5/Scalable boards) supports them, you may observe a pair of yellow/gold SATA ports; those have integrated power for the right kind of SATA DOM/SuperDOM. Again, not really hot swappable, but they do not take drive bays (more flexible), and if your OS is TNC then you do not need a lot of storage for boot. The DOMs come in sizes from 16GB to 256GB depending on the manufacturer, and can be had anywhere from $10 to $60 USD for reasonable sizes (32GB-256GB); 32-64GB is a good size for TNC, and really price tends to dictate what you will get. Example: there are some 256GB SATA DOMs on the bay that can be powered from the orange ports for $55 USD.
Note: not all SATA DOMs have the integrated power connections, and there are at least two different kinds of integrated power connections. Supermicro uses the type with the metal connectors on the side of the physical connector (you can see it in the pics in the links above).

why am I pointing these out? Flexibility in configuration, maximizing your capabilities.

Getting an 846 as a gift is pretty generous, as they have become hard to acquire and can command a nice price on the used market. Enjoy!
 

nexox

Well-Known Member
One reason to go to the single HBA + expander setup is to save yourself a bunch of PCIe lanes, which would make one of those E series Xeons with fewer available lanes a more viable option. Plus you'd likely save some power; the expander probably uses about as much as a single HBA.

As far as power connectors, they don't really expect you to expand much. I'd look at M.2 drives on a PCIe card, though you'll need at least X10-era hardware to get the bifurcation that allows you to use the cheap passive adapters for this. Bifurcation on the E series Xeons is more limited, but I don't know the generation details. Either way, you want to fit 22110-size drives to get compatibility with most enterprise-grade NVMe, or perhaps look at U.2 adapter cards, assuming your chassis is one that fits full-height cards.

I would personally be looking at X10 boards and v4 Xeons in your case, but if you find a suitable X11 board for the same price then it won't cost any more to fill it with a CPU and memory. For idle power consumption, generally fewer cores is better, and mostly each generation gets a little more efficient, but sometimes that's offset by added features. Every motherboard feature and card and even fan will add a few watts, which is why people often avoid 10GBaseT ports on the motherboard; it's more efficient to get a cheap NIC with SFP+ ports.
 

Koop

Active Member
Without putting words into @mattventura's mouth, I think they are trying to help you see a direction that minimizes cost and maximizes flexibility.

If you are going to use spinners in the LFF bays, then you are unlikely to ever exceed 2Gbps per drive, or roughly 48Gbps if all bays were populated (the typical transfer rate of spinners when coming off the platters and not cache). Compare that to a SAS3 controller at 12Gbps per channel...

More HBAs means higher operational cost (power) and more heat generated inside the chassis (higher fan speeds, so again more power).

TQ cabling can be very flexible, and having different sets of bays on different controllers can be very useful (for multiple use cases), but if your primary use case will be all large-cap spinners - ergo, a large pool to hold your data - then multiple HBAs will burn slots on your mainboard that maybe you won't have (if you use mATX), or that could be put to better use: NVMe, high-speed networking (if not on the mainboard), etc. Maybe even use cases you have not thought of or have not hit yet in your journey through this process.
Ok, that makes complete sense now; I totally get it. Especially if I were to go down to a single-CPU board and lose a bunch of lanes, and thus have fewer slots. This is something I should probably do, then, because it just makes sense. However, is there some combination of a single HBA + backplane I could use to maintain full bandwidth to all drive slots? Like, for example, if I wanted to go all-flash suddenly with my build, what would be the go-to HBA to use in that circumstance? A different backplane too? Just curious what the cost of those parts is like.

RE the DVD-slot-to-2.5" thingy: I can't remember whether it takes the floppy-drive-style power connector (what you described) or needs a Molex. I think Molex, so you'll have to figure out how to get one back there to that bay.
When you google the MCP-220-84606-0N part, it shows a Molex connection on the board. I swear I saw one, though, that looked like it used the same power connection as the DVD drive, but maybe I'm mistaken. There actually is a single Molex connection left that can easily reach. It would be helpful if I could reuse the DVD drive's power delivery for an SSD via that board, and then still use the remaining Molex connection to power a few side-mounted drives, since those are the only power leads I have left at my disposal. So let's say I can find the part to replace the DVD drive and use two sets of the MCP-220-84603-0N part (I believe there are two spots for them); that means technically I could have 6 more drives. But I only have a single Molex at my disposal? Would the expectation be to split that out to everything? Or am I being unrealistic with how many drives I can leverage internally?

adding additional 2.5" SSDs:

Depending on the age of your CSE-846, there may be some "studs" on the power supply wall. SM makes a bracket which attaches to the studs and allows you to mount 2x 2.5" drives there. Obviously not hot swap, but it can be handy. That bracket is also cheaper than the fancy hot-swap DVD slot thingy, and if you mirror a couple of low-cost SSDs for boot, maybe you can afford the downtime to swap one if it fails.
Yeah, that's the MCP-220-84603-0N part that bolts on. Again, though, my concern is how I would power the drives with the limited power I have left. Also, yeah, it's not like I'd need boot drives to be hot swappable, but my thought process was along the lines of using the SATA motherboard connections with enough SSDs to have a small SSD-only pool. It's just that it doesn't seem like there's enough connectivity coming out of that power distributor to be able to do that.

boot drive mirrors.

Depending on the OS you run (TNC, for example) and depending on the motherboard, you may be able to take advantage of SATA DOMs - what Supermicro calls SuperDOM. If your Supermicro motherboard (say, most X10 and X11 E5/Scalable boards) supports them, you may observe a pair of yellow/gold SATA ports; those have integrated power for the right kind of SATA DOM/SuperDOM. Again, not really hot swappable, but they do not take drive bays (more flexible), and if your OS is TNC then you do not need a lot of storage for boot. The DOMs come in sizes from 16GB to 256GB depending on the manufacturer, and can be had anywhere from $10 to $60 USD for reasonable sizes (32GB-256GB); 32-64GB is a good size for TNC, and really price tends to dictate what you will get. Example: there are some 256GB SATA DOMs on the bay that can be powered from the orange ports for $55 USD.
Note: not all SATA DOMs have the integrated power connections, and there are at least two different kinds of integrated power connections. Supermicro uses the type with the metal connectors on the side of the physical connector (you can see it in the pics in the links above).

why am I pointing these out? Flexibility in configuration, maximizing your capabilities.

Getting an 846 as a gift is pretty generous, as they have become hard to acquire and can command a nice price on the used market. Enjoy!
Oooo, that's extremely helpful info, thank you! I had never heard of SATA DOMs/SuperDOM before. I'll keep this in mind as I look through X10/X11 boards. Also yes, it was extremely generous; the guy who let me have it is an awesome dude. You can see I want to make good on his gesture by learning as much as I can and showing that his gift was really put to good use. It's already sparked me to learn so much, and I've really enjoyed learning about Supermicro hardware. There's just a lot to take in when you're new to server/enterprise-level hardware and then have to backtrack to hardware from 10+ years ago haha.
 

Koop

Active Member
One reason to go to the single HBA + expander setup is to save yourself a bunch of PCIe lanes, which would make one of those E series Xeons with fewer available lanes a more viable option. Plus you'd likely save some power; the expander probably uses about as much as a single HBA.
Right, totally on the same page for why this makes sense now. I suppose I still pose the same question: if I were to get an X10 or X11 board, is there a better HBA + backplane combo I could use?

As far as power connectors, they don't really expect you to expand much. I'd look at M.2 drives on a PCIe card, though you'll need at least X10-era hardware to get the bifurcation that allows you to use the cheap passive adapters for this. Bifurcation on the E series Xeons is more limited, but I don't know the generation details. Either way, you want to fit 22110-size drives to get compatibility with most enterprise-grade NVMe, or perhaps look at U.2 adapter cards, assuming your chassis is one that fits full-height cards.
Yeah, it can absolutely fit full-height cards. This makes me want to definitely do more research into a new board/CPU/memory combo so I can have these expanded capabilities.

I would personally be looking at X10 boards and v4 Xeons in your case, but if you find a suitable X11 board for the same price then it won't cost any more to fill it with a CPU and memory. For idle power consumption, generally fewer cores is better, and mostly each generation gets a little more efficient, but sometimes that's offset by added features. Every motherboard feature and card and even fan will add a few watts, which is why people often avoid 10GBaseT ports on the motherboard; it's more efficient to get a cheap NIC with SFP+ ports.
Could you elaborate on why X10 with a v4 Xeon in particular? Just looking to understand why. What would you consider worth looking at on X11?

Also, thank you for the insight on the 10GbE connectivity; I had not thought about how it could be more efficient to use a NIC rather than looking for a motherboard with 10GbE.

Again, I appreciate yours and everyone else's input here; it's been very helpful.
 

nexox

Well-Known Member
Could you elaborate on why X10 with a v4 Xeon in particular? Just looking to understand why. What would you consider worth looking at on X11?
Generally X9 (with a few exceptions) is too old for bifurcation and NVMe boot, the IPMI remote view requires ancient and broken Java with ancient broken HTTPS, plus power efficiency is worse and the maximum CPU performance is lower. X10 supports v3 and v4 CPUs, but v4 is more efficient and quite inexpensive, so there's no reason to bother with v3. X11 gets you 8 more PCIe lanes per socket and two more memory channels, and some CPUs are rather inexpensive (the 6132, for example), but the boards usually cost more, and power consumption didn't really get better thanks to the added stuff. On the upside, X11 boards have more of a future with the Cascade Lake / Scalable Gen 2 CPUs, which, if you get the proper board, also support Optane NVDIMMs; those can be quite cheap since they're tied to specific generations of Xeon.

I personally just went through this decision tree twice and ended up maybe saving money, maybe giving myself a series of headaches, by landing on two X11DPL boards, which are absolutely full of compromises so they can fit two quite-large LGA3647 sockets into a standard ATX board.

And some rare boards do come with SFP+ ports, but more commonly they're just the higher-power-consumption 10GBaseT ports for CAT6.
 

nexox

Well-Known Member
Realized I forgot to answer the question about power supplies: going dual 500W will indeed get you more capacity, but at that point you're almost certainly better off just plugging in the 920W PSU. Once you get to about 200W of load, it should be similarly efficient to the 500W; it's just when you're under 20% load that they differ substantially.
 

Koop

Active Member
Generally X9 (with a few exceptions) is too old for bifurcation and NVMe boot, the IPMI remote view requires ancient and broken Java with ancient broken HTTPS, plus power efficiency is worse and the maximum CPU performance is lower. X10 supports v3 and v4 CPUs, but v4 is more efficient and quite inexpensive, so there's no reason to bother with v3. X11 gets you 8 more PCIe lanes per socket and two more memory channels, and some CPUs are rather inexpensive (the 6132, for example), but the boards usually cost more, and power consumption didn't really get better thanks to the added stuff. On the upside, X11 boards have more of a future with the Cascade Lake / Scalable Gen 2 CPUs, which, if you get the proper board, also support Optane NVDIMMs; those can be quite cheap since they're tied to specific generations of Xeon.

I personally just went through this decision tree twice and ended up maybe saving money, maybe giving myself a series of headaches, by landing on two X11DPL boards, which are absolutely full of compromises so they can fit two quite-large LGA3647 sockets into a standard ATX board.

And some rare boards do come with SFP+ ports, but more commonly they're just the higher-power-consumption 10GBaseT ports for CAT6.
Again, really appreciate the insight.

So what do you think of the X10SRL-F as an X10 board choice? Anything I'm missing? It looks really good and is fairly inexpensive. Is there a comparable X10 with additional features, or an X11 I should be looking at in comparison to make a decision? I saw that I could use the SuperDOMs mentioned by @itronin, which excited me as a way to save space.

CPU for that X10 could be... an E5-1650 v4? Or perhaps the E5-1630 v4 or the E5-1680 v4? Thoughts? I looked up each on eBay and they are all dirt cheap. Seems like it's mostly a question of whether I want more or fewer cores and if it's worth it?

I skimmed through X11 boards, but they jumped in price quite a bit when comparing to the X10SRL-F specifically. The only major loss I can see is no PCIe 4, which I would use for... not sure lol (and searching through the Supermicro server boards I couldn't seem to find one with PCIe 4).
 

itronin

Well-Known Member
overall:

FWIW, part of "the game" is trying to maximize everything everywhere all at once (ha!) while reducing opex and capex - that's tough to do. Really it's a game of compromise. If you don't plan your design around some use case, you may find that you've spent more in places you didn't need to and not enough in places you did.

you can always upgrade down the road too...

However, is there some combination of a single HBA + backplane I could use to maintain full bandwidth to all drive slots? Like, for example, if I wanted to go all-flash suddenly with my build, what would be the go-to HBA to use in that circumstance? A different backplane too? Just curious what the cost of those parts is like.
There is the LSI 9305-24i. However, you will run out of PCIe slot bandwidth first; it's an x8 card, and I suspect it can't actually sustain 288Gbps anyway. My 2 cents: it would be a waste of a chassis to try and fill it up with high-cap SSDs. You'd be better off selling it and buying a CSE-216 plus motherboard, and maybe CPUs, for what the 846 would likely bring in. Also, do you have use cases that could actually consume that amount of IO? I'm biased towards the LSI chipsets. IIRC there are Adaptec SAS3 HBAs, and I think a -24i too, but read up on Adaptec and TrueNAS before you buy if you are going to run TrueNAS. If you were going SATA, or don't mind mixing and matching SAS and SATA on your backplane, then a single -16i and the right motherboard will work for ya.

So let's say I can find the part to replace the DVD drive and use two sets of the MCP-220-84603-0N part (I believe there are two spots for them); that means technically I could have 6 more drives. But I only have a single Molex at my disposal? Would the expectation be to split that out to everything? Or am I being unrealistic with how many drives I can leverage internally?
So if that is in fact a floppy connector, then you could buy a floppy-to-Molex adapter cable, plus Molex splitters and Molex extension cables. I don't think you are being unrealistic; just apply common sense. 2-4 2.5" drives off a single Molex? Absolutely. People run 4-5 LFF spinners off a single Molex.

The rear 2-bay hot-swap module for the 846 seems to be in stock at some places in the US (Google search), so it is available; it seems to be about $96-100 USD new.

It's just that it doesn't seem like there's enough connectivity coming out of that power distributor to be able to do that.
Molex splitters and extensions. The 920SQ or 501 is nice and quiet if you're just doing some spinners, but if you max out the spinners on a dual-proc motherboard, you'll want the power headroom for HBAs, NVMe, SFP or QSFP NICs, maybe a slot-powered GPU, etc.

Oooo, that's extremely helpful info, thank you! I had never heard of SATA DOMs/SuperDOM before. I'll keep this in mind as I look through X10/X11
Right tool for the job if you do not need a lot of storage for your boot devices.

Personally, I think X11 is the way to go. There's a lot of life left in the X10 series, and I have more than a few X10 boards, but from an operational standpoint the same is very true (if not more so) of the X11, and they are getting very near price-comparable.

There are some good deals on the bay (US) for the X11DPH-T: 2 SuperDOM ports, 2 M.2 NVMe sockets, dual Scalable. Some deals come with procs (though not the best ones, and they don't support Optane PMem) and passive heatsinks. With passive heatsinks, though, you'd want to make sure you have the air dam for the 846, and you'd be relying on the chassis fans to cool the procs, so they will likely spin faster (louder). No air dam, and you want active-cooling heatsinks, probably the largest (tallest) you can get; SM makes one that's 4U-compatible and reasonably quiet as well.

Lastly, once you have a plan and a bill of materials, please take advantage of the WTB forum topic, as folks may have some of what you need - likely used, likely cared for, and working, so you know what you are getting.
 

itronin

Well-Known Member
Again, really appreciate the insight.

So what do you think of the X10SRL-F as an X10 board choice?
I have a number of these boards; it's my "go-to" board, possibly my favorite Supermicro server board. I have not seen an identically comparable X11. There is one that is close, but I have not seen it at an affordable price, and if memory serves there was some funk with the lane assignments that I didn't like.

CPU for that X10 could be... an E5-1650 v4? Or perhaps the E5-1630 v4 or the E5-1680 v4? Thoughts? I looked up each on eBay and they are all dirt cheap. Seems like it's mostly a question of whether I want more or fewer cores and if it's worth it?
Again: what's your use case? Your question of more cores (lower top-end frequency) vs. fewer cores (higher top-end frequency) is the quintessential question, and your use case or use cases can help you answer it.

I personally think the E5-2680 v4 is probably still the best bang for the buck for multi-core. For single-core or all-core turbo performance, probably the E5-2667 v4. E5-26xx procs work just fine in the X10SRL-F and may be cheaper/easier to source than the equivalent E5-16xx proc.

I skimmed through X11 boards, but they jumped in price quite a bit when comparing to the X10SRL-F specifically. The only major loss I can see is no PCIe 4, which I would use for... not sure lol
X11 will eke out a little more CPU performance because of the newer CPUs, plus a bit more memory performance: faster memory clock speeds and more memory channels at the end of the day. Whether you can take advantage of that depends on - yeah, gonna say it again - use case!
 

Koop

Active Member
I have a number of these boards; it's my "go-to" board, possibly my favorite Supermicro server board. I have not seen an identically comparable X11. There is one that is close, but I have not seen it at an affordable price, and if memory serves there was some funk with the lane assignments that I didn't like.
Just out of curiosity, what is the X11 board you're referring to? I'm just honestly curious if it's the one I was looking at haha.

Again: what's your use case? Your question of more cores (lower top-end frequency) vs. fewer cores (higher top-end frequency) is the quintessential question, and your use case or use cases can help you answer it.

I personally think the E5-2680 v4 is probably still the best bang for the buck for multi-core. For single-core or all-core turbo performance, probably the E5-2667 v4. E5-26xx procs work just fine in the X10SRL-F and may be cheaper/easier to source than the equivalent E5-16xx proc.
My original use case was to focus primarily on SMB and NFS file sharing with TrueNAS. I was only considering it a bonus to be able to play around with VMs and apps in Scale. But when I say the primary focus is file sharing, it's only going to be to a small number of clients within my home lab. So maybe more cores, to be able to mess with VMs and stuff within Scale, makes more sense? I feel like file sharing would mean prioritizing single-core performance. Perhaps a middle ground so I can enjoy both?


X11 will eke out a little more CPU performance because of the newer CPUs, plus a bit more memory performance: faster memory clock speeds and more memory channels at the end of the day. Whether you can take advantage of that depends on - yeah, gonna say it again - use case!
It's so tough to lock down a use case other than "I want to do all the things in TrueNAS" haha. What do you think with that being the focus?

Also, I did look at a lot of X11 boards, and I see the one you pointed out is dual CPU; I hadn't considered it since I was only looking at single-CPU boards. From my understanding, TrueNAS really doesn't benefit from having multiple CPUs, right? Just better to have a shit ton of memory?

With this being dedicated to my very loose "have fun with TrueNAS" use case, what is your opinion? haha.

Again, y'all have been an amazing help. Thank you. And for the record, I've legit been googling so many Supermicro parts that I've completely lost track of all sense of time.

I also called Supermicro about the chassis and parts, btw. The support tech said he actually saw that the SSD slot that replaces the DVD drive was "in this inventory," but that everything I was asking about was end-of-life crap - he specifically said "before my time" lol. But I now have an email thread going with them in the hopes that maybe I can get some parts. A new top panel would really be the best outcome if I could work it out somehow.