Supermicro 4U 24 bay 846 chassis SAS2 with rails/motherboard - $300

Fleat

New Member
Feb 6, 2016
12
2
3
What was it that made you switch?
These are the reasons that I personally switched. ZFS is a great solution and it may fit your needs better, but this is what motivated me to move on. Incoming wall of text, so skip it if you are just not interested in my personal reasoning.

It is much easier to add additional drives to an Unraid array. If I wanted to remain fault tolerant on my ZFS setup, I have to create an entirely new vdev and then add it to the zpool. This means at least adding two drives in a mirror, but usually meant adding three drives in my case as I was using RAID-Z1. With Unraid, I just throw another drive in, preclear it, and add it to the array whenever I want. It is a bit more complicated if the new drive is bigger than the parity drive(s), but still easy to do.
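To put rough numbers on that expansion difference, here's a minimal sketch. The drive size (8 TB), vdev width (3), and function names are my assumptions for illustration, not from the post:

```python
# Illustrative arithmetic only: usable space gained per expansion step.
# Drive size (8 TB) and vdev width (3) are assumed for the example.

def unraid_usable_gain(drives_added, drive_tb):
    # Unraid: each new data drive contributes its full capacity,
    # provided it is no larger than the parity drive(s).
    return drives_added * drive_tb

def raidz1_vdev_usable_gain(vdev_width, drive_tb):
    # ZFS: fault-tolerant growth means adding a whole new vdev;
    # a RAIDZ1 vdev gives up one drive's worth of space to parity.
    return (vdev_width - 1) * drive_tb

print(unraid_usable_gain(1, 8))       # one drive bought, 8 TB usable gained
print(raidz1_vdev_usable_gain(3, 8))  # three drives bought, 16 TB usable gained
```

The point being: the minimum purchase to grow an Unraid array is one drive, while growing a ZFS pool fault-tolerantly is a multiple of the vdev width.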

My hardware build for Unraid was much cheaper than my build for ZFS. My ZFS build had a server motherboard, ECC memory, the "recommended" 1GB of memory for every 1TB of hard drive space, and a pricey drive for the ZFS ZIL, since it gets thrashed. I ended up going somewhat over the top with my Unraid build, but it isn't necessary the way people claim it is with ZFS.

Unforeseen circumstances led to much more churn on my vdevs than I would have liked. I started my ZFS build with three RAID-Z1 vdevs of three Seagate 3TB drives each. Those of you who are familiar with the Seagate 3TB debacle will understand that I was constantly rebuilding those RAIDZs with replacement or new drives.

I even lost a couple of these RAIDZs over time when a second drive would fail while the array was rebuilding. When that happens with ZFS, recovery of the remaining data becomes complicated at best. If you were to lose the 2 parity drives plus another drive in Unraid, all of the data on the remaining drives would still be easily accessible. I eventually started doing 5-drive RAIDZ2s, which helps reliability but makes expanding your pool that much more expensive.
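The reliability-versus-cost tradeoff in that last sentence is easy to sketch. A quick calculation, assuming the 3 TB drives from the post (the helper function is hypothetical):

```python
# Capacity efficiency vs. failures survived for the two layouts mentioned.

def raidz_usable_tb(width, parity, drive_tb=3):
    # A RAIDZ vdev sacrifices `parity` drives' worth of raw space.
    return (width - parity) * drive_tb

# 3-wide RAIDZ1: survives 1 drive failure per vdev
print(raidz_usable_tb(3, 1), "TB usable of", 3 * 3, "TB raw")  # 6 of 9

# 5-wide RAIDZ2: survives 2 drive failures per vdev,
# but each expansion now costs 5 drives instead of 3
print(raidz_usable_tb(5, 2), "TB usable of", 5 * 3, "TB raw")  # 9 of 15
```

So the 5-wide RAIDZ2 is actually slightly less space-efficient (60% vs 67%) on top of the bigger per-expansion outlay; what you're buying is the second failure's worth of protection.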

Unraid has evolved significantly since I first started using it. It is a fairly good turnkey "homelab" at this point. It has solid community support for Docker containers and virtual machines. I am running a dozen or so Docker containers, a few VMs, and a dedicated HTPC VM with GPU passthrough for my theater. All of this is of course possible with other options like FreeNAS, OpenSolaris, OpenMediaVault, or any Linux distro with ZFS on Linux. This will make me sound like a filthy casual, but the ease of use of these features on Unraid is paramount. I deal with enough complication at work, and sometimes it is nice to have things that just work without a significant amount of tinkering.

Last but not least, I find that the Unraid community is far more approachable. The ZFS community can be quite toxic at times. I have never personally been the victim of it, but I have read hundreds of threads where ZFS zealots have made me want to do anything but use ZFS.

Same here, I'm on ZFS now.

Was on Unraid for years, but it got messy; it seemed like it was ALWAYS doing parity checks. I had a 20 disk array.

Swapped to Server 2012 Storage Spaces and lost the entire array to a silent double disk failure: no indication of failing drives, then one day the array wouldn't start and it reported being two drives short (this happened after a major Server 2012 update).

Now I'm on a poorly constructed ZFS on Linux array (8 × 3TB drives; I should have gone with 2 arrays of 4, I think). I really like the ability to see what is happening with ZFS. I like that I can get disk and data information without looking in a lot of different places.
I am not sure I follow what you are getting at with the parity checks. With my ZFS setup, I was doing weekly scrubs of each vdev, and on Unraid I am doing a monthly parity check for the entire array. You can customize the frequency with which these occur in both products.
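On the ZFS on Linux side, that weekly scrub cadence is typically just a cron entry; a sketch, assuming a pool named `tank` (the pool name and the schedule are made up for illustration):

```shell
# /etc/cron.d/zfs-scrub -- weekly scrub, Sundays at 02:00 (pool name assumed)
0 2 * * 0  root  /sbin/zpool scrub tank
```

Unraid's monthly parity check is scheduled from its web UI rather than from cron.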
 

talsit

Member
Aug 8, 2013
112
20
18
I never figured it out. I had everything set and it ran for years, through upgrades, through major revisions; then in the span of a few months it was constantly doing parity checks.

It's good software that I would go back to, but I'm enjoying ZFS for the time being.
 

SirCrest

New Member
Sep 5, 2016
18
20
3
30
Florida
Well damn. I just bought a SAS2 expander from him for $279 + shipping yesterday. I did it because I already have a fully loaded 24-bay Supermicro 846 with a SAS1 backplane, and all the other expanders keep going for $600+. Still, I didn't want to buy another chassis just to take out the expander and then sell it again.
 

fake-name

Active Member
Feb 28, 2017
150
113
43
70
Mine came today, it seems to have been packed OK, though the box got beat to hell.

They packed it with shock-sensor stickers, and all three were ruptured by UPS's delicate handling.

The server itself seems fine, I won't be able to do any testing on it until tomorrow evening. Either it has 2 CPUs, or the second heatsink is just mounted to an empty socket. It did come with two heatsinks.

It came with a SAS cable, which makes me happy (I wasn't sure if I'd need to buy one). It also has a slimline CD drive!

Sorry about some of the pictures being potato quality. Blasted phone.
 


i386

Well-Known Member
Mar 18, 2016
2,111
556
113
31
Germany
It looks like a pre-"B revision" chassis with the older PSUs, so it's too loud to sit next to for hours.
 

Kneelbeforezod

Active Member
Sep 4, 2015
528
121
43
42
So the PSUs would have to be swapped for the quiet models, I assume, to lower the noise. Home-use thinking here, but it would likely sit in my main area with the rest of the computers. I have a large great room where we spend most of our time.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,053
1,598
113
CA
They're not exactly quiet even with 'SQ' power supplies; I wouldn't want one in my living room.
 


BLinux

cat lover server enthusiast
Jul 7, 2016
2,539
979
113
artofserver.com
I don't know about the different revisions of the 846 chassis, but I'm in the middle of building a couple of them. Assuming the PSUs are swapped for the "SQ" series, the noise level is "acceptable" to me, but you will still hear it in a 16ft x 20ft room. To quantify it better, using a sound meter app on my phone, I measured about 43-45 dBA at 12" in front of the server and 48-50 dBA at 12" behind it.

However, it depends also on how you drive the fans. My dBA numbers are based on driving the fans via PWM on the motherboard at idle. Under load, they get louder. There's a subjective part of how sensitive an individual is to background noise. This type of noise is okay with me as white noise and I can sleep in a room with it. I would not want to watch a movie with it in the room though.

There are also measures one can take to reduce the noise level, such as swapping fans and other modifications.
 

fake-name

Active Member
Feb 28, 2017
150
113
43
70
I got mine booting. It came with **TWO** L5630s, which is nice. I threw two random ECC sticks I had lying around in it.

I'm going to pull the mobo and replace it with an X10SL7-F + E3-1231 v3 from my current NAS. I mostly bought the case because my current NAS case is a shitty ultra-budget 4U case with no hot-swap bays at all, and it's basically unmaintainable (if a drive fails, I'm hosed).

Noise-wise, it isn't that bad. I wouldn't want to sit next to it, but if I put it in my closet, it'll be fine. Power supplies are both PWS-1K21P-1R modules.
 

Lev

New Member
Sep 18, 2015
2
0
1
Since this thread is spiraling totally off topic, why not jump in...

UnRaid I/O performance works exactly as it's described and intended; it always has. Calling UnRaid's I/O performance garbage when it's performing as expected is just... not right.
 

jwegman

Active Member
Mar 6, 2016
144
65
28
46
Since this thread is spiraling totally off topic, why not jump in...

UnRaid I/O performance works exactly as it's described and intended; it always has. Calling UnRaid's I/O performance garbage when it's performing as expected is just... not right.
Since you micro-necro'd this thread for further Unraid input, I'll toss mine in as well... The single-spinning-disk-centric performance of Unraid (regardless of the number of drives in the array) is rather pathetic compared with a traditional striped RAID array. For those unaware: when accessing an Unraid array (one without a dedicated SSD cache drive), you'll *never* achieve higher read/write speeds than the single disk holding the relevant data is capable of.

Full disclosure: I run two Unraid systems in the home laboratory, a primary and a backup mirror. I do often lament the lack of I/O performance, specifically when I need to transfer large amounts of data between the two (over a 10Gb link). However, I *do* enjoy the VM and Docker management usability of Unraid... Too bad they are not considering incorporating LXC 2.0, as that would be much preferable to me over Docker as the Linux container solution...
Full disclosure; I run two Unraid systems in the home laboratory; a primary and backup mirror. I do often lament the lack of IO performance specifically if I'm needing to transfer large amounts between the two (on a 10GB link). However I *do* enjoy the VM and Docker management usability of Unraid... Too bad they are not considering incorporating LXC 2.0 as that would be much more preferable to me than Docker as the Linux container solution...