Storage Server Build Planning - Feedback Appreciated


Pirivan

New Member
Jun 1, 2017
First, let me say thanks in advance; I really appreciate any feedback you might have. My apologies for not fitting the build thread guidelines; I don't have my 'plan' formulated well enough yet to meet the requirements. I am really in the 'planning' phase, trying to determine what the best solution is.

Background: I have an existing Norco 4020 (original version) with an Areca 1880i, an old-school HP SAS expander, and an older Intel motherboard/CPU with Hitachi 2TB drives. I purchased 7x new 8TB WD Red drives to build a larger array, only to find, to my dismay, that they do not physically fit into all the slots of my Norco (long story short, I think the case was poorly constructed and they will only fit into the top 4 slots).

So, given that I have an aging system with an issue, I decided it is time to explore some new options. My parameters are loosely as follows:
1. Support 10Gbps Ethernet in the future. It doesn't necessarily need to be built in, but it needs to be 'upgradeable' to 10Gbps.
2. I need to be able to connect backup enclosures that go offsite, with at least 10 bays across two enclosures. Currently I use 2x 5-bay AMS eSATA backup enclosures that have worked great connecting via a PCI-E x1 card. I would love to re-use these, but I am open to replacing them with USB 3.0 enclosures IF the per-enclosure price is reasonable.
3. It needs to be quiet, as far as a storage server is concerned. I need to be able to make it quiet with appropriate fans if it is not quiet via the stock cooling.
4. Performance isn't a huge concern given that this will primarily just be a storage server, but with 7x 8TB drives to start with I'd like rebuild times to be within reason (rough math after this list). Mostly it will just stream files, but I could see transcoding from 4K to 1080p being required in the future for certain devices until all the clients can handle 4K streaming.
5. Price is a factor, of course, and I am aiming to keep it under $1400 or so. Given the price of good RAID controllers generally, I was planning to re-use my 1880i to save money.
6. Hot-swappable drive bays. I wrestled with the idea of tricking out a Fractal R5 or something, but ultimately it just did not feel like the best way to meet my needs.
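For what it's worth, here is the back-of-envelope math behind point 4. The ~160 MB/s sustained throughput is just my assumption for an 8TB Red, and a real rebuild under load will be slower, so treat this as a best case:

    # Rough lower bound for a RAID6 rebuild of a single 8TB drive: the array has
    # to rewrite the entire replacement disk, so time >= capacity / write speed.
    DRIVE_TB = 8
    ASSUMED_MB_PER_S = 160   # assumed sustained throughput; not a measured number

    seconds = DRIVE_TB * 1e12 / (ASSUMED_MB_PER_S * 1e6)
    print(f"Best-case rebuild: ~{seconds / 3600:.0f} hours")   # ~14 hours, before any other load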

After some initial research, here are some solutions that met my criteria. I am interested to hear feedback on these to see if I am missing something important and/or if there are better solutions I should be aware of.

Solution 1: Entirely New Server Build
---------------------------------------------------------
The plan here would be to purchase a new Norco 4224 and fill it with brand new parts. Here is an EXAMPLE of some parts I located to get a rough cost estimate (I am not married to these or convinced that they are the best):

1. Replacement 120mm fans ($25.95 x3): Noctua NF-F12 iPPC-3000
2. Replacement 80mm fans ($5.40x2): MASSCOOL FD08025S1M4 80mm Case Cooling Fan
3. Case ($429.99): NORCO RPC-4224 -> I think this comes with the 120mm fan wall now
4. PSU $120-$180: Not sure here, something fully modular, quiet, 850W, maybe Seasonic etc.
5. Motherboard $285-$342: No idea here, just picked a couple of SM boards out of a hat, ideally they would allow the HP SAS expander to still work: SUPERMICRO MBD-X11SAT-O or Supermicro X11SAT-F
6. RAM $138: Kingston ValueRAM 16GB (1x16G) DDR4 2133 ECC DIMM KVR21E15D8/16
7. CPU $270: Intel Xeon E3-1230 v6 Kaby Lake 3.5 GHz 8MB LGA 1151 72W BX80677E31230V6

Ideally with this build I would keep my existing 1880i and could consider replacing the HP SAS expander with a more recent version (HP 12G SAS Expander - $245). Without factoring in cables or the SAS expander, I am looking at roughly $1500 for this option.
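As a sanity check on that figure, here is a quick tally of the example parts above; the PSU and motherboard numbers are the ranges I guessed at rather than firm quotes:

    # Rough cost tally for Solution 1 (new Norco 4224 build), drives excluded.
    parts_low_high = {
        "3x 120mm fans":   (3 * 25.95, 3 * 25.95),
        "2x 80mm fans":    (2 * 5.40, 2 * 5.40),
        "Norco RPC-4224":  (429.99, 429.99),
        "PSU":             (120.00, 180.00),
        "Motherboard":     (285.00, 342.00),
        "16GB ECC RAM":    (138.00, 138.00),
        "Xeon E3-1230 v6": (270.00, 270.00),
    }
    low = sum(lo for lo, _ in parts_low_high.values())
    high = sum(hi for _, hi in parts_low_high.values())
    print(f"Solution 1: ${low:,.0f} - ${high:,.0f} before cables or a new expander")
    # -> roughly $1,332 - $1,449, i.e. about $1,500 once cables and extras are added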

Pros:
1. It's mostly all new parts and thus increases the resale value down the road.
2. I can re-use my old eSATA card and backup enclosures, and it has plenty of PCI-E slots for installing a USB-C/Thunderbolt 3 card or a 10Gbps Ethernet PCI-E card down the road
Cons:
1. I have read less-than-stellar things about Norco 4224 backplanes in some of the reviews, and I am already well aware of their "build quality" overall (it leaves something to be desired, and their drive cage sizes are what put me in this position to start with). Also, I understand that their support is pretty useless.
2. It's been a long time, but cabling it all up in the old 4020 was a bit of a PITA, so I am not looking forward to doing that over again in a similar case.
---------------------------------------------------------

Solution 2: Used SuperMicro Build
---------------------------------------------------------
This plan would be to purchase a used SuperMicro 846BA-R920B that comes pre-installed with dual CPUs, 2x SQ PSUs (so they should be 'quiet', as I understand it) and 48GB of RAM, and then replace the fans with quieter options. Here is the 'parts' list:

1. Case/CPU/RAM/Motherboard/PSU ($1098): 4U X9DRI-LN4F+ 24 bay SAS3 2x Xeon E5-2680 8 Core 2.7GHz 64GB SATADom 48GB SQ PS
2. Replacement fans: No idea; I would need to research which fans people are using to 'quiet' these cases that are relatively straightforward to install (120mm fans ($25.95 x ?): Noctua NF-F12 iPPC-3000?)

Again, ideally I would re-use my 1880i and possibly replace the SAS expander with a new one (HP 12G SAS Expander - $245). I am looking at somewhere around $1200, not factoring in whatever SAS cables I will need. It's possible that I could make a lower offer for the server, but who knows what they might entertain in terms of price.

Pros:
1. I understand SM cases are fantastic and I could escape some of the jankiness of the Norco cases
2. The system would mostly be built. I would need to replace fans and install and cable up the RAID card/SAS expander but hopefully (no idea how SM cases are to work in) it wouldn't be too rough.
3. Should have just enough space for me to install a PCI-E x1 eSATA card, 10Gbps Ethernet card, and the RAID Card/SAS expander and slot in a USB 3.0 card as well.
4. Appears to be a good 'value' if you part out the RAM/CPU/motherboard/case/SQ PSU costs
Cons:
1. I believe this is a CPU/platform from 2011 so any kind of resale value by the time I am done with it is likely approaching zero.
2. Again, given that the platform is aging, finding recent drivers for the motherboard etc. could be problematic
3. No USB 3.0 onboard
4. I would have no warranty for just about anything in the system
---------------------------------------------------------

Solution 3: DS1817+
---------------------------------------------------------
This is a bit of a wild card. It would be a very quiet and compact solution for my storage needs, and I have an existing single-bay Synology, so I am familiar with the platform. Normally I would consider the 12-bay version, but it has not been upgraded recently and has no 10Gbps support, plus it's pricey, so it's out. Parts:
1. DS1817+: $949.99
2. 5 Bay USB enclosures: $179 (x2)

So here I would need to buy the device and the USB enclosures and that is it (rough totals below). They do have the DX517 expansion units, but those are a massive ripoff; $470 for 5 bays seems ridiculous to me.
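For comparison, rough totals across the three options (the Option 1 figure is my estimate from above, and for Option 2 I am assuming three of the Noctua fans on top of the $1098 listing):

    # Rough cost comparison across the three options, drives and cables excluded.
    option1_new_norco = 1500                 # estimate from Solution 1 above
    option2_used_sm   = 1098 + 3 * 25.95     # listing price + assumed 3 quieter fans
    option3_synology  = 949.99 + 2 * 179.00  # DS1817+ plus two 5-bay USB enclosures

    budget = 1400
    for name, cost in [("Option 1 (new build)", option1_new_norco),
                       ("Option 2 (used SM)",   option2_used_sm),
                       ("Option 3 (DS1817+)",   option3_synology)]:
        print(f"{name}: ~${cost:,.0f} ({'over' if cost > budget else 'under'} the ~${budget:,} target)")
    # Option 2 (~$1,176) and Option 3 (~$1,308) come in under budget; Option 1 does not.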

Pros:
1. Simple to install, setup and configure.
2. Has support (though I have heard it is pretty awful)
3. Lower power utilization than a 4U server and should be reasonably nice looking and quiet out of the box
4. 10Gbps expansion support via a card
Cons:
1. Right away I would fill 7 out of the 8 bays, so there isn't any way that I could keep this for many years and keep expanding the array. I would have to dump/resell/buy larger-capacity drives, a unit with more bays, or an expansion unit
2. Expensive cost per drive bay you get
3. Low-powered CPU compared to the other options; could it even transcode 4K if I wanted it to? Hard to say how bad rebuild times might be
4. Requires me to purchase new USB 3.0 backup enclosures. It's POSSIBLE that the DS1817+ would work with the 5-bay eSATA enclosures that I have, but I haven't seen them work with anything except the PCI-E x1 adapters they shipped with, so this feels pretty unlikely.
5. Will any 5-bay USB 3.0 enclosures actually WORK with this unit for backups? Or will I have to connect them to another PC on the network and then do the backups over 1Gbps from the NAS -> PC -> enclosure (a PITA and slower than eSATA or USB 3.0) or buy a DX517 expansion unit (overpriced).

If you made it this far, I thank you; my apologies for the length. At a minimum it was helpful for me to write out my thought process. At this point I am leaning toward option 2, though I am a bit leery of "upgrading" to a system of that age and of how easily I can really make it a "quiet" server. Option 3 is of course appealing given that it would look svelte and be "simple", but it may be pretty limiting in a variety of ways.

I am interested to hear what people think if they have weighed in on similar plans/builds. Thanks again!
 

whitey

Moderator
Jun 30, 2014
Opt 2 if it were me; will add details once I get out of this torturous dentist's chair!
 

markarr

Active Member
Oct 31, 2013
I'm with whitey on option 2.

Pros:
1. I understand SM cases are fantastic and I could escape some of the jankiness of the Norco cases
2. The system would mostly be built. I would need to replace fans and install and cable up the RAID card/SAS expander but hopefully (no idea how SM cases are to work in) it wouldn't be too rough.
3. Should have just enough space for me to install a PCI-E x1 eSATA card, 10Gbps Ethernet card, and the RAID Card/SAS expander and slot in a USB 3.0 card as well.
4. Appears to be a good 'value' if you part out the RAM/CPU/motherboard/case/SQ PSU costs
Cons:
1. I believe this is a CPU/platform from 2011 so any kind of resale value by the time I am done with it is likely approaching zero.
2. Again, given that the platform is aging, finding recent drivers for the motherboard etc. could be problematic
3. No USB 3.0 onboard
4. I would have no warranty for just about anything in the system
Pros
1: Yes, high quality and modular.
2: It would. Before you replace the fans, I would run it and see what they do; on the one I have, I removed the rear fans and only kept the fan wall, and it idles pretty quiet.
3: Lots of room.
4: You could get it cheaper by building it yourself and sniping deals, but that is not guaranteed. It is a "good" value.

Cons
1: It is an "older" platform, but the newer generations aren't the leap that we had from 1366 to 2011, so it will still idle pretty well, and there are v2 CPUs that will work in this as well.
2: Drivers won't be a problem; it's still easy to find drivers even for the previous platform, and 2011 is still very prevalent in the corporate world.
3: Add-in cards are cheap and there are several PCIe slots.
4: Everything in there is easily findable on eBay, and you have Supermicro reliability, which is among the best.
 

gea

Well-Known Member
Dec 31, 2010
I would avoid USB, eSATA, hardware RAID and expander solutions without SAS disks.
My suggestion would be:

- Use a storage case with a passive backplane (no expander)
- If you want to add an external SAS JBOD backup case, use one with SAS or SATA disks and an expander

- Use a mainboard like the Supermicro X10SDV-2C-7TP4F
This is the dual-core version. It comes with a 16-channel LSI SAS/SATA HBA and 10GbE, and can hold up to 128GB of ECC RAM.

The same board is available with more cores if needed, but for storage it is mainly RAM that counts, since it is used as cache.

Use ZFS!
This gives you software RAID over the LSI HBA with the best performance and data security.

Regarding the OS, use a storage-optimized regular enterprise OS:
ZFS originated on and is native to Oracle Solaris, which has the best integration of OS, ZFS and storage services, especially when it comes to Windows-like ACL permissions and the integration of ZFS snaps with Windows "Previous Versions".
This is the fastest and most feature-rich ZFS option, but for commercial use it is quite expensive.

OmniOS and OpenIndiana are free Solaris forks. Aside from encryption and ultra-fast sequential resilvering, they are comparable to Oracle Solaris (a little slower in my tests).

For them I offer a Web-UI for easy storage management, see my setup howto
http://www.napp-it.org/doc/downloads/setup_napp-it_os.pdf
 

K D

Well-Known Member
Dec 24, 2016
I recommend a combination of option 1 and 2. Basically option 1 with a SuperMicro chassis.

I have the Norco 4224 as well as SM chassis in different U heights. I personally have not had any issues with the Norco and have been happy with it, but I definitely prefer the SuperMicro chassis over Norcos.

Power consumption of the system that you linked will be high. I got a similar system but with L5630 CPUs and the idle consumption was around 160W without any drives. Right now I am running a Xeon D 1518 board and the power consumption at normal usage is around 145W.

Here is the BOM from my build excluding the drives.

(1) SC846 - $220
(1) BPN-SAS2-846A - $230
(1) 920SQ PSU - $100
(1) X10SDV-4c-7tPF (Xeon D 1518) - $560
(2) SK Hynix 16GB PC4-2400T Server ECC RDIMM - $230
(3) Noctua iPPC3000 Fans - $75

Total - $1415

If you get the Supermicro X11SSL-CF or X11SSH-CTF board, it has an onboard LSI 3008 that works very well with the Supermicro expander backplanes. You can sell your Areca 1880i to offset some of the cost if you choose to.

Edit : The backplane was $230
 

Pirivan

New Member
Jun 1, 2017
Thanks all for the great feedback!

@whitey Noted, look forward to your analysis!

@Churchill Yeah, I don't see the need from a performance perspective to replace the 1880i. However I am considering replacing the somewhat aging HP SAS expander with a newer model.

@gea

Hmm, interesting suggestion. I have had zero problems with the hardware RAID card and my Hitachi SATA drives, and the WD 8TB Reds are specifically designed to work in NAS/RAID environments. Any large (minimum 16, max 20 bay) cases that you would recommend with passive backplanes to go along with the motherboard you mentioned (it looks like it has 20 SATA ports)? It looks like the motherboard has 10GbE, which is nice, but I'd prefer copper to SFP+ for sure. Interesting point on pairing it with ZFS; I'm not sure I want to dip my toes into that world given that I am completely unfamiliar with Linux/BSD/Solaris. I am likely planning to stick with Windows, but I appreciate the suggestion.

@K D Great information here, seriously thank you. Good to know on SM vs. the Norco 4224. It sounds like with the Norco you might get a good one with zero issues or you might get one with backplane problems; luck of the draw. Interesting on power consumption, something I had not considered.

How did you find an SC846 that cheap? Did you just keep an eye on eBay for a good deal that didn't come with PSUs and a backplane (given that you replaced the PSUs with the quieter versions anyhow)?

Looks like you installed the quieter fans in the fan wall of the 846; how was that process in the SM case?

After some research, please excuse my extreme ignorance here (perhaps you can educate me): if I went with an X11SSL-CF or X11SSH-CTF board with the LSI 3008 onboard, I have two primary questions. How would the cabling look/work going to the backplane you mentioned to cable up all 24 drive bays in an 846 using the LSI 3008 and the BPN-SAS2-846A backplane? Would you use all 8 of the SAS3 ports on the LSI 3008 with ?? cable types to the backplane, or? I am having trouble picturing how it would all connect up.

Second question: the LSI 3008 isn't a true hardware RAID card, right? It's more of a software RAID solution and doesn't support RAID6 (which is what I am planning to run)? If I wanted to run RAID6, would I have to use the Areca 1880i -> a BPN-SAS2-846A backplane (the backplane acts as an expander?) with ?? cables? I couldn't use the LSI 3008?
 

K D

Well-Known Member
Dec 24, 2016
I am using ZFS with FreeNAS in this build. I am also using @gea's napp-it in another build. For ZFS you will need to present the drives directly, without any RAID card in the middle. I have the LSI 3008 flashed to IT mode and use it as an HBA. You would use one cable to connect one of the SFF-8643 ports on the motherboard to an SFF-8087 port on the backplane.

The 846 has a few backplane options:

BPN-SAS-846EL1 & BPN-SAS-846EL2 - SAS1 backplanes - Avoid, or plan to replace them. Most of the cheap chassis that you find will have these, and they only support drives up to 2TB.
BPN-SAS2-846EL1 & BPN-SAS2-846EL2 - SAS2 expander backplanes. The expander is built into the backplane and you just need one connector from your host card.
BPN-SAS2-846A - Has 6 SFF-8087 connectors
BPN-SAS2-846TQ - Has 24 individual SATA connectors
BPN-SAS3-846EL1/BPN-SAS3-846EL2 - SAS3 backplanes - Overkill for media storage and expensive.

You need to keep a lookout on eBay/Craigslist to source the parts cheaply. With some patience you will be able to find a match.

Replacing the fans in the 846 is very easy. You can find several examples online as well as other posts in this forum.
 

Churchill

Admiral
Jan 6, 2016
Here's your server with 36 bays, loaded to the gills with RAM, a rail kit, prebuilt for FreeNAS/ZFS, and all for $900 or so (a 24-bay is cheaper):

4U Supermicro 36 bay Intel Xeon Sandy Bridge X9DRH-IF FREENAS Storage Server | eBay

Granted, it's a dual 6-core, but that's more than enough. The motherboard can hold 1.5TB of RAM. It has all the connectors you need, comes with the SQ power supplies, and supports 8TB hard drives.

That will leave you with $500 or more left over to buy more disks and a 10GbE card WHEN you need it.



Supermicro 4U, 36x 3.5" drive bays, 1 node
Server chassis/case: CSE-847E16-R1K28LPB
Motherboard: X9DRH-iF
Backplanes (2x):
* BPN-SAS2-846EL1 24-port 4U SAS2 6Gbps single-expander backplane
* BPN-SAS2-826EL1 12-port 2U SAS2 6Gbps single-expander backplane
NIC: Integrated dual Intel 1000BASE-T ports
IPMI: Integrated IPMI 2.0 management
Processors: 2x Intel Xeon E5-2620 V1 2.0GHz six core
* Socket: LGA 2011
* Clock speed: 2.0 GHz
* Turbo speed: 2.5 GHz
* No. of cores: 6 (2 logical cores per physical)
* Typical TDP: 95W
Memory: 48GB DDR3 (12x 4GB DDR3 REG)
RAID/HBA: 1x LSI 9211-8i HBA, JBOD, FreeNAS/unRAID (with the SAS2 expanders it will see all 36 drive bays)
HD caddies: 36x 3.5" Supermicro caddies
Power supplies: 2x 1280W PWS-1K28P-SQ
Rails: 2U rail kit
PCI-E expansion slots: Full height, 1x PCI-E 3.0 x16 and 6x PCI-E 3.0 x8
 

ttabbal

Active Member
Mar 10, 2016
Keep in mind, you don't need a ton of *nix experience to use ZFS these days. napp-it and FreeNAS make it downright easy, and FreeNAS actively discourages messing with the CLI. If you have a spare box handy, or something you can fire them up in a VM on, it's worth having a look. Some of those boards/CPUs are massive overkill for a file server though, unless you have VMs or something else running on it as well. One option with the nicer/newer hardware is to run ESXi on the hardware and pass through the SAS controller to a VM running FreeNAS or napp-it. @gea has a handy VM appliance you can use for this, and lots of documentation.

Then run whatever you like alongside it, Windows works fine in a VM, and you can do various *nix as well if you like.

The "A" backplanes are nice. You connect SAS cables, and it breaks them out into individual SAS/SATA connections for the drives. It's like combining the backplane and SAS->SATA breakout cables into one. Each SAS connector runs 4 drive bays on the backplane. It's not an expander, you would need the right number of SAS channels for the number of drives. Sounds like 7 in your case, so 2 SFF8087s, a -8i card would do it, the onboard SAS controller is enough.

The "TQ" backplane is what I have, 24 individual connections with SAS->SATA cables. Works great, but I wouldn't want to manage a datacenter full of them.

That link @Churchill posted looks great and it's in your budget. And it's a 2011 platform. Run a SAS cable from the controller to each backplane; the expander built into the backplane handles the rest.
 

Churchill

Admiral
Jan 6, 2016
That link @Churchill posted looks great and it's in your budget. And it's a 2011 platform. Run a SAS cable from the controller to each backplane; the expander built into the backplane handles the rest.

Ask Mr. Rackable to throw in a set of SAS cables or have them do it for you. They will. You can have a 36 bay FREENAS/ZFS box (I prefer UnRAID) up and going as soon as you unpack the server. Use all extra parts to build a backup system. I would.
 

K D

Well-Known Member
Dec 24, 2016
Ask Mr. Rackable to throw in a set of SAS cables or have them do it for you. They will. You can have a 36 bay FREENAS/ZFS box (I prefer UnRAID) up and going as soon as you unpack the server. Use all extra parts to build a backup system. I would.
This is exactly what I did when I first dipped my toes into ZFS. I just installed FreeNAS on the server as I got it, loaded a bunch of drives, and played around with it till I was comfortable with ZFS.
 

gea

Well-Known Member
Dec 31, 2010
If you prefer 10G Base-T, use an X11SSH-CTF with a CPU from a G4400 on up. It comes with an included 8-port LSI 3008 HBA for software RAID.

Some build options, see http://www.napp-it.org/doc/downloads/napp-it_build_examples.pdf

btw.
Software RAID with ZFS is far superior to hardware RAID (no data corruption due to write-hole problems), to NTFS (ZFS is crash-resistant thanks to copy-on-write, with checksums for real data validation and auto-repair of silent errors), and to ReFS (much better performance and easier handling).

Add to this better stability (no biweekly critical security updates with reboots), much more robust versioning with ZFS snaps compared to VSS, and the same ACL permissions as NTFS, at least with Solaris-based systems.

You can select a NAS based on a fancy GUI or included add-ons for a home server, or based on its underlying technologies. For the filesystem ZFS you have Open-ZFS, which is quite compatible across BSD, Linux, OSX and Illumos (the free Solaris forks). Mostly Illumos has offered newer features first, but some important ones like encryption, which is currently Oracle Solaris only, will come first on Linux. Currently Oracle Solaris is the leading ZFS regarding performance and features, but it is not compatible with Open-ZFS and not free.

Another main question is the SMB or NFS server. On distributions like FreeNAS (BSD), OMV or Synology (Linux), you mostly use SAMBA. While it runs on any *nix system, it is limited by Linux/Unix defaults. These do not cover Windows features like its ACL inheritance, very fine-granular ACLs, SMB groups that can contain groups, or Windows SID identifiers that contain a machine ID. This is not available/possible with SAMBA, but it is with the Solaris SMB server, as Sun added this to its own CIFS/SMB server. One of many reasons why I call Solarish superior for SMB storage.

But with all ZFS options, you will be surprised how much easier a large ZFS box is compared to a large Windows filer.
 

Pirivan

New Member
Jun 1, 2017
Fantastic responses all around, thanks so much everyone! Sorry for the delay, I was without internet access for a little while. I will attempt to respond in order.

@K D

Interesting, thank you for the backplane explanation! Why did you decide to go with ZFS + FreeNAS and a 3008 in IT mode as an HBA as opposed to using a hardware RAID controller card? Any particular reason or is it just purely that HBA's are a far more affordable option and you wanted to go with FreeNAS/ZFS?

@Churchill

Fantastic find there, thanks for looking into that. Maybe I am crazy, but I couldn't find a cheaper 24-bay version (only some with 2.5" drives); could you link that to me for comparison's sake? Totally agreed on six cores being more than enough; a single quad-core Xeon would be more than enough to meet my needs (it will primarily be a file server, after all, possibly with some transcoding in the future).

Am I correct in understanding that, if I wanted to, I could use my Areca 1880i card with two 8087 cables from the Areca card -> the BPN-SAS2-846EL1 and BPN-SAS2-826EL1 backplanes and see all 36 drives to create a RAID 6 array (just 7-8 drives to start with in my case)? That would be an option if I did not want to go the LSI 9211-8i HBA/ZFS/FreeNAS route, correct?

Good suggestion on asking them to throw in some SAS cables, highly appreciated. I don't want to start any kind of OS war, but are there any specific reasons why you prefer unRAID to FreeNAS/ZFS? I am not opposed at all to paying for an OS if it gets me some support, etc.

@ttabbal

100% agreed on these CPUs being overkill for a file server. A simple quad core would work great; it's just that if you want to use a SuperMicro case, a lot of the 'good deals' on eBay for used gear come pre-populated with dual CPUs and the like. If I were going with a Norco 4224, I would put a simple quad core in there and be done with it. In terms of the board, I really just want one with a decent number of PCI-E slots for a few expansion cards. I may fire up FreeNAS on my existing server to take a peek at it.

If I go with the SM, I will shoot for the BPN-SAS2-846EL1/BPN-SAS2-846EL2 or BPN-SAS2-846A backplane (I would need an expander or an -8i card for the A backplane) to simplify cabling. It sounds like you answered my cabling question to Churchill; it's pretty nice that I can eliminate the HP SAS expander using the 846EL2/EL1 backplanes and 2 cables from my 1880i.

@gea

If I decide to lean towards the 100% self-built option with a 4224, the X11SSH-CTF might be the way to go. I love that it has 10Gbps built in, though it's a little low on PCI-E slots for expansion cards (fine if you are using the onboard LSI and a BPN-SAS2-846EL1/BPN-SAS2-846EL2 backplane). Thanks for the information on ZFS; that is some interesting food for thought.

I like your build examples; the Fractal Define R5 is a slick case and it's another build alternative I will keep in mind from a low-noise perspective (given that this will go inside an office). It's a bit low on drive slots though; I think I could get a max of 11 drives in there, and a max of 16 would be more ideal (8x 3.5" slots + a 2x 5.25" to 3x 3.5" converter). Still not a bad idea to add to my list to consider.
 

Churchill

Admiral
Jan 6, 2016
How to Search like a Boss:

Go on eBay. Search "24 bay supermicro".

Supermicro 4U 24 Bay Adaptec ASR-72405 1GB HW Raid 2x X5675 3.06Gh Hex core 96GB | eBay


You'll want to find one that supports SAS2; the 36-bay one does for sure. Email Mr. Rackables on their eBay account, ask to make a deal outside of eBay, tell them what you want, hell, call them (I did) and they will get you what you want.


Your RAID Card: I cannot speak for what your RAID card can and cannot do. I'm not familiar with it. If your RAID card can and does support 36 drives then by all means it "should" work. A bit more research is needed. On the other hand, the server already comes with a card that supports all 36 drives. Your card would be a spare. Since everything is tested, go with what works.

UnRAID vs. FreeNAS: Comes down to Chevy vs. Ford.

Both do a damn good job at being a NAS-type system. UnRAID does some things that FreeNAS does not. UnRAID is stupid simple to set up with limited configuration (RAID 6 with 2-drive parity), while FreeNAS can be quick or long to set up depending on how many pools/drives you want. UnRAID supports Docker out of the box; FreeNAS has BSD jails (Docker-like).

I'm familiar with UnRAID, having been using it for 10+ years; I love Tom and his support on the forums. Great crew. Others are familiar with FreeNAS. A Google search and you'll find out what you want to know.
 

K D

Well-Known Member
Dec 24, 2016
Interesting, thank you for the backplane explanation! Why did you decide to go with ZFS + FreeNAS and a 3008 in IT mode as an HBA as opposed to using a hardware RAID controller card? Any particular reason or is it just purely that HBA's are a far more affordable option and you wanted to go with FreeNAS/ZFS?
I have always used hardware RAID cards till now. Even today, my main storage server runs on an Areca 1880i and my backup server uses an Adaptec 7805. Some of the issues I have faced: rebuild times, especially with larger drives. Once, my Areca controller failed and I could not recover data from one array; I had to restore from backup, and while waiting for a replacement for the failed card, the whole server was unusable. With ZFS, I have been able to import the pool on different servers, VMs and versions (FreeNAS, Linux, OmniOS, etc.), even while the pool was resilvering. And resilvering in ZFS is fast; it took me around 6 hrs to replace a 6TB drive.
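A rough way to see why that is quick: ZFS only resilvers allocated data rather than the whole disk, so the time scales with how full the pool is. A sketch, with the fill level and write rate being assumptions rather than measurements from my pool:

    # Rough resilver-time estimate for replacing one drive in a RAIDZ vdev.
    drive_tb = 6
    pool_fill_fraction = 0.7          # assumed: pool about 70% full
    assumed_write_mb_per_s = 200      # assumed sustained write rate to the new disk

    data_to_rewrite_tb = drive_tb * pool_fill_fraction
    hours = data_to_rewrite_tb * 1e6 / assumed_write_mb_per_s / 3600
    print(f"~{hours:.1f} hours to resilver {data_to_rewrite_tb:.1f} TB")   # ~5.8 hours
    # A hardware RAID rebuild of the same drive has to rewrite all 6 TB regardless of fill level.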

I installed FreeNAS on an old MicroServer that I used only for my workstation backups and found it simple and just worked for my requirements. I also evaluated the other options like unraid, flexraid, drivepool, etc... and finally settled on ZFS and StableBit DrivePool.

I am currently running 3 AIOs, one with StableBit DrivePool and 2 with ZFS (1 omnios/napp-it and one freenas).
 

Churchill

Admiral
Jan 6, 2016
Took me around 6 hrs to replace a 6TB drive.
UnRAID cannot match that speed at all unless the disks are SSDs. The rebuild time is much longer than ZFS, TBH. This is one of the 'bad' things about UnRAID; it truly is a RAID-type file system. Granted, you have to lose 4 disks (2 parity, 2 regular) before you reach peak data loss, so the probability of full failure is low.

This is why you have the 3-2-1 backup policy: because RAID is not a backup.

Backup Strategies: Why the 3-2-1 Backup Strategy is the Best
 

K D

Well-Known Member
Dec 24, 2016
UnRAID cannot match that speed at all unless the disks are SSDs. The rebuild time is much longer than ZFS, TBH. This is one of the 'bad' things about UnRAID; it truly is a RAID-type file system. Granted, you have to lose 4 disks (2 parity, 2 regular) before you reach peak data loss, so the probability of full failure is low.

This is why you have the 3-2-1 backup policy: because RAID is not a backup.

Backup Strategies: Why the 3-2-1 Backup Strategy is the Best
I totally agree with you on backups. I always maintain at least 2 backups of my media, and multiple copies + offsite backups for photos and docs.
 

Pirivan

New Member
Jun 1, 2017
@Churchill

Ah ha, thanks for the link. I actually saw that one during my search, but I was convinced it wasn't the one you referenced, since I thought you mentioned that a 24-bay would be cheaper and I assumed there was a cheaper one for sale currently (I am sure that one has an increased price due to the included HW RAID controller etc., and I am sure I could negotiate with them if I wanted 24 bays). I am glad we are/were looking at the same thing.

Good advice on contacting them directly; I will do that if I decide to go with one of their solutions. They sound like a flexible, good vendor to work with. I will investigate whether utilizing my 1880i with the backplane expanders would be an option, thanks!

Thanks for the feedback on UnRAID vs. FreeNAS; I was just curious about your personal preference. Thanks for the input! I hear you on 'RAID is not backup'; that is why I have the 2x 5-bay eSATA enclosures that go offsite. I am a little concerned about the PCI-E x1 eSATA port-multiplier cards being supported in something like UnRAID/FreeNAS (plus the drives are NTFS right now....), so I will have to think about that issue if I go down the FreeNAS/UnRAID route.

@K D

This is fantastic and valuable feedback, thank you. Very good point about the 'portability' of a ZFS 'pool' between different hardware/OS's instead of being so tied to a particular RAID card/manufacturer. Those rebuild times sound excellent as well. Are you running RAIDZ2? Have you ever added a new disk and 'expanded' the ZPool and if so was the expansion time similar to the replacement of a drive?
 

K D

Well-Known Member
Dec 24, 2016
@K D

This is fantastic and valuable feedback, thank you. Very good point about the 'portability' of a ZFS 'pool' between different hardware/OS's instead of being so tied to a particular RAID card/manufacturer. Those rebuild times sound excellent as well. Are you running RAIDZ2? Have you ever added a new disk and 'expanded' the ZPool and if so was the expansion time similar to the replacement of a drive?
I don't know if my zpool strategy is the best. This is what I have in 3 different hosts. ZPool3 is just for VM storage.
Zpool 1 (performance + capacity): 3x (4x 4TB RAIDZ) vdevs + 1x (4x 6TB RAIDZ) vdev + 1 DC3700 SLOG
Zpool 2 (capacity): 2x (9x 8TB RAIDZ2) vdevs + 1 DC3700 SLOG
Zpool 3 (performance): 4x 400GB DC3500 and 6x 400GB DC3700 in mirrors

I built Zpool 1 with 1 RAIDZ vdev and added the other 3 after. I also replaced the 4TB disks in one vdev with 6TB disks, one at a time. Resilvering happened at around 1.3G/s.

From what I understand and experienced, expansion of a ZPool can be done by either adding new VDEVs or by replacing all the drives in a single VDEV with higher capacity.