Looking for advice for file server

zogthegreat

Member
Jan 20, 2019
40
9
8
Hi everyone!

So I'm planning to upgrade my home file server. Currently it's running on a SuperMicro X8DTT-F blade board with a Dell PERC H310 in IT mode.

I'm planning to switch out the SuperMicro board for an ASUS Z9PR-D12 motherboard that someone gave to me. This board is nice for me because I can reuse the DDR3 ECC memory from the SuperMicro board. For storage, I'm planning on an Asus PIKE 2008 8-port SAS/SATA card to activate the onboard RAID.

For my drives, I have a total of 9 × 2TB, mostly Seagate; 4 of them are matching models. All of the drives are consumer drives.

I need to emphasize that I want to use hardware that I already own, so please don't tell me that I need to buy 10 8TB drives!

The purpose of this server is home backup as well as remote backup for my daughter's NextCloud server. (NEVER buy an iPhone for your teenage daughter! The amount of pictures that my child can generate is staggering!)

My current server is set up with RAID 10 on an ext4 file system. I've been reading that ZFS is a better route to go, but I'm unsure of the best configuration for the hardware that I have. One recommendation was to put all of the drives in a ZFS pool, with a 512GB SSD for caching. Would I need the SSD if my RAID card already has cache memory?

I also want to have very aggressive power management, as most of the time the server will be sitting idle.

Any suggestions would be greatly appreciated!

Thanks!

zog
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,664
487
83
Canada
Your RAID card should not be doing RAID at all if you plan on using ZFS. It should be in IT mode, passing the disks directly to ZFS, which will handle any RAID level you want to use. I would probably do an 8-disk RAIDZ2 and keep one as a spare. I can't really recommend consumer-oriented disks for reliable RAID use, as they generally do not have proper firmware support for that use case, so you may run into issues with disks randomly dropping out or some other anomaly, which is not exactly what you want from a backup server. The SSD I would use for the underlying OS, unless it's a very fast, low-latency disk with proper in-flight power-loss protection, in which case I might double-duty it as a SLOG :)
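An 8-disk RAIDZ2 with a hot spare could be created along these lines. This is a sketch only: the pool name `tank` and the `ata-DISK*` device paths are placeholders; substitute the real entries from `ls -l /dev/disk/by-id/` for your drives.

```shell
# Create an 8-disk RAIDZ2 pool named "tank" with a 9th disk as a hot spare.
# ashift=12 forces 4K sector alignment, a safe default for modern drives.
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6 \
    /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8 \
    spare /dev/disk/by-id/ata-DISK9

# Verify the layout and redundancy
zpool status tank
```

Using `/dev/disk/by-id/` names rather than `/dev/sdX` keeps the pool stable if the kernel reorders devices between boots.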
 
  • Like
Reactions: nikalai

zogthegreat

Member
Jan 20, 2019
40
9
8
@pricklypunter,

Thanks for the response, pricklypunter! The PIKE 2008 can be flashed to IT mode, so that's not a problem for me. As for my drives... well, I didn't have any problems with the matching Seagate drives in RAID 10/ext4; the fifth drive was a spare just in case a drive failed. However, I do understand your point about reliability. Not much point in a backup server that doesn't back up! I'm going to start looking for used SAS/NAS drives, but right now my budget is tight.

I forgot to mention that my OS (Ubuntu Server) will be riding on 2 × 32GB InnoDisk SATA DOMs. I prefer to keep my OS separate from my data drives.

So essentially, I install the OS and then create the zfs pool? Anything extra that I should be considering?

Thanks!

zog
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,664
487
83
Canada
I run Debian and ZoL with basic zvols shared out to my VMs etc. So basically yes: install the OS, add ZoL, and create your pools etc. :)
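On Ubuntu Server, those first steps are short, since ZFS on Linux ships in the standard repositories:

```shell
# Install the ZFS userland tools and kernel module (Ubuntu)
sudo apt update
sudo apt install zfsutils-linux

# Confirm the kernel module is available and note its version
modinfo zfs | grep -i '^version'
```

After that, pool creation is just `zpool create`, as discussed elsewhere in the thread.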
 

zogthegreat

Member
Jan 20, 2019
40
9
8
I can't really recommend consumer-oriented disks for reliable RAID use, as they generally do not have proper firmware support for that use case, so you may run into issues with disks randomly dropping out or some other anomaly, which is not exactly what you want from a backup server.
Hmm, I'm pricing NAS/SAS hard drives. I found 4TB Seagate ST4000NM0023 Constellation ES drives for $50 USD and 6TB Seagate ST6000NM0034 drives for $85 USD. The 4TBs are a slightly better value at around $12.50 per TB vs about $14 per TB for the 6TBs.

I don't have any experience with ZFS pools. If I go with two 6TBs in a RAID 10-style setup, can I add more 6TB drives later to expand the pool? Would RAID 10 be my best choice?
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,664
487
83
Canada
You can't just add single disks to a ZFS pool the way you would with mdraid. You can add more vdevs: if you initially create your pool from mirrored pairs, you add another pair, and another, etc. Use mirrors if you need performance and RAIDZ2 if you want storage capacity from your pool. Lots of folks use RAIDZ1 (approximately RAID 5), and in some cases it might well be appropriate, just not this one. It would get you maximum capacity, but it's a risk I wouldn't recommend for a backup server. With RAIDZ2 you can have 2 disks fail and still be able to easily recover your data :)
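Growing a pool of mirrors looks like this in practice. The pool name `tank` and the `ata-DISK*` device IDs are placeholders for your own disks:

```shell
# Start the pool with one mirrored pair...
zpool create tank mirror \
    /dev/disk/by-id/ata-DISKA /dev/disk/by-id/ata-DISKB

# ...and later grow capacity by adding another mirror vdev.
# Writes are then striped across both mirrors, RAID 10 style.
zpool add tank mirror \
    /dev/disk/by-id/ata-DISKC /dev/disk/by-id/ata-DISKD
```

Note that `zpool add` is permanent for data vdevs: once a mirror is added, it can't simply be pulled back out, so double-check the command before running it.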

Here's kind of what I would suggest to anyone that is new to ZFS:

Obtain the disks that you want to use, then test them, and test them, and test them again; be certain they are good before you start. It's a lot easier to weed out any duffers before you have TBs of data on them!
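A common burn-in routine for that testing step uses badblocks plus SMART self-tests. `/dev/sdX` is a placeholder; be very careful to point these at the right disk, since the write-mode test is destructive:

```shell
# Destructive write-mode surface test: WIPES the disk and can take
# many hours per TB. Only run on a disk with no data you care about.
sudo badblocks -wsv /dev/sdX

# Kick off a long SMART self-test, then review the results
# (reallocated/pending sector counts are the values to watch).
sudo smartctl -t long /dev/sdX
sudo smartctl -a /dev/sdX
```

Running a drive through this once, then re-checking SMART a few days later, catches most early-life failures before any data is at risk.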

Get your OS and ZoL dialed in, make whatever tweaks and changes you plan on now, get it all stable, then take a backup of it so you can easily roll back and try again.

Create your pools and make whatever optimisations and changes you feel best meet your needs. Don't just try the RAID levels you think you want to use; try them all. If you plan on sharing stuff out with Samba or iSCSI etc., get that going as well and play with that too.

Stick some real data on the pools, not too much, but enough to be a useful test, and try out some of the ZFS features. Deliberately corrupt stuff, run scrubs, pull a disk or two, etc. Basically, get to know your way around the ZFS commands and get comfortable using them. Try doing a snapshot, and test out ZFS send/receive. Play with it all for days, not a few hours. You need to be comfortable enough with it that, if and when a failure occurs, you have enough experience behind you not to panic and do something that really will put your data at risk. If you have confidence in what you are doing, recovering your data will be an easy task and will feel natural to you. There's nothing worse than rolling the dice and crossing your fingers with a command that you have only just learned.
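The drills described above map onto a handful of commands. Again a sketch: `tank`, the dataset names, and the disk IDs are placeholders for your own setup:

```shell
# Scrub the pool and watch progress / any checksum errors found
sudo zpool scrub tank
zpool status tank

# Simulate a disk failure by offlining a member, then bring it back
# and let ZFS resilver the missed writes
sudo zpool offline tank ata-DISK3
sudo zpool online tank ata-DISK3

# Take a snapshot of a dataset, then replicate it to a second pool
# with send/receive to practice recovery
sudo zfs snapshot tank/data@drill1
sudo zfs send tank/data@drill1 | sudo zfs receive backup/data
```

Doing each of these a few times on throwaway data builds exactly the muscle memory the post describes, before a real failure forces the issue.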

Find and read and re-read every document and resource you can on ZFS; get to know it well and it will become your best friend. But rush into it with misunderstandings or poor decision making and it's like tripping over a bear in the woods: it will bite you. Everything after that point becomes reactionary, which is a very bad place to be when your data is at risk :)
 

zogthegreat

Member
Jan 20, 2019
40
9
8
Thanks for the advice everyone! I'm reading and rereading and rereading and rereading and rereading.....!!!!!

Then I'll do a test setup like @pricklypunter suggested.
 

vl1969

Active Member
Feb 5, 2014
611
69
28
May I stress another point: make sure your PSU is up to snuff. I almost threw out 3 brand-new Seagate disks because my PSU was failing. I run 16 disks + 2 SSDs for the OS in a mirrored setup, and the old PSU could not keep up with the power needs.
 

zogthegreat

Member
Jan 20, 2019
40
9
8
Thanks for the advice @vl1969 . I have a 750W Corsair PSU that I'm planning to use, which, according to the power supply calculator that I used, should be enough to power everything. If I see that I'm having power problems, I can run a separate PSU for the HDDs. I plan to stress test everything before I put the server into use, so hopefully any power problems will pop up then.
 

vl1969

Active Member
Feb 5, 2014
611
69
28
It may not be the PSU's capacity but its age. The PSU I had issues with was old; it had migrated through 3 server builds. It still works, and I've stored it for emergency use, but it lost capacity and simply could not handle the load anymore. What happened was, when I built out the server, I kept losing drives from the pools. A drive would just fall off the face of the Earth. Take the drive, put it in my workstation: all working, and the stats look good. Reformat it, put it back: all working for a while. Then another disk drops off. I got a new 1500W PSU and have been running the setup for over a year now with no issues.
 

herby

Active Member
Aug 18, 2013
185
53
28
My oldest ZFS pool (around 2012) was built in two stages as a span of mirrors, RAID 10 style. It became clear that growing that way would get expensive quickly for bulk data storage where performance wasn't an issue.

When it was time to think about upgrading, I bought a drive every couple of months to defer cost and reduce the likelihood of getting multiple disks from a bad batch. After I had six in hand, tested with badblocks and checked with S.M.A.R.T., I put them in a RAIDZ2. Decent performance and resiliency, but more annoying to grow the pool.

I found RAIDZ2 very reassuring when I lost a disk a couple of years back. It's nice knowing you aren't in trouble if another disk dies while you're waiting for the replacement, or if the stress of resilvering kills one, or even if there was some unnoticed corruption since the last scrub.

ZFS loves RAM (ARC) far more than any disk for read cache (L2ARC), and not all SSDs are really suited for SLOG or write cache. You can get by with less RAM than some people claim, but more RAM makes things much better. ZFS performance also tends to tank when a pool is much over 80% full; for a fast RAID 10-style pool you're best off staying under 50% for top performance.
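Both of those figures are easy to keep an eye on. A sketch, assuming a pool named `tank` and the `arc_summary` script that ships with ZFS on Linux:

```shell
# The CAP column reports how full each pool is;
# keep bulk-storage pools under roughly 80%
zpool list tank

# Summarize ARC size, hit rates, and tuning on ZFS on Linux
arc_summary | head -n 40
```

A consistently low ARC hit rate on a read-heavy workload is the usual sign that more RAM (or, as a distant second, an L2ARC device) would actually help.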

---------------------------------------
Xenservers (XCP-ng 8.0)
-long-red: Supermicro H8DCL-6F, AMD 4332 HE (x2), 32GB ECC, ConnectX-3 40GbE, Radeon RX 580
-big-red: Supermicro H8SCM-F, AMD 4376 HE, 32GB ECC, ConnectX-3 40GbE, Radeon HD 6450

FreeNAS (11.2)
-blue: Supermicro H8SCM-F, Opteron 4376 HE, 64GB ECC, ConnectX-3 40GbE, PERC H310 (x2)
480GB DC S3500 (x4), 240GB M500 (x2)
-SE-3016: 1TB WD10EACS (x2), 2TB 5k3000 (x2), 3TB DT01ACA300 (x6)

Network: Linksys E2000-RM (running TomatoUSB), Mellanox SX6012, Dell 2816
 

zogthegreat

Member
Jan 20, 2019
40
9
8
Hmm, I have a couple of SuperMicro 600W blade PSUs with standard 24-pin + 8-pin motherboard connectors that I had modified for my SuperMicro X8DTT-F's that died on me, so I can use one of those. They are 80 PLUS Gold PSUs. According to the PSU calculator that I used, I need 500W to run my setup. What I'm thinking is to run one for the motherboard and the other for the HDDs; that should cover me. I'll also run OCBase for a while to stress test the PSUs.
 

zogthegreat

Member
Jan 20, 2019
40
9
8
@herby Thanks for the tip on memory. I'm swapping the memory from my old X8DTT-F's to my "new" board. I was fairly happy when I found out that my existing 64GB of DDR3 ECC memory would work on the Asus board; saved me a bit of pocket change! I'm considering running an SSD for L2ARC just to improve performance, although in reality, with just me using the server plus the monthly remote backups from my daughter's NextCloud server, I don't have a huge overhead.

Your method of buying a drive a month is what I'm planning to do to replace the consumer-grade drives that I currently have. Running 4 × 8TB SAS/NAS drives makes more sense than running 8 × 2TB consumer drives.

(sigh) It's only money!

BTW, you said "not all SSDs are really suited for SLOG or write cache." Is there a list of recommended SSDs for L2ARC?
 

zogthegreat

Member
Jan 20, 2019
40
9
8
Thanks @herby

BTW, cool tag! I remember watching Herbie the Love Bug as a child! Buddy Hackett was an incredible comedian!

(And, yeah, I'm that old! I watched Herbie as a new release, not a rerun! :))
 

zogthegreat

Member
Jan 20, 2019
40
9
8
A follow-up question. In the link that you provided, it states:

"You can also skip guides that suggest 120GB or 240GB drives. While you may not need that much space, performance on smaller capacity drives suffers. 400GB should be your minimum capacity."

I have a couple of 256GB M.2 drives lying around that I was planning to use as L2ARC. Since my setup is small and I won't have that much data passing through on a daily basis, can I get away with using the smaller drives until funds become available to upgrade to a larger SSD?
 

zer0sum

Well-Known Member
Mar 8, 2013
567
265
63
Hmm, I'm pricing NAS/SAS hard drives. I found 4TB Seagate ST4000NM0023 Constellation ES drives for $50 USD and 6TB Seagate ST6000NM0034 drives for $85 USD. The 4TBs are a slightly better value at around $12.50 per TB vs about $14 per TB for the 6TBs.

I don't have any experience with ZFS pools. If I go with two 6TBs in a RAID 10-style setup, can I add more 6TB drives later to expand the pool? Would RAID 10 be my best choice?
You can find some decent deals on eBay if you're OK with used enterprise SAS drives, which are usually still under warranty.
You can find nice HGST 8 or 10TB drives for around $13 per TB if you search around :)
 
  • Like
Reactions: zogthegreat

herby

Active Member
Aug 18, 2013
185
53
28
A follow-up question. In the link that you provided, it states:

"You can also skip guides that suggest 120GB or 240GB drives. While you may not need that much space, performance on smaller capacity drives suffers. 400GB should be your minimum capacity."

I have a couple of 256GB M.2 drives lying around that I was planning to use as L2ARC. Since my setup is small and I won't have that much data passing through on a daily basis, can I get away with using the smaller drives until funds become available to upgrade to a larger SSD?
It couldn't hurt to try. I think the guide encourages bigger disks since they tend to be faster as a rule. If the M.2 you have doesn't help, you should be able to pull it from the pool with no ill effect.
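That try-it-and-see approach is low-risk precisely because cache vdevs hold no pool data. A sketch, with `tank` and the NVMe device ID as placeholders:

```shell
# Attach the M.2 as an L2ARC (cache) device; it only holds
# evicted read-cache blocks, never unique pool data
sudo zpool add tank cache /dev/disk/by-id/nvme-M2DISK

# If it doesn't measurably help, remove it again with no
# risk to the pool's data
sudo zpool remove tank /dev/disk/by-id/nvme-M2DISK
```

Unlike adding a data vdev, adding and removing cache devices is fully reversible, so it is a safe experiment.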

I have a vague suspicion that your use case isn't going to benefit much from L2ARC; it doesn't sound like you're expecting a big, frequently-read dataset that it would improve. I understand the urge to tinker and tune, though, so it might be fun to find out. As far as real-world performance goes, if all your reads and writes are over a gigabit connection, your pool should already exceed that bandwidth by a healthy margin, and any cache disks would be largely pointless, practically speaking.
 

zogthegreat

Member
Jan 20, 2019
40
9
8
I have a vague suspicion that your use case isn't going to benefit much from L2ARC; it doesn't sound like you're expecting a big, frequently-read dataset that it would improve.
From what I have been reading, you're probably right. I'll be doing a one-time large data dump from my desktop to the server. After that it will be incremental backups from my desktop overnight and the monthly NextCloud backup, which will also happen overnight. From what I have been reading, for a setup like the one I'm planning, system memory plays the bigger role with ZFS. Right now I have 64GB of DDR3 ECC, so for what I'm planning, I should be good as far as memory goes.
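With 64GB on board, the one memory knob worth knowing about is the ARC size cap. By default ZFS on Linux will grow the ARC to about half of RAM; on a dedicated file server that's usually fine, but it can be pinned explicitly. A sketch, with the 32 GiB figure chosen purely as an example:

```shell
# Cap the ARC at 32 GiB (value is in bytes: 32 * 1024^3),
# leaving the rest of the 64GB free for the OS and services
echo "options zfs zfs_arc_max=34359738368" | sudo tee /etc/modprobe.d/zfs.conf

# On Ubuntu, rebuild the initramfs so the option applies at boot
sudo update-initramfs -u
```

For a box that does nothing but serve backups, letting ARC take most of the RAM is generally the right call, so this is only worth tuning if other services start to feel memory pressure.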

I understand the urge to tinker and tune though, so it might be fun to find out.
(chuckle) Yeah, I'm borderline OCD when it comes to tinkering. If I'm honest, a good quality turnkey Synology or QNAP device would probably do everything I need with a lot less hassle. But I really do love tinkering with hardware and software. I think that a large percentage of the people who come to forums like this are similar. They can't accept some type of prepackaged "whatever" device. Where's the fun in that?

Besides, you can learn a lot by tinkering with things. Everyone should tinker and be curious about the technology that surrounds us.
 