Need some help to make up my mind, 180TB server


looney

New Member
Dec 31, 2016
Hi there,

Right now I have 37 4TB drives (Seagate desktop) running off an LSI 9286-8E + BBU, split across several RAID 6 arrays.
This server is running Server 2012 R2 bare metal.

The plan is to install ESXi on the server, it will have 192GB RAM and 2x E5-2650v1.
I'm also buying 8 more 4TB drives to bring the total up to 45 4TB drives.

The thing I'm not sure about is what would be the best RAID solution for my new setup.
I'm not even sure if I want hardware or software RAID.

The NAS VM can have 150GB of RAM if needed, and it can also get a dedicated 2650 if needed.
The 9286-8E is not an HBA, so if software RAID is preferred I would need to get something else for that.

Expandability is key; I must be able to add drives easily.
It would also be nice if it could sustain a constant write throughput of 200 Mbps for security cams.

Buying some additional hardware should not be an issue (HBA, SSD for cache, stuff like that).


Personally I just can't seem to make up my mind on this :)

Here is the hardware currently available for this:
SuperServer 6017R-N3RF4+
LSI 9286-8E + BBU
2x 850 Pro 512GB SSD (RAID 1 for ESXi datastore)
24x 8GB ECC DDR3 RDIMM 1600MHz
Planned: 2x E5-2650 v1; currently E5-2609 v2

Supermicro SC847 E16-RJBOD1
45x 4TB ST4000DM000
 

gea

Well-Known Member
Dec 31, 2010
uuuh

What's your use case?
How important are performance, reliability and availability?
180 TB raw or usable (after subtracting redundancy)?
What is your backup concept?
What is your budget?

Be aware:
- Desktop disks are bad in large multi-disk environments (due to vibration)
- Expect a failure rate of at least 5% per year, which means at least 2-3 disks per year
- more as the disks get older
- Be aware that a single bad SATA disk can block an expander, and the problem is hard to track down.
Professional support tends to refuse to help with SATA + expander solutions

- With 180 TB you will always have some bitrot. If you use filesystems like NTFS or ext4,
you have no chance to find or repair it
- With a problem on NTFS you need a checkdisk to find at least the metadata problems
(there is no repair option for data). Such a checkdisk can take days, and it runs offline!

Have you considered solutions that are suited for such a capacity, like
enterprise SAS disks + expander or enterprise SATA disks + HBA, with ZFS as the filesystem?

And fewer disks (8-10TB ones)?
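
As a rough illustration of the difference (pool name "tank" is just a placeholder): a ZFS pool is checked and repaired online with a scrub, no offline checkdisk needed.

Code:
# verify every checksum and repair damaged blocks from redundancy
# while the pool stays online and in use
zpool scrub tank
zpool status -v tank   # shows scrub progress and any repaired or unrecoverable errors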
 

looney

New Member
Dec 31, 2016
-What's your use case?
Home NAS, mostly static media (photos and video; the data is not accessed often)
Though twice a year I will bring the server to a large event where it will record 20+ security camera streams.

-How important are performance, reliability and availability?
Normal operation is not mission critical; having said that, during those events I do want it to be stable.

-180 TB raw or usable?

Raw (45x4=180); naturally I will have a lot less usable.

-What is your backup concept?
Backed up online; I'm looking into tape backups but that's not realized yet.
Most data is non-critical. I would say no more than 10TB is something I really need backed up in the first place.

-What is your budget?
I don't have a definitive budget; things like an HBA won't be an issue, but replacing the drives would be too expensive.

-Be aware:
Thank you for the warning concerning the drives. My personal experience contradicts those failure rates; I have been running the 35-drive array without issue for over a year and the SMART data is still good, though of course I keep spares in case something goes wrong.



As stated, there is already a 35-drive system running; it's a home server and upgrading all the drives to SAS does not fit in the budget.
But things like ZFS are the main reason I'm here; I don't know most of the pros and cons.

Would an HBA and ZFS be a better solution with the listed hardware?

Thank you for your help so far.
 

gea

Well-Known Member
Dec 31, 2010
Your hardware is high end while your disks are cheap desktop models; it would be better the other way round.

I can only give you some principles on what I would prefer:

- Do not build one huge 180 TB raid array from desktop disks.
I would create at least two arrays/pools, one from the old disks and one from the newer ones
- When you buy new disks, use at least 24/7 NAS disks, and prefer SAS ones

- With your hardware you cannot switch to a pure SATA, expanderless HBA solution

I would
- use the hardware raid for a mirror for ESXi and a local datastore holding the storage VM
- add an HBA like a SAS 9300-8e host bus adapter and
pass that adapter through to the storage VM so it can use it directly for mass storage

- use a ZFS appliance
This can be a BSD-based solution like FreeNAS.
I prefer Solarish-based solutions, where ZFS comes from. For them I created a web-managed appliance software:
http://napp-it.org/doc/downloads/napp-in-one.pdf

The storage VM manages the storage and the shares.
I would use either a single pool with 4 or 5 raid-Z2 vdevs (each similar to a raid-6)
or two pools, one from your current disks and one from newer and larger SAS disks.

When you create a ZFS pool from vdevs whose number of data disks is not a power of 2,
raise the blocksize from the default 128K to 512K or 1M to achieve maximum usable capacity, as sketched below.
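
A rough sketch of that single-pool layout (pool name and Solaris-style disk names are placeholders, only two of the vdevs are shown, and recordsize=1M assumes a ZFS version with large-block support):

Code:
# one pool built from several raid-Z2 vdevs (each vdev is similar to a raid-6 set)
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
  raidz2 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0

# raise the blocksize (recordsize) from the default 128K for vdevs whose
# data-disk count is not a power of 2
zfs set recordsize=1M tank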
 

looney

New Member
Dec 31, 2016
Your criticism regarding the hardware is well placed and I will not take it lightly.

But from what has been discussed so far, I think this would be an acceptable setup with the currently available hardware:
-Using hardware RAID, create a RAID 1 array with the existing SSDs to serve as the datastore for VM storage.

-Pass through a SAS 9200-8E host bus adapter to the mass storage VM (the 9300 is 12Gb and therefore incompatible with my 6Gb JBOD)
-Mass storage VM based on either Solaris or BSD; more research on my part will determine which. (Thank you for the web management link, it looks great so far.)
-Create 4 raid-Z2 vdevs of 8 drives each
-Buy more 24/7 drives (probably 8x ST4000NM0033) and create a separate raid-Z2 vdev for those drives.
-Pool the vdevs together

The power of 2 relates to the actual data disks of the vdevs? If so, 128K should be correct? (vdevs of 8 drives each)

This would give me roughly 120TB usable (5 vdevs x 6 data disks x 4TB).

If my storage needs expand, should I buy 8 drives at once and create an additional vdev?


Again, thank you for the help so far.
 

gea

Well-Known Member
Dec 31, 2010
12G SAS is backward compatible with 6G, but the 9200-8E is ok

The rule about powers of 2 per vdev applies to the data disks, not to all disks of a raid-Z,
which means the optimum per raid-Z2 is 6 or 10 disks (4 or 8 data + 2 redundancy disks),
but this can be ignored with a larger blocksize

To expand a ZFS pool you can add a new vdev, or you can
replace the disks of a vdev one by one with larger ones.

If you create a new vdev, it is added to the pool.
If any vdev fails, the whole pool is lost.
This is why I suggested a second pool for the desktop disks; see the sketch below.
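
Roughly, with placeholder pool and disk names:

Code:
# expansion path: add a whole new raid-Z2 vdev (e.g. 8 new disks) to the existing pool
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

# alternative: keep the desktop disks in their own, separate pool
zpool create tank2 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0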
 

looney

New Member
Dec 31, 2016
Okay, I understand the blocksize part now, thanks.

The replacing/upgrading of the drives in a vdev, can this be done online?
So just replace a 4TB with an 8TB 24/7 drive down the road and it increases the vdev by 4TB?

I will take your advice on 2 pools.
 

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
The replacing/upgrading of the drives in a vdev, can this be done online?
So just replace a 4TB with an 8TB 24/7 drive down the road and it increases the vdev by 4TB?
You can replace the disks one by one, online.
After the last disk of a vdev has been replaced, the higher capacity becomes available
(when the ZFS property autoexpand is set to on).
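
A rough command-level sketch, again with placeholder names:

Code:
# let the pool grow automatically once every disk in a vdev has been replaced
zpool set autoexpand=on tank

# swap one 4TB disk for an 8TB disk; wait for the resilver to finish, then do the next
zpool replace tank c1t0d0 c4t0d0
zpool status tank    # watch resilver progress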
 

looney

New Member
Dec 31, 2016
Just to be sure, would the 150GB of RAM for the ZFS VM be enough for the aforementioned configuration?
 

fractal

Active Member
Jun 7, 2016
-What's your use case?
Home NAS, mostly static media (photos and video; the data is not accessed often)
Though twice a year I will bring the server to a large event where it will record 20+ security camera streams.
All else aside, do you really plan on hauling 5 rack units' worth of computing equipment, weighing over a hundred pounds and containing all your home server data including, dare I say, personal entertainment, to an unfriendly environment where, should it actually be needed, it is likely to be impounded by local law enforcement?

I was spare crew for a high school rock band many, many years ago and all the equipment that went to events got beat to crap. I now work for a tech company and the gear we ship to events for demos comes back beat to crap. I really can't see hauling 45 hard drives in a box to and from an event on a regular basis with any expectation of them surviving.
 

looney

New Member
Dec 31, 2016
All else aside, do you really plan on hauling 5 rack units' worth of computing equipment, weighing over a hundred pounds and containing all your home server data including, dare I say, personal entertainment, to an unfriendly environment where, should it actually be needed, it is likely to be impounded by local law enforcement?

I was spare crew for a high school rock band many, many years ago and all the equipment that went to events got beat to crap. I now work for a tech company and the gear we ship to events for demos comes back beat to crap. I really can't see hauling 45 hard drives in a box to and from an event on a regular basis with any expectation of them surviving.
The event is a large 1000+ BYOC LAN party, so in total there is about 60U worth of servers and 100+U worth of networking equipment per event, all just as expensive.
Servers are mounted in shockproof flightcase racks, are transported only by people who know how, and are at all times kept in a separate area of the event hall which is off limits to anyone but the network crew.

For transport I usually take out the drives and store them in the foam OEM containers they were shipped in, to reduce the overall mass of the system during transport; mass + movement is bad.
We have yet to have hardware fail due to transport; if you do it right and buy the right stuff (expensive flightcases) it should be fine.

Also, as far as law enforcement goes, my server is the least of the problems if they decide to show up again; I dare say all participants will have something on their PC. The last time law enforcement showed up at a LAN in this neck of the woods, only 3 out of 5 large LAN events had to stop due to the massive decrease in visitors, as they were all scared to go; the event I'm with went from 1400 to just 300 participants due to the police that particular year.
 