Building 50-100TB NAS/RAID Backup Server for Genomic Lab


SmartBugger

New Member
Hello all!

I am researching storage options for 50-100TB of usable storage space for a genomic lab. Essentially, I am trying to get an understanding of best practices and the cost of building such a server. Right now I am in the early stages of investigating solutions, so I can work out which direction we should focus on.

Note: I initially made a post on LinusTechTips and then discovered this community and its articles. Thought it would be beneficial to post here.

Key Considerations/Details:
1. The server will be used as a backup for compressed and encrypted genomic data. Its main purpose will be backing up files, which individually can be quite large (250-500GB). Generally speaking, this isn't something that will be accessed frequently, and certainly nothing that we will be running computationally demanding programs on (e.g. no editing of 4K video). It simply needs to transfer large files in a reasonable amount of time.

2. Data redundancy is very important, as the data is quite valuable. We don't want a RAID setup where rebuilding could take weeks. Additionally, this server won't be the only backup (we are currently planning to also have an off-site cloud backup solution).

3. Ability to scale. I was asked to investigate the cost/typical-solutions for 50-100TB. For our current needs, I suspect 25TB is adequate. So being able to create a server for 25TB and then scale it to 100TB with little notice/effort would be a huge plus.

4. Noise and form-factor. This will be held in a lab environment. I know typical server-racks can sound like mini-jet engines. I am also a bit concerned about the form-factor of a server. That being said, I am open to all suggestions and realize a normal server may be most cost effective.

5. We may also use this server to make daily backup copies of users' home directories on the cluster (e.g. rsync). Additionally, I am intrigued by the possibility of having the backup server also act as a Git server. This may make latency concerns more important.
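For what it's worth, a minimal sketch of the kind of nightly job I have in mind (hostnames and paths are placeholders):

```python
import subprocess

# Placeholders: "cluster" is the HPC login node (ssh key auth assumed);
# /volume1/backups/home/ would be a directory on the backup server.
SRC = "cluster:/home/"
DST = "/volume1/backups/home/"

# -a: archive mode (permissions, times, symlinks); -z: compress in transit;
# --delete: mirror deletions so the copy tracks the source.
subprocess.run(["rsync", "-az", "--delete", SRC, DST], check=True)
```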


Again, I am just looking for general information so I can make more informed inquiries in the future.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Given that you've emphasized the importance of the data, you might be better off purchasing a commercial off-the-shelf solution. You don't want to be in a position where you have to support something that's not particularly easy or user-friendly (i.e. ZFS) and potentially get blamed for any issues that come up. You will pay a bit of a premium for a pre-built system, but I think it might be worth it.

Since your workload doesn't seem to be too demanding, you might be well served by something as simple as a Synology DS1819+. Start off with, say, 4 large drives and add more as you need them. You will probably want a 10GbE NIC as well to help with throughput.

Most pre-built NAS units offer some means of running packages/containers or virtual machines, so getting Git set up is quite simple. Synology offers it, and so does the competition like QNAP.

With 4 x 16TB drives (32TB usable in RAID6) and the Synology I mentioned, you'd be looking at ~$2400 to start.
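For reference, the capacity math behind that figure, and what the same 8-bay unit scales to once you fill it (a quick sketch using the drive size mentioned above):

```python
def raid6_usable_tb(drives: int, size_tb: int) -> int:
    # RAID6 reserves two drives' worth of capacity for parity.
    return (drives - 2) * size_tb

print(raid6_usable_tb(4, 16))  # 32 TB to start
print(raid6_usable_tb(8, 16))  # 96 TB with every bay populated
```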
 

SmartBugger

New Member
Thank you, BlueFox, for your feedback.

I am a bit surprised you didn't recommend something like the Storinator as a commercial solution. That is a product I've seen around, and before making this post I thought it would be the "default" recommendation.

I'm essentially intrigued as to why you would recommend the DS1819+ over something like the Storinator Q30. Is it that the Q30's CPU (often a Xeon) and ECC RAM are viewed as unnecessary for our usage? Or is your recommendation really about keeping the price low (the Q30 plus a rack will likely add $4k more)?

Thank you!
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Given how much space you need, I don't see how it would be of any benefit to get one of those. You can still get to ~100TB with 8 drive bays these days (with two-drive redundancy). The Storinator is also going to be large and loud, which I presume you don't want as it sounds like you'll be putting this in an office environment. I'm of the opinion that the quality isn't all that great either compared to large OEMs like Supermicro, Dell, HPE, etc. (though the motherboard is fine).

The DS1819+ actually does support ECC RAM (thanks to the enterprise Atom CPU it uses), so you could easily swap over to that (it's fairly inexpensive). If you're just using it for moving large files around and have small workloads (like Git, Docker containers, small VMs, etc.), you won't need much CPU.

Synology is not the only vendor with this type of storage appliance/server; I just think their product is better in some ways than the competition's. I would have a look at the other available offerings, however, to see if one might be more suitable for you.
 

ReturnedSword

Active Member
For your use case of doing daily backups, a rack mount such as the Storinator may be overkill. Certainly a Xeon is not needed for that. You’d also need to consider the self-support costs due to the added complexity of running a rack-mount system. I don’t recall 45 Drives offering such support.

An off-the-shelf unit such as a Synology or QNAP should be able to handle daily backups just fine. Keep in mind, though, that this should only be considered one point in your backup strategy. If your data is important, it should have a third copy (the workstation and first NAS being the first and second copies) backed up elsewhere, ideally off-site.
 

SmartBugger

New Member
The Storinator is also going to be large and loud, which I presume you don't want as it sounds like you'll be putting this in an office environment.

It actually will be a lab, which isn't exactly quiet (there are walk-in fridges, centrifuges, etc.... all things that make noise). What concerned me was that server farms can be quite loud... loud enough that you need to speak very loudly for others to hear you. That's why I was intrigued by the Q30, which is whisper-quiet according to their site.


I'm of the opinion that the quality isn't all that great

That's important to know.

I appreciate the feedback.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
I'm not sure how they claim to achieve that, given how much air one needs to move to maintain low drive temperatures with such a layout. Normally I would recommend getting a second unit to mirror any critical data; however, it seems you'll already be doing that and that this won't be your primary storage location (I think a $1000 Synology NAS might not be the best thing for that).

In terms of quality, Synology is not as good as the large OEMs, but adequate.

Since I haven't mentioned any numbers yet, if you have 10GbE, I would expect a 500GB transfer to take no more than 15 minutes. Is that adequate?
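That figure comes from a rough estimate along these lines (the 70% efficiency factor is my assumption for protocol overhead and the disks keeping up):

```python
link_gbps = 10        # nominal 10GbE line rate
efficiency = 0.7      # assumed real-world efficiency (protocol overhead, disk speed)
throughput_gb_per_s = link_gbps / 8 * efficiency  # ~0.88 GB/s effective

file_gb = 500
minutes = file_gb / throughput_gb_per_s / 60
print(f"~{minutes:.0f} minutes")  # ~10 minutes; 15 is a comfortable upper bound
```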

Hopefully some others on here provide input as well since I'm not the end-all of NAS recommendations.
 

SmartBugger

New Member
Since I haven't mentioned any numbers yet, if you have 10GbE, I would expect a 500GB transfer to take no more than 15 minutes. Is that adequate?

Yes, that should be fine. The analysis we do isn't something we interact with in real time. It's mostly statistical and machine-learning applications, where we load data onto a cluster, run a script for hours/days, and then come back and download the results.

Normally I would recommend getting a second unit to mirror any critical data; however, it seems you'll already be doing that and that this won't be your primary storage location

It actually will be.

So, our clusters only give us 10GB of "backup storage" (the scratch allowance is huge, at 10TB, and can be increased). Thus, we will need to move files from our NAS to run any sort of analysis.

I was planning on having this NAS be the main storage solution and then using AWS/Google/etc. as an off-site online storage solution. I realize this doesn't follow the 3-2-1 mantra, and I am interested in any additional feedback on this point.
 

ari2asem

Active Member
Key Considerations/Details:
1. The server will be used as a backup for compressed and encrypted genomic data. Its main purpose will be backing up files, which individually can be quite large (250-500GB). Generally speaking, this isn't something that will be accessed frequently, and certainly nothing that we will be running computationally demanding programs on (e.g. no editing of 4K video). It simply needs to transfer large files in a reasonable amount of time.
I would go with an EPYC Rome CPU on a single-socket ATX mainboard, with ECC registered memory, a 40Gbit network card, NVMe storage for the boot OS, and a 12Gbps HBA card.

2. Data redundancy is very important, as the data is quite valuable. We don't want a RAID setup where rebuilding could take weeks. Additionally, this server won't be the only backup (we are currently planning to also have an off-site cloud backup solution).
You can use SnapRAID for this:
snapraid.it
The big disadvantage of SnapRAID is that you have to run SYNC and SCRUB manually; there is no support for automatic snapshots like ZFS. Without SYNC and SCRUB there is no data protection. But there are SnapRAID helper scripts that automate these things for you via task schedulers (see the sketch after the list of advantages below).

Advantages: parity can go up to 6 disks.
You can mix and use any HDDs you want (any size, with or without data on them).
If you lose a disk beyond what parity can recover, you only lose the data on that disk (files are not striped across the array).
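A minimal sketch of such a helper (assumes SnapRAID is already installed and configured; the 5% scrub plan is just an example), to be run nightly from cron or a task scheduler:

```python
import subprocess

def run(args):
    # Echo the command, then fail loudly if SnapRAID reports a problem.
    print("+", " ".join(args))
    subprocess.run(args, check=True)

run(["snapraid", "sync"])              # commit new/changed files to parity
run(["snapraid", "scrub", "-p", "5"])  # verify 5% per night -> full pass in ~3 weeks
```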


3. Ability to scale. I was asked to investigate the cost/typical-solutions for 50-100TB. For our current needs, I suspect 25TB is adequate. So being able to create a server for 25TB and then scale it to 100TB with little notice/effort would be a huge plus.

4. Noise and form-factor. This will be held in a lab environment. I know typical server-racks can sound like mini-jet engines. I am also a bit concerned about the form-factor of a server. That being said, I am open to all suggestions and realize a normal server may be most cost effective.
A rack-server case with 24 bays and enterprise-grade 10TB HDDs (like the HGST DC series).

5. We may also use this server to make daily backup copies of users' home directories on the cluster (e.g. rsync). Additionally, I am intrigued by the possibility of having the backup server also act as a Git server. This may make latency concerns more important.


Again, I am just looking for general information so I can make more informed inquiries in the future.
 

SmartBugger

New Member
I would go with an EPYC Rome CPU on a single-socket ATX mainboard, with ECC registered memory, a 40Gbit network card, NVMe storage for the boot OS, and a 12Gbps HBA card.

For my use case, I take it you believe a low-end 8-core EPYC Rome CPU with a modest amount of RAM (8GB) would be sufficient. Is that assumption correct?

You can use SnapRAID for this.

Another disadvantage would be the read/write speeds. I suspect this can be mitigated by having a large SSD cache. But I am interested in why you would choose SnapRAID over something like Unraid. Is it because it validates the data with checksums?

A rack-server case with 24 bays and enterprise-grade 10TB HDDs (like the HGST DC series).

As far as case manufacturers/products go, are there any that you recommend?

Lastly, from doing a cursory lookup of prices, it seems like this solution may be around $3k (excluding the storage drives and a cabinet to hold the server). Am I ball-parking this figure correctly? Just trying to get a general idea of the costs associated with your recommendation.

Thank you! I really appreciate all the feedback.
 

ari2asem

Active Member
If your server is REALLY and ONLY for backup, you can use an 8- or 16-core EPYC Rome. But ALWAYS use ECC registered memory.

As for SnapRAID... yes, for checksums and silent bit rot. Unraid is not a good idea for your use case (high demands for redundancy and integrity).
For me, SnapRAID's big advantage is being able to use HDDs of any size and any type, with or without data.
Read/write speed with SnapRAID depends on the specifications of the HDDs you use, but my experience with different HDDs in my SnapRAID build is that the slowest HDD negatively impacts the speed of SYNC and SCRUB.

As for a server rack case, just as an example: my personal choice would be a case with an expander in the backplane, because then you only need one cable between the backplane (which has the expander) and the HBA card. You then only need an HBA with a single port.

 

BlueFox

Legendary Member Spam Hunter Extraordinaire
I don't think SnapRAID is suited to anything beyond home use. There's no commercial support, and you really don't want to be stuck supporting something like that. I would recommend against it for your use.
 

gigatexal

I'm here to learn
Given that you've emphasized the importance of the data, you might be better off purchasing a commercial off-the-shelf solution. You don't want to be in a position where you have to support something that's not particularly easy or user-friendly (i.e. ZFS) and potentially get blamed for any issues that come up. You will pay a bit of a premium for a pre-built system, but I think it might be worth it.

Since your workload doesn't seem to be too demanding, you might be well served by something as simple as a Synology DS1819+. Start off with, say, 4 large drives and add more as you need them. You will probably want a 10GbE NIC as well to help with throughput.

Most pre-built NAS units offer some means of running packages/containers or virtual machines, so getting Git set up is quite simple. Synology offers it, and so does the competition like QNAP.

With 4 x 16TB drives (32TB usable in RAID6) and the Synology I mentioned, you'd be looking at ~$2400 to start.
This. Pay the price for commercial support. You don't want to be the guy/gal who lost data because of a DIY build.
 

gea

Well-Known Member
Without your own know-how, and with a lot of money, call your nearest NetApp or Oracle sales rep and request an offer. If you are looking for a high-quality storage solution for less money, look at ZFS and commercial offerings, e.g. from Nexenta or iXsystems, with full hardware/software support.

If you are looking for a high-quality solution (hardware and software) for little money, buy quality hardware, e.g. from Supermicro (Intel CPU and NIC, ECC, LSI HBA storage adapter), from a trusted local vendor, at least for the mainboard. Best is a 19" case with redundant PSUs and 16, or better 24, disk bays (quite loud, but with the best cooling and reliability), or a quieter "home" case with a few disk bays (up to eight) and ultra-high-capacity disks. I would suggest placing the server outside the lab and using professional 19" dual-PSU equipment. Start with a single ZFS RAID-Z2 vdev of up to 8 disks; with 10TB disks this gives 60TB usable. In a 24-bay case you can add another 8 disks (120TB), and reach 180TB if you fill up all the bays. In a 16-bay case you are limited to 120TB (more if you use disks larger than 10TB). If your data suddenly explodes, move the disks to a 60/90-disk case and go to petabytes.
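The capacity steps, for reference (a small sketch; each 8-disk RAID-Z2 vdev loses two disks to parity, and vdevs are striped together in the pool):

```python
def raidz2_pool_tb(vdevs: int, disks_per_vdev: int = 8, disk_tb: int = 10) -> int:
    # Each RAID-Z2 vdev contributes (disks - 2) data disks to the pool.
    return vdevs * (disks_per_vdev - 2) * disk_tb

for vdevs in (1, 2, 3):
    print(f"{vdevs} vdev(s): {raidz2_pool_tb(vdevs)} TB usable")
# 1 vdev(s): 60 TB usable   (8 bays)
# 2 vdev(s): 120 TB usable  (16 bays)
# 3 vdev(s): 180 TB usable  (24 bays)
```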

As the OS for a ZFS storage server, you can use a regular storage-optimized OS, e.g. the commercial Solaris with native ZFS from the inventor of ZFS. This is the fastest solution, with commercial support beyond 2034 (from around 800 USD per year). Cheaper, with nearly the same quality of ZFS integration, are the free Solaris forks, e.g. OmniOS Community Edition, my favourite OS for a production storage server due to its stable/long-term-stable releases, regular security fixes, and a commercial support option.

One of the main features of Solarish is that it "just works", along with its superior multithreaded kernel/ZFS-based SMB server with best-in-class integration of Windows- and NTFS-style permissions.

I have written some manuals on the concept, with hardware examples; see napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Manual

Such a solution will scale from gigabytes to petabytes "on the fly", with superior data security and an ultra-fast RAID rebuild when needed. To avoid a disaster situation (fire, theft), add a second, nearly identical system at a different physical location/building area for backup/redundancy, and keep them in sync with ZFS replication (which can keep petabytes under high load in sync with a delay down to a minute).
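As a rough illustration of the replication loop (dataset and host names are examples; a real script would also track and rotate the previous snapshot):

```python
import datetime
import subprocess

DATASET = "tank/genomics"  # example dataset name
REMOTE = "backup-host"     # example backup machine (ssh key auth assumed)

# Take a new snapshot, then send only the delta since the last replicated one.
new_snap = f"{DATASET}@repl-{datetime.datetime.now():%Y%m%d-%H%M}"
subprocess.run(["zfs", "snapshot", new_snap], check=True)
subprocess.run(
    f"zfs send -i {DATASET}@repl-prev {new_snap} | ssh {REMOTE} zfs receive {DATASET}",
    shell=True, check=True,
)
```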
 

ari2asem

Active Member
...If you are looking for a high-quality solution (hardware and software) for little money, buy quality hardware, e.g. from Supermicro (Intel CPU and NIC, ECC, LSI HBA storage adapter)...
I can't imagine you advise an Intel CPU, given the Meltdown and Spectre issues with many Intel CPUs.

If you take security seriously, stay away from any kind of Intel CPU (new or old generations).
Use AMD (Ryzen, Threadripper, or EPYC).
 

gea

Well-Known Member
I do use a lot of AMD in my render farm due to their superior performance per dollar. But for a storage server you will hardly find professional AMD solutions; their main target market is (currently) home/gaming/3D.

Newer Xeons and newer OS releases address the security problems (some of which also affect AMD). So it is not a real problem (and there is no alternative); it is mainly a matter of OS maintenance. If you use a generic enterprise OS (and not a security-frozen storage appliance), you are quite safe, as bugs and security problems get fixed often and quickly; see for example omniosorg/omnios-build
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
OP, you should take the above post with a grain of salt as they have a vested commercial interest in pitching ZFS and do so at every single opportunity.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
What do you mean??
Pretty much exactly what it sounds like. They run napp-it.org and sell commercial services there. They shill it here and on other forums constantly. They could at least disclose the fact that they could potentially be profiting from their recommendations.
 

ReturnedSword

Active Member
Nothing wrong with ZFS, tbh. I prefer it myself, though I can say that without commercial support it's not something I would recommend for regular admins, especially if your job depends on it. It's much better to shift that responsibility to a commercial vendor, of course after doing proper research on the pros and cons and receiving buy-in from the relevant business/department owners.

For this purpose, as long as there’s a third, off-site backup (which you mentioned), an off-the-shelf NAS is more than sufficient.

On the topic of security vulnerabilities, the reason I've moved to all-AMD systems for workstations is that I run applications where I don't want to be bothered by the performance hits introduced by the microcode fixes. For a storage server, IMHO even if there's a performance hit it's hardly a problem, unless the storage server is used as a converged server hosting more than a few things besides serving files. As much as I'd like to migrate to all-AMD platforms, there are some use cases where I just can't due to design constraints. A NAS or storage server is one of them.