Commercial NAS vs Supermicro Custom NAS


pricklypunter

Well-Known Member
Nov 10, 2015
Canada
I haven't used it, but I did look at using it when I built my system, primarily because it's Debian-based. I remember it had a really nice looking user interface, which funnily enough reminded me a lot of Webmin. In the end, I went with Debian and ZoL, figuring that I could roll my own storage box without any of the bells and whistles that I'll likely never need or use. My storage VM does nothing more than present ZVols to my other VMs via iSCSI/Samba. The other VMs are where I do anything that might involve data manipulation. The result, of course, is a more modular approach, which obviously requires more administration on my part, but it also means that I have a very small footprint handling my storage needs. My thought behind that approach was "keep it simple and ultra reliable" :)
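
A minimal sketch of that kind of zvol setup, assuming ZoL with a pool named "tank" (the pool, volume name and sizes here are all illustrative):

# create a sparse 200G zvol with a block size suited to VM disks
zfs create -s -V 200G -o volblocksize=16k tank/vm1
# the volume appears under /dev/zvol/tank/vm1 and can then be exported
# as an iSCSI LUN (e.g. via targetcli on Debian) or shared over Samba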
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
If you are looking at just a storage box, I would definitely say roll your own :)
Here is my file server build using a Supermicro chassis & motherboard. I also go into detail about backup and replication. I use regular FreeBSD (I don't need a nice GUI to hide the CLI from me), but the same hardware should work fine for other operating systems.

I recommend lots of RAM for ZFS even if you aren't doing deduplication. In regular use it will be used as cache, but where it really helps is in resilvers when you're replacing a drive. I've gotten resilver speeds of > 4 GByte/sec on a pool with 20TB of data (larger hardware than the above system, of course). You really don't want your system down or in a degraded state for longer than necessary because it's short on RAM, and you definitely don't want to throw more RAM into the system just for the rebuild and then discover you have memory problems.
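
For reference, replacing a drive and watching the resilver might look like this on FreeBSD (pool and device names are placeholders, and sysctl/tunable names vary across ZFS versions):

# swap in the new disk and watch resilver progress and speed
zpool replace tank da3 da8
zpool status tank
# check the current ARC size against the configured maximum
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max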
 

mason736

Member
Mar 17, 2013
Does unRAID provide enough read performance to handle multiple 1080p or 4K Plex streams? The Plex server will be running on a different, dedicated VM, just accessing the media on the unRAID server.
 

CyberSkulls

Active Member
Apr 14, 2016
mason736 said:
Does unRAID provide enough read performance to handle multiple 1080p or 4K Plex streams? The Plex server will be running on a different, dedicated VM, just accessing the media on the unRAID server.

unRAID reads from a single disk, so it can handle whatever the max read speed is on said drive. Now if, for example, you were pulling multiple files from multiple drives, your speed would be capped at whatever your network can handle. Which in the real world means I can't see you maxing out your bandwidth unless you were trying to run 20 streams at once or something crazy like that.


Sent from my iPhone using Tapatalk
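
To put rough numbers on that (typical bitrates, not figures from the thread): a 1080p Plex stream runs around 10-20 Mbit/s, a 4K remux around 50-80 Mbit/s, a single modern HDD reads sequentially at roughly 150 MByte/s (~1,200 Mbit/s), and gigabit Ethernet tops out near 940 Mbit/s usable.

# back-of-the-envelope: how many ~50 Mbit/s 4K streams fit through gigabit?
echo $(( 940 / 50 ))   # ~18 streams before the NIC, not a single disk, saturates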
 

mason736

Member
Mar 17, 2013
CyberSkulls said:
unRAID reads from a single disk, so it can handle whatever the max read speed is on said drive. Now if, for example, you were pulling multiple files from multiple drives, your speed would be capped at whatever your network can handle. Which in the real world means I can't see you maxing out your bandwidth unless you were trying to run 20 streams at once or something crazy like that.
Thanks!
 

gea

Well-Known Member
Dec 31, 2010
DE
If each stream is delivered from a different disk, you will not see problems with several streams.
If you play multiple streams from the same disk, the disk head must be repositioned for the different streams, which means you are IOPS-limited unless you have some sort of read cache that removes load from the disks.

If you compare this to a realtime RAID system like RAID-6/Z2, where all data is striped over several disks, you will face the same problem. While sequential performance there can scale with the number of data disks, the IOPS of a RAID or ZFS vdev is like that of a single disk as well. The main advantage of ZFS would be the superior RAM/SSD cache options. On ZFS you can additionally scale IOPS with the number of vdevs.

... beside the crash-resistant realtime RAID protection of ZFS, with self-healing, snaps/versions and checksums.
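
To illustrate the vdev point, here are two alternative layouts for the same six disks (placeholder device names; you would create one or the other, not both):

# one raidz2 vdev: best usable capacity, but roughly the IOPS of one disk
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# three mirror vdevs: half the capacity, roughly 3x the random-read IOPS
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5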
 

mason736

Member
Mar 17, 2013
gea said:
If each stream is delivered from a different disk, you will not see problems with several streams.
If you play multiple streams from the same disk, the disk head must be repositioned for the different streams, which means you are IOPS-limited unless you have some sort of read cache that removes load from the disks.

If you compare this to a realtime RAID system like RAID-6/Z2, where all data is striped over several disks, you will face the same problem. While sequential performance there can scale with the number of data disks, the IOPS of a RAID or ZFS vdev is like that of a single disk as well. The main advantage of ZFS would be the superior RAM/SSD cache options. On ZFS you can additionally scale IOPS with the number of vdevs.

... beside the crash-resistant realtime RAID protection of ZFS, with self-healing, snaps/versions and checksums.
For the media I'm going to be storing, I would rather have the disk space than all of the redundancy that comes with a RAID 6 or 10 array. The data is merely movies and TV shows that can be re-downloaded or ripped if necessary. My crucial files will be kept on the HP P4300 G2 that is set up to run in RAID 6.
 

ttabbal

Active Member
Mar 10, 2016
mason736 said:
For the media I'm going to be storing, I would rather have the disk space than all of the redundancy that comes with a RAID 6 or 10 array. The data is merely movies and TV shows that can be re-downloaded or ripped if necessary. My crucial files will be kept on the HP P4300 G2 that is set up to run in RAID 6.

I was initially going to do that, but then I considered the time involved in re-ripping/downloading everything and the irritation from wife/kids while doing so. I also didn't want to deal with segregating the storage. One of the big wins in a pooled setup like ZFS, for me, is that there's only really one pool to maintain. One big RAID10 to rule them all, as it were... :) It helps that I have a 24-bay Supermicro case, so throwing more spindles at the problem wasn't a big deal, particularly as I got a bunch of cheap spindles to use for it.

One trick with media on ZFS arrays you might consider is enabling L2ARC for sequential reads. It will cache active files to the SSDs, however many you choose to use for cache, relieving IOPS pressure on the spindles. I've seen a couple of big raidz1 pools set up that way.
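
The knob being described there is presumably the OpenZFS l2arc_noprefetch tunable; a sketch for FreeBSD follows (the cache device name is illustrative, and on Linux the same setting is a zfs module parameter instead):

# add an SSD as an L2ARC cache device
zpool add tank cache ada4
# L2ARC skips prefetched (sequential) reads by default; allow caching them
sysctl vfs.zfs.l2arc_noprefetch=0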
 

mason736

Member
Mar 17, 2013
What Supermicro chassis or server would people recommend for unRAID high-density LFF storage?


Sent from my iPhone using Tapatalk
 

cheezehead

Active Member
Sep 23, 2012
Midwest, US
mason736 said:
What Supermicro chassis or server would people recommend for unRAID high-density LFF storage?
16-bay (3U): SC836/CSE836
24-bay (4U): SC846/CSE846
36-bay to 90-bay (4U): SC847/CSE847 ... variants with more than 36 bays are not common.

The 836 and 847 both have variants which include two additional 2.5" sleds in the rear. Sometimes they are the Supermicro version, sometimes they are 3rd-party integrated ones. Some eBay sellers out of California sell the ones with 3rd-party rear bays, which work, but you'll have some CPU cooler height issues if you're running a dual-CPU setup.

All of them come with a myriad of different backplane options, so please research which style would best fit your needs (the same goes for power supplies). Unless you have spare drive caddies lying around, make sure it comes with the sleds. If you're planning on rack-mounting the chassis, try to get one with rails (inner and outer included).

The above chassis have been shipping for over 10 years and are fully modular; note that early SAS1 variants do have a 2TB capacity limit per LFF drive.