Commercial NAS vs Supermicro Custom NAS


mason736

Member
Mar 17, 2013
111
1
18
Question to the community...

I'm in the process of planning the next version of my home configuration. I currently use an HP P4300 G2 7.2 TB SAN over iSCSI as my file store. Nothing else runs on the box, as it runs the LeftHand OS. I'm beginning to run short on space, and want to buy/build a larger SAN or NAS box to house all of my personal files, music, movies, TV shows, etc. I run a number of VMs and use Plex as my media hub, but I have a separate server, a C6100, for that purpose, with the VMs housed locally. The new box will only be used for storage.

So my question: is it better to buy an off-the-shelf NAS, such as QNAP or Synology, or purchase a used or couple-generation-old Supermicro/HP/Dell server and run FreeNAS, Windows Storage Server, etc.?
 

j_h_o

Active Member
Apr 21, 2015
644
180
43
California, US
If you're doing this for home use -- with limited downside if things go wrong -- and you're willing to put in the time, I'd build yourself.

How important are a) cost, b) noise, and c) power usage? Also, d) how much space do you need, and e) what kind of performance are you expecting / what kind of throughput do you see under load now?
 

mason736

Member
Mar 17, 2013
111
1
18
A. Cost is a factor, ideally less than $2k.
B. Noise is not an issue.
C. Power usage is not an issue.
D. Storage space would ideally be 12TB or greater of usable space after RAID or redundancy.
E. Performance would be on par with or better than the current P4300 G2 for read speeds. I can saturate a 1Gb link with transfer rates. I don't have 10GbE currently, but that might be a consideration for the future.


Sent from my iPhone using Tapatalk
 

SycoPath

Active Member
Oct 8, 2014
139
41
28
A. Cost is a factor, ideally less than $2k.
B. Noise is not an issue.
C. Power usage is not an issue.
D. Storage space would ideally be 12TB or greater of usable space after RAID or redundancy.
E. Performance would be on par with or better than the current P4300 G2 for read speeds. I can saturate a 1Gb link with transfer rates. I don't have 10GbE currently, but that might be a consideration for the future.


Sent from my iPhone using Tapatalk
I would say build your own in a Supermicro case just for future-proofing. The SM stuff is rock solid and will be easily swappable to a new motherboard if you outgrow it. If you need more drive bays, it's easy to add an external SAS card and connect to another chassis. With a consumer NAS, you're locked to the number of bays it has. Also, 10Gb copper will eventually just start appearing on motherboards by default. I'd like to have the capacity to saturate that later; no current consumer NAS is in that league, and you can't add an expansion card to one.

Also, learning is fun.
 

Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
Expensive & easy & somebody else should provide (mediocre) support -> buy
Cheaper, faster, customized, self/community supported -> build
 

cheezehead

Active Member
Sep 23, 2012
730
176
43
Midwest, US
I would go the used Supermicro route; they have chassis available for basically any need. Then run whatever NAS distro/OS of your choosing.
 

CyberSkulls

Active Member
Apr 14, 2016
262
56
28
45
What's the general consensus on NAS software these days? FreeNAS, unRAID, Storage Spaces?
I've always been an unRAID fan just because it is stupid simple for someone like myself who had always been a Windows guy. I've outgrown their array drive limits (28 data + 2 parity), so I'm heading back to Windows Server with Drivepool rather than Storage Spaces as I'm switching over to 60-bay chassis. I would have preferred to continue with unRAID, but I want to reduce the number of physical machines (motherboards, CPUs, RAM...) I have to run. With Windows I drop down to one main server that runs all my JBOD chassis, holds all my Blu-ray drives for ripping, and was already doing the transcoding for Plex anyway.

I'll be giving FreeNAS a look once v10 stable is released. I've never been one of the ZFS fanboys, as my data is strictly media in a home environment for Plex, and I don't care to run 100+GB of RAM for my 100TB of data when my unRAID servers run happily on 4GB. I also don't lose sleep at night worrying about cosmic rays, so ECC memory, never mind 100GB of it, was never at the top of my must-have list.

I'm not for/against any specific platform. Just telling you what I have, what I use it for, why it does or does not still work for me. Hope that helps you make an informed decision for your build :)


Sent from my iPhone using Tapatalk
 

ttabbal

Active Member
Mar 10, 2016
747
207
43
47
I'll be giving FreeNAS a look once v10 stable is released. I've never been one of the ZFS fanboys, as my data is strictly media in a home environment for Plex, and I don't care to run 100+GB of RAM for my 100TB of data when my unRAID servers run happily on 4GB. I also don't lose sleep at night worrying about cosmic rays, so ECC memory, never mind 100GB of it, was never at the top of my must-have list.

Who told you that you need 100GB of RAM to run ZFS? That's not true and never was. Perhaps if you want to use dedup with 100TB, but that's a bad idea for media storage anyway. I wouldn't personally go below 8GB, but that's more for performance reasons and because I run VMs.

ECC is a similar situation. Some people say it's a must, but the people who designed ZFS say it isn't. Either way, I prefer my servers to run ECC. It doesn't cost significantly more, and is sometimes cheaper, so why not? That said, I've run ZFS on non-ECC and never had a problem. It seems to me it's at least as reliable as any other filesystem on non-ECC.

As for what OS/distro/etc. for a home build: if you are only going to run a file server on it, FreeNAS is hard to beat. It's easy to set up and use, reliable, and works well for the job. OpenSolaris-based setups like OmniOS with napp-it are quite good as well. I've never used it, but unRAID is a very popular option too. What I didn't like about it was that I'd be stuck on unRAID if I used it. ZFS is supported by a lot of operating systems and is mostly cross-compatible; the only exception I'm aware of is commercial Solaris, whose latest ZFS version doesn't work with the open-source implementations. But I wouldn't recommend commercial Solaris for a home user anyway.

For home use, the biggest thing to consider first is the number of drives and the redundancy level, then performance. Some people use non-redundant primary storage with backups; some prefer to have online redundancy. Are you willing to replace a drive and wait for the backup to restore, with the system down while doing so? If so, that's a worthy option. If not, you have to consider parity-based arrays or mirrors. You might want to consider ease of expansion as well; ZFS isn't as nice there if you use parity-based arrays, and with mirrors you add/replace in pairs.

Storage Spaces is new and doesn't have the track record the other setups do, so it's hard to give any real info on it. The best we can do is say that some people have had luck with it. I haven't heard any horror stories either, though, so that's something. But you also have to buy Windows Server to use it, so factor that into your decision making.
 

CyberSkulls

Active Member
Apr 14, 2016
262
56
28
45
Who told you that you need 100GB of RAM to run ZFS? That's not true and never was. Perhaps if you want to use dedup with 100TB, but that's a bad idea for media storage anyway. I wouldn't personally go below 8GB, but that's more for performance reasons and because I run VMs.

ECC is a similar situation. Some people say it's a must, but the people who designed ZFS say it isn't. Either way, I prefer my servers to run ECC. It doesn't cost significantly more, and is sometimes cheaper, so why not? That said, I've run ZFS on non-ECC and never had a problem. It seems to me it's at least as reliable as any other filesystem on non-ECC.
That's just the info I read on their support pages. I don't claim to know jack about FreeNAS, but it's also why I said I wanted to check out FreeNAS 10 when it's released as stable. I want to try it with, say, 16GB of RAM as a simple file server and see how it does.

With the ECC thing, I will go ECC when I build a new system, but I didn't want to buy more DDR3 stuff when I'll be transitioning to DDR4 with a new system anyway.

So I will take your advice and give it a try on a much smaller amount of ram when 10 comes out and see what I get.



Sent from my iPhone using Tapatalk
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Memory demand from ZFS (FreeNAS or otherwise) is almost completely dependent on the use of deduplication. With dedup active you need a large memory footprint; with it off, a 16GB system will perform just fine.

FreeNAS currently defaults to dedup off for all newly created pools.
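To put rough numbers on why dedup is so memory-hungry: each unique block carries an in-core dedup table (DDT) entry, commonly cited at around 320 bytes. A back-of-the-envelope sketch (the entry size and recordsize here are assumptions; actual usage varies by pool and ZFS version):

```python
def ddt_ram_gb(data_tb, recordsize_kb=128, bytes_per_entry=320):
    """Rough in-core dedup table (DDT) size for a ZFS pool.

    bytes_per_entry ~320 B is the commonly cited figure for an
    in-core DDT entry; treat the result as an order-of-magnitude
    estimate, not a guarantee.
    """
    blocks = (data_tb * 1024**4) / (recordsize_kb * 1024)  # unique blocks
    return blocks * bytes_per_entry / 1024**3              # bytes -> GiB

# 100 TB of unique data at the default 128K recordsize:
print(round(ddt_ram_gb(100), 1))  # -> 250.0 (GiB for the DDT alone)
```

That's where the scary "100+GB of RAM" numbers come from; with dedup off, none of this applies and the ARC simply uses whatever RAM you give it.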
 

CyberSkulls

Active Member
Apr 14, 2016
262
56
28
45
Memory demand from ZFS (FreeNAS or otherwise) is almost completely dependent on the use of deduplication. With dedup active you need a large memory footprint; with it off, a 16GB system will perform just fine.

FreeNAS currently defaults to dedup off for all newly created pools.
Good to know. Thanks!!


Sent from my iPhone using Tapatalk
 

cheezehead

Active Member
Sep 23, 2012
730
176
43
Midwest, US
With regards to the ECC heartburn and FreeNAS: if you're buying your hardware used, DDR3 memory is cheap these days and DDR4 prices are dropping. As for how much RAM, it really comes down to how much storage you'll have on the back end. In general I'll do 8GB minimum + 1GB for every raw TB over 4TB.
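That rule of thumb is easy to sanity-check in a couple of lines (this just encodes the 8GB + 1GB-per-raw-TB-over-4TB heuristic above; it's a sizing guideline, not a ZFS requirement):

```python
def freenas_ram_gb(raw_tb, base_gb=8, per_tb_gb=1, free_tb=4):
    """8 GB minimum, plus 1 GB for every raw TB beyond the first 4 TB.
    A community rule of thumb for a ZFS file server, not a hard limit."""
    return base_gb + max(0, raw_tb - free_tb) * per_tb_gb

print(freenas_ram_gb(24))  # 24 TB raw -> 28 GB recommended
print(freenas_ram_gb(2))   # small pools -> the 8 GB floor
```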

FreeNAS 10 with ZFS and the virtualization enhancements will be nice, but I'll be taking a look at other solutions. With ZFS you cannot change your RAID sets on the fly, and expansion options are more limited; i.e. if I have a 6-drive RAIDZ2 and want to add to it, I'll need another 6 drives, and unless I destroy the RAID set and rebuild it there is no way to change it into a 12-drive RAIDZ2. Some have recommended doing a series of mirrors, which works around this to a point, but it's still more limited.
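A quick capacity sketch makes that expansion trade-off concrete (the drive size and layouts below are purely illustrative; expansion here means adding a whole new vdev, since a RAIDZ2 vdev itself can't be widened):

```python
def usable_tb(drive_tb, layout):
    """Usable capacity for a list of ZFS vdevs.
    Each vdev is ('raidz2', n_drives) or ('mirror', n_drives)."""
    total = 0
    for kind, n in layout:
        if kind == 'raidz2':
            total += (n - 2) * drive_tb   # two parity drives per vdev
        elif kind == 'mirror':
            total += drive_tb             # one copy's worth per mirror
    return total

# 6-drive RAIDZ2 of 4TB drives, then "expanding" by adding a second vdev:
print(usable_tb(4, [('raidz2', 6)]))                 # 16 TB usable
print(usable_tb(4, [('raidz2', 6), ('raidz2', 6)]))  # 32 TB, but 6 drives at once
# Mirrors grow two drives at a time, at a lower space efficiency:
print(usable_tb(4, [('mirror', 2)] * 6))             # 24 TB from the same 12 drives
```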

Storage Spaces is fine with mirrors; anything else and you'll need to do some performance tuning and SSDs to get it to run well.
 

ttabbal

Active Member
Mar 10, 2016
747
207
43
47
FreeNAS 10 with ZFS and the virtualization enhancements will be nice, but I'll be taking a look at other solutions. With ZFS you cannot change your RAID sets on the fly, and expansion options are more limited; i.e. if I have a 6-drive RAIDZ2 and want to add to it, I'll need another 6 drives, and unless I destroy the RAID set and rebuild it there is no way to change it into a 12-drive RAIDZ2. Some have recommended doing a series of mirrors, which works around this to a point, but it's still more limited.
That's the downside, particularly for home users. Enterprise types don't tend to mind. For me, I'm willing to deal with it for the known stability and reliability. I try to remember to point it out to people, though. It's not a nice surprise to get.

While parity arrays like RAIDZ2 are great, unless you're using really nice SSDs, or have a lot of them, you'll never max out 10GbE. If 1GbE is all you want, with low IOPS, it'll work fine. If you have more than a few clients playing media and using it as general file storage, it can become an issue even at 1Gbps. That's why I'm on mirrors now. That, and I like being able to upgrade/replace two drives at a time rather than six. It's all trade-offs. The ultimate in flexibility and speed is RAID0, but one failure takes out the whole thing... :)
 

fractal

Active Member
Jun 7, 2016
309
69
28
33
I can't speak for the other applications but for NAS4Free / FreeNAS, moar memory is moar better IF you are doing a lot of random accesses OR if you want to run dedup.

But, for a media server that is almost exclusively non-repeating sequential access? You don't need much more than what the OS requires.

FreeNAS refuses to install on systems with less than 8GB but will run in 4GB if you move the USB drive after installing. NAS4Free doesn't mind installing in 4GB. I personally would not use less than 8GB, and 16GB is a safe number for a home user who is not trying to do anything fancy.

I have 4 x 6TB WD Reds in a raidz1 volume in an HP MicroServer that won't saturate a GigE port, but it is more than fast enough for streaming old I Love Lucy episodes. 18TB usable in a small, cheap box works for me, and it takes ECC memory to boot.
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,714
520
113
Canada
The sweet spot on my home setup seems to be around the 12GB mark of RAM; beyond that it's very much diminishing returns for me. It runs fine with 8GB, but I see a little dip in read speed over the wire. Anything beyond 12GB, even if it were actually faster, can't do anything for me, as I'm already maxing my 1Gbit connections at that point. I'm not doing much with ZFS other than raidz2 storage with a few zvols shared over iSCSI / Samba :)
 

CyberSkulls

Active Member
Apr 14, 2016
262
56
28
45
I was hoping someone would comment on plain ole Debian and mergerfs. I had thought about that myself as a dead-simple file server.


Sent from my iPhone using Tapatalk