[Feedback sought] Enthusiastic NAS build [4.5y usage update 2019-12]


canta

Well-Known Member
Nov 26, 2014
1,012
216
63
43
What motherboard do you have? I'm trying to decide between a C2750 mini-ITX and an E3-1230V3. 31 watts, from my understanding, is Atom land. If the 1230V3 can idle down to the same watts as a C2750, I would much rather have a 1230V3.
I transplanted an i3-4130 together with the motherboard that came from a Lenovo TS140,
with 4x4GB of DDR3L unbuffered ECC RAM added.
31W is not always Atom territory...

During the build, with 9 x 3TB (ahem) Barracudas with APM disabled and one 64GB SSD as the OS drive:
17-20W when the system idles with only the SSD attached (none of the 3TB drives), running CentOS 7 in a Supermicro 2U case with five Supermicro fans plus one extra for HBA cooling.
65-70W at idle with all drives installed.
Interestingly, the system shoots up to ~120W or more when a scrub or a CPU burn-in test is running.
The system runs on a 500W Platinum PSU.
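For anyone wanting to replicate the APM tweak on a Linux box like this CentOS 7 setup, a minimal hdparm sketch (the device name is a placeholder, and not every drive honours APM commands):
Code:
hdparm -B /dev/sdb       # query the current APM level
hdparm -B 255 /dev/sdb   # 255 disables APM entirely; 254 keeps it at maximum performance without spin-down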

My suggestion:
if you are space-constrained (1U), go with an Atom/Celeron/Pentium SoC motherboard;
if space is less tight (2U or more), an i3 or E3 is a good candidate.
 

MacLemon

New Member
Feb 16, 2015
20
19
3
The 20.00.04.00 rebuild has been working solidly for me on a number of different controllers / chipsets.
Here is the LSI knowledgebase article.
Thanks a bunch for the update on the different p20 firmware releases.

I had downgraded from P20 to P16 following the recommendations of many postings here. P16 has worked really well with my setup; no issues of any kind that I'm aware of. After having my setup up and running for a good 3 months, I'm still very happy with it.

I updated to FreeNAS 9.3.1 today and now the LSI driver gives a warning about the controller firmware being p16 instead of the expected/matching p20. I'll read up on the LSI knowledge base article to decide if I should upgrade now.

Some experiences with the build over the last few months:

Acoustic dampening of the Nanoxia case is great and absolutely suitable for an office/home environment. The only sound to be heard is a very low grunting when heavy disk access is happening. Everything else is virtually silent, even with the fans on a high setting due to summer temperatures in the northern hemisphere. I clean the filters once a month.

The performance of the box (C2750, 32GB, LSI, 6*4TB (Z2) + 3*4TB (Z1), FreeNAS 9.3.x) is excellent. I can have multiple VMs (VirtualBox) running while having Plex transcode two videos on the fly and copying files over the network without any issues.
I need to upgrade my switch to make use of LACP, since my motherboard has four 1Gbit/s Ethernet interfaces of which I only use one at the moment, leaving room to grow.

FreeNAS 9.3 has been working stably, with only a few minor issues:
  • Occasionally OS X Time Machine backups start failing, requiring a FreeNAS reboot before they work again. I personally like to blame Time Machine for many things, but in this case only a reboot of FreeNAS reproducibly resolves the issue.
  • The web UI feels somewhat sluggish at times, mostly due to synchronous network/internet requests, and my internet uplink is (way too) slow, which is common where I live. Performance of everything else is excellent!
  • Plugin updates fail a little more often than I'd like; Plex in particular seems to require multiple retries for an update to go through.
  • The VirtualBox Plugin is severely outdated with updates being extremely unlikely. My multiple VMs are running quite fine though. (Multiple Linux Distros, one Windows 10 instance to test it, FreeBSD 10 and Hardened BSD, etc.) With 32GB of RAM there is plenty of room for VMs with enough RAM to spare for ZFS ARC. I'm looking forward to bhyve based VMs in FreeNAS 10.
Things I am missing:
  • A plain FreeBSD jail template. This should/could be much more comfortable.
  • An easy way to put proper HTTPS on all the web interfaces of plugins (like Transmission, Firefly, etc.). I hope to see a lot of improvements here once Let's Encrypt is in full operation and issuing free certificates in the next few weeks. Let's Encrypt
The only real problem:
  • FreeNAS updates are still loaded over plaintext HTTP instead of encrypted and authenticated HTTPS. An unacceptable oversight these days. This must be fixed in an upcoming update as soon as possible.
  • The update URL for jails defaults to HTTP as well, though this one can be changed to HTTPS by the user. I've done this and it works fine. It must still be switched to HTTPS for everyone in an update, though.
Best Regards
MacLemon
 

canta

Well-Known Member
Nov 26, 2014
1,012
216
63
43
.....

Acoustic dampening of the Nanoxia case is great and absolutely suitable for an office/home environment. The only sound to be heard is a very low grunting when heavy disk access is happening. Everything else is virtually silent, even with the fans on a high setting due to summer temperatures in the northern hemisphere. I clean the filters once a month.

The performance of the box (C2750, 32GB, LSI, 6*4TB (Z2) + 3*4TB (Z1), FreeNAS 9.3.x) is excellent. I can have multiple VMs (VirtualBox) running while having Plex transcode two videos on the fly and copying files over the network without any issues.
I need to upgrade my switch to make use of LACP, since my motherboard has four 1Gbit/s Ethernet interfaces of which I only use one at the moment, leaving room to grow.
.....
Put server-grade fans inside your case and you will get an annoying whining noise from the dual ball bearings. :D
If you want only a slight background noise, you need to build a closed closet/rack with sound dampening on all interior surfaces and put your case inside it.

Be aware that LACP does NOT increase the transfer rate of a single connection; in other words, LACP primarily provides availability.

If your server is constantly being hammered by many clients, I mean 10 or more, LACP would help.

For SOHO or home users, you rarely need LACP...

those are my understanding
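For reference, if you do want to try LACP later, a minimal lagg(4) sketch for plain FreeBSD looks roughly like this (igb0/igb1 and the address are placeholders; FreeNAS configures the same thing through its web UI, and the switch ports must be set up for LACP as well):
Code:
# /etc/rc.conf
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 192.168.1.10/24"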
 

MacLemon

New Member
Feb 16, 2015
20
19
3
Follow-up, after running this build for a good 18 months now. Anyone interested in what my experience has been? Well, it goes something like this…

The case: (Nanoxia Deep Silence 1)
The whole thing is barely audible. It gets somewhat warm with the stock fans during the summer, which is still easily compensated for by turning the fans up and opening the case chimney as a last resort.

Dust is no issue given one cleans out the intake fans, which I do once every season. The inside has accumulated less dust than expected. I'll spare you the bottom dust filter since I didn't want to shut down the machine to get it out safely. You have to believe me when I say that it's still almost clean as shipped. Some minimally gross pictures follow.



The 8-bay drive cage, of which 6 bays are populated with HGST NAS 4TB drives, is almost clean. The HDs have never topped 50°C, which is at the upper end of acceptable temperatures for me.


I'm using the FreeNAS default scrub interval of 35 full earth rotations, plus the occasional second. Not a single repair has been necessary on any of them: neither the 6-drive HGST RAID-Z2 run from the HBA, nor the 3-drive Seagate 4TB RAID-Z connected to the onboard SATA ports, nor the SATA DOM I use for FreeNAS itself.
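If you want to check the same thing on your own pools, the result of the last scrub is visible from the shell (the pool name is a placeholder):
Code:
zpool status -x               # prints "all pools are healthy" when nothing needs attention
zpool status tank | grep scan # e.g. "scan: scrub repaired 0 in ... with 0 errors"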

The tiny CPU fan does its job just fine keeping the heat sink shiny and does so silently, especially through the heavily padded case.



The IBM M1015 HBA (an OEM LSI 9211-8i) works perfectly fine and delivers a solid 550MB/s of read performance. Absolutely no issues. I've flashed it to firmware revision 20 by now which matches what FreeNAS 9.10 expects.



The fanless power supply doesn't show any signs of whine or other noise. Still has plenty of power to spare for more drives I intend to add.



Ending up at the back of the case, we see the exhaust fan happily and silently spinning, pushing hot air out of the case.



Usage:

I'm running over 20 jails, plus an ever-changing handful of VirtualBox-based VMs.
The 32GB of RAM seems to be plenty for what I do and provides enough ARC that I don't see any reason to add an SSD for L2ARC. ARC tops out at around 28GB, with the low mark around 10GB.
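If you want to watch the same numbers on a FreeBSD/FreeNAS box, a minimal sketch of the relevant sysctls (values are reported in bytes):
Code:
sysctl kstat.zfs.misc.arcstats.size   # current ARC size
sysctl vfs.zfs.arc_max                # upper limit the ARC may grow to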


Issue:
I can only recall a single issue that forced me to actively intervene. This was an update of the Plex server that had gone awry; I fixed it manually by creating a new Plex jail and porting the data over.

Things I'd do differently building a similar box today:
When planning my capacity I stupidly missed that ZFS is happiest when allocation stays under 80%. So when calculating your net capacity, don't forget to multiply the result by 0.8 to arrive at the capacity you can use at full performance (for example, a pool with roughly 14TiB usable leaves about 11TiB before crossing that mark).
FreeNAS and ZFS will continue to run just fine beyond that point, but ZFS switches to a space-saving allocation strategy, which slightly hurts performance.
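A quick way to keep an eye on this from the shell (pool name and figures are purely illustrative):
Code:
zpool list -o name,size,alloc,free,cap
# NAME   SIZE  ALLOC   FREE  CAP
# tank  21.8T  15.1T   6.7T  69%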

Over the last year, projects have emerged that happened to generate a few TB of data which I had not anticipated when I crafted the specs.

No matter how much storage you provision, you'll eventually run out of space. :)

I'd strongly consider a Xeon-D based motherboard over the Atom C2750. More for the 128GB RAM limit and 10GE capabilities than CPU power. I'm still happy with the processing power my choice provides and I also get along quite well with my 32GB.

I'd do the Molex-to-SATA power cabling differently, since I'm unnecessarily taking up Molex ports on my power supply, which results in rather inelegant cabling. I don't care about the looks; this is not a gaming rig on display at a trade show, after all. I do care about unobstructed airflow, effective cooling, silence and easy maintenance though, and the existing, somewhat messy cabling contradicts that.

So while upgrading my server I'll be changing the SATA power wiring to these. (I don't care about the colours; it's just what was available.)


Upgrading the box:
I'm in need of upgrading my storage pool. At the moment I'm running two separate zpools, one 6 * 4TB RAID-Z2 for storage and another 3 * 4TB RAID-Z pool for client backups.

I'm not yet sure how I'll be transmogrifying my data from the existing pools to the new pools, though. I would have loved to just add the new 8-drive RAID-Z2 as a mirror to the current 6-drive RAID-Z2, have ZFS work its magic to resilver the mirror, then remove the old pool from the mirror. From what I know about ZFS this is not possible, so I'll have to make do with the FreeNAS replication function.

I'll be switching the main storage pool to an 8 * 6TB RAID-Z2 pool which means, I cannot do an in-place upgrade disk-by-disk since I'm changing the number of devices (from 6 to 8) in the pool.

I would have liked to have a downtime free option for data migration, even though a day or two of downtime is really not an issue for me in this case. Would be a different matter for a customer site though.

I've had good experience with the HGST NAS drives, so I'll be going with the HGST 6TB NAS. Currently they go for a little over 200€ apiece, excluding VAT.


I'll also be upgrading the backup pool to an 8 * 4TB RAID-Z2 zpool subsequently. (Mostly for local and remote clients.)

This also requires me to increase the number of SATA connections. Drive speed is not that critical, given that I access everything over paltry Gbit Ethernet; with the four onboard 1Gbit links that means roughly 400MB/s at the absolute maximum, and more like 200MB/s in practice.

One solution would be to swap out the existing 8-channel SATA-III HBA for one with 16 channels, or to insert another 8-channel HBA into the remaining PCIe slots.
A different solution would be to go for SATA port multipliers and drive both pools, all 16 drives, from the existing 8-channel HBA through the multipliers.

I'm not sure if the latter solution is viable, it certainly would be cheaper though. Feedback and suggestions on this are much appreciated! (I'm not really familiar with add-on SATA port multipliers.)

Conclusion:
Overall, FreeNAS has proven to be extremely reliable for me and I happily recommend it for use cases similar to the ones I've described here.

Future:
I'm currently working on server builds for customers and my own company. (Smaller Xeon-D and dual E5 FreeBSD servers as well as a dual E5 based storage server (FreeNAS 9.10 based) with around 72TB of raw capacity.) Hopefully I can let you in on some more hardware pr0n then.

Edits:
Eek, typos.
 
Last edited:

IApar

New Member
May 25, 2017
6
1
3
44
I have been reading here, including this thread, as I am also building my first FreeNAS system and have started acquiring parts. I mostly have 2TB drives, so I would be going with those.

I am still undecided whether to go with a FreeNAS server plus a hypervisor, as I need additional VMs on the box, or to go with ESXi.

Your build is interesting, MacLemon, and the update was a good read. One question: are you using encrypted volumes? What is the CPU performance with an encrypted volume?

I am considering the E3-1220L because the price difference for anything higher is huge, like four times as expensive. I don't know if it is similar in performance to what you have.
 

MacLemon

New Member
Feb 16, 2015
20
19
3
I only run encrypted zpools and the performance on the box is around 550MB/s on spinning rust only, no SSDs involved, no ZIL/SLOG/L2ARC in use. Since my CPU (Atom C2758) sports AES-NI, that is a total non-issue, which also applies to the E3-1200 CPU series. I easily have cycles to spare even when doing full live transcoding of two HD video streams from an encrypted zpool.
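If you want to confirm that AES-NI is actually being used on a FreeBSD/FreeNAS box, a minimal sketch (the aesni(4) driver is what GELI uses for hardware acceleration; FreeNAS normally loads it for you):
Code:
dmesg | grep -i aesni   # CPU feature flags and the aesni(4) attach line should show up
kldstat | grep aesni    # check whether the aesni kernel module is loaded
kldload aesni           # load it manually if it isn't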

I personally like CPU Boss to compare CPU performance. There is also Intel ARK for all the nitty-gritty details regarding Intel CPUs.

As for virtualizing FreeNAS, pick your poison. There is an article by the FreeNAS folks themselves, Yes, You Can Virtualize FreeNAS, with many things to consider before deciding to do so, as well as Please do not run FreeNAS in production as a Virtual Machine!, which warns fairly strongly against it.

I personally wouldn't virtualize FreeNAS for many reasons. I do run a bunch of VMs using the quite outdated VirtualBox Jail-Template on my box, and they work fine. I'm planning to migrate/redo them in BHyve which is supported from FreeNAS 11 on. (Currently in RC2 as of this posting.) I've done a fair amount of testing on a separate box and I'm looking forward to moving that one to eleven. VM Performance is excellent.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
Great update!
For migration you could put a 2nd HBA in the system? Something cheap, and set up your new pool with the disks just sitting on the desk (OK, maybe on some static-friendly padding, with a desk fan to keep them cool for the couple of days while migrating data).
 

MacLemon

New Member
Feb 16, 2015
20
19
3
Great update!
For migration you could put a 2nd HBA in the system? Something cheap, and set up your new pool with the disks just sitting on the desk (OK, maybe on some static-friendly padding, with a desk fan to keep them cool for the couple of days while migrating data).
There is enough space in the enclosure to accommodate 8 additional LFF HDs. I also have a “spare” controller, which I'd need to flash to IT mode first, though. It will do for the migration. (Still waiting for the last two HDs to arrive, since that project got held up a little by more important tasks.)
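For anyone wondering what flashing a 9211-8i-class card to IT mode roughly involves, a minimal sketch using LSI's sas2flash from the P20 package (the file names come from that package; an IBM-branded M1015 usually needs its original firmware wiped first, so follow a board-specific guide):
Code:
sas2flash -listall                         # confirm the controller is detected
sas2flash -o -f 2118it.bin -b mptsas2.rom  # flash the P20 IT firmware plus the boot BIOS
sas2flash -listall                         # verify the firmware/BIOS versions afterwards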

And yes, the plan is to copy the whole existing encrypted main storage zpool to a new set of disks. I guess this is the safest way to migrate my data and I actually don't mind the downtime.

I would have loved to add the new zpool as a mirror to the existing one, have ZFS magically mirror all the data to the new drives, then remove the old zpool from the mirror and expand to the new capacity. That way I could have migrated without any downtime, but I'm pretty sure that this is not possible, so I won't risk it.

I'll post an update once I get around to migrating the whole pool to new spinning rust. :)
 

IApar

New Member
May 25, 2017
6
1
3
44
I only run encrypted zpools and the performance on the box is around 550MB/s on spinning rust only, no SSDs involved, no ZIL/SLOG/L2ARC in use. Since my CPU (Atom C2758) sports AES-NI, that is a total non-issue, which also applies to the E3-1200 CPU series. I easily have cycles to spare even when doing full live transcoding of two HD video streams from an encrypted zpool.

I personally like CPU Boss to compare CPU performance. There is also Intel ARK for all the nitty-gritty details regarding Intel CPUs.

As for virtualizing FreeNAS, pick your poison. There is an article by the FreeNAS folks themselves, Yes, You Can Virtualize FreeNAS, with many things to consider before deciding to do so, as well as Please do not run FreeNAS in production as a Virtual Machine!, which warns fairly strongly against it.

I personally wouldn't virtualize FreeNAS for many reasons. I do run a bunch of VMs using the quite outdated VirtualBox Jail-Template on my box, and they work fine. I'm planning to migrate/redo them in BHyve which is supported from FreeNAS 11 on. (Currently in RC2 as of this posting.) I've done a fair amount of testing on a separate box and I'm looking forward to moving that one to eleven. VM Performance is excellent.
Thanks, MacLemon, for your reply. Yes, I will check those links; I have read a few of them already. I plan to get an old HBA card, an LSI SAS3081E-R, which I am getting very cheaply, but I think it has a 2TB drive limitation. I don't have any drive larger than 2TB at the moment, so I think it would work?
I don't know, and haven't found any info on, whether anything needs to be done to that HBA card to make it work in IT mode, etc.

My requirement is to run 2-3 VMs in addition to FreeNAS, and because the VirtualBox plugin you are using is outdated I really don't want to fight with it. Maybe I should wait for FreeNAS 11, or whatever the new one will be, instead of going with 9.10, and avoid all the upgrade hassle? What's your opinion on that? It's going to take a bit of time to get all the parts anyway. I have used KVM, but I have never used bhyve, so I'm not sure what to expect if I run VMs under it.

You said you can add another 8 drives to your case. How is that? From the pictures I don't see where you can add additional drives. Did you mean taking out the current 6 drives you have in there and replacing them with 8 new drives?

BTW, how much power is your system drawing with those 6 drives in there? I read that a 5-drive Synology system draws only 42W with the drives being accessed. Your system, and hopefully the one I am building, are low power, but are they comparable to a Synology? They'd better be after spending so much money, and I don't think we can get much more power-efficient than these.
 
Last edited:
  • Like
Reactions: dms

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
Rather than mirroring the pool, if that's even possible, use zfs send locally.

As for Synology, they are reasonably optimized in terms of power, but for the same power you can build a system that can do a lot more.
 

MacLemon

New Member
Feb 16, 2015
20
19
3
Rather than mirroring the pool, if that's even possible, use zfs send locally.
The point of me wishing I could temporarily do a mirror was that this would allow full migration without downtime. Would have been nice, but in this very case, not necessary.

Yes, it's perfectly possible to use zfs send and recv locally. Since send operates on snapshots, it's essentially as simple as
Code:
zfs snapshot -r sourcevolume/sourcedataset@migrate
zfs send -R sourcevolume/sourcedataset@migrate | zfs recv -F targetvolume/targetdataset

My requirement is to run 2-3 VMs in addition to FreeNAS…
For a fresh start, FreeNAS 11 is quite ok to run. I've already done tests with Bhyve VMs and the performance is superb compared to VirtualBox.

You said you can add another 8 drives to your case. How is that? From the pictures I don't see where you can add additional drives. Did you mean taking out the current 6 drives you have in there and replacing them with 8 new drives?
There's plenty of space in that box. :) (17 LFF in total.) Either way, the existing 6 drives will be migrated to a different setup when everything is done.

BTW, how much power is your system drawing with those 6 drives in there? I read that a 5-drive Synology system draws only 42W with the drives being accessed.
At idle that is around 65W; fully loaded it peaked at around 140W.
Power consumption also depends heavily on the drives used. I run HGSTs, which are more on the datacenter side than what I expect Synology to use, meaning they use about 10W under full load, and they're not the most silent drives.
I also expect my build to be a lot more powerful than most Synology boxes.

[Edit: Added answer to posting before, which I had overlooked. To prevent multiple successive postings by me.]
 
Last edited:

CJRoss

Member
May 31, 2017
91
6
8
I am considering the E3-1220L because the price difference for anything higher is huge, like four times as expensive. I don't know if it is similar in performance to what you have.
I would recommend the 1230 instead of the 1220L. It's usually not much more and you get hyperthreading and a higher clock speed.
 

CJRoss

Member
May 31, 2017
91
6
8
If you are using IT mode, the drives are passed through, so no, the card won't do any rounding. It's up to whatever creates the arrays to decide on the rounding factor. For example, if you wanted to be safe, you could treat all 4TB drives as 3.95TB drives, which lets you use any drive in the array. You can easily do this in mdadm by creating the partition that will become a member of the array at only 3.95TB and leaving the rest unused. This is the safest way to be able to mix and match drives between different vendors and models. Yes, you lose a small part of the array, but it's pretty small for the convenience.
FreeNAS does this for you automatically: it creates a 2GB swap partition on each drive. If you run into a drive with a slightly smaller sector count, you can just reduce the swap size.
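For illustration, the mdadm approach described in the quote might look roughly like this (device names, the 3.95TB figure and the RAID level are placeholders):
Code:
# create identical, slightly undersized partitions so drives from different vendors stay interchangeable
parted --script /dev/sdb mklabel gpt mkpart primary 1MiB 3950GB
parted --script /dev/sdc mklabel gpt mkpart primary 1MiB 3950GB
# ...repeat for the remaining member disks, then build the array from the partitions
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1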
 

IApar

New Member
May 25, 2017
6
1
3
44
I would recommend the 1230 instead of the 1220L. It's usually not much more and you get hyperthreading and a higher clock speed.
Yes, I was considering that too, but the price of the 1220L is around $70 and the 1230 is $300 or so. These are E3 Xeons I was looking at on eBay. The price difference is huge.
 

MacLemon

New Member
Feb 16, 2015
20
19
3
How about a November 2019 lifecycle update of that box? Why, you ask? Well, it just died.

By “it died” I mean that it does not boot anymore. My current suspicion is that the Atom C2758 CPU or the motherboard has failed, because it won't even POST anymore.

It started with the box unexpectedly rebooting and logging a FreeNAS/FreeBSD kernel panic, which I found rather odd, but there wasn't a lot I could do about it. Sometime later that day, the box was seemingly powered on, but not booting anymore. The CPU fan and enclosure fans do spin up, but that's about it. No video, no POST beeps.
I've tried reseating the RAM, swapping banks, etc. without any result other than dusty fingers.

Pretty much the only thing it does is spin up all the fans, spin up the disks, light two solid amber LEDs on the HBA and blink the green LED on the motherboard. That's all. (Marked with an orange arrow in the picture.)

(Attached picture: It's dead, Jim.jpg)
Up until now, this has been a reliable server.

I'm open to suggestions on how to proceed from here. I trust my ZFS that all my data is safe and sound even though I can't access it at the moment.
 

Marsh

Moderator
May 12, 2013
2,644
1,496
113
If you are in the US, open a support ticket with SuperMicro and request an RMA for repair.
SM will repair or replace the bad board for free.

Mention that it is the Intel Atom C2000 AVR54 bug.
 
  • Like
Reactions: Patriot

MacLemon

New Member
Feb 16, 2015
20
19
3
RMAs can be requested worldwide :), which is exactly what I did.

If SuperMicro does replace/repair the board, I'm happy with that.
If they don't, I'm in the market for a new server motherboard and CPU, preferably with more SATA onboard channels and SFP+ options.

I'll keep you updated on SuperMicro's response.

Edit 2019-11-26 18:46 (UTC)
RMA requested, number received from SuperMicro, logic board extracted, labelled and packaged for shipping to the Netherlands.
 
Last edited:
  • Like
Reactions: SRussell and Marsh

SRussell

Active Member
Oct 7, 2019
327
152
43
US
This has been an excellent read. I am new to the board and I love the open exchange of ideas. The dialogue alone has me excited about learning.

Do you plan to post any documentation of the systems you build for clients?
 

MacLemon

New Member
Feb 16, 2015
20
19
3
This has been an excellent read. I am new to the board and I love the open exchange of ideas. The dialogue alone has me excited about learning.
I can only second that. Being a noob here myself, I was welcomed very warmly and received excellent help from people here. Keep in mind that I'm not really active here; I have literally only posted in this very thread since I joined. :p

Do you plan to post any documentation of the systems you build for clients?
I never thought about it. I consider them pretty “boring”, to be honest, as they're mostly off-the-shelf rack-mount boxes. Mostly no special requirements. It's basically the usual “I need a fileserver with 100TB of capacity. It must be super fast, dirt-cheap, and we'd like it delivered yesterday.” kind of request. (You know… pick two of three.)
The only interesting thing for me at the moment is finding server motherboards with 10GE SFP+ ports, as onboard 10GE copper Ethernet turned out to be mostly useless unless you want to connect it back-to-back with another machine. To me, fiber appears to be cheaper and more easily available these days.

The difference with this very build was that I needed it for myself and it would be running in my office. That posed constraints requiring research and learning: making it very affordable yet sustainable, as silent as possible, and capable enough of everything for a few years to come.
So far, I'm gratified with the result. I consider this thread to be the full life cycle report of a SOHO/lab server from genesis to decommission.
 
  • Like
Reactions: SRussell