[Feedback sought] Enthusiastic NAS build [4.5y usage update 2019-12]


Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
An oversized PSU shouldn't hurt, I think. It will only use what it needs, and as this is a Platinum unit you should be above 90% efficiency anyway.

I like it so far and would buy again. I love the fanless design.
 

namike

Member
Sep 2, 2014
70
18
8
43
Here is a vote for the E3-1230v3 / X10SL7-F-O setup ;) Essentially the same price as your C2750 setup + M1015, but more horsepower and more flexibility.

I know you have said you are going to move from 32GB to 64GB in the future. However, depending on your VMs and usage, discounting the E3 platform for that reason alone in a home environment would, I think, be a mistake, based on my personal setup. Mine idles in the low 30s of watts in ESXi, fully populated with 32GB of memory, a full-size ATX Corsair CX430 Bronze PSU, and a single Crucial M4 128GB SSD.

There is lots of flexibility with this setup:

Bare-metal FreeNAS server? No problem. 8-port LSI SAS controller onboard, plus 2 Intel SATA3 ports, plus 4 Intel SATA2 ports. Still not enough drives later down the road? Toss an HBA into the PCIe 3.0 x8 slot, or connect the onboard SAS controller to an expander.

An ESXi all-in-one setup is easily supported (my personal setup right now). The CPU/motherboard support VT-d, so you can pass the entire LSI controller through to your NAS OS (FreeNAS, Omni/Napp-it, xpenology, etc.).

If you want to see any specific tests with my X10SL7/1230V3 setup, I'll try to accommodate!

-Mike
 

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
These are outdated!
--
I'm currently working to spec a somewhat enthusiastic home NAS system and would like your feedback on my choice of parts.

Here are the details so far; my reasoning for choosing these parts follows below.
Build’s Name: Work in progress
Operating System/ Storage Platform: FreeNAS/FreeBSD
CPU: Intel® Atom™ C2750
Motherboard: SuperMicro A1SAM-2750F
Chassis: Fractal Design NODE 804
Drives: 6 × Seagate ST4000DM000 (Data Sheet (PDF), Product Manual (PDF))
RAM: This RAM doesn't fit, as pointed out by BlueLineSwinger. 2 × 16GB Samsung DDR3-1600 CL11, ECC Reg. 1.35V
Add-in Cards: 1 × IBM ServeRAID M1015 (2 × 4 Port SATA/SAS HBA)
Power Supply: TBD
Other Bits:
  • A small-ish SATA SSD (probably Samsung EVO). Capacity undecided.
  • 2 × Mini-SAS SFF8087 to SATA breakout cables. (SAS8087OCF-06M)
  • A bunch of cables that I've failed to list here
  • Cable ties, velcro, to keep things tidy.
  • Standard power cord :)
  • UPS to cope with power dips/surges and to cleanly shut the box down in case of a power failure.
Usage Profile:
Mostly classical “Home NAS” usage, like storage of audio, video and other random files. (AFP, SMB/CIFS)
Plex- and Firefly (mt-daapd/iTunes) server. Must be capable of live transcoding two full HD video streams simultaneously. (CPU supports SSE/VMX but I don't expect Plex to use it at all.)
Local backup target for OS X's TimeMachine.
Other remote machines' backups over ssh/rsync, each to their own FreeBSD jail.
Some Jails/VMs for mixed software/OS services and testing. (CPU is VT-x/EPT capable.)
ZFS-RAID Z2 for the main storage pool connected to the HBA.
ZFS-RAID Z1 for further backup storage (probably by recycling a few existing spindles I can free up once this box is in production) connected to the onboard SATA 2 ports.
Encrypted storage supported by AES-NI.
Probably the usual things one forgets that only come to mind after ordering the parts. What could those be?
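The raw capacity of those two pools can be sanity-checked with quick shell arithmetic. The drive counts for the main pool are from the parts list above; the three-drive backup pool and the 4TB size of the recycled spindles are my assumptions for illustration, and real ZFS overhead will shave off more:

```shell
# RAID-Z2 main pool: 6 drives, 2 of them parity
main_usable=$(( (6 - 2) * 4 ))            # 4 TB (marketing TB) per drive
echo "main pool: ~${main_usable} TB usable"

# RAID-Z1 backup pool: assuming 3 recycled 4 TB drives, 1 of them parity
backup_usable=$(( (3 - 1) * 4 ))
echo "backup pool: ~${backup_usable} TB usable"
```

(ZFS also reserves some space internally, and filesystems report TiB rather than TB, so expect noticeably less in practice.)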

Intentions:
Low energy consumption when idle, yet enough power for the enthusiast power user in me. Silent operation: as few fans as possible, as many as needed, with large fans to keep the rpms down and the noise frequency low. I prefer good-quality components over cheap ones but don't want to spend unnecessary amounts.
IPMI interface goes to my management LAN.
2 × Gbit/s ethernet ports trunked for file access
Remaining 2 ethernet ports used for jails/VMs, likely on different VLANs. (TBD)
The HBA goes into the PCIe x8 slot. The x4 slot remains empty for now.
I intend to run the HDDs in acoustic-managed mode for silent operation, provided the performance impact is acceptable.
No need for graphics or audio on this box.
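For the trunked pair mentioned above, a plain-FreeBSD sketch of an LACP lagg looks like the rc.conf fragment below. The igb0/igb1 interface names are my assumption (the onboard Intel NICs typically attach to the igb driver), and in FreeNAS you would configure this through the GUI instead:

```sh
# /etc/rc.conf fragment: LACP trunk over two of the four onboard ports (sketch)
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 DHCP"
```

The switch ports on the other end need to be configured for LACP as well.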

Open questions:
  • Which ATX power supply should I get? It should be very quiet and energy efficient. (I'm eyeing an 80 Plus Gold or Platinum model.) Will 450W be OK-ish, even if I later decide to add more HDs, up to a maximum of 10 (LFF) and 2 (SFF), which fit in the chassis? (An ATX PSU up to 260mm fits.) I also need to get power to all the spindles.
  • Is it likely that I'll need more fans than the 3 × 120mm fans that come with the chassis? (It can accommodate up to 10 fans, but I don't intend to build a hovercraft.)
  • I'm looking into getting a TPM 1.2 module for storing SSH host keys of the box. Any suggestions what TPMs work with FreeBSD?
  • I'd like to connect my hardware random number generator to one of the USB ports. (preferably an internal port.)
  • This bullet intentionally left blank.
  • Have I missed something totally obvious? I'm not that used to building my own boxes.
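On the PSU sizing question, a back-of-envelope estimate suggests 450W is plenty. The per-drive spin-up surge and platform draw below are rough assumptions, not measurements:

```shell
# Worst case is spin-up of all drives at once
drives=10; spinup_w=25          # assumed spin-up surge per 3.5" drive, in watts
platform_w=60                   # assumed board + CPU + RAM + fans worst case
ssd_w=4                         # two SFF drives, negligible
peak=$(( drives * spinup_w + platform_w + ssd_w ))
echo "estimated spin-up peak: ~${peak} W"
```

Even this pessimistic figure leaves ample headroom below 450W, and idle draw will be far lower.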
Why I chose these components:
Feel free to disagree and suggest better components and please tell me why!

CPU: The Atom C2750 has 8 cores (8 threads, no hyperthreading) and provides plenty of multithreaded speed, yet has very low power consumption and support for up to 64GB RAM. (ZFS and VMs looove RAM, though I'll start with 32GB.)

Why not these CPUs?
  • Atom C2758: Supports QuickAssist, which is less suitable for general server tasks but better for network packet handling. C2758 vs. C2750
  • Atom C2550: The 4-core/4-thread version. About half the multithreaded performance, which would still be enough for video transcoding; 5W less power consumption under full load, not much of a difference when idle. C2750 vs. C2550
  • Xeon E3-1230 v3: Significantly higher power consumption. Limited to 32GB RAM. (I intend to upgrade to 64GB RAM within a year.) E3-1230 vs. C2750
RAM:
1.35V modules use less power than 1.5V modules and stay cooler, leading to better energy efficiency and quieter operation. They can save between 4W (idle) and 8W (loaded) with 4 × 16GB DIMMs, unscientifically extrapolated from this comparison of 1.5V/1.35V RAM on this forum. Modules as spec'd by SuperMicro for the chosen motherboard.
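To put those 4W to 8W into perspective, taking 6W as a middle value over a year of continuous operation:

```shell
# 6 W continuous for a year, converted to kWh (awk for the floating point)
kwh=$(awk 'BEGIN { printf "%.1f", 6 * 24 * 365 / 1000 }')
echo "~${kwh} kWh/year saved"
```

At typical household electricity prices that's a small but real saving, and the reduced heat also helps the low-noise goal.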

Motherboard:
Why not these MBs?
  • ASRock Rack C2758Di: A tempting alternative at a good price point. It sports an additional 4 × SATA 3 and 2 × SATA 2 ports, providing enough SATA ports, but they are powered by Marvell SE9230/SE9172 controllers, which are known for poor throughput and stability problems. Only 2 × Gbit Ethernet ports + IPMI, and ASRock doesn't have much of a reputation in the server market.
  • SuperMicro A1SAi-2750F: Pretty much identical, but mITX form factor as opposed to µATX, hence only a single PCIe x8 slot. The only actual downside is the use of SO-DIMMs, which are more expensive and harder to get. Since there is enough space in the chassis for a µATX board, I prefer the one listed at the top.
  • SuperMicro A1SA7-2750F: This board would pretty much be the perfect fit with its 16 × SATA 3 ports powered by an LSI 2116 controller. But it uses a proprietary form factor that doesn't fit any standard chassis, so it would have required a SuperMicro chassis, which are great in the data center but acoustically inappropriate for home use.
HBA SAS/SATA Controller:

Why not this one?
  • LSI SAS 9211-8i: This would be my HBA of choice. The chosen IBM controller is identical to this one, and its firmware can be cross-flashed. The LSI original is about twice the price for the same product; both are based on the LSI SAS 2008 controller chip.
  • A gazillion other LSI PCIe controllers…
If need be I can still add two SSDs as L2ARC and ZIL to accelerate my ZFS storage pools.
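Attaching those SSDs later doesn't require rebuilding the pool; a hedged sketch, with the pool name and device nodes as placeholders:

```
# cache (L2ARC) and log (SLOG) devices can be added to a live pool
zpool add tank cache ada6   # losing an L2ARC device is harmless
zpool add tank log ada7     # a SLOG only accelerates synchronous writes
```

Note that a SLOG mostly benefits sync-heavy workloads like NFS or databases; plain SMB/AFP file serving rarely touches it.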

The LSI SAS 2008 RAID Controller/ HBA Information thread here has been of tremendous help for me! Thanks to Pieter!

Disks
Based on Backblaze's experience with tons of disks building their Storage Pod 4, and also my personal experience.

Why not this one?
  • HGST HDS5C4040ALE630 (Datasheet (PDF)): I'd actually prefer this drive over the Seagate model. I mostly rejected the HGST disks because I already have 3 Seagates I can use, and mixing brands is not the smartest idea. The Seagate disks are also about 20% cheaper than the HGST.
Any other questions or suggestions?

Best regards
MacLemon
@MacLemon: So what did you end up doing?
 

MacLemon

New Member
Feb 16, 2015
20
19
3
I had some busy days recently. Now the first parts arrived. So here are a few pictures to keep you happy while I wait for more parts to arrive. :)


SUPERMICRO™ A1SAM-2750F



Logic board detail: Piezo squeaker and pin headers. A squeaker is a highly welcome yet often neglected feature on a server. Simple audible bits of information like “finished boot up”, “ACPI shutdown” and “RED ALERT!” are useful.


Logic board detail: SATA ports with the CPU heatsink in the background. The two white ports are SATA 2 (3Gbit/s), the black ones and the odd yellow port are SATA 3 (6Gbit/s). Sometimes yellow SATA ports are SATA Express (10Gbit/s), which this one isn't. <s>I have still to find out if the yellow color has any significant meaning.</s>
The yellow color indicates support for SATA DOM power, as noted in #46 by piglover and in #47 by markarr.


Logic board detail: I/O panel, from left to right: RS232, management Ethernet RJ45 100Mbit/s (IPMI), 2 × USB ports, 2 more USB 2.0 ports, 4 × Gbit/s Ethernet RJ45, and, only partially visible on the far right, VGA DB15.


Of course it comes with the usual leaflet documentation guiding you where all the ports and pin headers are.


The included I/O bezel requires you to break out a few tabs so you can actually access all the ports.


The inside of the I/O bezel is padded with EMI shielding mesh.


Also included with the logic board came six straight-connector SATA cables which are 60cm in length.


Some RAM also arrived. I went with 4 × Samsung M391B1G73QH0-YK0, which are 8GB DDR3-1600 ECC UDIMMs. The reason was simply price: as CreoleLakerFan pointed out in posting #42 in this thread, 16GB memory modules are only available from Intelligent Memory and are hugely expensive (around 2 × the price of 8GB modules for the same amount of RAM).


I'll keep you updated as more parts arrive. By now you've all noticed that I'm not in a rush to finish this. :)
 
Last edited:
  • Like
Reactions: MiniKnight

MacLemon

New Member
Feb 16, 2015
20
19
3
Sorry for the double posting, couldn't avoid it this time. You need to post more in between!

More parts have arrived. \o/


Four of the six HGST HDs found their way to my place.


Here are the SFF-8087 cables to connect the HDDs to the HBA.

The HDs will be connected to this IBM M1015 controller. I still need to flash it into IT mode with the LSI firmware.


This is where the SFF-8087 cables will connect.


A nice PCIe 2.0 x8 connection.


I'm happy to do any testing or benchmarking for you!
If you want something tested, please state what you want tested and how to test it! On one hand I have no clue; on the other hand, other forum members will probably want to compare with their own systems. (Exact command lines and, if applicable, all the ports needed to run them.)

I'm currently experimenting with FreeNAS 9.3 on this setup. If you want tests run on other operating systems, suggest them; I might even install one just for fun. I don't have any Windows licenses, but anything open source has a good chance.

I don't care if the tests are destructive (to data, not hardware, so please refrain from suggestions involving explosives). :) Reconfiguring the zpool in between is fine as well.

Best regards
MacLemon
 
Last edited:
  • Like
Reactions: Patrick

neo

Well-Known Member
Mar 18, 2015
672
363
63
The two white ports are SATA 2 (3Gbit/s), the black ones and the odd yellow port are SATA 3 (6Gbit/s).
You have that backwards. The (2x) SATA3 ports are white and the (4x) SATA2 ports are black (with 1 yellow DOM port). Supermicro always uses the same color scheme across their motherboards.
 

MacLemon

New Member
Feb 16, 2015
20
19
3
Caution, more pictures ahead!

All the parts I ordered have arrived by now. \o/




Little boxes, little boxes…

…and one more box. The Power Supply.



So many things to do, like flashing the IBM ServeRAID M1015 HBA with genuine LSI firmware. The article IBM ServeRAID M1015 Part 4: Cross flashing to a LSI9211-8I IN IT OR IR MODE was extremely helpful in that regard, even though it is by now a little outdated: LSI has changed their website and a few of their tools are broken now.

I'm seriously disgusted by the fact that their firmware updates are not digitally signed and are only distributed over unencrypted HTTP connections, leaving a wide gap for malware injection into firmware and flashing tools. An unacceptable downside for businesses.


This is what the stock firmware of the IBM M1015 looks like: outdated and, in IBM mid-80s corporate manner, crippled.

I had to flash it in two steps. For the first, I booted from a FreeDOS 1.1 USB stick to erase the controller firmware and BIOS with an empty SBR (sbrempty.bin). Doing so makes it safe to reboot the computer between flashing steps, which you will need to do.


Next I had to flash the new firmware (2118it.bin), the BIOS Option ROM (mptsas2.rom), and the SAS configuration back to the controller. Since LSI's tools for FreeDOS etc. are quite b0rked, I used the EFI shell tool. Luckily the SUPERMICRO A1SAM-2750F comes with a built-in EFI shell you can use; you only need to provide the necessary files on a FAT32-formatted thumb drive.


Flashing the BIOS Option ROM and firmware.



Finally, I added the SAS configuration back to the HBA, which now identifies itself as an LSI SAS 9211-8i. It is now running firmware Phase 20.00 in IT mode.
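For reference, the two-step sequence boils down to roughly the following. The file and tool names are the ones from the crossflashing article mentioned above, and the SAS address placeholder must be replaced with the one from the sticker on your card (check the guide for your exact tool versions):

```
# Step 1, from the FreeDOS stick: wipe the IBM SBR and erase the flash
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
# Reboot, then step 2, from the board's built-in EFI shell:
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom
sas2flash.efi -o -sasadd 500605bxxxxxxxxx
```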


For the power supply I wanted a fanless model. The least powerful PSU I could find was the Seasonic X-400 Fanless (SS-400FL). It's a fully modular design with 80 PLUS Platinum certification and all the other marketing mumbo-jumbo you could ask for. Platinum level means 90% efficiency at 20% load, 92% at 50% load, and still 89% when fully loaded.


Since fanless designs rely completely on thermal convection, it's basically a huge box of honeycomb mesh, and Seasonic prominently reminds you multiple times that this PSU must be mounted in the right orientation or the universe will implode. I personally doubt the universe-implosion claim, but ignoring it may well cause the heat death of your PSU, followed by sparks and flames.


The backside is sufficiently boring, sporting the usual home-appliance power plug, specified for 110-230V AC at 50Hz or 60Hz.


The cable side is well labelled and all connectors are keyed so you can't plug them in the wrong way. The 12V rail is strong with this one. It's also available in 460W and 520W models with even more power; since the 400W model already has double the power I actually need for my build, I should be fine.

The separate ratings are:
  • +3.3V: 20A and +5V: 20A, amounting to 100W combined
  • +12V: 33A (for all those spindle motors), 396W
  • -12V: 0.5A (6W)
  • +5Vsb: 2.5A (12W)

That is conservatively marketed as a 400W power supply. I like that.


Modular PSUs come with cables, who'd have thunk? A lot of cables, to be exact. Since I don't run any graphics cards I won't need the PCIe supplies, but the many SATA and Molex connectors come in handy for powering the many drives for which there is plenty of space in the enclosure.



Also included are a bunch of accessories: a helpful manual, cable ties, mounting screws, and a sufficiently punny “Powered by Seasonic” sticker.


A nice touch are the three reusable velcro ties.



Mounting in the case was flawless. Even though the PSU is 160mm deep I can still use all drive cages in the enclosure. As expected, it is completely silent, just as I wanted.
 

MacLemon

New Member
Feb 16, 2015
20
19
3
Sorry for the double posting, I really couldn't avoid it this time.

Speaking of the enclosure: I went with the Nanoxia Deep Silence 1 (Revision B), which was suggested by member NeverDie in posting #8.


It is huge. It is black. It is really well built and sports a lot of acoustic dampening. Oh, did I mention that it is black?


On the top you'll find a thing they call an “air chimney”, which is marketing speak for a vent you can open to help convection, or mount fans or a radiator under.


Even when opened it is dust-filtered and doesn't make a noticeable change in sound emission. This is a totally unscientific impression based on my own ears; your ears may vary.


It's got a big, green-LED-circled power button on the front top. Storage access on the logic board's SATA ports is signalled with blinking red LEDs in between. It certainly leaves no doubt that this thing is ON.



Directly behind that is a small hatch that is a little fiddly to open.


It reveals access to the so-called front-panel USB ports and 3.5mm audio jacks. Since my motherboard doesn't have audio or USB 3.0, I only connected the two USB 2.0 ports; they work fine for connecting a keyboard or thumb drive during initial software installation.


The Nanoxia comes with a huge amount of accessories, weighing in at almost 600 grams on my IKEA kitchen scale. It includes mounting material like cable ties…


…as well as a metric ton of mounting screws for pretty much anything you could want to mount inside: motherboard standoffs for Mini-ITX, µATX, ATX, etc. The power supply screws aren't missing; I just had the PSU already mounted when taking the photos and was too lazy to take it out again. Of the depicted SATA cables (57.5cm) you get six; they all look the same, so extrapolate from the one in the pic.


If you want to mount a bunch of 2.5" disks or SSDs you get a separate cage.

That one snugly slides into a 3 × 3.5" cage. This seems to be a new addition in the Revision B model. All cages, as well as the drive slides, are black-painted metal.


The mounted µATX motherboard, sans cabling, looks like this. It was really easy to fit in; there is plenty of space.



With all the components inside and most of the cabling tucked away on the backside it looks like this. I went with 1m SFF-8087 multi-SATA cables for the HBA, which is longer than necessary; you'll get by with 70cm just fine. There are enough through-holes with rubber grommets to traverse. Between the power supply on the bottom left and the hard drives on the right-hand side is space for another 3 × 3.5" cage, in which you can also mount the 6 × 2.5" SSD cage; this works fine with a PSU that is at most 160mm in length. The two dangling SATA cables are just spares so I can easily find them should I add two more drives there.

In the top right are 3 × 5.25" bays for optical drives, and you can swap one of the bezels to use a floppy drive or another externally accessible drive. Since this server will be tucked away out of sight I won't make use of that. If you like, you could fit a 5 × 3.5" cage in that space as well.


How is cooling with so many drives, you may ask by now? I haven't added any fans beyond the ones that came with the enclosure. That means 2 × 120mm front intake fans directly in front of the hard drives and a single 140mm exhaust fan on the top backside. (I disregard the mini fan on the CPU.) This is surprisingly silent and certainly usable in an office environment, especially when stashed away in a closet.

Hard drive temperature is around 42°C with the fans connected to the built-in fan controller at its lowest setting. The motherboard sensors for CPU, peripherals and RAM range between 29°C and 37°C. You can shave off a degree or two by setting the fans to their maximum setting, which makes them noticeable. All emitted sounds are rather low-frequency; no need to fear scaring your cat, unless you trap her inside the case, which I don't recommend.


Power consumption is pretty much where I expected it to be. With 6 × HGST 4TB NAS disks, 3 fans, the 8GB SATA DOM, the LSI HBA and the motherboard, I get a reading of around 70 watts idle.


That idle reading fluctuates ±3 Watts.



Peak power consumption occurs during bootup when the drives spin up, giving me a peak of just below 150W. I tried setting the LSI HBA to staggered spin-up of the drives to reduce that peak, but it did not yield a relevant reduction. When adding the last 3 drives I expect that peak to rise to 180W, which is still within 50% load of the power supply. A 250W to 300W power supply would have been sufficient for this build, but I wasn't able to find a fanless model in that range.
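The 180W figure is simple extrapolation from the measured numbers; the per-drive increment below is my assumption, not a measurement:

```shell
peak_6=150                  # measured spin-up peak with 6 drives
per_drive=10                # assumed additional peak per extra drive, in watts
peak_9=$(( peak_6 + 3 * per_drive ))
load_pct=$(( peak_9 * 100 / 400 ))   # relative to the 400 W PSU rating
echo "~${peak_9} W estimated peak, ${load_pct}% of the PSU rating"
```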


Regarding performance, I'm very happy with my choice. I can watch 3 Full HD video streams that require live transcoding in Plex at about 60% CPU load. Your mileage may vary by a large margin, but this certainly works fine for me.
I haven't done a lot of VM testing yet, but some Linux and Solaris machines are yet to come. I'll surely give OS X Server a shot, and maybe I can then retire that trusty Mac mini.

Disk speed is way beyond my needs. With a 6-disk RAID-Z2 configuration I get almost 400MB/s read/write with bonnie++. Given that I'll be accessing this mostly over Gbit Ethernet, that is plenty of headroom. Just for fun I tried a 6-disk stripe and easily topped 950MB/s, but of course that is a ridiculous setup nobody should use.
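For anyone wanting a quick comparable number without installing bonnie++, a crude dd run gives a rough sequential figure. This is my own sketch, not the benchmark used above: TARGET should point at a file on the pool (/tmp is only a safe default here), conv=fsync is GNU dd syntax, and /dev/zero input will be wildly inflated on datasets with compression enabled:

```shell
TARGET=${TARGET:-/tmp/ddtest.bin}
SIZE_MB=${SIZE_MB:-64}
# conv=fsync forces data to disk before dd reports its rate (GNU dd)
dd if=/dev/zero of="$TARGET" bs=1M count="$SIZE_MB" conv=fsync
bytes=$(wc -c < "$TARGET")
echo "wrote ${bytes} bytes"
rm -f "$TARGET"
```

bonnie++ remains the better tool, since it also exercises seeks and small files.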

Conclusion:
I'm a happy camper with that performance, low noise levels and power consumption of this build. Did I mention that it's black? :)

Thanks a bunch for bearing with my prolonged ramblings. I hope this helps others in building their own boxes. You folks certainly did help me a lot with your suggestions and questioning of my choices.

MacLemon
 
  • Like
Reactions: Patriot

neo

Well-Known Member
Mar 18, 2015
672
363
63
MacLemon said:
Flashing the BIOS Option ROM and firmware.
Most people don't flash the Option ROM with an IT firmware, as it's unneeded and increases boot time.

MacLemon said:
It is now running Firmware Phase 20.00 in IT mode.
If you are still planning on running FreeNAS, that version is too new and incompatible.
 

MacLemon

New Member
Feb 16, 2015
20
19
3
Most people don't flash the Option ROM with an IT firmware, as it's unneeded and increases boot time.
If you are still planning on running FreeNAS, that version is too new and incompatible.
I don't mind the few seconds since I don't reboot all the time, but I agree that it's de facto unnecessary.

FreeNAS 9.3, fully updated, does give a warning about Phase 20 being newer than the expected Phase 16, but I have not found any actual incompatibilities yet. Everything I have tested so far has worked flawlessly. Is there anything in particular you can name that is known to be incompatible and causes actual problems? Any links you could share that relate to that?
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
It's been a while, but I remember seeing quite a few reports of people having problems with 20 (quite possibly not all FreeNAS, even), and that 19 was the recommended stable firmware. If I were going to touch the firmware on mine, 19 is what I would upgrade to, but it's already at 18 and working fine, so I've left it alone.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
The LSI SAS2008 firmware rev 20 has trouble on Windows based systems (7/8/2012/2012R2) but Linux drivers seem to be happy with it. I'm fairly certain the issue is related specifically to Windows, but you should check the FreeBSD or FreeNAS forums to confirm compatibility with BSD.
 

paylesspizzaman

New Member
Aug 31, 2015
2
1
3
41
How were you able to measure the wattage to the CPU? Is there a BIOS measurement or something that reports it?

I also have the E3-1230V3. With 16GB of memory, the CPU, the stock Intel CPU cooler fan that came with it, the motherboard, an SSD, and the picoPSU power supply plus the 12V brick that feeds it, everything together consumes about 31 watts (as measured on a Kill A Watt-type device) when FreeNAS is idling.
What motherboard do you have? I'm trying to decide between a C2750 mini-ITX and an E3-1230V3. 31 watts, from my understanding, is Atom land. If the 1230V3 can idle down to the same watts as a C2750, I would much rather have the 1230V3.
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
1,140
594
113
New York City
www.glaver.org
Here's what I wrote in another thread here about P20:

That was true for 20.00.00.00 and the first rebuild, 20.00.02.00. The 20.00.04.00 rebuild has been working solidly for me on a number of different controllers / chipsets.

Here is the LSI knowledgebase article.