U-NAS NSC-800+Supermicro X10SDV-TLN4F Storage Server Build


jimmy_1969

New Member
Jul 22, 2015
Jakarta
11 Oct-15, Project Introduction

Background

This build is primarily about two things:
  • Replacing my old NetGear ReadyNAS NV+ with a more up-to-date platform
  • Exploring FreeNAS virtualisation on a CentOS7/KVM host
Up front I would like to thank all the contributors of this forum for sharing their valuable experiences, and in that spirit of sharing I plan to continue the tradition in this build log.

One of the challenges with this build is that I am based in Jakarta, Indonesia, so the majority of the components are shipped halfway across the planet and then disappear into the black hole of customs clearance and local distributors. Tracking deliveries works well until a package arrives in Jakarta; after that the process time-warps back to the nineteen eighties and online updates typically cease to exist. Eventually the delivery arrives, but it is impossible to predict how long the local leg will take.

The implication of this is that you need to do a lot of homework to ensure the components you order will play nicely together, because getting replacement equipment can easily add 2-4 weeks of lead time.

Overview of Hardware Components
01 Chassis 800x558.jpg
Chassis: 1x U-NAS NSC-800 v.2 (URI: u-nas.com :: U-NAS Server Chassis :: U-NAS NSC-800 Server Chassis)

02 MB & RAM 800x530.jpg
Motherboard: 1x Supermicro X10SDV-4C-TLN2F Rev 1.02 (URI: Supermicro | Products | Motherboards | Xeon® Boards | X10SDV-4C-TLN2F)
RAM: 1x Samsung M393A4K40BB0-CPB 32GB DDR4-2133 Memory MEM-DR432L-SL01-ER21 (URI: http://www.amazon.com/Samsung-M393A4K40BB0-CPB-DDR4-2133-Memory-MEM-DR432L-SL01-ER21/dp/B00U6O78W8)
Power Supply Unit: 1x Seasonic SS-350M1U 350W PFC 80 Plus Gold (URI: Ready 1U Seasonic SS 350M1U 350W PFC 80Plus Gold Power Supply w 1U Bracket | eBay)
Power Extension Cable: 24 Pin ATX 2.01 Power Extension Cable (URI: http://www.amazon.com/StarTech-8-Inch-Power-Extension-ATX24POWEXT/dp/B000FL60AI)

03 HBA-&-Fans.jpg
Host Bus Adaptor: 1x LSI MegaRAID 9240-8i 8ports PCI-E 6Gb RAID Controller (URI: New LSI MegaRAID 9240 8i 8PORTS PCI E 6GB RAID Controller IBM M1015 46M0861 | eBay)
Host Bus Adaptor Fan: Noctua NF-A4x10 40x10 mm (URI: Amazon.com: Noctua 40x10mm A-Series Blades with AAO Frame, SSO2 Bearing Premium Retail Cooling Fan NF-A4x10: Computers & Accessories)
Case Fans: 2x Nanoxia Deep Silence 120mm PWM, NDS120PWM-1500 (URI: http://www.amazon.com/gp/product/B00CHX0SQY)
CPU Fan: 1x Noctua 92 x 14 mm Low-Profile (NF-A9x14) (URI: http://www.amazon.com/gp/product/B009NQM7V2)
Hard drives: temporarily re-using four old WD SATA 500 GB drives for the ZFS storage pool, and a Maxtor DiamondMax 300 GB as boot drive. Once this platform has been verified I plan to put in 4x WD Red 4 TB drives and an SSD boot drive.

Overview of Software Components
Host server OS: CentOS 7.0, 64-bit
Host Bus Adaptor: Original LSI 9240-8i RAID card cross-flashed into a SAS 9211-8i Host Bus Adaptor in IT mode, using firmware version 16.00.00.00 and BIOS version 07.31.00.00 (see the sketch after this list).
Virtual Plex Media Server, version 0.9.12.11, running CentOS 7.0 as guest OS
Virtual FreeNAS server, release 9.3.0
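For reference, the commonly documented cross-flash procedure looks roughly like this; the file names (sbrempty.bin, 2118it.bin, mptsas2.rom) come from the LSI firmware packages and vary between guides, so treat it as a sketch rather than a recipe:
Code:
:: Sketch of the usual 9240-8i -> 9211-8i IT-mode cross-flash (DOS/EFI boot disk).
:: File names come from the LSI firmware packages and differ between guides.

:: 1. Note the SAS address printed on the card, then wipe the MegaRAID SBR and flash.
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0

:: 2. Reboot, then flash the P16 IT firmware and (optionally) the boot BIOS.
sas2flsh -o -f 2118it.bin -b mptsas2.rom

:: 3. Restore the SAS address noted in step 1 (placeholder value below).
sas2flsh -o -sasadd 500605bxxxxxxxxx

:: 4. Verify firmware and BIOS versions.
sas2flsh -listall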

Design Considerations
For this project I was looking for a small-scale chassis with the potential to scale up to eight drives. Hot-swap drive cages were not a mandatory requirement as this is for home use. The NAS unit will be located in my home office, so it needs to be quiet, aesthetically pleasing, and energy efficient.

During my research I identified two candidates: the Fractal Design Node 304 and the U-NAS NSC-800. Both fulfilled the small form-factor requirement. I settled on the U-NAS as I preferred its design.

The real challenge for a small build like this is finding a Mini-ITX board that could support 10G Ethernet in the future, handle a minimum of eight SATA drives, and run Intel NICs. Brian Moses did an excellent write-up about his build based on the ASRock C2550D4I motherboard, which almost won me over, but reading about others struggling with the Marvell chipset and FreeNAS put me off. And if I wanted to have a go at running FreeNAS as a VM, it was clear that an HBA and PCI passthrough were the way to go. So back to the drawing board.

Then I had an epiphany. Patrick's review of the Supermicro X10SDV-4C-TLN2F on this site showed a new possibility: a Mini-ITX board that is already 10G NIC capable and has a PCIe slot for an HBA card would be the way to go! FreeNAS would love the RAM potential of this board; up to 128 GB of DDR4 ECC RAM. Not that I ever plan to deploy that much storage/RAM for FreeNAS. But then again, six years ago I never thought I would need more than 3 GB of NAS storage in my house...
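Since the plan is to run FreeNAS in a VM with the HBA handed over via PCI passthrough, the CentOS 7/KVM side should boil down to roughly the following. Treat it as a sketch: the PCI address and the guest name (freenas) are just examples.
Code:
# 1. Enable the IOMMU: append intel_iommu=on to GRUB_CMDLINE_LINUX in /etc/default/grub,
#    regenerate the config and reboot.
grub2-mkconfig -o /boot/grub2/grub.cfg

# 2. Check the IOMMU is active and find the HBA's PCI address.
dmesg | grep -i -e DMAR -e IOMMU
lspci -nn | grep -i lsi          # e.g. 01:00.0 Serial Attached SCSI controller ...

# 3. Hand the whole controller to the FreeNAS guest via libvirt (managed mode
#    detaches it from the host driver automatically). Address is an example.
virsh attach-device freenas --config /dev/stdin <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF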

//Jimmy
 

jimmy_1969

New Member
Jul 22, 2015
Jakarta
11 Oct-15, Some Initial Lessons Learned

Author's Remark

This post shares a few of my initial learnings from this project. As of now the build is still ongoing.

U-NAS Chassis Fan Noise
Based on other build logs with this chassis, I decided to replace the original Gelid Solutions Silent 12 PWM stock fans with Nanoxia Deep Silence 120 mm fans. In hindsight this was not necessary, as the stock fans run inaudibly in my configuration.

U-NAS Chassis USB 3.0 Support
The chassis has a front-panel connector for USB 2.0. The vendor offers an optional USB 3.0 connector, which I selected. However, the Supermicro X10SDV-4C-TLN2F does not have a USB 3.0-capable front-panel header: the built-in rear connectors support USB 3.0, but the two on-board front-panel headers only support USB 2.0.

Seasonic PSU Installation
Luckily the PSU I ordered came with a back plate. This wasn't something I gave much thought to when ordering, but without it, mounting would have been tricky. The PSU is 20 cm long and is essentially only secured against the back panel of the chassis. Under the PSU there is a 5 mm gap to the chassis floor, which means the two rear mounting screws need to carry the entire weight. I added some rubber spacers between the chassis and the PSU to improve the balance.

When connecting the PSU to the motherboard, use the 24-pin power connector only. The additional 4-pin connector is not used in this configuration.

The Seasonic PSU is a great little piece of kit. It is modular and comes with pre-made cables, which unfortunately are about 3-4 centimetres too short for this configuration, so a short 24-pin power cable extender is needed.

Motherboard X10SDV-4C-TLN2F Installation and Over-Heating Issues
This board comes with a passively cooled Intel Xeon D-1520 CPU. Currently the only 32 GB DDR4 ECC module officially supported by Supermicro is the Samsung M393A4K40BB0-CPB 32GB DDR4-2133. But unlike most other fan-less Mini-ITX motherboards I have tried, this one will over-heat within a couple of minutes of use without a CPU fan; the over-heat protection then kicks in, causing the board to switch off. This is a very fundamental thing that needs to be understood: stable operation requires supplementary cooling. Full stop.

This is something I learnt the hard way after first booting up the motherboard. When I entered the BIOS set-up menu everything initially worked as expected, but after a few minutes navigating the menus with the keyboard started to fail, to the point where the BIOS froze completely. LED8 on the motherboard indicates over-heating (steady green light) or power/fan failure (flashing green light). The catch is that without a fan connected, the LED will indicate fan failure even if the CPU is over-heated. There is a separate over-heat LED header to which an external LED can be connected, and that is how I finally figured out why my system was unstable.

Also note that this motherboard does not come with a PC speaker, so if you would like to hear the BIOS error beeps you will want to buy a 4-pin speaker and connect it to header JD1. Considering this motherboard costs around 700 USD, it is a mystery to me why Supermicro opted to exclude a 50-cent speaker.

Motherboard X10SDV-4C-TLN2F NIC Driver Issue
According to other posts the board is equipped with 10GbE NICs based on Intel's X557-AT 10GBASE-T chip (driver ixgbe). Component specifications can be found here. Since this chip was only recently released, some distributions do not ship drivers for it yet. My initial tests using the CentOS 7.1 (kernel 3.10.0-229) installer ISO were unsuccessful: the kernel detects an ixgbe device but does not manage to configure any network interface.
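For anyone hitting the same wall, this is roughly how I check whether the ports are detected at all and which driver (if any) binds to them; the interface name is an example:
Code:
# Are the 10GbE devices visible on the PCI bus at all?
lspci -nn | grep -i ethernet

# Did the in-kernel ixgbe module load, and what did it say?
dmesg | grep -i ixgbe
modinfo ixgbe | grep ^version

# If an interface was created, confirm which driver and firmware it is using
ip link show
ethtool -i eth0      # interface name is an example; adjust to your system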

//Jimmy
 

Jeggs101

Well-Known Member
Dec 29, 2010
I just looked to see if Patrick's review covered your points and saw this, literally at the top of the review under negatives:
Lack of 10GbE drivers included in current generation server operating systems means that one can have challenges installing directly from OS CDs. Passive cooling means that for desktop/ NAS applications additional cooling will be required.
Spot on, save for the speaker. Nice point there on the speaker. Maybe @Patrick should cover that in reviews.
 

jimmy_1969

New Member
Jul 22, 2015
Jakarta
@Jeggs101
Thanks for your reply.

Regarding the cooling, my experience is that even with the CPU at idle, installed on an open-air test bed in a 22°C room with only RAM and a keyboard attached, just booting into the BIOS is enough to over-heat the board in 2-3 minutes. Unless you run this in an ice-cold server hall, passive cooling alone will not suffice for any application.

I just wanted to make this point crystal clear to anyone doing a project based on this board. Since it is not self-evident when an over-heat situation has occurred, you could easily mistake the symptoms for a hardware or BIOS failure. The D-1520 is very much uncharted territory for me, and I hope sharing my beginner's mistakes will help others succeed in their builds.
 

Chuckleb

Moderator
Mar 5, 2013
Minnesota
I find that much of the server gear is meant for active cooling unless you have good case fans. My x8 and x9 systems overheat on my workbench when doing an ls. The solution is a 20mm fan over the north bridge or any chip that has a heatsink on it. I do this for my SAS cards and will probably consider on my 10GbE gear. For my workbench machine for sure.
 

Jeggs101

Well-Known Member
Dec 29, 2010
I find that much of the server gear is meant for active cooling unless you have good case fans. My x8 and x9 systems overheat on my workbench when doing an ls. The solution is a 20mm fan over the north bridge or any chip that has a heatsink on it. I do this for my SAS cards and will probably consider on my 10GbE gear. For my workbench machine for sure.
I get a 120mm fan and just rest it on whatever heatsink or anything I can. Cools these boards no probs.

Good solution for the workbench, not so much for that U-NAS case. Good read, Jimmy.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Thanks for all the info jimmy_1969 - I've also got an NSC-800 and was mulling an upgrade from my current ASRock E3C226D2I to one of the Xeon Ds (mostly to get 10GbE and an M.2 slot), so it's good to know about the cooling requirements.

It was doubly interesting to me since I've configured the CPU HSF (a low-profile Noctua NH-L9i) to never come on, and I can cool my setup perfectly well "passively"; I just keep drive slot one empty and the air flow from the two rear fans provides enough throughput to keep an idling CPU cool. Did the BIOS or IPMI interface report sky-high temps before initiating a shutdown? Be interesting to see what sort of temp those things got to at BIOS-idle (OS-idle in my limited experience tends to be much lower since there's power-saving going on).

Is there enough headroom between the top of the heatsink and the edge of the drive cage to fit in a NF-A9x14? How do you plan on mounting it?
 

jimmy_1969

New Member
Jul 22, 2015
Jakarta
Did the BIOS or IPMI interface report sky-high temps before initiating a shutdown? Be interesting to see what sort of temp those things got to at BIOS-idle (OS-idle in my limited experience tends to be much lower since there's power-saving going on).
Surprisingly the BIOS does not contain any CPU temperature read-outs. All motherboard sensors are accessible through the IPMI, so once I had that up and running it wasn't difficult to spot the 98°C (!) red temperature warning.
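If it helps anyone, the temperatures can be pulled with ipmitool either locally or over the LAN; the IP address and credentials below are placeholders:
Code:
# Locally, through the kernel IPMI interface
modprobe ipmi_si
modprobe ipmi_devintf
ipmitool sensor list | grep -i temp

# Or remotely against the BMC (IP and credentials are placeholders)
ipmitool -I lanplus -H 192.168.1.100 -U ADMIN -P ADMIN sdr type Temperature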

Is there enough headroom between the top of the heatsink and the edge of the drive cage to fit in a NF-A9x14? How do you plan on mounting it?
It's a tight fit but that is exactly the solution I am aiming for. As for mounting, here is a good example (contributed in this forum by @Diverge).
 


canta

Well-Known Member
Nov 26, 2014
I see one big issue with the NSC-800 case: the chamber between the HDD backplane and the fans is very limited.
To get better cooling there is really only one way: use static-pressure fans, which are louder, so that the airflow actually cools the motherboard, HBA, and eight HDDs.

I don't believe there is any way to make the NSC-800 silent without sacrificing some temperature headroom.

Blower fans would reduce noise more than axial models.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
I use two BeQuiet 120mm fans on mine blowing in the usual backwards direction, PWMed at about 800-900rpm under almost all loads and disc temperatures (a bunch of WD Green 6TB) are about 10°C over ambient at idle and maybe 15-20°C over ambient under extremely heavy load (i.e. RAID6 rebuild) so I've never really thought beefier fans were needed. I don't really have anything that'll run the CPU (Xeon E3 1230v3) any hotter than 40°C for any length of time with the CPU HSF running between 800 and 1000rpm.

Surprisingly the BIOS does not contain any CPU temperature read-outs. All motherboard sensors are accessible through the IPMI, so once I had that up and running it wasn't difficult to spot the 98°C (!) red temperature warning.
Yoiks, that's quite surprising given that the Xeon D platform is meant to be so frugal with power; do you have a power meter to see what its draw at the wall is like? During a cold boot my server peaks at about a 95W draw during disc spin-up and idles at about 65W with all the discs spun up... I'm wondering if you can avoid the temperature cut-out by booting into an OS with power management features (e.g. Linux from a bootable USB).
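Something like this run from the live USB should show whether frequency scaling and the idle states are actually kicking in (the sysfs paths below are the usual ones; adjust to taste):
Code:
# Which frequency-scaling driver/governor is in charge?
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Watch actual core frequencies, C-state residency and package power (needs root)
turbostat

# powertop lists per-device tunables and estimated power draw
powertop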

That said... I wish there were some nice aftermarket HSF solutions for these boards; I'm loath to resort to clips or glue myself as something always inevitably comes loose and starts rattling at some point.
 

canta

Well-Known Member
Nov 26, 2014
I use two BeQuiet 120mm fans on mine blowing in the usual backwards direction, PWMed at about 800-900rpm under almost all loads and disc temperatures (a bunch of WD Green 6TB) are about 10°C over ambient at idle and maybe 15-20°C over ambient under extremely heavy load (i.e. RAID6 rebuild) so I've never really thought beefier fans were needed. I don't really have anything that'll run the CPU (Xeon E3 1230v3) any hotter than 40°C for any length of time with the CPU HSF running between 800 and 1000rpm.
....
My sharing and understanding:
Eight HDs or fewer?

Blowing backwards is a kind of temporary patch. In an ideal case the path is cold (ambient) air -> case -> hot air, and the air inside the case should not recirculate. That is why most proper workstation and server cases put exhaust fans on the back panel to pull air from inside the case to the outside.

You blow at the HDs directly through tiny square holes, which works for your Greens since they do not generate much heat thanks to the low spindle speed.

Some hot air keeps circulating inside the case because there is no good path to take it out of the case.

An E3 CPU generates very little heat :D. The maximum average I see during busy periods on mine is 35°C, which is very cold by my standards. I can push it to about 50°C (as I remember) when running a cpuburn test with 4 threads.

10°C over ambient at idle is a bit scary for me :D, especially since my current HDs are 7.2K RPM and generate a lot of heat when not idle.
My rule of thumb is 5°C over ambient at idle, 10°C on average under load, and 15°C during rare usage spikes.
My ambient temperature is 25-27°C.

At the end of the day it is your gear :D, you can do whatever you want :D
 

JimPhreak

Active Member
Oct 10, 2013
I'm very interested in the temperature results you get once your system is fully operational (OS installed and running). I have the 1540 version of this board with 2x 10Gig NICs and the active cooling fan on the heatsink, and I also have a U-NAS NSC-800 case that I'm looking to move to since I need more hard drive bays than I currently have in my SuperChassis 721TQ-250B. I'm concerned about heat though, so it's either go with the U-NAS case or move to a more traditional rack-mount chassis. The only problem is finding a shallow enough rack case, because the standard depth is way too big for my space.
 

jimmy_1969

New Member
Jul 22, 2015
Jakarta
Project Update 19 October-15

Motherboard X10SDV-4C-TLN2F NIC Driver Issue Progress Update
I followed the advice to move to Fedora and installed Fedora Server 23 Beta with the 4.2 kernel and the latest ixgbe-4.1.5.tar.gz. NetworkManager is a pain, so I gave up, disabled it, and went for old-school manual IP configuration. After a lot of fiddling around I finally managed to get the link up! There is still a bit of troubleshooting left to make the network configuration stable and have it survive reboots.
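Roughly, the driver part boils down to the following (the build needs kernel-devel, gcc and make; the interface name is an example):
Code:
# Build and install the out-of-tree Intel driver
tar xzf ixgbe-4.1.5.tar.gz
cd ixgbe-4.1.5/src
make install

# Swap the in-tree module for the freshly built one and check for link
modprobe -r ixgbe
modprobe ixgbe
dmesg | grep -i ixgbe | tail
ip link set enp1s0f0 up          # interface name is an example
ethtool enp1s0f0 | grep -i "link detected"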

At least for now I know my hardware is OK.

//Jimmy
 

canta

Well-Known Member
Nov 26, 2014
Project Update 19 October-15

Motherboard X10SDV-4C-TLN2F NIC Driver Issue Progress Update
I followed the advice to move to Fedora and installed Fedora Server 23 Beta with the 4.2 kernel and the latest ixgbe-4.1.5.tar.gz. NetworkManager is a pain, so I gave up, disabled it, and went for old-school manual IP configuration. After a lot of fiddling around I finally managed to get the link up! There is still a bit of troubleshooting left to make the network configuration stable and have it survive reboots.

At least for now I know my hardware is OK.

//Jimmy
I think you have to edit ifcfg-ethX and set the network service to run at boot in compatibility mode...

I run two CentOS 7 machines with the old-school network scripts in compatibility mode and one CentOS 7 machine with the NetworkManager daemon...
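Something like this, roughly; interface name and addresses are examples, and it assumes the legacy network-scripts/initscripts package is installed:
Code:
# Let the old-school network scripts own the interface instead of NetworkManager
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network

# /etc/sysconfig/network-scripts/ifcfg-enp1s0f0 -- name and addresses are examples
DEVICE=enp1s0f0
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.1.10
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.1

# Bring it up and make sure it comes back after a reboot
systemctl restart network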
 

jimmy_1969

New Member
Jul 22, 2015
Jakarta
Project Update 24 Oct-15

Power Measurements
Performed a set of power tests today. All measurements were taken at the power socket, on 240 V AC.
Test scenarios and results:
  1. BIOS idle #1, bare-bone (motherboard w/ CPU & RAM, PSU): 35 W
  2. BIOS idle #2, adding 1x CPU fan, 2x chassis fans and HBA card: 42 W
  3. BIOS idle #3, adding 1x HDD WD WDS500KS 500 GB: 55 W
  4. OS idle, booting Fedora 23 and measuring idle load, HDD in active mode: 45 W

The table below shows a simulation based on using WD Red 4 TB drives (6.8 W power consumption per HDD).
Power Consumption Table.png
Note that these are OS-idle levels, but it looks like even with eight drives there should be ~20% headroom before hitting my utilisation target for the 350 W PSU (50% utilisation).
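To put the PSU budget in context, using only the figures already stated above (the actual table may use different per-drive numbers):
Code:
# PSU utilisation target:                  50% of 350 W = 175 W budget
# Eight WD Reds (OS idle):                 8 x 6.8 W    = 54.4 W for the drives alone
# Measured OS-idle base, one old HDD attached            = 45 W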

Best Regards

//Jimmy
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Nicely done - although power consumption still seems a tad high for my liking. Does Fedora come with any aggressive power saving turned on by default? I added the following udev rules to my install and chopped off about 10W from my idle power budget on my E3 system.
Code:
effrafax@wug:~# cat /etc/udev/rules.d/80-power-saving.rules
# turning on all available powersave tunables
# pci / pcie
ACTION=="add", SUBSYSTEM=="pci", ATTR{power/control}="auto"

# usb suspend
ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{power/control}="auto"

# sata ALPM
ACTION=="add", SUBSYSTEM=="scsi_host", KERNEL=="host*", ATTR{link_power_management_policy}="min_power"
YMMV on affecting stability/performance of course, but that got me to an idle draw of about 65W including the M1015 and six WD 3TB drives (although I haven't retested since upgrading the drives).
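If you want to try them without a reboot, reloading and spot-checking should be enough; the PCI address below is just an example:
Code:
# Re-apply the rules without rebooting
udevadm control --reload
udevadm trigger

# Spot-check: PCI devices should now say "auto", SATA links "min_power"
cat /sys/bus/pci/devices/0000:00:1c.0/power/control     # address is an example
cat /sys/class/scsi_host/host0/link_power_management_policy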
 

canta

Well-Known Member
Nov 26, 2014
So defining the PM policy as "min_power" saves 10 W? In my understanding there are usually services running that eat processing cycles; use top to find out.

Also, the BIOS usually offers a choice between power saving and performance, and the Linux kernel picks this up when it boots.
Just a warning: enabling power saving can cause trouble for bare-metal hypervisors such as ESXi, Proxmox and others. I have seen ESXi and Proxmox freeze with power saving enabled in the BIOS.

Mine is running CentOS 7.x with the BIOS set to performance: an i3 with an old dual-NIC Intel card, 1 SSD, 5x 80mm Supermicro fans (power hungry, plus a SAS2 backplane), an HBA card, 1x 40mm fan for the HBA, 4x 4GB ECC unbuffered RAM, and 9 Barracuda 3TB drives. Total average consumption is 95-100 W; I force all the Barracuda 3TBs to run with APM disabled, which raises power consumption.

I tested with only the PSU + motherboard + 4x 4GB RAM and got 14 W on the i3.
An E3 should not be much different at idle.
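For reference, drive APM is usually checked and changed with hdparm; device names below are examples, and 255 disables APM entirely:
Code:
# Show the current APM level (1 = most aggressive power saving, 254 = max performance)
hdparm -B /dev/sda

# Disable APM entirely, or pick a middle value that still allows head parking
hdparm -B 255 /dev/sdb
hdparm -B 127 /dev/sdc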
 

matt_garman

Active Member
Feb 7, 2011
In case you haven't seen this: I did a similar build almost two years ago (can't believe it's been that long!).

Mine has held up perfectly fine. Since I initially built it, I've upgraded RAM, upgraded the OS (CentOS 6.5 to 7.x), and replaced one failed hard drive. Otherwise, it's been "set it and forget it". I keep mine in a closet in the basement. The upside of that is I don't have to worry about noise, so I just max out all fans and let it go. However, due to the tiny amount of space available for a CPU heatsink + fan assembly, I've been running with two cores disabled ever since I built it. I think I'd likely be OK with all four cores enabled, but I don't need the power, and disabling cores is an easy way to lower the effective TDP of the CPU. (It doesn't improve idle power consumption measurably though.)

However, I've lately been flirting with the idea of migrating the build to the iStarUSA S-35-DE5. Hard drive capacities have grown faster than my storage needs, so now I can consolidate to fewer (but bigger drives). Thus, I can lose the HBA and run off the motherboard's SATA ports directly. I can also put a "proper" big tower heatsink cooler on the CPU and confidently enable all four cores. I like that this case is somewhat modular, as I actually used the effectively-the-same S-35-3DE1 for a build for my parents. With the way SSD capacities and prices are trending, I'm thinking in a few more years I can move entirely to SSD at a reasonable cost.
 