U-NAS NSC-800+Supermicro X10SDV-TLN4F Storage Server Build


jimmy_1969

New Member
Jul 22, 2015
In case you haven't seen this: I did a similar build almost two years ago (can't believe it's been that long!).
Hi Matt,

As a matter of fact, I did come across your build log when I planned this project. It was a great inspiration to see how you managed to fit those components into such a small chassis. I was really impressed with your dual-SSD mounting solution.

Never considered disabling cores until I read your post. How much would you say that contributes in terms of temperature decrease? What CPU, HDD and internal chassis temperatures do you typically see under load?

Thanks for sharing your experiences!
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Never considered disabling cores until I read your post. How much would you say that contributes in terms of temperature decrease? What CPU, HDD and internal chassis temperatures do you typically see under load?
I've got pretty much the same setup as Matt - E3C226D2I, E3-1230v3, M1015, two SSDs, six 6TB WD Greens, Seasonic 300W PSU. Tried disabling cores myself (I also don't need the power) but found it made no difference to overall temps. Ambient temp at the minute is about 18°C and all three fans are at ~1000rpm.

Code:
effrafax@wug:~# sensors
coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +42.0°C  (high = +80.0°C, crit = +100.0°C)
Core 0:         +34.0°C  (high = +80.0°C, crit = +100.0°C)
Core 1:         +34.0°C  (high = +80.0°C, crit = +100.0°C)
Core 2:         +41.0°C  (high = +80.0°C, crit = +100.0°C)
Core 3:         +33.0°C  (high = +80.0°C, crit = +100.0°C)

nct6776-isa-0290
Adapter: ISA adapter
+3.30V:       +3.38 V  (min =  +2.98 V, max =  +3.63 V)
+5.00V:       +5.09 V  (min =  +4.75 V, max =  +5.26 V)
+12.00V:     +12.36 V  (min = +11.40 V, max = +12.62 V)
M/B Temp:     +33.0°C  (high = +60.0°C, hyst = +50.0°C)  sensor = thermistor
CPU Temp:     +35.5°C  (high = +80.0°C, hyst = +75.0°C)  sensor = thermistor

effrafax@wug:~# hddtemp /dev/sd{a..i}
/dev/sda: Crucial_CT240M500SSD1: 26°C
/dev/sdb: Crucial_CT240M500SSD1: 26°C
/dev/sdc: WDC WD60EZRX-00MVLB1: 28°C
/dev/sdd: WDC WD60EZRX-00MVLB1: 28°C
/dev/sde: open: No such file or directory
/dev/sdf: WDC WD60EZRX-00MVLB1: 29°C
/dev/sdg: WDC WD60EZRX-00MVLB1: 28°C
/dev/sdh: WDC WD60EZRX-00MVLB1: 26°C
/dev/sdi: WDC WD60EZRX-00MVLB1: 28°C

effrafax@wug:~# ipmitool sensor
ATX+5VSB         | 5.040      | Volts      | ok    | 4.230     | 4.710     | na        | na        | 5.550     | 5.610
+3VSB            | 3.440      | Volts      | ok    | 2.780     | 2.820     | na        | na        | 3.660     | 3.680
Vcore            | 1.790      | Volts      | ok    | 1.240     | 1.260     | na        | na        | 2.090     | 2.100
VCCM             | 1.350      | Volts      | ok    | 1.090     | 1.120     | na        | na        | 1.720     | 1.750
+1.05            | 1.060      | Volts      | ok    | 0.870     | 0.900     | na        | na        | 1.220     | 1.250
VCCIO_OUT        | 1.010      | Volts      | ok    | 0.850     | 0.900     | 0.940     | 1.150     | 1.210     | 1.270
BAT              | 3.200      | Volts      | ok    | 2.380     | 2.500     | na        | na        | 3.580     | 3.680
+3.30V           | 3.360      | Volts      | ok    | 2.780     | 2.820     | na        | na        | 3.660     | 3.680
+5.00V           | 5.070      | Volts      | ok    | 4.230     | 4.710     | na        | na        | 5.550     | 5.610
CPU_FAN1         | 1000.000   | RPM        | ok    | na        | na        | 100.000   | na        | na        | na
REAR_FAN1        | 1000.000   | RPM        | ok    | na        | na        | 100.000   | na        | na        | na
FRNT_FAN1        | 1000.000   | RPM        | ok    | na        | na        | 100.000   | na        | na        | na
M/B Temperature  | 33.000     | degrees C  | ok    | na        | na        | na        | 80.000    | na        | na
CPU Temperature  | 33.000     | degrees C  | ok    | na        | na        | na        | 91.000    | na        | na
+12.00V          | 12.300     | Volts      | ok    | 10.100    | 10.300    | na        | na        | 13.300    | 13.400
 

matt_garman

Active Member
Feb 7, 2011
As a matter of fact, I did come across your build log when I planned this project. It was a great inspiration to see how you managed to fit those components into such a small chassis. I was really impressed with your dual-SSD mounting solution.

Never considered disabling cores until I read your post. How much would you say that contributes in terms of temperature decrease? What CPU, HDD and internal chassis temperatures do you typically see under load?
Sorry for the delayed reply, I didn't see this until now.

I can't claim credit for the dual SSD idea; someone else on the web did that first (I think I linked to the post that provided the inspiration).

Anyway: disabling cores won't lower idle temps (or power consumption). But what it will do is effectively lower the max TDP of your CPU. So if you have a process (or multiple processes) capable of maxing out all cores simultaneously, then you will have less current draw (and therefore less heat) when cores are disabled.

Based on what I've read, and confirmed through anecdotal reports on the web, as well as my own testing, modern Intel CPUs, when they are in their deepest power-saving mode, are effectively disabled. That's why BIOS-level core disabling doesn't save power or lower temps at idle: electrically, there is virtually no difference.
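
If you want to see this on your own machine, the kernel exposes the idle states via sysfs. A quick sketch, assuming Linux with the cpuidle/intel_idle driver loaded (paths may vary slightly by kernel version):

Code:
# Idle states core 0 can enter; in the deepest C-states the core is
# effectively power-gated, which is why BIOS-level disabling changes
# little at idle
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
# Time (in microseconds) spent in each idle state so far
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/time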

Every now and then, it seems like I get a process that gets in a bad state and eats up a whole CPU core. It's rare, but usual suspects include kipmi and multimedia related stuff (e.g. mpd, before I moved that to a separate system). I don't do regular monitoring on this system, so it's possible a process could get into a state like that and go unnoticed for a while. Consider the worst-case scenario, where several processes (or one multi-threaded process) goes nuts, eating up all four CPUs. Then I could get into a situation where temps might be a problem. But with two cores disabled, that worst-case-scenario becomes not quite so bad. And since I don't need the computing power either way, I'm not giving anything up to have a little extra safety margin.

If you have a Kill A Watt or similar power meter, you can easily demonstrate this to yourself. Measure your system power draw in a fully idle state; this is your baseline (roughly 30W in my case). Now run something like prime95 with all cores enabled and make note of the max power draw. This ought to be roughly your baseline plus the TDP of your CPU (30 + 80 = 110 in my example). Now disable half your cores and repeat. What you should see is that the max power draw is now roughly baseline plus half the CPU's TDP (30 + 40 = 70 in my case). The numbers won't be exact, because PSU inefficiency blurs them a bit, but in this experiment the difference is very obvious.
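
If you'd rather not reboot into the BIOS for each run, Linux also lets you take cores offline on the fly, which makes the same experiment quick to repeat. A rough sketch using stress-ng as a stand-in for prime95 (core numbers are illustrative; with Hyper-Threading you would offline both siblings of a core):

Code:
stress-ng --cpu 0 --timeout 120s    # 0 = all online cores; note the wall power
echo 0 | sudo tee /sys/devices/system/cpu/cpu2/online   # take two cores offline
echo 0 | sudo tee /sys/devices/system/cpu/cpu3/online
stress-ng --cpu 0 --timeout 120s    # repeat; peak draw should drop noticeably
echo 1 | sudo tee /sys/devices/system/cpu/cpu2/online   # bring them back
echo 1 | sudo tee /sys/devices/system/cpu/cpu3/online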

However, your Xeon D-1520 is already a 45W part (versus 80W for my E3-1230v3). Disabling two cores lowers my max TDP by approximately 40W, while disabling half of yours is only good for a 20ish W max TDP saving. So the effect will be less pronounced on your system.

So, having said all that (whew!): I used to run Munin and log all kinds of system info, but never checked it, so never bothered to set it up after I upgraded my OS. So here are snapshot temps at the moment:

Ambient: 74F = 23C (using a USB temperature probe in the closet where this server lives)
Core 0: 41C
Core 1: 37C
Drive temps: 34C to 36C
 

canta

Well-Known Member
Nov 26, 2014
I totally agree, disabling some cores would lower the temps at idle :D
 

jimmy_1969

New Member
Jul 22, 2015
Project Update 8 Nov-15: 10GbE NICs Sorted and CPU Fan Installed

Getting ixgbe driver to work on CentOS 7.1
In the spirit of sharing, see this post on how to get the Intel X557-AT 10GbE NICs working on CentOS 7.1 using Intel's ixgbe 4.1.5 drivers. The scenario I was struggling with was getting the link to auto-negotiate its speed down so it could connect to my 1Gb Dell PowerConnect 5324 switch (no 10GbE switch in my lab).
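
For anyone who lands here without following the link: the usual out-of-tree build of Intel's ixgbe source looks roughly like the sketch below. These steps are from memory rather than from the linked post, so treat them as a starting point (you need kernel-devel and gcc installed first).

Code:
tar xzf ixgbe-4.1.5.tar.gz
cd ixgbe-4.1.5/src
make && make install                 # builds against the running kernel's headers
modprobe -r ixgbe && modprobe ixgbe  # reload the new module
ethtool -i <interface>               # confirm the driver version; interface name varies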

CPU Fan and Final Assembly
The U-NAS chassis is rather small, presenting two major challenges: heat dissipation and component placement. The gap between the CPU heat sink and the opposing chassis wall is only ~25 mm, and due to other components there is no way a standard 20 mm-thick fan can fit there. Enter the Noctua NF-A9x14 PWM static-pressure fan, a low-profile fan measuring 14 mm x 92 mm x 92 mm.

I mounted the fan using cable ties. One challenge is that the chassis wall blocks quite a lot of airflow. There is about a 5 mm gap between the fan and the heat sink, and the airflow could potentially be improved by adding spacers between the wall and the fan to increase the intake clearance.

01-Top_view_medium.jpg 04-CPU_fan_medium.jpg
Top view showing the CPU fan installed

The chassis is equipped with eight hot-swap slots. My strategy is to only populate half of them, so to optimise heat dissipation I mounted my drives in the four right-most slots, as far away from the CPU as possible.

The chassis has a prefabricated air intake on the left side for motherboard cooling. I put a spare Sunon KDE1204PKV3 40 mm fan there to improve the inflow. It's a bit of a challenge, as the front part of this chamber is also used for cable management. Note the 24-pin ATX extension cable to the right; this was required as the Supermicro motherboard's power connector sits to the left of the right-most lower screw.
05-Side_Case_fan_medium.jpg
Side fan

Another tight squeeze is between the LSI 9240-8i card and the 2.5" OS drive. The card is 165 mm long, which overlaps the hard drive mounting plate by 21 mm. Luckily, the LSI card sits 2 mm above the drive's casing. I put a plastic protector on top of the drive just in case, to avoid short circuits. Note the hard drive mounting plate, which is required for internal (non-hot-swap) disks. I actually have two of these plates, but need 20 mm M3 spacers in order to install the second one on top of the first. One problem is that one of the four mounting screws is blocked by the LSI card, so I decided to put this on the back burner for now.

02-Top_view_rear_medium.jpg 03-Boot_drive_and_hba_board_medium.jpg
Tight fit, with the LSI card only 2 mm above the drive casing

One of the biggest challenges in this build was to manage the internal power cables. The Seasonic PSU is equipped with two 24 pin molex connectors, and there is not much dead space in this chassis to hide unused cables.
06-Power_Cable_mgmt_right_side_medium.jpg
HDD Power Cable Management

As part of this build I also replaced the stock chassis fans with two Nanoxia Deep Silence 120 mm fans. In hindsight, the original stock fans aren't that bad (Silent 12 PWM from Gelid Solutions), and since the BIOS does not push the fans to max RPM, they run very quietly. Quiet enough for the case to sit next to me. It's just that with the Nanoxia fans I can run the BIOS fan setting on max and still get whisper-quiet performance.
07-Rear_Chassi_Fans_medium.jpg

What's next? Hardware-wise I plan to replace the old 500 GB WD drives with 4 or 5 TB drives in a x4 configuration. But before doing that, I plan to get cracking on installing oVirt 3.6 on the host, deploying a FreeNAS VM and doing PCI pass-through of the HBA. The objective is to run this server as a management host for KVM virtualisation together with a FreeNAS VM that will constitute the main storage solution for my lab. I expect these workloads to use only a fraction of the system resources, leaving room to deploy additional VMs.

//Jimmy
 

jimmy_1969

New Member
Jul 22, 2015
Project Update 18 Feb-16

Virtual Environment
After playing around with a few different virtual environments I decided to go with Proxmox. It installed easily and the 10GbE NICs came up after the initial installation without any hassle. For instructions on how to install, please refer to this guide by Patrick from STH.

In my case the host server uses 2x 250 GB SSDs (Samsung 850 EVO, 2.5" SATA III) in a RAID-1 configuration as Proxmox boot drives and to store the guest VMs. VMs are backed up to a secondary NAS over an NFS share. This is a less fancy solution compared to the Ceph cluster in Patrick's guide, but it gets the job done.

In addition, the host server has a 2x 6 TB HDD configuration (HGST Deskstar NAS, internal SATA III, 7200 RPM). Each drive is passed through to the FreeNAS VM. In the picture below we can see the host server, "unas-host", running one guest VM, "freenasvm100", as well as the external NFS share used for VM backups, "readynas_nfs_backup".
Proxmox GUI.JPG
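
For reference, the usual way to hand individual disks to a Proxmox guest is with qm set; passing the whole HBA instead would use the hostpci option. A minimal sketch, where the VM ID 100 and the disk IDs are placeholders:

Code:
# Attach each HGST disk to the FreeNAS guest; /dev/disk/by-id paths
# survive reboots, unlike /dev/sdX names
qm set 100 -scsi1 /dev/disk/by-id/<first-HGST-disk-id>
qm set 100 -scsi2 /dev/disk/by-id/<second-HGST-disk-id>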
FreeNAS Virtual Machine
The two HDDs are configured as a mirrored ZFS pool in FreeNAS, shown in the GUI screenshot below as "bigpool".

FreeNAS GUI.JPG
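
The same pool can also be checked from the FreeNAS shell, which is handy for scripted health checks:

Code:
zpool status bigpool   # shows the mirror vdev and both member disks
zpool list bigpool     # capacity and health at a glance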

Data Migration
When I researched this topic I wanted to find methods for migrating data between two NAS systems that verify integrity at the level of individual file content. In the real world a new storage deployment is seldom done in isolation; rather, it is part of a bigger scenario where existing data needs to be migrated. For someone working with unique content like photos or video, any data corruption would be disastrous. Despite my best efforts, it proved challenging to find a good guide on how to migrate files across storage systems from a data-integrity perspective. I would be very interested in hearing from professionals about what procedures they use to ensure data integrity during IT migration projects.

The old NAS I am migrating from contains about 3 TB of data. The migration strategy follows these steps:
  1. Initial bulk migration: Do a complete copy of all files from the source NAS via an NFS share on the target NAS.
  2. Incremental migration: Maintain data integrity by doing periodic incremental copies using rsync.
  3. Repeat step 2 until the new target NAS is considered stable and ready for service.
Using NFS to copy files is far quicker than rsync, but obviously not very intelligent. To add some data integrity, I tarred each directory to be migrated, piped the file names to md5sum and saved each individual file's MD5 hash in a separate file. The big idea being that once I unpack the tar on the target NAS, the integrity of the file contents can be validated against the MD5 hash file as the master reference. That was the concept.
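
A minimal sketch of that concept, assuming a directory called photos/ and a GNU-style md5sum available on both ends (which, as explained below, is exactly where it got complicated):

Code:
# On the source NAS: per-file MD5 manifest, then the archive itself
find photos/ -type f -print0 | xargs -0 md5sum > photos.md5
tar czf photos.tar.gz photos/

# On the target NAS, after copying and unpacking: verify every file
tar xzf photos.tar.gz
md5sum -c photos.md5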

In reality, my source NAS runs a ReadyNAS fork of Debian Sid, which ships with the md5sum tool, while the target NAS is based on FreeBSD, which ships with md5. The output formats of the two differ, so in practice using MD5 checks for data integrity across the two platforms was more complicated than anticipated. Being able to do integrity checks at the level of individual file content would be great, but it's fiddly to make it work between BSD and Linux systems.

A simpler approach is to base the integrity check on a tool supported on both platforms, e.g. gzip. Here the CRC check is done at archive level rather than for individual files, meaning unpacking will detect a corrupted archive but cannot repair the data. The trade-off is that if corruption is detected on the target NAS, the entire archive needs to be re-created on the source NAS and copied across again.
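
A sketch of that archive-level check, reusing the hypothetical photos.tar.gz from above:

Code:
gzip -t photos.tar.gz && echo "archive OK"   # tests the CRC without extracting
tar tzf photos.tar.gz > /dev/null            # listing forces the same decompression check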

Stability Period
Right now my FreeNAS is in a stability/monitoring phase during which it mainly serves as a backup for my old NAS. I need to improve my UPS set-up before I deploy my UNAS as primary NAS server.

Project Findings So Far
This has been a very enjoyable project with a lot of challenges. Just getting the system to POST and run the CPU without overheating was a major hurdle. I expected the virtualisation and FreeNAS to be the most difficult parts to grasp, but I was impressed with the level of maturity that Proxmox and FreeNAS have reached, and found the initial learning curve less steep than expected. This is mainly thanks to very active user communities and forums. Going from zero to an actual deployment is achievable thanks to that support, and it builds confidence that if I run into operational issues in the future, there are experienced users willing to share their knowledge.

//Jimmy
 

jimmy_1969

New Member
Jul 22, 2015
7 Years Later - Update 29 Sep-22

Background

I thought I would make a final postscript about this build since it has been seven years since the project started.
Serve The Home has lots of inspiring build posts, but I thought I would contribute one that shares long-term lessons learned. Hopefully this will help others who are in the same position I was in seven years ago, building a home NAS.

Original Goals
  • Small-scale chassis with potential to scale up to eight drives
  • Quiet operation as it was intended for a home office location
  • Aesthetically pleasing
  • Energy efficient
The Test of Time
Overall this NAS has been running consistently 24/7 without any major issues, first on FreeNAS and later on TrueNAS CE. With 32 GB of RAM it has enough memory headroom to also act as my local hypervisor, running a blend of FreeBSD jails and bhyve VMs.

Backups are done to a local Supermicro 1U server using ZFS snapshots.
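
For anyone curious, that backup cycle boils down to snapshot-and-send. A minimal sketch, where the dataset, snapshot names and backup host are made up:

Code:
zfs snapshot bigpool/data@2022-09-29
zfs send -i bigpool/data@2022-09-28 bigpool/data@2022-09-29 | \
    ssh backup1u zfs receive backuppool/data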

One lesson about the U-NAS platform is that the mini-ITX form factor has its drawbacks. A mini-ITX chassis is great once you have assembled all the components and the system is in a stable state. But when something happens that requires access to internal parts, you have two issues.

Firstly, the tight space makes working inside quite fiddly; any swap or fix takes considerably longer than in a larger chassis. This translates to longer downtime when internal component issues happen, which was not something I originally considered. Over time you need to open the chassis for periodic internal cleaning, and eventually fans with worn-out bearings need to be replaced.

Secondly, the small form factor combined with the U-NAS style of motherboard mounting makes the board vulnerable to damage every time it has to be removed and re-inserted. The back of the motherboard is protected from shorting only by a thin PVC sheet. This was my most costly lesson, as I accidentally managed to kill the original motherboard during a remove/insert operation.

Besides that motherboard, I have also replaced the PSU (the original broke) and the rear exhaust fans (worn out).

The chassis front consists of a rubbery plastic material. Originally this was a smooth dark colour and easy to clean; now it has dried up to the point that the surface is sticky. Another minor issue is the power button, which bottoms out very quickly and sometimes gets stuck.

Conclusion
As much as I like the look of the U-NAS NSC-800 v2 chassis, I would not recommend it long-term. It is simply too impractical to work with internally. It suffers from the same issues as most purpose-built NAS chassis.

Internal serviceability is definitely something I would consider for any future NAS build project.

Best Regards

//Jimmy
 

matt_garman

Active Member
Feb 7, 2011
As much as I like the look of the U-NAS NSC-800 v2 chassis, I would not recommend it long-term. It is simply too impractical to work with internally. It suffers from the same issues as most purpose-built NAS chassis.

Internal serviceability is definitely something I would consider for any future NAS build project.
Thanks for posting the long-term follow up! I admire your ability to more or less leave a system be, rather than constantly upgrading and being seduced by new toys.

We had very similar systems, at least at one time. I've too often been seduced by "shiny new things" and/or "change for change's sake", and detailing all my hardware swaps would be a lengthy post. Anyway, I initially started with two U-NAS NSC-800 chassis, one for the main server and one to be a backup server. Both systems have been shuffled in and out of different hardware over the years. Both NSC-800 chassis were sold (IIRC, I actually sold one to @Patrick).

Anyway, I have somewhat come full-circle. While I'm using a tower chassis for my main server, I've gone back to U-NAS for the backup server. But I'm using the newer, slightly bigger NSC-810a. At the cost of a slightly bigger footprint, it moves the motherboard mount from the side to the top, and has room for MicroATX (and you can still use MiniITX of course). It has proper mounting standoffs, so no miserable plastic sheet to contend with!

I fully agree with your criticisms of the older NSC-800 model - doing just about anything to the motherboard was an exercise in frustration. But at least the motherboard frustrations are mostly gone with the 810a. I've only used MiniITX; I do think MicroATX would be a bit snug. And there's not a lot of vertical clearance, so you're stuck with OEM or low-profile heatsinks. With the motherboard no longer mounted to the side, that side area is mostly freed up; I use it as space to shove extra cable lengths. If you need to swap a fan, that's still a big job, as it requires a fairly substantial disassembly, as before. In short, physical serviceability is definitely improved, but still not ideal.

And for what it's worth, I mentioned the iStarUSA S-35-DE5 on the previous page. That was one of the many stops for my main server over the years (looks like I did a writeup here). I still have that chassis, though I'm planning to sell it. But if you can get by with MiniITX and only five drives, I think it checks a lot of boxes: you get actual (toolless!) hotswap bays, room for a good tower CPU heatsink, a mount for a 120mm chassis fan creating well-defined front-to-back airflow, and cramped but workable serviceability.