Media NAS/Dev server build


devnoobops

New Member
Nov 27, 2024
Build’s Name: The BIG one
Operating System / Storage Platform: Linux / Proxmox
CPU: Intel 14500/14600 (?)
Motherboard: Asus W680 with IPMI (?)
Chassis: Fractal Define 7 XL (?)
Drives: At least 12x 20TB Exos drives
RAM: 64/128GB non-ECC
Add-in Cards: LSI HBA, 16 ports (?), Intel 2x SFP+
Power Supply: (?)
Other Bits: 2 SSDs for boot and internal storage

Usage Profile: NAS for Plex purpose, dev environment for software engineering (for 2-3 people), DevOps lab

Hey. After weeks of planning a new server I'm completely lost. I hope you can help me clarify things.

My purpose is to build a "power efficient", cheap and semi-powerful server for all my needs.

The first and main purpose of this machine is to host my Plex library. About 200TB of media + supporting apps.
Second purpose is to help me test some DevOps solutions based on Kubernetes.
Third purpose is to allow me and my coworkers to run CI/CD workflows, dev instances of apps (Docker + Kubernetes) and supporting apps (a support ticket system, etc.)

The NAS/media part will be built around SnapRAID and MergerFS with Plex, Jellyfin and the arr stack. I'm aiming for Intel QSV for transcoding purposes, that's why Intel.
The other parts will be hosted on multiple VMs. Not much usage; I don't need a lot of power. Currently those workflows run on a single 13th-gen i5 Intel NUC without issues.
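For reference, the storage layout I have in mind is only a few lines of config. A rough sketch (all paths and the cron schedule are placeholders, not a final plan):

```shell
# /etc/snapraid.conf -- minimal sketch, paths are placeholders
#   parity /mnt/parity1/snapraid.parity
#   content /var/snapraid.content
#   content /mnt/disk1/.snapraid.content
#   data d1 /mnt/disk1/
#   data d2 /mnt/disk2/
#   (one "data" line per disk)

# /etc/fstab -- pool the data disks into one mount point with mergerfs
#   /mnt/disk* /mnt/storage fuse.mergerfs cache.files=off,category.create=mfs,dropcacheonclose=true 0 0

# Nightly maintenance, e.g. from cron: update parity, then scrub 5% of the array
snapraid sync
snapraid scrub -p 5
```

As I understand it, SnapRAID parity is computed on a schedule rather than in real time, so files written between syncs are unprotected - which should be fine for a mostly-static media library.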

I'm not sure whether to choose server-grade or consumer-grade equipment. I was strongly considering a Dell R740xd2, but it consumes too much power and is waaaay too loud - I'm planning to keep it in my "storage closet" close to the bedroom.

First question - if I pick the Asus W680 motherboard, it has a SlimSAS connector. I completely don't understand how SAS expanders work. Is it possible to expand these 4 ports to 16 SATA drives? If not, which LSI HBA card should I pick? I've seen many Germans in here; let's say I'm living in this area, so my availability should be comparable. 9300-16i or 9400-16i? Or is my use case so basic that I should pick some gaming/consumer-grade motherboard and not overcomplicate things? IPMI seems really useful, but I could go for PiKVM.

Do you know any better chassis for 12-16 HDDs? I have 3U free in my rack. The Define 7 XL costs about 200 euro; I found nice rack chassis but they cost more like 500-700 euro.

Is it a big problem to buy 2x32GB at first and then expand to 4x32GB? Or is it better to buy all 4 sticks at once, from the same batch?

Do I need a 16-port HBA card, or should I use an expander for some reason?

Maybe AMD is a better choice? Then I would need to invest in an H.265 encoding/decoding-capable GPU.

Does a consumer-grade motherboard allow me to easily pass an HBA card through to a VM?
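From what I've read, the Proxmox side of passthrough is short once VT-d/IOMMU is enabled in the BIOS - something like this (the VM ID and PCI address below are made up):

```shell
# 1. Enable IOMMU on an Intel host: add intel_iommu=on to the kernel command line
#    (e.g. GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub), then update-grub and reboot.

# 2. Find the HBA's PCI address (the 0000:01:00.0 below is hypothetical)
lspci -nn | grep -i LSI

# 3. Hand the whole controller to VM 100; the guest then sees the raw HBA and its disks
qm set 100 --hostpci0 0000:01:00.0
```

My worry is more whether a consumer board puts the HBA in a clean IOMMU group, rather than the Proxmox commands themselves.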

Should I be worried about the power supply? In theory, 12-16 disks should not take more than 50-100W, but during power-on they can draw 250-300W to spin up. What kind of PSU can handle this? Should I invest in a Platinum/Titanium-rated PSU, or is something like Gold enough?
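My back-of-envelope math, with assumed per-drive numbers (I still need to check the Exos datasheet; large Exos models spec roughly 2 A on the 12 V rail at spin-up):

```shell
# PSU sizing back-of-envelope -- per-drive figures are assumptions, not datasheet values
DRIVES=12
SPINUP_W=24   # ~2 A @ 12 V per drive while spinning up
IDLE_W=8      # rounded-up idle draw for a 3.5" drive

echo "worst-case spin-up surge: $((DRIVES * SPINUP_W)) W"
echo "steady-state idle:        $((DRIVES * IDLE_W)) W"
```

So if all drives spin up at once that's roughly 288 W on top of the CPU/board; staggered spin-up on the HBA would soften that, and Platinum/Titanium buys efficiency at load, not surge headroom.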

Did I miss something?

I will be very glad for any advice. I don't have a defined price range, but it would be great if I could fit under 2000-2500 euro.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
SlimSAS doesn't mean you have SAS support, unfortunately. It's still SATA only on that motherboard (or NVMe), so you will need a separate HBA.

Also not sure why you're going for a W680 motherboard and then not using ECC? That's really the whole point of the chipset.

For memory, you will get reduced performance with 4 DIMMs. There's a middle ground where you have 2 x 48GB DIMMs that you may wish to consider.
 

devnoobops

New Member
Nov 27, 2024
BlueFox said:
SlimSAS doesn't mean you have SAS support, unfortunately. It's still SATA only on that motherboard (or NVMe), so you will need a separate HBA.

Also not sure why you're going for a W680 motherboard and then not using ECC? That's really the whole point of the chipset.

For memory, you will get reduced performance with 4 DIMMs. There's a middle ground where you have 2 x 48GB DIMMs that you may wish to consider.
Interesting facts! Thanks a lot. I still can't decide between a server-grade and a consumer-grade motherboard, and this Asus is the most interesting "server" one I found. I wasn't aware that 4 RAM sticks can affect performance that way. I assumed more sticks = more channels = more performance. Maybe I will go for ECC, depending on prices during BF.

Is Z690 a good alternative to W680?


Might as well get 14700
From what I found, the 14700 consumes much more power and the 14500/14600 are more efficient. Does the 14700 have any advantage except more cores/power?
 

bugacha

Member
Sep 21, 2024
devnoobops said:
From what I found, the 14700 consumes much more power and the 14500/14600 are more efficient. Does the 14700 have any advantage except more cores/power?
TDP of the 14500/14600/14700 is 65W.

The 14700 has 20 cores. More cores are always better for virtualization.
 

Tech Junky

Active Member
Oct 26, 2023
A cheap Sparkle card for $100 works well on my AMD setup. It also frees you from Intel-only builds. I get 600-1200 fps converting files with an A380. All of them should have the same performance though, so no need to spend more thinking it will do better. Though the Battlemage cards are coming soon, which is the next gen.
 

Greg_E

Member
Oct 10, 2024
With 12 big spinning drives, power economy is not something you are going to get. You are probably going to need a 16-drive controller card, which gives some room for the system drives to be made into at least a mirrored pair. I'd go with server-class equipment on something that big. Also use big memory modules to leave open slots for expansion.

If you go with dual processors, you may need 4 memory modules, two for each processor.

I don't like using my storage for hypervisor work; I'd let storage be storage and run the other stuff off of other devices. But that's me.
 

devnoobops

New Member
Nov 27, 2024
Greg_E said:
With 12 big spinning drives, power economy is not something you are going to get. You are probably going to need a 16-drive controller card, which gives some room for the system drives to be made into at least a mirrored pair. I'd go with server-class equipment on something that big. Also use big memory modules to leave open slots for expansion.

If you go with dual processors, you may need 4 memory modules, two for each processor.

I don't like using my storage for hypervisor work; I'd let storage be storage and run the other stuff off of other devices. But that's me.
Thank you for that input. What benefits can I count on when investing triple the money and double the yearly electricity? How does NFS/SMB benefit from 2 CPUs in this scenario? Are SnapRAID/MergerFS really that CPU-intensive at the scale of 16 HDDs? It's a Plex server for one person and a small dev environment for a few developers. I don't want to repeat the mistake of many Reddit users who invested in multiple rackmounted Dell servers only to end up with one small Docker container per physical machine :) Or expensive TrueNAS instances with terabytes of memory to handle 1000 files. Currently 11 HDDs are handled in my setup by a Synology unit with a potato CPU, plus a 13th-gen i5 Intel NUC; I'm mostly missing HDD bays.

...and the 14700 only uses more power than a 14600 if you feed it more information to process.
I'm adding the 14700 to the list. Right now it's double the price of the 14600, but Black Friday is a day of miracles :)

Tech Junky said:
A cheap Sparkle card for $100 works well on my AMD setup. It also frees you from Intel-only builds. I get 600-1200 fps converting files with an A380. All of them should have the same performance though, so no need to spend more thinking it will do better. Though the Battlemage cards are coming soon, which is the next gen.
I have amazing results on the existing 13500 in the Intel NUC; I'm afraid of the energy usage of a dedicated card. Does your GPU draw power when it's idle, or only when transcoding?

If picking consumer-grade hardware, I have trouble finding an LGA1700 motherboard with 3x PCIe x8 slots, or at least 2x PCIe x8 and 1x PCIe x4 (possible GPU, HBA card and 10Gb card - all want x8).
 

devnoobops

New Member
Nov 27, 2024
It's one of the lower-power cards at ~35W.


I'm using an ASRock PG Lightning and it has four slots that don't auto-shift the lanes provided when adding or removing cards.

There's an x1 10GbE NIC for $100 from OWC.
35W at idle? That's a lot.

I saw this x1 NIC, but it's terminated with Ethernet. I'm looking for an SFP+ one. I guess an X520 should work fine using x4.
 

Tech Junky

Active Member
Oct 26, 2023
And what about idle? 99% of the time I'm direct playing.

Looks like the A310 is even more efficient. However, I know the PCP power ratings are a bit higher than actual.
(attached screenshot of power readings)
PCP shows it at 75W max power, but as you can see the system is reporting a top end of 43W.

It all comes down to how you want to approach the media handling. I convert all my stuff to MKV to drop the size of the files from Plex OTA. The reduction is about 1/8th the TS size when done, and the GPU handles them with a quickness of about 1 min per file on average. Letting the CPU handle it took longer at a higher wattage in comparison. Even running files through my laptop with Intel/NVIDIA took much longer as well.
 

devnoobops

New Member
Nov 27, 2024

Tech Junky said:
Looks like the A310 is even more efficient. However, I know the PCP power ratings are a bit higher than actual.
(attached screenshot of power readings)
PCP shows it at 75W max power, but as you can see the system is reporting a top end of 43W.

It all comes down to how you want to approach the media handling. I convert all my stuff to MKV to drop the size of the files from Plex OTA. The reduction is about 1/8th the TS size when done, and the GPU handles them with a quickness of about 1 min per file on average. Letting the CPU handle it took longer at a higher wattage in comparison. Even running files through my laptop with Intel/NVIDIA took much longer as well.
I'm watching media in original quality; I don't care about disk space. I transcode occasionally during international trips with slow internet, using Jellyfin. That's why I'm asking about idle consumption. I expect the GPU to be off 99.9% of the time, during 99% of playbacks. I don't care too much about consumption during playback because I don't expect it to happen often. But even 10W of idle usage is too much for me.
 

Greg_E

Member
Oct 10, 2024
Your drives have a 5.6-watt average idle and a 9.6-watt average max (averaged between SATA and SAS); with 12 of them you've kind of stepped past the "I want low power draw and low heat" stage. You will probably need 20-30 watts just in fans to keep them happy. Add in enough processor to do the things you want, probably an 85-watt TDP, more fans to move that heat and the heat from the RAM... I'd expect you will want a 500-750 watt power supply just to handle transient surges, probably redundant supplies. I'd expect your total idle draw to be between 100-200 watts, and probably 200-300 watts when you are working.
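Those figures add up quickly. A sketch using the averages above (deciwatts to keep the shell's integer math honest; the base-system number is my own guess):

```shell
# Idle-draw estimate from the per-component averages above
DRIVES=12
DRIVE_IDLE_DW=56   # 5.6 W per drive, in tenths of a watt
FANS_W=25          # middle of the 20-30 W fan estimate
BASE_W=40          # assumed CPU + board + RAM + HBA at idle

DRIVES_W=$((DRIVES * DRIVE_IDLE_DW / 10))
echo "drives ${DRIVES_W} W + fans ${FANS_W} W + base ${BASE_W} W"
echo "total idle estimate: $((DRIVES_W + FANS_W + BASE_W)) W"
```

That lands around 130 W at idle, comfortably inside the 100-200 watt range.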

At this point, an extra graphics card is almost meaningless for your power draw. If it changes the encoding time by 10%, you will probably see a reduction in total power draw over the job, the idle on that card is probably well worth the extra.

My HP DL360e Gen8 storage box with 8 2.5" spinning drives running TrueNAS Scale pulls 100 watts at idle. It does have dual low-power processors and 96GB of RAM (all slots full), but that's just the way these things need to be run. I have others with 3.5" drives; they pull a little more at idle, and when working it's normal to see another 100 watts.
 

devnoobops

New Member
Nov 27, 2024
Greg_E said:
Your drives have a 5.6-watt average idle and a 9.6-watt average max (averaged between SATA and SAS); with 12 of them you've kind of stepped past the "I want low power draw and low heat" stage. You will probably need 20-30 watts just in fans to keep them happy. Add in enough processor to do the things you want, probably an 85-watt TDP, more fans to move that heat and the heat from the RAM... I'd expect you will want a 500-750 watt power supply just to handle transient surges, probably redundant supplies. I'd expect your total idle draw to be between 100-200 watts, and probably 200-300 watts when you are working.

At this point, an extra graphics card is almost meaningless for your power draw. If it changes the encoding time by 10%, you will probably see a reduction in total power draw over the job, the idle on that card is probably well worth the extra.

My HP DL360e Gen8 storage box with 8 2.5" spinning drives running TrueNAS Scale pulls 100 watts at idle. It does have dual low-power processors and 96GB of RAM (all slots full), but that's just the way these things need to be run. I have others with 3.5" drives; they pull a little more at idle, and when working it's normal to see another 100 watts.
Currently my whole setup - a Unifi networking stack (5 devices), an Intel NUC 13 and a Synology (along with an RX517 expansion) with 11 HDDs - consumes 170W on average. I believe I can fit under 200W by consolidating the 3 devices into one and adding a few HDDs.
 

vincococka

Member
Sep 29, 2019
Slovakia
devnoobops said:
Interesting facts! Thanks a lot. I still can't decide between a server-grade and a consumer-grade motherboard, and this Asus is the most interesting "server" one I found. I wasn't aware that 4 RAM sticks can affect performance that way. I assumed more sticks = more channels = more performance. Maybe I will go for ECC, depending on prices during BF.

Is Z690 a good alternative to W680?



From what I found, the 14700 consumes much more power and the 14500/14600 are more efficient. Does the 14700 have any advantage except more cores/power?
W680 is about allowing you to use ECC memory with Intel's desktop CPU series.
The 14700 consumes about the same as the 14500/14600 at idle - a difference of at most 1-3 watts.
Under load, energy consumption is a different song - it depends on the workload you'll throw at it.
Personally I use a 14700K with a Q670 chipset for remote management (the famous Intel AMT/KVM - but who cares) and it has worked "sufficiently/fine" for me for multiple years.
 