Hardware Recommendations for Home Servers (NAS + Virtualization)


thn80

New Member
Aug 10, 2020
Hi,

what I have at the moment:
  • A Windows-based system.
  • Intel Core i5-4690S
  • 32 GB RAM
  • Adaptec RAID Controller
  • 8x HDDs
  • The whole system consumes in 24/7 low load state (all drives spinning, all VMs on low load) around 130 W.
What I want:
  • TrueNAS CORE as NAS + Proxmox VE for virtualization.
  • Data integrity of the stored data has highest priority.
  • If at all possible, the new system(s) should consume no more power than my current system (or, if that's not possible, only minimally more).
  • My information so far:
    • Running TrueNAS and Proxmox on the same system is not recommended, because
      • Virtualizing Proxmox on top of FreeBSD's bhyve makes no sense because of nested virtualization.
      • Virtualizing TrueNAS in Proxmox with HBA passthrough could risk data integrity if something goes wrong, and it makes troubleshooting more complex. Virtualizing TrueNAS on top of Proxmox is also officially unsupported according to the TrueNAS forum.
    • Going with ESXi (as recommended in the TrueNAS forum) is not an option for me, because Proxmox is much more user-friendly.
    • My current approach is trying to find two low-power systems, one for TrueNAS and the other for Proxmox (and with this point I would like to access your experience).
  • The TrueNAS CORE system:
    • Power consumption: Judging from other systems running TrueNAS with 8 drives, a low-load power consumption of around 80 W with spinning drives should be achievable with the CPUs mentioned below.
    • CPU
      • The CPU should have a very low idle power consumption.
      • I'm thinking about an Atom C3758, Pentium D1508, Xeon D-1518. Do you have any better recommendations? Which CPU would you use?
    • Mainboard
      • Supermicro mainboard preferred
      • ECC RAM
      • IPMI
      • >= 8 SATA/SAS connections
      • Network: SFP+ preferred (for 10 GBit/s network, because 1 GBit/s would limit even the relatively slow RAIDZ3)
    • Chassis with Hotswap for the drives (not decided, yet, if 19" or desktop chassis)
    • Storage:
      • I tend to use a RAIDZ3 (8 HDDs)
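The SFP+ requirement above can be sanity-checked with quick arithmetic. A sketch, in which the ~150 MB/s per-disk sequential figure and the "data disks only" scaling model are assumptions, not measurements:

```python
# Why 1 GbE would bottleneck even a "relatively slow" 8-wide RAIDZ3.
disks, parity = 8, 3                 # RAIDZ3 dedicates 3 disks' worth to parity
per_disk_mb_s = 150                  # assumed sequential MB/s per modern HDD

# Sequential throughput scales roughly with the data disks only
array_mb_s = (disks - parity) * per_disk_mb_s    # ~750 MB/s best case

gbe1_mb_s = 1_000 / 8                # 1 GbE line rate  ~ 125 MB/s
gbe10_mb_s = 10_000 / 8              # 10 GbE line rate ~ 1250 MB/s

print(array_mb_s, gbe1_mb_s, gbe10_mb_s)
```

Even with conservative per-disk numbers, the pool estimate is several times the 1 GbE line rate, while 10 GbE still leaves headroom.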
  • The Proxmox VE system:
    • What shall be virtualized:
      • Plex
      • Syncthing
      • Nextcloud
      • UniFi controller
      • FreePBX
      • Gitlab
      • 2 VMs running Win 10
      • 1 VM running Docker
    • Power consumption: With my goal of 130 W and 80 W for the TrueNAS system, around 50 W are left for the Proxmox system. Is this possible with my requirements?
    • CPU
      • The CPU should have a very low idle power consumption (all virtualized instances are usually running in idle).
      • Plex should be able to use the CPU for transcoding some (1..2) 1080p and 4K streams. For this, the CPU should contain an integrated GPU (especially for the 4K transcoding stuff).
    • Mainboard
      • Supermicro mainboard preferred
      • ECC RAM
      • At least one PCIe slot (for a Hauppauge WinTV-Quad HD card passed-through to Plex)
      • IPMI
      • Network: SFP+ preferred (for 10 GBit/s network)
    • Storage: Two mirrored SSDs for the system (all other data will be accessed via network on the NAS).
Do you have any recommendations for the hardware for both systems?

Thanks a lot in advance,

Thomas
 

zer0sum

Well-Known Member
Mar 8, 2013
Why not run TrueNAS SCALE or Unraid as a single system?
Either of them can provide the storage you need and can easily run all of the things you want to virtualize, either as containers or full VMs.
Unraid is incredible with the number of containers you can spin up in mere minutes.

As far as hardware that all depends on your own personal budget.

I have had great success with Supermicro X10/X11 motherboards and cheap Xeons with ECC.
More recently I've switched over to an X470D4U and a Ryzen 5600X, as I wanted something compact with newer CPUs.
I always use a dedicated GPU for Plex and find I can do everything I need with a cheap single-slot Nvidia P400.
 

Parallax

Active Member
Nov 8, 2020
London, UK
I would be repeating what I said here, albeit I note you want to use CORE instead of SCALE, which helps a little.* Personally I would trim it down to just Proxmox with an LXC or a VM, to which you bind-mount or pass through your ZFS array respectively, if you don't want to run the SMB and NFS sharing directly out of Proxmox.

* Although cynically I would say you are learning about BSD jails that no-one uses instead of a k8s implementation that no-one uses, so it's probably six of one and half a dozen of the other.
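For reference, the bind-mount route described above is a one-liner in the container config. A sketch, in which the container ID 100 and the dataset path /tank/media are assumptions:

```
# /etc/pve/lxc/100.conf -- bind-mount a host ZFS dataset into the LXC
mp0: /tank/media,mp=/mnt/media
```

The same can be set from the CLI with `pct set 100 -mp0 /tank/media,mp=/mnt/media`; the container then sees the dataset at /mnt/media with no HBA passthrough involved.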
 

Parallax

Active Member
Nov 8, 2020
London, UK
On the hardware side I run a HPE Microserver Gen10 Plus which is very good from a power consumption (and nearly every other) perspective but only takes 4 drives. So I think you would probably need something like a Fractal Node 804 case. I'm in the UK so we have more limited NAS-style case options than you would in the US say but you get the idea.

If you're trying to save on power, why not run everything on the "NAS" box? With a low power but reasonably high core count CPU you should be able to do everything you want and spin the drives in 25W (for the server) + 8x ~8W (for the drives) = under 100W unless you're really caning the CPU.
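Spelling that estimate out (the per-component wattages are the assumptions from this post, not measurements):

```python
# Single-box idle power estimate from the assumptions above.
server_idle_w = 25      # low-power board + CPU at idle (assumed)
per_drive_w = 8         # ~8 W per spinning 3.5" HDD (assumed)
drives = 8

total_w = server_idle_w + drives * per_drive_w
print(total_w)          # 89 -> under the 100 W figure
```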
 

thn80

New Member
Aug 10, 2020
@Parallax thank you very much for your comments. I already had a look at the other thread.
@itronin in the thread mentioned by Parallax you also gave some very good comments. It would be nice if you could share some of your experience regarding my specific use case here, too.

From a data integrity standpoint, my impression is that TrueNAS is very good on this point. ZFS is natively implemented and fully integrated into the web interface (which helps if you do not use it on a daily basis). ZFS replication tasks, scrubs, etc. can also simply be configured via the web interface. To my current understanding, the other systems (OpenMediaVault, Unraid) also support ZFS, but only as an add-on, so maintenance is not fully supported via the web interface.

If you absolutely don't see an option to run two independent systems within the maximum power consumption mentioned above, maybe we could discuss running a single machine. Do you see any options, not yet mentioned, for running Proxmox and TrueNAS on the same machine?

Do you have any experience with virtualization on TrueNAS CORE (bhyve + jails)? Do these two work well together? Would this be an alternative to Proxmox, or would you rather not go this way?
 

Parallax

Active Member
Nov 8, 2020
London, UK
From a data integrity standpoint, my impression is that TrueNAS is very good on this point.
"Data integrity" means your data stays safe. This is true, to the extent that ZFS and your backup strategy allow, with anything that implements ZFS properly. I don't see your data being more, or indeed less, safe with TrueNAS than with any other platform.
ZFS is natively implemented and fully integrated into the web interface (which helps if you do not use it on a daily basis). ZFS replication tasks, scrubs, etc. can also simply be configured via the web interface. To my current understanding, the other systems (OpenMediaVault, Unraid) also support ZFS, but only as an add-on, so maintenance is not fully supported via the web interface.
That's true. Personally I just use the CLI and that's enough for me, but I fully accept others will want a GUI. You probably need to think about how often you expect to want to manually scrub or snapshot your environment, and this should factor into your decision-making.
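As an illustration of the CLI route: scheduled scrubs and snapshots are just two commands, e.g. from root's crontab. A sketch, in which the pool name `tank` and dataset `tank/data` are assumptions:

```
# weekly scrub, Sunday 03:00
0 3 * * 0  /sbin/zpool scrub tank
# daily snapshot, 02:00 (note: % must be escaped inside a crontab)
0 2 * * *  /sbin/zfs snapshot tank/data@auto-$(date +\%F)
```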
If you absolutely don't see an option to run two independent systems within the maximum power consumption mentioned above, maybe we could discuss running a single machine. Do you see any options, not yet mentioned, for running Proxmox and TrueNAS on the same machine?
Well, one benefit of Proxmox is that it natively supports ZFS, so it's a bit of a waste to pass through your disk controller to a VM, but I suppose you could do that if you really wanted.
Do you have any experience with virtualization on TrueNAS CORE (bhyve + jails)? Do these two work well together? Would this be an alternative to Proxmox, or would you rather not go this way?
I can't comment on TrueNAS Core because I haven't used it, but basically with one server you need to decide whether you want a NAS that can do virtualisation tasks, or a virtualisation environment that can do NAS tasks. That should make it clearer if you want TrueNAS/Unraid/OMV etc or Proxmox/ESXi/Harvester/XCPng etc.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
@itronin in the thread mentioned by Parallax you also gave some very good comments. It would be nice if you could share some of your experience regarding my specific use case here, too.
I wrote a lengthy reply, but I've nuked it.
I missed some of what you are looking for, and now I see I'm missing some information from you to give a more narrowly scoped response.

A) Budget
B) Re-use of hardware? If yes, more detail on what you have please, drives, type, capacity. Adaptec controller information etc. etc.
C) New? Used?
D) World Region, supply chain, access to used etc. etc.
E) Physical space requirements
F) Noise requirements
G) Do you consider any of the services in this infrastructure to be mission critical, 24/7, which includes people yelling at you when media/TV (tuner) is not available...
 

thn80

New Member
Aug 10, 2020
I wrote a lengthy reply, but I've nuked it.
I missed some of what you are looking for, and now I see I'm missing some information from you to give a more narrowly scoped response.

A) Budget
This is not decided yet, but I think we will definitely be above 1k and below 4-5k (Euros).
B) Re-use of hardware? If yes, more detail on what you have please, drives, type, capacity. Adaptec controller information etc. etc.
I don't plan any re-use at the moment, because 1) my current hardware is absolutely not server-grade, 2) the drives are already some years old, and 3) I need the old system while the new system is set up, to transfer all the data.
C) New? Used?
I prefer new.
D) World Region, supply chain, access to used etc. etc.
Germany
E) Physical space requirements
I assume you are talking about the chassis and not the storage space, right? I would prefer a 19" rack chassis; however, as I do not have the rack itself yet (still living in an apartment, not a house of my own), I have to check whether I can free up some space for it. In the worst case I'd have to go with a desktop chassis. However, I have seen that Supermicro has some chassis which work as desktop and rack at the same time; you can simply mount them in either of the two configurations.
F) Noise requirements
It does not have to be absolutely silent; my current server is located in my office and the fans are very clearly audible. However, I don't want to sit beside one of those "jet-engine"-like servers. A smartphone app shows 55-65 dB directly in front of my current server.
G) Do you consider any of the services in this infrastructure to be mission critical, 24/7, which includes people yelling at you when media/TV (tuner) is not available...
No, there is nothing absolutely mission-critical. If the services are unavailable for 2-4 days, that is acceptable. The important part is that no data gets lost; having to reinstall the system (and import a previous configuration backup) would be fine.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
Thanks for the additional information. I'll be able to go through this a bit more.

Without getting too philosophical...

Your motherboard requirements are pretty tightly constrained, i.e. not a lot of options: Supermicro, SFP+, and 8+ SATA ports. Really 10, with a boot mirror.

Have you already looked at the Supermicro X11SDV-4C-TP8F? Here's an eBay listing in the US that says they ship to EU countries, in case you can't find it there, *or* this listing could be cheaper even with VAT than buying locally(?)... Here's the STH review of the Supermicro complete system.

To me this could be a good choice for both Server 1 and Server 2, except for the lack of Intel QSV on Server 2; however, you could add in a GPU, and you can slot in the Hauppauge.

These may push out your power budget a little, since the CPU's top end is 3/4 of your NAS power budget, but being a NAS, I don't think you'll stress it a whole lot.

To my mind there is value in standardizing your mainboards, in the event of a failure.

One last question: do you plan to use a SLOG or not?
 

thn80

New Member
Aug 10, 2020
Have you already looked at the Supermicro X11SDV-4C-TP8F?
To me this could be a good choice for both Server 1 and Server 2, except for the lack of Intel QSV on Server 2; however, you could add in a GPU, and you can slot in the Hauppauge.

Do you plan to use a SLOG or not?
Currently I don't plan to use a SLOG.

For the two systems, I'm again struggling with the idea of combining both into a single system consisting of Proxmox with TrueNAS (maybe SCALE) on top. I'm reconsidering this because it saves much power compared to two separate systems, while being able to provide more CPU power if required.
Do you also have a recommendation for a Supermicro mainboard + CPU (preferably with an integrated GPU) for my requirements?
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
Do you also have a recommendation for a Supermicro mainboard + CPU (preferably with an integrated GPU) for my requirements?
Sadly I do not.

Combining it all into one, with your power requirements and an iGPU, I do not see a Supermicro solution for a single system. There are a number of discussions around the Internet regarding iGPU and Supermicro mainboard challenges (BMC conflicts, internal iGPU not enabled, iGPU can power on but can't be passed through, etc.). If your power requirements were in the 170-200 W range, I could see this with an X10SRL-F, maybe a 2650 v4 or 2680 v4 (used cost is attractive), an LSI HBA, an Nvidia GPU (P2000), and a dual ConnectX-3 SFP+ NIC, all as add-in cards. That is basically my AIO configuration, except I have 16x 16 TB Exos drives. It idles around 230 W. Removing 8 drives from that still leaves idle around 200 W, which is well outside your envelope.
 

thn80

New Member
Aug 10, 2020
Combining it all into one, with your power requirements and an iGPU, I do not see a Supermicro solution for a single system. There are a number of discussions around the Internet regarding iGPU and Supermicro mainboard challenges (BMC conflicts, internal iGPU not enabled, iGPU can power on but can't be passed through, etc.).
Thank you very much for this hint, I was not aware of such problems.
I tried to find and understand some more information on this "issue". However, I currently don't fully understand what the problem is and how to avoid it when using Supermicro. Are you a bit deeper into this issue, and could you summarize it in one sentence?
(I hope I'm not being too cheeky in asking this question to draw on your really, really deep knowledge :oops:)
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
On SM boards, enabling the iGPU (and onboard video) disables the BMC, which is in and of itself a video adapter, so no OOB management. Passing through the iGPU when it's enabled results in no video from the server at that point, i.e. no console access; so power off/power on if there's an issue? Some SM boards simply don't support the iGPU at all. Check out the Plex and Unraid forums for more info. Four sentences; sorry, couldn't do it in one.

I have not delved particularly deep, and I don't have an SM board to test it on. I do have a commercial ITX board and a 9100T I'm going to play with when I have time.

As an aside, I've wondered about simply going to a serial console in this situation, so you have some kind of OOB access when passing through the iGPU in ESXi or Proxmox etc. Again, it would have to be a board where the iGPU works. I have not searched to see if this has already been done (it probably has).

As another aside, I don't think SM sees a business case for messing with this (and I'd concur), which is why there are challenges (at least based on what I've read).
 

cw823

Active Member
Jan 14, 2014
Unraid on an i9-9900K is what I run, which checks all of your boxes except maybe needing a BMC. Plex runs via Docker with the iGPU passed through.

Ran FreeNAS for years; should've switched to Unraid years earlier.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
Unraid on an i9-9900K is what I run, which checks all of your boxes except maybe needing a BMC. Plex runs via Docker with the iGPU passed through.

Ran FreeNAS for years; should've switched to Unraid years earlier.
Concur with the use-case fit. The OP has stated Proxmox if they do a single build, though, which is why I didn't mention Unraid.

Me, I still think a nice cheap small bare-metal Plex/Emby/Jellyfin kind of thing (TMM, SFF, or other) solves that, and the idle power will be really low. But that means two boxes.