Hybrid (AIO) Gaming desktop, VM Host (Server) & NAS: To do or not to do?


laujuth

New Member
Feb 14, 2016
16
0
1
45
Usage Profile: This build will be used as my own desktop/WS, capable of gaming on dual 27" monitors @1080p, and will serve as the non-critical file, mail and web server for the family. It would be nice to have my own test lab as well (db/app server).

Other information…

Ok, I know some of you will tell me I'm insane to try and combine all these features into a single box, but because of financial reasons and a lack of space, I'm still gonna go for it (unless you can manage to convince me it's a really bad idea :p). This is what I'm thinking of: a lightweight Linux distro as host OS, QEMU/KVM for virtualization, a BTRFS/ZFS SATA600 RAID volume on the host (+ Samba for file serving), 4-5 server VMs (on SSD) and a W10 VM with (PCIe) passthrough of an SSD, GPU, USB controller and sound card (let's get bare-metal!).
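For anyone curious what the passthrough prep looks like on such a host: it mostly comes down to enabling the IOMMU at boot and binding the gaming GPU to vfio-pci before the host driver grabs it. A minimal sketch (the PCI IDs below are examples for a GTX 970; read the real ones from `lspci -nn` on the actual box):

```shell
# /etc/default/grub -- enable the IOMMU (Intel platform assumed)
GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf -- have vfio-pci claim the GPU and its HDMI
# audio function at boot (10de:13c2,10de:0fbb are example GTX 970 IDs)
options vfio-pci ids=10de:13c2,10de:0fbb

# regenerate the bootloader config and initramfs, then reboot and verify:
# lspci -nnk should show "Kernel driver in use: vfio-pci" for the card
```

Whether the card resets cleanly between guest reboots varies per GPU, so treat this as a starting point rather than a recipe.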

Build’s Name: ATAH
Operating System/ Storage Platform: Fedora Server, CentOS, W10 / BTRFS or ZFS Raid 1-5
CPU: Intel Core i7-5820K x2 (CPU pinning 6 cores for host & server VM's, 6 cores for W10 VM)
Motherboard: ASRock Rack EP2C612D16C-4L (or similar. Need at least 1 M2 PCIe 3.0 port).
Chassis: Fractal Design define R5 (case)
Drives: 1x Samsung Pro 950 512G, 1x Samsung EVO 850 and a couple of WD RED's
GPU: MSI GeForce GTX 970 4GB (VGA passthrough) + 1 low-end one to drive the host.
RAM:
2x 16GB ECC DDR4
Add-in Cards: USB controller, Sound card
Power Supply: Corsair RM750x
Other Bits:

Usage Profile: Server hosting, gaming, file serving, general desktop usage, always on

Other information…
This desktop needs to be as silent and energy-efficient as possible. I'm building it from a desktop-first, server-second perspective and know that I'll have to make some compromises by going this way. So, here are my two questions for you: 1) Is this doable? and 2) Where are the octa-core Skylakes?? Should I wait for Q3 and the arrival of Broadwell-E CPUs? I'm afraid my build will consume a lot of power (2x140W for the CPUs alone).
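On the 6+6 pinning idea: libvirt expresses that per guest with `<cputune>`. A sketch for the W10 VM, assuming it gets physical cores 6-11 (the core numbers are made up for illustration):

```xml
<!-- fragment of the W10 guest's libvirt domain XML; cpuset values are examples -->
<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='6'/>
  <vcpupin vcpu='1' cpuset='7'/>
  <vcpupin vcpu='2' cpuset='8'/>
  <vcpupin vcpu='3' cpuset='9'/>
  <vcpupin vcpu='4' cpuset='10'/>
  <vcpupin vcpu='5' cpuset='11'/>
</cputune>
```

The host and server VMs would then be steered onto cores 0-5, e.g. via `emulatorpin` or systemd's `CPUAffinity=`.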

Thanks :)
 

Kal G

Active Member
Oct 29, 2014
160
45
28
44
Interesting idea. I don't see any practical reason it wouldn't work.

You'll need to use Xeon processors if you want more than one CPU and to drive the ECC memory.

Is there a reason behind going with 12 cores? What kind of VMs are you looking to host?

What does your budget look like?
 

laujuth

Ah damn, didn't know I needed Xeons for those. I thought support for those had to come from the motherboard only. So no point waiting for Broadwell-E either then... which is too bad, since those CPUs seem to be priced attractively.

I don't particularly need 12 cores; 8 should be enough (games benefit from higher-clocked quad cores nowadays, so I would split 4+4), but there are no Skylake octa-cores and the Xeon octa-cores are way beyond my budget (starting at 2k€). I'm planning to spend somewhere between 2-3k euro on it, but can go a bit above that (so no 2k€ just for the CPU :().

I'm looking at VMs for OneCloud, a mail server, a web server, 2 game servers, a TeamSpeak server, a Windows 2012R2 server and some extra ones for testing.
 

Deslok

Well-Known Member
Jul 15, 2015
1,122
125
63
34
deslok.dyndns.org
Any reason you want to do both on the same system? I run a VM stack on a ThinkCentre M82 SFF, which is silent and low power. No ECC, but budget-wise they're ~$200-300 on eBay, and with ~$150 in upgrades you get a good microserver (I moved to 32GB of RAM; the quad-core i5 runs my home stack just fine).

Also, with Hyper-V being built into Windows 8+ Pro and Server 2012, you may get better gaming results (you did say desktop first) with that as your host OS.
 

laujuth

Any reason you want to do both on the same system? I run a VM stack on a ThinkCentre M82 SFF, which is silent and low power. No ECC, but budget-wise they're ~$200-300 on eBay, and with ~$150 in upgrades you get a good microserver (I moved to 32GB of RAM; the quad-core i5 runs my home stack just fine).

Also, with Hyper-V being built into Windows 8+ Pro and Server 2012, you may get better gaming results (you did say desktop first) with that as your host OS.
I would go for one system because I figured it was doable and would cost me quite a bit less, not having to buy everything twice (I'd much rather spend 600€ on a high-end CPU than 600€ on two mid-rangers). Also, I love the idea of playing with QEMU/KVM and PCIe passthrough :). I could also wait for Broadwell-E, get a $600 octa-core, run W10 as the main OS and use VMware Workstation Pro to run the VMs (still no ECC though, and I'd have to use storage pools/ReFS).

Thanks for the input though, I might still go with 2 systems.
 

Deslok

I would go for one system because I figured it was doable and would cost me quite a bit less, not having to buy everything twice (I'd much rather spend 600€ on a high-end CPU than 600€ on two mid-rangers). Also, I love the idea of playing with QEMU/KVM and PCIe passthrough :). I could also wait for Broadwell-E, get a $600 octa-core, run W10 as the main OS and use VMware Workstation Pro to run the VMs (still no ECC though, and I'd have to use storage pools/ReFS).

Thanks for the input though, I might still go with 2 systems.
I wouldn't mess with VMware Workstation; they're discontinuing it, and unless you're working with an ESXi server there's not a lot of call for it. Fedora and CentOS are supported well enough in Hyper-V (my PBX is CentOS with FreePBX on it).
 

laujuth

Thanks for the tip, but I'm actually in the middle of getting the Linux System Administration certification, and that's why I'm going to opt for the QEMU/KVM virtualization solution (got a Hyper-V cluster to manage @ work :)). I've decided to split things up (I'll wait for the next-gen GPUs for the desktop) and this is what my server build will look like:

Build’s Name: M2Sweet
Operating System/ Storage Platform: Fedora Server / BTRFS or ZFS RaidZ
CPU: Intel Xeon E3-1245 v5 (the integrated GPU will be nice for pushing Kodi to the TV)
Motherboard: Gigabyte GA-X150M-PRO ECC
Chassis: Fractal Design Node 804
Drives: 1x Samsung Pro 950 512G, 1x WD Red 2TB (single to start, then a mirror and eventually a RAIDZ with 3)
GPU: Integrated
RAM:
2x Crucial CT16G4RFD4213 16GB DDR4 ECC
Add-in Cards:
Power Supply: Corsair RM550x
Other Bits:

Would appreciate it if anyone could have a look and approve this build :)
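One note on the "single, then mirror, then RAIDZ with 3" drive plan: the first step up is trivial with ZFS, but the second is not, because a mirror vdev cannot be reshaped into a RAIDZ vdev in place; that last step means recreating the pool and copying data over. A sketch with made-up device names:

```shell
# day one: single-disk pool
zpool create tank /dev/disk/by-id/ata-WD_RED_1

# later: attach a second disk; the vdev becomes a mirror and resilvers
zpool attach tank /dev/disk/by-id/ata-WD_RED_1 /dev/disk/by-id/ata-WD_RED_2

# RAIDZ wants all members at creation time, so step three is a new pool
# plus a zfs send | zfs recv of the datasets:
# zpool create tank2 raidz ata-WD_RED_1 ata-WD_RED_2 ata-WD_RED_3
```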

Also, how come a single 16GB Crucial DDR4 ECC RAM module (2,133 MT/s / 14.07 ns true latency / 1.2 V) costs €92.50 while exactly the same modules from HP, IBM, etc. cost over €200? Why would anyone spend that much more money? Is it the after-sales service?
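For what it's worth, that "true latency" figure checks out: first-word latency is just CAS cycles divided by the memory clock (half the transfer rate). Assuming CL15, which is what these DDR4-2133 ECC modules typically run:

```shell
# first-word latency (ns) = CL / (MT/s / 2) * 1000, with CL15 assumed
awk 'BEGIN { printf "%.2f ns\n", 15 / (2133 / 2) * 1000 }'
# prints "14.06 ns", i.e. the ~14.07 ns the listing quotes
```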

Thx! ^^
 

Deslok

For your modules, it's about the validation; HP etc. guarantee it to work with a certain system.
I might push for an Intel 750 instead of the 950 Pro; it's designed more for the workload you're planning.
For your VMs, in addition to what you're planning (Fedora/CentOS), I encourage you to play around with SUSE Studio; you can get a lot of your legwork done pre-deployment and then just tell it to make you an ISO or even a KVM qcow2 file (or ESXi or Hyper-V; they do vary in choice based on version some).

Any reason for the Node 804 vs. something with hot-swap capability or 5.25" bays to add such features?
 

laujuth

I might push for an Intel 750 instead of the 950 Pro; it's designed more for the workload you're planning.
Can you elaborate on this? As far as I can tell from benchmarks, the 950 beats the 750 in everything except random writes. I'm actually still debating going the PCIe NVMe route instead of a normal SATA one. What areas do you think I'll see huge improvements in?

Any reason for the Node 804 vs. something with hot-swap capability or 5.25" bays to add such features?
Euh... it looks good? :D To be honest, I haven't even thought of hot-swap capability, as I don't consider this to be a critical server. If a disk fails, I'll just yell "it's going down for a couple of minutes", turn the server off and swap the disk(s). I thought motherboards with SATA hot-swap support were hard to find, or am I wrong?
 

Deslok

Neither drive is slow; hell, both will do well for a VM workload. The 950 excels in client workloads, but the 750 is derived from Intel's enterprise division and shows it in benchmarks at higher queue depths, especially against the larger (1.2TB) 750. The 950 also shows some thermal throttling that the AIC form factor avoids, but I suspect that will be true for many M.2 SSDs.

As for hot-swap, it's part of the spec and I've never encountered a board that didn't support it. You can buy cases that support it, although server cases are rarely attractive, or get one like the Thermaltake Core X2 and add 2.5" or 3.5" bays to it.

As far as NVMe vs. SATA: unless you're running a large DB application, an array of SATA drives will give you more space and some resilience, but it will take 4-6 SATA drives to do what NVMe is capable of performance-wise, and it will eat CPU cycles doing so.
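The "4-6 drives" ballpark is easy to sanity-check from spec sheets: a 950 Pro is rated around 2,500 MB/s sequential read, while SATA tops out near 550 MB/s per drive (both numbers are best-case vendor figures):

```shell
# SATA drives needed to match one NVMe drive on sequential reads (spec-sheet math)
awk 'BEGIN { printf "%.1f\n", 2500 / 550 }'
# prints "4.5", squarely in the 4-6 range
```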
 

laujuth

Thanks again, you've been a great help :) If you don't mind me asking one last thing... I've just noticed that that Gigabyte mobo doesn't have any video out (not even VGA). So I'm guessing it's pointless to go for the E3-1245 with integrated GPU and I might just as well go with the E3-1240 (same specs, no GPU). My question is: how does one do the initial install on such a build?? Install and configure the OS on a different system, install the SSD in the new build, and hope that it boots fine and that you can connect to it remotely? Or simply buy an additional GPU which has video out?
 

Deslok

That board is intended for a dedicated GPU; some outputless boards have what's called IPMI to manage them. I would consider a different board if you want GPU output without an AIC. Honestly, it looks like it's intended to be a workstation board paired with a Quadro, so an iGPU would be a waste there.
 

laujuth

That board is intended for a dedicated GPU; some outputless boards have what's called IPMI to manage them. I would consider a different board if you want GPU output without an AIC. Honestly, it looks like it's intended to be a workstation board paired with a Quadro, so an iGPU would be a waste there.
Yeah, I'm saving €10 by taking the one without the iGPU and will go for a GeForce GT 730 2GB GDDR5: no fan, and it can be found at around €60. I can only find 5 LGA1151 micro-ATX mobos which fit my needs, and 4 of them are Supermicros (starting at €260). The Gigabyte one is available at ~€110 + €60 for the dedicated GPU. Sounds like a not-too-bad deal to me :)
 

Deslok

The GT 730 is a great card, but I'd recommend something like a Quadro 2000 or 4000 off eBay (I know they're under $100 here, for the 2000 at least). I'm not sure about KVM or QEMU, but I know Hyper-V supports RemoteFX for GPU virtualization if you wanted to toy with that. They can offer more performance than the GT 730 (the 4000 should make it look bad) and will offer DisplayPort out.
 

laujuth

I'm actually gonna go with a GT 720 instead. TDP of 19W, no fan and around €50. I don't really care about its performance; I'll use the desktop for that ;)
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,649
2,065
113
I'm not sure if it's possible, but you could always try using a USB->VGA/DVI adapter... I have some I used to use before video cards worked with so many screens at once (that didn't cost a lot). It was resolution limited, but worked fine.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,073
974
113
NYC
If I were doing this, there are a few easy ways to go about it.

Windows 10: run development VMs in Hyper-V. For the family file server, you can use HomeGroup or the easy built-in file sharing. Compatibility with games will be great.

With CentOS (or Debian/Ubuntu or whatever), you can run most of those services in Docker containers. You'll benefit by needing less VM overhead. If you really want VMs, you can add KVM and be ready to go.
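To make that concrete, one of those services (the web server, say) is a single stanza in a compose file. A sketch (the image is the stock nginx one from Docker Hub; the host path is made up):

```yaml
# docker-compose.yml -- minimal static web server container
version: "2"
services:
  web:
    image: nginx
    restart: unless-stopped
    ports:
      - "80:80"                            # host:container
    volumes:
      - /srv/www:/usr/share/nginx/html:ro  # made-up host docroot
```

The game and TeamSpeak servers would each get a similar stanza, all sharing one kernel instead of five guest OSes.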

I would strongly suggest keeping a gaming PC and a server on two different boxes if you can. You'll want your server to stay up 24x7 but your gaming PC may need to reboot or get upgraded.
 