Building an AIO server - advice needed


Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
Here's the situation. I'm going to colo a system at a budget provider. In the system I plan to have 8 hard drives and 2 SSDs.

In terms of hypervisor, it will be either ESXi or Hyper-V. I do plan to have a Windows desktop VM, but mostly Linux VMs after that.

The two SSDs will be ~1TB capacity drives. They'll be in a RAID 1 configuration for redundancy.

For the additional 8 drives I'm planning to use two different brands so that I have some protection against a bad batch. Here are my options (using standard RAID terms rather than ZFS ones for everyone's ease):
  • Big RAID 10 array. Redundancy and performance.
  • RAID 60. Can lose two drives in either sub-array and be "OK".
  • RAID 6 with 6 drives + 2 hot spares.
  • RAID 5 across 4 drives, then mirrored.
I purposely have twice as many drives as needed so that if one fails, I'll be OK even if it takes 2 weeks to get a replacement to the facility.

Performance is less of a concern, but I do plan on using Linux + napp-it, as that looked intriguing. The drives will be passed through to the storage VM.

The primary reason for the 8 drives is backup. Assume 4TB drives and 1-1.5TB of VMs, but I'll also want longer-term archival, so something like 3 days' worth of 12-hour snapshots. Most of the data is base Linux OS stuff. RAM-wise it'll have at minimum 64GB.
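
Roughly what I'm picturing for the snapshot schedule, as a cron sketch (pool/dataset names are made up, and the prune line assumes GNU userland):

    # take a snapshot every 12 hours
    0 0,12 * * * zfs snapshot pool2/backup@auto-$(date +\%Y\%m\%d-\%H\%M)
    # keep roughly 3 days (6 snapshots); prune the oldest daily
    30 0 * * * zfs list -H -t snapshot -o name -s creation | grep '^pool2/backup@auto-' | head -n -6 | xargs -r -n 1 zfs destroy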

What would you do in this case?
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
If you need a desktop with graphics power on your AIO, you
should use Hyper-V (Server 2012), or you may experimentally try
ESXi with pass-through of a video card.

But I would split the server and the desktop part.

If you want to use ZFS, I would create two pools (rough commands sketched below):
-pool1: mirror from the SSDs (to store the VMs on)
-pool2: general use - raid-z2/3 from 6/7 disks, optionally with an L2ARC
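
A minimal sketch on OmniOS/Solaris (device names are only examples; check yours with the format command):

    zpool create pool1 mirror c1t0d0 c1t1d0        # SSD mirror for the VMs
    zpool create pool2 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
    zpool add pool2 cache c1t2d0                   # optional L2ARC device
    zfs set sharenfs=on pool1                      # then mount it in ESXi as an NFS datastore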

My favourite OSes for ZFS storage are Solaris and OmniOS;
you can then offer storage to the other VMs and to ESXi via NFS, SMB or iSCSI.

I prefer ESXi over Hyper-V (as well as over Xen, the former source of Hyper-V),
but that's a personal preference.
 

Mike

Member
May 29, 2012
482
16
18
EU
What's the thought behind VMware in a single-server setup? The remote management possibilities are next to nil.
With mostly Linux VMs you may want to think about simple containers, as the I/O overhead across your huge disk pool would be minimal.
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
What's the thought behind VMware in a single-server setup? The remote management possibilities are next to nil.
With mostly Linux VMs you may want to think about simple containers, as the I/O overhead across your huge disk pool would be minimal.
I was going to VPN into a virtual network and then use that to manage the VMs.

@gea I don't really need video; a remote desktop connection is fine. This is a single box going out, so I cannot split it. Great advice on the pools.
 

Mike

Member
May 29, 2012
482
16
18
EU
I was going to VPN into a virtual network and then use that to manage the VMs.

@gea I don't really need video; a remote desktop connection is fine. This is a single box going out, so I cannot split it. Great advice on the pools.
So if the VPN is down, you need remote VMware hands-on :(
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
So if the VPN is down, you need remote VMware hands-on :(
Well... if the VPN is down there is a good chance the box is down, so he would have to get someone to do a reboot or hook up a loaner IP KVM spider anyway.

To me the biggest issue is what happens if your LAN side goes down. If that happens, there are not many options available.
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
How else do I run Windows and Linux VMs, though? Is it basically KVM on RHEL or CentOS with virtio? I wanted to have a security appliance in front and didn't know you could do that with KVM.
 

Mike

Member
May 29, 2012
482
16
18
EU
KVM is what you are thinking of. Virtio is a set of paravirtual devices used by KVM and others. Windows support on KVM is OK, although I found that the least-awful display drivers make a ton of difference. I would say that either Windows with Hyper-V or Linux with Xen, KVM, LXC, or the latter two combined is more secure in a non-cluster setup. You can safely manage your Linux host with only an SSH key and whatever appliance you want to hide the rest of your machines behind.
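
As a rough example (names and paths are placeholders), a Windows guest on KVM with a virtio disk and NIC via virt-install looks something like:

    virt-install \
      --name windesktop --ram 8192 --vcpus 4 \
      --os-variant win8.1 \
      --disk path=/var/lib/libvirt/images/windesktop.qcow2,size=80,bus=virtio \
      --network network=default,model=virtio \
      --cdrom /isos/windows.iso \
      --disk /isos/virtio-win.iso,device=cdrom   # virtio drivers for the Windows installer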
 

NetWise

Active Member
Jun 29, 2012
596
133
43
Edmonton, AB, Canada
How is remote management for VMware next to nil? The vSphere Client talks to it remotely. You can run vCenter Server as a VM - Windows or appliance. In the 6.0 beta there's even a hosted web client. If the VPN is down, either way, regardless of hypervisor, there is no access. In either case, I'd want to assume that you'd have some manner of remote management via IP KVM/iLO/iDRAC/etc. Hrm - if that's the remote management one means, then yes, there is very little CONSOLE-based configuration. But other than ensuring the networking works so that the management client can reach it, what does one really need?

Why so many spares? Why would it take two weeks to get a drive? Don't get me wrong, protecting data is critical, but if you're assuming 4 drive failures in a short time - half the total disks - I don't know that one more spare is going to help all that much. Anything that takes out 4 of 8 disks in a week is statistically pretty likely to take out the rest, no?

Plus, with 2 hot spares and two parity drives, the two hot spares would have to be rebuilt into the array AND then be lost as well. That's a lot of failure.
 

Mike

Member
May 29, 2012
482
16
18
EU
How is remote management for VMware next to nil? The vSphere Client talks to it remotely. You can run vCenter Server as a VM - Windows or appliance. In the 6.0 beta there's even a hosted web client. If the VPN is down, either way, regardless of hypervisor, there is no access. In either case, I'd want to assume that you'd have some manner of remote management via IP KVM/iLO/iDRAC/etc. Hrm - if that's the remote management one means, then yes, there is very little CONSOLE-based configuration. But other than ensuring the networking works so that the management client can reach it, what does one really need?

Why so many spares? Why would it take two weeks to get a drive? Don't get me wrong, protecting data is critical, but if you're assuming 4 drive failures in a short time - half the total disks - I don't know that one more spare is going to help all that much. Anything that takes out 4 of 8 disks in a week is statistically pretty likely to take out the rest, no?

Plus, with 2 hot spares and two parity drives, the two hot spares would have to be rebuilt into the array AND then be lost as well. That's a lot of failure.
If you want to make your remote management interfaces public, you are free to do so. If it doesn't have a package manager, it won't happen in my book. Management of your hypervisor from a Windows guest? What happens if your hypervisor fails - how can you then RDP to it? Good luck with the non-Windows tools available, or... wait... there are none.
I'm sure it's great to deploy in a farm, but you can't treat it like a pet.
 

NetWise

Active Member
Jun 29, 2012
596
133
43
Edmonton, AB, Canada
The original poster indicated he was going to use ESXi or Hyper-V and have at least one Windows VM - suggesting management from Windows was acceptable.

Regardless of hypervisor, the management interface shouldn't be public. VPN and management from a Windows guest was suggested - which means VPN and managing remotely would also potentially work. Or is this AIO going to run the firewall/VPN on the same server? (It sounded like it was.) If it runs as a guest and the hypervisor fails, then you can't make a VPN anyway. If it is a VPN/firewall on the host itself, then in effect the management interface is public in some manner, no? If it is neither native on the host nor a guest, then there is a portion of this that is not 'all in one' - which is fine, but lets you build it differently. If you have some manner of 1U firewall doing the VPN, then you can connect to any internal management, host, or guest.

The remote management tools for Hyper-V are also Windows-based.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
It's not a popular way of doing things, but ESXi can be managed almost completely from an SSH shell or the local console. It's the hard way to do things, but in a pinch it works to remotely get things working again. I ran an ESXi box at home for years with no Windows box or vSphere Client to manage it from - create VMs using vi to edit .vmx files, and register them with 'vim-cmd solo/registervm [path to vmx]'. Get a list of all VMs with 'vim-cmd vmsvc/getallvms' and remember the VMID of the one(s) you want to work with. Then you can turn them on with 'vim-cmd vmsvc/power.on [vmid]', issue a guest shutdown with 'vim-cmd vmsvc/power.shutdown [vmid]', hard-reset them with 'vim-cmd vmsvc/power.reset [vmid]', etc. Tons of commands are available: type just 'vim-cmd' for a list of namespaces, or 'vim-cmd vmsvc/' (or a different namespace) for a list of commands in that namespace. There are also the older esxcfg-* commands still around from the ESX 2/3/4 days, as well as the newer esxcli command (which can also be used remotely if you install the SDK, or through PowerShell).

The only things you can't effectively do from an ESXi shell (local or remote) are things that depend on vCenter or affect multiple hosts - no vMotion-ing, no working with dvSwitches, etc.

ESXi also supports VNC-protocol access to the console of a VM, functionality that is not well documented or exposed in any of the GUI clients. You need to add a few lines to the .vmx file, but once that's done, if you can SSH to something on the same network as the ESXi management vmkernel interface, you can do an SSH tunnel/port-forward to the ESXi management IP/port that you specified in the vmx and connect with whatever standard VNC client you like.
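
For the VNC part, the .vmx additions look roughly like this (port and password are examples; pick a unique port per VM):

    RemoteDisplay.vnc.enabled = "TRUE"
    RemoteDisplay.vnc.port = "5901"
    RemoteDisplay.vnc.password = "secret"

Then from your side, tunnel through any box that can reach the management network and point a VNC client at the forwarded port:

    ssh -L 5901:<esxi-mgmt-ip>:5901 user@jumphost
    vncviewer localhost:5901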
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Interestingly enough, I had to do something similar with one of the STH Hyper-V hypervisors. Of course, there is a dedicated pfSense box, but I VPN'd in and then got things working using remote PowerShell. A few commands and things were back to working. Surely not ideal, but it did work.
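
Roughly what that looked like, from memory (hostnames and VM names are examples):

    Enter-PSSession -ComputerName hv01 -Credential (Get-Credential)
    Get-VM | Where-Object State -ne 'Running'   # see what fell over
    Start-VM -Name 'edge-fw'                    # bring the edge VM back first
    Restart-Service vmms                        # kick the Hyper-V management service if needed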