TOH - The One Hypervisor


Wixner

Member
Feb 20, 2013
Build’s Name: The One Hypervisor
Operating System/ Storage Platform: Proxmox/ Local ZFS
CPU: Intel Xeon E5-2620v3
Motherboard: Supermicro X10SRi-F
Chassis: Supermicro CSE216E16-R1200LPB
Drives: TBD (Probably 24 Seagate Savvio 15K.2 ST9146852SS 146GB 15K 2.5" SAS)
RAM: 4x 16GiB Samsung M393A2G40DB0-CPB DDR4
Add-in Cards: Intel 910 800GiB PCIe SSD, LSI 9207-8i HBA
Power Supply: Redundant Supermicro PWS-1K21P-1R
Other Bits:

Usage Profile:
One Hypervisor to rule them all, One Hypervisor to find them,
One Hypervisor to bring them all and in the nested world bind them

L0-hypervisor for nested virtualization

Background
I have recently been promoted to 'Operative Datacenter Chief' and need to up my game with 'alternative' hypervisors like Proxmox and XenCenter, as I am a Hyper-V and vSphere guy. Since I'm not able to purchase the required hardware for multinode clustering and whatnot, I need to bind them all within my current hardware.

I've run some early tests using Proxmox with local ZFS storage on consumer SSDs (4x 250GiB Samsung 840 EVO) and so far I like it. The plan is to populate the chassis with 24 used Seagate Savvio 15K.2 146GiB SAS2 drives as striped mirror vdevs (12 mirror pairs) for roughly 1.7TiB of storage, and to use the Intel 910 as SLOG and L2ARC.
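Roughly what I have in mind for the pool layout; pool and device names below are placeholders (I'd use /dev/disk/by-id paths for real):

    # 24 drives as striped mirror pairs; only the first three pairs shown,
    # the pattern just continues up to the twelfth pair
    zpool create tank \
        mirror sda sdb \
        mirror sdc sdd \
        mirror sde sdf
    zpool status tank    # should show the mirror vdevs striped into one pool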


Nitty Gritty Bits to Consider

How to distribute the Intel 910?
The Intel 910 is based on four 200GiB flash modules, and as this is strictly a lab environment the 'requirement' for a mirrored SLOG isn't really there, but performance is. Perhaps 2x 200GiB striped as SLOG and 2x 200GiB striped as L2ARC is the way to go.
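If it ends up that way, something like this would carve up the 910 (as far as I recall the card shows up as four separate SCSI devices; the by-id names below are placeholders):

    # two modules as SLOG; log devices added without 'mirror' are striped
    zpool add tank log /dev/disk/by-id/910-module-1 /dev/disk/by-id/910-module-2
    # the other two as L2ARC; cache devices are never mirrored anyway
    zpool add tank cache /dev/disk/by-id/910-module-3 /dev/disk/by-id/910-module-4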

Alternatives to Proxmox
The L0-hypervisor will never be a member of a cluster or of distributed storage, so Proxmox might be a bit over the top for that. Perhaps any other distribution with ZFS-on-root, KVM and a decent web VMM would suffice.

  • Step 1 - Reallocation of existing hardware
    As more and more demons appear in my head regarding the storage, I've decided to swap the four Samsung 840 EVOs for the Intel 910 in my workstation and start out with a ZFS stripe of 4x 200GiB for the initial VM storage. Once the storage decision has been made, the Intel 910 will serve as SLOG and L2ARC (if needed).
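Roughly what the interim pool would look like; device names are placeholders and the Proxmox storage id is arbitrary (it can just as well be added through the web UI under Datacenter > Storage):

    # plain 4-way stripe across the 910's modules; no redundancy, lab use only
    zpool create vmpool 910-mod-1 910-mod-2 910-mod-3 910-mod-4
    # register it as a ZFS storage backend for VM disks
    pvesm add zfspool vm-store --pool vmpool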
 

Eric Faden

Member
Dec 5, 2016
Cool build... I'm personally building a hypervisor and debating the OS choice myself.


Sent from my iPad using Tapatalk
 

pricklypunter

Well-Known Member
Nov 10, 2015
Canada
My understanding of how the ZFS SLOG works leads me to suggest that you'll want just enough space, plus a smidge, to cover what you can reliably transfer before your transactions are flushed to disk. The rest of the space can be used as your L2ARC. While you could mirror vdevs on the same disk to achieve redundancy, I would think your performance gains will be minimal at best versus having a real disk in there, and it only adds complexity should something go awry further up the line. Provided your SLOG device is faster / has appreciably lower latency than your array and has reliable power-fail protection, you should be fine running a single disk. Also, you'll want to max out your RAM long before you go adding an L2ARC, or you might just find you hit a performance penalty rather than achieving your goal :)
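To put rough numbers on "just enough space plus a smidge": the SLOG only has to hold the sync writes that arrive between transaction-group commits, so assuming a single 10GbE front end and the default ~5 second txg interval:

    # back-of-the-envelope SLOG sizing
    # 10 Gbit/s ~ 1.25 GB/s; 1.25 GB/s * 5 s ~ 6.25 GB per txg
    # allow a couple of txgs in flight and round up: ~15-20 GB is already generous,
    # so even one 200GiB module is far more SLOG than the pool will ever use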
 

Ivan Dimitrov

Member
Jul 10, 2016
I am using Proxmox for home server and desktop virtualisation, and it is really nice having the hypervisor handle your storage.
Considering that the system will be a lab for practice purposes, do you really need all that spinning rust? I would go with one small SSD for Proxmox and one bigger one for the VMs, plus a big disk for backups. Performance-wise it will be comparable and maybe better. Price-wise I'm not sure, or maybe you just have those 15k disks laying around :)
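For example, something along these lines would cover it on the Proxmox side (names and mount point are placeholders):

    # Proxmox itself goes on the small SSD (the installer handles that part)
    # one big SSD as a single-disk pool for the VM disks; no redundancy, lab use
    zpool create vmpool /dev/disk/by-id/<big-ssd>
    # the big spinner, mounted at e.g. /mnt/backup, as directory storage for vzdump backups
    pvesm add dir backups --path /mnt/backup --content backup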
 

fractal

Active Member
Jun 7, 2016
Considering that the system will be a lab for practice purposes, do you really need all that spinning rust? ... snip ... Price-wise I'm not sure, or maybe you just have those 15k disks laying around :)
My ears are bleeding just thinking about 24 of those screamers. And at 40 bucks a pop, that's close to a grand if you are buying them. The same money would buy 8 of the 480GB S3500s for similar RAID 10 capacity at much higher IOPS and a fraction of the heat/noise.
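Rough math, hand-waving the decimal/binary difference:

    # 24x 146GB Savvios as 12 striped mirrors: 12 * 146 GB ~ 1.75 TB usable, ~$960 at $40 each
    # 8x 480GB DC S3500s as 4 striped mirrors:  4 * 480 GB ~ 1.92 TB usable
    # similar usable space, a third of the drive slots, SSD-class IOPS, far less heat/noise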
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
If you're set on those 15K RPM drives, I have probably 20-24 of them; many are new old stock, too.
 

Wixner

Member
Feb 20, 2013
I am using Proxmox for home server and desktop virtualisation, and it is really nice having the hypervisor handle your storage.
Considering that the system will be a lab for practice purposes, do you really need all that spinning rust? I would go with one small SSD for Proxmox and one bigger one for the VMs, plus a big disk for backups. Performance-wise it will be comparable and maybe better. Price-wise I'm not sure, or maybe you just have those 15k disks laying around :)
I can get 30 of these (6 as cold spares) for around 470 bucks, so they're not too expensive.

My ears are bleeding just thinking about 24 of those screamers. And at 40 bucks a pop, that's close to a grand if you are buying them. The same money would buy 8 of the 480GB S3500s for similar RAID 10 capacity at much higher IOPS and a fraction of the heat/noise.
The thought of an all-flash solution is indeed intriguing - perhaps good ol' eBay needs a thorough search. I guess there is no real need for a dedicated L2ARC and SLOG with a DC S3500 vdev.

And yes - my lab environment will be placed in one of my soon-to-be datacenters, so the noise is of no concern.
 