New server build for ESXi (and maybe Hyper-V)


BSDguy

Member
Sep 22, 2014
Hi All

I'm currently running the following at home in my lab:

Mobo: Supermicro X10SL7-F
CPU: Intel Xeon E3-1230 V3 Haswell
RAM: 4 x 8GB DDR3 ECC 1600 (32GB in total)
Case: Fractal Design Mini
PSU: Seasonic 550W
Drives: 2 x 128GB Samsung Pro 840
1 x 256GB Samsung Pro 830
1 x 512GB Samsung Pro 850
Fans: 5 BeQuiet fans (120/140mm) for cooling
IcyDock 5.25" bay to house the 4 SSD drives
HDD: Western Digital 4TB Red
Samsung 1TB

I'm running Hyper-V 2012 R2 Core and it's been running great with about 10 VMs on it. The problem is that in my job I need to test/learn/evaluate lots of software all the time as I move around between clients, so 32GB of memory just isn't enough anymore.

Moving forward, I would like to build a completely new server which can be upgraded as the environment grows. Since this server is vital for my job I don't mind spending the money on it. I'll probably reuse the drives from my current setup, as well as the PSU, fans and IcyDock. I also have an IBM ServeRAID M5015 SAS/SATA controller that can be used (which will be needed for ESXi).

I could use some input please as to what hardware I should purchase. I am going to run ESXi 6 on the new server and will stop using Hyper-V for now, although I would like the new server to be able to run Hyper-V as well should I change my mind later on.

I'll need the server to run at least 20 VMs in the beginning and will need it to grow to about 25-30 later on.
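
To sanity-check the memory side, here's my rough back-of-envelope. This is only a sketch; the VM mix and per-VM sizes below are guesses on my part, not measurements:

[CODE]
# Rough RAM budget; the VM mix and per-VM sizes are assumptions, not measurements.
vm_profiles = {                                    # name: (count, GB each)
    "heavy (Exchange, SQL, System Center)":  (6, 8),
    "medium (RDS, Citrix, Veeam)":           (8, 4),
    "light (WSUS, 2012 R2/2016 test boxes)": (16, 2),
}
hypervisor_overhead_gb = 8                         # ESXi itself plus some slack

total_vms = sum(count for count, _ in vm_profiles.values())
total_gb  = hypervisor_overhead_gb + sum(count * gb for count, gb in vm_profiles.values())
print(f"~{total_vms} VMs -> roughly {total_gb} GB of RAM")
# Comes out around 120 GB for ~30 VMs, so 128GB fits, with free DIMM slots left for growth.
[/CODE]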

I've been thinking of the following hardware:

Mobo: Supermicro X10SRA-F (supports up to 512GB memory!)
CPU: Intel Xeon E5-2630 v3 (Haswell, 8 physical cores)
RAM: 4 x 32GB DDR4 (128GB in total)
Case: Fractal Design Define XL R2 Black Pearl

So the questions I have are:

1) I am unsure of what to do for the storage side of things. Currently I don't use any RAID, but now that I rely on my setup so heavily I need to start doing this. I want to maximise IOPS for the VMs and have RAID to protect me from a hardware failure. Should I just use 4 x 512GB Samsung Pro 850s in RAID 5? RAID 10? I don't mind buying more drives (rough back-of-envelope comparison of the two layouts after these questions).

2) Should I be thinking about using PCIe SSDs? Can you RAID these to protect yourself from drive failure?

3) I assume the PSU is enough for the new setup?

4) Will ESXi 6 run on the new Supermicro motherboard?

5) Practically speaking, how many VMs can I have running on a single SSD drive?
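
For question 1, here's the rough comparison I did on the two layouts with 4 x 512GB drives. The per-drive IOPS figure is just an assumed round number for illustration, not a benchmark:

[CODE]
# 4 x 512GB SSDs: usable capacity and the classic write penalty for each layout.
drives, size_gb = 4, 512
iops_per_drive = 90_000                      # assumed random write IOPS per SSD

raid5_capacity  = (drives - 1) * size_gb     # one drive's worth lost to parity
raid10_capacity = (drives // 2) * size_gb    # half lost to mirroring

raid5_write_iops  = drives * iops_per_drive / 4   # RAID 5: 4 back-end I/Os per write
raid10_write_iops = drives * iops_per_drive / 2   # RAID 10: 2 back-end I/Os per write

print(f"RAID 5 : {raid5_capacity} GB usable, ~{raid5_write_iops:,.0f} write IOPS")
print(f"RAID 10: {raid10_capacity} GB usable, ~{raid10_write_iops:,.0f} write IOPS")
# RAID 5 gives more space (1536 vs 1024 GB); RAID 10 gives roughly double the write IOPS.
[/CODE]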

Some of the types of VMs I run (and will run) are: Exchange, SQL, Veeam, System Center, Remote Desktop Services, Citrix, BackupExec, WSUS, Windows Server 2012 R2 (and 2016), etc etc.

I'm only interested in running a single server as this will live in my lounge, so the old server will be decommissioned and possibly sold later on. I can't run both moving forward.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
The same advice I gave you on the other forum.

PS: You can RAID PCIe-based storage, you just can't hardware RAID it at the moment.
 

gea

Well-Known Member
Dec 31, 2010
If you want a single box for your VM server and storage, especially with ESXi, you should look at a board with an included LSI HBA. I would also suggest checking for 10GbE, as this is the future.

With socket 2011-3, SuperMicro currently lacks such a board.
You may compare the ASRock Rack EPC612D4U-2T8R.

1. About SSD storage
The Samsungs are fast desktop SSDs. For VM usage, you may check for SSDs with powerloss protection like the Intel S3500 and up, or use a hardware RAID controller with cache and BBU when using Windows NTFS / Linux ext4, or

use a virtualized ZFS storage appliance, which offers far better data security and is faster due to its better cache mechanisms. You also do not need to worry about the write hole problem. You can use desktop SSDs there if you add a Slog device with powerloss protection. I offer a ready-to-use ZFS storage VM if you want to try it. Such a VM can provide storage via NFS or iSCSI for ESXi or Hyper-V, or SMB for general-purpose storage.
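
A minimal sizing sketch for the Slog, assuming the storage VM is fed over 10 GbE (an assumption, adjust to your network). The rule of thumb is that it only needs to hold a few seconds of sync writes:

[CODE]
# The Slog only has to buffer the sync writes of a couple of ZFS transaction groups,
# not the whole working set, so a small, fast SSD with powerloss protection is enough.
incoming_gbit_per_s = 10     # assumption: storage VM is fed over 10 GbE
txg_seconds = 5              # default ZFS transaction group timeout
txgs_to_buffer = 2           # one group being written out plus one filling up

slog_size_gb = incoming_gbit_per_s / 8 * txg_seconds * txgs_to_buffer
print(f"Slog only needs roughly {slog_size_gb:.0f} GB")   # ~12-13 GB; capacity is not the point
[/CODE]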

2. Yes, you can use them and RAID them with software RAID like ZFS, but they are quite new and I would expect the odd problem. If you use a RAID of several good 6G SSDs, performance is similar.

3. Without many disks, the PSU is not critical.

4. ESXi will run on current Intel server chipsets.
Problems are mainly around the NIC and RAID/HBA controller.
Intel NICs and LSI HBAs are mostly OK.

5. It depends.
If you use a lot of read cache, as with ZFS, nearly all reads come from RAM.
You only need to care about write loads.
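
As a very rough illustration, assuming reads are served from RAM and only writes hit the SSD (all numbers below are assumptions, not benchmarks):

[CODE]
# If nearly all reads come from cache/RAM, the SSD budget is really a write-IOPS budget.
ssd_write_iops = 30_000                # assumed sustained random write IOPS for one SSD
write_load_per_vm = {
    "mostly idle lab VM":      50,     # assumed IOPS
    "busy VM (SQL, Exchange)": 1_500,  # assumed IOPS
}

for kind, iops in write_load_per_vm.items():
    print(f"{kind}: ~{ssd_write_iops // iops} such VMs per SSD")
# Hundreds of idle lab VMs fit on one SSD; a handful of busy database VMs already use it up.
[/CODE]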
 

Patrick

Administrator
Staff member
Dec 21, 2010
On the motherboard, you might try the Supermicro X10DRH-CT.

The bad thing is that it moves you to E-ATX, BUT you get onboard LSI SAS + 10 SATA ports + 10GbE. The nice thing about DP platforms is that you can add a second CPU and RAM fairly inexpensively when you need more space in 6-12 months. It means 10-15 minutes of downtime instead of having to rip and replace a motherboard.

Other quick thoughts
1) I do like using hardware RAID. I would probably check the Great Deals forum here for S3500s or other enterprise drives that work well on LSI SAS controllers. Having PLP on the drives is also nice (for designs that need it).
2) You can, but I think @gea mentioned the PCIe RAID side is less mature. You can do it with Intel drives and desktop OSes, as an example, but if you want a boot volume I would still use SATA/SAS.
3) That PSU is fine. 550w is a lot.
4) Yes
5) As gea mentioned there are two factors: space (including snapshots) and IOPS per VM.
 

canta

Well-Known Member
Nov 26, 2014
I would stick with a Supermicro motherboard; they have served me well with no issues at all!
Get a used previous-generation board on eBay for a good price, or buy a current motherboard and a new processor if your boss gives you permission.

I would stick with Intel S3xxx SSDs for a server. Samsung should be OK with software RAID, just pick the enterprise models, not the consumer ones.

Yes, you can do software RAID with PCIe SSDs on ZoL (ZFS on Linux), but you NEED an NVMe-capable Linux kernel; I think CentOS 7.x (RHEL 7.x) already has it built into the kernel. I don't know about the OpenSolaris variants... :p

ESXi 6.x should support current (but not brand-new) hardware; you need to cross-check, since I already moved from ESXi 5.x to Proxmox 3.4 and 4.0 in early 2015 (time goes by fast... we are in 2016 now).

You can run many VMs on a single SSD for your needs. As an example, one of my recent low-power mini-ITX boxes with 8GB RAM runs a router VM, one Debian VM with a moderately heavy load, and one lightly loaded Debian VM. Know your needs first, then start creating your VMs and measuring the load; my rule of thumb is to keep peak load at 80% of the maximum so there is some headroom as a buffer.
I also provision my SSDs down, e.g. I will bump a 120GB SSD down to 100GB usable. The reason? You can search on STH or on the net...

Based on your spec, your 550W PSU will be at 50% load or less when the system is on...
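
Quick sketch of the power maths. These are typical/assumed TDP figures only, not measurements:

[CODE]
# Very rough power budget for the proposed build (typical/assumed figures, not measured).
watts = {
    "E5-2630 v3 CPU (85W TDP)": 85,
    "motherboard + BMC":        40,
    "4 x 32GB DDR4 RDIMM":      4 * 5,
    "4 x SATA SSD":             4 * 3,
    "2 x 3.5in HDD":            2 * 8,
    "M5015 RAID card":          20,
    "5 x case fans":            5 * 3,
}
total = sum(watts.values())
print(f"Estimated draw ~{total} W -> {total / 550:.0%} of a 550 W PSU")
# Lands around 200 W, i.e. well under half of the 550 W unit.
[/CODE]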

Just my suggestion: get an E5 and lots of RAM, since you will be running many VMs. Get an E5 with as many real cores as your budget allows, without counting on HT. HT (Hyper-Threading) is useless for bare metal, in my understanding.

Last but not least:
I am not biased toward software or hardware RAID. If you already have the IBM ServeRAID M5015, get Intel SSDs, since I know Samsung SSDs do not perform well on that controller (I could be wrong :p).
My systems consist of hardware RAID (IBM M5014, HP P410) and software RAID (btrfs and ZFS on Linux).
Know your objective and dive in to get more detail before starting your build; you will not be disappointed if you achieve at least 80% of your objectives.

good luck!!!
 

gea

Well-Known Member
Dec 31, 2010
One remark about RAID controllers vs HBAs.

If you intend to use Windows with NTFS, Linux with ext4, or ESXi with VMFS locally, you should prefer a hardware RAID-6 controller with cache and BBU to be protected against the write hole problem.

If you intend to use shared storage with a modern software RAID and a CopyOnWrite filesystem like ZFS, where the write hole problem does not exist, you should avoid hardware RAID controllers in any case. Use a simple HBA that does no more than present the disks to the OS, like an LSI 9207, 9211 or 3008, ideally with the raidless IT-mode firmware. This gives ZFS direct disk access without a hardware RAID layer in between, for the best performance and reliability.
 

canta

Well-Known Member
Nov 26, 2014
HW versus software RAID is like a snake oil debate, ha-ha.

Pick whichever one best serves your interests.
 

gea

Well-Known Member
Dec 31, 2010
No, there are technical reasons and cases when you should use one or the other and when you should avoid one.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Wait, someone has RAID'd PCIe SSDs? I need to see this.

Are you being sarcastic?

There are threads here showing insane performance from NVMe RAID (software).
I could have sworn I've seen your name in there replying or liking posts, maybe I'm going crazy, lol ;)

There are also many people who've RAID'd (and booted on Skylake) Intel 750s on [H] and other forums online.
 

Deci

Active Member
Feb 15, 2015
I have RAIDed 2x Samsung 950 Pro 256GB drives together on a Gigabyte motherboard, but it only offers ~3-3.2GB/s, which isn't really worth it over the ~2.2-2.5GB/s you get with a single drive, given you can get the 512GB for less than 2x 256GB drives. This probably has something to do with the 2nd M.2 port only being x2 bandwidth; it really needs an add-in x4-to-M.2 card.
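
For reference, the theoretical lane maths (ceilings only, real-world throughput is lower):

[CODE]
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so each lane carries
# a bit under 1 GB/s of payload before protocol overhead.
lane_gb_per_s = 8 * 128 / 130 / 8      # ~0.985 GB/s per lane

for lanes in (2, 4):
    print(f"PCIe 3.0 x{lanes}: ~{lanes * lane_gb_per_s:.1f} GB/s ceiling")
# x2 tops out around 2 GB/s, so the second 950 Pro can't reach its native ~2.2-2.5 GB/s,
# which helps explain a ~3-3.2 GB/s total instead of double the single-drive figure.
[/CODE]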
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I'm talking about 8 - 14 NVMe drives, the performance is... well... insane :)
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I was hoping to find another chassis, as I refuse to pay $900 for one, but now I'm just making my own 'caddy' for the NVMe drives, so -- write-up with pics eventually; it hasn't been top priority lately :)
 