Help With New Hypervisor Build


eptesicus

Active Member
Jun 25, 2017
I browse this forum all the time, but I've finally gotten around to starting a thread.

I currently have two home-built E3-based servers in my home lab (seen here) that each pull double duty as hypervisor and NAS. I'm getting rather limited in what I can do, so I'd like to build a dedicated hypervisor and dedicate my other two servers to NAS duty only.

What I want is a dual-socket LGA 2011-3 server that will start with a single CPU (probably an E5-2630 v3) and 64-128GB of RAM; I can add a second CPU and more RAM later. I wish I could wait for EPYC to release (if anything, for prices to come down a bit), but the build's gotta happen before October. I don't mind going to eBay for some used parts, but I've never built a dual-CPU system, so I don't know exactly what I should look for or get.

Build’s Name:
TLSIWGTBFQSTBIAGMIO (The Last Server I Will Get To Build For Quite Some Time Because I Am Getting Married In October)
Operating System/ Storage Platform: Windows Server 2016 & Hyper-V or VMware
CPU: Intel Xeon E5-2630 v3
Motherboard: I'm thinking Supermicro, but their motherboard selection is crazy. ATX or E-ATX, 16 DIMM slots, and the right PCIe slots for a RAID controller and 10GbE.
Chassis: Supermicro CSE-216 (already picked this up on eBay)
Drives: I'm thinking I'll just pick up some server pulls on ebay. I'd like to give 15k drives a shot with 2x for the host OS, and 4-8x 146-300GB drives for VMs.
RAM: Depends on the mobo, but 64-128GB
Add-in Cards: RAID controller. I'd probably be fine with another IBM M5015, but am open to recommendations for this system.
Power Supply: Included with the chassis

Any and all recommendations would be appreciated.
 

i386

Well-Known Member
Mar 18, 2016
Germany
Do you really need a dual socket mainboard?
I had two dual-socket 1366 mainboards for my homelab and never populated the second socket (but spent the money on the more expensive boards :/)
Supermicro single-socket mainboards can take all the E5 v3 and v4 Xeon CPUs (4-22 cores) and have up to 8 RAM slots (256GB with 32GB DIMMs).

I'm thinking Supermicro, but their motherboard selection is crazy.
Yes, they have many options: with or without IPMI, onboard software or hardware RAID controllers, HBAs, NVMe support, ATX or E-ATX. Play with their mainboard selector and post the mainboard you want to get.
MBSA
I'd like to give 15k drives a shot with 2x for the host OS, and 4-8x 146-300GB drives for VMs.
I wouldn't use SAS (>7,200rpm) spinners; they consume too much power (more heat!) and are easily outperformed by SSDs (1x 15k rpm SAS drive ~210 IOPS, 8x SAS drives = 1,680 IOPS vs. an Intel S3500 SSD ~7,000 IOPS)
(Okay, maybe if the VM server runs just a few hours per day and the drives are cheap)
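To put those ballpark figures in perspective, here's a quick back-of-the-envelope comparison (the per-drive numbers are the rough estimates quoted above, not benchmarks; random IOPS are assumed to scale roughly linearly across spindles):

```python
# Rough random-IOPS comparison using the ballpark figures from the post above.
SAS_15K_IOPS = 210   # rough estimate for one 15k rpm SAS drive
S3500_IOPS = 7000    # rough estimate for one Intel S3500 SSD

def array_iops(per_drive_iops, drive_count):
    """Naive aggregate: assume random IOPS scale linearly with spindle count."""
    return per_drive_iops * drive_count

eight_sas = array_iops(SAS_15K_IOPS, 8)
print(f"8x 15k SAS: ~{eight_sas} IOPS")                   # ~1680 IOPS
print(f"1x S3500:   ~{S3500_IOPS} IOPS")
print(f"SSD advantage: ~{S3500_IOPS / eight_sas:.1f}x")   # ~4.2x
```

Even a full shelf of 15k spindles comes in around a quarter of what a single mid-range SATA SSD can do on random I/O, which is the workload VMs mostly generate.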
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Personally I'd suffix the server name with a number;

TLSIWGTBFQSTBIAGMIO01
TLSIWGTBFQSTBIAGMIO02
etc

...in case you end up getting married in October again in the future. And be very wary of getting married in January, June or July if you value your naming convention!

On a more serious note, I'd agree with i386: think very long and hard about dual socket, especially if WAF is going to be an emerging consideration...! Unless you really, definitely need big honking CPU power, you're likely to save a load of money and power by sticking with single-socket systems. What tasks are you planning on doing on this that chew the CPU?

Don't use 15k drives at all if you can avoid it; high-performance platters are dead drives walking IMHO, having been nearly completely replaced by SSDs. They're expensive, noisy (WAF again!), and power hungry, and I think spares will become increasingly hard to come by in the coming years. I'd stick to basic SSDs for the OS, or get some enterprise SSDs if you plan on doing stuff like storage tiering (or even just some VMs on SSD). Then you can use cheap 5,400-7,200rpm platters for high-density storage.
 

eptesicus

Active Member
Jun 25, 2017
Do you really need a dual socket mainboard?
I had two dual-socket 1366 mainboards for my homelab and never populated the second socket (but spent the money on the more expensive boards :/)
Supermicro single-socket mainboards can take all the E5 v3 and v4 Xeon CPUs (4-22 cores) and have up to 8 RAM slots (256GB with 32GB DIMMs).

Yes, they have many options: with or without IPMI, onboard software or hardware RAID controllers, HBAs, NVMe support, ATX or E-ATX. Play with their mainboard selector and post the mainboard you want to get.
MBSA

I wouldn't use SAS (>7,200rpm) spinners; they consume too much power (more heat!) and are easily outperformed by SSDs (1x 15k rpm SAS drive ~210 IOPS, 8x SAS drives = 1,680 IOPS vs. an Intel S3500 SSD ~7,000 IOPS)
(Okay, maybe if the VM server runs just a few hours per day and the drives are cheap)
Need? Probably not. However, there are a couple of VMs I run that I'd like to allocate more CPU to. I was thinking dual socket so I could start with an 8c/16t CPU (and save money initially, versus a 16c/32t CPU), then add the second CPU when I'm able. You're making me rethink it, though...

I'm glad you mentioned IPMI. I have it on my other two servers and definitely want it on this one. Supermicro's mobo selector isn't the most intuitive, and a lot of workstation motherboards keep coming up, but I'll see what I can find.

The reason I was thinking SAS is how cheap the server pulls are on eBay: $20-40 for a 146GB SAS drive... I was thinking of loading up on these and keeping quite a few hot spares at the ready. The server will be on 24/7... But the drives are cheap... Would it be OK to get used enterprise SSDs? Or is it imperative to buy those new because of how much may have been written to them over time?
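On the used-SSD question: enterprise drives report total host writes and a wearout indicator via SMART, so you can check a listing's health before buying. As a hypothetical sketch, given a drive's rated endurance (TBW) and the writes already logged, the remaining life is simple arithmetic (the figures below are made-up examples, not specs for any particular drive):

```python
def remaining_endurance(rated_tbw, written_tb):
    """Estimate the fraction of rated write endurance left on a used SSD.

    rated_tbw  -- manufacturer's rated terabytes-written endurance
    written_tb -- host writes already logged (e.g. read from SMART data)
    """
    used = min(written_tb / rated_tbw, 1.0)  # clamp: never report negative life
    return 1.0 - used

# Hypothetical example: a 300 TBW-rated drive with 45 TB already written
print(f"{remaining_endurance(300, 45):.0%} of rated endurance left")  # 85%
```

Enterprise SSDs are typically rated for far more writes than consumer drives, so a lightly used pull often has plenty of life left; the risk is sellers who don't publish the SMART data.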


Personally I'd suffix the server name with a number;

TLSIWGTBFQSTBIAGMIO01
TLSIWGTBFQSTBIAGMIO02
etc

...in case you end up getting married in October again in the future. And be very wary of getting married in January, June or July if you value your naming convention!

On a more serious note, I'd agree with i386: think very long and hard about dual socket, especially if WAF is going to be an emerging consideration...! Unless you really, definitely need big honking CPU power, you're likely to save a load of money and power by sticking with single-socket systems. What tasks are you planning on doing on this that chew the CPU?

Don't use 15k drives at all if you can avoid it; high-performance platters are dead drives walking IMHO, having been nearly completely replaced by SSDs. They're expensive, noisy (WAF again!), and power hungry, and I think spares will become increasingly hard to come by in the coming years. I'd stick to basic SSDs for the OS, or get some enterprise SSDs if you plan on doing stuff like storage tiering (or even just some VMs on SSD). Then you can use cheap 5,400-7,200rpm platters for high-density storage.
Maybe it should be OHCIN-WIN16HYPV-TLSIWGTBFQSTBIAGMIO01 and OHCIN-WIN16HYPV-TLSIWGTBFQSTBIAGMIO02 just for safe measure... :)

Please excuse my ignorance... but WAF?

I currently run 10 VMs between my two hosts, but would like to allocate more CPU resources to a couple of them and build a number of other VMs. I'll also be running some "cloud" and streaming services at home for family and friends. I know a surveillance server might be better suited to dedicated hardware, but I'd like to run that on this box as well.
 

eptesicus

Active Member
Jun 25, 2017
I've made some more decisions!

CPU: Intel Xeon E5-2630 v3
Cooler: SuperMicro SNK-P0048AP4
Motherboard: Supermicro MBD-X10DRI
Chassis: Supermicro CSE-216... I forgot to mention that this includes a BPN-SAS2-216EL1 backplane.
RAM: Samsung 16GB M393A2G40DB0-CPB3Q (quantity: 4)

I'm still on the hunt for some drives, but right now I'm looking for a good RAID controller for the BPN-SAS2-216EL1 backplane. Recommendations here? I like my IBM M5015s, even though they're slow to expand arrays (coming from my 40+TB servers, of course). I'm not sure whether to use the same controller for this build or to find another, better-performing card, and I'm unsure what counts as a great card on the used market.
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
I'm looking for a good RAID controller for the BPN-SAS2-216EL1 backplane. Recommendations here?
Do you need HW RAID or just an HBA?
Generally, look for the LSI 2308 chip; I recently bought some Supermicro AOC-S2308L-L8i cards (IR firmware) for USD 49.99 on eBay. It should be possible to find similar deals.

I'll flash mine to LSI 9207 firmware since I want IT mode.
 

eptesicus

Active Member
Jun 25, 2017
Do you need HW RAID or just an HBA?
Generally, look for the LSI 2308 chip; I recently bought some Supermicro AOC-S2308L-L8i cards (IR firmware) for USD 49.99 on eBay. It should be possible to find similar deals.

I'll flash mine to LSI 9207 firmware since I want IT mode.
Hardware RAID. I went ahead and ordered another M5015 on eBay. I figure I'll upgrade the RAID controller if/when I upgrade my backplane to SAS3, and since I currently use M5015s in my other two hosts, I'd be OK with having a backup after I upgrade.

I scored 8x 600GB HGST SAS3 HDDs on the forums here that I'll run my VMs on. I think I'll run those in RAID 10, but am open to suggestions. That should give me over 2TB of usable storage for VMs. Most of my current VMs are pretty small (40GB or so), but I have a couple that are larger. For my downloads VM, I may just use a dedicated NIC to put all of those files on my primary storage server, or I might use Always Sync to move them off automatically and organize them on my storage server later. I was going to get an SSD for the host OS, but remembered I have one in my backup server that I currently run VMs on; I'll just transfer that SSD to the new host for the OS and upgrade it at some point.
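For reference, the usable-capacity math for RAID 10 (mirrored pairs, striped together) is straightforward; a quick sketch with the drives described above:

```python
def raid10_usable(drive_count, drive_size_gb):
    """RAID 10 keeps half the raw capacity, since every drive is mirrored."""
    if drive_count % 2:
        raise ValueError("RAID 10 needs an even number of drives")
    return drive_count * drive_size_gb // 2

# 8x 600GB drives -> 4 mirrored pairs striped together
print(raid10_usable(8, 600), "GB usable")  # 2400 GB, i.e. ~2.4 TB
```

That matches the "over 2TB" figure, and losing any single drive (or one drive per mirror pair) keeps the array online.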
 

eptesicus

Active Member
Jun 25, 2017
Since I'm planning on running VMware on this rather than Hyper-V, I keep going back and forth between buying a couple of small SSDs to run in RAID 1 for the boot drive and using my spare SSD as a cache, booting from a flash drive and using my spare SSD as a cache, or just using my spare SSD as the boot drive like I had originally planned.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
If you're settled on ESXi, you've got very little to gain from installing and running it on SSDs when you could just boot from USB; but if it keeps your hardware setup simpler, running straight from a couple of SSDs is certainly doable. If it's getting an SSD to itself, it needn't be a fast and/or enterprisey one.