VMware Mega Build


Kryax

Member
Oct 14, 2017
Long-time lurker, and I love this website! I'm looking for recommendations, tweaks, and advice on a new build that I am starting to work on. See below for all the details:

Build’s Name:
VMware Build

Operating System/ Storage Platform:
VMware vSphere with Operations Management Enterprise Plus v6.5 (ESXi)
- Freenas VM (64GB RAM)
- Cisco VIRL VM (64GB RAM)
- GNS3 VM (8GB RAM)
- VCenter Server (10GB RAM)
- Windows 10 VM (4GB RAM)
- Windows 2016 Server VM (8GB RAM)
- PFSense VM (4GB RAM)
- (Misc/FUTURE VM's)

CPU:

1 x Intel Gold 6126

Motherboard:
1 x Supermicro X11SPL-F

Chassis:
1 x Supermicro CSE-846BE1C-R1K23B 24 Bay Storage Chassis

Drives:
24 x 8TB HGST NAS (FreeNAS RAIDZ2: 4 vdevs, 6 drives per vdev, in 1 storage pool)
1 x Supermicro SSD-DM032-SMCMVN1 32GB SATA DOM (ESXI Boot Drive)
2 x Intel Optane SSD 900P 480GB, AIC PCIe 3.0 x4, 20nm, 3D XPoint (ESXi Datastore)
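
For anyone sanity-checking the drive layout above, here is a quick back-of-envelope in Python (my own sketch, not part of the original spec; the ~80% fill guideline is common ZFS advice, not a hard rule):

```python
# Usable capacity for 4 x 6-drive RAIDZ2 vdevs of 8TB drives.
drives_per_vdev, vdevs, drive_tb = 6, 4, 8

raw_tb = drives_per_vdev * vdevs * drive_tb           # 192 TB raw
usable_tb = (drives_per_vdev - 2) * vdevs * drive_tb  # 128 TB after 2 parity drives per vdev
usable_tib = usable_tb * 1e12 / 2**40                 # ~116 TiB as the OS will report it
print(f"raw {raw_tb} TB, usable {usable_tb} TB "
      f"(~{usable_tib:.0f} TiB, ~{usable_tib * 0.8:.0f} TiB at ~80% fill)")
```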

RAM:
6 x Samsung M393A4K40BB2-CTD 32GB DDR4-2666 LP ECC Registered DIMM
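
As a quick cross-check of the RAM line against the VM list above (my own arithmetic from the figures in this post):

```python
# RAM budget: VM allocations from the OP vs. 6 x 32GB installed.
vm_ram_gb = {
    "FreeNAS": 64, "Cisco VIRL": 64, "GNS3": 8, "vCenter": 10,
    "Windows 10": 4, "Windows Server 2016": 8, "pfSense": 4,
}
installed_gb = 6 * 32                    # 192 GB
allocated_gb = sum(vm_ram_gb.values())   # 162 GB
print(f"{allocated_gb} GB allocated, {installed_gb - allocated_gb} GB left "
      f"for the hypervisor and future VMs")
```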

Add-in Cards:
1 x LSI SAS 9211-8i
1 x Intel Ethernet Server Adapter I350-T4V2

Power Supply:
2 x PWS-1K23A-1R (1200W Power Supply)

CPU Heatsink:
1 x Supermicro (SNK-P0068APS4) 2U Active CPU Heat Sink Socket LGA3647-0

Cables:
2 x Supermicro (CBL-SAST-0508-02) Internal MiniSAS to MiniSAS HD 50cm Cable

Usage Profile:
VMware ESXi to consolidate the existing VMware and storage servers into one box. Below are the main specs of the existing servers:

Existing VMWare ESXI Server
-Intel Xeon E3-1270v2
-Supermicro MBD-X9SCM-F-O
-Kingston 32GB (4 x 8GB) DDR3 SDRAM ECC Unbuffered 1333
-16GB USB Drive (ESXI Boot)
-250GB OCZ Vertex 4 (ESXI Host)
-1TB WD Black HDD (Datastore)

Existing Freenas Storage Server
-Intel Xeon E3-1230v1
-Supermicro MBD-X9SCM-F-O
-Kingston 32GB (4 x 8GB) DDR3 SDRAM ECC Unbuffered 1333
-20 x WD Red 3TB NAS running RAIDZ2 with 2 x 10-drive vdevs in 1 storage pool
-2 x IBM M1015 (Flashed to IT)
-Norco 4220

Comments:
I will start off with the areas where I have issues or need recommendations on hardware choices.

CPU/Motherboard/Compatibility
I am deciding between the Intel Xeon W-2155, Intel Xeon Gold 6146, and AMD EPYC 7401P. On price, the W-2155 and EPYC 7401P are the best. My issue with the W-2155 is that the Supermicro motherboards for it don't offer many PCIe slots, which would hinder future upgrades and expansion cards and might be a deal breaker. The EPYC 7401P has quite a few expansion slots, but I worry about compatibility and the loss of some features. I will be running this in a VM environment, and as such need to make sure I can do hardware passthrough (HBA cards to the FreeNAS VM). I am also worried about not being able to utilize Intel's AES-NI, which I currently use for pfSense and encryption acceleration. The Gold 6146 is the safe bet but also much more expensive.
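
To make the trade-off concrete, here is a small comparison table (spec figures from memory of the public datasheets; verify on ark.intel.com and amd.com before buying):

```python
# Candidate CPUs: (cores, base GHz, max turbo GHz, PCIe lanes).
candidates = {
    "Xeon W-2155":    (10, 3.3, 4.5, 48),
    "Xeon Gold 6146": (12, 3.2, 4.2, 48),
    "EPYC 7401P":     (24, 2.0, 3.0, 128),
}
for name, (cores, base, turbo, lanes) in candidates.items():
    print(f"{name:<15} {cores:>2} cores  {base}-{turbo} GHz  {lanes:>3} PCIe lanes")
```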

HBA Choices
I am out of touch on the latest recommendations for these cards. I will need the ability to pass the hardware through to FreeNAS in a VM environment. As such, I know I will need either 3 x 8-port or 2 x 16-port cards. Having SAS3 would be nice for future upgrades, but it's not a big deal if not.
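
The port math works out as follows (a sketch of the direct-attach case; an expander backplane changes the answer, as discussed later in the thread):

```python
import math

bays = 24  # the planned chassis; the current Norco holds 20 drives

# Direct attach: every bay needs its own HBA port.
for ports_per_card in (8, 16):
    cards = math.ceil(bays / ports_per_card)
    print(f"{ports_per_card}-port cards needed: {cards}")

# Expander backplane: a single 8-port HBA covers all bays.
print("with an expander backplane: 1 x 8-port HBA")
```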

Cases
I currently have a Norco 4220 and am familiar with the case; it has served me well over the years. I know the Supermicro cases are better built, include dual power supplies, and offer expansion options. Basically I'm looking for some general recommendations, and possibly the option of upgrading SAS2 expanders to SAS3 later if I wanted to. I will require at least 20 drive bays but might consider more options.

RAM
This will have to be determined a little later and will depend on which CPU/Motherboard I choose.


Edit 1: All comments/concerns have been addressed so far.

Other:
This really is a culmination of various topics but thought it would be best suited in this forum. Feel free to move it if needed.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Any reason you settled on the 6146? That is a pricey CPU if you do not need that ultra-high clock speed.

On EPYC - the hint is it is supported on 6.5u1.
 

Kryax

Member
Oct 14, 2017
Any reason you settled on the 6146? That is a pricey CPU if you do not need that ultra-high clock speed.

On EPYC - the hint is it is supported on 6.5u1.
I would only settle on the Gold 6146 as a last resort. I am looking for a processor with at least 10 cores but also the best single-core performance, since a few applications I use rely on single-core speed. The Gold 5118 is not a bad choice for the cost, but it's a definite cut in base and turbo clock speeds. My first choice is still the W-2155, and since it is fairly new there is a chance more motherboards with more PCIe slots will appear. AMD EPYC is actually a really good processor too and I would lean towards it, but again I'm still worried about compatibility issues. Overall I will probably wait another month or two before making my final decision. Right now I'm just laying the groundwork and doing my research while getting any feedback I can.

If you have any recommendations or comments please share your experiences!
 

Patrick

Administrator
Staff member
Dec 21, 2010
@Kryax OK. Just to be clear, I like the architecture, but AMD EPYC does not offer extreme single-core performance. 3.0GHz is about as high as you are going to get. So if you are looking for cores, that is one thing. If you are looking for maximum single-threaded performance, the Intel CPUs are going to be faster.

I do think the EPYC 7401P is an awesome chip. 24 cores and 64MB L3 cache for $1075 is my top choice for EPYC 1P value. AMD EPYC 7401P Linux Benchmarks and Review - Something Special

Background: I have now tested every single-socket and almost all dual-socket EPYC configs.

I do think you are right to be a bit more concerned with compatibility, especially if you want to do live migrations.

W-2155s - still working on getting my hands on the chips, although hoping to in the next two weeks.

Also, lower on cores, but I have been using 6134's quite extensively and I really like the performance/power of those chips.
 

Kryax

Member
Oct 14, 2017
@Kryax OK. Just to be clear, I like the architecture, but AMD EPYC does not offer extreme single-core performance. 3.0GHz is about as high as you are going to get. So if you are looking for cores, that is one thing. If you are looking for maximum single-threaded performance, the Intel CPUs are going to be faster.

I do think the EPYC 7401P is an awesome chip. 24 cores and 64MB L3 cache for $1075 is my top choice for EPYC 1P value. AMD EPYC 7401P Linux Benchmarks and Review - Something Special

Background: I have now tested every single-socket and almost all dual-socket EPYC configs.

I do think you are right to be a bit more concerned with compatibility, especially if you want to do live migrations.
Yea, I guess the reason I was considering AMD EPYC was also the 128 PCIe lanes it offers. I was considering other potential projects, such as adding a bunch of NVMe drives on PCIe cards. I need to really settle on my potential future requirements and make a compromise between clock speeds, PCIe lanes, etc. Like I said, I am still in the early planning stages right now.

Definitely interested in that review when you get your hands on those processors. I have only looked at the Supermicro motherboards, as I absolutely love IPMI, but I will probably do more research once more motherboards are available to see if this is still the processor to consider.
 

Waterkippie

Member
Oct 12, 2017
Here are some more LGA3647 motherboards you could look into:
ASRock Rack > Products
http://www.tyan.com/MB@en-US@0@1~sorte
Also, for drives, the 960 Pro might not be the best choice for a server; it has Pro in the name, but it's a consumer product and known to get hot pretty quickly and throttle down.

Try looking into the Intel DC or Samsung enterprise PCIe SSDs with proper cooling.

Or are you only using it for booting? You might want some SSDs for cache to speed up those disks?
Or a proper RAID controller with cache and a BBU?
 

Kryax

Member
Oct 14, 2017
Here are some more LGA3647 motherboards you could look into:
ASRock Rack > Products
http://www.tyan.com/MB@en-US@0@1~sorte
So ASRock Rack has one model that is single socket:
ASRock Rack > EPC621D8A
Looks very similar to the Supermicro X11SPL-F I was looking at. Does the IPMI management interface differ? I have only used Supermicro IPMI interfaces.

Also looked at the Tyan motherboards. Here are the 3 models that are 3647 and single socket:
http://www.tyan.com/Motherboards_S5630_S5630GMRE-L2
http://www.tyan.com/Motherboards_S5630_S5630GMR
http://www.tyan.com/Motherboards_S5630_S5630GMRE
So a few things stand out. PCIe slots look limited unless you opt for a dual-socket motherboard (which I know generally supports more PCIe slots without sharing). I am trying to get a single-socket solution. Second, the SSI-CEB form factor might limit case choices; I believe the Supermicro cases support up to E-ATX and ATX, so I'm unsure if that would be an issue.

Also, for drives, the 960 Pro might not be the best choice for a server; it has Pro in the name, but it's a consumer product and known to get hot pretty quickly and throttle down.

Try looking into the Intel DC or Samsung enterprise PCIe SSDs with proper cooling.

Or are you only using it for booting? You might want some SSDs for cache to speed up those disks?
Or a proper RAID controller with cache and a BBU?
So I should probably update the OP with those details. I plan to dedicate the 20 x 6TB drives to FreeNAS, but run ESXi on separate drives rather than off the FreeNAS pool, which is just being used for storage. So I will need to figure out an ESXi boot drive (USB/SSD/etc.), a host/VMs drive (SSD), and a datastore drive (HDD or SSD). I plan to run these natively off the motherboard and system itself.

Also, an update to narrow down at least the CPU/motherboard: I think I have ruled out AMD EPYC. Cisco VIRL does not officially support it:
Installing VIRL using the vSphere Client
While I have read that some people have had success, others have had lots of issues. I probably don't want to spend too much time making adjustments and tweaks just to get it running. As such, I updated the OP to reflect the changes. Still going to wait and see what Xeon-W motherboards come out before crossing that option off the list.
 

Kryax

Member
Oct 14, 2017
So I updated the OP with a few more specific items. Just preordered the Intel 900P Optane. Also getting an Intel quad-port NIC that I will be using with Cisco VIRL to integrate and connect some 3750 switches. Still need to narrow down quite a few things, some of which were mentioned previously.

1. Chassis
- Could use recommendations on the various Supermicro 846 chassis. Based on the following page:
Tower / 4U Chassis | Chassis | Products - Super Micro Computer, Inc.
The main differences seem to be the power supply types and redundancy. But what would be the differences between, say, the SC846TQ-R1200B and the SC846E16-R1200B? I am looking for a chassis that can handle both SAS3/SATA3 connections.
2. HBA cards
- Related to the chassis inquiry above: the 20 x HGST drives that will be used for FreeNAS storage are SATA III. I want to use an HBA in JBOD mode. I was looking at either 3 x LSI SAS 9300-8i or 2 x LSI SAS 9311-16i. I am familiar with the Norco 4220 case and have 2 x M1015 in JBOD mode, so I know that setup. I want to do the same thing with this new build, hence wondering how the backplanes in the Supermicro cases work in conjunction with the HBA.
3. ESXI Boot Drive
- Both my current VM and FreeNAS servers boot off USB drives. I'm not really a fan, since I have had them go bad over the years, usually during an update. With some of the suggested items above in mind, what is a good solution these days? Do people use SSDs since they have come down so much in price?
 

Kryax

Member
Oct 14, 2017
E16 > Expander backplane
TQ > Direct attach, individual SATA connectors on the backplane

On Supermicro boards you can use their SATA DOMs (16-120GB versions available)
Thanks for that info; it helps narrow down some choices. After looking further at the 846 series, there seem to be 2 backplane choices between the chassis I was looking at. Still a little fuzzy on the HBA-to-backplane and cable type requirements:

SC846BE1C-R1K23B
-BPN-SAS3-846EL1
SC846TQ-R1200B
-BPN-SAS-846TQ

With that said, I assume that if I went with the BPN-SAS-846TQ backplane I would need 3 x LSI SAS 9300-8i or 2 x LSI SAS 9311-16i, and this would require fanout cables. If I opted for the chassis with the BPN-SAS3-846EL1, I would only need 1 x LSI SAS 9300-8i and just a direct cable to the backplane. Also keep in mind I need the ability to pass the HBA(s) through to FreeNAS in this VM environment.
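
If my reading is right, the cable counts work out like this (my own sketch; double-check the connector types in the SC846 manual linked below before ordering):

```python
bays = 24

# BPN-SAS-846TQ: one discrete connector per bay, so each SFF-8643 HBA port
# fans out to 4 drives via a breakout cable.
fanout_cables = bays // 4         # 6 fanout cables
hbas_needed = fanout_cables // 2  # 3 x 9300-8i (2 ports per card)

# BPN-SAS3-846EL1: onboard expander, so 1 straight SFF-8643 cable to a
# single 9300-8i (2 cables if dual-linked for bandwidth).
print(f"TQ: {hbas_needed} x 9300-8i, {fanout_cables} fanout cables")
print("EL1: 1 x 9300-8i, 1-2 straight cables")
```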

Below is the manual for the chassis; see sections "B" and "C" for the backplane information:
https://www.supermicro.com/manuals/chassis/4U/SC846.pdf

And just for sanity's sake, what is the difference between the Supermicro backplanes and the one in a Norco 4224?
 

i386

Well-Known Member
Mar 18, 2016
If I remember correctly, Norco only has direct-attach backplanes (similar to the Supermicro A backplanes), so you will need a controller with 24 ports (or 3 controllers with 8 ports)
 

Kryax

Member
Oct 14, 2017
If I remember correctly, Norco only has direct-attach backplanes (similar to the Supermicro A backplanes), so you will need a controller with 24 ports (or 3 controllers with 8 ports)
Thanks for the info. It looks like I am going to settle on the SC846BE1C-R1K23B Supermicro chassis with an LSI SAS 9300-8i to save some PCIe slots. I updated the OP with the latest list of items, so if anything else stands out let me know. I will probably be purchasing some of the parts within the next few weeks.
 

Kryax

Member
Oct 14, 2017
Are you sure the one Optane SSD will be enough storage for VMs?
I believe I will have enough to support the main ones I posted in the OP. I did get the 480GB 900P; if I need to expand to more VMs, I will get another 900P. I only load the main OS files and programs, and leave some extra room for updates/upgrades. All external storage will be via the shared pool I create with FreeNAS.

- Freenas VM (20GB VM / 128GB RAM)
- Cisco VIRL VM (100GB VM / 64GB RAM)
- Windows 10 VM (60GB VM / 4GB RAM)
- Windows 2016 Server VM (60GB VM / 8GB RAM)
- PFSense VM (20GB VM / 4GB RAM)
- (Misc/FUTURE VM's)
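
A quick disk budget for that list against the 480GB 900P (sizes as quoted above; my own arithmetic):

```python
# VM disk allocations from this post vs. the 480GB Optane 900P.
vm_disk_gb = {
    "FreeNAS": 20, "Cisco VIRL": 100, "Windows 10": 60,
    "Windows Server 2016": 60, "pfSense": 20,
}
capacity_gb = 480
used_gb = sum(vm_disk_gb.values())   # 260 GB
print(f"{used_gb} GB allocated, {capacity_gb - used_gb} GB free for future VMs")
```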
 

Rand__

Well-Known Member
Mar 6, 2014
Is the OS of each VM on the 900p with extra disks on FreeNAS? Or all VMs on the 900p and 'external' storage via SMB?
I don't really get it yet
 

Kryax

Member
Oct 14, 2017
Is the OS of each VM on the 900p with extra disks on FreeNAS? Or all VMs on the 900p and 'external' storage via SMB?
I don't really get it yet
Sorry, I know my terminology and explanations are not at an expert level, so please excuse me if I am stating things incorrectly or out of context (I only do this stuff as a hobby).

To answer your question, your 2nd statement is my goal. All VM OSes, including FreeNAS, will be on the 900P. Those VMs will have the option to utilize "external" shared storage via SMB/NFS created from the FreeNAS VM pool (for example, running an FTP server but using a network share to access the folders). I know most people prefer to run their VM datastore off a FreeNAS pool via iSCSI over the network, or even in a virtual all-in-one setup, but I am not doing that. I understand I lose out on some of the backup features and high availability, but that is not my goal, as I am trying to consolidate down from 2 boxes to just 1. In the future, I may use this setup and create another FreeNAS pool with 2 other separate VM servers, but probably only for study reasons.
 

Kryax

Member
Oct 14, 2017
Yea, to elaborate on how I am creating the FreeNAS storage pool, I plan to do the following (rough capacity math after the list):

1. Build the new server, install the 900P, install the 9300-8i, and connect the 20 x 6TB drives to the backplane and 9300-8i.
2. Install ESXi on the 32GB SATA DOM and begin initial setup.
3. Create all host VMs on the 900P.
4. Create the FreeNAS VM on the 900P.
5. Pass the 9300-8i through to FreeNAS (hopefully this will not be an issue with the Supermicro chassis/LSI card I am choosing).
6. Verify that the 20 x 6TB drives are available.
7. Create 2 vdevs and 1 pool share (10 x 6TB per vdev).
8. In the future, I may use the extra 4 slots in the Supermicro chassis to create a separate vdev/pool if I decide to get 2 more physical VM servers running.
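
And the rough capacity math for step 7 (my own arithmetic; the TiB figure is roughly what FreeNAS will report, before any reservation):

```python
# 2 x 10-drive RAIDZ2 vdevs of 6TB drives: 2 drives per vdev go to parity.
drives_per_vdev, vdevs, drive_tb = 10, 2, 6
usable_tb = (drives_per_vdev - 2) * vdevs * drive_tb   # 96 TB
print(f"usable {usable_tb} TB (~{usable_tb * 1e12 / 2**40:.0f} TiB)")
```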
 

K D

Well-Known Member
Dec 24, 2016
Nice AIO setup. The only other thing I would do is connect both ports of the 9300-8i to the backplane so that you get two 4-lane 12Gb/s links of bandwidth.

I had posted the steps for a similar AIO config earlier - different hardware, and using Napp-it instead of FreeNAS.

ESXI / Napp-IT All In One with USB Datastore

You can ignore the section for USB pass-through and substitute Napp-it with Freenas.
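
For a rough sense of why dual-linking matters (my own sketch; the 0.8 factor is an assumed allowance for protocol overhead, and 200MB/s per HDD is optimistic sequential throughput):

```python
# Each SFF-8643 port on the 9300-8i carries 4 x 12Gb/s SAS3 lanes.
def link_gbytes_per_s(ports, lanes=4, gbps_per_lane=12, efficiency=0.8):
    return ports * lanes * gbps_per_lane / 8 * efficiency

hdds, hdd_mbs = 24, 200
print(f"single link: ~{link_gbytes_per_s(1):.1f} GB/s, "
      f"dual link: ~{link_gbytes_per_s(2):.1f} GB/s")
print(f"{hdds} HDDs sequential: ~{hdds * hdd_mbs / 1000:.1f} GB/s")
```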
 

Kryax

Member
Oct 14, 2017
Nice AIO setup. The only other thing I would do is connect both ports of the 9300-8i to the backplane so that you get two 4-lane 12Gb/s links of bandwidth.

I had posted the steps for a similar AIO config earlier - different hardware, and using Napp-it instead of FreeNAS.

ESXI / Napp-IT All In One with USB Datastore

You can ignore the section for USB pass-through and substitute Napp-it with Freenas.
Thanks for that guide. I will bookmark it and reference some of your sections during setup. I have set up ESXi from scratch before, but it looks like some of the GUI has changed, since I am still on 5.5u1 (started at 5.0 and have upgraded since). I messed with vCenter before the 60-day license expired, but have used the vSphere Client exclusively since then.

I also found this guide while doing research. In some of his later posts he goes through mini-tutorials of ESXi and FreeNAS topics in quite some detail:

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
 