All in One Build


Nimesh Bhundia

New Member
Feb 11, 2018
Build’s Name: Unnamed as of yet.
Operating System/ Storage Platform: ESXi / Undecided on Storage build
CPU: E5-26xx of some flavour, cheaper the better (not bought yet)
Motherboard: X9Sri-F (Bought)
Chassis: Generic OEM rackmountable tower server/pedestal, sourced from eBay (Bought)
Drives: 8 X 6TB WD REDS for NAS (Bought), Datastore Storage undecided
RAM: 64GB DDR3L ECC Reg (hopefully buying from fellow forum member)
Add-in Cards: HBA SAS9205-8i flashed to IT mode (bought)
Power Supply: Corsair RM/AX 650 (Bought)
Other Bits: Generic 5.25 to 3.5 drive bays (Bought)

Usage Profile: Home Lab VM duties, NAS

Hey Guys,

Just looking to get some ideas and thoughts from you guys. I currently have a Synology DS1815+ and a generic whitebox ESXi setup. The issues I face are:

1) I run a fair few Docker containers on the NAS and the CPU doesn't have the grunt I would want.
2) My generic ESXi build was limited to 32GB of RAM, and I was having some weird SMB issues with Linux guests.
3) The NAS and VM lab are in a spare room, but they are loud. I am also losing this room and need to move everything to the attic.

I started to think about my next steps and came up with this idea to consolidate into one build. Not only am I a proud nerd, I am also a hoarder, so I have loads and loads of gear lying around the house. I will be frank: I am cheap, so I like to save money where I can, and second-hand eBay bargains are always welcome. My work has an office in the US, and a willing team member will ship smaller items across the pond.

So here is where I could do with some of your collective wisdom.

Base Hardware.

1) CPU Cooling: I have a Corsair H80i kicking around that I am planning to use. Given that the base OS is ESXi and the Corsair Link app won't be running, will it just run the fans at full speed?
2) Five of the eight bays will be used for the 5.25" to 3.5" adapter bays, which have their own cooling. For the remaining bays I was planning to either buy an adapter or do some modding to add a 120mm fan pulling air into the chassis. Does that sound reasonable?
3) I am trying to decide on the CPU and would like to keep things cheap. Looking on eBay I can pick up an E5-2620 pretty cheaply; is it worth moving up to one of the 6-core or 8-core models?

ESXi Installation / Datastore.

1) How would you guys install ESXi? A good-quality USB stick? If so, any recommendations? I also have plenty of SSDs sitting around here, from 60GB to 256GB.
2) Any thoughts on datastore storage? My usage here isn't anything huge; when tinkering I tend to spin up a few Linux VMs, and eventually I want to start playing with Windows-based servers again too. I was thinking of grabbing one of those PCIe-based storage devices from eBay. Alternatively, I already have 3 x 2TB WD Greens here, or a 500GB Samsung 840 Pro.

NAS VM

I have tried various NAS software solutions over the years (OMV, Rockstor, WHS, FreeNAS and Synology/Xpenology). I always come back to Synology/Xpenology; it's so simple to use and the interface is one of the best out there. My storage is split into three categories:

1) My media library (non-critical; I can re-rip if required)
2) My random storage (non-critical; drivers, ISOs, application installers etc., which can all be redownloaded)
3) Personal documents and photos (critical; currently being synced to cloud storage)

The plan is to pass through the HBA to a NAS VM running Xpenology, with the eight 6TB drives connected to it. I would set up an SHR-2 array, which tolerates two disk failures. I prefer the SHR offering as expanding is pretty easy and can be done a single drive at a time, plus I am pretty familiar with it.
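For what it's worth, before enabling passthrough you can check that ESXi actually sees the HBA from the host shell. A sketch (exact output and field names vary by ESXi version, and the grep pattern assumes an LSI-branded controller):

```shell
# On the ESXi host shell: find the SAS9205-8i before enabling passthrough.
# (Sketch only; adjust the pattern if the card reports a different vendor string.)
lspci | grep -i lsi

# Show the full PCI record for the controller, including the passthrough-related
# fields, to confirm the device can be toggled for VM passthrough.
esxcli hardware pci list | grep -B 2 -A 28 -i lsi
```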

1) Am I really missing something by not using FreeNAS? From my limited knowledge, I know that Xpenology supports BTRFS but, as I understand it, doesn't guard against bitrot.

Thanks for putting up with my wall of text post. Would really appreciate any insight you guys can offer.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Sorry I am on the plane and we are about to take off.

On the CPU: if you can get an E5 V2 chip you may be happier. Ivy Bridge has better PCIe 3.0 support, and the 22nm process shrink offers significant power savings. There are a lot of eBay deals on V2s.

I believe the Corsair cooler is PWM-based, so the motherboard firmware controls it like any other fan. The app is there for customization, not as a mandatory addition.
 

Nimesh Bhundia

New Member
Feb 11, 2018
Cool thanks for that Patrick.

Been doing some further reading, and it looks like I am going to opt for Xpenology. The FreeNAS/ZFS stuff is really going over my head...
 

ljvb

Member
Nov 8, 2015
Just pick up a cheap HP DL380e G8 (you can get a DL380p Gen8 for slightly more). I picked one up for 300 to replace my older Supermicro X9-series build with a single 6-core (12 threads with Hyper-Threading) E5 V1 and 32GB of RAM.

The new one came with 32GB. I had a stash of about 40 new-in-packaging 600GB 10k SAS drives given to me (I did not ask my friend where he got them :) ). Keep in mind, Gen8 and newer "require" the new smart caddies, which, if you buy 25, cost almost as much as the server. The older caddies will work with some effort; not recommended, but they work in a pinch (I plan to replace the caddies over time).

As it stands right now, in the chronicles of "things you should not do" and "I hate HP" (I might make a new post about this), I had to slice and dice the rear cage power cable into SATA power cables to support extra drives, as the DL380e G8 does not include wiring for non-backplaned drives.

I have the DL380 with two LSI-flashed cards (an M1015 and a Dell H310) connected to the front backplane, with 25 600GB 10k SAS drives. Two 1TB WD Red drives are connected to the HP P420 Smart Array controller in RAID 1 (mirrored) as the boot volume (these are the two drives I had to splice in). 64GB of RAM (the original 32 plus the 32 from my old box), as well as a 4-port Intel PCI gigabit card.

Currently running vSphere 6.7 with three core VMs on the non-iSCSI datastore:
1) pfSense, with one Ethernet port passed through directly (this connects to the ONT of my Verizon FiOS)
2) FreeNAS, with the two HBAs passed through directly
3) vCenter, to manage things.

ESXi uses the FreeNAS box as secondary storage for various other VMs via iSCSI. I have a series of commands I run through /etc/rc.local.d/local.sh which starts the FreeNAS VM (outside ESXi's VM autostart facility), waits about 3 minutes, then rescans the iSCSI subsystem to make sure the iSCSI datastore is mounted before my tertiary VMs start.
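The sequence described above can be sketched roughly as follows (the VM ID and the 3-minute delay are assumptions; vim-cmd and esxcli are the stock ESXi shell tools):

```shell
#!/bin/sh
# /etc/rc.local.d/local.sh -- runs once at ESXi boot (sketch only)

# Power on the FreeNAS VM outside ESXi's normal autostart ordering.
# Find the real VM ID beforehand with: vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.on 1

# Give FreeNAS time to boot and publish its iSCSI target.
sleep 180

# Rescan all storage adapters so the iSCSI datastore is mounted
# before the dependent VMs are started.
esxcli storage core adapter rescan --all

exit 0
```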

One of the problems I have is that my wireless relies on DHCP to get its IP. (It's an Orbi; I've been regretting that decision lately, but I needed it at the time in my new house to expand my wireless without having the house physically wired for network cable, and its satellites have Ethernet, which lets me use my TiVo Minis without making holes in the house to run coax/Ethernet.) All DHCP-based devices have this issue: they drop their IP until the pfSense box, which is my DHCP server, has booted. Moving DHCP to my Cisco switches will resolve this problem.
 

weust

Active Member
Aug 15, 2014
Where you put ESXi is a bit up to you, of course.
Currently, mine runs on a mirrored 120GB SSD setup. ESXi 6.7 (starting with 6.0, IIRC) automatically uses the extra space as its first datastore.
If you use a USB stick or SD card, you will need to point logging at a datastore.
I forget whether it prompts you to do that after installation, but otherwise make sure to configure it.

Other than that, a USB stick or SD card is just fine.
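On that note, pointing the scratch location at a datastore is one way to get persistent logs on a USB/SD install. A sketch from the ESXi host shell (the datastore name and directory are assumptions; adjust to your environment):

```shell
# Create a persistent scratch directory on a VMFS datastore
# (datastore name "datastore1" is an example).
mkdir -p /vmfs/volumes/datastore1/.locker-esxi

# Point ESXi's scratch (and thus log) location at it.
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esxi

# A reboot is required for the new scratch location to take effect.
```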