Rebuilding my ESXi Home Lab | A Multi-Staged Project (with photos)


DBayPlaya2k3

Member
Nov 9, 2012
Houston
Hey guys,


I'm not really the most eloquent with words, but for the last couple of weeks I have been wanting to rebuild my ESXi home lab.

So I have been posting random questions and getting advice from many of the great members here in hopes of formulating a plan.

There are many things that I will be trying for the first time (InfiniBand, for example) and other things that I have a lot of experience with.



I thought I would attempt to collect my thoughts, ideas, and progress and present them in a staged format with the hope of doing two things:

1. Helping any other members like myself with things they may not have thought about or tried before.

2. Getting any feedback, good or bad, on my setup and goals.


--------------------------------------------------------------

Build Guides will be listed in the reserved threads.
 

DBayPlaya2k3

Member
Nov 9, 2012
Houston
----Current Setup with a little background story----


-------------------------------------
ESXi Server (with integrated storage) - VERSION 1

Build’s Name: VMware Whitebox
Operating System / Storage Platform: VMware ESXi 5.5
CPU: Intel Xeon E3-1245 V2 3.4GHz Quad Core
Motherboard: Asus P8C WS
Chassis: Fractal Design Define XL
Drives: 9x WD Red 3TB HDD
RAM: 32GB of Corsair Dominator DDR3 1600 (used, via Amazon)
Add-in Cards: LSI 9265-8i
Add-in Cards: Intel 24-Port SAS Expander
Add-in Cards: Intel I340-T4 Quad-Port PCIe NIC
Power Supply: Corsair Professional Series Gold AX750 750W PSU

Usage Profile: Running a few critical home VMs like a file server, a backup server, test VMs, and a few others


Story on this: I originally built this box back in Oct 2012 and even listed it on the forums. The plan back then was to build an "all-in-one box" that would house all my VMs and storage to keep everything simple, right? Lol, well, I soon started to outgrow this simple setup.

The box was great. It was capable of hitting 700-800 MB/s of disk throughput on just raw hard drives. The problem, however, is that I started to think about expanding...

As most of you might know, the really cool features of ESXi always require more than one host (clustering, DRS, HA, FT, etc.), and I had built myself into a box with nowhere to go. Time to improve...

--------------------------------------------------------------------
Photos:







 

DBayPlaya2k3

Member
Nov 9, 2012
Houston
----Current Setup with Additional Storage----


-------------------------------------
ESXi Server - More Drives, More Spindles - VERSION 2


1. Synology DS1812

2. Seagate Backup Plus 4TB External Hard Drives (x8 lol good story on that below)


Story on this:

So in order to fix having built myself into a one-host box like I described in the previous post, I decided all I needed was some shared storage.

Everyone knows ESXi loves shared storage, right... So my new question was: how do I get shared storage in order to expand my little home lab enterprise (lol, enterprise, yeah right)?

So I purchased a Synology DS1812 and 8 external 4TB hard drives (go big, right? lol).

Some of you may be wondering why I purchased 4TB external drives instead of the internal versions. Well, that's a good question, and the answer is quite simple: PRICE.

If you haven't noticed, sometimes for the "newest," biggest-capacity drives the cheapest versions are the external ones, not the internal ones. For me this was especially true for the Seagate 4TB disk when it was released.

Internal 4TB Drive Price : $200.00

External 4TB Drive Price: $150.00

So I purchased 8 drives and got to drive shucking. Lol, I can't take all the credit for this awesome idea; I actually learned about it from a Backblaze blog post. Backblaze prides themselves on offering unlimited storage for cheap (like 5 bucks or something), and during the Thailand flood they had to resort to "drive farming" in order to keep their stock levels up.

Seriously read about it here for yourself:

http://www.youtube.com/watch?v=zZNOqrUZmnw

Backblaze Blog » Farming hard drives: how Backblaze weathered the Thailand drive crisis


Anyway, back to the topic at hand: I now had a great shared storage setup for future hosts in my (for now) single-host ESXi cluster; however, there was a bit of a drawback.

The Synology box was awesome (I highly recommend it, btw) except when it came to moving 10TB of a virtual file server from internal HDDs on a RAID controller across a 1Gb Ethernet network (technically 2Gb Ethernet since it's teamed, but that doesn't really help a single host).


<10 TB VMDK on LSI RAID CONTROLLER> ---------------1 Gbps Ethernet-------------------------------<Synology DS1812> = Slow | It took about 26 Hours straight.


So this brought me to my next dilemma.

How do I move huge chunks of data across my network without waiting 24+ hours? And how can I stop being bottlenecked by the Synology and its 1Gbps Ethernet connection?

My drives were capable of doing 700 MB/s+, but I was only getting about 106 MB/s out of the Synology since it was bottlenecked by its NICs.

The answer will be presented in another post, but I'll give you a guess: it has a 10 in it...
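
To put some rough numbers on that, here is a quick back-of-the-napkin sketch (Python, purely illustrative; the 125 MB/s and 1250 MB/s figures are just theoretical line rates, and real transfers land a bit lower once NFS/iSCSI and TCP overhead get involved):

[CODE]
# Rough transfer-time math for moving a ~10TB VMDK over different links.
# Line rates are theoretical ceilings; my real 1Gbps copy took ~26 hours.

TEN_TB = 10 * 10**12  # bytes

links_mb_per_s = {
    "1 Gbps Ethernet":       125,   # 1 Gbps / 8 bits = ~125 MB/s ceiling
    "2x 1 Gbps (teamed)":    125,   # teaming doesn't speed up a single host/stream
    "10 Gbps (10GbE/IPoIB)": 1250,  # ~1250 MB/s ceiling - now the drives are the limit
}

for link, mb_s in links_mb_per_s.items():
    hours = TEN_TB / (mb_s * 10**6) / 3600
    print(f"{link:24s} ~{hours:5.1f} hours")
[/CODE]

That works out to roughly 22+ hours at gigabit speeds versus a little over 2 hours on a 10Gb link, which is exactly why the answer "has a 10 in it".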

--------------------------------------------------------------------
Photos:









Excuse my mess on the HP switch; I was troubleshooting something earlier (it required a longer cable). Oh yeah, I live in an apartment, and I still found a way to wire the whole place for Ethernet.

Lol this laundry room is my MDF
 

DBayPlaya2k3

Member
Nov 9, 2012
Houston
----Research oh the research----


So in the last post I discussed how I was being bottlenecked by the 1Gbps connection to my Synology NAS, and that I had a few ideas on how to fix it.


Well, for the last week or so I have been reading through the forums like a madman, trying to learn a whole lot about two technologies in particular:


1. 10 Gigabit Ethernet

2. 10 Gigabit InfiniBand


10GbE was simple enough, since it is meant to work with today's existing networks: there is nothing new to learn, just make sure you have the appropriate network cabling in place and you're golden.

The drawback to 10GbE is that it was, and still is, very expensive.


10 Gig InfiniBand, on the other hand, was relatively cheap for point-to-point setups (host to your shared storage server); its drawback, however, was the learning curve. That's not to say that dealing with 10 Gig InfiniBand is hard or difficult or anything of that nature, but it's not as common a technology as Ethernet.

At work I generally use Fibre Channel, iSCSI, and NFS with our VMware cluster, so I had no experience with InfiniBand.

So the question was: which technology do I go with, and what's the plan...
-------------------------------------------------

Well I asked the community for a few pointers which you can view in these threads:

http://forums.servethehome.com/vmwa...mmunity-cheap-storage-backend-esxi-5-5-a.html

http://forums.servethehome.com/vmware-virtualbox-citrix/2510-any-guides-setting-up-ipoib-esxi.html

http://forums.servethehome.com/netw...tween-connectx-2-vpi-connectx-2-en-cards.html


-----------------------------------------The Starting of a plan -----------------------------------------------------------------------------------

In the end I came up with a plan (not a cheap plan, btw): I decided I wanted to use both Ethernet and IPoIB (IP over InfiniBand).

I wanted to try out both technologies. I would build a few more nodes, put InfiniBand and 10GbE cards in them, and use them for different functions in ESXi.

So off to eBay to see if I could find some deals on the required gear. I will speak more about the actual plan in the next post (which might not be until tomorrow; it's getting late and I have to work to afford this craziness lol).


I leave you with a few more helpful links (besides the ones above, which were the most helpful) that were useful for understanding a bit more about InfiniBand and how it can operate in the Ethernet world.

------ Helpful Links---------

Infiniband connection between 2 windows servers - [H]ard|Forum

Deploying Windows Server 2012 Beta with SMB Direct (SMB over RDMA) and the Mellanox ConnectX-2/ConnectX-3 using InfiniBand – Step by Step - Jose Barreto's Blog - Site Home - TechNet Blogs

Infiniband Connection to ESXi 5.0 Server: vmware infiniband networking

http://forums.servethehome.com/networking/2205-going-10gbe.html


I leave you with just one photo below of the mystery box that will be discussed in the next post.
--------------------------------------------------------------------
Photos:





----------------------------------------------------------------------------------------------------
OK, so I lied; I have more than one photo, plus an extended "funny for ya" that "just happened 10 mins ago":

1. This is how you properly cool a 9265-8i (you know, the one with the passive heatsink) while moving 10TB of data:



2. This is why it doesn't matter if you cool the LSI RAID controller, because the Synology box got hot instead and stopped responding (during the 10TB data move, of course, which means I get to redo it):



Everything that says (inaccessible) was on the Synology box, which stopped responding from what I can only assume was overheating or something.


Damn laundry room MDF lol...

But on the plus side, that LSI controller is nice and cool with a bedroom fan blowing on it (lol, if LSI only knew).


 

DBayPlaya2k3

Member
Nov 9, 2012
Houston
Hey guys, thanks for the comments, and sorry for the delay; I've been busy with work and deploying Microsoft System Center 2012.

The post below will be a bit long since it is really my thoughts and actions over a couple of days.

Enjoy...


-----Formulating the Plan-----



[Network Interface Cards]

OK, in the last installment I talked about wanting to explore the benefits of faster, higher-throughput connections (10G Ethernet and 10G InfiniBand).


Well, after a bit of research I determined two important things:

1. 10G Ethernet was going to be EXPENSIVE, even for point-to-point connections, but it would just work.

2. 10G InfiniBand was going to be cheap but was going to require a little more elbow grease.

Since I wanted to try them both out, I guess that meant I'd better start looking on eBay for some damn good deals, and that's exactly what I did.



---Ebaying---

Note: I am going to include prices for the parts I purchased in this section so that anyone wanting a rough idea of what they should be paying can get one.


After a couple of days on eBay I purchased the following:


1. (x1) Intel X540-T2 10Gbps - $315.00 (+ shipping)

2. (x2) Intel X540-T1 10Gbps - $300.00 ea. (free shipping)

3. (x3) Mellanox ConnectX-2 EN 10Gbps - $65.00 ea. via Best Offer (free shipping)

4. (x2) QLogic copper QSFP-to-QSFP cable - $24.49 ea. (free shipping)
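
For anyone keeping score at home, here's a quick tally of that haul (just the prices listed above, shipping excluded; a throwaway Python snippet, nothing more):

[CODE]
# Quick total for the eBay NIC/cable purchases listed above (excl. shipping).
parts = [
    ("Intel X540-T2",          1, 315.00),
    ("Intel X540-T1",          2, 300.00),
    ("Mellanox ConnectX-2 EN", 3,  65.00),
    ("QLogic QSFP-QSFP cable", 2,  24.49),
]

total = sum(qty * price for _, qty, price in parts)
print(f"Total: ${total:,.2f}")  # Total: $1,158.98
[/CODE]

So right around $1,160 before shipping... see what I meant about this not being a cheap plan.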

**********************
Tips for eBay shopping:

Sometimes you can get gear at a lower price if you look for the "Or Best Offer" auctions (don't assume the lowest Buy It Now is the lowest price you can get for something).

I was able to drop the Mellanox cards from $95 a piece to $65 a piece. One of the Intel NICs was dropped from $360 to $300.

Also, when someone says "best I can do," that does not necessarily mean the best they can do. If you are not comfortable paying their asking price, counter-offer or walk away; either way it can't hurt.
***********************

Well, I have finally started to get some of these cards in the mail. I am still waiting on the Mellanox cards to arrive, but I have all three of the Intel NICs.

The original plan for the NICs I purchased was to use them as follows:

3x Intel 10G NICs | SAN network (NFS or iSCSI)

3x Mellanox 10G NICs | vMotion network


I, however, changed that plan after I figured out the compute side of the equation (more about this later in the compute section).









[Compute Power - The more the merrier "Until you get that electric bill"]

Alright, with the cards sorted and arriving by the day, I turned my focus to the next part of my problem: additional hosts.

I knew that I wanted to be able to use more of VMware's advanced features, but with a one-node cluster this was impossible.

Though this was a home lab/production system (lol, OK, who are we kidding, I'm not making money with it, so it's not production), I still wanted HA (High Availability) and the option to take one of my servers offline for maintenance while keeping all the VMs up and running.


The first time I built my "whitebox" I used off-the-shelf consumer parts after doing tons of research on what was supported in VMware and what was not.

One of the things I found, for instance, is that when you get the K SKU of Intel processors you lose VT-d.

My current system uses an Asus WS motherboard. These are Asus boards for professional workstations that have been vetted to work with aftermarket adapter cards (mostly HBAs, RAID controllers, and server NICs).

It worked great but had a few flaws...

Cons:
1. It had no form of remote management, so the initial setup had to be done in front of a monitor, keyboard, and mouse (so old school lol).

2. VT-d never seemed to work for me (even though it was supported). I might have had bad luck or something on this one, but I could never pass through anything correctly (video cards or USB sticks).


I had learned a lot since I built that box and wanted to see if I could do it better, and possibly cheaper.

For my next round of whiteboxes I turned to SuperMicro.

Personally, I had never had any real dealings with Supermicro motherboards, but I had seen them mentioned many times in the forums, and Patrick has featured them many times on the STH main site.

So I started my search there and here is what I came up with.


--Build Guide---

Build’s Name: MI6 (lol, because I like James Bond flicks)
Operating System / Storage Platform: VMware ESXi 5.5 / Server 2012 R2 & possibly FreeNAS
CPU: Intel Xeon E3-1230 V3 (Qty: 2)
Motherboard: Supermicro MBD-X10SLM+-F-O (Qty: 2)
Chassis: Fractal Design Arc Mini (Qty: 2)
Drives: Mushkin Ventura Plus 16GB USB 3.0 Flash Drive (Qty: 2)
RAM: 16GB Kit of Crucial 1600MHz DDR3 UDIMM ECC Memory (Qty: 2)
RAM: 24GB Kit of Kingston 1600MHz DDR3 UDIMM ECC Memory (Qty: 1)
RAM: 8GB Single DIMM of Kingston 1600MHz DDR3 UDIMM ECC Memory (Qty: 1)
Power Supply: Corsair CX430M 430W PSU (Qty: 2)
Other Bits: Fractal Design Silent 140mm Fan (for my storage server) (Qty: 1)
Other Bits: HP 1810-8G Smart Switch (Qty: 1)
Other Bits: Intel I350-T2 Dual-Port PCIe NIC (for VM traffic) (Qty: 1)


I went with this particular Supermicro board vs. a few of the cheaper versions because:

- It has USB 3.0

- It has an internal USB Type-A header (so you can plug a flash drive directly into the board and boot ESXi)

- Better PCIe connectivity (review the lane layouts to see what I mean; I "love" those x16 slots that are really x8, or x8 slots that are really x2, lol)

Choosing this board, however, did come with a sacrifice: I would not be able to run my Intel 10GbE card, my 10G InfiniBand card, and my dual-port PCIe NIC all at the same time. This is due to how I am structuring my virtual network (it will make more sense in the next section).

So my current plan is to try one technology then pull it out and try the other just for fun.


Here are some pics of the parts I have so far:













[VMware Home Lab - Network Diagram]

I'll try to keep this part short and sweet (lol, 'cause your eyes might be glazed over by now).

I wanted to diagram out the new proposed ESXi network before I assembled any parts, to help make sense of all this connectivity.

As it currently stands, each host will have 6 NICs (lol, yes, I said 6):

- 1 for IPMI remote management

- 2 for VM client traffic

- 2 for vMotion and management traffic

- 1 for the NFS/iSCSI network

See, just reading that is hard to mentally picture (at least it was for me), so that's why I wanted to make sure I got something on paper before I started building. It also made me realize I needed 4 more NICs than I had, plus an additional switch (hence the dual-port Intel NIC + HP 1810-8G switch in the previous build list).
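
If you ever want to script a layout like this instead of clicking through the vSphere client, here is a rough pyVmomi sketch of what one host's standard vSwitch / port group setup could look like. To be clear, this is just an illustration of the idea, not my actual config: the host name, credentials, vmnic numbers, and port group names are all made up, and the vMotion/management and NFS/iSCSI networks would additionally need VMkernel adapters, which I've left out.

[CODE]
# Sketch only: carve one ESXi host's NICs into the vSwitches described above.
# Hypothetical host/credentials/vmnic names - adjust for your own lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # home lab, self-signed cert
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
net = host.configManager.networkSystem

# vSwitch for VM client traffic, uplinked to the two Intel I350-T2 ports
vss = vim.host.VirtualSwitch.Specification()
vss.numPorts = 128
vss.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"])
net.AddVirtualSwitch(vswitchName="vSwitch1-VM", spec=vss)

# Port group the VMs will attach to
pg = vim.host.PortGroup.Specification()
pg.name = "VM Client Traffic"
pg.vswitchName = "vSwitch1-VM"
pg.vlanId = 0
pg.policy = vim.host.NetworkPolicy()
net.AddPortGroup(portgrp=pg)

# The vMotion/management pair and the single 10G NFS/iSCSI uplink follow the
# same pattern, plus an AddVirtualNic() call each for their VMkernel ports.

Disconnect(si)
[/CODE]

Run the same thing against each host and the layout stays consistent across the cluster.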

I downloaded some Visio stencils and looked for some examples of decent diagrams to help me get started on this.

Below is my proposed VMware home lab network for one host. So basically take this diagram, double everything (except the switches), and you have a better idea of what I am working with.




-----The Diagram-----




--Wrap it up my eyes hurt already---

OK, I think this is enough of an update for now. I am still in the process of receiving parts, and depending on what else comes in this week I could be building the nodes and re-configuring the network as early as this weekend.

Once the nodes have been built and the network has been re-configured I will post my next update, but that most likely won't be until Sunday at the earliest.

Thanks for reading I'm going to go get some rest.

If you have any questions, just leave them below and I'll answer.



Links:

http://www.servethehome.com/supermicro-intel-xeon-e31200-v3-haswell-server-motherboard-lineup/

The Unofficial VMware Visio Stencils | Technodrone

Visio Stencils | Jonathan's Technology Blog

https://communities.vmware.com/message/2115651?tstart=0

Virtualization: Resistance Is Futile: VMware vSphere 5 Host Network Designs

Newegg.com - Computer Parts, PC Components, Laptop Computers, LED LCD TV, Digital Cameras and more!
 

Patrick

Administrator
Staff member
Dec 21, 2010
I can't wait to see more of this! Great read thus far!
 

DBayPlaya2k3

Member
Nov 9, 2012
Houston
Also enjoying the read so far.

Haha, I read that backblaze post as well. I'd be tempted to do it but I want the warranty, and to know what drives I'm getting.

Funny you say that; I thought the exact same thing when I was shucking those Seagates.

When I checked the serial numbers on the internal drives they all showed full warranty, which was a HUGE relief (because I didn't want to have to put them back together if one went out).
 

vegaman

Member
Sep 12, 2013
Auckland, New Zealand
And it doesn't show up in the warranty check as an external drive or anything? Wow, that's surprising.

I've done it to a few WD drives in the past when the enclosure died and people wanted to keep the data; the warranty was always voided, though.

Edit: also, hmm. The heat problems make me think I will have to build the ventilation system I started planning.
 

DBayPlaya2k3

Member
Nov 9, 2012
Houston
Yeah, well, the LSI cards are known to run hot. The Fractal case I have has sound dampening, which makes it super quiet but also isn't great for airflow.

I have ordered a fan for the case (which requires that you remove the sound dampening on the side panel), but it's not here yet.

As for the Synology, that was just odd; it completely quit responding with no drive activity. I went into the software and changed the fan mode to Cool instead of Quiet, and I am going to try the transfer again.

The 10TB transfer will make much more sense in my next post, which I'll write sometime tomorrow.

Also, just an FYI, I checked the warranty on the drives again and it's not showing up on the Seagate site.

I don't know if what I am typing in is wrong, but I am just copying what the Synology reports. Tomorrow I'll try pulling a drive out and using the serial number printed on the bare drive, but I know the last time I checked, everything showed up as all good on the Seagate warranty checker.
 

jtreble

Member
Apr 16, 2013
Ottawa, Canada
...As most of you might know, the really cool features of ESXi always require more than one host (clustering, DRS, HA, FT, etc.) and I had built myself into a box with nowhere to go. Time to improve...
Have you priced out the licensing for the "really cool features"? Have you considered using something other than a VMware-based system? Just curious.
 

DBayPlaya2k3

Member
Nov 9, 2012
Houston
I am lucky in regards to VMware, as it's provided for me to use for free as part of my university package for learning purposes. So it doesn't cost me anything (lol, well, besides the tuition I pay). For me VMware also made the most sense because it is what I am in charge of managing at work (along with other things).

I have thought about using Hyper-V as well and plan on doing a nested lab (an ESXi cluster running a Hyper-V cluster) so I can get familiar with Microsoft's take on virtualization.

I thought about Xen, but it has not made it onto my to-do list.

Also of note: you can get free trial licenses to play with all the features of VMware if you're willing to wipe and reinstall. This is a good way to learn more about the technology if you aren't familiar with it. There is also the free ESXi license, which is great for home labs (obviously without all the features).
 

Rain

Active Member
May 13, 2013
I am lucky in regards to VMware, as it's provided for me to use for free as part of my university package for learning purposes. So it doesn't cost me anything (lol, well, besides the tuition I pay).
You're very lucky. My university provides only the "simple" stuff to students (Windows, MS Office, etc.). I want free (...legal...) VMware goodness!