Hey guys, thanks for the comments and sorry for the delay; I've been busy with work and deploying Microsoft System Center 2012.
The post below will be a bit long since it is really my thoughts and actions over a couple of days.
Enjoy...
-----Formulating the Plan-----
[Network Interface Cards]
Ok, in the last installment I talked about wanting to explore the benefits of faster-throughput connections (10G Ethernet and 10G InfiniBand).
Well, after a bit of research I determined two important things:
1. 10G Ethernet was going to be EXPENSIVE, even for point-to-point connections, but it would just work.
2. 10G InfiniBand was going to be cheap but would require a little more elbow grease (more on that below).
Since I wanted to try them both out, I figured I had better start looking on eBay for some damn good deals, and that's exactly what I did.
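A quick note on that "elbow grease": I'm assuming the Mellanox cards may not be picked up by the inbox ESXi 5.5 driver. If they don't show up, the usual fix is installing Mellanox's ESXi driver offline bundle from the host shell; a rough sketch (the bundle path and filename below are just placeholders for whatever you actually download from Mellanox):

# Copy the offline bundle to a datastore, install it, then reboot the host
esxcli software vib install -d /vmfs/volumes/datastore1/mlnx-driver-offline-bundle.zip
reboot

# After the reboot, confirm the driver VIBs are present
esxcli software vib list | grep -i mlx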
---eBaying---
Note: I am going to include prices for the parts I purchased in this section so anyone who wants a rough idea of what they should be paying can get one.
After a couple of days on eBay I purchased the following:
1. (x1) Intel X540-T2 10Gbps - $315.00 (+ shipping)
2. (x2) Intel X540-T1 10Gbps - $300.00 ea. (Free Shipping)
3. (x3) Mellanox ConnectX-2 EN 10Gbps - $65.00 ea. Best Offer (Free Shipping)
4. (x2) QLogic Copper QSFP to QSFP Cable - $24.49 ea. (Free Shipping)
**********************
Tips for eBay shopping:
Sometimes you can get gear at a lower price if you look for the "Or Best Offer" auctions (don't assume the lowest Buy It Now is the lowest price you can get for something).
I was able to drop the Mellanox cards from $95 apiece to $65 apiece, and one of the Intel NICs from $360 to $300.
Also, when someone says "best I can do," that does not necessarily mean it is the best they can do. If you are not comfortable paying the asking price, counteroffer or walk away; either way it can't hurt.
***********************
Well, I have finally started to get some of these cards in the mail. I am still waiting on the Mellanox cards to arrive, but I have all 3 of the Intel NICs.
The original plan for all these NICs was to use them as follows:
3 Intel 10G NICs | SAN network (NFS or iSCSI)
3 Mellanox 10G NICs | vMotion network
However, I changed that plan after I figured out the compute side of the equation (more on this later in the compute section).
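Either way, once the cards are seated in a host, my first sanity check will be making sure ESXi actually sees them and negotiates a 10000Mbps link. From the ESXi shell that should be roughly the following (vmnic4 is just a placeholder for whatever name ESXi assigns the 10G port):

# List every physical NIC with its driver and negotiated link speed
esxcli network nic list

# Drill into one card for driver and link details
esxcli network nic get -n vmnic4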
[Compute Power - The more the merrier "Until you get that electric bill"]
Alright, with the cards sorted and arriving by the day, I turned my focus to the next part of my problem: additional hosts.
I knew that I wanted to be able to use more of VMware's advanced features, but with a 1-node cluster that was impossible.
Though this was a home lab/production system (lol, ok, who are we kidding, I'm not making money with it, so it's not production), I still wanted HA (High Availability) and the option to take one of my servers offline for service while keeping all the VMs up and running.
The first time I built my "whitebox" I used off-the-shelf consumer parts after doing tons of research on what was and wasn't supported in VMware.
One of the things I found, for instance, is that with the K SKU of Intel processors you lose VT-d.
My current system uses an Asus WS motherboard. These are Asus boards for professional workstations that have been vetted to work with aftermarket adapter cards (mostly HBAs, RAID controllers, and server NICs).
It worked great but had a few flaws...
Cons:
1. It had no form of remote management, so the initial setup had to be done in front of a monitor, keyboard, and mouse (so old school, lol).
2. VT-d never seemed to work for me (even though it was supported). I might have just had bad luck on this one, but I could never pass anything through correctly (video cards or USB sticks).
I had learned a lot since I built this box and wanted to see if I could do it better, and possibly cheaper.
For my next round of whiteboxes I turned to SuperMicro.
Personally, I had never had any real dealings with SuperMicro motherboards, but I had seen them mentioned many times in the forums, and Patrick has featured them many times on the STH main site.
So I started my search there and here is what I came up with.
---Build Guide---
Build's Name: MI6 (lol, because I like James Bond flicks)
Operating System/Storage Platform: VMware ESXi 5.5 / Server 2012 R2 & possibly FreeNAS
CPU: Intel Xeon E3-1230V3 (Qty: 2)
Motherboard: SuperMicro MBD-X10SLM+-F-O (Qty: 2)
Chassis: Fractal Design Arc Mini (Qty: 2)
Drives: Mushkin Ventura Plus 16GB USB 3 Flash Drive (Qty: 2)
RAM: 16GB Kits of Crucial 1600MHz DDR3 UDIMM ECC Memory (Qty: 2)
RAM: 24GB Kit of Kingston 1600MHz DDR3 UDIMM ECC Memory (Qty: 1)
RAM: 8GB Single DIMM Kingston 1600MHz DDR3 UDIMM ECC Memory (Qty: 1)
Power Supply: Corsair CX430M 430W PSU (Qty: 2)
Other Bits: Fractal Design Silent 140mm Fan (for my storage server) (Qty: 1)
Other Bits: HP 1810-8G Smart Switch (Qty: 1)
Other Bits: Intel I350-T2 PCI-E Dual Port (NIC for VM traffic) (Qty: 1)
I went with this particular SuperMicro board over a few of the cheaper versions because:
- It had USB 3.0
- It had an internal USB Type A header (so you can plug a flash drive directly into the board and boot ESXi)
- Better PCI-E connectivity (review the lane counts to see what I mean; I "love" those x16 slots that are really x8, or x8 slots that are really x2, lol)
Choosing this board, however, did come with a sacrifice: I would not be able to run both my Intel 10GbE card and my 10G InfiniBand card alongside my dual-port PCI-E NIC at the same time. This is due to how I am structuring my virtual network (it will make more sense in the next section).
So my current plan is to try one technology, then pull it out and try the other, just for fun.
Here are some pics of the parts I have so far:
[VMware Home Lab - Network Diagram]
I'll try to keep this part short and sweet (lol, because your eyes are probably glazed over by now).
I wanted to diagram out the newly proposed ESXi network before I assembled any parts, to help make sense of all this connectivity.
As it currently stands, each host will have 6 NICs (lol, yes, I said 6):
- 1 for IPMI remote management
- 2 for VM client traffic
- 2 for vMotion and management traffic
- 1 for the NFS/iSCSI network
See, just reading that is hard to mentally picture (at least it was for me), so that's why I wanted to make sure I got something on paper before I started building. It also made me realize I needed 4 more NICs than I had, plus an additional switch (hence the dual-port Intel NIC + HP 1810-8G switch in the previous build list).
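To make that layout a little more concrete, here is roughly how I expect it to translate into esxcli on each host. The IPMI port is out-of-band, so only 5 of the 6 NICs ever show up in ESXi, and every vmnic number, vmk number, and IP address below is a placeholder until ESXi actually enumerates the cards:

# vSwitch0 already exists with the management vmkernel port on vmnic0;
# add a second uplink plus a vMotion portgroup and vmkernel port
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.10.11 --netmask=255.255.255.0 --type=static
vim-cmd hostsvc/vmotion/vnic_set vmk1

# vSwitch1 for VM client traffic on the two I350-T2 ports
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --active-uplinks=vmnic2,vmnic3
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VM Network"

# vSwitch2 for the NFS/iSCSI network on the 10G card
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic4
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch2 --active-uplinks=vmnic4
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=Storage
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=Storage
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=172.16.20.11 --netmask=255.255.255.0 --type=static

That is just my intent on paper; the real config happens once the nodes are built and I can see what ESXi actually names everything.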
I downloaded some Visio stencils and looked for examples of decent diagrams to help me get started on this.
Below is my proposed VMware home lab network for 1 host. Basically, take this diagram and double everything (except the switches) and you have a better idea of what I am working with.
-----The Diagram-----
---Wrap it up, my eyes hurt already---
Ok, I think this is enough of an update for now. I am still in the process of receiving parts, and depending on what else comes in this week, I could be building the nodes and reconfiguring the network as early as this weekend.
Once the nodes have been built and the network has been reconfigured I will post my next update, but that most likely won't be until Sunday at the earliest.
Thanks for reading; I'm going to go get some rest.
If you have any questions, just leave them below and I'll answer.
Links:
http://www.servethehome.com/supermicro-intel-xeon-e31200-v3-haswell-server-motherboard-lineup/
The Unofficial VMware Visio Stencils | Technodrone
Visio Stencils | Jonathan's Technology Blog
https://communities.vmware.com/message/2115651?tstart=0
Virtualization: Resistance Is Futile: VMware vSphere 5 Host Network Designs
Newegg.com