Help me build a VMware home lab


VR Bitman

Member
Sep 9, 2014
I am looking to build a home lab. I actually already have one made of consumer hardware, but it's just not cutting it anymore and I would like to move on to a serious setup and virtualize my existing PC(s).

I saw various threads/posts on this website about great deals on used server equipment, but I live in Europe and those deals are typically not very practical for me ($700+ in shipping...).

These are my requirements:
- a cluster of at least 2 identical nodes (3x 1U rack servers or 1x multinode 2U server? which is better?)
- low power consumption, as low as possible
- good scalability in terms of compute resources: I don't need machines with many cores or GBs of RAM out of the box, but I would like to be able to install at least 128GB per node in the long run
- preferably 10G Ethernet, integrated or through daughter cards if costs are lower
- 2, preferably more, PCI-E slots per node to accommodate cards that will be presented to VMs via DirectPath I/O

Also...
- how do you guys deal with the noise at home? any tips for reducing it or cooling the machines through alternative methods?
- what is the most convenient solution in terms of storage for a home lab? an external NAS? VSAN?

Can anyone give me some pointers for what would work best at home?
Thanks.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Well, your requirements really do limit choices.

128GB RAM/node rules out the Atom C2750 (no VT-d there either) and the Xeon E3-1200. Westmere-EP/E5-2400 gives you 6 DIMMs per processor, so 12 DIMMs in a 2P server. That means 96GB with 8GB RDIMMs or 192GB with 16GB RDIMMs (I guess you could mix for 144GB). Still, memory cost is going to be a big factor at that capacity.

My sense is you are going to be looking at a Xeon E5-2600 V1, V2 or V3. With 8 DIMMs per processor you can eventually get to 128GB even in a single-processor configuration (8x 16GB). Sticking to a single processor will also have a big impact on keeping power consumption low.
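
To put the DIMM math above in one place, here is a quick Python sketch (slot counts are the typical per-platform figures from above; double-check your actual board's manual):

```python
# Max RAM per node for the platforms discussed above.
# Slot counts are typical figures; verify against your board's manual.
platforms = {
    "Westmere-EP / E5-2400, 2P (6 DIMMs/socket)": 12,
    "E5-2600 V1/V2/V3, 1P (8 DIMMs/socket)": 8,
}

for name, slots in platforms.items():
    for dimm_gb in (8, 16):
        print(f"{name}: {slots} x {dimm_gb}GB RDIMMs = {slots * dimm_gb}GB")
```

That is where the 96GB (12x 8GB) and 128GB (8x 16GB) figures come from.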

What form of 10GbE: 10GBase-T, SFP+, or QSFP+ (thinking Mellanox adapters)? That will impact which boards you get.

OK, so let's say you have 2 nodes and decide on SFP+. You can get something very inexpensive like a Mikrotik CRS226 (see the thread here), which only has 2x SFP+ ports but also has 24x gigabit Ethernet ports and can do some L3 stuff in software. Not ideal, but fanless and <12W operation, which is spectacular for those specs. Mikrotik also has very inexpensive SFP+ transceivers.

NIC-wise, the new X710 adapters will be dual-port SFP+ 10GbE. Check the piece just published on the main site for Fortville thermals; that was the 2x 40GbE version.

In your position the question becomes: do you want to go with technology that is now two major generations old (e.g. LGA1366), or something newer? Going with an L5638/L5639/L5640 will give you the features you want, except you are likely going to stop at 96GB. If that is OK, it saves you a few thousand dollars.

What I might do in your place is start very small. Just about everyone here will tell you their lab undergoes constant additions. Then build out as needed.

On the storage front, yikes. I like having network storage, and then using vSAN as something to test and play with. Keep your data secure in one spot, rather than keeping the only critical copy on the architecture you are experimenting with. TBH, it is really easy for me to provision a Synology network share or iSCSI target. Performance is nothing like what I get with a home-built box, but there is something nice about having one platform I do not tinker with.
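
Mounting that network share on a host is easy to script, too. A minimal pyvmomi sketch, assuming pyvmomi is installed and the NFS export already exists; all hostnames, credentials, and paths below are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skip certificate verification for a self-signed ESXi host.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="password", sslContext=ctx)

# Grab the first host in the inventory.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
host = view.view[0]

# Mount nas01:/volume1/vmstore as a datastore named "nas-vmstore".
spec = vim.host.NasVolume.Specification(
    remoteHost="nas01.lab.local",
    remotePath="/volume1/vmstore",
    localPath="nas-vmstore",
    accessMode="readWrite",
    type="NFS",
)
host.configManager.datastoreSystem.CreateNasDatastore(spec)
Disconnect(si)
```

The same idea works for an iSCSI target, just with a bit more setup on the initiator side.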
 

VR Bitman

Member
Sep 9, 2014
Hello Patrick,

thanks for taking the time to reply. I guess I should've been more specific (and maybe also more realistic). I don't really need 128GB of RAM; my reasoning was that I might save money in the long run by gradually adding RAM instead of buying new servers.
I can definitely settle for a maximum of 96GB!
As regards networking, I'd like to go with the cheaper solution. Does SFP+ tend to be the cheaper standard? What about the switches? Are (10Gb) SFP+ switches cheaper than Base-T switches?

For storage, I guess I'll go with a NAS. I already have one, but it's underwhelmingly slow and I want to get rid of it. What would you recommend? Synology products? I require at least 32TB of storage and decent transfer rates. I won't settle for anything that can't sustain at least 100MB/s read and write.
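
Doing the back-of-the-envelope math on that target, a single gigabit link is already right at the edge of 100MB/s after protocol overhead, which is part of why 10GbE appeals to me. A rough sketch of my reasoning (the ~94% efficiency factor is just a ballpark assumption; real numbers vary with frame size and protocol):

```python
# Rough usable throughput per link speed. ~94% efficiency is a ballpark
# assumption for Ethernet/IP/TCP overhead at standard 1500-byte frames.
EFFICIENCY = 0.94

for name, gbps in [("1GbE", 1), ("10GbE SFP+", 10)]:
    payload_mb_s = gbps * 1000 / 8 * EFFICIENCY  # Gbit/s -> MB/s, minus overhead
    print(f"{name}: ~{payload_mb_s:.0f} MB/s usable")
```

So a single gigabit port gives roughly 117MB/s in the best case; any contention and I'm below my floor.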
 

VR Bitman

Member
Sep 9, 2014
Never heard of it. Does it pool together local disks? How does it present the storage resources to vSphere? What advantages are there compared to, say, VSAN?
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
Never heard of it. Does it pool together local disks? How does it present the storage resources to vSphere? What advantages are there compared to, say, VSAN?
Don't get me wrong. I like vSAN and it should be standard on every platform. BUT - you are building a VMware lab. I would have both vSAN and some sort of storage that sits outside of lab technologies.
 

ehorn

Active Member
Jun 21, 2012
DirectPath will prevent some HA features of VMware, if HA is a consideration in your cluster.
 

VR Bitman

Member
Sep 9, 2014
I am aware of the limitations of DirectPath I/O, thanks. Can anyone help me with my previous questions?
 

Kristian

Active Member
Jun 1, 2013
As regards networking, I'd like to go with the cheaper solution. Does SFP+ tend to be the cheaper standard? What about the switches? Are (10Gb) SFP+ switches cheaper than Base-T switches?
Whether SFP+ is the cheaper standard will probably depend on who you ask.
Generally the decision will come down to the number of 10GbE switch ports you need.

Considering switches like the 8-port Netgear ProSafe Plus XS708E, I would say 10GBase-T has the cheaper switches once you need more than two 10GbE ports.

SFP+ was the cheapest road for me: 2x Brocade 1020, optics bought from Fibre-Store, and the Mikrotik CRS226 that Patrick mentioned (all together about $400).
 

ehorn

Active Member
Jun 21, 2012
I am aware of the limitations of DirectPath I/O, thanks. Can anyone help me with my previous questions?
HA is not a requirement for you... cool... Most guys who do multi-node at home wish to work with these advanced HA features, which is why I mentioned it.

What is it you want to do with your lab?
 

sboesch

Active Member
Aug 3, 2012
Columbus, OH
Never heard of it. Does it pool together local disks? How does it present the storage resources to vSphere? What advantages are there compared to, say, VSAN?
It uses the ZFS file system. You can create pools of mirrors, or RAID-like pools using RAIDZ. You can present the storage via NFS or iSCSI, and you can even run the storage as a VM on a host and present the disks back over NFS. Check out napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana, Solaris and Linux :Downloads
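
The ZFS side is only a couple of commands. A minimal sketch driving them from Python; the disk device names and the pool/dataset names are placeholders, and the exact sharenfs value varies a bit between illumos and ZFS on Linux:

```python
import subprocess

def run(cmd):
    """Echo and execute a storage admin command; raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Two striped mirrors (RAID10-like); swap in "raidz" for parity instead.
# Device names are Linux-style placeholders (illumos uses c0t0d0-style names).
run(["zpool", "create", "tank",
     "mirror", "/dev/sda", "/dev/sdb",
     "mirror", "/dev/sdc", "/dev/sdd"])

# Dataset for VM storage, exported over NFS for ESXi to mount.
run(["zfs", "create", "tank/vmstore"])
run(["zfs", "set", "sharenfs=rw", "tank/vmstore"])  # "on" for ZFS on Linux
```

That is the whole pipeline: pool, dataset, export. ESXi then mounts it like any other NFS datastore.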
 

wildchild

Active Member
Feb 4, 2014
ZFS can expose storage over either NFS or iSCSI, depending on what you prefer.
Take a look at zfsbuild.com for some nice graphs.
Personally, I like my SAN/NAS separated from my hypervisors, as it gives more freedom on the hypervisor side, and more speed.
I use 3x DL360 G6 (vSphere 5.5) and a Supermicro SC846 with 14x 1TB drives, 2x DC S3700, 4x 840 Pro, and 4x 1GbE in LACP; faster and more reliable than most high-end SANs in the same config.
As for not knowing ZFS...
Developed by Sun, it is used nowadays by Oracle Exadata, Dell Z-NAS, and Nexenta, with open-source implementations in OmniOS, SmartOS, FreeBSD, and FreeNAS.
Just to name a few off the top of my head.
 

VR Bitman

Member
Sep 9, 2014
Thanks everyone for the input.
I'm seeing a number of good deals for used equipment with DDR2 memory.
I've never worked with DDR2 in servers; is this a choice I'll regret down the road, or won't it make much of an impact on performance?
 

Patrick

Administrator
Staff member
Dec 21, 2010
I would skip DDR2. Much higher power consumption, lower capacity, slower, and extremely old, power-inefficient platforms.

The DDR3 transition happened in the 2007-2008 timeframe if I am not mistaken, so you are basically buying 7-year-old technology. (Assuming you are looking at Xeon 5400-era platforms.)

Nehalem (Xeon 5500) was a huge architectural change and what is really the basis for all processors since in terms of the basic core.

The low end of Nehalem is basically around the performance of today's Atom C2750 but can handle more RAM.

DDR2 is basically two generations behind on memory and five generations behind on processors.
 

VR Bitman

Member
Sep 9, 2014
Excellent, thanks.
I hope to settle on something in the next few days and then show you guys what I plan on doing.
 

VR Bitman

Member
Sep 9, 2014
I just had another thought: I was thinking of mixing different servers so that I can play around with hardware from different vendors. I will make sure the CPUs are the same and each node has the same amount of memory, to maintain a balanced cluster.
What do you guys think?
 

sboesch

Active Member
Aug 3, 2012
Columbus, OH
With VMware you can specify an EVC mode, which determines the processor feature baseline VMware will present to VMs. This gives you the ability to mix and match processor generations. I have no issue mixing and matching CPU and memory capacities; they do not all have to be the same.
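
You can also read back what baseline a cluster is enforcing through the API. A small pyvmomi sketch, assuming pyvmomi is installed; the vCenter hostname and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: self-signed certificate
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# Print each cluster and the EVC baseline it enforces (None means EVC is off).
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    print(cluster.name, "->", cluster.summary.currentEVCModeKey)
Disconnect(si)
```

A cluster with no EVC configured will simply print None there.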
 

VR Bitman

Member
Sep 9, 2014
Yes, I was aware of EVC mode. However, it "dumbs down" the processors by disabling (masking) CPU features. I want to match CPUs and memory capacities because it will ensure a perfectly balanced and functional cluster and eliminate any need for workarounds, special features, etc.
Everything has to be perfect because buying twice is not an option.
 

rayt

New Member
Apr 18, 2013
Would a single or dual E5 with DAS be an effective choice? I have an ASUS Z9PA system that boots Ubuntu, and I run KVM or VirtualBox machines on it with VM storage on a PIKE RAID 5 DAS.
It works well for me, as I can assign as many resources and as much CPU power as I like. I tried to migrate to the free version of VMware, but I found it too limiting.
 

VR Bitman

Member
Sep 9, 2014
HA is not a requirement for you... cool... Most guys who do multi-node at home wish to work with these advanced HA features, which is why I mentioned it.

What is it you want to do with your lab?

I somehow missed this post. I apologize.
I want to use it for learning purposes but also to virtualize my current desktop system.

Would a single or dual E5 with DAS be an effective choice? I have an ASUS Z9PA system that boots Ubuntu, and I run KVM or VirtualBox machines on it with VM storage on a PIKE RAID 5 DAS.
It works well for me, as I can assign as many resources and as much CPU power as I like. I tried to migrate to the free version of VMware, but I found it too limiting.

I've ordered the Mikrotik router (thanks guys!) + cables and adapters so far. I still need to decide about the actual server hardware.