Need advice for a lab refresh, not sure which direction to go in


idea

Member · Joined May 19, 2011
Sorry, that PCIe card will definitely not fit.
Thanks dba!


I have a new idea, spawned off of the past few rambling posts I have been writing, and this one is much more realistic:

"One C6100 to do it all, plus a SAS or FC JBOD unit"
  • Node #1 and Node #2 - ESX hosts booting off of either a single USB thumb drive or a single SATA drive. No PCIe card, no mezzanine card.
  • Node #3 - Linux with ZFS (bare metal), booting off of a mirrored pair of SATA disks. PCIe SAS or FC HBA with external ports. No mezzanine card. (Rough pool-layout sketch after this list.)
  • Node #4 - Powered off.
  • JBOD unit - Going to look on eBay for a 12-, 16-, or 24-bay JBOD unit with either SAS or Fibre Channel connectivity. The SGI Rackable 3U 16-bay SE3016 SAS expander chassis looks awesome, except that the fans are loud and very difficult to replace.
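Here's a rough sketch of how I'm picturing the JBOD pool on Node #3 once the disks show up behind the HBA. ZFS on Linux is assumed to be installed, the device names are placeholders, and NFS is just one possible way of presenting the storage to the ESX nodes; it's an illustration of the layout, not a tested recipe.

```python
# Hypothetical sketch of the Node #3 pool layout. Assumes ZFS on Linux is
# installed (zpool/zfs on PATH) and that this runs as root. The disk IDs are
# placeholders; substitute the real /dev/disk/by-id names from the JBOD.
import subprocess

JBOD_DISKS = [
    "/dev/disk/by-id/scsi-JBOD_DISK_0",
    "/dev/disk/by-id/scsi-JBOD_DISK_1",
    "/dev/disk/by-id/scsi-JBOD_DISK_2",
    "/dev/disk/by-id/scsi-JBOD_DISK_3",
]

def run(cmd):
    """Echo and run a command so the sketch doubles as a command list."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Stripe of mirrors (RAID10-style): decent random I/O for hosting VM disks.
run(["zpool", "create", "tank",
     "mirror", JBOD_DISKS[0], JBOD_DISKS[1],
     "mirror", JBOD_DISKS[2], JBOD_DISKS[3]])

# One dataset for the ESX nodes; sharenfs is just one option for exporting it.
run(["zfs", "create", "-o", "sharenfs=on", "tank/vmstore"])
run(["zpool", "status", "tank"])
```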
 

dba

Moderator · Joined Feb 20, 2012 · San Francisco Bay Area, California, USA
If you do go with this three-node solution, I have one suggestion: InfiniBand!

Add a mezzanine card to each of the three active nodes and buy two InfiniBand cables - total cost is around $500. Connect each ESX node directly to the storage server node and use IPoIB. You'll get very fast data access without having to buy a switch. Without some sort of fast networking like this, you will see very poor performance if you try to run your 10-20 VMs over one or two Gigabit Ethernet ports. Note that while I did exactly this with Hyper-V, I have not tested the Dell InfiniBand mezzanine cards with VMware.
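To put rough numbers on that, here is a quick back-of-the-envelope sketch. The Gigabit figures follow from the VM counts above; the IPoIB number is only an assumed ballpark for a DDR/QDR mezzanine link (real IPoIB throughput depends heavily on drivers and tuning), so treat it as illustrative.

```python
# Back-of-the-envelope: bandwidth per VM if every VM pulls data at once.
# The 1-2 GbE cases come from the post above; the IPoIB rate is an ASSUMED
# ballpark for a DDR/QDR mezzanine link, not a measured figure.
GBIT = 1e9 / 8  # bytes per second in 1 Gbit/s

links = {
    "1 x 1GbE": 1 * GBIT,
    "2 x 1GbE": 2 * GBIT,
    "IPoIB, assumed ~10 Gbit/s usable": 10 * GBIT,
}

for vms in (10, 20):
    for name, bw in links.items():
        per_vm_mb = bw / vms / 1e6  # MB/s per VM, shared evenly
        print(f"{vms:2d} VMs over {name:32s}: ~{per_vm_mb:6.1f} MB/s per VM")
```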

Also, if you keep one node powered off, you can save about 0.3 A by unplugging the power cable from the interposer for that sled. The motherboards will otherwise draw quite a bit of power even when they are not running.
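For a rough sense of what that 0.3 A works out to over a year (line voltage is not specified above, so both 120 V and 230 V are assumed, and power factor is ignored):

```python
# Rough conversion of the quoted 0.3 A draw into watts and yearly kWh.
# Line voltage and 24/7 operation are assumptions; power factor is ignored.
amps = 0.3
hours_per_year = 24 * 365

for volts in (120, 230):
    watts = amps * volts
    kwh_per_year = watts * hours_per_year / 1000
    print(f"{amps} A at {volts} V is about {watts:.0f} W, or {kwh_per_year:.0f} kWh/year")
```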


idea

Member · Joined May 19, 2011
That is great advice, thanks very much. While spec'ing out my build I'll definitely leave room to add InfiniBand or 10GbE in the future, or if the JBOD unit I get is Fibre Channel I may even go that route. For now, 1GbE is plenty for a proof of concept.

And thank you for saving me 0.3 A until I find a purpose for Node #4.