C6100 & ESXi Newbie questions


shanghailoz

New Member
May 10, 2013
I've taken the plunge and ordered a C6100 to see if I can use it to replace some existing servers I have in our datacenter.

It's my first dabble into VMs vs. actual hardware; previously, pricing here in .cn wasn't that different between a 1U and a spare Ethernet port, so I didn't find it too interesting. Prices have changed, though, and it's now viable.

Hopefully I can replace 4 separate servers with one of the C6100 4-in-1 2U chassis and have an easier-to-maintain setup.

My plan is to set up a clustering-type solution on ESXi.


I'm not completely sure how this will work with the C6100, though. As I understand it, each node is basically a separate computer.
Conceptually I'm just wondering how it all ties together.

These are my thoughts - can anyone tell me if this is the right way to do it, offer suggestions, or tell me if my thoughts are bzzzt wrong! :p

Theory:
Set up a RAID card (e.g. an LSI card with dual SFF-8087 -> 8 drives) to provide RAID10 storage on one of the nodes.
Install ESXi or Citrix CloudXEN on that node, and set up OpenIndiana or similar as a ZFS share for all the VMs I plan to run, with pass-through of the controller to that VM.
Then set up a group of VMs for HA clustering that will run on the other 3 nodes.
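
On the storage node, if I let ZFS handle the RAID10 instead of the card, I'm picturing something like this (device names are placeholders, untested):

  # RAID10 equivalent in ZFS: four mirrored pairs, striped together
  zpool create tank \
      mirror c1t0d0 c1t1d0 \
      mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0 \
      mirror c1t6d0 c1t7d0

  # dataset for the VMs, exported over NFS to the other nodes
  zfs create tank/vmstore
  zfs set sharenfs=on tank/vmstore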

Does ESXi support that sort of setup? Or would I need to install ESXi on each node and have them talk to each other to share VMs?
Or do I just set up netboot on the other nodes and run the VMs on them that way?

This is the part I'm confused on!

I'll have hardware next week, so I can play around, but thought I'd ask first so someone can set me straight ;)

I have plenty of experience in Linux, and dare I say it, Windows, but little to none with ESXi or other virtualization underpinnings.

My aim is to set up a clustered solution for hosting, with RAID10 underneath it all for hardware redundancy.
Initially it will be shared hosting to replace existing servers, with the possibility of VPS provisioning for clients later.

Hardware inside the C6100 will most likely be a small SSD for ESXi and then 8x 4TB SATA in RAID10 for storage.
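
(For sizing: 8 x 4TB in RAID10 should net roughly 16TB usable, i.e. half the 32TB raw, before filesystem overhead.)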
 

Patrick

Administrator
Staff member
Dec 21, 2010
The way to think about these is that they are 4x distinct servers. The servers just share an outer casing, power supplies, fans, and a PCB for hot-swapping drives.
 

shanghailoz

New Member
May 10, 2013
So it looks like I should stick some InfiniBand or a similar HBA inside and boot over iSCSI or NFS for inter-node communication.
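
For the iSCSI route, I assume it'd be something like this on each ESXi node (adapter name and target address are just placeholders):

  # enable the software iSCSI initiator on the node
  esxcli iscsi software set --enabled=true
  # point it at the storage node and rescan (vmhba33 / 10.0.0.10 are examples)
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.10:3260
  esxcli storage core adapter rescan --adapter=vmhba33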

I'm just unclear on how the different nodes will talk.
Does ESXi support multiple nodes sharing VMs? Or should I look at CloudXEN?

Would I be running multiple copies of ESXi, e.g. one per node, or running one and netbooting the other nodes off of that?
Or is that just a matter of which implementation I want to go with?

Questions questions :)
 

Patrick

Administrator
Staff member
Dec 21, 2010
Personally, I would run one bare-metal ESXi hypervisor per node.
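
Each node can then mount the same shared storage as a datastore, along these lines (host address and export path are examples):

  # on each compute node, mount the storage node's NFS export as a datastore
  esxcli storage nfs add --host=10.0.0.10 --share=/tank/vmstore --volume-name=shared-vms
  esxcli storage nfs list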
 

Mike

Member
May 29, 2012
EU
If you want to make a buck off it, please note that you need paid VMware licenses to do the fancy stuff (HA, vMotion) in your cluster. Some other platforms offer those features for free but may have a slightly higher learning curve: Xen, Proxmox, KVM, whatnot.