I've taken the plunge and ordered a C6100 to see if I can use it to replace some existing servers I have in our datacenter.
It's my first toe-dip into VMs vs actual hardware; previously, pricing here in .cn wasn't that much different for a 1U vs a spare ethernet port, so I didn't deem it too interesting. Prices have changed though, and it's now viable.
Hopefully I can replace 4 separate servers with one of the C6100 4-in-1 2U chassis and have an easier-to-maintain setup.
My plan is to set up a clustering-type solution on top of ESXi.
I'm not completely sure how this will work with the C6100, though. As I understand it, each node is basically a separate computer.
Conceptually I'm just wondering how it ties together.
These are my thoughts - can anyone tell me if this is the right way to do it, offer suggestions, or tell me if my thoughts are bzzzt wrong!
Theory:
Set up a RAID card (e.g. an LSI card with dual SFF-8087 -> 8 drives) to provide RAID10 storage on one of the nodes.
Set up ESXi or Citrix XenServer on that node, and set up OpenIndiana or similar to provide a ZFS share for all the VMs I plan to run, with passthrough of the controller to that VM.
Then set up a group of VMs for HA clustering that will run across the other 3 nodes.
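Concretely, I imagine the ZFS side on the storage node looking roughly like this - just a sketch, and the pool layout, device names, and subnet are guesses on my part (this version lets ZFS do the mirroring itself; alternatively the pool could sit on top of the card's hardware RAID10 volume):

```shell
# On the OpenIndiana storage VM, with the LSI controller passed through.
# Device names, pool/dataset names, and the subnet are placeholders.

# RAID10-equivalent in ZFS terms: a stripe across mirrored pairs.
zpool create tank \
    mirror c1t0d0 c1t1d0 \
    mirror c1t2d0 c1t3d0 \
    mirror c1t4d0 c1t5d0 \
    mirror c1t6d0 c1t7d0

# Dataset for VM storage, shared over NFS to the other nodes.
zfs create tank/vmstore
zfs set sharenfs='rw=@192.168.10.0/24,root=@192.168.10.0/24' tank/vmstore
```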
Does ESXi support that sort of setup? Or would I need to install ESXi on each node, with the nodes talking to each other to share VMs?
Or do I just set up netboot on the other nodes and run the VMs on them that way?
This is the part I'm confused on!
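If it's the "ESXi on each node" route, my understanding is that each compute node would simply mount the shared storage as an NFS datastore - something like this, where the storage node's hostname and the export path are placeholders:

```shell
# On each of the three compute nodes (ESXi shell), mount the NFS
# export from the storage node as a shared datastore.
esxcli storage nfs add \
    --host=storage-node.example.lan \
    --share=/tank/vmstore \
    --volume-name=vmstore

# Confirm the datastore is visible.
esxcli storage nfs list
```

With the same datastore mounted on every node, moving VMs between nodes (HA/vMotion style) should then be possible - that's my understanding, anyway.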
I'll have the hardware next week, so I can play around, but thought I'd ask first so someone can set me straight.
I have plenty of experience in Linux, and dare I say it Windows, but little to none with ESXi or other virtualization underpinnings.
My aim is to set up a clustered solution for hosting, with RAID10 underlying it all for hardware redundancy.
Initially it will be shared hosting to replace the existing servers, with the possibility of VPS provisioning for clients later.
Hardware inside the C6100 will most likely be a small SSD for ESXi and then 8 x 4TB SATA drives in RAID10 for storage.