Just wanted to see if I could get some feedback on my initial cloud buildout: whether you think I should do it differently, plus any other suggestions. I already have most of the hardware, though I'm willing to adjust it a bit to get a better base for the initial cloud. I'm completely new to OpenStack (I come from a XenSource-based cluster that I'm retiring and selling), so I'm hoping for ideas from someone much more knowledgeable with the OpenStack platform, or with buildouts like this in general. I'll be using KVM instead of Xen, and most VMs are used for general web hosting (web servers, email servers, databases, etc.). The "cloud" serves both private and public purposes, mostly public; my target audience is SMBs (1 to 50 employees). I know I'm trying to do all of this on the cheap and not with the newest hardware, but I'm also not trying to cram a lot of VMs onto each compute node: probably a dozen at most per node.
So here is what I have for the newer OpenStack setup:
Networking:
(1) 1U 48 x 1GbE Layer 2 Switch with 4 x 10GbE additional ports
Servers:
(1) 1U Intel Celeron 1.8GHz Server - used as a VPN/pfSense box for external access to the management LAN
(2) 2U Quad-Node Dell C6100. Each of the 4 nodes has dual Xeon 5520s, 24GB RAM, 4 x 1Gb NICs, 1 x IPMI NIC, 3 x SATA hotswap bays (PXE boot?)
(2) 1U Dual Xeon 5520's with 72GB Ram, 4 x 1Gb nics, 1 x IPMI (BMC)
(1) 2U Storage Server, 1 x Xeon 5520 CPU, 48GB Ram, 12 Hotswap bays, SAS Backplane, 4 x 1Gb nics (ipmi shared with 1 nic), 2 x 10GbE, 1 x 4 port SAS HBA, 1 x 8 Port SAS HBA, 1 x 8 Port SAS External
(1) 3U SAS JBOD connected to Storage Server, 16 bay SAS hotswap
(1) 2U Backup Server, Intel Core2Quad Q6400, 4gb Ram, 3ware 9650se 8 Port Raid Controller, 8 x SATA Hotswap
Hard Drives Currently Available:
(12) 300GB SAS 15K hard drives
(12) 72GB SAS 15k Hard drives
(3) 120GB SATA SSD's
(4) 2TB 7200 SATA Hard Drives
(8) 750GB 7200 SATA Hard Drives
My original intent was to build a ZFS iSCSI SAN on the storage server, but now I am thinking I should have more redundancy and maybe find a way to use all the empty hard drive bays on the Dell C6100s. That's literally a total of 24 unused bays at this point. Unfortunately, it seems like Ceph would use too much CPU/RAM for those nodes to run Ceph and also serve as compute nodes. I could possibly get another storage node if needed, since right now I don't really have any redundancy there.
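To put rough numbers on the hyper-converged Ceph concern, here is a back-of-envelope sketch. The ~1 GB RAM and ~1 core per OSD figure is a common planning rule of thumb (not a measurement), and the per-VM RAM and host overhead are assumptions I picked for small web-hosting guests:

```python
# Feasibility sketch: Ceph OSDs + KVM guests on one C6100 node.
# All sizing constants below are assumptions / rules of thumb, not specs.

NODE_RAM_GB = 24        # per C6100 node, from the hardware list
NODE_CORES = 8          # dual quad-core Xeon 5520 (hyper-threading ignored)
OSDS_PER_NODE = 3       # one OSD per SATA hotswap bay
VM_COUNT = 12           # "a dozen at most per node"
RAM_PER_VM_GB = 1.5     # assumed small web/email/db guests

ceph_ram = OSDS_PER_NODE * 1.0   # ~1 GB RAM per OSD daemon (rule of thumb)
host_overhead_gb = 2.0           # OS + libvirt/KVM overhead (assumed)
vm_ram = VM_COUNT * RAM_PER_VM_GB

leftover = NODE_RAM_GB - ceph_ram - host_overhead_gb - vm_ram
print(f"RAM left after Ceph + {VM_COUNT} VMs: {leftover:.1f} GB")
```

With these assumptions there is only about 1 GB of headroom per node, which supports the worry that doubling the C6100s as Ceph+compute nodes would be tight; fewer VMs per node or more RAM would change the picture.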
I know I need to provide a lot more information, but I figured I would start here and see what kind of suggestions I could get on how you think I could best use this hardware for OpenStack. How best to do the storage for the VMs is probably what's bugging me the most. As I said, I thought ZFS/iSCSI was going to be great for the storage node, but the more I look into OpenStack, the more I am thinking I should look at a different storage setup. The key needs are performance, stability, and the ability to do live migrations (HA doesn't have to be automated, but would be nice). Thoughts? Ask any questions you like, and I really appreciate your time on this.
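For the live-migration requirement with KVM/libvirt, the usual prerequisites are instance disks on shared storage (e.g. Cinder volumes from the storage node) and libvirtd on each compute node configured to accept migrations. A hedged nova.conf sketch of the kind of settings involved (exact option names and sections vary by OpenStack release, and `qemu+tcp` assumes libvirtd is set to listen on TCP in libvirtd.conf):

```ini
# /etc/nova/nova.conf (fragment, per compute node) - names vary by release
[libvirt]
# Peer-to-peer live migration over the management network.
live_migration_uri = qemu+tcp://%s/system
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
```

With volumes served from the storage node, only guest memory state moves during migration; without shared storage you would need block migration, which copies disks over the 1GbE links and is much slower.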