OpenStack Buildout Advice needed


MACscr

Member
May 4, 2011
Just wanted to see if I could get any feedback on my initial cloud buildout, whether you would do it differently, and any other suggestions. I already have most of the hardware, though I am willing to adjust it a bit to get a better base setup for my initial cloud. I am completely new to OpenStack (I come from a XenSource-based cluster that I am going to be retiring and selling), so I'm hoping to get some ideas from someone who is much more knowledgeable with the OpenStack platform, or just with such buildouts in general. I will be using KVM instead of Xen, and most VMs are used for general web hosting (web servers, email servers, databases, etc.). The "cloud" is used for both private and public purposes, mostly public though. My target audience is SMBs (1 to 50 employees). I know I am trying to do all of this on the cheap and not with the newest hardware, but I am also not trying to cram a lot of VMs onto each compute node either; probably only a dozen at most per node.

So here is what I have for the new OpenStack setup:

Networking:
(1) 1U 48 x 1GbE Layer 2 switch with 4 additional 10GbE ports

Servers:
(1) 1U Intel Celeron 1.8GHz server - used as a pfSense VPN box for external access to the management LAN
(2) 2U quad-node Dell C6100. Each of the 4 nodes has dual Xeon 5520s with 24GB RAM, 4 x 1Gb NICs, 1 x IPMI NIC, and 3 x SATA hot-swap bays (PXE boot?)
(2) 1U dual Xeon 5520 servers with 72GB RAM, 4 x 1Gb NICs, 1 x IPMI (BMC)
(1) 2U storage server: 1 x Xeon 5520 CPU, 48GB RAM, 12 hot-swap bays, SAS backplane, 4 x 1Gb NICs (IPMI shared with one NIC), 2 x 10GbE, 1 x 4-port SAS HBA, 1 x 8-port SAS HBA, 1 x 8-port external SAS
(1) 3U SAS JBOD connected to the storage server, 16-bay SAS hot-swap
(1) 2U backup server: Intel Core 2 Quad Q6400, 4GB RAM, 3ware 9650SE 8-port RAID controller, 8 x SATA hot-swap bays

Hard Drives Currently Available:
(12) 300GB 15K SAS hard drives
(12) 72GB 15K SAS hard drives
(3) 120GB SATA SSDs
(4) 2TB 7200RPM SATA hard drives
(8) 750GB 7200RPM SATA hard drives

My original intent was to do a ZFS iSCSI SAN on the storage server, but now I am thinking I should have more redundancy, and maybe find a way to use all the empty hard drive bays on the Dell C6100s. That's literally a total of 24 unused bays at this point. Unfortunately, it seems like Ceph would use too much CPU/RAM for those nodes to run Ceph and still work as compute nodes. I could possibly get another storage node if needed, since right now I don't really have any redundancy there.
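
To put rough numbers on why I think hyperconverged Ceph on the C6100 nodes is tight, here's the back-of-envelope math I'm working from. The ~1GB of RAM per OSD is just the usual rule of thumb, and the per-VM and host overhead figures are guesses on my part, not measurements:

Code:
# Rough RAM budget for running Ceph OSDs alongside guests on one C6100 node.
NODE_RAM_GB = 24        # each C6100 node has 24GB (see hardware list)
OSDS_PER_NODE = 3       # one OSD per unused SATA bay (3 bays per node)
RAM_PER_OSD_GB = 1      # rule-of-thumb Ceph OSD overhead (assumption)
RAM_PER_VM_GB = 2       # assumed average web-hosting guest (assumption)
HOST_OVERHEAD_GB = 2    # OS + KVM/nova-compute overhead (assumption)

left_for_vms = NODE_RAM_GB - HOST_OVERHEAD_GB - OSDS_PER_NODE * RAM_PER_OSD_GB
vm_capacity = left_for_vms // RAM_PER_VM_GB
print(f"{left_for_vms}GB left for guests -> roughly {vm_capacity} VMs per node")
# -> 19GB left for guests -> roughly 9 VMs per node, short of the ~12 I'd like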

I know I need to provide a lot more information, but I figured I would start here and see what kind of suggestions I could get on how you think I could best use this hardware for OpenStack. How to best do the storage for the VMs is probably what's bugging me the most. As I said, I thought ZFS/iSCSI was going to be great for the storage node, but the more I look into OpenStack, the more I am thinking maybe I should look at a different storage setup. The key needs are performance, stability, and the ability to do live migrations (HA doesn't have to be automated, but would be nice). Thoughts? Ask any questions you like, and I really appreciate your time on this.
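
Since live migration is one of the hard requirements, here's a minimal sketch of what it looks like at the libvirt/KVM level once the compute nodes share storage. The hostnames and guest name are placeholders, and in practice nova drives this for you; I'm only showing it to make clear what the storage backend has to support:

Code:
import libvirt

# Connect to the source and destination hypervisors (hostnames are placeholders).
src = libvirt.open("qemu+ssh://compute01/system")
dst = libvirt.open("qemu+ssh://compute02/system")

dom = src.lookupByName("web-vm-01")  # placeholder guest name

# Live-migrate the running guest. With shared storage (iSCSI/NFS/Ceph) only RAM
# and device state move; without it you would also need block migration
# (libvirt.VIR_MIGRATE_NON_SHARED_DISK), which copies the disks and is much slower.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()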
 

MACscr

Member
May 4, 2011
Just wanted to update this thread with a quick diagram I put together of what I envision the simplified version of the network to be. Do note that I only have 21U in my colo to work with right now, and 20 amps (really only 16 amps usable) at 110V.
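
For reference, the power math behind that usable figure (16 amps is the standard 80% continuous-load derating on a 20 amp circuit; the chassis count comes from the hardware list above):

Code:
# Quick power-budget arithmetic for the colo cab.
VOLTS = 110
CIRCUIT_AMPS = 20
usable_amps = CIRCUIT_AMPS * 0.8       # 80% continuous-load rule -> 16A
budget_watts = VOLTS * usable_amps     # ~1760W for the whole rack

# Chassis from the hardware list: 2x C6100, 2x 1U dual-5520, storage server,
# SAS JBOD, backup server, pfSense box, switch.
chassis = 9
print(f"usable budget: {budget_watts:.0f}W "
      f"(~{budget_watts / chassis:.0f}W average per chassis)")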

 

MACscr

Member
May 4, 2011
My current idea that I am contemplating, with the hardware broken down into specific tasks:

 

mrkrad

Well-Known Member
Oct 13, 2012
For iSCSI you want a dedicated switch. The 2910al is the bottom of the line for iSCSI. No cheap crap, and don't cheap out and use only one switch.

Just my experience; you can disregard it if you want.

Honestly, if you don't need mega redundancy, just do DAS or a SAN (MSA2000/P2000).

MSA60/70s are $99 each nowadays.
 

MACscr

Member
May 4, 2011
I do appreciate your experience and opinion, so thanks for the response!

Well, I already have the hardware for the most part, except the second storage node that you saw me list in the second diagram. I do have a Layer 2 switch that will do about 96Gbps; it's one of those open-source Brocade ones. I haven't really tested it yet, but I think it should be fine, plus the 10GbE uplinks will be nice for the SANs. I do also have a Dell PowerConnect 5424 that I was planning to sell, and I guess I could keep it instead and use it for everything else. Or I could just get a second 48-port switch for the option of redundancy, or simply to separate things out physically. Besides redundancy, is there a reason why I should get a second switch? I don't think I will be pushing its max throughput, and the ports are supposed to do wire speed.
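
For what it's worth, here's the rough math I'm using to convince myself the single switch has headroom; the port counts are from the hardware list, and the per-node traffic figure is just a guess for roughly a dozen web-hosting VMs per node:

Code:
# Sanity check on the quoted ~96Gbps switching capacity.
GBE_PORTS = 48
TEN_GBE_PORTS = 4
quoted_fabric_gbps = 96

# Fully non-blocking would mean every port at line rate in both directions:
non_blocking_gbps = (GBE_PORTS * 1 + TEN_GBE_PORTS * 10) * 2   # 176 Gbps

# Assumed real load: 8 compute nodes each averaging ~1Gbps of combined VM and
# iSCSI traffic (a guess), counted in both directions.
assumed_load_gbps = 8 * 1 * 2                                  # ~16 Gbps

print(f"line rate on every port would need {non_blocking_gbps} Gbps; "
      f"assumed load is ~{assumed_load_gbps} Gbps vs a {quoted_fabric_gbps} Gbps fabric")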