OpenStack lab build guidance


fractal

Active Member
Jun 7, 2016
I hope this is the right section -- OpenStack questions seem to be spread across various sections of this site.

I am looking to set up a small openstack cluster here at home. It is mostly to learn. I was thinking of using multiple physical nodes to get a feel for where the bottlenecks are rather than a one box virtualized node. But, I could probably be convinced otherwise.

By small, I mean maybe a dozen instances with 4-16 GB RAM, 2-4 cores, and a few hundred GB of disk each. It is small enough I could probably stuff it all on a single VMware server if I tried.
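To put those numbers in perspective, here is a rough sketch of the worst-case totals they imply, taking the high end of each range (the 300 GB per-instance disk figure is my reading of "a few hundred GB"):

```python
# Back-of-envelope cluster sizing from the figures above.
# All per-instance numbers are assumed upper bounds, not measurements.
instances = 12
ram_gb_each = 16      # high end of the 4-16 GB range
vcpus_each = 4        # high end of the 2-4 core range
disk_gb_each = 300    # "a few hundred GB", assumed

total_ram_gb = instances * ram_gb_each
total_vcpus = instances * vcpus_each
total_disk_tb = instances * disk_gb_each / 1000

print(total_ram_gb, total_vcpus, total_disk_tb)  # 192 48 3.6
```

With no overcommit that is about 192 GB RAM and 48 vCPUs across the compute nodes, which really is within reach of a single large host, as the post says.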

I am hoping I can leverage some/all of the equipment I already own to constrain costs. I would love to have a bunch of Xeon D nodes and multiple 10G networks but that's well and truly out of the budget.

I have a supermicro 6026TT-HDTRF with two nodes each running a pair of X5670's that I figured would make pretty dandy compute nodes. They are busy crunching BOINC cobbles for me but I can repurpose them.

My first real stopper was figuring out what to use as a controller node and/or a network node. I do not expect much external traffic but can dedicate a machine to the network node if it will help visualize a larger network. The documentation is pretty vague about how processor- and memory-intensive the controller and network nodes are. Can I get away with something like a Celeron for those, or are dual Xeons more appropriate? I have some of each, and a few grades in between, I can use for this project.

Is an x9scl with 16-32GB RAM and an i3 sufficient as a controller node for an OpenStack cluster of this size, or should I plan on an x8dte? I have a modest selection of Sandy Bridge, Ivy Bridge and Westmere gear that can be easily repurposed for the duration of this fun.

I have a similar question for a network node.

I expect that storage nodes will be a completely different game when I get around to them.

Please be gentle with me ;)
 

Patrick

Administrator
Staff member
Dec 21, 2010
What is your total size looking like it will be? Are you thinking 3 nodes, 5 nodes, 7 nodes, 9 nodes or more?

I would actually lean toward doing OpenStack with slightly-older-than-bleeding-edge hardware. Sometimes the provisioning tools do not have the newest NIC drivers and the like. I would aim for server NICs with whatever you do go with.
 

fractal

Thinking about it, I probably had in mind somewhere between 5 and 10 nodes, but that is not set in stone.

My initial thought was to use supermicro LGA 1366 motherboards where possible, primarily because they are cheap, processors are cheap, memory is cheap, and they are new enough to have modern chipsets. It helps that I have some. I then had the idea of using LGA 1155 motherboards to save a bit of power where the reduced core count and memory capacity would not be a bottleneck. I would love to use LGA 2011 boards but I can buy an entire LGA 1366 system for the price of an LGA 2011 motherboard.

That is a good point about the NICs. I prefer Intel for gigabit when I have to add cards. I would go with ConnectX-2 cards for 10G, or X520s if they turned out to be worth the extra money.
 

Michael Hall

Member
Oct 9, 2015
Something else to consider...

I'm slowly gathering the parts for a fairly low-power, 6-12 node, OpenStack cluster using Athlon 5350-based nodes. They're only quad-core 2.05 GHz, but they're dirt cheap (board and CPU for under C$100, brand new) and my test node idles at around 32W, and pulls about 50W under full load, including a quad gig NIC and a temporary SSD. (I've got 6 of the 200GB Toshibas from the deal thread en-route.) That's running off a 430W 80+ Bronze PSU. Idle power was a couple of watts less running from a PicoPSU, but the crappy "12V" adapter I was using was only actually putting out 10.75V, so it was dying under load.
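The per-node numbers above can be turned into a quick power-budget check against the supply mentioned below; taking the worst case of the 6-12 node range:

```python
# Rough power-budget check using the per-node measurements above.
nodes = 12            # worst case of the 6-12 node range
watts_idle = 32       # measured idle, incl. quad gig NIC and SSD
watts_load = 50       # measured full load

idle_total = nodes * watts_idle   # 384 W
load_total = nodes * watts_load   # 600 W

# Headroom of a 600 W vs. 720 W 12 V supply at full load.
for psu in (600, 720):
    headroom_pct = (psu - load_total) / load_total * 100
    print(psu, round(headroom_pct))  # 600 0, then 720 20
```

So at 12 nodes a 600 W supply has essentially zero margin at full load, while 720 W leaves about 20%; at 6 nodes either is comfortable.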

The nodes will go into an Ikea Helmer drawer unit with custom laser-cut acrylic drawer fronts and backs. My aim, as long as cooling is adequate, is 2 nodes/drawer. Hopefully a couple of ducted 80 mm fans--1 blowing over the CPUs & VRMs and 1 over the NICs, which get quite hot--will be able to keep the temps under control without being too noisy. Unfortunately, the RAM runs crosswise, like most desktop boards, so it'll block some of the airflow over the CPU coolers.

The whole cluster will run off a 600-720W 12V power supply, with a PicoPSU in each node. I'm also designing my own management cards, with serial-over-LAN, fan monitoring and power control.
 

fractal

I have been playing with devstack a bit in a VM with 4 cores and 8 gig of ram and it works 'ok'. The UI is a little slow but I am slowly getting the hang of things.
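For anyone else trying devstack in a VM: the minimal configuration is a small local.conf in the devstack checkout, along these lines (the passwords and host IP here are placeholders; devstack picks sensible defaults for anything you leave out):

```ini
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=10.0.0.10
```

Then running ./stack.sh from the same directory builds out the single-node environment.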

Hardware requirements appear to be very modest. Looking at 2.3. Host Requirements, I get the feeling that pretty much any 64-bit platform will work for the controller and the network node, and that the requirements for the compute and storage nodes are application-dependent.

I am looking at the modestly priced supermicro x9sci-ln4f for a network node. I have a chassis, processors and memory that will work with it. Are there any better choices in the < 100 dollar range for a network node?

Likewise, I am looking at a supermicro x9scl-f for the controller node. It would use the same chassis, processors and memory as the x9sci-ln4f, of which I have a selection. I have several of these in use in other projects, with one spare.

I am not opposed to buying better gear, but I am homing in on the devil I know instead of the one I don't. This stack will never exceed one rack and is unlikely to exceed half a rack, so from what I have read I won't need more than a dual core plus a few gigs of RAM for either of those nodes.
 

Dean

Member
Jun 18, 2015
I have been playing with OpenStack for a little while in an ESXi 6.0 environment.

I currently have a 12-core / 64GB setup with local storage. I have experimented with conjure-up for OpenStack in a single VM with 4 cores / 32GB of RAM. 32GB and up for an entire OpenStack environment seems to be the sweet spot to start from. It deploys OpenStack in its entirety and is usable to an extent; conjure-up (to a single VM) utilizes LXD containers, so it has its limitations, e.g. it currently can't utilize an .iso to deploy an instance (VM). The environment functions very well.

Before that, I had experimented with MAAS, which can be done virtually. My next test is conjure-up with MAAS on ESXi, which also deploys OpenStack, utilizing the nodes you have set up virtually and spreading OpenStack services across all of them. This should open the door to the functionality I need with Windows images in OpenStack.

Point is: start and test. Get a feel for what works and what doesn't; then you will have a much better idea of any specific hardware needs and expectations.

my $0.02..
 