Best OpenStack in November 2017

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
We have a client that wants OpenStack for storage, VMs and containers, but also needs to fit in about 1.3kW.

I think it's crazy and I'm planning to use Proxmox + Docker instead. Still, I figured I'd ask STH what realistic minimums we would need for OpenStack hosting as of November 2017.

If it is a matter of nodes, the plan is to use Xeon D. We'll probably use 2-3 DP nodes anyway for Proxmox, then a few Xeon D nodes with SSDs for extra cluster witnesses and Ceph.

I wanted to take the pulse on what's easiest to install and maintain today. As you can tell, they have a low budget for everything.
 

nkw

Active Member
Aug 28, 2017
136
48
28
We have a client that wants OpenStack

....

easiest to install and maintain ... low budget
I'm not sure these phrases should go together.

What are they trying to accomplish?

If they want to automate everything using an API or need to have different tenants, then I could see the need for OpenStack.

I just checked one of my Xeon-D boxes: it idles around 50W and goes to around 75W under load, so you could probably squeeze quite a few into 1.3kW if you had to. But for that number of machines (<20), I would think Proxmox + Docker makes more sense if it is just storage + some containers.
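Back-of-the-envelope from those wattage figures (the 20% headroom for switches, PSU losses and load spikes is my own assumption, not measured):

```shell
# Rough node count for the 1.3kW budget, using the ~75W-per-loaded-node
# figure above. The 20% headroom is an assumption, not a measurement.
budget_w=1300
node_w=75
usable_w=$(( budget_w * 80 / 100 ))           # keep ~20% headroom
echo "usable power: ${usable_w} W"            # 1040 W
echo "max nodes:    $(( usable_w / node_w ))" # 13 nodes
```

So even being generous, you're looking at roughly a dozen loaded Xeon-D nodes in that envelope.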

I don't think there is a 'minimum' for OpenStack, but it is a lot to bite off if you don't need the features it offers. If they like the Red Hat ecosystem vs Proxmox, then maybe consider oVirt? I would probably go Proxmox over oVirt unless support (Red Hat) was a critical need.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
OpenStack has a very high overhead requirement to be deployed in a useful way. You need at least 3 hosts dedicated to the OpenStack services (more if you want true HA). You also probably want a good overlay network stack (likely Contrail or Nuage) running, which means you burn another 1-2 hosts for the controllers associated with that. So before you begin adding hosts to support workloads (Nova), you've burned 3 to 7 full servers.

You can collapse some of that, but there are compromises if you do, and for small deployments there is really not much benefit. Unless you are virtualizing across 30 or more hosts, I wouldn't recommend OpenStack. At that scale it can be useful - but below it, the overheads are just too high. Also, OpenStack is big and complex and still somewhat fragile in places. You will want skilled staff to run it (which again argues against it unless the operation is big enough to absorb the overheads).

It's really unlikely that you'll get something useful at 1.3kW.

Assuming it's a small operation, I'd do exactly what you propose - a few Xeon-D nodes, Proxmox + Ceph + (maybe) Docker.

BTW - I am a strong OpenStack advocate when it is used the right way. I run architecture for a very significant OpenStack deployment and it works well for us, but we are now over 7,000 hosts spread over multiple locations across the whole USA. I am painfully aware of the overhead OpenStack brings, and we've had a few failed attempts to deploy it in small sites (single rack, 10-20 hosts, etc.). It just doesn't pay off at that scale.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I would suggest 3x DP nodes, then 2-4 Ceph cluster nodes. Get to 5 or 7 and use mirrored ZFS for any ultra-important data. Small Ceph deployments scare me.
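For the mirrored-ZFS part, the setup is short. Something like this (the pool name `tank` and the device paths are placeholders, not from this thread):

```shell
# Hypothetical two-way ZFS mirror for the ultra-important data.
# 'tank' and the /dev/disk/by-id paths are placeholder values.
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B
zfs set compression=lz4 tank
zpool status tank   # verify both mirror members show ONLINE
```

Using /dev/disk/by-id paths instead of /dev/sdX keeps the pool stable if drive enumeration changes across reboots.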

I think @nkw and @PigLover have good points. That is going to be a tight budget for OpenStack.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
I assume the requirements are almost 100% Linux then?
Going to echo what others have said: stick with a standard virtualisation solution and throw Docker on top if wanted/needed, set up all your auto-provisioning using native tools and done.
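On Proxmox the native provisioning can stay pretty simple, e.g. with the stock `qm` CLI (the VM ID, name and `local-lvm` storage here are placeholder values, just a sketch):

```shell
# Hypothetical VM provisioning with Proxmox's built-in 'qm' tool.
# VM ID 101, the name 'web01' and 'local-lvm' are placeholders.
qm create 101 --name web01 --memory 2048 --cores 2 \
    --net0 virtio,bridge=vmbr0
qm set 101 --scsi0 local-lvm:32   # attach a 32 GB disk on local-lvm
qm start 101
```

Wrap a few of those in a script and you have most of what a small shop would use an OpenStack API for.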

I have played with Ceph and I really want to love it... but I just can't. From what I see, performance is suboptimal and resilience is questionable, especially if it's not set up right on a good number of nodes.
Having said that, I don't know what's better, so I'm still looking...