Reorganize our infrastructure for OpenShift/Kubernetes


Patrik Dufresne

New Member
Oct 19, 2016
Hello,

As I don't have a lot of experience setting up OpenShift and Kubernetes, I'm asking for help here as a way to brainstorm and find creative ways to leverage our existing infrastructure.

As a new initiative to embrace Docker, we have started dockerizing all our software and deploying it on OpenShift/Kubernetes. Currently, our OpenShift infrastructure is really a testing environment: a single node created with "oc cluster up". We are now planning to build a production-ready OpenShift environment, and we are having trouble settling on an appropriate storage solution. Because we are on a tight budget, we want to leverage the hardware already in our possession.

To put it simply: how would you reorganize this hardware to build a multi-master OpenShift cluster?

(A) 3 servers: currently used for virtualization with Proxmox on ZFS
Board: Supermicro X8DTL
Dual CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
Memory: 48GB
Disk: 4x Crucial MX300 275GB SSD (CT275MX300SSD1)

(B) 1 server:
Board: Supermicro X9SCL/X9SCM
CPU: Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz
Memory: 32GB
Disk: 8x WDC 2TB HDD
Disk: 2x 60GB SSD

(C) 1 server:
Board: Supermicro X10SLM-F
CPU: Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz
Memory: 32GB
Disk: 6x 4TB HDD
Disk: 2x 250GB SSD

Out of this hardware, I'm having trouble figuring out a way to provide a reliable and fast shared file system for OpenShift. We have tried GlusterFS, but it doesn't perform well enough on a 1GbE network.

Our current idea is to invest in a 10GbE network and build a GlusterFS or CephFS mesh across the three A servers. With this solution, we would leave out servers B and C.
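
For reference, this is roughly how I understand such a mesh would be exposed to OpenShift, using the in-tree GlusterFS volume plugin. A minimal sketch; the IPs, volume name, and capacity are placeholders, not our real values:

# Endpoints pointing at the three "A" servers on an assumed 10GbE
# storage network (addresses are made up for illustration).
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 10.10.0.1
      - ip: 10.10.0.2
      - ip: 10.10.0.3
    ports:
      - port: 1   # required by the API; the value itself is unused
---
# PV backed by a Gluster volume; pods can mount it read-write from
# multiple nodes at once.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv01
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  glusterfs:
    endpoints: glusterfs-cluster
    path: openshift-vol01   # assumed Gluster volume name
    readOnly: false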
 

Patrick

Administrator
Staff member
Dec 21, 2010
You are not going to like this thought, but I think if you really want performance you need to re-think network and storage. Ceph generally requires quite a few OSDs to reach maximum performance. With 5 nodes on 10GbE you can probably get okay performance. In our hosting cluster, there is a big drop-off between ZFS RAID 1 NVMe SSDs and a Ceph cluster with 7 nodes and 30 SATA/SAS SSDs.

Shared storage with few disks is hard.
 

Patrik Dufresne

New Member
Oct 19, 2016
Yep, I have the feeling GlusterFS or Ceph won't fit our needs, since we don't plan to have more than 3 nodes. As Patrick mentions, there is a huge performance drop with these technologies at small scale. Figuring out the storage for Kubernetes is not that easy... We usually use Proxmox ZFS storage replication when we want our data to be "safe". I'm not sure it's possible to do the same with Kubernetes.
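
The closest equivalent I can imagine is keeping replication in ZFS itself (send/receive) and exposing the dataset to Kubernetes as a plain hostPath persistent volume. A rough sketch, with a made-up path and size:

# Hypothetical PV backed by a local ZFS dataset; replication would
# still happen outside Kubernetes via zfs send/receive.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zfs-local-pv01
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /tank/openshift/pv01   # assumed ZFS dataset mountpoint

The caveat is that hostPath has no node awareness, so any pod using this PV would also need a nodeSelector pinning it to the node that actually holds the dataset.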

Another thought was to buy a direct-attached storage enclosure and connect two servers to it. But that would limit us to only two nodes for OpenShift, and I'm not sure the OpenShift persistent volume scheduler is wise enough to manage that.
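
On that last point: as far as I can tell, the PV binder matches a claim only on capacity and access mode, not on which nodes can physically reach the volume, so a claim like the one below could bind to a DAS-backed PV while the pod still lands on a node without access to it (name and size made up):

# A claim that would bind to any PV of sufficient size with
# ReadWriteOnce access; node reachability plays no part in binding.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce   # single-writer, matching a DAS-backed PV
  resources:
    requests:
      storage: 50Gi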