Proxmox Cluster shared storage model?


cookiesowns

Active Member
Feb 12, 2016
Hey All,

So I'm attempting to build a multi-node cluster on Proxmox. I plan on using shared storage for the most part, but some nodes will have local SSD storage in RAID-1/RAID-10.

The biggest problem right now is that I either lose SATA 6G ports or go with RAID and lose 10G networking. Does that pretty much rule out Ceph?

On top of that, most of my nodes only have 2x SATA 6G ports, and being 1U they max out at 4 drives.

My I/O needs aren't very heavy, but having at least decent IOPS per VM would be really nice. I've been looking at the SFF C6220 setups, but it appears I'd be limited in SATA 6G ports there as well, though that does solve my 10G networking limitation.

I've also been looking at adding a 24x 2.5" shelf to my existing ZFS storage infrastructure and exporting that storage to the Proxmox cluster over NFS, but I'm not sure what the performance implications are.
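
For what it's worth, exporting a ZFS dataset over NFS and attaching it to Proxmox as shared storage is only a couple of commands. A rough sketch, assuming a made-up dataset (tank/proxmox), storage ID (zfs-nfs) and addresses; yours will differ:

    # On the ZFS box: export the dataset over NFS
    zfs set sharenfs="rw=@10.0.0.0/24,no_root_squash" tank/proxmox

    # On any Proxmox node: register the export as cluster-wide shared storage
    pvesm add nfs zfs-nfs --server 10.0.0.50 --export /tank/proxmox --content images,rootdir --options vers=3

Whether the performance holds up depends mostly on the 10G path and on how sync writes (SLOG) are handled on the ZFS side.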

TL;DR: What shared storage models have you guys adopted for Proxmox with 12-14 nodes?
 

MiniKnight

Well-Known Member
Mar 30, 2012
Well, you could get an LSI HBA for the C6220 and have your 6G ports. Ceph and Proxmox work great together. You can also share ZFS storage easily, but that's not as good since it isn't HA.

C6220 SFF setup:
SSD1 6G: Ceph fast
SSD2 6G: Ceph fast
SSD3: Boot (Intel 320)
HDD1: Ceph slow/bulk
HDD2: Ceph slow/bulk
HDD3: Ceph slow/bulk

HDDs: Seagate 4TB 2.5" SATA drives for 61% off by shucking externals
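
Splitting the SSDs and HDDs into separate "fast" and "bulk" pools is straightforward on recent Ceph releases using device classes. A sketch only; the pool names, PG counts and rule names are placeholders:

    # One CRUSH rule per device class
    ceph osd crush rule create-replicated fast-rule default host ssd
    ceph osd crush rule create-replicated bulk-rule default host hdd

    # One replicated pool on each rule
    ceph osd pool create ceph-fast 128 128 replicated fast-rule
    ceph osd pool create ceph-bulk 128 128 replicated bulk-rule

VM disks that need IOPS go on the fast pool, everything else on bulk.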


You can swap an HDD for another SSD to use as a journal device or as a cache for the slow pool. Another option is to drop HDD1 and add a fourth SSD on the 3G port: get large, cheap SSDs, use them for ZFS RAID-1 storage, and share whatever capacity is left over beyond boot.
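
If you go the journal/DB-on-SSD route, this is roughly what it looks like from the CLI; the device names are placeholders and the exact flags differ between older filestore (journal) and newer Bluestore (DB/WAL) releases:

    # Proxmox wrapper: OSD on the spinner, Bluestore DB/WAL on the SSD
    pveceph osd create /dev/sdX --db_dev /dev/sdY

    # Or with plain ceph-volume
    ceph-volume lvm create --data /dev/sdX --block.db /dev/sdY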
 

cookiesowns

Active Member
Feb 12, 2016
MiniKnight said:
    Buying today, I would get the HP instead of the C6220.
    HP S6500 and SL230s Gen8 - CRAZY deal
    And for faster I/O: FlexLOM IB/EN cards for the HP s6500/s230 $45

    Or single servers, which are cheap.

    You can also make single-CPU nodes an option. I found that Ceph and Proxmox clustering worked much better once I moved to more nodes than when I tried with only a few.

How many nodes did you scale up to before Ceph/Proxmox clustering started working better for you?

My current approach is this:

8 node Ceph/Proxmox cluster

4x E5 v1 nodes with 8x SSDs for OSDs and 2x 10G networking, plus a dedicated bonded 1G cluster network for quorum.

4x E5 v3/v4 nodes with some local storage, used primarily as compute nodes with their storage on Ceph: bonded 2x 10G for storage, 2x 1G for VM traffic / quorum.
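
For reference, splitting Ceph's public (client) traffic from its cluster (replication) traffic is just two lines in ceph.conf; the subnets below are examples only:

    # /etc/pve/ceph.conf -- example subnets
    [global]
        public_network  = 10.10.10.0/24   # Ceph client/front traffic
        cluster_network = 10.10.20.0/24   # OSD replication/backfill traffic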

Another approach would be to drop from 8x SSDs down to 4x SSDs, keep 4 dedicated Ceph nodes, and put the remaining 16 drives in the compute nodes (4 drives each).