Proxmox storage setup


artbird309

New Member
Feb 19, 2013
Some background: I currently have one NAS/SAN running OmniOS with napp-it, exporting storage over NFS and iSCSI to a 3-node VMware cluster. Something is up with the storage box and I have given up trying to figure it out. It doesn't connect reliably and I can't get above 1Gb, but it works great as a NAS, so I'm not worrying about it and plan to move the VM storage to a hyper-converged setup.

The current VMware setup is 3x Dell R410, each with 6x 1GbE NICs. I'm trying to bring down the yearly subscription costs in my lab and would like to move from VMware to Proxmox. I currently have 4x 1TB WD RE4 drives, 4x 300GB SAS drives, and a few random 500GB drives, so I'm sure I can fill the rest of the bays if I need to.

I'm trying to figure out the best way (or whether it's possible at all) to set up a Proxmox cluster with Ceph that gives me at least 1Gb of storage throughput, preferably more, with this hardware. I am willing to spend up to $150 per host on networking or SSDs if that's what it takes to make this work well.

I'm not sure how best to set up the storage or the networking here. I'm looking for your suggestions before I commit to switching, and on whether I need to invest in additional equipment.
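For a rough sense of why the network tends to dominate here, the sketch below estimates the ceiling on Ceph client write throughput when the public and cluster networks share one link per node. The 3-way replication and single-link assumptions are illustrative, not settings from this thread:

```python
# Back-of-the-envelope ceiling on Ceph client write throughput when the
# public and cluster networks share a single link per node.
# Assumptions (illustrative): replication size 3, full-duplex links,
# and the network being the only bottleneck.

def max_client_write_mb_s(link_gbps: float, replica_count: int) -> float:
    """For each client write, the primary OSD pushes (replica_count - 1)
    copies to the other OSD nodes, so its outbound link carries roughly
    that multiple of the client write rate."""
    wire_mb_s = link_gbps * 1000 / 8        # Gb/s -> MB/s (decimal)
    return wire_mb_s / (replica_count - 1)

print(max_client_write_mb_s(1, 3))   # ~62 MB/s on a single 1GbE link
print(max_client_write_mb_s(10, 3))  # ~625 MB/s on a single 10GbE link
```

So even before the disks enter the picture, a single shared 1GbE link per node would cap replicated writes at well under a full 1Gb connection.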

 

vl1969

Active Member
Feb 5, 2014
Here is the Proxmox Ceph setup wiki: Storage: Ceph - Proxmox VE

I do believe (from everything I have seen so far in my research) that you have all you need to build out a nice Ceph cluster. Your only limitation would be the network, and that needs a lot of consideration; even if you have 6 NICs per node, the bandwidth may not be enough for a smooth run. I am not sure if it is possible to add a 10G network to a running cluster, so do more research, and maybe someone here can chime in with more info.
If you do need 10G, then $150 per node is not enough: you will need at least a 2-port card in each node and hence also a 10G switch.
 

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
With 3 nodes you can still do 10G without a switch, using potentially cheap DACs, as described here:

Full Mesh Network for Ceph Server - Proxmox VE
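The wiki page covers the actual configuration; just to illustrate the idea, here is a tiny sketch of the cabling and addressing for a 3-node mesh. The node names, subnets, and per-link numbering are made up for the example:

```python
# Illustrative only: a 3-node full mesh needs 3 direct links, i.e. two
# 10GbE ports per node and one DAC per node pair, with no switch involved.
# Giving each link its own small subnet keeps routing trivial: traffic for
# a peer always leaves via the port cabled directly to that peer.

from itertools import combinations

NODES = ["pve1", "pve2", "pve3"]        # hypothetical node names

for n, (a, b) in enumerate(combinations(NODES, 2)):
    subnet = f"10.15.{n}.0/24"          # made-up per-link subnet
    print(f"link {n}: {a} <-> {b}  ({subnet}, one DAC)")

# Prints 3 links total, so each node ends up using exactly 2 of its ports.
```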

But I would guess that for good performance and reliability with Ceph, your setup / number of nodes and OSDs is a bit small. An SSD in each node would also help. So USD 150 / node may be a bit tight for decent speed and reliability on Ceph.

For a smaller setup / number of nodes, Gluster on 10G, or even on RDMA with inexpensive ConnectX-2 cards, might be better. Sheepdog could also be worth a try.

Alex

P.S. I have no experience with Ceph yet; I'm too scared off by the requirements regarding the number of nodes/OSDs and the complexity of management.
 

artbird309

New Member
Feb 19, 2013
That's what I was thinking my biggest limiting factor would be: the networking. I was trying to figure out how much it would limit me. Right now I'm fine as long as I can max out a 1Gb connection, but I have read that might not be possible with writes going to 2 nodes, or with EC. I also don't know how that factors in once you start bonding NICs.
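On the bonding question, the usual caveat (assuming standard hash-based bonding such as LACP, which is my assumption rather than anything stated above) is that a bond helps aggregate traffic but not any single stream:

```python
# Hash-based bonding (e.g. LACP with layer2+3 hashing) spreads *flows*
# across member links; any one TCP connection stays on one member link.
# So a 6x 1GbE bond raises aggregate bandwidth, but a single client-to-OSD
# or OSD-to-OSD stream still tops out around 1Gb.

def bond_limits(members: int, link_gbps: float = 1.0) -> dict:
    return {
        "aggregate_gbps": members * link_gbps,   # across many parallel flows
        "single_flow_gbps": link_gbps,           # one flow = one member link
    }

print(bond_limits(6))   # {'aggregate_gbps': 6.0, 'single_flow_gbps': 1.0}
```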

I did see that page about doing a mesh setup, and I would be fine with that, but I'm wondering if I would then start to hit the limits of my spinning disks.

I'm not set on Ceph; it just seems to be the popular one right now, and it looks like it'll do what I want: let me turn off any node for maintenance without losing any services. I'm willing to look at others if they will work better.

What is the recommended node and OSD count? I figured 3 nodes with 4 OSDs each wouldn't be that bad. I'm only looking at using it for VM storage, so I don't need the most reliable system; I plan to back up all my crucial VMs to my NAS.
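For a rough idea of what 3 nodes of replicated Ceph would yield from the drives mentioned earlier, here is a quick calculation; the drive mix and the default 3-way replication are assumptions for illustration:

```python
# Rough usable-capacity estimate for a replicated Ceph pool built from the
# drives mentioned in the thread (filling spare bays with 500GB disks is an
# assumption, as is the default replication size of 3).

raw_tb = 4 * 1.0 + 4 * 0.3 + 4 * 0.5   # 4x 1TB RE4 + 4x 300GB SAS + 4x 500GB fillers
replica_count = 3                       # Ceph default pool size
nearfull_headroom = 0.85                # keep OSDs comfortably below near-full warnings

usable_tb = raw_tb / replica_count * nearfull_headroom
print(f"~{raw_tb:.1f} TB raw -> ~{usable_tb:.1f} TB usable")   # roughly 2 TB usable
```

That is only the capacity side, of course; per-OSD performance and the network still set the speed.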

 

artbird309

New Member
Feb 19, 2013
Quote: "@artbird309 switch + 10gb NICs will help you. You have enough nodes. After network I'd look at journal devices."
I think that's what I'll start with then. I haven't seen any 10Gb switches that would fit my budget, but the mesh setup should work; I just need to look into which NICs and DACs I can get cheap that will work.

 

artbird309

New Member
Feb 19, 2013
I have been looking at the Mellanox ConnectX-2 and Intel X520-DA2 NICs. Is there a huge advantage to getting the Intel NICs? I have seen a few reported issues with the ConnectX-2 under Windows 10/Server 2016, but nothing major that I can't live with or that a firmware change won't fix.

The price difference is pretty large, about $50 from what I am seeing on eBay, so I'm leaning towards the ConnectX-2 with DACs.

 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
For me the ConnectX-2 worked fine for everything I used it for; it might need a settings or firmware change on the card, but it worked... I've since replaced them with ConnectX-3 cards for 40GbE, and I still use the Intel 2-port cards in certain servers that don't need the 40G.
 

Net-Runner

Member
Feb 25, 2016
Ceph isn't great for small deployments. For 2 or 3 nodes you're better off with a VSAN-style approach (data mirroring). There are multiple free options that can be deployed on top of Proxmox as storage-controller virtual machines and present a mirrored storage pool back to the hypervisor over iSCSI/SMB/NFS/whatever.
Try searching for HPE VSA, EMC Unity, or StarWind.
 

artbird309

New Member
Feb 19, 2013
I'm seeing that more now: I would need more nodes to use Ceph well. In my small test on a spare server, with only one OSD and a 1Gb NIC, I was getting about half the maximum throughput of the drive.
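One possible explanation for the "about half the max of the drive" result, assuming the test used the default FileStore backend with the journal on the same spindle (an assumption on my part, and it ties back to the journal-device suggestion above):

```python
# With FileStore and the journal co-located on the OSD's own disk, every
# write lands on that spindle twice (journal first, then the data store),
# so sustained write throughput is roughly halved.

drive_seq_write_mb_s = 140      # ballpark for a 1TB 7200rpm drive, not a measured figure
journal_on_same_disk = True     # assumed default single-disk OSD setup

effective = drive_seq_write_mb_s / (2 if journal_on_same_disk else 1)
print(f"expected sustained writes: ~{effective:.0f} MB/s")   # ~70 MB/s, about half

# Putting the journal on an SSD lets the spinner take each write only once,
# which is what the earlier "journal devices" comment is getting at.
```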

I'm looking into the VSAN products; I hadn't thought about them still being an option once I moved away from VMware and Hyper-V until now. It looks like I could use HPE VSA or StarWind, while EMC Unity appears to be VMware-only.

 

Net-Runner

Member
Feb 25, 2016
I have tried HPE VSA but couldn't get any reasonable storage performance out of it. StarWind's main production scenarios are Hyper-V and ESXi, but I have tested it on top of KVM and XenServer and it works. It requires a Windows license inside the storage-controller virtual machine, which might be a downside for some. They released a Linux VSA recently, but it is still a beta; however, the forums say it is being intensively developed right now.
 