How would you set up this Proxmox storage?


gbeirn

Member
I have three servers, let's call them server1, server2 and server3. RAM, CPUs, everything but the storage is identical.

Currently server1 has a RAID 10 of 8 SSDs. This is a development server that hosts in-progress VMs and LXCs until they are ready to move to production.

Server2 and server3 each have a single SSD (300GB) using DRBD for VMs. They also each have 2x64GB SSDs mirrored for LXC containers. These are the production servers.

Would you set it up any differently? The virtual machines and containers are internal services (DNS, DHCP, etc.) and web services (websites, web applications, etc.).

I'm confident with how I have development set up. Production I'm not so sure about.

The idea is to keep production highly available. Assume redundant power supplies, circuits, UPS and regular backups (which we have).

Hopefully this explains clearly the setup. Any feedback is appreciated.
 

T_Minus

Build. Break. Fix. Repeat
I'm not sure of the demands of your software, but developing on an 8x SSD setup and then running production on a single SSD is a huge difference in available performance too. Why does the development server get an 8x SSD setup while production gets only a single SSD per host, or at most a mirrored pair, across 2 hosts? Why do the production servers have a single 300GB drive and not at least RAID 1?

I personally would run 4+ SSDs per host in RAID 10, either hardware or software based.

You're already running 3x SSDs per host, so sell off the small, old, low-performing drives and use the money to get 4x matching SSDs.

With the current SSD deals going around, if you don't have a write-intensive workload I would go with those S3500 deals popping up! You should be able to get 300GB, 480GB or 800GB drives at good prices on eBay. If you watch eBay, are not in a rush, and want capacity, the Samsung Ascend 960GB goes for $220-$260 each, new.
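For the software-based route, a minimal sketch of the RAID 10 equivalent (striped mirrors) with ZFS, which Proxmox supports out of the box; the pool name and device paths below are placeholders, so substitute your own /dev/disk/by-id paths:

    # Striped mirrors (RAID 10 equivalent) from four SSDs; ashift=12 aligns
    # writes to 4K sectors. 'tank' and /dev/sd[a-d] are placeholder names.
    zpool create -o ashift=12 tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd

    # Register the pool as VM/container storage in Proxmox
    pvesm add zfspool tank-zfs --pool tank --content images,rootdir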
 

gbeirn

Member
Thanks for the feedback!

The current workflow is several projects being worked on at the same time on the development server.

Some projects get moved forward, some are stalled, some get canned altogether. The drives are older-model SSDs and push roughly 600MB/s read and 400MB/s write.

The production projects are all relatively low demand projects but they need to be up all the time. The SSDs in there are S3500s and Micron drives.

Probably 14 hours of the day they sit idle, and the other 10 hours they process data from clients across the internet. Here they are limited by the clients' upload speeds; a particular client may push at most 10-15mb/s. There are, or will be, approximately 12 of these.

Scaling out to hundreds of these web applications would of course require a different setup, but we aren't there yet, and that's a good problem to have down the line.

So I guess in summary: development has the heavier load, and faster compile times mean more work gets done. Production's utmost concerns are uptime and reliability.

Does that give more helpful information? Anything you'd change?

Thanks!
 

PigLover

Moderator
What's your networking between the 3 nodes? How big are the 8 SSDs on server 1?

If it were me, and if you have the network to support it (10GbE), I'd redistribute the SSDs to 3 per host and run Ceph across the 3 servers (3 or 4 OSDs per node; 3 MONs, 1 per node). You'll get less performance than your RAID 10, but the production servers will get better performance, and you'll get a level of resilience you don't have today (any one server can be down and you still have full access to all of the data).

Ceph is a PITA to set up - but the Proxmox Ceph tools make simple deployments like this a breeze to set up and operate.
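Roughly, with the Proxmox tooling it boils down to something like this (a sketch only; the storage network subnet and the /dev/sdX names are placeholders for whatever your hosts actually have):

    # On every node: pull in the Ceph packages
    pveceph install

    # On the first node only: initialise, pointing Ceph at the dedicated
    # storage network (10.10.10.0/24 is a placeholder subnet)
    pveceph init --network 10.10.10.0/24

    # On each node: one monitor per node, then one OSD per spare SSD
    pveceph createmon
    pveceph createosd /dev/sdb
    pveceph createosd /dev/sdc
    pveceph createosd /dev/sdd

After that you create a pool and add it as RBD storage for the VMs and containers from the GUI.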
 

whitey

Moderator
Ceph is a PITA to set up - but the Proxmox Ceph tools make simple deployments like this a breeze to set up and operate.
'Tis not a PITA if you want a vanilla/from-scratch roll (and no need for Proxmox) if you have something like 'this' cheatsheet/runbook/work instruction handy, soup to nuts! :-D
 


gbeirn

Member
I don't have the network infrastructure to support it currently (only 1GbE), but it looks like it is possible to run the Ceph network on 10GbE links without a 10GbE switch (I'm forgetting the term).

The 8 SSDs in server1 are 80GB.
 
Jan 30, 2016
If you put a dual-port 10GbE NIC in each server, you can create a network easily enough for Ceph. It's what I did before I picked up a switch.


 

PigLover

Moderator
The Proxmox team even has a wiki on it. You just need a 2-port 10GbE card for each server and 3 cables (either 3 SFP+ DAC cables, or 6 SFP+ modules and 3 short fiber cables).

Note that my reading of the wiki suggests it should work fine; I have not personally tested or tried this particular config.

Full Mesh Network for Ceph Server - Proxmox VE
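The routed variant in that wiki amounts to giving each node one address on the mesh subnet and pinning a host route out of each port toward its directly attached neighbour. A rough sketch of the idea for one node (interface names and addresses are placeholders rather than the wiki's exact listing; node2 and node3 mirror it with their own IPs):

    # /etc/network/interfaces on node1 (10.15.15.50)
    # eno1 cables directly to node2, eno2 cables directly to node3
    auto eno1
    iface eno1 inet static
            address 10.15.15.50
            netmask 255.255.255.0
            up ip route add 10.15.15.51/32 dev eno1
            down ip route del 10.15.15.51/32

    auto eno2
    iface eno2 inet static
            address 10.15.15.50
            netmask 255.255.255.0
            up ip route add 10.15.15.52/32 dev eno2
            down ip route del 10.15.15.52/32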
 
Jan 30, 2016
That's the guide I followed; however, if memory serves, I had to add more routes than the wiki shows to get it going.
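If it doesn't come up, a quick sanity check on each node is to confirm what actually landed in the routing table and that both neighbours answer over the mesh (same placeholder addresses as the sketch above):

    # Should show a /32 host route per mesh port
    ip route show

    # Each node should reach both of its neighbours directly
    ping -c 3 10.15.15.51
    ping -c 3 10.15.15.52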


 

_alex

Active Member
That's the guide I followed; however, if memory serves, I had to add more routes than the wiki shows to get it going.
Maybe easier, but with some drawbacks regarding optimal routes, would be enabling STP on the bridges plus packet forwarding. This wouldn't require any routes.
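In /etc/network/interfaces terms, roughly this on each node (bridge and port names are placeholders, and the forwarding sysctl is only needed if you actually route between segments):

    # Bridge both mesh ports into one bridge and let STP break the loop;
    # traffic for a non-adjacent node is then switched through the middle box
    auto vmbr1
    iface vmbr1 inet static
            address 10.15.15.50
            netmask 255.255.255.0
            bridge_ports eno1 eno2
            bridge_stp on
            bridge_fd 2

    # Optional, in /etc/sysctl.conf:
    # net.ipv4.ip_forward = 1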
 

PigLover

Moderator
Maybe easier, but with some drawbacks regarding optimal routes, would be enabling STP on the bridges plus packet forwarding. This wouldn't require any routes.
True - but the downside of that is you will get kernel-level forwarding of packets from machine A to C via B, etc. While perhaps not a big deal at 1GbE, when operating at 10GbE the extra latency you introduce can cut your throughput in half or more.
 

_alex

Active Member
True - but the downside of that is you will get kernel-level forwarding of packets from machine A to C via B, etc. While perhaps not a big deal at 1GbE, when operating at 10GbE the extra latency you introduce can cut your throughput in half or more.
Exactly, this can happen between any two nodes, but it's no hassle to put the right routes in, and (in theory) it should also work with 3+ nodes and only 2 interfaces. I had this running (before configuring routes) on a 3-node cluster for inter-VM / live-migration traffic over 2x 10GbE, and performance was still 'ok'.
 

gbeirn

Member
Thanks everyone. I've been reading up on Ceph for the past few days. I think I'm going to skip Ceph this go-round; I'd want to set up a test platform for it before rolling it out.

Perhaps I'll nest a Proxmox Ceph cluster inside this Proxmox setup.
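Probably something like three small VMs, each with a spare virtual disk to act as an OSD (the VM IDs, storage name, disk sizes and ISO name below are placeholders, and nested KVM has to be enabled on the host via the kvm-intel/kvm-amd 'nested' module option):

    # Three nested Proxmox test nodes with an extra blank disk each for Ceph
    for id in 201 202 203; do
        qm create $id --name ceph-test-$id --memory 4096 --cores 2 \
            --net0 virtio,bridge=vmbr0 \
            --scsi0 local-lvm:16 \
            --scsi1 local-lvm:16 \
            --cdrom local:iso/proxmox-ve.iso
    done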
 
Jan 30, 2016
Thanks everyone. I've been reading up on Ceph for the past few days. I think I'm going to skip Ceph this go-round; I'd want to set up a test platform for it before rolling it out.

Perhaps I'll nest a Proxmox Ceph cluster inside this Proxmox setup.
My first experience of Ceph was in VMware Workstation on my desktop. It's a good way to get a feel for it.

