
How would you set up this Proxmox storage?

Discussion in 'Linux Admins, Storage and Virtualization' started by gbeirn, Feb 15, 2017.

  1. gbeirn

    gbeirn Member

    Joined:
    Jun 23, 2016
    Messages:
    66
    Likes Received:
    13
    I have three identical servers; let's call them server1, server2 and server3. RAM, CPUs - everything except the storage is identical.

    Currently server1 has a RAID 10 of 8 SSDs. This is a development server that hosts in-progress VMs and LXCs until they are ready to move to production.

    Server2 and server3 each have a single SSD (300GB) using DRBD for VMs. They also each have 2x64GB SSDs mirrored for LXC containers. These are the production servers.
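
    For anyone not familiar with DRBD, a minimal resource definition for a two-node pair like this looks roughly like the following (hostnames, IP addresses and device paths are placeholders, not our exact config):

    # /etc/drbd.d/r0.res - minimal sketch; protocol C = synchronous replication
    resource r0 {
        protocol C;
        device    /dev/drbd0;
        disk      /dev/sdb1;          # the 300GB SSD on each node
        meta-disk internal;
        on server2 {
            address 10.0.0.2:7789;
        }
        on server3 {
            address 10.0.0.3:7789;
        }
    }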

    Would you set it up any differently? The virtual machines and containers are internal services (DNS, DHCP, etc.) and web services (websites, web applications, etc.).

    I'm confident with how I have development set up. Production I'm not so sure about.

    The idea is to keep production highly available. Assume redundant power supplies, circuits, UPS and regular backups (which we have).

    Hopefully this explains the setup clearly. Any feedback is appreciated.
     
    #1
  2. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    5,101
    Likes Received:
    927
    I'm not sure what the demands of your software are, but developing on an 8x SSD setup and then running production on a single SSD is a huge difference in available performance too. Why does the development server get an 8x SSD setup while production is only a single SSD per host, or at most a mirrored pair, across 2 hosts? Why do the production servers have a single 300GB drive and not at least RAID 1?

    I personally would run 4+ SSDs per host in RAID 10, either hardware or software based.
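
    If you go the software route, a 4-SSD RAID 10 is only a few commands with mdadm (device names below are placeholders - double-check them against lsblk before doing anything destructive):

    # create the array, then persist it so it assembles at boot (Debian/Proxmox)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0                                # or use it as an LVM PV instead
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u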

    You're already running 3x SSDs per host, so sell off the small, old, low-performing drives and use the $ to get 4x matching SSDs.

    With the current deals on SSDs going around, if you don't have a write-intensive workload I would go with those S3500 deals popping up! You should be able to find 300GB, 480GB or 800GB deals on eBay. If you watch eBay, are not in a rush and want capacity, the Samsung Ascend 960GB drives go for $220-$260 each, new.
     
    #2
    Patrick likes this.
  3. gbeirn

    gbeirn Member

    Joined:
    Jun 23, 2016
    Messages:
    66
    Likes Received:
    13
    Thanks for the feedback!

    The current workflow has several projects being worked on at the same time on the development server.

    Some projects get moved on, some are stalled, and some get canned altogether. The drives are older-model SSDs and push about 600/400 MB/s read/write.

    The production projects are all relatively low demand projects but they need to be up all the time. The SSDs in there are S3500s and Micron drives.

    Probably 14 hours of the day they sit idle not doing anything, and the other 10 hours they process data from clients across the internet. There they are limited by the clients' upload speeds; a particular client may push at most 10-15 mb/s. There are, or will be, approximately 12 or so of these.

    Scaling out to hundreds of these web applications would of course require a different setup, but we aren't there yet, and that's a good problem to have down the line.

    So I guess in summary: development has the heavier load, and faster compile times mean more work gets done. Production has uptime and reliability as its utmost concerns.

    Does that give more helpful information? Anything you'd change?

    Thanks!
     
    #3
  4. PigLover

    PigLover Moderator

    Joined:
    Jan 26, 2011
    Messages:
    2,352
    Likes Received:
    829
    What's your networking between the 3 nodes? How big are the 8 SSDs on server 1?

    If it were me, and if you have the network to support it (10GbE), I'd spread the SSDs out to 3 per host and run Ceph across the 3 servers (3 or 4 OSDs per node, 3 MONs, 1 per node). You'll get less performance than your RAID 10, but the production servers will get better performance than they have now. And you'll get a level of resilience you don't have today (any 1 server can be down and you still have full access to all of the data).

    Ceph is a PITA to set up - but the Proxmox Ceph tools make simple deployments like this a breeze to set up and operate.
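
    Roughly, the whole pveceph workflow for a small 3-node cluster like this boils down to a few commands (the network and device names are placeholders, and the exact subcommand names vary a bit between PVE versions):

    pveceph install                        # on every node: installs the Ceph packages
    pveceph init --network 10.15.15.0/24   # once, on one node: writes the cluster ceph.conf
    pveceph createmon                      # on each of the 3 nodes: one monitor per node
    pveceph createosd /dev/sdb             # per SSD, per node: one OSD per disk

    After that it's just a matter of creating a pool and adding it as RBD storage in the GUI.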
     
    #4
    whitey likes this.
  5. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,216
    Likes Received:
    676
    'Tis not a PITA if you desire a vanilla/from-scratch roll with no need for Proxmox, as long as you have something like 'this' cheatsheet/runbook/work instruction, 'soup to nuts', handy! :-D
     

    Attached Files:

    #5
    Last edited: Feb 15, 2017
    T_Minus likes this.
  6. gbeirn

    gbeirn Member

    Joined:
    Jun 23, 2016
    Messages:
    66
    Likes Received:
    13
    I don't have the network infrastructure to support it currently (only 1GbE), but it looks like it is possible to run the Ceph network over 10GbE links without a 10GbE switch (forgetting the term).

    The 8 SSDs in server1 are 80GB.
     
    #6
  7. abundantmussel

    abundantmussel New Member

    Joined:
    Jan 30, 2016
    Messages:
    23
    Likes Received:
    5
    If you put a 10GbE dual-port NIC in each server, you can create a network easily enough for Ceph. It's what I did before I picked up a switch.


    Sent from my iPhone using Tapatalk
     
    #7
    T_Minus likes this.
  8. PigLover

    PigLover Moderator

    Joined:
    Jan 26, 2011
    Messages:
    2,352
    Likes Received:
    829
    The Proxmox team even has a wiki on it. You just need a 2-port 10GbE card for each server and 3 cables (either 3 SFP+ DAC cables, or 6 SFP+ modules and 3 short fiber cables).

    Note that my reading of this wiki suggests it should work fine; I have not personally tested or tried this particular config.

    Full Mesh Network for Ceph Server - Proxmox VE
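
    The routed variant in that wiki boils down to each node having one address and a /32 route per peer pointing out the correct direct link. Something along these lines per node - interface names and addresses are placeholders, and the wiki has the authoritative recipe:

    # /etc/network/interfaces fragment on node1 (10.15.15.1)
    auto ens1f0
    iface ens1f0 inet static
        address 10.15.15.1
        netmask 255.255.255.0
        up   ip route add 10.15.15.2/32 dev ens1f0   # direct cable to node2
        down ip route del 10.15.15.2/32

    auto ens1f1
    iface ens1f1 inet static
        address 10.15.15.1
        netmask 255.255.255.0
        up   ip route add 10.15.15.3/32 dev ens1f1   # direct cable to node3
        down ip route del 10.15.15.3/32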
     
    #8
    T_Minus likes this.
  9. abundantmussel

    abundantmussel New Member

    Joined:
    Jan 30, 2016
    Messages:
    23
    Likes Received:
    5
    That's the guide I followed; however, if memory serves, I had to add more routes than the wiki lists to get it going.


    Sent from my iPhone using Tapatalk
     
    #9
  10. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    405
    Likes Received:
    34
    Maybe easier, but with some drawbacks regarding optimal routes, would be enabling STP on the bridges plus packet forwarding. This wouldn't require any routes.
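
    Something like this on each node (interface and bridge names plus the address are placeholders); STP then blocks one leg of the three-node triangle instead of needing explicit routes:

    # /etc/network/interfaces fragment - both direct 10GbE links in one bridge
    auto vmbr1
    iface vmbr1 inet static
        address 10.15.15.1
        netmask 255.255.255.0
        bridge_ports ens1f0 ens1f1
        bridge_stp on       # breaks the loop formed by the full-mesh triangle
        bridge_fd 2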
     
    #10
  11. PigLover

    PigLover Moderator

    Joined:
    Jan 26, 2011
    Messages:
    2,352
    Likes Received:
    829
    True - but the downside of that is you will get kernel-level forwarding of packets from machine A to C via B, etc. While perhaps not a big deal at 1GbE, when operating at 10GbE the extra latency you introduce can cut your throughput in half or more.
     
    #11
  12. _alex

    _alex Active Member

    Joined:
    Jan 28, 2016
    Messages:
    405
    Likes Received:
    34
    Exactly, this can happen between any two nodes, but it's no hassle to put the right routes in, and (in theory) it should also work with 3+ nodes and only 2 interfaces. I had this running, before configuring routes, on a 3-node cluster for inter-VM / live-migration traffic on 2x 10GbE, and performance was still 'ok'.
     
    #12
  13. gbeirn

    gbeirn Member

    Joined:
    Jun 23, 2016
    Messages:
    66
    Likes Received:
    13
    Thanks everyone. I've been reading up on Ceph for the past few days. I think I'm going to skip Ceph this go-round; I'd want to set up a test platform for it before rolling it out.

    Perhaps I'll nest a Proxmox Ceph cluster inside this Proxmox.
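
    (If I do, the main thing to remember is enabling nested virtualization on the physical host so the virtual PVE nodes can run KVM themselves - a rough sketch for Intel, with kvm-amd being the AMD equivalent:)

    echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
    modprobe -r kvm_intel && modprobe kvm_intel      # or just reboot
    cat /sys/module/kvm_intel/parameters/nested      # should now report Y

    The nested VMs also need their CPU type set to "host" in Proxmox.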
     
    #13
  14. abundantmussel

    abundantmussel New Member

    Joined:
    Jan 30, 2016
    Messages:
    23
    Likes Received:
    5
    My first experience of Ceph was in VMware Workstation on my desktop. It's a good way to get a feel for it.


    Sent from my iPhone using Tapatalk
     
    #14