
Tips for building proxmox servers

Discussion in 'Linux Admins, Storage and Virtualization' started by rickygm, Jan 18, 2017.

  1. rickygm

    rickygm New Member

    Joined:
    Jan 18, 2017
    Messages:
    7
    Likes Received:
    0
    Hi, my first post here. I need some advice: at the office we need to replace three HP servers, and the budget isn't large, so we want to build a three-node cluster with Proxmox.
    I've been looking at options for building those three nodes and have a hardware list; the idea is a cluster backed by a NAS over 10GbE.

    We plan to run about 16 VMs, small to medium sized, Linux and Windows. Hardware list:

    Networking for VM traffic and iSCSI NAS:

    Amazon.com: Intel Corp X540T2 Converged Network Adapt T2: Computers & Accessories

    Amazon.com: Intel PWLA8492MT PRO/1000 MT PCI/PCI-X Dual Port Server Adapter: Electronics

    Motherboard and CPU:

    Amazon.com: Supermicro DDR3 800 LGA 2011 Server Motherboard X9DRL-3F-O: Computers & Accessories

    Amazon.com: Intel Xeon E5-2620 v2 Six-Core Processor 2.1GHz 7.2GT/s 15MB LGA 2011 CPU BX80635E52620V2: Computers & Accessories

    Case for rack and psu:

    Amazon.com: Silverstone Tek 2U 12-Bay 3.5-Inch Hot-Swap Rackmount Storage Server Chassis Cases RM212: Computers & Accessories

    Amazon.com: Seasonic SS-500L2U 500W 80 Plus Gold EPS12V 2U Server Power Supply: Computers & Accessories

    Boot drives for the Proxmox OS, in hardware RAID 1:

    Amazon.com: Seagate Cheetah 15K.7 300 GB 15000RPM SAS 6 Gb/s 16MB Cache 3.5 Inch Internal Bare Drive ST3300657SS: Electronics

    ECC memory, 64 GB per server:

    Crucial 16GB Kit (8GBx2) DDR3/DDR3L-1600MT/s (PC3-12800) DR x8 ECC UDIMM Server Memory CT2KIT102472BD160B/CT2CP102472BD160B at Amazon.com

    Grateful for any comments!
     
    #1
  2. PigLover

    PigLover Moderator

    Joined:
    Jan 26, 2011
    Messages:
    2,516
    Likes Received:
    967
    One comment immediately pops out for me: ditch that 15k spinny thing and use SSD. You'll spend more (but not as much as you think) and be glad for spending it every day that the system runs. Use a good quality enterprise drive like the Intel S3700/S3710 (or a bit cheaper S3500/3510 series). 400GB S3700/3710s should be available "lightly used" for ~$150-180 or brand new for ~$350.
     
    #2
    tuxflux85 and Patrick like this.
  3. ttabbal

    ttabbal Active Member

    Joined:
    Mar 10, 2016
    Messages:
    575
    Likes Received:
    161
    The Proxmox installer can create a ZFS mirror to install to. I would go that way and skip the hardware RAID. If VMs will live on there as well, the SSD comment is well worth heeding. Of course, I use slower spinners for my VMs, but it's a home system and I don't mind that it's a bit slow to get going. A pair of SSDs is on the shopping list though, if that tells you anything.

    The other bits seem alright to me. I've never used that Silverstone chassis, but I have used some of their desktop gear and it works well. It's pretty hard to go wrong with Seasonic PSUs, too.
     
    #3
    sno.cn, niekbergboer and Patrick like this.
  4. rickygm

    rickygm New Member

    Joined:
    Jan 18, 2017
    Messages:
    7
    Likes Received:
    0
    I agree about the SSDs, but that hard disk is only for booting the Proxmox OS. I want hardware RAID 1 so a single drive failure doesn't take the node down.
     
    #4
  5. rickygm

    rickygm New Member

    Joined:
    Jan 18, 2017
    Messages:
    7
    Likes Received:
    0
    The VMs will live on the NAS; nothing will sit on Proxmox's local disks. We'll enable HA in the cluster, which is why I need the NAS.

    Do you think the Seasonic wouldn't hold up? I can't find a better one.
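For reference, the three-node HA setup described above is roughly this from the CLI (a hedged sketch, not a full procedure; the cluster name, node IP, portal IP, and IQN are all placeholders):

```shell
# On the first node: create the cluster ("office" is just an example name)
pvecm create office

# On each of the other two nodes: join using the first node's IP
pvecm add 192.168.1.10

# Verify quorum and membership from any node
pvecm status

# Register the iSCSI NAS as shared storage (portal and target are placeholders)
pvesm add iscsi nas-iscsi --portal 192.168.10.5 --target iqn.2017-01.local.nas:vmstore
```

With shared storage registered on all three nodes, VMs can then be added to HA groups from the web UI.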
     
    #5
  6. PigLover

    PigLover Moderator

    Joined:
    Jan 26, 2011
    Messages:
    2,516
    Likes Received:
    967
    If it's just for boot, then get a pair of smaller, cheaper enterprise SSDs and use a ZFS RAID 1 boot mirror. It's trivial to do with the Proxmox installer. 15k drives are really a waste these days, which is why you can buy them so cheap.
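After picking "ZFS RAID1" in the installer, checking the boot mirror and swapping a dead SSD looks roughly like this (device names are examples; "rpool" is the installer's default pool name):

```shell
# The root pool should show a two-way mirror of the boot SSDs
zpool status rpool

# If one boot SSD dies, replace it in place with a new drive
zpool replace rpool /dev/sdb /dev/sdc

# Watch the resilver progress until the mirror is healthy again
zpool status rpool
```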
     
    #6
    Patrick likes this.
  7. rickygm

    rickygm New Member

    Joined:
    Jan 18, 2017
    Messages:
    7
    Likes Received:
    0
    OK, I'll change the disks and add SSDs. And the PSU: do you think I can improve on it?
     
    #7
  8. ttabbal

    ttabbal Active Member

    Joined:
    Mar 10, 2016
    Messages:
    575
    Likes Received:
    161
    My intention was to say the PSU is likely fine, though I've never used one personally. I have some of their other models and they work well.
     
    #8
  9. vl1969

    vl1969 Active Member

    Joined:
    Feb 5, 2014
    Messages:
    391
    Likes Received:
    39
    I just have to ask:

    I'm planning a home Proxmox build myself and want to use ZFS RAID 1 for the OS drive: 2x 120GB SSDs,
    one Samsung 840 Pro and one Intel (don't remember the model now).
    But I want to use BTRFS RAID 10 for all the data. My reasoning is that I'm not good with ZFS, and the only reason I plan ZFS for the OS drive is that it works right out of the box: the Proxmox installer has the option and it's very easy to set up. I may add a second RAID 1 zpool using two 1TB drives for local storage (I have a couple of 1TB drives doing nothing), but I want to build out the BTRFS pool to hold all my data and share it as a file server right from the host (NFS/Samba, or NFS on the host and Samba from an OMV VM via a remote mount). I posted on the Proxmox forum and got a scolding for mixing filesystems on a single host.

    Do you also think it's a bad idea?
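The BTRFS RAID 10 data pool described above would be created roughly like this (a sketch only; it needs four or more devices, and the device names and mount point are examples):

```shell
# Create a BTRFS filesystem with RAID 10 for both data and metadata
mkfs.btrfs -d raid10 -m raid10 /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Mount it (any member device works) and check the allocation profile
mkdir -p /mnt/data
mount /dev/sdc /mnt/data
btrfs filesystem df /mnt/data
```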
     
    #9
  10. rickygm

    rickygm New Member

    Joined:
    Jan 18, 2017
    Messages:
    7
    Likes Received:
    0
    OK
     
    #10
  11. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    9,997
    Likes Received:
    3,310
    Sorry, on my phone, so it's hard to post longer comments. I either use SATA DOMs or 240GB S3500s to boot a ZFS mirror.

    Inexpensive, with room to easily download ISOs and images.
     
    #11
  12. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,341
    Likes Received:
    729
    Thread necro :p haha j/k, drink the ZFS koolaid!
     
    #12
    Last edited: Jan 18, 2017
  13. niekbergboer

    niekbergboer Member

    Joined:
    Jun 21, 2016
    Messages:
    66
    Likes Received:
    27
    Does ZFS boot work with UEFI by now? I tried it back with VE 4.2 and it didn't work then. Without UEFI it should work, though.

    The problem seemed to be with the UEFI version of GRUB.
     
    #13
  14. vl1969

    vl1969 Active Member

    Joined:
    Feb 5, 2014
    Messages:
    391
    Likes Received:
    39
    now you really lost me :-D
     
    #14
  15. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,252
    Likes Received:
    601
    I haven't used Proxmox with UEFI because I ran into bugs with it a long time ago; that ZFS boot issue is a good example.
     
    #15
  16. rickygm

    rickygm New Member

    Joined:
    Jan 18, 2017
    Messages:
    7
    Likes Received:
    0
    I've never used UEFI; this is the first time I've heard of problems with that kind of firmware.
     
    #16
  17. sno.cn

    sno.cn Active Member

    Joined:
    Sep 23, 2016
    Messages:
    144
    Likes Received:
    46
    Yep. I've been migrating a bunch of production servers from ESXi to Proxmox, and ditching hardware RAID in favor of ZFS. Proxmox is installed on two small SSDs in a ZFS mirror, with big SSDs in RAID 10 for VMs, making storage maintenance ridiculously easy. I'm even running Proxmox on my SMB/NFS storage servers, still with all of my storage drives in ZFS RAID 10, and let me tell you, life is good.
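The layout sno.cn describes (small ZFS boot mirror plus big SSDs in RAID 10 for VMs) maps to something like the following; a hedged sketch, with the pool name, storage ID, and device names all invented for illustration:

```shell
# "RAID 10" in ZFS terms: a pool of striped mirrors across four big SSDs
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Register the pool with Proxmox so VM disks can be placed on it
pvesm add zfspool vmstore --pool tank
```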
     
    #17
    Patrick likes this.
  18. vl1969

    vl1969 Active Member

    Joined:
    Feb 5, 2014
    Messages:
    391
    Likes Received:
    39
    May I ask why you're ditching ESXi for Proxmox?
    ESXi is the more proven technology, so what's the reasoning?

    thanks
     
    #18
  19. Jon Massey

    Jon Massey Active Member

    Joined:
    Nov 11, 2015
    Messages:
    293
    Likes Received:
    74
    I'd ditch the PRO/1000 MT: just because the motherboard has an old-school PCI slot doesn't mean you have to use it! The i350 is much more modern and a better bet.
     
    #19
  20. sno.cn

    sno.cn Active Member

    Joined:
    Sep 23, 2016
    Messages:
    144
    Likes Received:
    46
    To your point, it certainly wasn't an overnight decision. The main factors that won me over were storage flexibility, LXC, simple pricing model, easy backups, no need for a management server, HA included, and the fact that it's been really solid for my use case.

    That being said, in my experience VMware has MUCH better/easier support for Windows guests, making ESXi my first choice for those VMs.
     
    #20