ZFS Pool Planning

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by Nabisco_DING, Apr 6, 2013.

  1. Nabisco_DING

    Hello!
    I have been reading about ZFS inside and out for about a year, and I am getting ready to set up my first test storage server!
    I am eventually going to set up a storage server for my small business that will serve data to a hypervisor (I haven't really decided yet, but I am leaning toward Proxmox rather than ESXi 5.1).

    I was reading this document and I am now a little confused. I know that this document is meant to assist IT sysadmins in an enterprise environment, but I have some questions...

    I was originally thinking of creating a root pool for the OS and a tank pool to serve to the hypervisor for storing the virtual machines; however, the Oracle document suggests creating separate pools for Windows and Linux virtual machines.

    Why is that?
    Why not create a single pool for both Windows and Linux VMs?

    At this time, my guess is that they did this because having 24 servers' worth of read/write requests going through a single pool would degrade performance for all servers making those requests. If that is the case, why not separate them further into more pools?

    Ideally, wouldn't it be best to segregate servers based on how many read/write requests each server makes?

    I feel like I am overthinking this hahaha.
     
    #1
  2. gea

    There are a lot of tuning options if you need to guarantee a certain I/O level for a single application. But in general I would create two pools: one high-speed pool for all VMs (ideally built from SSDs) with a fill level below 60%, and a second slower but larger storage/backup pool. Share the high-speed pool as an NFS datastore (and via SMB for easy backup/move/clone).
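    As a minimal sketch of that layout on Solaris/OmniOS (device names like c0t0d0 are placeholders, and the vdev layouts are just examples; adjust both for your hardware):

    # high-speed VM pool built from mirrored SSDs
    zpool create vmpool mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
    # large but slower storage/backup pool built from spinning disks
    zpool create tank raidz2 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0
    # share a filesystem on the VM pool as an NFS datastore and via SMB
    zfs create vmpool/vms
    zfs set sharenfs=on vmpool/vms
    zfs set sharesmb=on vmpool/vms
    # watch the fill level; keep CAP below about 60% on the VM pool
    zpool list vmpool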

    For small to medium setups, you might consider a virtualized storage server (All-In-One) to get high-speed connectivity between storage and VMs without the cost of a high-speed network (10G or FCoE).
     
    #2
  3. Nabisco_DING

    Regarding SSDs for pools, I am worried about SATA vs SAS. I am reading various opinions on the matter on the internet, but nothing conclusive: some people say that SATA vs SAS only matters for spinning platters, but then you get dire warnings like the one from this person.

    Again, this person is talking about designing a storage appliance for an enterprise; however, even though I am a small business owner, I still care about my data.

    On another note, some people also talk about having issues mixing SATA and SAS drives, and this author went so far as to recommend an all-SAS drive lineup for storage pools (though the author still uses SATA SSDs for his L2ARC and ZIL devices). In the comments section, the author and another commenter say that they have been having issues mixing SATA and SAS drives.

    Reading things like this, I came to the conclusion that since I am not a certified storage engineer, I should probably stick to the tried and true so that I can minimize the risk of data loss/corruption/etc.

    With that said, I am not against using SSDs for my storage pool, but I need to know the risks involved, since losing my data would basically bankrupt me.
     
    #3
  4. gea

    SAS vs SATA
    SAS is mainly used in enterprise storage. It has advantages regarding multipathing, expanders, and cable length.
    But there are also a lot of enterprise SATA disks, and enterprises use SATA due to lower cost and higher available capacity.

    SSD vs conventional disks
    In both areas you can find models with problems and reliable ones.
    In the SSD (and affordable MLC) area, expect failure rates similar to conventional disks, at least in the first, say, five years.

    Knowing that SSD I/O can be 100x better, who would hesitate?
    If the data is really important, allow for at least a two-disk failure (3-way mirror, Z2, or Z3).
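    To make those redundancy levels concrete, each of these layouts (placeholder device names again) survives at least two simultaneous disk failures:

    # 3-way mirror: any two of the three disks can fail
    zpool create vmpool mirror c0t0d0 c0t1d0 c0t2d0
    # RAID-Z2: six disks, two of them parity
    zpool create tank raidz2 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0
    # RAID-Z3: seven disks, three of them parity (survives three failures)
    zpool create tank raidz3 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0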
     
    #4