FreeNAS server...will this hardware suffice? Multiple zpools?

Discussion in 'FreeBSD and FreeNAS' started by IamSpartacus, Feb 21, 2017.

  1. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,743
    Likes Received:
    358
    So I'm considering a FreeNAS build for shared storage to present to VMware hosts. I'd be repurposing the vSAN drives I'm currently using in my hosts, which are as follows:

    4 x Hitachi 400GB HUSSL SAS SSDs
    4 x Intel S3500 800GB SATA SSDs

    For mobo/RAM I've got a Supermicro X10SDV-2C-7TP4F and 16GB of DDR4-2133 registered RAM.


    I have zero experience with FreeNAS/ZFS, so before I even get into the nitty-gritty research on this, I'm just looking for some expert opinions on whether or not my drives will work well as a shared storage array to present to VMware. I'm assuming my mobo/CPU/RAM combo will suffice. I'd really prefer not to spend any more money on additional drives at this time.
     
    #1
  2. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,603
    Likes Received:
    1,374
    If you're only using that # of drives, why not do an 'all-in-one' instead of a separate system?

    I'm also not too sure how well an all-SSD setup would perform with only 2 cores. That mobo/setup is awesome though :)
     
    #2
  3. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,743
    Likes Received:
    358
    What do you mean by an AIO system in this regard?
     
    #3
  4. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,603
    Likes Received:
    1,374
    #4
  5. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,743
    Likes Received:
    358
    #5
  6. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,735
    Likes Received:
    848
    Looks like a fine system to me for dedicated FreeNAS/ZFS duties. If you want a lil' more bang for your buck, as @T_Minus touched on, go the AIO route (and throw in more memory) :-D

    As a standalone filer though, 16 GB of memory is fine for FreeNAS; my AIOs have 2 vCPUs, 12 GB of memory, and vmxnet3 NICs (backed by physical 10G NICs/switches of course).

    EDIT: Also, with all-SSD disks I'd do a capacity pool of the S3500s (RAIDZ, and back that sh|t up, or else get two more 800GB S3500s and go RAIDZ2) and a performance pool of the four 400GB HUSSLs, probably in a RAID10/striped-mirror config, though arguments could be made for other disk layouts. A rough sketch of both pools is below.

    Are you interested in spinners (for a big capacity pool with SSD acceleration) at all, or is this VM-only storage?
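
    A minimal sketch of that two-pool layout, assuming hypothetical FreeBSD device names (da0-da3 for the S3500s, da4-da7 for the HUSSLs); in FreeNAS you would normally build this through the GUI, but the equivalent zpool commands would be roughly:

        # capacity pool: 4 x S3500 800GB in RAIDZ (go raidz2 if two more drives are added)
        zpool create tier2 raidz da0 da1 da2 da3

        # performance pool: 4 x HUSSL 400GB as striped mirrors ("RAID10")
        zpool create tier1 mirror da4 da5 mirror da6 da7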
     
    #6
  7. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,743
    Likes Received:
    358
    @whitey @T_Minus I think I'm in agreement with you both on how the server is going to be set up (in a VM on ESXi). I was just thrown by the AIO verbiage because I'm not going to have any of my bulk (spinner) disks in this server. This server will be for VM shared storage only, so the only VM that will run on this ESXi host is FreeNAS (or napp-it if I go that route).

    With regard to the allocation of the Hitachi HUSSLs, would I not want to dedicate any of them as SLOGs?
     
    #7
    Last edited: Feb 21, 2017
  8. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,735
    Likes Received:
    848
    Yeah, those 400GB HUSSL devices are too big/a waste as a SLOG; a 100GB one OTOH would work (I have a bunch of those for that duty). If they are only for VM storage, call the S3500s tier 2 storage and the HUSSLs tier 1 storage :-D and pick your storage protocol preference... I will warn you though, a zvol iSCSI share does seem to perform better than an NFS connection from FreeNAS to vSphere, though not by any sort of drastic/showstopper difference in my book vs. the ease of use of NFS along with a wealth of other benefits.
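
    For illustration, a minimal sketch of the two share types, using the hypothetical tier1 pool from the earlier sketch (the actual iSCSI extent and NFS export would then be configured in the FreeNAS GUI):

        # block storage: a sparse 500G zvol to back an iSCSI extent
        zfs create -s -o volblocksize=16K -V 500G tier1/vm-iscsi

        # file storage: a dataset to export over NFS to the ESXi hosts
        zfs create tier1/vm-nfs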
     
    #8
  9. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,743
    Likes Received:
    358
    Interesting. I do prefer to use NFS because of the ease of use, as you alluded to, but I also like to squeeze every ounce of performance I can out of my hardware... so it's a tough call. How have you measured your storage throughput over both protocols, btw?

    Also, good tip on using smaller SSDs as a SLOG. Are you mirroring your SLOG or just using one in your server(s)? And do you think the D-1508 will suffice without becoming my bottleneck?
     
    #9
  10. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,735
    Likes Received:
    848
    I have rough/high-level numbers from simply watching zpool iostat, Cacti graphs (SNMP off the 10G EX3300 switch), and ESXi esxtop numbers in 'real-world' use-case scenarios, and I can in good conscience say that, at least on FreeNAS, you will 'usually' see a 15-20% performance/throughput bump when using iSCSI vs. NFS in my experience. At least FreeNAS to vSphere, that's what I have noticed, even over the same Ethernet fabric/infra (dedicated storage VLAN of course) and using the SAME exact damned zpool/config... side by side... blows my mind a lil'.

    EDIT: non-mirrored HUSSL4010s, BTW. ZFS can suffer SLOG loss and not lose data (it just loses performance, on a spinner pool at least; a SLOG on an all-SSD pool is kinda a waste IMHO/my 2 cents), so I personally am not too concerned about that, especially in a home lab setting... much as I DO push my lab infra... in PRD I guess you could argue for a mirrored SLOG :-D
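
    For reference, the quick-and-dirty comparison described above can be watched from the FreeNAS side with zpool iostat (the pool name tier1 is hypothetical), with esxtop's disk views giving the corresponding datastore numbers on the ESXi side:

        # per-vdev bandwidth/IOPS, refreshed every 5 seconds, while a test workload runs
        zpool iostat -v tier1 5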
     
    #10
  11. azev

    azev Active Member

    Joined:
    Jan 18, 2013
    Messages:
    588
    Likes Received:
    146
    For my lab setup I set up an AIO with the SSD pool in a stripe, basically RAID 0. I back up all my VMs using Veeam in case I ever need to restore them.
     
    #11
  12. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,743
    Likes Received:
    358
    I back up with Veeam as well, but if I'm going to store all my VMs on a single node (thus introducing a SPOF) I will be going with a RAID10 config to give me some redundancy. Not so much for recovery as for uptime.
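
    As an aside, a nice property of the striped-mirror ("RAID10") layout, sketched here with hypothetical device names, is that the pool can later be grown two disks at a time:

        # add another mirror vdev to an existing striped-mirror pool
        zpool add tier1 mirror da8 da9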
     
    #12
  13. azev

    azev Active Member

    Joined:
    Jan 18, 2013
    Messages:
    588
    Likes Received:
    146
    For my lab use case, performance and total pool size take much higher priority than uptime.
    There are some important VMs, such as AD, etc., that run the whole house, but since I have two sites (I put a server in my brother's basement and linked our houses via VPN), even if my main VM storage crashed, the basic amenities for the home would still be up (internet access, Wi-Fi, etc.).
    So far, in about six months or so, it's been stable.
     
    #13
  14. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,743
    Likes Received:
    358
    There's a reason I don't really use the term "homelab": while yes, I do a lot of what I do for learning, my home network runs a lot of services that I don't want to be down. Specifically, my Plex server is shared among many family members and very close friends who've come to rely on it.

    For just VMs, the space I'll get out of running RAID10 will more than suffice. All my bulk storage is on other servers.
     
    #14
  15. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,144
    Likes Received:
    433
    Why are you moving those drives out of vSAN for VM storage?
     
    #15
  16. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,743
    Likes Received:
    358
    I'm learning that vSAN really isn't the way to go unless all your hardware is on the HCL (my controllers aren't). Also, I find myself tinkering/doing maintenance on my hosts often, and it causes issues if you take a node offline for more than 30 minutes. And lastly, you need at least 3 hosts (preferably 4) to get all the benefits of a vSAN cluster, which is great until you realize you're only keeping 4 hosts to satisfy that requirement and not because you're actually making use of them.

    I feel that a 2-3 node HA cluster all connecting to a shared storage server will fit my needs better.
     
    #16
  17. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,144
    Likes Received:
    433
    What was the effect?
    I am looking to build a vSAN environment and am not too happy with it yet. Of course, I am not completely on the HCL either; I am having issues with software and performance where it's just not what I'd expect given the hardware I put in ;)
     
    #17
  18. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,743
    Likes Received:
    358
    The performance is not what I had hoped for either, and it's just not as flexible as I want for my home network. That's not to say I don't like the product; I'm actually planning to use it at work this upcoming year, but I'll be using vSAN Ready Nodes and I obviously won't be tinkering.
     
    #18
  19. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,743
    Likes Received:
    358
    @whitey What are the pros/cons of running FreeNAS bare-metal vs. in a VM on ESXi, other than the obvious ability to share the hardware with other VMs?
     
    #19
  20. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,144
    Likes Received:
    433
    It used to be *not* recommended to run it in a VM, but nowadays it's kind of not a big deal anymore (it always ran fine if you adhered to the basic principle of passing through an HBA instead of doing stuff like RDM).
    I have found no real con to be honest, but maybe @whitey has ;)
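
    A quick way to sanity-check the HBA-passthrough approach, assuming a FreeNAS guest and hypothetical device names, is to confirm the guest sees the raw disks natively rather than as virtual disks/RDMs:

        # list disks as the FreeBSD/FreeNAS guest sees them; passed-through drives
        # show up with their real model strings (e.g. HUSSL or S3500)
        camcontrol devlist

        # model/serial details for one of the disks
        smartctl -i /dev/da0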
     
    #20