FreeNAS server...will this hardware suffice? Multiple zpools?

Discussion in 'FreeBSD and FreeNAS' started by IamSpartacus, Feb 21, 2017.

  1. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    2,085
    Likes Received:
    489
    What kind of performance are you seeing over NFS? I'm weighing iSCSI vs. NFS for ESXi storage on FreeNAS, and I've been hearing iSCSI offers slightly better performance.
     
    #41
  2. Potatospud

    Potatospud New Member

    Joined:
    Jan 30, 2017
    Messages:
    17
    Likes Received:
    2
    My research has indicated over and over that while iSCSI is typically faster, NFS is easier to set up and get going. With NFS there also isn't the space-utilization penalty of "don't use more than 50% of the pool" (which is what the FreeNAS handbook states when using iSCSI). I don't have the benchmark results or screenshots in front of me at the moment, but I may be able to dig them up when I get home. If memory serves, I was able to achieve basically full line speed from either setup (bare metal and AIO) when using NFS for the datastore. I started with the AIO setup and, through viability testing, felt comfortable moving to the two separate bare metal setups.
     
    #42
  3. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    2,085
    Likes Received:
    489
    This is over 10GbE? That is VERY encouraging. I'm looking forward to testing this once I get my FreeNAS box up.
     
    #43
  4. Potatospud

    Potatospud New Member

    Joined:
    Jan 30, 2017
    Messages:
    17
    Likes Received:
    2
    Correct. To be more precise, I believe it was around 9.28Gbps: ESXi 6, FreeNAS 9.3, and an Intel X520-DA2 in each box using 1m DACs. Oh, and jumbo frames @ 9k.
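
    For anyone wiring up a similar link, here's a rough sketch of the jumbo-frame settings involved (interface and vSwitch names are placeholders, and on FreeNAS you'd normally set the MTU through the GUI network settings so it persists):

        # FreeNAS/FreeBSD side: bump the 10GbE interface (ix0 assumed) to a 9000-byte MTU
        ifconfig ix0 mtu 9000

        # ESXi side: both the vSwitch and the storage vmkernel port need the larger MTU
        esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
        esxcli network ip interface set --interface-name=vmk1 --mtu=9000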
     
    #44
  5. Potatospud

    Potatospud New Member

    Joined:
    Jan 30, 2017
    Messages:
    17
    Likes Received:
    2
    Also, the FreeNAS box has 4 pools: 4x S3500 for SAN (ESXi datastore), 2x i535 mirror for jails, 4x 3TB WD Red for main bulk storage, and 2x 1TB WD Black mirror for NAS user folders. The SAN storage and bulk storage are striped mirrors, whereas the other 2 pools are just straight mirrors.
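
    For anyone curious what those layouts translate to at the zpool level, here's a rough sketch (device names are placeholders; in FreeNAS you'd build these through the volume manager in the GUI rather than by hand):

        # striped mirror (RAID-10 style), e.g. the 4x S3500 datastore pool
        zpool create san mirror da0 da1 mirror da2 da3

        # plain two-disk mirror, e.g. the jail pool
        zpool create jails mirror da4 da5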
     
    #45
  6. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,770
    Likes Received:
    863
    He's talking network throughput, NOT disk throughput, to that 4-disk S3500 pool. NO WAY he's pushing 10G with 4 S3500s; I'd bet my mortgage on it! :-D
     
    #46
  7. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    2,085
    Likes Received:
    489
    Truth. I doubt I'd be pushing 10G even with a 4 disk HUSSL pool if we're talking sustained reads/writes.
     
    #47
  8. Potatospud

    Potatospud New Member

    Joined:
    Jan 30, 2017
    Messages:
    17
    Likes Received:
    2
    Affirmative, that's network throughput, not disk. I apologize for not clarifying, as I can see now that it seemed a bit misleading.
     
    #48
  9. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,770
    Likes Received:
    863
    Yeah, no way, not even with 8 in RAID-0 I don't think, at least not over NFS/iSCSI; maybe close to saturated locally on the filer. I know with 4 HUSMM 400GB devices in RAID-0 on a ZoL CentOS setup I could push 1.7GB/sec read / 1.2GB/sec write on a simple dd with a 1M block size spitting out a 20G file LOCALLY... NFS/iSCSI was pushing 400-450MB/sec back to vSphere on at least an sVMotion of 20 or so VMs from a source pool that could read plenty fast. All hypervisor to hypervisor, AIO to AIO, over 10G.
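
    For reference, the kind of local dd run described above looks roughly like this (the pool path is a placeholder, and note that /dev/zero compresses to almost nothing if the dataset has compression enabled, which inflates the numbers):

        # write a 20G file in 1M blocks and note the reported throughput
        dd if=/dev/zero of=/tank/ddtest bs=1M count=20480
        # read it back for the read-side number
        dd if=/tank/ddtest of=/dev/null bs=1M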
     
    #49
  10. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,770
    Likes Received:
    863
    All good, I knew because that's right about what I push with iperf across those platforms, and trust me, I've tried to saturate 10G OVER the network with 4 SSDs (SAS and SATA S3700s/HUSSLs/HUSMMs) and could not do it :-D
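
    If you want to sanity-check the raw link the same way, a minimal iperf run looks something like this (the address is a placeholder):

        # on the FreeNAS box
        iperf -s
        # on the client end: 30-second test with 4 parallel streams
        iperf -c 10.0.13.5 -t 30 -P 4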

    Shame too, 'cause I WAS trying to justify picking up an EX4300 and going to 40G Ethernet haha!

    Not being a smartarse at all originally :-D
     
    #50
  11. Potatospud

    Potatospud New Member

    Joined:
    Jan 30, 2017
    Messages:
    17
    Likes Received:
    2
    Lol, same same!!! Usually I can come up with at least some justification to the Mrs as to why I'm acquiring new hardware, but when I attempted to do so for some 40GbE adapters, a switch to match, and a 12G SAS expander shelf, all I could muster was "ummm, because it'd be sweet and really really fast...". Needless to say I was shot down, crash and burn. So that upgrade phase is on the back burner for now. *Sigh*
     
    #51
  12. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    3,920
    Likes Received:
    662
    Power savings? QSFP vs 10GbE ... ;)
     
    #52
  13. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    2,085
    Likes Received:
    489
    I've seen a bunch of users still using 9.3. Is there a specific advantage to doing that, or have you just not gotten around to upgrading?
     
    #53
  14. Potatospud

    Potatospud New Member

    Joined:
    Jan 30, 2017
    Messages:
    17
    Likes Received:
    2
    At the time of that testing I was on 9.3, but I have since moved to 9.10 (and ESXi 6.5), and I am eagerly awaiting 10 (due in about 2 days). A major transition I'm both excited about and slightly nervous about is the move from BSD jails, which I've come to know and love, to Docker containers. I've read plenty that indicates a much better level of performance and portability, but I haven't really worked with Docker containers much. I'll probably end up upgrading my AIO setup first and playing around; once I'm comfortable I'll upgrade the bare metal box.
     
    #54
    Last edited: Mar 4, 2017
  15. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    2,085
    Likes Received:
    489
    [Image: network diagram of the physical connections]

    How do you guys have networking configured in your FreeNAS boxes (interested in both iSCSI and NFS setups)? My network is physically connected as you see above, and I'm looking for some suggestions. It's been some time since I've configured a bare metal server, as I've been mainly working with virtual hosts over the past few years.
     
    #55
  16. Potatospud

    Potatospud New Member

    Joined:
    Jan 30, 2017
    Messages:
    17
    Likes Received:
    2
    Mine is similar: 10GbE direct attached to ESXi, then 4x 1GbE (lagg0 using LACP) to the main switch. Bare metal FreeNAS is an A1SRM-2758F and the ESXi boxes are X9SCM-F-O; one box is bare metal ESXi and the other is an AIO setup, but all have X520-DA2s and are direct attached for backups and NFS-exported datastores. They all have at least 1x 1GbE dedicated to management, and the remaining 1GbE links are either dedicated to certain tasks or LACP'd together. I can either make a diagram and post it tonight, or dig out the diagrams I swear I've already created (but can't remember where I saved them).
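
    For what it's worth, the underlying FreeBSD config for a lagg0 like that looks roughly like the below (NIC names are placeholders; on FreeNAS you'd create it under Network > Link Aggregations so it persists, and the switch ports need a matching LACP channel):

        # build an LACP lagg from the four onboard gigabit ports
        ifconfig lagg0 create
        ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1 laggport igb2 laggport igb3
        ifconfig lagg0 inet 192.168.1.10 netmask 255.255.255.0 up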
     
    #56
  17. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,770
    Likes Received:
    863
    Got this; your setup seems similar enough to mine. I prefer stub-VLAN/dedicated setups for each type of storage traffic.

    EX: My config
    LAN - vlan 10 (routed/firewalld)
    MGMT - vlan 11 (routed/firewalld)
    VMOTION - vlan 12 (stub vlan/no routing/gw)
    NFS - vlan 13 (stub vlan/no routing/gw)
    iSCSI - vlan 14 (stub vlan/no routing/gw)
    FT - vlan 15 (stub vlan/no routing/gw)

    Then just set up a trunk port from the physical switch to the physical NIC in vSphere, define/tag VLANs on the appropriate uplinks, create standard vSwitch or vDS virtual switches with VLAN port groups/port profiles, map them to VMs (in your case the storage AIO VM with vmxnet3 vNICs on each network), create vmkernel ports for NFS/iSCSI mounts, create iSCSI initiators, map LUNs, and add datastores.

    High level, that will isolate/segregate/dedicate a single broadcast domain for each type of traffic on your LAN, focusing specifically on the IP SAN side of the house here.
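
    On the vSphere side, a rough esxcli sketch of the NFS piece of those steps (the VLAN ID follows the layout above; the vSwitch name, addresses, and export path are placeholders):

        # tagged port group for the NFS stub VLAN on an existing standard vSwitch
        esxcli network vswitch standard portgroup add --portgroup-name=NFS --vswitch-name=vSwitch1
        esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=13

        # vmkernel port on that port group with a static address in the stub subnet
        esxcli network ip interface add --interface-name=vmk2 --portgroup-name=NFS
        esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.0.13.11 --netmask=255.255.255.0 --type=static

        # mount the FreeNAS export as a datastore
        esxcli storage nfs add --host=10.0.13.5 --share=/mnt/tank/nfs-ds --volume-name=freenas-nfs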

    :-D

    EDIT: Worth mentioning (though it may be obvious) that I have multiple vNICs added to my FreeNAS AIO: one for the LAN that is routed/on the proper VLAN for SMB/NFS/iSCSI shares, while the hypervisor traffic is totally isolated.
     
    #57
  18. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    2,085
    Likes Received:
    489
    Since my NICs will be connected to a switch and not directly to ESXi hosts, I assume I'll just throw my FreeNAS NICs into the NFS/iSCSI VLAN?

    P.S. I like diagrams ;).


    Thanks for the breakdown @whitey.

    I assume that one is for FT, so that's your H/A heartbeat network?

    As far as vNetworking goes, I'm already all over that piece as far as the hypervisors are concerned. But with regard to the FreeNAS box itself, since it's bare metal, I assume I'll just place all the NICs into the NFS/iSCSI VLAN (depending on which I go with), since the only traffic that will be inbound/outbound on those interfaces is VM traffic?
     
    #58
    Last edited: Mar 7, 2017
  19. Potatospud

    Potatospud New Member

    Joined:
    Jan 30, 2017
    Messages:
    17
    Likes Received:
    2
    I presume that is accurate. I haven't done much official work with VLANs, only "will this work" testing. And yeah, same here, I'm a visual kinda guy, so I'll see if I can dig those up for ya and post 'em tonight.
     
    #59
  20. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    2,085
    Likes Received:
    489
    Are there any gotchas for using LACP in FreeNAS? I've got my Cisco SG350XG LAG group set up with LACP, but once I configure the LAG group in FreeNAS my server is unreachable. I've tried setting the VLAN on the LAG group on the switch to both General and Access, but neither works. Just want to be sure I'm not missing something on the FreeNAS side before I start going crazy.

    Does the VLAN I've assigned to the ports need to be defined in FreeNAS by any chance? I saw the VLAN tab but figured that was for virtual interfaces.
     
    #60
    Last edited: Mar 7, 2017