FreeNAS server...will this hardware suffice? Multiple zpools?

markarr

Active Member
Oct 31, 2013
421
122
43
Are there any gotchas for using LACP in FreeNAS? I've got my Cisco SG350XG LAG group set up with LACP. Once I configure the LAG group in FreeNAS, my server is unreachable. I've tried setting the VLAN on the LAG group on the switch to both General and Access, but neither works. I just want to be sure I'm not missing something on the FreeNAS side before I start going crazy.

Does the VLAN I've assigned to the ports need to be defined in FreeNAS by any chance? I saw the VLAN tab but figured that was for virtual interfaces.
I have found that LACP is very picky about config, and it needs to be the same on both ends. In the end I had it drop out on me with a couple of different switches and ditched the effort.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
I have found that LACP is very picky about config, and it needs to be the same on both ends. In the end I had it drop out on me with a couple of different switches and ditched the effort.
Without going virtual is there any other way to bond dual NICs in FreeNAS that is worthwhile?
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
You've given the handbook a quick read through, yes?

7. Network — FreeNAS® User Guide 9.10.2-U2 Table of Contents
Yes, I have many times.

I figured out my issue. First I needed to configure one of my 1Gb NICs (I'll later be adding these to their own failover LAG group) as an interface and give it an IP. Once I did that I was able to create a new LAG group for my 10Gb NICs no problem and assign it an IP in the VLAN assigned to those ports on my switch.

What's weird is that I'm able to ping that IP now, but the LAG group in FreeNAS is showing the media status as down, and on the Cisco side it shows one port as active and the other as standby.
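
For reference, here's roughly what that LAG config amounts to underneath on the FreeBSD side; a sketch for troubleshooting only (ix0/ix1 and the address are placeholders, and on FreeNAS the GUI is still where you'd do this so it persists in the config database):

  # Create the lagg and bind both 10Gb ports with LACP:
  ifconfig lagg0 create
  ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1
  ifconfig lagg0 inet 10.0.10.5/24 up
  # Check LACP negotiation state on the FreeNAS side; each laggport should
  # show ACTIVE,COLLECTING,DISTRIBUTING if the switch has negotiated it:
  ifconfig lagg0

If one port shows up without those flags, that matches the active/standby behaviour the Cisco side is reporting.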
 
Last edited:

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
I think I'm gonna bite the bullet and throw ESXi on this box and run FreeNAS in a VM. Managing the hardware on VMware (especially networking) is so much easier.
 

Potatospud

New Member
Jan 30, 2017
17
2
3
39
If the hardware can handle the workload and it works for you, I say go for it. There's a best-practices guide for virtualization by the FreeNAS developers floating around out there, so stick to that and everything should be fine. I have both, and while the AIO setup is more for sandbox testing, it also serves as a backup for the bare-metal setup, and I've never lost any data or had any kind of pool corruption.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Ok, so I finally got my server set up and settled back on bare metal. I have my two pools of single mirrored vdevs set up and exported via NFS. What's the best way for me to go about testing performance over NFS so I can compare it to iSCSI at a later date? I've got VMware I/O Analyzer installed and set up, but I have no idea what tests to run or how to analyze the results.
 

Potatospud

New Member
Jan 30, 2017
17
2
3
39
Hmmm, well I've never used VMware I/O Analyzer so I'm not much help there. Off the top of my head I can tell you that I ran iperf and ATTO, and a few others whose names escape me at the moment. Anybody else have any ideas or care to add anything?
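
For the network side, a quick iperf run between a test VM and the FreeNAS box is the usual first step, so you know whether the storage or the wire is the bottleneck. A sketch (assumes iperf3; the iperf bundled with older FreeNAS builds is 2.x, so the flags differ slightly, and the IP is a placeholder):

  # On the FreeNAS box (server side):
  iperf3 -s
  # On a test VM or client (4 parallel streams, 30 seconds):
  iperf3 -c 10.0.10.5 -P 4 -t 30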
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
fio in a VM? That's what I used to test a bunch of SLOGs recently.
But ideally you'd use something as close to your actual workload as possible; whether that's read- or write-centric depends on your setup ;)
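
Something along these lines, run inside a test VM sitting on the NFS datastore; purely a sketch (assumes a Linux guest for ioengine=libaio, and the file path, block size, queue depth and run time are all things you'd tune toward your real workload):

  # Random 4k writes against a 10G test file, queue depth 16, 60 seconds:
  fio --name=randwrite --filename=/mnt/test/fio.dat --size=10G \
      --rw=randwrite --bs=4k --iodepth=16 --ioengine=libaio \
      --direct=1 --runtime=60 --time_based --group_reporting
  # Swap --rw=randread / read / write and --bs=64k etc. to cover other profiles.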
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Sorry for being slightly OT, but I wonder why there is no STH standard that covers most of the typical use cases? Or at least a standard method and/or standard parameters, so that benches are comparable and everybody has the same starting point? :)
 

wildchild

Active Member
Feb 4, 2014
389
57
28
LACP and iSCSI is a big no-no; for NFS it will work, but it's rather picky.
For iSCSI you would want to use multipathing and round robin.
Do not forget to optimize your IOPS across all available paths.
Standard ESXi uses 1000 IOPS before switching to another path, effectively not using RR until a large file transfer.
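
For reference, the per-device setup on the ESXi side looks roughly like this (the naa ID is a placeholder; list yours with esxcli storage nmp device list):

  # Set the path selection policy to round robin for one device:
  esxcli storage nmp device set --device naa.6589cfc000000xxxx --psp VMW_PSP_RR
  # Switch paths after every 1 IO instead of the default 1000:
  esxcli storage nmp psp roundrobin deviceconfig set \
      --device naa.6589cfc000000xxxx --type iops --iops 1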
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
LACP and iSCSI is a big no-no; for NFS it will work, but it's rather picky.
For iSCSI you would want to use multipathing and round robin.
Do not forget to optimize your IOPS across all available paths.
Standard ESXi uses 1000 IOPS before switching to another path, effectively not using RR until a large file transfer.
LACP is off the table for me anyway, regardless of whether I choose NFS or iSCSI. I've tried with both 9.10 and 10 RC1 with no luck between FreeNAS and my Cisco SG350XG-24F. At this point I'm just using a 1Gb LAG between my Dell X1052 and FreeNAS for management and a single 10Gb link for NFS (for the moment).
 
  • Like
Reactions: wildchild

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
LACP and iSCSI is a big no-no; for NFS it will work, but it's rather picky.
For iSCSI you would want to use multipathing and round robin.
Do not forget to optimize your IOPS across all available paths.
Standard ESXi uses 1000 IOPS before switching to another path, effectively not using RR until a large file transfer.
Yep, this can be quite helpful...under the right conditions :-D

https://kb.vmware.com/selfservice/m...nguage=en_US&cmd=displayKC&externalId=2069356

What's the point of setting "--IOPS=1"?

Automating the IOPS setting in the Round Robin PSP - CormacHogan.com
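
The gist of that article: instead of setting iops=1 on every device by hand, you can add a SATP claim rule so new LUNs from the target pick up round robin with iops=1 automatically. A rough sketch, where the SATP and the vendor string are assumptions to verify against esxcli storage nmp device list / esxcli storage core device list for your own target:

  # Claim rule so future LUNs from this vendor default to RR with iops=1:
  esxcli storage nmp satp rule add -s "VMW_SATP_DEFAULT_AA" -V "FreeNAS" \
      -P "VMW_PSP_RR" -O "iops=1"
  # Existing devices keep their current policy until reclaimed or rebooted.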
 

wildchild

Active Member
Feb 4, 2014
389
57
28