FreeNAS server...will this hardware suffice? Multiple zpools?


IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
I opted for a FreeNAS + ESXi combo and have been quite pleased. This particular box is more for sandbox testing so I don't break the bare metal setups I have, which are ESXi and FreeNAS. I've been using 4x Intel S3500s (striped mirrors) in the bare metal FreeNAS box for SAN storage, exported via NFS over 10GbE, for over a year now and it's been flawless. For long-term data storage or data that is deemed critically important (docs, pics, home videos, local backups and those of friends and family, and especially the wife's docs, pics, etc.) I highly recommend ZFS.
What kind of performance are you seeing over NFS? I'm weighing iSCSI vs. NFS for ESXi storage on FreeNAS, and I've been hearing iSCSI offers slightly better performance.
 

Potatospud

New Member
Jan 30, 2017
17
2
3
39
What kind of performance are you seeing over NFS? I'm weighing iSCSI vs. NFS for ESXi storage on FreeNAS, and I've been hearing iSCSI offers slightly better performance.
My research has indicated over and over that while iSCSI is typically faster, NFS is easier to set up and get going. With NFS there also isn't the space-utilization penalty of "don't use more than 50% of the pool" (which is what the FreeNAS handbook states when using iSCSI). I don't have the benchmark results or screenshots in front of me at the moment, but I may be able to dig them up when I get home. If memory serves I was able to achieve basically full line speed from either setup (bare metal and AIO) when using NFS for the datastore. I started with the AIO setup and, through viability testing, felt comfortable moving to the two separate bare metal setups.
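If you do go the NFS route, mounting the export as a datastore is basically a one-liner from the ESXi shell; something like this (the IP, export path, and datastore name here are just examples, yours will differ):

esxcli storage nfs add --host=10.0.13.5 --share=/mnt/tank/vm-datastore --volume-name=freenas-nfs
esxcli storage nfs list    # confirm the new datastore shows up as accessible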
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
if memory serves I was able to achieve basically full line speed from either setup (bare metal and AIO) when using NFS for the datastore.
This is over 10GbE? That is VERY encouraging. I'm looking forward to testing this once I get my FreeNAS box up.
 

Potatospud

New Member
Jan 30, 2017
17
2
3
39
This is over 10GbE? That is VERY encouraging. I'm looking forward to testing this once I get my FreeNAS box up.
Correct. To be more precise, I believe it was around 9.28 Gbps: ESXi 6, FreeNAS 9.3, and an Intel X520-DA2 in each box using a 1m DAC. Oh, and jumbo frames at 9k.
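If you're chasing the same numbers, it's worth confirming jumbo frames actually pass end to end before blaming anything else. A 9000 MTU leaves 8972 bytes of ICMP payload once you subtract the IP/ICMP headers, so a don't-fragment ping of that size should succeed (addresses below are just placeholders):

ping -D -s 8972 10.0.13.1       # from FreeNAS/FreeBSD
vmkping -d -s 8972 10.0.13.5    # from the ESXi shell

If those fail but a normal ping works, something in the path is still at 1500.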
 

Potatospud

New Member
Jan 30, 2017
17
2
3
39
Also, the FreeNAS box has four pools: 4x S3500 for SAN (the ESXi datastore), 2x i535 in a mirror for jails, 4x 3TB WD Red for main bulk storage, and 2x 1TB WD Black in a mirror for NAS user folders. The SAN storage and bulk storage are striped mirrors, whereas the other two pools are straight mirrors.
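In zpool terms the layout is roughly the following (pool and device names are made up for illustration; FreeNAS builds these through the GUI and references gptid labels rather than raw da devices):

zpool create san   mirror da0 da1 mirror da2 da3    # 4x S3500, striped mirrors (ESXi datastore)
zpool create jails mirror da4 da5                   # 2x i535, straight mirror
zpool create bulk  mirror da6 da7 mirror da8 da9    # 4x 3TB WD Red, striped mirrors
zpool create users mirror da10 da11                 # 2x 1TB WD Black, straight mirror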
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Correct. To be more precise, I believe it was around 9.28 Gbps: ESXi 6, FreeNAS 9.3, and an Intel X520-DA2 in each box using a 1m DAC. Oh, and jumbo frames at 9k.
He's talking network throughput, NOT disk throughput to that 4-disk S3500 pool. NO WAY he's pushing 10G with 4 S3500s, I'd bet my mortgage on it! :-D
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
He's talking network throughput, NOT disk throughput to that 4-disk S3500 pool. NO WAY he's pushing 10G with 4 S3500s, I'd bet my mortgage on it! :-D
Truth. I doubt I'd be pushing 10G even with a 4 disk HUSSL pool if we're talking sustained reads/writes.
 

Potatospud

New Member
Jan 30, 2017
17
2
3
39
He's talking network throughput, NOT disk throughput to that 4-disk S3500 pool. NO WAY he's pushing 10G with 4 S3500s, I'd bet my mortgage on it! :-D
Affirmative, that's network throughput, not disk. I apologize for not clarifying; I can see now that it seemed a bit misleading.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Truth. I doubt I'd be pushing 10G even with a 4 disk HUSSL pool if we're talking sustained reads/writes.
Yeah, no way, not even with 8 in RAID-0 I don't think, at least not over NFS/iSCSI; maybe close to or saturated on the filer itself. I know with 4x HUSMM 400GB devices in RAID-0 on a ZoL CentOS setup I could push 1.7GB/s read / 1.2GB/s write on a simple dd with a 1M block size spitting out a 20G file LOCALLY... NFS/iSCSI was pushing 400-450MB/sec back to vSphere on at least a Storage vMotion of 20 or so VMs from a source pool that could read plenty fast. All hypervisor to hypervisor, AIO to AIO, over 10G.
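For anyone wanting to repeat that local test, it was basically just dd; something like the below (the path is an example, and note that if the dataset has compression enabled, reading from /dev/zero will inflate the numbers, so test against an incompressible file or turn compression off for the test):

dd if=/dev/zero of=/tank/ddtest bs=1M count=20480    # write a 20G file in 1M blocks
dd if=/tank/ddtest of=/dev/null bs=1M                # read it back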
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Affirmative, that's network throughput, not disk. I apologize for not clarifying; I can see now that it seemed a bit misleading.
All good, I knew because that's right about what I push with iperf across those platforms, and trust me, I've tried to saturate 10G OVER the network with 4 SSDs (SAS and SATA S3700s/HUSSLs/HUSMMs) and could not do it :-D

Shame too, 'cause I WAS trying to justify picking up an EX4300 and going to 40G Ethernet, haha!

Not being a smartarse at all originally :-D
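The iperf runs were nothing fancy, for anyone curious; roughly this (the IP is just an example, and I usually let a few parallel streams run):

iperf -s                          # on the filer
iperf -c 10.0.13.5 -t 30 -P 4     # from the other end: 30-second run, 4 parallel streams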
 

Potatospud

New Member
Jan 30, 2017
17
2
3
39
All good, I knew because that's right about what I push with iperf across those platforms, and trust me, I've tried to saturate 10G OVER the network with 4 SSDs (SAS and SATA S3700s/HUSSLs/HUSMMs) and could not do it :-D

Shame too, 'cause I WAS trying to justify picking up an EX4300 and going to 40G Ethernet, haha!

Not being a smartarse at all originally :-D
Lol, same same!!! Usually I can come up with at least some justification to the Mrs. as to why I'm acquiring new hardware, but when I attempted to do so for some 40GbE adapters, a switch to match, and a 12G SAS expander shelf, all I could muster was "ummm, because it'd be sweet and really really fast....". Needless to say I was shot down, crash and burn. So that upgrade phase is on the back burner for now. *Sigh*
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Correct. To be more precise, I believe it was around 9.28 Gbps: ESXi 6, FreeNAS 9.3, and an Intel X520-DA2 in each box using a 1m DAC. Oh, and jumbo frames at 9k.
I've seen a bunch of users still running 9.3. Is there a specific advantage to doing that, or have you just not gotten around to upgrading?
 

Potatospud

New Member
Jan 30, 2017
17
2
3
39
I've seen a bunch of users still running 9.3. Is there a specific advantage to doing that, or have you just not gotten around to upgrading?
At the time of that testing I was on 9.3, but I have since moved to 9.10 (and ESXi 6.5), and I'm eagerly awaiting 10 (due in about 2 days). A major transition I'm both excited about and slightly nervous about is the move from BSD jails, which I've come to know and love, to Docker containers. I've read plenty that indicates much better performance and portability, but I haven't really worked with Docker containers much. I'll probably upgrade my AIO setup first and play around; once I'm comfortable, I'll upgrade the bare metal box.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113


How do you guys have networking configured in your FreeNAS box (interested in both iSCSI and NFS setups)? My network is physically connected as shown above, and I'm looking for some suggestions. It's been some time since I've configured a bare metal server, as I've been mainly working with virtual hosts over the past few years.
 

Potatospud

New Member
Jan 30, 2017
17
2
3
39
Mine is similar: 10GbE direct attached to ESXi, then 4x 1GbE (lagg0 using LACP) to the main switch. The bare metal FreeNAS box is an A1SRM-2758F and the ESXi boxes are X9SCM-F-Os; one box is bare metal ESXi and the other is an AIO setup, but all have X520-DA2s and are direct attached for backups and NFS-exported datastores. They all have at least 1x 1GbE dedicated to management, and the remaining 1GbE links are either dedicated to certain tasks or LACP'd together. I can either make a diagram and post it tonight or dig out the diagrams I swear I've already created (but can't remember where I saved them).
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Got this covered for me, and your setup seems similar enough. I prefer stub-VLAN/dedicated setups for each type of storage traffic.

EX: My config
LAN - vlan 10 (routed/firewalld)
MGMT - vlan 11 (routed/firewalld)
VMOTION - vlan 12 (stub vlan/no routing/gw)
NFS - vlan 13 (stub vlan/no routing/gw)
iSCSI - vlan 14 (stub vlan/no routing/gw)
FT - vlan 15 (stub vlan/no routing/gw)

Then just set up a trunk port from the physical switch to the physical NIC in vSphere, define/tag VLANs on the appropriate uplinks, create standard vSwitch or vDS virtual switches with VLAN port groups/port profiles, map them to VMs (in your case the storage AIO VM with vmxnet3 vNICs on each network), create vmk's for the NFS/iSCSI mounts, create iSCSI initiators, map LUNs, and add datastores.

High level, that will isolate/segregate a dedicated broadcast domain for each type of traffic on your LAN, focusing specifically on the IP SAN side of the house here.

:-D

EDIT: Worth mentioning (though it may be obvious) that I have multiple vNICs added to my FreeNAS AIO: one for LAN that is routed/on the proper VLAN for SMB/NFS/iSCSI shares, but the hypervisor traffic is totally isolated.
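If it helps, the vSphere side of that boils down to a handful of esxcli calls per storage network; something like this for the NFS VLAN (the vSwitch/uplink/vmk names, VLAN ID, and IP are just examples following my numbering above):

esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=NFS
esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=13
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=NFS
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.0.13.11 --netmask=255.255.255.0 --type=static

Same idea for iSCSI on VLAN 14, then bind the initiator and add the datastores as usual.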
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Mine is similar: 10GbE direct attached to ESXi, then 4x 1GbE (lagg0 using LACP) to the main switch. The bare metal FreeNAS box is an A1SRM-2758F and the ESXi boxes are X9SCM-F-Os; one box is bare metal ESXi and the other is an AIO setup, but all have X520-DA2s and are direct attached for backups and NFS-exported datastores. They all have at least 1x 1GbE dedicated to management, and the remaining 1GbE links are either dedicated to certain tasks or LACP'd together. I can either make a diagram and post it tonight or dig out the diagrams I swear I've already created (but can't remember where I saved them).
Since my NICs will be connected to a switch and not directly to ESXi hosts, I assume I'll just throw my FreeNAS NICs into the NFS/iSCSI VLAN?

P.S. I like diagrams ;).


Got this covered for me, and your setup seems similar enough. I prefer stub-VLAN/dedicated setups for each type of storage traffic.

EX: My config
LAN - vlan 10 (routed/firewalld)
MGMT - vlan 11 (routed/firewalld)
VMOTION - vlan 12 (stub vlan/no routing/gw)
NFS - vlan 13 (stub vlan/no routing/gw)
iSCSI - vlan 14 (stub vlan/no routing/gw)
FT - vlan 15 (stub vlan/no routing/gw)

Then just set up a trunk port from the physical switch to the physical NIC in vSphere, define/tag VLANs on the appropriate uplinks, create standard vSwitch or vDS virtual switches with VLAN port groups/port profiles, map them to VMs (in your case the storage AIO VM with vmxnet3 vNICs on each network), create vmk's for the NFS/iSCSI mounts, create iSCSI initiators, map LUNs, and add datastores.

High level, that will isolate/segregate a dedicated broadcast domain for each type of traffic on your LAN, focusing specifically on the IP SAN side of the house here.

:-D

EDIT: Worth mentioning (though it may be obvious) that I have multiple vNICs added to my FreeNAS AIO: one for LAN that is routed/on the proper VLAN for SMB/NFS/iSCSI shares, but the hypervisor traffic is totally isolated.
Thanks for the breakdown @whitey.

I assume vlan 15 is for FT, so is that your H/A heartbeat network?

As far as vNetworking goes, I'm already all over that piece where the hypervisors are concerned. But with regard to the FreeNAS box itself, since it's bare metal, I assume I'll just place all the NICs into the NFS/iSCSI VLAN (depending on which I go with), since the only traffic that will be inbound/outbound on those interfaces is VM traffic?
 

Potatospud

New Member
Jan 30, 2017
17
2
3
39
Since my NICs will be connected to a switch and not directly to ESXi hosts, I assume I'll just throw my FreeNAS NICs into the NFS/iSCSI VLAN?

P.S. I like diagrams ;).
I presume that is accurate. I haven't done much official work with VLANs, only "will this work" testing. And yeah, same here, I'm a visual kind of guy, so I'll see if I can dig those up for ya and post 'em tonight.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Are there any gotchas for using LACP in FreeNAS? I've got my Cisco SG350XG LAG group set up with LACP, but once I configure the LAG group in FreeNAS my server is unreachable. I've tried setting the VLAN on the LAG group on the switch to both General and Access mode, but neither works. Just want to be sure I'm not missing something on the FreeNAS side before I start going crazy.

Does the VLAN I've assigned to the ports need to be defined in FreeNAS by any chance? I saw the VLAN tab but figured that was for virtual interfaces.
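One thing I still need to check from the FreeNAS shell is whether LACP actually negotiated. Something like this (interface names and the IP are just placeholders for whatever the box ends up using):

ifconfig lagg0      # healthy LACP ports show ACTIVE,COLLECTING,DISTRIBUTING
# manual equivalent of what the GUI builds, handy for testing from the console:
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1
ifconfig lagg0 inet 192.168.10.50/24 up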
 