Finally finished all the testing and ready to make the change to ZFS on Linux


BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
I guess I jumped on ZoL a bit too early (about 6 years ago), specifically on Ubuntu. I got burned by an Ubuntu upgrade and never looked back. Luckily, FreeNAS was able to import the ZFS pool as-is and no data was lost. Just for reference, FreeNAS does support VMware VAAI with iSCSI, but not with NFS. Agree on holding off on FreeNAS upgrades, and for the love of god, don't use their plugins/jails/VMs etc., as these are pretty much guaranteed to break later on.
 

vangoose

Active Member
May 21, 2019
Canada
I tried all the different options on this server, but each has its own problems.

FreeNAS - LACP issue (the workaround is to use load balancing; the bug was reported against 11.2Ux and upstream FreeBSD). NAS features (SMB/CIFS) are great. iSCSI performance is fantastic, but not being able to delete a LUN is a roadblock until it's fixed.

Linux - a few crashes with iSER; could be the SCST version, but I haven't had enough time to figure it out.

Solaris 11.4 - crash/core dump when a COMSTAR LUN is accessed.

In the end, I decided to virtualize the storage servers on ESXi, since using the entire server for a storage server is overkill, and this way I can have multiple storage servers serving different protocols.

The configuration is getting really, really complicated now, lol.
ESXi version - 6.7U3
Disk - 2*64GB Supermicro SATA DOM (planned as an OS mirror but now runs ESXi)
NVMe - HGST SN260 6.4TB AIC
SAS - 8*10TB HGST SAS drives connected to the onboard SAS3008 for the data zpool
CPU - EPYC 7302P
Memory - 4*32GB Micron PC3200. 128GB was planned for the storage server to start; I will add another 128GB later since it's running ESXi now.
Motherboard - Supermicro H11SSL-NC, onboard SAS3008 flashed to IT mode.
Network adapters. I have front-end and back-end storage traffic separated onto different NICs and switches.
- Intel X550-T2 - didn't plan to use it, but besides the X540-T2 this is the card I have where SR-IOV and the virtual functions work and are supported by guest VMs (a config sketch follows this list).
- HPE 530SFP (BCM57810) - supports SR-IOV and NIC partitioning (4 physical functions per port and 16 virtual functions per PF). Unfortunately I wasn't able to make SR-IOV work in guests except by passing the entire physical NIC through to the guest, so I added the X550.
- Mellanox CX-3 - will be passed through to Linux if I want to use iSER. SR-IOV is no longer supported in ESXi 6.7 for CX-3 cards.
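
For the X550, the number of SR-IOV virtual functions on ESXi 6.7 can be set either per adapter in the vSphere client or via the driver module parameter. Roughly, assuming the native ixgben driver is in use (parameter names vary by driver build, so list them first):

# Show which parameters the ixgben module accepts on this build
esxcli system module parameters list -m ixgben

# Ask for 8 VFs on each X550 port (max_vfs is the usual Intel parameter name),
# then reboot the host and assign a VF to the guest as a PCI device
esxcli system module parameters set -m ixgben -p "max_vfs=8,8"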

The motherboard BIOS needs a couple of settings explicitly enabled to make PCI passthrough work properly:
- IOMMU - Enabled (default is Auto)
- ACS - Enabled (default is Auto)
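
With IOMMU and ACS enabled, the devices to hand to the storage VM (SAS3008, SN260, CX-3) can be located from the ESXi shell before toggling passthrough in the vSphere client; for example:

# List every PCI device with its address and vendor/device strings;
# note the addresses of the SAS3008 HBA, the SN260 AIC and the ConnectX-3
esxcli hardware pci list

# Toggle passthrough for those addresses in the host client
# (Host > Manage > Hardware > PCI Devices), then reboot the host.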

The SN260 AIC has the following namespaces created:
- ns1 600GB for the ESXi local datastore
- ns2 1800GB for zpool (RDM to VM)
- ns3 1800GB for zpool (RDM to VM)
- ns4 1800GB for zpool (RDM to VM)
- ns5 24GB (RDM to VM) as the slog for the data pool (only 3 non-VM-datastore NFS volumes will be using the slog)
The namespaces need to be formatted to 512B for ESXi to see them; the default is 4K.
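
Since ns5 is earmarked as the slog for the SAS data pool, the pool inside the storage VM could end up looking something like this. The post doesn't give the vdev layout or device names, so the striped-mirror arrangement and the /dev paths below are purely illustrative:

# Hypothetical layout: 8x 10TB SAS as four mirrored pairs, with the RDM'd
# 24GB namespace (seen by the guest as /dev/sdj here) as the log device
zpool create data \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde \
  mirror /dev/sdf /dev/sdg \
  mirror /dev/sdh /dev/sdi \
  log /dev/sdj

# Verify the layout and that the log vdev shows up
zpool status data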

Only 1 NVMe AIC is used, but the VMs on it are non-critical and I have daily backups of the VMs. I will probably change to U.2 NVMe in the future since the motherboard has 2 onboard NVMe ports.

In order to manage namespaces, I need a Windows machine with HDM 3.4 and the WD NVMe driver. I built a Windows VM, passed the NVMe card through to it, and was able to manage them.
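
For anyone doing the same namespace split from Linux instead of HDM, nvme-cli can do it too. A rough sketch; the device path, controller ID, block counts and LBA-format index are assumptions, so check nvme id-ctrl and nvme id-ns on the actual drive first:

# Inspect the controller: cntlid, total capacity, supported LBA formats
nvme id-ctrl /dev/nvme0
nvme list-ns /dev/nvme0 --all

# Create and attach a 600GB namespace; sizes are given in blocks
# (600e9 / 512 = 1171875000) and --flbas=0 assumes index 0 is the 512B format
nvme create-ns /dev/nvme0 --nsze=1171875000 --ncap=1171875000 --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0

# An existing namespace can also be reformatted to 512B sectors in place
nvme format /dev/nvme0 --namespace-id=2 --lbaf=0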
Interesting performance stats here:
- Read 6.2GB/s, write 2.2GB/s when the NVMe card is passed through to the VM directly
- Read 5.8GB/s, write 2.2GB/s when a namespace is RDMed to the VM
- Read 1.7GB/s, write 1.7GB/s when the disk/namespace is used as a datastore and the test runs inside a VM
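
The post doesn't say which tool produced those numbers; a large-block sequential fio run inside the guest along these lines should get close (the target path is a placeholder, and the write job is destructive, so only point it at a scratch device):

# Sequential 1M reads at queue depth 32 with direct I/O against the RDM'd namespace
fio --name=seqread --filename=/dev/sdb --rw=read --bs=1M \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based

# Same pattern for writes
fio --name=seqwrite --filename=/dev/sdb --rw=write --bs=1M \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based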

To be continued......
 

Rand__

Well-Known Member
Mar 6, 2014
If you have some perf values from the tests... I assume you have different requirements than I do, but 'performance is fantastic' was not something I associated with FreeNAS up until now :p
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
If you have some perf values from the tests... I assume you have different requirements than I do, but 'performance is fantastic' was not something I associated with FreeNAS up until now :p
I was able to saturate the 2*10Gb network for both read and write, so 2.2GB/s each way. Around 30MB/s for 4K, both read and write, at Q1T1.

Storage vMotion between datastores on the same FreeNAS server runs at 1.4GB/s on the backend.

Wasn't able to get the same performance on linux.
 

Rand__

Well-Known Member
Mar 6, 2014
iSCSI, so async pool or sync?

And yeah, ZoL still seems to have performance issues :/
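
For what it's worth, whether those iSCSI numbers are sync or async comes down to the sync property on the backing zvol; the pool/zvol name below is hypothetical:

# Show how the zvol currently handles synchronous writes (standard/always/disabled)
zfs get sync data/iscsi-lun0

# Force every write through the ZIL/slog to see the worst-case sync numbers
zfs set sync=always data/iscsi-lun0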