TrueNAS Beta 2 is working great!


sboesch

Active Member
Aug 3, 2012
467
95
28
Columbus, OH
I installed Beta 2 on one of my servers: dual E5-2620 v3 CPUs and 128GiB of RAM. I have 8x 4TB rust disks in one pool and two 2TB SSDs in another. I'm seeing the same speeds and performance as I did with ZoL on Debian Buster. My previous experience with TrueNAS Beta 1 was not so hot; I couldn't get the write speeds up to snuff.
I am now happy enough with the performance to mount NFS shares on my Proxmox hosts and move some of the guest VM disks over to it. Not sure what's changed with Beta 2, but I saw a twofold improvement in disk speed.
Note: I imported the pools from my Debian Buster install, so I have changed nothing about the layout of the zvols since the TrueNAS Beta 1 install.
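For anyone wanting to follow the same migration path, a rough sketch of the pool import plus Proxmox NFS mount looks like this. The pool name, dataset path, storage ID, and IP are all hypothetical; substitute your own.

```shell
# Assumed names: pool "tank", dataset "tank/vmstore", server IP 192.168.1.50.

# On the old Debian/ZoL host, cleanly export the pool first:
zpool export tank

# On the TrueNAS host, list importable pools, then import by name
# (the existing vdev layout and datasets come along unchanged):
zpool import
zpool import tank

# On each Proxmox host, register the NFS share as a storage backend:
pvesm add nfs truenas-vmstore \
    --server 192.168.1.50 \
    --export /mnt/tank/vmstore \
    --content images,rootdir
```

In practice you would create the NFS share itself in the TrueNAS UI (Sharing → Unix Shares) rather than by hand, but the Proxmox side is the same either way.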

Screenshot from 2020-08-23 16-31-18.png
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
I was just about to comment on your other post to ask whether you have tried Beta 2.

I installed it in a VM to play with and am quite happy with what I see, though I haven't used it for VM storage yet. I'm just working on my new 100G CX-5 card to reconfigure my storage ESXi host, and will build a new storage VM with 12 Beta 2 later today.
 

zack$

Well-Known Member
Aug 16, 2018
701
315
63
Recently tested 11.2-U1 through 12 Beta 2.

Not only is it much more stable as a VM on ESXi (no buggy passthrough issues with an NVMe drive or network card), but performance is great.

Uptime of over a week and absolutely no issues.

Also, API key management is very nice and integrates well with the latest TrueCommand.
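For reference, the API keys generated under System → API Keys in 12.0 work as Bearer tokens against the v2.0 REST API. A minimal sketch (the hostname and key value are placeholders):

```shell
# Hypothetical host and key; endpoint paths follow the TrueNAS v2.0 API.
TRUENAS="https://truenas.local"
API_KEY="1-xxxxxxxx"   # generated under System -> API Keys

# Query basic system info using the key as a Bearer token:
curl -sk -H "Authorization: Bearer ${API_KEY}" \
    "${TRUENAS}/api/v2.0/system/info"

# List pools:
curl -sk -H "Authorization: Bearer ${API_KEY}" \
    "${TRUENAS}/api/v2.0/pool"
```

TrueCommand uses the same key mechanism when you register the NAS, which is why the two integrate cleanly.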
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
Got it up and running in a VM with a PCI passthrough NIC and disks.
Performance is better than 11.3, and it will get better once released.

With so many new improvements, I'm not looking at iSER anymore; VMware's iSER isn't much better than iSCSI. NVMe-oF might still be interesting if I go that route.
 

sboesch

Active Member
Aug 3, 2012
467
95
28
Columbus, OH
Got it up and running in a VM with a PCI passthrough NIC and disks.
Performance is better than 11.3, and it will get better once released.

With so many new improvements, I'm not looking at iSER anymore; VMware's iSER isn't much better than iSCSI. NVMe-oF might still be interesting if I go that route.
24 hours now for me. I am only doing NFS mounts, no iSCSI. Performance is solid; I think I can eke out a few more IOPS on reads, which are about 10% slower right now than ZoL on Debian.
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
24 hours now for me. I am only doing NFS mounts, no iSCSI. Performance is solid; I think I can eke out a few more IOPS on reads, which are about 10% slower right now than ZoL on Debian.
NFS-Default.JPG

This is an NFS test from a VM on an NFS datastore.

Only one 6.4TB disk in the zpool, with sync disabled since the NVMe has PLP and the server is on a UPS.
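For anyone copying this setup, the sync setting is a single ZFS property. The pool/dataset names below are hypothetical; note the trade-off before flipping it.

```shell
# sync=disabled acknowledges writes before they reach stable storage.
# As described above, it is only reasonable when the device has power-loss
# protection (PLP) and the server is on a UPS; writes in flight are still
# lost on a crash, so don't use it for data you can't re-create.
zfs set sync=disabled tank/vmstore

# Verify the property took effect:
zfs get sync tank/vmstore
```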
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
Did a bit more testing for fun. Both iSCSI and NFS 4.1 use the same network connections, each with two paths.

iSCSI-Default-CompressionOff.JPG, iSCSI-Default-CompressionOn.JPG, pNFS-Default-CompressionOff.JPG, pNFS-Default-CompressionOn.JPG

Really looking forward to the final release. Will go back to NFS for simplicity.
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
Had an unexpected reboot last night.

Seems like it was caused by a syslog-ng core dump. Removing remote syslog to see if that helps.

Saw another bug, "can't locate /etc/zfs/export", but it doesn't seem to affect the NFS exports.

Another test using the Real profile. Surprised that NFS is faster than iSCSI; it seems NFS 4.1 handles multipath better than iSCSI MPIO.
iSCSI-Real-CompressionOn.JPG, pNFS-Real-CompressionOn.JPG
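The NFS 4.1 multipath behavior comes from session trunking: one datastore can be mounted against multiple server IPs, which ESXi can't do with NFS 3. A sketch of the mount from the ESXi shell (IPs, share path, and datastore name are hypothetical):

```shell
# Mount an NFS 4.1 datastore over two server IPs (session trunking);
# each IP should sit on a separate subnet/path to the NAS.
esxcli storage nfs41 add \
    --hosts 10.0.10.50,10.0.20.50 \
    --share /mnt/tank/vmstore \
    --volume-name truenas-nfs41

# Confirm the datastore and its server list:
esxcli storage nfs41 list
```

With iSCSI MPIO, by contrast, path selection depends on the PSP round-robin settings, which is one plausible reason the NFS 4.1 numbers came out ahead here.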
 

zack$

Well-Known Member
Aug 16, 2018
701
315
63
Beta 2.1 is out. Don't know what it fixed.
August 27, 2020

iXsystems is pleased to announce the general availability of TrueNAS 12.0-BETA2.1! This is a 12.0-BETA2 hotpatch release to fix a ZFS permissions issue that affects the base FreeBSD OS (NAS-107270). 12.0-BETA2 users are encouraged to update to BETA2.1 as soon as possible.
It's gotta be a nasty bug to warrant a BETA2.1 hotpatch release.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Uh oh :/

Anyone have any indication of why performance is up? That's a good sign at least.
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada


I tried putting a high load on it, but it was stable during testing. The reboots happened in the early morning when there was no load.
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
Did some ESXi tuning for power management and VM CPU/cores, and managed to get writes up quite a bit. Sequential write is as fast as local for my HGST SN260; I was able to tune reads to reach 5300MB/s (dual 25Gb ports), but in the end, balanced read/write is more important.

The VM uses dual 100Gb CX-5 SR-IOV VFs. I tried tuning for 40G/100G, but both had a negative impact on performance since the clients are 10G/25G. Reverted back to 10G buffer tuning, which gives the best performance.

pNFS-Default-AfterTune.JPG, pNFS-Real-AfterTune.JPG
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
What did you tune in ESXi for power management?

Any idea why it is stable now? Any other changes?
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
What did you tune in ESXi for power management?

Any idea why it is stable now? Any other changes?
The CPU is a 7302P, with 8x 32GB DDR4-3200.
Changed NPS from 1 to 4, changed the power policy from Balanced to High Performance, changed the vCPU layout from 1 socket/4 cores to 4 sockets/1 core, set the CPU reservation to 12GHz, and set latency sensitivity to High for the VM.

Combined, the max CPU frequency is up from 6600MHz to 7800MHz (Balanced) to 8200MHz (High Performance). I still can't go over 8200MHz, but I'm getting the max performance my NVMe or network can push.

The first time it crashed, I guessed it was caused by syslog-ng; I send all logs to Graylog. I disabled remote syslog, but it still rebooted. Couldn't figure out what caused the second reboot.

I'm not using any other features: no plugins, no jails, just plain storage over NFS/SMB and iSCSI.
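For the VM-level settings above, the equivalent .vmx entries look roughly like this (normally set through the vSphere UI; the key names are standard VMware options, the values match the post, and the NPS change is a BIOS setting, not a VM one):

```ini
; 4 sockets x 1 core:
numvcpus = "4"
cpuid.coresPerSocket = "1"
; Latency sensitivity High (needs a full CPU reservation to take effect):
sched.cpu.latencySensitivity = "high"
; CPU reservation in MHz (the 12GHz reservation described above):
sched.cpu.min = "12000"
```

The host-side power policy (Balanced vs. High Performance) is set separately under Host → Configure → Power Management.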
 