Hi all,
I recently created a new NAS using TrueNAS to replace my old self-built hardware RAID-based NAS. The NAS is part of my homelab and consists of two datasets:
1) The first dataset contains media files for Plex.
2) The second dataset contains application data for various applications; for example, all user data managed by Nextcloud is saved to this dataset.
The hardware I picked for the system is the following:
HP DL380 Gen9 12xLFF storage server
2x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz (56 Threads)
128GB DDR4 ECC
LSI 9305-16i (Firmware: 16.00.12.00, IT mode)
1x Samsung Datacenter SSD PM893 240GB SATA 6G as boot device
12x Seagate Exos 16TB HDD SAS 12G for storage pool
2x Intel Optane SSD 900P as SLOG in storage pool
After building the system, I installed TrueNAS-12.0-U8 and configured all 12 HDDs as a single RAIDZ3 vdev for the main pool, with both Intel Optane SSDs as SLOG.
I configured the ZFS pool in the following way:
ashift=12
sync=standard
compression=lz4
recordsize=128KiB
atime=off
exec=off
The rest is left default in TrueNAS.
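For reference, the layout and settings above correspond roughly to the following CLI commands. The pool name "tank" and the device names are placeholders on my part (I actually configured everything through the TrueNAS UI), and note that ashift is applied at pool creation time rather than via zfs set:

```shell
# Rough CLI equivalent of the pool layout and settings above.
# Pool name "tank", disk names da0..da11 and Optane names nvd0/nvd1 are placeholders.
zpool create -o ashift=12 tank \
    raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
    log mirror nvd0 nvd1

# Dataset properties as listed above
zfs set sync=standard tank
zfs set compression=lz4 tank
zfs set recordsize=128K tank
zfs set atime=off tank
zfs set exec=off tank
```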
I don't know whether the performance I measured is expected, so I'm looking for advice: is the NAS really slow?
Here are my results for various block sizes using async and sync IO:
Async:
Code:
fio --filename=test --ioengine=posixaio --rw=randread --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
fio --filename=test --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
fio --filename=test --ioengine=posixaio --rw=read --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
fio --filename=test --ioengine=posixaio --rw=write --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
Test | IOPS | MB/s |
4K QD4 rnd read | 39800 | 163 |
4K QD4 rnd write | 10400 | 42.5 |
4K QD4 seq read | 87200 | 357 |
4K QD4 seq write | 57400 | 235 |
64K QD4 rnd read | 34200 | 2242 |
64K QD4 rnd write | 9303 | 610 |
64K QD4 seq read | 30200 | 1979 |
64K QD4 seq write | 12700 | 831 |
1M QD4 rnd read | 5484 | 5751 |
1M QD4 rnd write | 741 | 778 |
1M QD4 seq read | 5723 | 6002 |
1M QD4 seq write | 855 | 897 |
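As a quick sanity check on these numbers: the MB/s column is just IOPS times block size. For example, for the 4K QD4 random read row:

```shell
# 39800 IOPS at 4 KiB (4096 bytes) per IO, converted to decimal MB/s
echo $(( 39800 * 4096 / 1000000 ))
# prints 163, matching the table
```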
Sync:
Code:
fio --filename=test --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
fio --filename=test --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
fio --filename=test --sync=1 --rw=read --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
fio --filename=test --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
Test | IOPS | MB/s |
4K QD4 rnd read | 20100 | 82.5 |
4K QD4 rnd write | 4885 | 20.0 |
4K QD4 seq read | 265000 | 1087 |
4K QD4 seq write | 9959 | 40.8 |
64K QD4 rnd read | 17000 | 1113 |
64K QD4 rnd write | 3549 | 233 |
64K QD4 seq read | 29900 | 1962 |
64K QD4 seq write | 4373 | 287 |
1M QD4 rnd read | 1959 | 2055 |
1M QD4 rnd write | 634 | 665 |
1M QD4 seq read | 1889 | 1981 |
1M QD4 seq write | 651 | 683 |
I'm aware that these tests may not bypass any caches, so if you have further tests I could run, please let me know. I don't have performance results from a similar system, which is why I'm unsure whether the measured performance is "good".
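One idea I've considered (but not yet tried) to reduce the influence of the ARC on the read tests is a scratch dataset with data caching disabled; "tank/bench" here is a placeholder name:

```shell
# primarycache=metadata keeps file data out of the ARC,
# so reads have to hit the disks instead of RAM.
zfs create tank/bench
zfs set primarycache=metadata tank/bench
fio --filename=/mnt/tank/bench/test --ioengine=posixaio --rw=randread --bs=4k \
    --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300
```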
Thanks in advance!