After watching too much Linus Tech Tips, I bought a Gigabyte server:
R272-Z32 with 8x 1.97TB Samsung NVMe Drives.
I set up a raidz2 pool across the 8 drives on a FreeBSD system, with the goal of sharing it over NFS (eventually for VM hosting in XCP-ng).
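For context, the pool and export were created roughly like this (pool, dataset, and device names below are placeholders, not my exact commands):

zpool create tank raidz2 nvd0 nvd1 nvd2 nvd3 nvd4 nvd5 nvd6 nvd7
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore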
My test scenario consists of the following 5 tests:
Test 1 (IOPS Random Read):
fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
Test 2 (IOPS Seq Read):
fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=read
Test 3 (IOPS Seq Write):
fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=write
Test 4 (MiB/s Read):
fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=read --ramp_time=4
Test 5 (MiB/s Write):
fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4M --iodepth=256 --size=10G --readwrite=write --ramp_time=4
Running the tests on the local FreeBSD machine, I get the following results:
Test 1: 73'000 IOPS
Test 2: 493'000 IOPS
Test 3: 221'000 IOPS
Test 4: 5767 MB/s
Test 5: 2866 MB/s
Running the same tests from another server against the NFS share (10 Gbit network; the client mount is sketched after the numbers), I get the following results:
Test 1: 4'100 IOPS
Test 2: 5'700 IOPS
Test 3: 5'600 IOPS
Test 4: 948 MB/s
Test 5: 920 MB/s
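For reference, the tests run against a plain NFS mount along these lines (hostname, path, and options are placeholders rather than my exact settings, and option names differ between Linux and FreeBSD clients):

# illustrative mount only, not necessarily the exact options in use
mount -t nfs -o vers=3,rsize=131072,wsize=131072 nfs-server:/tank/vmstore /mnt/vmstore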
I'm happy with Tests 4 & 5, which almost max out the 10 Gbit/s network, but the IOPS performance is horrible.
To get to this point, I enabled jumbo frames and disabled the sync option on the ZFS dataset.
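Concretely, that tuning amounts to roughly the following (dataset and interface names are placeholders; the MTU change of course has to match on the client and switch side as well):

zfs set sync=disabled tank/vmstore    # skip synchronous write semantics on the dataset
ifconfig ix0 mtu 9000                 # jumbo frames on the 10GbE interface (placeholder name)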
My problem is the IOPS performance.
Even my old non-NVMe NetApp scored almost 10x higher on Test 3.
Does anyone know where I need to tweak?
ZFS dataset?
NFS Server?
NFS Client?
Any help is appreciated.
Best regards,
MM