Hello, everyone.
Currently running:
EPYC 7302P
128GB (8x 16GB) 2667MHz
36x 10TB HGST Ultrastar drives in RAIDZ1 (three 12-wide vdevs), just for performance testing, inside an SC847 with BPN-SAS3-846EL1 and BPN-SAS3-826EL1 backplanes.
The HBA is a Lenovo 430-16i (9400-16i firmware, tri-mode) in a PCIe 3.0 x8 slot, with 4 MiniSAS HD cables to the backplanes (2 per backplane).
All drives are recognized as 12Gbps in both the BIOS and storcli64.
No optional pool features were configured (no dedup, no SLOG, etc.).
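Everything else is at pool defaults, which I could confirm with something like the following (pool name here is just a placeholder):
zfs get recordsize,compression,dedup tank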
I’m seeing about 2.6GB/s writes and 4+GB/s reads when I run:
fio --ramp_time=5 --gtod_reduce=1 --numjobs=1 --bs=1M --size=100G --runtime=60s --readwrite=write --name=testfile
Is this in line with what’s expected? If not, what’s the bottleneck in my setup? All 32 threads hover around 30-40% utilization while fio runs.
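To rule out a single-job limit, I could also try a multi-job run along these lines (the job count and per-job size are just a guess at a reasonable starting point):
fio --ramp_time=5 --gtod_reduce=1 --numjobs=4 --group_reporting --bs=1M --size=25G --runtime=60s --readwrite=write --name=testfile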
I was thinking the drives should be able to saturate the SAS3 links:
Front backplane: 2 connectors x 4 lanes = 8 lanes * 12Gbps = 96 Gbps
Each drive = ~2 Gbps, so 24 drives = 48 Gbps
Rear backplane: 2 connectors x 4 lanes = 8 lanes * 12Gbps = 96 Gbps
Each drive = ~2 Gbps, so 12 drives = 24 Gbps
Total estimated SAS3 bandwidth to the HDDs = 48 + 24 = 72 Gbps (~9 GB/s)
PCIe 3.0 x8 = ~8 GB/s
But I’m getting less than half of that (even counting RAIDZ1 parity, 2.6GB/s of user writes is only about 2.6 * 12/11 ≈ 2.8GB/s actually hitting the disks), and I’m curious what I’m not taking into consideration.
Each drive reports around 250MB/s when tested individually.
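For reference, the per-drive number came from a raw sequential read roughly like this (/dev/sdX stands in for each drive):
fio --name=drive-test --filename=/dev/sdX --readonly --direct=1 --bs=1M --rw=read --runtime=30 --time_based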
8 of the 36 drives use 4K sectors; the rest are 512-byte.
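Sector sizes were checked per drive with:
lsblk -o NAME,LOG-SEC,PHY-SEC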
Thanks, everyone, and hope to interact with you more down the road.