HGST HUSMM1640 Typical Performance?


Propaganda

Active Member
Dec 6, 2017
I'm not sure what I was expecting, but the 4K performance on these drives appears poor. Or perhaps it's my HP H240 HBA? It's even worse in ATTO than my 840 Pro running the OS.
[Attached screenshots: benchmark results, including the ATTO comparison]
 

Propaganda

Active Member
Dec 6, 2017
More data: I ran the drive in two different machines under FreeNAS, one with an HP H240 HBA (SAS3) and the other with an LSI 9217-4I4E HBA (SAS2). This is looking like an HBA issue?

HP H240 HBA:
Code:
root@truenas[/mnt/t1]# fio --filename=test --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=4
fio-3.19
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=282MiB/s][r=72.2k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3438: Mon Mar 29 18:11:32 2021
read: IOPS=44.0k, BW=176MiB/s (184MB/s)(10.0GiB/58295msec)
clat (nsec): min=1970, max=1190.1k, avg=21175.50, stdev=5201.83
lat (usec): min=2, max=1190, avg=21.27, stdev= 5.20
clat percentiles (nsec):
|  1.00th=[ 2832],  5.00th=[ 3504], 10.00th=[21120], 20.00th=[21632],
| 30.00th=[21888], 40.00th=[21888], 50.00th=[22144], 60.00th=[22400],
| 70.00th=[22656], 80.00th=[22912], 90.00th=[23168], 95.00th=[23680],
| 99.00th=[34560], 99.50th=[35584], 99.90th=[36608], 99.95th=[37120],
| 99.99th=[38656]
bw (  KiB/s): min=169245, max=280303, per=98.96%, avg=178003.11, stdev=12452.94, samples=109
iops        : min=42311, max=70075, avg=44500.43, stdev=3113.20, samples=109
lat (usec)   : 2=0.01%, 4=6.72%, 10=0.06%, 20=0.05%, 50=93.17%
lat (usec)   : 100=0.01%, 250=0.01%, 750=0.01%, 1000=0.01%
lat (msec)   : 2=0.01%
cpu          : usr=6.44%, sys=93.55%, ctx=1046, majf=0, minf=2
IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=2621440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency   : target=0, window=0, percentile=100.00%, depth=4
Run status group 0 (all jobs):
   READ: bw=176MiB/s (184MB/s), 176MiB/s-176MiB/s (184MB/s-184MB/s), io=10.0GiB (10.7GB), run=58295-58295msec
LSI 9217-4I4E HBA:
Code:
root@freenas0[/mnt/t0]# fio --filename=test --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=4
fio-3.16
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [r(1)][91.7%][r=660MiB/s][r=169k IOPS][eta 00m:02s]
test: (groupid=0, jobs=1): err= 0: pid=58047: Mon Mar 29 20:35:40 2021
read: IOPS=119k, BW=465MiB/s (488MB/s)(10.0GiB/22001msec)
clat (usec): min=2, max=132, avg= 7.70, stdev= 8.25
lat (usec): min=2, max=133, avg= 7.77, stdev= 8.25
clat percentiles (nsec):
|  1.00th=[ 2736],  5.00th=[ 2992], 10.00th=[ 3056], 20.00th=[ 3120],
| 30.00th=[ 3184], 40.00th=[ 3248], 50.00th=[ 3344], 60.00th=[ 3536],
| 70.00th=[ 3696], 80.00th=[22144], 90.00th=[22912], 95.00th=[23424],
| 99.00th=[24448], 99.50th=[24960], 99.90th=[36096], 99.95th=[36608],
| 99.99th=[37632]
bw (  KiB/s): min=416644, max=816538, per=95.28%, avg=454120.14, stdev=78569.59, samples=43
iops        : min=104161, max=204134, avg=113529.77, stdev=19642.33, samples=43
lat (usec)   : 4=76.45%, 10=1.30%, 20=0.09%, 50=22.15%, 100=0.01%
lat (usec)   : 250=0.01%
cpu          : usr=11.13%, sys=88.86%, ctx=427, majf=0, minf=4
IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=2621440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency   : target=0, window=0, percentile=100.00%, depth=4
Run status group 0 (all jobs):
   READ: bw=465MiB/s (488MB/s), 465MiB/s-465MiB/s (488MB/s-488MB/s), io=10.0GiB (10.7GB), run=22001-22001msec
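One caveat on both runs above: fio's psync engine is synchronous, so --iodepth=4 is effectively ignored and the drive only ever sees one outstanding read (the "IO depths: 1=100.0%" line in both outputs confirms it). If anyone wants to compare the two HBAs at a real queue depth, something like this should work; just a sketch using the posixaio engine, which is available on FreeBSD/TrueNAS:

Code:
fio --filename=test --rw=randread --bs=4k --numjobs=1 --iodepth=32 \
    --ioengine=posixaio --group_reporting --name=test --filesize=10G --runtime=300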
 

Propaganda

Active Member
Dec 6, 2017
Just got the new controller (LSI 9340-8i) and cables in and installed them in the same machine as the HP H240. Much better results; the HP H240 is anemic in comparison:
[Attached screenshot: benchmark results with the LSI 9340-8i]
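If anyone wants a number directly comparable to the fio runs above rather than the screenshot, the same 4K random-read command can be rerun against the pool behind the 9340-8i (sketch; I'm assuming the same mount point as the earlier H240 run):

Code:
fio --filename=test --sync=1 --rw=randread --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300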
 

Woogz

New Member
May 8, 2021
I just got an HP H240 HBA up and running using this 24" cable.

I'm getting these results on a pair of Samsung SAS SSDs.


The card reports that it's running at 12Gbps, but it looks like I'm getting closer to 10. I find it puzzling that the card claims a 12Gbps link yet seems to cap out below that.
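If it helps anyone else chasing this, the negotiated SAS link rate can be read straight from the drive's phy log with smartmontools; just a sketch, with a placeholder device name:

Code:
# Prints the SAS phy log page, including the negotiated link rate per phy
smartctl -l sasphy /dev/da0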

What's odd is that, other than your thread, I found very little reporting the actual speeds people were able to pull from these cards. I guess that's because most people are hooking up SAS HDDs, which are going to be the choke point anyway.

I also found this thread where a guy says he got 12Gbps but didn't provide any additional info after I PM'ed him.

Going to return it for an LSI 3008.

The card is just... weird.
 