Wondering if this SuperMicro is a good deal?


BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
Yes, that is the job of the file page cache.

With fio I get up to 10 GB/s on the same machine since it supports multiple threads, but fio is a lot more CPU-heavy than dd, so it doesn't scale as well.

I will check my SSD pool/server later, which has 24 CPU cores; it is far from being CPU-bottlenecked the way my E5620 is.
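For anyone who wants to reproduce the comparison, something along these lines should do it. This is only a sketch: the filename matches the image used later in the thread, and the job count and runtime are arbitrary.
Code:
# Single-threaded cached read with dd (served from the page cache on a repeat run)
dd if=win_10.raw of=/dev/null bs=1M

# Multi-threaded cached read with fio; --direct=0 keeps the page cache in play
fio --name=cached-read --filename=win_10.raw --readonly \
    --rw=read --bs=1M --direct=0 --numjobs=8 \
    --runtime=30 --time_based --group_reporting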
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
Limited by 1333 MHz DDR3 memory :(
This is a 120 GB Windows 10 image, and it does not fit 100% in RAM. Here I also have LZ4 compression enabled.
Code:
fio --filename=win_10.raw --direct=0 --sync=0 --rw=read --bs=1024k --numjobs=24 --iodepth=8 --runtime=60 --time_based --group_reporting --name=journal-test
journal-test: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=psync, iodepth=8
...
fio-2.16
Starting 24 processes
Jobs: 24 (f=24): [R(24)] [100.0% done] [43134MB/0KB/0KB /s] [43.2K/0/0 iops] [eta 00m:00s]
journal-test: (groupid=0, jobs=24): err= 0: pid=7503: Tue Mar 20 12:44:29 2018
  read : io=3015.6GB, bw=51463MB/s, iops=51462, runt= 60003msec
    clat (usec): min=80, max=30673, avg=464.25, stdev=663.09
     lat (usec): min=80, max=30674, avg=464.50, stdev=663.13
    clat percentiles (usec):
     |  1.00th=[  189],  5.00th=[  225], 10.00th=[  241], 20.00th=[  278],
     | 30.00th=[  322], 40.00th=[  370], 50.00th=[  402], 60.00th=[  430],
     | 70.00th=[  458], 80.00th=[  490], 90.00th=[  548], 95.00th=[  636],
     | 99.00th=[ 3312], 99.50th=[ 4704], 99.90th=[ 9664], 99.95th=[14528],
     | 99.99th=[19584]
    lat (usec) : 100=0.01%, 250=13.18%, 500=69.04%, 750=14.30%, 1000=1.38%
    lat (msec) : 2=0.66%, 4=0.73%, 10=0.62%, 20=0.09%, 50=0.01%
  cpu          : usr=0.76%, sys=91.25%, ctx=115560, majf=0, minf=15752
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=3087919/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: io=3015.6GB, aggrb=51463MB/s, minb=51463MB/s, maxb=51463MB/s, mint=60003msec, maxt=60003msec
I tried a smaller image which is already heavily compressed.
I also compiled a newer version of fio with NUMA support enabled.
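For reference, building fio from source is roughly the following. This is a sketch assuming a Debian/Ubuntu box, where installing libnuma-dev before running configure is what enables the NUMA options used below:
Code:
sudo apt-get install build-essential libnuma-dev   # package names assume Debian/Ubuntu
git clone https://github.com/axboe/fio.git
cd fio
./configure        # should report NUMA support when libnuma is present
make -j"$(nproc)"
./fio --version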

Best I managed to push out of my system.
Code:
olav@sola:~/projects/github/fio$ ./fio --filename=/sqltank/kvm/ubuntu1604/xenial-server-cloudimg-amd64-disk1.img --readonly --direct=0 --sync=0 --rw=read --bs=128k --numjobs=18 --iodepth=1 --runtime=60 --numa_cpu_nodes=0-1 --numa_mem_policy=interleave:0-1 --time_based --group_reporting --name=journal-test
journal-test: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=1
...
fio-3.5-56-g9634
Starting 18 processes
Jobs: 18 (f=18): [R(18)][100.0%][r=61.4GiB/s,w=0KiB/s][r=503k,w=0 IOPS][eta 00m:00s]
journal-test: (groupid=0, jobs=18): err= 0: pid=11869: Tue Mar 20 13:20:11 2018
   read: IOPS=516k, BW=63.0GiB/s (67.7GB/s)(3782GiB/60002msec)
    clat (usec): min=11, max=6583, avg=33.92, stdev=22.89
     lat (usec): min=11, max=6583, avg=34.02, stdev=22.89
    clat percentiles (usec):
     |  1.00th=[   21],  5.00th=[   24], 10.00th=[   26], 20.00th=[   28],
     | 30.00th=[   29], 40.00th=[   30], 50.00th=[   32], 60.00th=[   33],
     | 70.00th=[   35], 80.00th=[   37], 90.00th=[   41], 95.00th=[   46],
     | 99.00th=[   92], 99.50th=[  141], 99.90th=[  326], 99.95th=[  424],
     | 99.99th=[  816]
   bw (  MiB/s): min= 1994, max= 4319, per=5.56%, avg=3589.47, stdev=426.53, samples=2143
   iops        : min=15954, max=34556, avg=28715.73, stdev=3412.27, samples=2143
  lat (usec)   : 20=0.98%, 50=96.20%, 100=1.96%, 250=0.63%, 500=0.18%
  lat (usec)   : 750=0.02%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%
  cpu          : usr=3.72%, sys=95.85%, ctx=1087349, majf=0, minf=686
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=30983214,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=63.0GiB/s (67.7GB/s), 63.0GiB/s-63.0GiB/s (67.7GB/s-67.7GB/s), io=3782GiB (4061GB), run=60002-60002msec
 

seizedengine

New Member
Aug 22, 2016
@themaxx25 There is one more thing that might be worth mentioning if you are planning to do ZFS and are looking for very high read throughput: at least on ZFS on Linux, at higher throughput (3-5 GB/s) the CPU becomes a bottleneck. I have a non-paid project (meaning progress will be slow while I work on paid projects first) to tune ZFS on Linux on an array of 22x 12Gbps SAS SSDs, and my initial finding is that reads are CPU-bottlenecked at those speeds. I've seen similar bottlenecks even with 24x HDD, but not as bad. Increasing the block size seems to reduce the CPU load, so I suspect it has to do with the checksums; but that's speculation and I still need to confirm it.

Anyway, I know most people say you don't need that much CPU for a storage server, but if you're doing very fast or very wide vdevs, that might not be the case and you may benefit from a faster CPU.
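For context, the knobs being alluded to are the dataset's record size and checksum algorithm. A quick sketch, with a hypothetical pool/dataset name, and noting that recordsize only affects data written after the change:
Code:
# Show the current settings for a hypothetical dataset
zfs get recordsize,checksum,compression tank/data

# Larger records mean fewer checksum calls per byte read (applies to newly written data only)
zfs set recordsize=1M tank/data

# fletcher4 is the ZFS default and much cheaper than e.g. sha256
zfs set checksum=fletcher4 tank/data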
Off topic from the thread, but on your setup, if you do a scrub does system load go sky high? I think I have been seeing the same thing with checksumming on a Core i3 with an 8x SSD pool. Regular load is fine (12 VMs or so), but run a scrub and system load goes up to 60+. DTrace got me to a call related to checksumming, I think, but I misplaced my notes on it. I kind of stopped looking at it since the system is getting a new board and Xeon soon anyway.
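On ZFS on Linux, something along these lines should show where the time goes during a scrub (perf in place of DTrace; the pool name is a placeholder):
Code:
zpool scrub tank        # start the scrub
zpool status tank       # progress and estimated time remaining
perf top                # checksum work tends to show up as fletcher_4_* (or sha256) symbols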
 

whitey

Moderator
Jun 30, 2014
I definitely plan to use it as more than just a storage array. My plan is to install ESXi and run multiple VMs: Domain Controller, VDI for the family, Sophos, some light web hosting, etc. It's going to be a general-purpose box.
AIO w/ either ESXi or Proxmox as the hypervisor, pass-thru HBA to the storage appliance, reap the benefits of both worlds.
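If the Proxmox route is chosen, the HBA pass-through is roughly the following. The VM ID and PCI address are placeholders, and this assumes VT-d/IOMMU is already enabled in the BIOS:
Code:
# Enable IOMMU on the host: add "intel_iommu=on" to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then
update-grub && reboot

# Find the HBA's PCI address
lspci | grep -i -e lsi -e sas

# Hand the HBA to the storage VM (VM ID 100 and address 01:00.0 are placeholders)
qm set 100 -hostpci0 01:00.0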