High (somehow) performance SSD for CentOS workstation


lpallard

Member
Aug 17, 2013
276
11
18
Hello guys!

I have opted to post on this forum since in the past I have received consistently good replies and feedback from STH members. This time I'm looking for recommendations for a high-performance SSD for a CentOS workstation.

I spent the better part of last night and the day before searching the web for a clear recommendation, but there are so many brands and models out there, and so many benchmarks, that I got lost and now I'm not sure which one to pick.

The needs:

The current workstation uses an ASUS M5A97 mobo with 2x 4GB of DDR3-1333 RAM and a Hitachi HDS721010CLA332 1TB HDD.

The hard drive is extremely SLOWWW... Launching Firefox takes up to 20 seconds, and the HDD activity light is constantly on while I wait for pretty much everything, which tells me the HDD is the bottleneck. The same goes for just about everything else. Swap usage is zero.

I do not have huge storage needs. I use my FreeNAS server for storage via NFS, and locally perhaps I will need around 100GB? Current setup uses 25GB out of 55GB for root (/) and 1.8GB for /home...

I guess a 120-180GB SSD would suffice. I was initially going to go for a Samsung 850 Pro, but in light of the recent TRIM issues with the Linux kernel I'd rather stay away. The next contender was SanDisk's Extreme Pro, but they don't make it in anything less than 240GB, which is a waste of money I'd rather put into one of my servers that needs it.

Performance is really what I'm looking for here. Reliability comes second since I do backups regularly and, like I said, I use my remote server for most storage. This SSD will mostly hold the OS and applications, plus a little bit of /home space for temporary use.

I'll continue looking, but if someone has been in the same boat and has a suggestion, please post!
Greetings
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
I'm a Debian user myself (and I've never felt or seen the need to enable TRIM on my SSDs since my I/O load on them is very low), but I've had good experiences with my Crucial drives under Linux. They've had queued TRIM issues as well, but at least their firmware patches were very easy to apply. YMMV of course.
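If you do decide you want TRIM on whatever drive you end up with, it's easy enough to sanity-check that the kernel and drive actually support discard before relying on it; a rough sketch (device and mount point are just examples):

Code:
# Non-zero DISC-GRAN/DISC-MAX columns mean the device advertises discard support
lsblk --discard /dev/sda

# One-off manual TRIM of a mounted filesystem; -v reports how many bytes were trimmed
fstrim -v /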

When you say you're looking for "just performance", your workload sounds very desktop-centric, so chances are you'll see much the same real-world performance from a bog-standard consumer SATA SSD as from a stormin' NVMe monster. Personally I'd buy something like a 120GB or 250GB Crucial BX100 since they're cheap and seem like good workhorses (I don't actually have any current-gen Crucials myself, still on M500s and M550s with a boatload of M4s).

Am sure everyone will have an opinion on their favourite brand, but if it really is only a desktop-ish workload then I think any decently reliable SSD will suit you fine. IMHO you should definitely put /home on the SSD, and if need be set up bind mounts to the HDD under /home/lpallard/big_ass_media_files or somesuch for things you don't want taking up space on your SSD. For bonus geek points you might consider getting a bigger SSD and using some of it to provide an lvm-cache/dmcache for your HDD.
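A minimal sketch of both ideas, assuming the HDD is mounted at /mnt/hdd and, for the lvm-cache variant, that the SSD partition has been added as a PV to the same volume group as the HDD's LV (all names below are made up):

Code:
# /etc/fstab: bind a bulk-storage directory on the HDD into /home on the SSD
/mnt/hdd/media   /home/lpallard/media   none   bind   0 0

# lvm-cache outline: carve cache data + metadata LVs out of the SSD PV,
# then attach them to the HDD's logical volume (vg0, /dev/sdb1, slow_lv are hypothetical)
lvcreate -L 50G -n hdd_cache      vg0 /dev/sdb1
lvcreate -L 1G  -n hdd_cache_meta vg0 /dev/sdb1
lvconvert --type cache-pool --poolmetadata vg0/hdd_cache_meta vg0/hdd_cache
lvconvert --type cache --cachepool vg0/hdd_cache vg0/slow_lv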
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Performance scales with capacity to a certain degree with SSDs, so to get maximum performance you should expect to get a 200-256GB SSD.
 
  • Like
Reactions: coolrunnings82

JustinH

Active Member
Jan 21, 2015
124
76
28
48
Singapore
Personally, I've had better performance gains by upping the RAM than by upgrading the HDD/SSD. More RAM = more filesystem cache. I went from 8GB to 32GB, and once the cache was warm it flew.

Then again, I don't know how "slow" your drive is.
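If you want to see how much of your RAM is actually being put to work as cache, something like this gives a rough idea (just a sketch):

Code:
# "buff/cache" is memory the kernel is using for the page cache and buffers
free -h

# vmstat's "cache" column shows the same figure, sampled every 2 seconds, 5 times
vmstat 2 5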
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
IMHO 256GB SSDs are not cost-prohibitive at that capacity, even used enterprise drives for that matter.

If you want to stay really cheap I'd go for a Samsung 830 256GB; they go for around $60-80 used and are great, reliable drives that web hosts commonly used with Linux. Another option is an Intel S3500 240GB; they go for $85 to $105 on eBay depending on age/usage... another great drive, proven to remain stable under above-average desktop usage patterns.

I would stay away from the Crucial low-end consumer drives myself (especially after having an issue with one now too). I'm not sure about their BX/MX stuff (no luck with it myself); the older Crucial I have in service is still running without issue... but Crucial drives in server/home use have bad reports all over.
 
  • Like
Reactions: coolrunnings82

lpallard

Member
Aug 17, 2013
276
11
18
My workloads are indeed very desktop-centric. The current drive is perhaps showing signs of age, but 20+ seconds to launch an app (Firefox, Thunar, Gwenview, etc.) is IMO not acceptable, especially since the CPU and RAM are more or less idling. Sometimes there is even a delay of several seconds when one app is using the HDD intensively while I do other things.

I ran SMART tools several times thinking something was wrong with the drive for it to be so slow, but everything always comes back normal... or as normal as it can be for a five-year-old drive. I think the drive was never any faster; it's just maxed out.
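For reference, the checks I ran were along these lines (a sketch, assuming the drive is /dev/sda):

Code:
# Health summary plus the full attribute table; on an ageing drive keep an eye on
# Reallocated_Sector_Ct, Current_Pending_Sector and Offline_Uncorrectable
smartctl -a /dev/sda

# Kick off an extended self-test and check the result later with -a again
smartctl -t long /dev/sda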

Another annoying issue is loading a folder with thousands of files: it takes anywhere from 45 seconds to several minutes to get the file list.

Regarding performance vs. size, I agree a 240+GB SSD would provide an extra performance gain, but since I will keep using the existing 1TB drive as "slow" storage, a huge SSD is definitely not required.

Regarding RAM, I never max it out. I may or may not add an extra 8GB or even 16GB, but I'd like to keep expenditures to a minimum; like I said, I have an 8-year-old Supermicro server still rocking, but who knows...

Are the Samsung 800 series drives OK now with Linux? I couldn't find a crystal-clear answer...

hdparm from the current drive:
[root@workstation workstation-user]# hdparm -tT /dev/sda

/dev/sda:
Timing cached reads: 7286 MB in 2.00 seconds = 3643.61 MB/sec
Timing buffered disk reads: 416 MB in 3.00 seconds = 138.51 MB/sec
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
I've been happily using a Samsung 830 with Linux for yonks but haven't used any of the newer Samsungs; in days gone by their firmware updates were Windows-only, although I believe they've now got bootable ISOs.

hdparm -tT only does a sequential read test; what you want for good desktop performance is a drive that's good at random r/w IO at small block sizes (4k being the most common). Any SSD made within the last five years should be at least an order of magnitude faster than an HDD in that regard. Take a look at Anand's awesome essay on SSDs here from back in 2009 (that's just the performance page for random IO; the whole essay is well worth a read); I mention it because I'm still running an Intel X25-M - glacially slow by the standards of today's SSDs - which happily scores ~50MB/s in random 4k reads, whilst 10,000rpm hard drives don't even reach 1MB/s.
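If you want to put a number on random reads for your current drive, a quick fio run along these lines would do it (just a sketch; reads are non-destructive, but it still lays out a 2G scratch file in the current directory):

Code:
# 4k random reads with the page cache bypassed, against a 2G scratch file
fio --name=randread --ioengine=libaio --iodepth=1 --rw=randread --bs=4k \
    --direct=1 --size=2G --numjobs=4 --runtime=120 --group_reporting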

In a nutshell, any SSD should be a night-and-day upgrade over what you have already.
 

lpallard

Member
Aug 17, 2013
276
11
18
Another test I was running while you guys replied... :)

[workstation-user@workstation ~]$ fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=2G --numjobs=8 --runtime=240 --group_reporting
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...
fio-2.2.8
Starting 8 processes
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
Jobs: 8 (f=8): [w(8)] [100.0% done] [0KB/12275KB/0KB /s] [0/3068/0 iops] [eta 00m:00s]
randwrite: (groupid=0, jobs=8): err= 0: pid=3452: Thu Oct 22 19:09:30 2015
write: io=3976.2MB, bw=16959KB/s, iops=4239, runt=240080msec
slat (usec): min=2, max=201968, avg=1884.73, stdev=13668.88
clat (usec): min=0, max=10017, avg= 0.51, stdev=20.67
lat (usec): min=3, max=201974, avg=1885.42, stdev=13669.39
clat percentiles (usec):
| 1.00th=[ 0], 5.00th=[ 0], 10.00th=[ 0], 20.00th=[ 0],
| 30.00th=[ 0], 40.00th=[ 0], 50.00th=[ 0], 60.00th=[ 1],
| 70.00th=[ 1], 80.00th=[ 1], 90.00th=[ 1], 95.00th=[ 1],
| 99.00th=[ 2], 99.50th=[ 3], 99.90th=[ 4], 99.95th=[ 5],
| 99.99th=[ 16]
bw (KB /s): min= 226, max=314956, per=13.00%, avg=2204.42, stdev=13125.67
lat (usec) : 2=97.79%, 4=2.07%, 10=0.11%, 20=0.02%, 50=0.01%
lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
cpu : usr=0.08%, sys=0.25%, ctx=24831, majf=1, minf=222
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=1017859/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: io=3976.2MB, aggrb=16958KB/s, minb=16958KB/s, maxb=16958KB/s, mint=240080msec, maxt=240080msec

Disk stats (read/write):
dm-2: ios=2/672082, merge=0/0, ticks=70/36530191, in_queue=36533109, util=99.94%, aggrios=101/663321, aggrmerge=0/10190, aggrticks=2098/34574256, aggrin_queue=34585671, aggrutil=99.95%
sda: ios=101/663321, merge=0/10190, ticks=2098/34574256, in_queue=34585671, util=99.95%
 

lpallard

Member
Aug 17, 2013
276
11
18
This site proved invaluable:
Benchmarking - Benchmarking Linux with Sysbench, FIO, Ioping, and UnixBench: Lots of Examples

As seen in the output, the drive was pretty much maxed out (util=99.95%) during the test, while the IOPS figure was only 4,239...

What surprises me is the percentiles... 95% of the requests completed in under 1 us, which is decent I guess...

This is interesting stuff, but probably not so practical... at least it gives me an idea of how to get actual real-life numbers for storage performance.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
I'd say that's good performance for an HDD, but to me it looks like there's a cache getting in the way somewhere...

issued : total=r=0/w=1017859

Implies that just over a million write IOs were issued by fio, but...

sda: ios=101/663321, merge=0/10190

...only 660,000-odd of those actually hit the drive and 10,000 were merged by the IO scheduler. Hmm. Need to dust off my fio hat...
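You can watch the same merging happen live with iostat while a test is running, by the way (a sketch; iostat comes from the sysstat package):

Code:
# Extended per-device stats every 2 seconds; wrqm/s is the rate at which write
# requests are merged by the IO scheduler before being issued to the device
iostat -x sda 2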
 

lpallard

Member
Aug 17, 2013
276
11
18
From https://www.kernel.org/doc/Documentation/iostats.txt

"Field 2 -- # of reads merged, field 6 -- # of writes merged
Reads and writes which are adjacent to each other may be merged for
efficiency
. Thus two 4K reads may become one 8K read before it is
ultimately handed to the disk, and so it will be counted (and queued)
as only one I/O. This field lets you know how often this was done."

Not sure if this is it, but it sure looks like it.

[EffrafaxOfWug, sorry, your avatar inspired me to change mine... ;)]
 
Last edited:

lpallard

Member
Aug 17, 2013
276
11
18
I just pulled the trigger on a Crucial BX100 250GB. For $127 CAD including taxes and shipping, I thought that wasn't too bad.

Performance seems decent enough for my little workstation, and it should be a night-and-day difference compared to the Hitachi drive I currently use. We'll see how CentOS behaves on that drive!

Thanks guys for the guidance!
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
From https://www.kernel.org/doc/Documentation/iostats.txt
...
Not sure if this is it but sure looks like that.
Yups, it's quite common and eminently sensible to batch up writes in such a fashion, but the main problem (now that I'm actually awake) is that you weren't using direct IO; setting direct=0 means that most of your writes are going straight into RAM. Repeat the same test with direct=1 and you'll get a much less rosy picture of HDD performance.
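(If you'd rather keep buffered IO in the picture but start each run from a known state, you can drop the page cache between runs, or tell fio to fsync after each write so the drive actually has to commit the data; a sketch:)

Code:
# Flush dirty pages and drop the page cache, dentries and inodes before a run
sync && echo 3 > /proc/sys/vm/drop_caches

# Buffered IO, but fsync after every write
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k \
    --direct=0 --fsync=1 --size=2G --numjobs=8 --runtime=240 --group_reporting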

Best of luck with your new SSD and your new improved avatar ;)
 
  • Like
Reactions: lpallard

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
The rest is behind a spoiler tag as it's off-topic and a wall o' text, but because I thought the results you were getting for your HDD were Too Good To Be True, I repeated the test myself. In fact it was such a wall of text that I had to put it in another post... two other posts!

From one of my RAID10 arrays, using (I think) the same settings as yourself:

Code:
effrafax@wug:/storage/fio$ fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=2G --numjobs=8 --runtime=240 --group_reporting
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...
fio-2.1.11
Starting 8 processes
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
Jobs: 8 (f=8): [w(8)] [100.0% done] [0KB/27648KB/0KB /s] [0/6912/0 iops] [eta 00m:00s]ta 02m:23s]
randwrite: (groupid=0, jobs=8): err= 0: pid=11449: Fri Oct 23 15:00:36 2015
  write: io=9299.7MB, bw=39673KB/s, iops=9918, runt=240031msec
    slat (usec): min=1, max=282245, avg=805.81, stdev=6084.27
    clat (usec): min=0, max=38, avg= 0.20, stdev= 0.42
     lat (usec): min=1, max=282248, avg=806.06, stdev=6084.42
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    0], 50.00th=[    0], 60.00th=[    0],
     | 70.00th=[    0], 80.00th=[    0], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[    1], 99.50th=[    1], 99.90th=[    2], 99.95th=[    3],
     | 99.99th=[    4]
    bw (KB  /s): min=  761, max=742032, per=12.62%, avg=5006.21, stdev=30531.08
    lat (usec) : 2=99.78%, 4=0.21%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=0.09%, sys=0.33%, ctx=54927, majf=0, minf=56
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=2380708/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=9299.7MB, aggrb=39673KB/s, minb=39673KB/s, maxb=39673KB/s, mint=240031msec, maxt=240031msec

Disk stats (read/write):
    dm-1: ios=28/1758758, merge=0/0, ticks=308/1152084, in_queue=1153264, util=98.09%, aggrios=28/1759399, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md32: ios=28/1759399, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=4/387432, aggrmerge=0/199582, aggrticks=51/131432, aggrin_queue=131439, aggrutil=61.66%
  sdc: ios=3/387330, merge=0/199420, ticks=72/89920, in_queue=89948, util=23.26%
  sdd: ios=9/387192, merge=0/199446, ticks=68/95248, in_queue=95228, util=23.97%
  sdf: ios=0/387770, merge=0/199888, ticks=0/265800, in_queue=265804, util=61.66%
  sdg: ios=14/387332, merge=0/199416, ticks=128/97664, in_queue=97760, util=24.70%
  sdh: ios=1/387195, merge=0/199442, ticks=28/141216, in_queue=141200, util=35.88%
  sdi: ios=1/387775, merge=0/199885, ticks=12/98744, in_queue=98696, util=24.54%
That gets me about ~7,300 IOPS and a throughput of about 40MB/s. But if I repeat the test with direct=1...

Code:
effrafax@wug:/storage/fio$ fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=2G --numjobs=8 --runtime=240 --group_reporting
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...
fio-2.1.11
Starting 8 processes
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
Jobs: 1 (f=0): [_(1),w(1),_(6)] [12.6% done] [0KB/575KB/0KB /s] [0/143/0 iops] [eta 27m:59s]
randwrite: (groupid=0, jobs=8): err= 0: pid=11416: Fri Oct 23 14:55:44 2015
  write: io=377380KB, bw=1571.6KB/s, iops=392, runt=240127msec
    slat (usec): min=9, max=1182.2K, avg=14911.62, stdev=92951.00
    clat (usec): min=2, max=1079.4K, avg=5444.12, stdev=26332.88
     lat (usec): min=94, max=1182.5K, avg=20356.17, stdev=95869.25
    clat percentiles (usec):
     |  1.00th=[   89],  5.00th=[  104], 10.00th=[  120], 20.00th=[  151],
     | 30.00th=[  171], 40.00th=[  203], 50.00th=[  262], 60.00th=[  366],
     | 70.00th=[  676], 80.00th=[ 3600], 90.00th=[17792], 95.00th=[28544],
     | 99.00th=[47360], 99.50th=[78336], 99.90th=[436224], 99.95th=[643072],
     | 99.99th=[897024]
    bw (KB  /s): min=    4, max=  885, per=14.03%, avg=220.47, stdev=181.38
    lat (usec) : 4=0.04%, 10=0.01%, 50=0.01%, 100=3.62%, 250=44.70%
    lat (usec) : 500=17.61%, 750=5.10%, 1000=3.23%
    lat (msec) : 2=3.94%, 4=2.22%, 10=4.97%, 20=5.78%, 50=7.93%
    lat (msec) : 100=0.44%, 250=0.26%, 500=0.07%, 750=0.05%, 1000=0.03%
    lat (msec) : 2000=0.01%
  cpu          : usr=0.05%, sys=0.33%, ctx=98928, majf=0, minf=77
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=94345/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=377380KB, aggrb=1571KB/s, minb=1571KB/s, maxb=1571KB/s, mint=240127msec, maxt=240127msec

Disk stats (read/write):
    dm-1: ios=31/118587, merge=0/0, ticks=6164/3962128, in_queue=3994300, util=100.00%, aggrios=31/118633, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md32: ios=31/118633, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=5/32710, aggrmerge=0/7554, aggrticks=1027/264532, aggrin_queue=265612, aggrutil=91.56%
  sdc: ios=1/32731, merge=0/7581, ticks=12/181908, in_queue=182208, util=40.47%
  sdd: ios=9/32525, merge=0/7543, ticks=1540/181668, in_queue=183176, util=40.37%
  sdf: ios=4/32874, merge=0/7538, ticks=3356/543628, in_queue=547028, util=91.56%
  sdg: ios=13/32731, merge=0/7581, ticks=1192/183732, in_queue=184956, util=40.68%
  sdh: ios=0/32525, merge=0/7543, ticks=0/303956, in_queue=303960, util=66.91%
  sdi: ios=4/32874, merge=0/7538, ticks=64/192304, in_queue=192348, util=40.97%
...it shows the array's actually only giving me about 500 random 4k write IOPS with a throughput of ~1.5MB/s. Real-life performance is somewhere in between these two extremes, since it's sensible to cache stuff, but for measuring raw device speed you'll always want to be sure to use --direct=1.
 
  • Like
Reactions: lpallard

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Incidentally, the disk stats at the end show that this array is clearly being held back by /dev/sdf, so I'll need to look into getting that replaced. Now let's run the whole thing against an SSD (well, technically two M500s in RAID1):

Code:
effrafax@wug:/storage/scratch/fio$ fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=2G --numjobs=8 --runtime=240 --group_reporting
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...
fio-2.1.11
Starting 8 processes
Jobs: 8 (f=8): [w(8)] [100.0% done] [0KB/133.8MB/0KB /s] [0/34.3K/0 iops] [eta 00m:00s]
randwrite: (groupid=0, jobs=8): err= 0: pid=11552: Fri Oct 23 15:23:28 2015
  write: io=16384MB, bw=105622KB/s, iops=26405, runt=158842msec
    slat (usec): min=4, max=171797, avg=13.53, stdev=629.28
    clat (usec): min=0, max=428547, avg=287.71, stdev=2278.70
     lat (usec): min=31, max=441305, avg=301.34, stdev=2393.62
    clat percentiles (usec):
     |  1.00th=[   71],  5.00th=[   78], 10.00th=[   80], 20.00th=[   83],
     | 30.00th=[   85], 40.00th=[   86], 50.00th=[   88], 60.00th=[   89],
     | 70.00th=[   91], 80.00th=[   93], 90.00th=[  137], 95.00th=[  243],
     | 99.00th=[12992], 99.50th=[13504], 99.90th=[15936], 99.95th=[19072],
     | 99.99th=[62720]
    bw (KB  /s): min=   46, max=32624, per=12.57%, avg=13278.13, stdev=10491.37
    lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.02%
    lat (usec) : 100=85.20%, 250=9.86%, 500=0.74%, 750=1.71%, 1000=0.83%
    lat (msec) : 2=0.52%, 4=0.03%, 10=0.03%, 20=1.01%, 50=0.02%
    lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%
  cpu          : usr=0.83%, sys=3.70%, ctx=4201332, majf=0, minf=73
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=4194304/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=16384MB, aggrb=105622KB/s, minb=105622KB/s, maxb=105622KB/s, mint=158842msec, maxt=158842msec

Disk stats (read/write):
    dm-5: ios=0/4201216, merge=0/0, ticks=0/2967964, in_queue=2970444, util=99.83%, aggrios=0/4207858, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md1: ios=0/4207858, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/4195232, aggrmerge=0/12779, aggrticks=0/753992, aggrin_queue=753984, aggrutil=95.55%
  sda: ios=0/4195232, merge=0/12779, ticks=0/1137988, in_queue=1138352, util=95.55%
  sdb: ios=0/4195232, merge=0/12779, ticks=0/369996, in_queue=369616, util=40.36%
That's ~26,400 IOPS and about 100MB/s of throughput.

But then again, we have lies, damned lies and synthetic benchmarks. In the real world there's almost no such thing as a workload that's 100% reads or 100% writes, 100% sequential or 100% random, or 100% 4k blocks - it's always somewhere in between. But IMHO it's those little random reads and writes that make computers "seem" slow, since HDDs pay such absurdly long penalties to access random data due to their mechanical nature, and SSDs largely eliminate that.
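If you want something closer to a desktop-ish mix, fio will happily do blended random read/write workloads too (a sketch; the 70/30 read/write split is an arbitrary choice):

Code:
# 70% random reads / 30% random writes at 4k, direct IO, modest queue depth
fio --name=deskmix --ioengine=libaio --iodepth=4 --rw=randrw --rwmixread=70 \
    --bs=4k --direct=1 --size=2G --numjobs=4 --runtime=120 --group_reporting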
 
  • Like
Reactions: lpallard

lpallard

Member
Aug 17, 2013
276
11
18
Very educational what you did there!!

Whoa... I ran fio in direct mode, and "less rosy" is actually an understatement...


fio-2.2.8
Starting 8 processes
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
randwrite: Laying out IO file(s) (1 file(s) / 2048MB)
Jobs: 8 (f=8): [w(8)] [100.0% done] [0KB/1072KB/0KB /s] [0/268/0 iops] [eta 00m:00s]
randwrite: (groupid=0, jobs=8): err= 0: pid=27883: Fri Oct 23 21:11:17 2015
write: io=240348KB, bw=1001.4KB/s, iops=250, runt=240029msec
slat (usec): min=6, max=267, avg=29.69, stdev= 4.45
clat (usec): min=228, max=563371, avg=31921.14, stdev=23908.09
lat (usec): min=276, max=563398, avg=31951.17, stdev=23907.41
clat percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 28],
| 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 31],
| 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 40],
| 99.00th=[ 79], 99.50th=[ 167], 99.90th=[ 515], 99.95th=[ 537],
| 99.99th=[ 553]
bw (KB /s): min= 7, max= 379, per=12.59%, avg=126.06, stdev=23.23
lat (usec) : 250=0.01%, 500=0.10%, 750=0.01%
lat (msec) : 2=0.01%, 4=1.54%, 10=0.60%, 20=0.08%, 50=95.96%
lat (msec) : 100=0.73%, 250=0.83%, 500=0.04%, 750=0.12%
cpu : usr=0.03%, sys=0.13%, ctx=60158, majf=0, minf=227
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=60087/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: io=240348KB, aggrb=1001KB/s, minb=1001KB/s, maxb=1001KB/s, mint=240029msec, maxt=240029msec

Disk stats (read/write):
dm-2: ios=0/60455, merge=0/0, ticks=0/2036617, in_queue=5389608, util=100.00%, aggrios=2/60882, aggrmerge=0/13, aggrticks=2315/3241850, aggrin_queue=12586221, aggrutil=100.00%
sda: ios=2/60882, merge=0/13, ticks=2315/3241850, in_queue=12586221, util=100.00%
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
You purchased basically the lowest end 'name brand' SSD... don't expect much overall.
Not saying it's bad (my parents use one) but don't expect high performance and high endurance.
 

lpallard

Member
Aug 17, 2013
276
11
18
You purchased basically the lowest end 'name brand' SSD... don't expect much overall.
Not saying it's bad (my parents use one) but don't expect high performance and high endurance.
Perhaps, but the reviews are pretty good for the price point. Not 850 Pro levels, but decent enough. At 50 cents Canadian per GB (after taxes & shipping), this seems pretty good to me.

Once I get this thing I'll migrate to it and re-run the same tests as I did in this thread, and then we can compare. I'm not expecting miracles, but I am expecting a decent improvement. After all, the HDD I currently use was in my HTPC, which in 2010 got upgraded to a cheap OCZ Vertex 2 40GB SSD, and I saw a night-and-day improvement at that time. I would expect the BX100 to be orders of magnitude better than the Vertex 2, and the other components are also better than in my HTPC, so overall, if there is not much difference between the current Hitachi HDD and the BX100 SSD, I will start to rethink everything...