@Chuckleb Here's the best list of random resources: Server and Storage I/O Benchmarking and Performance Resources | StorageIOblog
You're right that it's a mess.
That's a fundamental question, though. Are you trying to make a completely objective measure of speed, or are you trying to set up a generic environment as a baseline and then compare within that baseline? While filesystems are how the drives actually get used, testing raw gives a consistent measure, rather than having a kernel update completely change the XFS performance numbers.
I would rather raw and the occasional FS comparisons. Play with one variable at a time.
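One way to keep it to one variable at a time is to run the identical workload against the raw block device and against a file on the filesystem, changing nothing else. A hedged sketch as an fio job file (fio is my tool choice here, and the device/mount paths are placeholders):

```ini
; Sketch only, not a tuned script: same 4K random-read workload against
; the raw device and against a file on XFS, one variable at a time.
; /dev/nvme0n1 and /mnt/xfs are placeholder paths.
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
runtime=300
time_based=1

[raw-device]
filename=/dev/nvme0n1

[xfs-file]
; stonewall makes this job wait for the raw job instead of running concurrently
stonewall
filename=/mnt/xfs/fio-testfile
size=10g
```

Any difference between the two runs is then attributable to the filesystem layer rather than the drive.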
Yes, they take a long time. I just finished benches for 2 drives; they started Saturday morning. I love the idea.
You certainly need a 70/30 read/write test, at queue depths 1, 2, 4, 8, 16, 32, 64, 128, and 256.
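A hedged sketch of what that sweep could look like as an fio job file (fio is my choice of tool, the device path is a placeholder, and the remaining queue depths just repeat the same stanza):

```ini
; Sketch: 70/30 random read/write mix, swept over queue depth.
; /dev/nvme0n1 is a placeholder; repeat the stanza for QD 8 through 256.
[global]
ioengine=libaio
direct=1
filename=/dev/nvme0n1
rw=randrw
rwmixread=70
bs=4k
runtime=300
time_based=1

[qd1]
iodepth=1

[qd2]
stonewall
iodepth=2

[qd4]
stonewall
iodepth=4
```

Each `stonewall` forces the jobs to run one after another, so every queue depth gets the device to itself.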
For "normal" benchmarking you need a pre-conditioning run before each actual run; these generally take multiple hours. My usual Iometer script runs around 36 hours even for "quick" passes.
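Pre-conditioning usually means sequentially filling the whole device (often more than once) so the drive is measured in a steady state rather than fresh-out-of-the-box. A hedged sketch using fio rather than Iometer, with a placeholder device path:

```ini
; Sketch: sequentially fill the device twice, then run the measured job.
; /dev/nvme0n1 is a placeholder; runtimes here are far shorter than a
; real multi-hour conditioning pass.
[precondition]
filename=/dev/nvme0n1
ioengine=libaio
direct=1
rw=write
bs=128k
iodepth=32
loops=2

[measured-run]
stonewall
filename=/dev/nvme0n1
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
runtime=600
time_based=1
```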
Ubuntu 16.04.1 is great.
@handruin Thanks for passing this along, that's good data. Makes me curious what tooling they use to collect the IO sizes in their systems. I'd like to do something similar for a couple of my own projects.
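I don't know what tooling Pure uses internally; one hypothetical way to get a similar I/O-size distribution on Linux would be to trace block I/O (e.g. with blktrace) and then bin the observed sizes. A sketch of just the binning step in Python, on made-up sizes:

```python
from collections import Counter

def io_size_histogram(sizes_bytes):
    """Bin I/O sizes (in bytes) into power-of-two KiB buckets like '4K'."""
    buckets = Counter()
    for size in sizes_bytes:
        # Round the size up to whole KiB, then up to the next power of two.
        kib = max(1, (size + 1023) // 1024)
        bucket = 1
        while bucket < kib:
            bucket *= 2
        buckets[f"{bucket}K"] += 1
    return dict(buckets)

# Made-up trace: two 4 KiB I/Os, one 32 KiB, one 512-byte.
print(io_size_histogram([4096, 4096, 32768, 512]))
# → {'4K': 2, '32K': 1, '1K': 1}
```

Feeding it the size column from real trace output would give the modality breakdown the Pure post talks about.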
Pure Storage just released customer data stats: An analysis of IO size modalities on Pure Storage FlashArrays
Likewise here is Nimble: Storage Performance Benchmarks Are Useful – If You Read Them Carefully | Nimble Storage
Might be interesting data to use.