ServeTheHome Benchmark Methodology Suggestions Requested

Patrick

Administrator
Staff member
Dec 21, 2010
11,908
4,871
113
For the past few months I have been toying with the idea of introducing more benchmarks to ServeTheHome.com.

I do have some, specifically in the CPU articles, but not many regarding the actual performance of NAS systems. One of the primary reasons for this is that I have tended to deal with "larger" systems than those frequently reviewed by other sites, which typically cover builds of 1-6 drives.

In terms of motherboard performance, CPU-wise, we are basically at the point where a given CPU/chipset/RAM combination will perform within 1% or so of similar configurations from other vendors. That is why I tend not to benchmark the Asus P7F-E against the Intel S3420GPLC and Supermicro X8SI6-F, for example. I think that features (as well as price) are the big differentiators.

On the other hand, I could see the value in setting up a test system, running different OSes or different network connections on it, and then setting up one or more client systems to bang on the server.

My question is, what should be used? Obvious candidates would be:
1. Intel NASPT
2. IOmeter
3. A custom put/get script
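A custom put/get script can be as simple as timing a file copy to and from a mount point of the NAS under test. A minimal sketch (the share path, file size, and helper names here are all invented for illustration; the demo uses a local temp directory just so it runs anywhere):

```python
import os
import shutil
import time

def make_payload(path, size, chunk_mb=1):
    """Write `size` bytes of pseudo-random data (one random chunk repeated)."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            f.write(chunk[:remaining])
            remaining -= min(len(chunk), remaining)

def timed_copy(src, dst):
    """Copy src to dst and return throughput in MB/s."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return os.path.getsize(src) / elapsed / 1e6

if __name__ == "__main__":
    import tempfile
    # Point SHARE at a mounted SMB/NFS share of the NAS under test,
    # e.g. "/mnt/nas-test"; a temp dir is substituted here for the demo.
    SHARE = tempfile.mkdtemp()
    local = os.path.join(tempfile.mkdtemp(), "payload.bin")
    remote = os.path.join(SHARE, "payload.bin")
    make_payload(local, 8 * 1024 * 1024)   # use ~1 GiB+ for real tests
    print("put: %.1f MB/s" % timed_copy(local, remote))
    print("get: %.1f MB/s" % timed_copy(remote, local))
```

For real runs the payload should be much larger than the server's RAM so the numbers reflect disks and network rather than caches.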

Whatever is used, I would really prefer not to use ultra-proprietary setups, so that everyone can replicate the same tests in their own environments.

Prior to doing the benchmarks, I thought it would be worth getting feedback on what should be used and what people would like to see.
 

nitrobass24

Moderator
Dec 26, 2010
1,083
127
63
TX
I think it's important to use multiple tools and multiple runs of each.
I would do 5 runs of each test, throw out the highest and lowest, and average the three remaining. That way you can eliminate anomalies during testing.
By using multiple tools you can validate your results (i.e. if IOmeter says X and NASPT says Y, then something is off, or they are not performing similar tests).
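That drop-the-extremes averaging is just a trimmed mean; the bookkeeping can be sketched in a few lines (the throughput figures are made up for the example):

```python
def trimmed_mean(runs):
    """Throw out the highest and lowest result, average the rest."""
    if len(runs) < 3:
        raise ValueError("need at least 3 runs")
    kept = sorted(runs)[1:-1]          # drop the min and the max
    return sum(kept) / len(kept)

# Five runs of the same test (invented MB/s figures); the high outlier
# 131.0 and the low 98.2 are discarded before averaging.
runs = [112.4, 109.8, 131.0, 110.6, 98.2]
print("%.1f MB/s" % trimmed_mean(runs))
```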

There are a lot of tools, but most of the time people don't have them configured correctly and do not get accurate results. (e.g. with CDM (CrystalDiskMark), if you run a 100 MB test you are most likely testing the cache of your RAID card; further, most people do not test across multiple queue depths to see how performance is affected when you are really hitting the drives hard versus streaming a video.) As long as you have configured the tools correctly and have multiple runs across multiple tools to validate your results, I don't think it matters which tools you ultimately end up using.

One thing I would like to see is that when you use IOmeter or a custom script, you post the config/script (whatever they call it) so other users can run the exact same test on their machines as a comparison.
 

Patrick

Administrator
Staff member
Dec 21, 2010
nitrobass24 said:
One thing I would like to see is that when you use IOmeter or a custom script, you post the config/script (whatever they call it) so other users can run the exact same test on their machines as a comparison.
That is a key one for me. I really don't like the idea of non-public configs.
 

john4200

New Member
Jan 1, 2011
152
0
0
I would not spend much time on tests involving small-block random I/O, like 4KB random read/write. While such accesses may be important for a business database server, on a home server most I/O will happen in larger chunks.

I have been doing some tests on my new (in-progress) server, and I have been concentrating on sequential I/O and 512KB random I/O. I did do a few quick read-latency tests (512B random reads), but I did not spend much time on that -- I was just curious what sort of IOPS I could get with big, slow HDDs. Anyway, I am running Linux on my server, so I used dd, iozone, and Witold Baryluk's seeker_baryluk.c program to do the benchmarks. I would not necessarily recommend these for your tests, but they work well for me.
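For anyone without those tools handy, the read-latency side can be approximated in a few lines, in the spirit of seeker_baryluk.c (the path and duration are placeholders; note that against a small local file you will mostly be measuring the page cache rather than the disk):

```python
import os
import random
import time

def random_read_iops(path, size=None, block=512, duration=2.0):
    """Issue random `block`-byte reads against `path`; return reads/second.

    For a block device, pass `size` explicitly, since getsize() reports 0
    for device nodes on Linux.
    """
    if size is None:
        size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        reads = 0
        deadline = time.perf_counter() + duration
        while time.perf_counter() < deadline:
            os.lseek(fd, random.randrange(0, max(size - block, 1)), os.SEEK_SET)
            os.read(fd, block)
            reads += 1
        return reads / duration
    finally:
        os.close(fd)
```

In real use you would point this at the raw device (e.g. a /dev node, as root) to get past the filesystem cache; a pure-Python loop also understates what the hardware can do, so treat the numbers as relative, not absolute.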

http://en.wikipedia.org/wiki/Dd_(Unix)
http://www.iozone.org/
http://www.linuxinsight.com/how_fast_is_your_disk.html#comment-1583