With Linux-Bench's 11 main tests generating roughly 55 data points, I think we have hit the point where the script produces too many data points to present cleanly. What makes it especially hard is that the scales differ completely between tests.
Here is the big question:
What would make sense in terms of presenting data? Eventually they will all be in a sortable format on Linux-Bench.com. Until then, what makes the most sense for presenting results?
For example, I am working on a comparison of two lower-end Intel Xeon E5 processors. My first instinct was to present all of the data; now I am thinking that may not make sense.
One idea I had was to pick a baseline system and show a percentage performance delta for each test, with the raw data still available in the application so folks can sort it and make their own graphs.
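A rough sketch of how the baseline/delta idea could work (the test names and scores below are hypothetical examples, not real Linux-Bench output):

```python
# Sketch: normalize each test's score against a chosen baseline system
# and report the difference as a percentage delta.
# Positive = candidate scored higher than baseline on that test.

def percent_deltas(baseline, candidate):
    """Return {test: percent change vs. baseline} for tests both systems ran."""
    deltas = {}
    for test, base_score in baseline.items():
        if test in candidate and base_score:
            deltas[test] = (candidate[test] - base_score) / base_score * 100.0
    return deltas

# Hypothetical scores for two systems
baseline = {"dhrystone": 25000.0, "whetstone": 4200.0}
candidate = {"dhrystone": 30000.0, "whetstone": 3990.0}

for test, delta in sorted(percent_deltas(baseline, candidate).items()):
    print(f"{test}: {delta:+.1f}%")
# dhrystone: +20.0%
# whetstone: -5.0%
```

One wrinkle: for tests where lower is better (e.g. time-to-complete), the sign would need to be flipped so the deltas all read the same way.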
Another idea is to present 5-10 graphs that each focus on a specific piece of the results, such as the Dhrystone/Whetstone scores from UnixBench.
Thoughts/ suggestions would be appreciated.
Regards,
Patrick