Interesting post on benchmarking

Here.

In it, the author makes a number of points. Some I take no issue with, or don’t have direct knowledge of.

Others …

Two benchmarking tools, CrystalDiskMark and AS SSD, are popular despite a flaw that many reviewers noticed: they report sequential read/write throughput results consistently inferior to other benchmarking tools (especially for SF-1200-based SSDs). For example, Benchmark Reviews tested the OCZ Vertex 2 120GB, and these flawed tools report 210-215 MB/s while all other tools report 270-280 MB/s as expected.

Erp … You only get the “faster” speeds with easily compressible data. You get the far slower speeds when the data isn’t so easy to compress. We know: we measured this, and observed it ourselves.

If you write all zeros, just like in the days when compilers special-cased particular codes (cough cough), it’s possible the disks don’t even do the writes. Yet when they’re handed random data, it’s kinda hard to fake a write.

So, I’d argue that the “wrong” results weren’t wrong. Those tools were likely just writing some non-compressible bits along with the compressible ones.
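To make that concrete: here’s a minimal sketch, assuming Python and a scratch file at /tmp/bench.dat (both my choices for illustration, not how these tools actually work), that times writes of an all-zeros buffer against an os.urandom buffer. On a compressing controller like the SF-1200, the zeros should clock in far faster. It’s a sketch, not a real benchmark (no O_DIRECT, no cache control beyond the fsync).

```python
import os
import time

def time_write(path, buf, count):
    """Write `buf` to `path` `count` times, fsync, and return MB/s."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    start = time.perf_counter()
    for _ in range(count):
        os.write(fd, buf)
    os.fsync(fd)  # push the data out of the page cache before stopping the clock
    elapsed = time.perf_counter() - start
    os.close(fd)
    return len(buf) * count / elapsed / 1e6

SIZE = 1 << 20   # 1 MiB per write (arbitrary choice for this sketch)
COUNT = 1024     # 1 GiB total

zero_buf = bytes(SIZE)       # all zeros: trivially compressible
rand_buf = os.urandom(SIZE)  # effectively incompressible

# On a compressing controller, expect the first number to be much larger.
print("zeros :", round(time_write("/tmp/bench.dat", zero_buf, COUNT)), "MB/s")
print("random:", round(time_write("/tmp/bench.dat", rand_buf, COUNT)), "MB/s")
```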

His conclusion is:

Benchmarking is hard. Most people get it wrong.

Yeah. I agree with this. Real measurement is hard. If people aren’t reporting averages over several runs, and scaling of results, and … then there’s a high likelihood of “wrongness”.
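On the averaging point, a minimal sketch of what I mean: run the measurement several times and report a mean and spread rather than a single number (the run count and the measured function here are placeholders, not anything from the original post).

```python
import statistics

def benchmark(run_once, runs=5):
    """Call run_once() `runs` times; return (mean, stdev) of its results."""
    samples = [run_once() for _ in range(runs)]
    return statistics.mean(samples), statistics.stdev(samples)

# e.g., reusing time_write from the sketch above:
# mean, sd = benchmark(lambda: time_write("/tmp/bench.dat", rand_buf, COUNT))
# print(f"{mean:.1f} +/- {sd:.1f} MB/s over 5 runs")
```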

FWIW, I’ve seen bad IO (and more generally, performance) measurement everywhere: popular blogs, articles from people who should know better, through supercomputer centers and national labs. The deer-in-the-headlights rapid blinking isn’t fun to experience, or to provide the headlights for. It’s better to ask for an independent review of results, to make sure they make sense. Peer review. Novel concept.
