Lab machine, updated RAID system (to our current shipping specs).
We’ve got a 10GbE and an IB DDR card in there for some end-user lab tests over the next two weeks.
We just finished rebuilding the RAID unit, and I wanted a baseline measurement. So a fast write then read (uncached of course).
[root@jr5-lab fio]# fio sw.fio
...
Run status group 0 (all jobs):
  WRITE: io=195864MB, aggrb=3789.1MB/s, minb=3880.1MB/s, maxb=3880.1MB/s, mint=51680msec, maxt=51680msec
That's the write.
Here’s the read.
[root@jr5-lab fio]# fio sr.fio
...
Run status group 0 (all jobs):
  READ: io=195864MB, aggrb=4639.3MB/s, minb=4750.6MB/s, maxb=4750.6MB/s, mint=42219msec, maxt=42219msec
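For reference, the job files looked something like this. This is a sketch, not the actual sw.fio/sr.fio: the directory, block size, queue depth, and ioengine are assumptions I'm filling in; only the total size and direct I/O follow from what's above.

```ini
; sw.fio -- streaming write sketch (illustrative parameters, not the real job file)
[global]
directory=/data       ; assumed mount point of the RAID file system
direct=1              ; O_DIRECT, bypass the page cache ("uncached of course")
bs=1m                 ; large sequential blocks (assumed)
size=196g             ; roughly the 195864MB moved above
ioengine=libaio       ; assumed async engine
iodepth=16            ; assumed queue depth

[stream-write]
rw=write

; sr.fio would be the same global section with a job of rw=read instead.
```

With direct=1 both runs bypass the 32GB of system RAM, so the numbers reflect the RAID unit, not cache.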
Streaming 196GB in 42.2s. 4.6GB/s sustained read.
The system has 32GB of RAM, and the RAID cache is 512MB. No SSDs were used for caching this file system on either the reads or the writes.
That's what I call a nice fast read. The maximum theoretical read speed for this configuration, with the disks we have in it, is 4.76GB/s, so this is about 96.6% efficiency. I seem to remember hearing of a very large storage array used for a certain small supercomputer in Illinois, where the average measured bandwidth per disk is around 8 MB/s. We are sustaining 117.9 MB/s per disk. Sadly, we weren't in the running for the next revision of that storage, as our name isn't a TLA for a specific tier 1 company. Go figure. Our tax dollars at work.
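A quick sanity check on the arithmetic, reproducing the rounded figures quoted above (the 4.76GB/s theoretical maximum is as stated; everything else comes straight from the fio output):

```python
# Sanity check of the numbers quoted above.
io_mb = 195864.0        # total data read, in MB (from the fio output)
runtime_s = 42.219      # maxt from the fio output, in seconds

sustained = io_mb / runtime_s
print(f"sustained read: {sustained:.1f} MB/s")  # ~4639 MB/s, i.e. ~4.6GB/s

# Efficiency against the 4.76GB/s theoretical maximum for this disk
# configuration, using the same rounded figures as in the text:
efficiency = 4.6 / 4.76 * 100
print(f"efficiency: {efficiency:.1f}%")  # ~96.6%
```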
Also, I wonder if I should be amused that this system is roughly 3x faster on writes, and 4x faster on reads, than the "fastest storage system in the world" on a per-chassis basis. Nah.
As I said before, all that matters for raw, unapologetic firepower is what you can deploy to an application. The more firepower you have, the more you are able to deploy, modulo other issues (inefficient networks, etc.).