Taking siFlash-SSD out for a spin, and cracking the throttle ...
By joe
… half open.

[update] video: [FLOWPLAYER=http://scalability.org/wp-content/videos/screencast_video.flv,480,315]

I won’t show the fio output until I get the unit back and get some more testing in. Also, I’ve discovered something … I guess … depressing about fio: what it reports for performance isn’t necessarily what the storage subsystem sees. This isn’t just fio, it’s pretty much all tools that talk to the file/storage API at a high level. The low-level actual results (you have to grab data from the OS reporting infrastructure to see this) differ, sometimes wildly, from the high-level API results. That is, if I open a file, write a 1GB data set, close the file, and put calipers around that, the time I measure for this 1GB of data to be written will not be terribly well correlated with what the hardware reports its load as. And the latter is the truth. There are many reasons for this. Happily, when you get to the point that the front-end APIs are effectively gated by the hardware, there is pretty good agreement between the hardware-measured and the API-measured performance.

Yeah, I know, you want the speeds and feeds. 24x SSDs (remember, this unit holds 48 … actually 50 … 54). Our special software and hardware sauce. RAID5 LUN setups. File system atop this. CPUs and RAM. Lather, rinse, repeat.

For streaming read IO tests, fire up 24 threads of reads against the SSDs: 7.2 GB/s sustained. For random read IO tests, spin up 192 threads of 8k random reads: 220k IOPS sustained.

Remember, this is a 1/2 configured system. If we can get another 24 drives, we’ll see what running on all cylinders is like. Will get the measurements later next week. Then I have to send the drives back to OCZ (many thanks to the team there for getting this done!).
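For the curious, the two read tests above map naturally onto fio job definitions. This is only a sketch of what such jobs might look like: the directory, sizes, runtime, ioengine, and iodepth are my guesses, not the actual job files used here.

```ini
; Guessed fio jobs, illustrative only. Paths and sizes are invented.

; Streaming read test: 24 sequential readers, large blocks.
[stream-read]
rw=read
bs=1M
numjobs=24
direct=1
ioengine=libaio
iodepth=16
runtime=60
time_based
group_reporting
directory=/mnt/siflash
size=4G

; Random read test: 192 threads of 8k random reads.
; stonewall makes this job wait for the previous one to finish.
[rand-read]
stonewall
rw=randread
bs=8k
numjobs=192
direct=1
ioengine=libaio
iodepth=16
runtime=60
time_based
group_reporting
directory=/mnt/siflash
size=1G
```

Note the `direct=1`: without it, the page cache sits between fio and the device, which is exactly the kind of high-level/low-level divergence discussed above.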
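On "grab data from the OS reporting infrastructure": on Linux one way to see what the block device actually did is to snapshot /proc/diskstats before and after a test and difference the sector counters. A minimal sketch of that idea, with an invented device name and invented counter values (the real check would read /proc/diskstats on the test machine):

```python
# Cross-check a high-level benchmark number against what the kernel says
# the block device actually transferred, via /proc/diskstats deltas.
# The device name "sda" and the snapshot strings below are made up.

def sectors_read(diskstats: str, dev: str) -> int:
    """Return the cumulative sectors-read counter for `dev` from diskstats text."""
    for line in diskstats.splitlines():
        fields = line.split()
        if len(fields) > 5 and fields[2] == dev:
            return int(fields[5])  # field 6: sectors read since boot
    raise KeyError(dev)

def read_mb(before: str, after: str, dev: str) -> float:
    """MB actually read from the device between the two snapshots."""
    delta = sectors_read(after, dev) - sectors_read(before, dev)
    return delta * 512 / 1e6  # diskstats counts 512-byte sectors

# Hypothetical snapshots taken around a nominal 1GB read test:
before = "8 0 sda 1000 0 2000000 500 0 0 0 0 0 400 500"
after  = "8 0 sda 9000 0 4000000 900 0 0 0 0 0 800 900"

print(read_mb(before, after, "sda"))  # → 1024.0
```

If the number the API-level tool reports and this device-level number disagree badly, the device-level one is the truth; caching and buffering in the stack account for most of the gap.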