From HPCwire …
“Moving a physical disk-head to accomplish random I/O is so last-century,” said Allan Snavely, associate director of SDSC, co-principal investigator for SDSC’s Gordon system and project leader for Dash. “Indeed, Charles Babbage designed a computer based on moving mechanical parts almost two centuries ago. With respect to I/O, it’s time to stop trying to move protons and just move electrons. With the aid of flash SSDs, we can do latency-bound file reads more than 10 times faster and more efficiently than anything being done today.”
Ok … apart from the humor in the quote (and I am hoping that Allan or the writer meant that comment to be taken semi-humorously … it's also very possible Allan didn't say that and the writer took … er … liberties … yeah, that's it … and decided to embark on a more, how shall I say this … creative writing effort than serious journalism … embellished it a bit), there is another thread worth discussing.
If you happen to have an infinite budget, building a flash/SSD based storage system, yeah, not such a bad idea for random IO. If you happen to have a small fixed budget, and need to maximize random IO performance within that constraint … maybe flash/SSD isn’t the most appropriate direction. Flash/SSDs are not cheap now, and I don’t see them dropping in price any time soon.
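The budget trade-off above can be made concrete with a quick back-of-envelope calculation. The device figures below are illustrative assumptions (rough numbers of roughly that era), not measurements from any real system; the point is only the shape of the comparison — flash wins handily on random IOPS per dollar, but loses badly on capacity per dollar, which is exactly why a fixed budget can push you away from it.

```python
# Back-of-envelope: random-IOPS-per-dollar vs. capacity-per-dollar for
# flash SSDs and spinning disks. All numbers are illustrative assumptions,
# not measurements.

devices = {
    # name: (random_read_iops, price_usd, capacity_gb)
    "flash SSD":   (10_000, 300.0,   80),   # assumed ~$3.75/GB
    "7200rpm HDD": (   150,  80.0, 1000),   # assumed ~$0.08/GB
}

def iops_per_dollar(iops, price):
    """Random IOPS delivered per dollar spent on the device."""
    return iops / price

for name, (iops, price, gb) in devices.items():
    print(f"{name}: {iops_per_dollar(iops, price):.1f} IOPS/$, "
          f"${price / gb:.2f}/GB")
```

Under these assumed numbers the SSD delivers well over an order of magnitude more random IOPS per dollar, while the disk delivers an order of magnitude more gigabytes per dollar — so which one "wins" depends entirely on whether your budget constraint binds on IOPS or on capacity.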
But this whole concept of moving random IO off devices with mechanical motion is a good one. The question is how to accomplish it in the most cost-effective manner possible, and how to do so within a single name space (i.e. whether or not the file system is distributed should be irrelevant to the application) … or is that also an anachronism, "so last century" as it were?
That is where we (and others) are looking to SDSC and the other national labs for research results. This matters, as bandwidth walls are rising faster than ever as data sets grow larger.