Disruption in HPC (and storage)
By joe
- 2 minutes read - 284 words

On InsideHPC, John West has an interesting story on disruption in HPC markets, and predictions on the success or failure of a business. There are some interesting tidbits throughout the article.
This made me smile.
Our Delta-V encompasses these ideas. It's designed to be a lower-end storage target, and the tools we have developed around it (and are continuing to develop) for simplified management are meant to make dealing with large numbers of these devices very easy. We are not completely there yet, and are still working on a number of things in this regard, but the tools are coming along nicely and are in production at quite a few sites as primary/secondary storage for clusters and groups. In conjunction with our target management software, we currently provide a consistent user interface for configuring and using iSCSI block and NFS file targets, and we are building up from there to other technologies.

The focus of this work is on enabling lower cost with reasonable performance. By adding extensive connectivity over gigabit, InfiniBand, and 10GbE, we can provide simple, standards-based, commoditized fabric connections. In the design of the system, we did everything we could to lower the BOM cost as far as reasonable. Some choices for these sorts of systems are, in our opinion, unreasonable (c.f. Backblaze), in that they wind up erecting a bandwidth wall between you and your data. You never want to do this in HPC. Since HPC is not their goal, their choices may be reasonable for their purposes. For HPC, you want to minimize the cost per flop, the cost per byte moved, the cost per unit of latency, and the cost to scale storage up.

You will likely be hearing more about this soon. I promise.
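To make the "bandwidth wall" idea concrete, here is a minimal back-of-the-envelope sketch. All numbers are hypothetical (they are not specifications of Delta-V, Backblaze, or any real product): the point is simply that the ratio of capacity to link bandwidth determines how long it takes just to move your data, regardless of how cheap the bytes are to store.

```python
# Illustrative only: the "bandwidth wall" is the ratio of capacity to
# network bandwidth -- the time needed just to stream your data in or out.

def drain_time_hours(capacity_tb: float, link_mb_per_s: float) -> float:
    """Hours needed to stream the full capacity over the given link."""
    capacity_mb = capacity_tb * 1e6  # decimal TB -> MB
    return capacity_mb / link_mb_per_s / 3600.0

# Hypothetical dense archive pod: large capacity behind a single GbE link.
archive = drain_time_hours(capacity_tb=67.0, link_mb_per_s=110.0)

# Hypothetical HPC storage node: less capacity, a 10GbE-class link.
hpc = drain_time_hours(capacity_tb=48.0, link_mb_per_s=1100.0)

print(f"archive pod drain: {archive:.0f} h, HPC node drain: {hpc:.0f} h")
```

With these made-up figures the archive-style design needs roughly a week to move its contents over the wire, while the HPC-style node needs about half a day. That order-of-magnitude gap is the wall: fine for cold backup, unacceptable when a cluster has to move the bytes as part of a computation.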