HPC in the first decade of a new millennium: a perspective, part 3
By joe
The relentless onslaught of clusters … In 1999 we were also mostly doing SMPs and MPPs. Clusters were barely registering. See the chart and the data to get more perspective. What happened in the market was a simple alteration of the cost scale per flop: clusters provided massive numbers of cheap cycles. Add to this that MPI had been standardized, was reasonably well designed, and people were migrating codes to it. Funny thing: MPI on a cluster ran just as nicely as MPI on the SGI, at a small fraction of the cost. Pay attention to that. It is happening again. More in a bit. As you can see from the chart and the data, clusters began to appear on the big radars around the 2001 time frame. They began to eat everyone's lunch around 2002, and simply did not stop.
But something else happened. In the 1999/2000 time frame, the HPC market size was about $6B USD. By 2009, this number was closer to $20B USD. Between 1999 and 2009, clusters rapidly took over most of HPC. There are a few large non-cluster systems around, but they aren't "mass market" HPC systems. We'll get back to that.

The ascension of Linux in HPC … In 1999, HP was porting HPUX to EPIC, IBM may have been thinking of AIX on EPIC, and SGI was going Linux on EPIC; IRIX was kicked to the curb. There were many different OSes
![Operating systems over time](/images/operating-systems-over-time.png)
with an understanding that a few of them would be going away. Looking at the list now, and noting that SLES is Linux, we have Linux, AIX, and "other" — of which a fairly large percentage of "other" is liable to be some variant of Linux.

What is nice about this is that you can now be effectively source-code compatible across these systems, and in a number of cases, with an intelligent build environment, binary compatible … you can move applications from machine to machine. This is what commoditization of the platform buys you. The cost to move to a new platform is governed by licensing costs, as well as the time to implement and stand up the platform and application. More on that in a bit as well.

The point is that the historical trends weren't pointing in any one direction in 1999 for OSes. Now we can see the inexorable march to a single OS. This lowers ISV and developer costs for qualifying code on a system. It lowers the cost of moving code between systems. It promotes system use.

In 1999, I was into my 4th year of playing with Linux. I had been able to port my MD code to g77 … some minor changes to a few functions. It compiled and ran fine. What struck me, as an end user, was how helpful Linux was compared to Irix at the time. Things worked, well, the way I expected them to work. In Irix, I sometimes had a few issues I couldn't figure out. I remember learning how to map between SGI's installation tools and rpm. And then I found RPM had a richer capability (at the time) than SGI's tool.

Real IRIX die-hards refused to consider that this upstart product could be better than their tool. They liked their boot environment, dammit, and they would not alter it to make booting saner/easier. They liked their tools (some were quite good), and they didn't want to look at anything else. For those people, such inflexibility became a fatal mistake. Some of the IRIX die-hards I saw then … I now see on LKML working on Linux kernel bits. Go figure.
From a geographic sense, North America had the largest number of HPC systems, and this hasn't changed terribly much over time. I do expect it to change, but that might be for a post of prognostication, which I might not be sufficiently courageous to do in public.