HPC in the first decade of a new millennium: a perspective, part 1

[Update: 9-Jan-2010] Link fixed, thanks Shehjar!

This is sort of another itch I need to scratch, so please bear with me. This is a long read, and I am breaking it up into multiple posts so you don’t have to take it all in as one huge novel.

Many excellent blogs and news sites are giving perspectives on 2009. Magazine sites are talking about the past year’s hits in HPC across computing, storage, and networking.

You can’t ignore the year itself, and I won’t. Doug Eadline’s piece is (as always) worth a careful read.

I want to look at a bigger picture than just the last year, though: the last decade.

My reasoning? A famous quote from George Santayana: those who cannot remember the past are condemned to repeat it. OK, we aren’t doomed to repeat our past, but something very remarkable is going on in the market now. Something that has happened before, and under similar circumstances.

Let me explain.

In the beginning …

At the close of 1999, many things were happening in the HPC market. Things that would quickly reshape the organization of the market, how people made money in it, how systems were designed, and how code was written and run.

Ask an SGIer back then (well, most of them; I and a few others were exceptions at the time) whether commodity machines would ever replace the distributed shared memory model for HPC … and the answer you would hear was an unequivocal, resounding … even scornful … no. Ask the same question of an IBMer, an HP-ite, a Sun person, a DEC person. You’d get largely the same answer.

HPC as a market was finishing the transition away from vector architectures. This is a story in and of itself, as vector machines dominated … no … ruled … early HPC. The super microprocessors, the “killer micros” of the era, ate their lunch, not on raw performance, but on cost per flop, and on the number of flops available per unit time per dollar (or other appropriate currency).
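
To make that economic argument concrete, here is a minimal sketch of the dollars-per-flop arithmetic in Python. Every price and flop figure below is a hypothetical placeholder invented for illustration, not a historical quote; the point is the shape of the ratio, not the numbers.

```python
# A minimal sketch of the dollars-per-sustained-flop arithmetic.
# All numbers are hypothetical placeholders, not historical prices
# or benchmark results; only the shape of the comparison matters.

def cost_per_gflop(system_price_usd, peak_gflops, sustained_fraction):
    """Dollars per sustained gigaflop: price / (peak * fraction delivered)."""
    return system_price_usd / (peak_gflops * sustained_fraction)

# Hypothetical vector machine: sustains a high fraction of peak, costs a fortune.
vector = cost_per_gflop(10_000_000, 50, 0.90)

# Hypothetical commodity cluster: sustains far less of peak, costs far less.
cluster = cost_per_gflop(500_000, 100, 0.20)

print(f"vector machine:    ${vector:,.0f} per sustained GFLOP")
print(f"commodity cluster: ${cluster:,.0f} per sustained GFLOP")
# Even delivering only a fifth of its peak, the commodity pile comes out far
# cheaper per delivered flop, which is the axis the market priced on.
```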

To wit, in 1999 the processor landscape in HPC looked like this (generated from the excellent top500.org site):


[Figure: processor families in the Top500, November 1999]

Indeed, if you leverage the historical charting capability of the top500 site, you arrive at an inexorable conclusion.


[Figure: processor families in the Top500 over time]

That being: architectures come, and architectures go, in processors, in system design, and elsewhere. Moreover, you see consolidation.

You can see, taking the slice at 1999, why all these large RISC CPU vendors would scoff at the little commodity machines. It’s not like they were a threat now … were they? They were just cheap “Pee Cees” after all. How could they compete with the mighty SMP in HPC? (For added derision, they emphasized the “Pee” portion.)



Aside from this, about half of these vendors had just signed on to the good ship Itanium, based in part upon some … er … creative … yeah, that’s the ticket … market penetration data … manufactured … er … estimated by one of the market watchers.


[Figure: the infamous Itanium market forecast]

Yeah, in the brand new millennium, we would soon all be using VLIW-based machines, and companies would be selling Billions and Billions of dollars’ worth of them.

Yeah.

Billions. Of. Dollars.

Of Itanium.


6 thoughts on “HPC in the first decade of a new millennium: a perspective, part 1”

  1. Sure, marketing failed Itanium, perhaps even the cost, and most definitely the power consumption, but it wasn’t such a bad processor if you only look at the technology behind it … a beast of a processor … a guzzler, even.

    Also, the link to the first image is broken.

  2. @Shehjar

    What failed Itanium was a combination of things. First and foremost, cost: the price for it and its surrounding infrastructure was too high compared to the alternative choices. Second was compatibility: the alternatives could run native x86 code fast, without any special hoops to jump through … Itanium could only offer a slower emulation mode for that code … not a good idea.

    The competition for this space started out in the x86 world, with Pentium (and, to a lesser extent, AMD) units able to do most of what Itanium could do, at a lower cost and a higher speed. There were a few, very few, cases where Itanium shone.

    The competition evolved in this space, so that AMD introduced x86-64, which Intel initially ignored, as they didn’t want to compete with their own IA64 platform. Once they realized that IA64 was relegated to niche products and the future really was x86-64, they came back with initially weak, then progressively better offerings.

    Again, there is a minuscule subset of codes for which Itanium is best. The question that needs to be answered is: is it worth its cost? That is, will the cost-benefit analysis ever favor Itanium over its competitors? The market has spoken on this issue with great clarity.

    Whether it was technologically advanced or just marketing hype doesn’t really matter. What does matter is having the right mix to gain market share. Slower, incompatible, and more expensive is not the right way to do this.

  3. It’d be hard for anyone other than the executives of those companies to know for sure what led to their acquisitions.

    It’s conceivable that the poor economy played a part in the acquisitions. For example, one or more might have been facing cash flow problems, and just didn’t have many options other than to sell the company. Whereas in a booming economy, they might have been able to raise money from other investors, get lines of credit, have had more customers, etc.

    But I tend to agree that they simply had technology that Intel and Microsoft felt they needed quickly, and were willing to pay for. All three of these companies were focusing on software for developing software, and HPC customers are notorious for not wanting to pay for such software ;-(. So being acquired by a larger company was a likely exit strategy all along …
