The evolving market for HPC: part 1, recent past

I’ve said this many times, and at many different venues. HPC drives downmarket, and does so very hard. High cost solutions have limited lifetimes, at best. At worst, they will not catch on.

2013 was the year of the accelerators. We predicted this many years ago. I won’t beat this dead horse (for us). I’ll simply say “we were right”, and right with great specificity and accuracy. This seems to be a pattern with us, this “being right”.

2013 was the year that SSD/flash in general really took off. It’s not that there is anything special about flash; it is, to a degree, the first of many technologies with the potential to replace disks. But one of the things that happened in 2013, and that I think we’ll see really amplified in 2014 and beyond, is that the market for flash is really flash in the disk format. That is, people like their disk use semantics … they understand replacing a failed drive. Ask them to power down a production server to swap out a failed PCIe card? No, that’s not going to happen. Ask them to offline a non-disk array to swap out a failed module? Again, no, not going to happen. I’ll talk about this more in a post in the very near future (today or tomorrow).

2013 was the year of realizing that big data infrastructure is not simply a pile-o-PCs, or a pile-o-cloud-VMs, or a random distribution of hadoop/noSQL/… . There are strong performance requirements on large scale analytics. If you use inefficient resources, you have to buy a whole helluva-lotta resources to achieve the same goals as you could with more efficient designs. Think about how logical that is: more inefficient machines (in computing, IO, networking, and combinations of these) are required to perform the same work as fewer, far more efficient machines. These efficiencies show up in hardware design, software stacks, and elsewhere. This is how a machine that had no right to beat competitors on a benchmark set records on 2/3 of the tests.
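The arithmetic behind that argument is straightforward. A minimal sketch, with purely hypothetical per-node numbers (they are not from any benchmark mentioned here), showing how node count scales inversely with per-node efficiency for a fixed aggregate throughput target:

```python
import math

# Hypothetical target: aggregate sustained throughput the analytics job needs, in GB/s.
target_gbps = 100.0

# Assumed per-node sustained throughput for two designs (illustrative numbers only):
efficient_node_gbps = 5.0     # tightly engineered hardware/software stack
inefficient_node_gbps = 0.5   # generic pile-o-PCs with untuned IO paths

efficient_nodes = math.ceil(target_gbps / efficient_node_gbps)      # 20 nodes
inefficient_nodes = math.ceil(target_gbps / inefficient_node_gbps)  # 200 nodes

print(efficient_nodes, inefficient_nodes)  # -> 20 200
```

A 10x per-node efficiency gap means 10x the machines, plus the racks, power, network ports, and failure domains that come with them.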

2013 was the year end users realized that the older filer-head based storage designs simply don’t scale, and cannot scale. JBOD arrays behind a head is a design from the 1990’s, and it makes no sense in the era of multi-100TB and beyond storage systems. This has to do with people realizing the value of bandwidth wall height analysis. You risk making data as cold as possible … frozen, actually … unable to be accessed in a reasonable period of time … with designs that harken back to the dot-com era. Seriously, anyone building something like that ought to be writing that data to tape and storing it in a bunker somewhere. And I am not a fan of tape.
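To make the bandwidth wall concrete, here is a minimal sketch, using assumed (not measured) numbers: the time to stream an entire store through a single filer head is just capacity divided by head bandwidth, and at multi-100TB capacities it climbs into days:

```python
# Hypothetical store and head: numbers are illustrative assumptions, not measurements.
capacity_tb = 500      # a multi-100TB system, per the era discussed above
heads = 1              # classic single filer head in front of JBOD
head_gbps = 1.0        # assumed sustained bandwidth per head, in GB/s

seconds_to_drain = capacity_tb * 1000 / (heads * head_gbps)  # capacity in GB / GB/s
days_to_drain = seconds_to_drain / 86400

print(round(days_to_drain, 1))  # -> 5.8 days just to read the data once
```

That ratio is the wall: capacity grows with every added shelf, but bandwidth stays pinned at the head, so the time to touch your own data keeps getting longer.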

2013 was not the year of ARM. It was the year of ARM hype. But for reasons having nothing to do with ARM itself. Basically, ARM was the anti-Intel. Everyone was looking for an anti-Intel, in order to maintain pressure on Intel. I can’t tell you how many times I’d heard “Intel is done” or “Intel is in trouble” in meetings with customers or partners. This wasn’t true, though there was a great deal of wishing it were true on the part of some. We hedged a bit by trying to work with Calxeda. Unfortunately, as we rapidly discovered, the claims about ARM as a viable replacement for Intel were simply not true (specifically talking about Calxeda’s chips) at this time. Calxeda was positioned in customers’ and partners’ minds as being “just like Intel, maybe a little slower, and much less power”. Neither of these statements was true. The chips were badly underpowered, and when you aggregated enough of them to do interesting work, they ran … HOT … .

I remember in my initial discussions with them, they talked about the CPU and a number of other things. I noted that the first version was 32 bit, and asked when we could start playing with 64 bit parts. The answer I got back surprised me (this was ~2 years ago). I hoped they could survive their 64 bit drought, as no one actively wants to use 32 bit parts anymore. This should be a cautionary tale for any vendor pushing out systems. If you are not 64 bit, you need a clear niche to work in, unless you want the problem of having no customers. That ship has sailed. 64 bits or bust.

I remember having subsequent discussions with them where they talked about the storage market. Storage is becoming very computationally intensive thanks to a variety of computations you have to do in situ, for which underpowered chips need not apply. These include various RAID and erasure coding schemes, encryption, and high performance networking. You can’t get there from here if your chips can’t even drive a 10GbE network at 50% of line rate. It’s not worth even trying on such a platform.
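A rough sketch of why that in-situ work is so demanding, using assumed parameters (the Reed-Solomon layout and rates below are hypothetical, chosen only to illustrate the scaling): every byte arriving at wire speed has to flow through the parity math, so the required Galois-field operation rate is a multiple of the network rate before you even get to encryption.

```python
# 10GbE line rate, converted from gigabits to gigabytes per second.
wire_gbps = 10.0 / 8          # ~1.25 GB/s

# Hypothetical Reed-Solomon erasure code layout: 8 data chunks + 2 parity chunks.
k, m = 8, 2

# Each incoming data byte feeds m parity computations, so the chip must sustain
# roughly m GF(2^8) multiply-accumulates per byte of wire traffic.
gf_ops_per_sec = wire_gbps * 1e9 * m   # ~2.5e9 GF ops/s, parity alone

print(f"{gf_ops_per_sec:.2e}")  # -> 2.50e+09
```

Layer encryption and checksumming on top of that, and a core that struggles to fill half a 10GbE pipe in the first place has nothing left for the storage math.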

There’s much more, and I’ll cover this in subsequent posts.
