The future of HPC

Some of us have been arguing for a while that the future of HPC is aSMP (asymmetric processing) or heterogeneous processing. Others have argued that the future is massive multicore. In the aSMP world view, there are camps forming between RC (reconfigurable computing) and GPU/Cell-like computing.

Here is what is interesting. In an article just posted on HPCWire, an “anonymous” writer, who has in the past argued the vector case strenuously, offers an extremely good analysis of the issues in front of us.

I won’t recap it, other than to note his point that nanocore (massive multicore taken to something close to an “absurd” limit) is likely heterogeneous by nature, and that we will need to rethink how we program these things. I don’t see him talking about RC.

However … imagine, if you will, something akin to the Stretch processor in a nanocore ensemble.

He is right: programming these things will be hard, though Amir and numerous other bright people are working on tools.

Nanocore, or massive multicore, is problematic in that memory bandwidth is a shared (and highly precious) commodity, to be used wisely and sparingly.
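A back-of-the-envelope sketch makes the point. All numbers below are invented for illustration only: if a chip's memory interface is shared across all cores, the bytes of memory traffic each core can afford per floating-point operation collapses as the core count grows.

```python
# Back-of-the-envelope: per-core memory bandwidth as core counts grow.
# All numbers are hypothetical, purely for illustration.

def bytes_per_flop(total_bw_gbs, cores, core_gflops):
    """Memory bytes available per floating-point op, per core,
    when every core shares one memory interface."""
    per_core_bw = total_bw_gbs / cores   # GB/s available to each core
    return per_core_bw / core_gflops     # bytes of traffic per flop

# A hypothetical chip with 100 GB/s of shared bandwidth,
# each core capable of 4 GFLOP/s:
for cores in (4, 64, 1024):
    print(cores, round(bytes_per_flop(100, cores, 4), 4))
```

At 1024 cores the budget is a few hundredths of a byte per flop, which is why a nanocore design forces you to keep data on-chip and spend bandwidth sparingly.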

Cell is an example of (not so massive) heterogeneous multicore. Indications from some sectors are that it is hard to program, though others I have spoken to aren’t saying that.

I think the issue is, at the end of the day, the programming and abstraction model. How can you conceptualize and express your problem such that you have a fighting chance of efficiently using the resources?

ILP is all about packing instructions more efficiently per unit time. Massive multicore is about massively increasing the number of instructions in use per unit time. RC is about making those “instructions” far more efficient.
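Those three approaches can be viewed as three levers on one toy throughput model (all figures below are made up for illustration, not measurements of any real machine):

```python
# Toy model of delivered work: three levers, one formula.
# Every number here is invented, purely for illustration.

def ops_per_second(cores, ipc, ghz, work_per_op=1.0):
    """cores: number of instruction streams (massive multicore lever),
    ipc: instructions packed per cycle (ILP lever),
    work_per_op: useful work per 'instruction' (RC lever, where one
    reconfigured operation may stand in for many ordinary ones)."""
    return cores * ipc * ghz * 1e9 * work_per_op

baseline  = ops_per_second(cores=1,    ipc=1, ghz=2)
more_ilp  = ops_per_second(cores=1,    ipc=4, ghz=2)   # wider issue
many_core = ops_per_second(cores=1024, ipc=1, ghz=2)   # nanocore
rc_style  = ops_per_second(cores=1,    ipc=1, ghz=2, work_per_op=50)
```

The levers multiply rather than exclude each other, which is part of why a hybrid of massive multicore and RC is plausible.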

Something tells me that, though the author didn’t touch on it, the future solutions are going to look a bit like massive multicore with some sort of RC capability. Which is what I said before. Amir and others have expressed, I think, similar views; maybe I am starting to come around to them the more I think about it.

Over the last 20 years we have seen fundamental shifts from single monolithic large machines to parallel supermicros, to parallel massive machines. Again, the sea is changing. Each time this happened before, the size of the supercomputing market grew significantly. This is good for everyone, but we are going to need capital to make it happen.

Maybe we will be eating nanocores with strawberries and bananas for breakfast. I have a Charles Stross-ian sense that the change that is coming might be better expressed in computational power per kilogram than number of cores ….
