Demoing Accelerated Computing

So I flew to Eilat and demonstrated how a little accelerated computing worked relative to a cluster. What really got to me was how simple a demo it was. The fingers never left the hands, and all that.

We ran an HMMer run on the cluster, then a Scalable HMMer run on the identical data set. Then we ran an 8-way cluster run using MPI-HMMer, again running the same data and options. Finally we ran the accelerated computing version. Same input decks. Same options.
Scalable HMMer was 2x faster than regular HMMer. MPI-HMMer was about 8.2x faster than regular HMMer. Hardware-accelerated HMMer was about 10x faster.
This was the application run time, not the core algorithm.
This is important. No one cares if you make one bit of the code much faster, they only care if you reduce the overall run time significantly.
The MPI-HMMer team is merging more of this work together, and should be announcing additional things soon. What if you could give multiple orders of magnitude of acceleration to your most time consuming applications? Could this change your work?
In the time from 1990 to 2004, my little molecular dynamics code went from taking 1 week for 100 time steps on a “superworkstation” to taking about 3-4 seconds per time step on my laptop. Yeah, this could count as acceleration, though I did some code optimization along the way. 6048 seconds per time step down to 4. From Moore’s law, we expect an order of magnitude (OOM) every 6.6 years. 13.2 years gets us 2 OOM. This gets us to 60.5 seconds. The rest comes from code optimization (one more OOM).
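That back-of-the-envelope accounting can be sanity-checked in a few lines of Python. The figures (one order of magnitude per 6.6 years, the 1990 and 2004 timings) come straight from the paragraph above; the split between hardware and code optimization is only as good as the Moore's-law assumption, and I use the full 14-year span rather than the rounded 13.2 years:

```python
# Sanity check of the speedup accounting above (figures from the text).
week_seconds = 7 * 24 * 3600        # 604800 s in a week
t_1990 = week_seconds / 100         # 1 week per 100 steps -> 6048 s/step
t_2004 = 4.0                        # ~3-4 s/step on a 2004 laptop

total_speedup = t_1990 / t_2004     # ~1512x, a bit over 3 OOM

# Moore's law as stated above: one order of magnitude every 6.6 years.
years = 2004 - 1990                 # 14 years
moores_speedup = 10 ** (years / 6.6)  # ~132x expected from hardware alone

# Whatever Moore's law does not explain gets credited to code optimization.
optimization_speedup = total_speedup / moores_speedup  # roughly one more OOM

print(f"total {total_speedup:.0f}x = hardware {moores_speedup:.0f}x "
      f"* code {optimization_speedup:.1f}x")
```

Hardware alone accounts for roughly two of the three-plus orders of magnitude; the remaining ~11x is the code work.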
Getting code onto accelerated computing is still non-trivial. Even with RapidMind, PeakStream and others, or Celoxica tools, or … The SDKs mostly cost too much (apart from CUDA). Someone who prices their SDK at a number comparable to the cost of the hardware isn’t thinking they want to sell many units.
Worse, the porting aspect is non-trivial for FPGA and for “stream” processing. This is not simply: take your C code and it will run 100x faster. No. It won’t.
The major hurdles I see for accelerated computing are the application ports. Over time I expect the market to sort out the tools. I expect (as do most end users) that the lower cost tools will be the ones to thrive. The history of HPC is littered with the bones of companies that made the critical mistake of not understanding that a) HPC moves downstream, b) HPC moves towards the less expensive providers, and c) asking people to pay much higher costs for a small increase in value is a sure way to lose.
You see, while the speed is important, and 10x better performance can be delivered today rather than in 6.6 years, charging people 4-6x the node price for 10x the performance simply doesn’t work out from an economic view. Just wait a year for better price/performance from Moore’s law, and voila, problem solved.
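A rough sketch of that economic argument, assuming Moore's law delivers one order of magnitude per 6.6 years (about 1.42x better price/performance per year); the 10x and 4-6x figures are from the text, the rest is arithmetic:

```python
import math

# How long until commodity price/performance erases an accelerator's edge?
perf_gain = 10.0          # accelerator speedup over a plain node
oom_years = 6.6           # one order of magnitude per 6.6 years

for price_premium in (4.0, 6.0):
    # Price/performance advantage of the accelerated node today.
    edge = perf_gain / price_premium
    # Years for Moore's-law improvement to close that gap:
    # solve 10**(t / oom_years) == edge for t.
    years_to_parity = math.log10(edge) * oom_years
    print(f"{price_premium:.0f}x price: edge {edge:.2f}x, "
          f"overtaken in {years_to_parity:.1f} years")
```

At a 6x price premium the edge lasts about a year and a half; even at 4x it is gone in well under three years, which is why the premium pricing doesn't hold up.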
10x works today. Can we get the apps on them?
Working on it, though as noted many times, VCs and other potential capital sources are not even remotely interested in accelerated computing or HPC. Which means that this will go slowly.

6 thoughts on “Demoing Accelerated Computing”

  1. In another post you said that HPC is not sexy, so VCs do not come. I’ve been trying to wrap my head around it. There was a time HPC was sexy (I think). What changed? Is it that the machines don’t occupy a room anymore? Or are the major vendors to blame?
    I need to think about this more.

  2. Once upon a time …
    No, wait …
    Late 80s early 90s there were pushes for MPP as being “cheap” and “fast”. Unfortunately they were expensive and hard to program. VCs lost money. Early/mid 90s there were a few workstationy startups that promised supercomputing performance for “PC prices” that didn’t deliver on either. VCs lost money.
    Along come clusters which have effectively commoditized and mainstreamed HPC, and VCs sat out that round as they watched their bubble-oriented grid plays destroy value and cash. Someone sold them bill of goods after bill of goods. And they lost money.
    During this time, the HPC market went from 1.18B$/year in the early 90s, to over 10B$ (using the US definition of B == 10**9, or 10^9 for the Fortran challenged) last year. Using a simple growth model, assuming that the growth is uniform,
    (1.18B$) ( 1 + x )**(Nyears) = 10B$
    where Nyears is 17 (1990 to 2007), x would be your yearly growth rate. I know, there are lots of reasons to assume that x has not been constant over time, but it is a reasonable estimation and first approximation to the correct time dependent value of x. For example, we know what x is recently, and it is underpredicted by this model. Regardless, solve this for x, and you get x ~ 0.134, or x ~= 13.4% growth rate year over year.
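Solving that growth equation numerically is a one-liner; the 1.18B$, 10B$, and 17-year figures are from the comment above:

```python
# Solve (1.18 B$) * (1 + x)**nyears = 10 B$ for the yearly growth rate x.
start, end = 1.18, 10.0   # market size in B$, 1990 and 2007
nyears = 17

x = (end / start) ** (1.0 / nyears) - 1.0
print(f"implied growth rate: {x * 100:.1f}% per year")
```

This yields roughly 13.4% per year, which is indeed below the ~20% observed recently, so the constant-growth model underpredicts as noted.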
    Recently, we know that the growth rate has been in the 20% region. I grabbed data from as many sources as possible for this, and there seems to be a fairly good consistency to it.
    See for details. Note the jink up in the early 2000s. This is due to Linux cluster adoption. The value of x for this aspect of the market is larger than 20%, and has been pushing 60% for several years. Linux clusters represent more than 1/2 of this market, and have been driving growth in it for several years, as well as driving the growth behind HPC in general.
    The idea is that the less expensive a technology is that adds significant value to a process, the more users you will have (and a non-constant dollar volume market). That is, there is elasticity in this market.
    So if you can deliver a small cluster’s worth of power for about the cost of a node, this could drive additional growth. Assuming, that is, that you can build it. And this requires capital. Which doesn’t seem to be interested in this market.
    I had a discussion with Sharad Sharma over at Orbit Change about this, and his thoughts seem to be that it made sense that there is no interest, though I could not understand his reasoning. There are hugely disruptive technologies coming to the mix in HPC, and there is a large pool of potential adopters. Moreover, talking with them, they understand the value. You don’t need to convert them; they are already hooked. One needs the right product at the right time, and a reasonably good plan to get it there (not to mention a clueful management team).
    I can’t fathom why the private equity market doesn’t have an interest in this. Friends have suggested many theories, including the previous burn histories, a herd mentality, and so on. Look at what has been funded, try to guess what their business model is (e.g. how they plan to make money and sustain themselves, while creating value and return for the owners). Some of it, well, you have to shake your head.
    As someone building a profitable accelerated computing (aka HPC) company with a good customer base, my hope is that, eventually, they will get around to noticing this market and not discounting it out of hand. Until then, organic growth appears to be the only option. Which doesn’t do much for being able to fund disruptive product development. Sleep-disrupting development works, but that gets old, fast.

  3. Yes. That was at the southern tip of Israel. I could see the Red sea from the hotel, as well as the Jordanian border. It was at the Dan Eilat.
    Beautiful area, wonderfully scenic. Reminded me of San Diego.

  4. Sorry about that. It was ECCB06, the European Conference on Computational Biology 2006. It was delayed due to last summer’s war. See their website.

Comments are closed.