I haven’t posted on accelerators in a while, and this will be short.
I have posited that GPUs would basically win the accelerator wars, with something like Cell a possible (if distant) second, provided enough of them could be made available inexpensively.
My question now is, given Intel’s intent in this market, will Larrabee be able to get traction in the graphics world, and thereby effectively displace nVidia (and, to a lesser extent, AMD) as the accelerator king?
The economics of accelerators are something like this. An accelerator provides a real wall-clock speedup S such that S = (wall-clock time on one node) / (wall-clock time on accelerator + node). The cost of getting a factor of S more performance cannot approach S times the cost of one node; otherwise it is easier to simply leverage the friendlier programming model of the node and buy S more nodes. That is, (cost of accelerator + cost of node) / (cost of node) must stay below some small factor, which must certainly be less than S.
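That break-even test can be sketched in a few lines. This is a minimal illustration of the condition above; the function name and the example prices are mine, not from the post.

```python
def accelerator_worthwhile(speedup, node_cost, accel_cost):
    """True if buying the accelerator beats buying `speedup` extra nodes.

    The total cost ratio (node + accelerator) / node must stay below the
    speedup S, or it is cheaper (and easier to program) to simply buy
    S more plain nodes instead.
    """
    cost_ratio = (node_cost + accel_cost) / node_cost
    return cost_ratio < speedup

# Example: S = 10, node costs $5,000, accelerator costs $2,500.
# Cost ratio is 1.5, well below 10, so the accelerator wins.
print(accelerator_worthwhile(10, 5000, 2500))    # True
# An accelerator costing 20x the node fails the test (ratio 21 > 10).
print(accelerator_worthwhile(10, 5000, 100000))  # False
```

In practice the bound needs to be much tighter than S, since the accelerator also carries a programming-effort cost that plain nodes do not.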
What these factors are is ill-determined, though some people have a gut feel, and they are constrained by Moore’s law. Moore’s law gives you one OOM (order of magnitude) roughly every 6 years. So if S = 10 (basically one OOM), and you assume you can replace your hardware at about the same cost twice in that interval, your cost of S shouldn’t be more than 2x the cost of the node, because the end user will be able to get this same S in about 5 years just by spending 2x the cost of the node on replacements.
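A quick check of that Moore’s-law arithmetic: one order of magnitude every ~6 years implies a doubling roughly every 1.8 years. The helper below is my own sketch under that assumption.

```python
import math

def years_to_speedup(speedup, years_per_oom=6.0):
    """Years until commodity nodes reach a factor-of-`speedup` gain,
    assuming one order of magnitude (10x) per `years_per_oom` years."""
    return years_per_oom * math.log10(speedup)

print(round(years_to_speedup(10), 1))  # 6.0 -- a full OOM takes ~6 years
print(round(years_to_speedup(2), 1))   # 1.8 -- a single doubling
```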
This is at the high end of the analysis.
I think at 1/2 to 2/3 the cost of a node, you wind up with a “no-brainer”. It also helps if you have 10M+ units in circulation that people can work with on laptops and desktops.
Which gets us back to Larrabee. What it costs will be as important as how hard it is to program.
But nVidia was out first, and in this market, that matters.