I have recently been talking with someone about using FPGAs to accelerate non-scientific applications, that is, business applications.
The idea is fundamentally interesting: HPC is not the only thing that needs acceleration. My question is this: where are the critical pain points? What processing takes so much time that people would be willing to spend, I dunno, $10,000 US to make it go faster?
I am using $10,000 US as a rough guess; it might be $1,000 US. I don’t believe there is much of a market for an accelerator above $30,000 US, and we can largely see that the cost-benefit analysis stops working at that point.
Along these lines, nVidia just released the Quadro Plex. We are quite interested in this; hopefully they will loan out a sample or two. It is a GPU architecture, but with some effort it could be made to work well for other calculations.
MDGrape was also announced. It is a point product, but a petaflop point product. I don’t have a problem with that, though it will be hard to sell and maintain a point-product accelerator priced orders of magnitude above the sweet-spot pricing for HPC.
It might be worth not calling this HPC at all. This is accelerated computing, accelerated processing. HPC purists tend to disdain clusters and non-globally-shared memory as inferior. Fine: call these dedicated computing circuits, or Accelerator Processor Units, APUs for short.
This is what FPGAs are when applied to computing tasks: not general high performance computing with special-purpose memory systems and powerful general-purpose high performance processors, but dedicated, task-specific computing circuits.
I think the market for these might just be a whole lot larger than the “small” $9 billion that is HPC these days.