# tracking other companies

SGI and Clearspeed. SGI is now down to $4.90/share at its close, after dropping 11% yesterday. Market cap is $57M.

Yow. Yeah, the market has been volatile, but I am not sure that explains this. With 1,600 employees, this is a valuation of about $36k/person. They are rapidly getting to a place where their valuation and ours become comparable. They are getting wins, but maybe the wins are not as profitable as they need to be … or maybe the ones we hear about are the only ones, rather than a representative set.

It is hard to be a profitable HPC company. Most of the HPC companies out there resell other companies' products. Not a terrible thing, but there is less margin in it for you. And the market is brutally unforgiving on margins. You want 40% margins? You are not going to get them in this market.

I noted when they emerged from bankruptcy that they were competing with the HPs and Dells of the world. I cannot emphasize this enough: one should not try to out-Dell Dell. Dell can ship megatons. It makes far more sense to try to work with them. Given their loss of real differentiation, what makes them different from a Dell? This is the question they need to answer. Yes, some folks have brand loyalty. Whether that makes sense in this market, versus getting real value (and understanding what real value represents in terms of features/performance), is a whole other discussion. As I said before, HPC is an unforgiving market. Focus where you can add real value.

Ok, now on to Clearspeed. Jim Black noted in an earlier comment:

> Any comment on today's news on Clearspeed's new CSX700 chip? It seems to be a big improvement on the last one. The blog on the tomshardware website thinks it could represent a breakthrough. Basically it says: Clearspeed will give 96 GFLOPs out of 12 watts at double precision, which compares well with nVidia's chip at 100 GFLOPs in double precision mode, consuming 170 watts.

The issues around acceleration tend to center around cost-benefit and effort-cost. How much does it cost, and what will its impact likely be? How much effort, and at what cost, will obtaining this benefit require?
If it takes you 3 months to get 20x performance, what is the value of that extra speed to you, versus the 10x you might be able to get with another choice? That is, what is the value of the opportunity cost of the alternative choices?

Put more simply: where is the price-performance knee, and what technologies sit below/above this knee? The knee represents something like an optimax: maximize value (e.g. return on money and time investment) at a minimum cost (in money and time). You can see such knees in computer part prices … premium parts cost more than the benefit they give. Look at the Opteron quad-core 8xxx series: the 5% increase in clock rate will cost you 20+% more.

Ok. Back to Clearspeed and Jim's comments. 96 GFLOP at 12 W for the new CSD.L part. 100 GFLOP at 170 W for the new nVidia part (according to Jim). Great. Now look at cost ratios:

- CSD.L: $5,000 / 96 GFLOP = $52.1/GFLOP
- nVidia: $1,600 / 100 GFLOP = $16/GFLOP

Ok, so the nVidia costs less to acquire. What about power cost? There is a 158 W difference between the two. In the US, power costs roughly $0.10/kWh, so this 158 W difference amounts to about $0.38/day in additional power cost. Over a 3-year life cycle, this adds about $415 to the cost.
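The arithmetic above can be checked with a quick back-of-the-envelope script. The prices, GFLOP ratings, and wattages are the figures quoted in the text, not measured numbers:

```python
# Back-of-the-envelope price/performance and power-cost comparison.
# All figures are the ones quoted in the post (list prices, vendor
# GFLOP claims, and nameplate wattages), not measurements.

csd_price, csd_gflops, csd_watts = 5000.0, 96.0, 12.0   # CSD.L
nv_price, nv_gflops, nv_watts = 1600.0, 100.0, 170.0    # nVidia

# Acquisition cost per GFLOP
csd_per_gflop = csd_price / csd_gflops   # ~$52.1/GFLOP
nv_per_gflop = nv_price / nv_gflops      # $16/GFLOP

# Extra power burned by the hotter part, at $0.10/kWh
watt_delta = nv_watts - csd_watts                  # 158 W
cost_per_day = watt_delta * 24 / 1000.0 * 0.10     # ~$0.38/day
three_year_power = cost_per_day * 365 * 3          # ~$415
```

The knee shows up immediately: a 3.3x premium per GFLOP for the lower-power part, against a power saving measured in pennies per day.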

But wait, you say: what about the added cooling cost? Since you dump that extra 158 W into the room, you have to remove it, and that will cost at least as much as the power itself, if not more. So let's assume we should triple the 3-year life-cycle power difference: 1x for the power difference, 2x for the cooling cost. This adds about $1,246 to the cost of the nVidia unit relative to the CSD.L unit. So now we are looking at roughly $2,846 for the nVidia part over 3 years, still well under the CSD.L's $5,000 acquisition price.
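The whole life-cycle comparison folds into one small function. This is a sketch under the assumptions above: power at $0.10/kWh and cooling costed at twice the raw power difference.

```python
POWER_RATE = 0.10         # $/kWh, US ballpark from the text
POWER_PLUS_COOLING = 3.0  # 1x for the power itself, 2x for removing the heat

def three_year_cost(price, extra_watts, years=3):
    """Acquisition price plus the power-and-cooling cost of the card's
    extra draw (relative to the cooler part) over its life cycle."""
    kwh = extra_watts * 24 * 365 * years / 1000.0
    return price + kwh * POWER_RATE * POWER_PLUS_COOLING

nv_total = three_year_cost(1600.0, 158.0)  # ~$2,846
csd_total = three_year_cost(5000.0, 0.0)   # $5,000; treated as the baseline
```

Even with cooling charged against it, the nVidia part comes out well ahead on total cost.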

Ok.

Now look at programming cost.

SDK for end users.

- CSD.L: SDK cost ~$6,000
- nVidia: SDK cost ~$0.00

Hmmmm.

Now look at the economies of scale: nVidia will be shipping on the order of 2 million to 40 million CUDA-enabled GPUs to its customers. CSD.L will be shipping on the order of 100 to 10,000 parts to customers.

Basically, this speed uptick won't matter much. Which of these two platforms will ISVs target en masse? And why?

CUDA was a masterstroke. It lowered barriers to using accelerators right away. There are some valid criticisms of it (you should see some of the code we are playing with), but at the end of the day, it is possible for mere mortals to pull the SDK, compile code, and deliver applications at a very low relative cost.

The CSD.L SDK sorta kinda works on Redhat. It didn't work on SuSE or Ubuntu.

It might help the gentle reader to know that we have a CSX600 PCI board in lab, as well as a CUDA card or 3. And an FPGA Bioboost.

Basically, unless the CSD.L SDK is free, I don't see CSD.L demand increasing. Add to this the limited size of the potential CSD.L installed base, and I don't see ISVs rushing to support it. I do see ISVs and customers giving the CUDA platform a serious set of kicks.

Some people argue that one technology platform is better than another. Unfortunately, those arguments don't matter to the market. The better mousetrap rarely ever wins. This is not to say CUDA is bad … it isn't. Nor is it saying that CSD.L isn't good, or that FPGAs aren't good. It's just that they ran into the perfect storm of nVidia making some very wise, very strategic moves.

Sort of like Dell and HP battling out furiously in the cluster market. The smaller vendors are collateral damage.

If business is a contact sport, HPC is a bloodbath.

CSD.L and SGI, both, need to find a defensible niche. Right now, they aren’t there.
