Plus ca change, plus c'est la meme chose

The more things change, the more they stay the same. My former employer SGI (I left on good terms, between layoffs, a decade ago next month) has layoffs coming.
This is a tough environment, folks, a very tough environment. We pulled out nearly 12% revenue growth in it. SGI posted a profit, but if you click through to the underlying article (hit InsideHPC first, though), you'll see some interesting analysis. First, on the size of the layoff.

If you assume a fully burdened cost of an employee to be roughly $150k (that includes benefits, salary, everything), then a 90-day severance could be somewhere in the neighborhood of $50k ($150k / 4 = $37.5k, then add a bit for padding and insurance). At $6.6 million, that's over 130 employees.
Looking back at their final quarterly filing for 2010, you can get some rough numbers on their employees (search for "headcount"):
Manufacturing: 581 people
R&D: 278
Sales & Marketing: 247
Administrative: 193
Total: Around 1300
That means they just axed 10% of the company. That's more than a little trimming; that's a pretty substantial reorganization.
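The back-of-envelope math above can be sketched out in a few lines. This is just the post's own assumptions made explicit: a ~$150k fully burdened annual cost, a 90-day (one quarter) severance padded up to ~$50k, and the reported $6.6M charge.

```python
# Assumptions from the post, not hard data:
BURDENED_COST = 150_000      # $/employee/year, fully burdened (salary + benefits)
CHARGE = 6_600_000           # reported restructuring charge, $
TOTAL_HEADCOUNT = 1300       # rough total from the 2010 quarterly filing

# 90 days is about a quarter of a year.
base_severance = BURDENED_COST / 4          # $37,500
padded_severance = 50_000                   # round up for padding and insurance

employees_cut = CHARGE / padded_severance   # ~132 employees
fraction_cut = employees_cut / TOTAL_HEADCOUNT

print(f"~{employees_cut:.0f} employees, ~{fraction_cut:.0%} of the company")
```

Dividing the charge by the padded per-head severance gives roughly 130+ people, about 10% of the ~1,300 headcount, which is where the "substantial reorganization" conclusion comes from.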

The assumption on burdened costs may be wrong, though, and it's possible projects are being shuttered or written off. This is speculation, but the article's author is probably in the right order of magnitude; I'd even argue the first digit may be correct to within a factor of 2. His analysis isn't a function of the first digit, but of comparing the order of magnitude to other known quantities. The subsequent Register article does suggest that this analysis is correct.
Then on UVs …

"I've heard a lot of skepticism over SGI's recent amazing quarterly report, given the displeasure I've heard from several individuals over the results of the UltraViolet product line. I'm starting to think that, even with the great PR they got from that report, it was largely smoke and mirrors over the change in accounting and rushing a few major sales just under the wire at the end of the year. I'll be surprised if they can do it again this year."

This is interesting. I won’t say why right now, but let me ask a simple question.
Is the UV interesting to the end user base? That is, is there enough real interest in large-memory, large-core-count SMP to merit a market?
This also raises other questions, such as: are the issues with the UV about performance, pricing, or something else?
Color me curious; I'd really like to know. If you have one, or know of one, and are open to speaking about it, please contact me offline (or post here if you wish). This isn't opposition research; this is pure market curiosity. Is there really enough interest in this that it makes sense for someone to build it? Or is it basically one large customer wanting one or two, after which you have to figure out how to sell them to a generally unreceptive world?
Understand, I am a huge fan of shared memory. Shared memory is impossible on clusters without ScaleMP's vSMP or, now, the Symmetric Computing package (though I don't know much of what's new with that one; I have to look into it).
GPUs and accelerators don't currently play well with these mechanisms; hopefully this will change. But large shared memory machines can enable certain types of processing that are hard to do on distributed memory. Then again, distributed memory code works really well on shared memory thanks to its data locality (ignoring a few pathological cases).
GPUs are in heavy demand. Are large SMPs? Is there a real need there? Is there a real desire? What are the driving points behind this? Would you, the average reader of HPC stuff, be willing to pay for such a beast?
I am, to a degree, trying to assess the total addressable market for these things, in part, by assessing what people think about them. From there I can think further on the issues of large SMP markets, and whether or not the UV really can survive, or even speculate what a competitor to the UV would look like.
And this brings up another question. Given that there is one UV, would the market actually entertain a second, non-SGI-built, UV-like machine? If it were "cheaper" or "faster," would this matter? Or are expense and performance not the issues we should be thinking about when we speculate on competition?
So is this a large market SGI has to itself, or a small market SGI is trying to completely dominate?
This all said, SGI is going to cut staff. And projects. Pulling revenue in at the end of a quarter or year can be good and bad. It means your next quarter could be bumpy.
And I can tell you that in this market, when pipelines fluctuate wildly, it introduces more risk to the business.

1 thought on “Plus ca change, plus c'est la meme chose”

  1. I’d also like to better understand the future of SMP.
    We used to see big SMP systems in pharma, typically on the biology side, to do, say, large BLAST searches keeping the whole DB in core. I'm not that well hooked up to that side now, so I don't know if those folks are still using SMP.
    The other Q. is: where is vSMP adequate and where not? I imagine that it’s not as good as hardware SMP for high-performance applications. But consider environments where you want to support a lot of interactive logins. For instance, a large development shop where you want to maintain a central facility for both automated and interactive building and debugging. This to me sounds like a good SMP use case, but I suspect vSMP would be fine here. The issue is load balancing, not performance.
    Then, of course, there are legacy performance apps which are designed to be ultra scalable on large SMP boxes. If you’re in a place that has such apps, and such boxes, isn’t it likely that you would write your next app to take advantage of them? It’s easier to program for SMP than for distributed memory, most would agree.
    But you yourself said, “Understand, I am a huge fan of shared memory.” Perhaps if you told us why, you would answer your own question — not to mention mine. 😉
