Itanic sinks at SGI

This was a long time coming. Neither the previous management, prior to their sinking in April 2009, nor the management teams before that … going back at least 10 years … would ever have done this.
It's a shame. It should have happened long, long ago.

Well, apparently SGI is moving on as well. Noer says they will continue to sell the 4700, but the next generation shared memory system will be based on the new Ultraviolet architecture. That design will use Intel Nehalem EX chips along with the next generation NUMAlink interconnect. Presumably this means the future “Tukwila” quad-core Itanium chips will never find a home at SGI. Although the 4700 line will continue to be offered for some period of time, the idea is to eventually migrate all the current users to the new architecture. “The intention is that Ultraviolet is the future of the shared memory systems line,” says Noer.

Basically Itanium is now legacy at SGI.
I remember asking at some engineering/sales meeting what the plan B was. I remember the management blinking rapidly, but not giving an answer. That was 1999 or so.
This is a new SGI. Not the company I spent 6 years at. It appears they now have a management with a clue. This is a good thing.
They still need to pare down the burn rate, and adjust many other things, in order to compete with the Dells and HPs of the world.

3 thoughts on “Itanic sinks at SGI”

  1. At an SGI “product roadmap” (non-)disclosure about 3 years ago, x86 SMP with cNUMA at this time was on the roadmap. I asked why they weren’t doing it sooner and was told that the necessary support on the chip would not be available from Intel for some years — which I guess means now. It was clear at that time that they really wanted very badly to move to this architecture. Of course, the demise of Itanium wasn’t on the roadmap. I guess the question now is whether there will be a cost/performance advantage of a fancy hardware interconnect over vSMP, which in fact SGI was among the first to (re)market, but a lot of the potential SMP market is outside of HPC.

  2. P.S. Great headline. If all this HPC stuff goes perflooie, perhaps you could apply for a job on the N. Y. Post….

  3. @Peter
    Gee thanks (on the headline part). We are focusing upon the HPC problems we see going forward, which are decidedly in storage and desktop systems. I have a strong belief that there is a coming bifurcation in the market, between ever more powerful desktops where you can do most of your work, and cloud-like devices upon which you can run larger jobs, with instant on clusters. Storage is one of those things that many people talk about, but few do right. It is not easy to get high performance data flow to and from disk. And as data sets grow, this gets more and more important.
    HPC's been moving downmarket for as long as I have been in it. I don't see this changing, and desktops are the next obvious systems. vSMP in servers allows you to build larger memory/core-count systems as you need them, without paying the very high price for them up-front.
    Oddly enough, the Opteron has support in its chipset for things similar to the Stanford DASH design (the Origin/Altix memory/connection architecture). That was based upon the DEC Alpha connection network technology AMD inherited when they acquired that team from DEC. Unfortunately they had a bug in the implementation, so it never scaled beyond 8-way, and there was a serious performance hit at 8-way for memory-intensive codes. The caching protocols had some … er … performance issues.
    We have an Itanium in our lab. Hasn’t been turned on in 6 months (you are welcome to it for the cost of shipping).
    As for HPC going perflooie? I hope not. If so, we have significant growth outside of traditional HPC anyway. Turns out many people need very high performance tightly coupled storage systems, and our pricing is quite aggressive. We still run into the “but you are not ‘X’” syndrome, but, as one of our partners put it, when the customers see “the order(s) of magnitude faster disk, they rapidly stop objecting.”

Comments are closed.