I guess this means that it is ending 15 years early?

From this article one gets the impression that Windows will not be supporting Itanium anymore.
Way back during the initial marketing onslaught for Itanium, it was said to be Intel's architecture for the next 25 years. That was a decade ago. It seems to be losing software support fairly rapidly, though. It's hard to see this lasting another 15 years … let alone another 5.
Linux still has Itanium support for now, but there are fewer and fewer users of it out there. Important subsystems (like accelerated video drivers) aren't being built for it anymore … there is no real market for them on Itanium.
We still have an Itanium2 box in the lab. We haven't turned it on in more than a year. We no longer have any customers with these systems. Not that we have lost the customers; they have simply thrown those systems away.

Rethinking how we build and invest in partnerships

One of the things smaller companies want to do is build alliances that are mutually beneficial … be they reseller relationships, or partnerships where the sum of the two partners' offerings provides significant, tangible benefits for customers. The point is to enhance offerings and provide more value to customers. These need to be two-way streets … they can't be a one-way flow if they are to have real value.
We've built some partnerships over the past few years, some very good, some not as good, ranging from one-way "tell us what you will do for us" scenarios to what we thought were bilateral efforts at promoting mutual business.
We've had some folks work with us in a reseller mode … we offer very good, fast and reliable systems, and aggressive … over-the-top … support to our customers.
Our partnerships have varied in quality. Some are mutual, with opportunities flowing both ways … some are ad hoc, where a middleman is needed thanks to the way the customer has restricted how they do business … some are opportunistic, where we see ways to pursue business with a potential partner.
It's that last bit I want to talk about.

Cluster file systems views

We’ve had a chance to do a compare/contrast in recent months between GlusterFS and Lustre. Way back in the 1.4 Lustre time period, we helped a customer get up and going with it. I seem to remember thinking that this was simply not something I felt comfortable leaving at a customer site without a dedicated file system engineer monitoring it/dealing with it 24×7. Seriously, it needed lots of hand-holding then.
We have a recent 1.8.2 installation … and I have the same indelible impression … a concern about whether or not the customer has the interest and manpower to really maintain this. Lustre is not for the faint of heart. It requires serious over-engineering of resources to prevent some of its myriad issues from leaping up and interrupting you (yes, we should be able to tune these issues away, but …). If you don't have the luxury of over-engineering these resources, you'd better get ready to dedicate a person or more. It can easily become a full-time job for someone.
I don’t consider that a benefit, and I don’t see this problem improving soon.

Did distributed memory really win?

About a decade or more ago, there was a "fight," if you will, over the future of application-level programming interfaces for high performance computing systems. This fight was between proponents of SMP (symmetric multiprocessing) and shared memory systems in general, and DMP shared-nothing approaches.
In the ensuing years, several important factors influenced the trajectory of application development. Shared memory models are generally easier to program. That is, it's not hard to create something that operates reasonably well in parallel. But it is still hard to get great (near theoretical maximum) performance out of these systems. And, back in that day, shared memory buses for single-core CPUs became more expensive as you added more CPUs to them. That is, going from 4 processors to 8 processors involved a great deal more wire, motherboard lands, chipset support, and other things like this.
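To make the "easier to program" point concrete, here is a minimal shared-memory sketch (my own illustration, not anything from the original debate) using an OpenMP parallel loop over a simple vector update. One pragma gets you reasonable parallelism; the hard part is squeezing out near-peak performance.

/* Minimal shared-memory sketch: all threads see the same arrays,
   so one OpenMP directive is enough to parallelize the loop.
   Build with something like: gcc -O2 -fopenmp saxpy.c -o saxpy */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double x[N], y[N];
    const double a = 2.0;

    for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    /* Easy to get "reasonably good" parallelism; getting near the
       theoretical maximum (NUMA placement, memory bandwidth, false
       sharing) is where the real work is. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}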
DMP (distributed memory parallel), shared-nothing approaches were and are harder to program. This hasn't changed. MPI exists and it works. But it is quite easy to get yourself into trouble with it. MPI isn't terribly complex, but it allows complex interactions to be created, and behaviors to emerge. These behaviors can have performance impacts, and usually not the ones you want.
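For comparison, a minimal MPI sketch of the same flavor of computation (again my own illustration): each rank owns its slice of the data, nothing is shared, and the only data movement is an explicit collective at the end.

/* Minimal DMP / shared-nothing sketch: each MPI rank computes on its
   own chunk, and the only communication is an explicit reduction.
   Build/run (typical, varies by MPI stack):
     mpicc -O2 partial_sum.c -o partial_sum && mpirun -np 4 ./partial_sum */
#include <stdio.h>
#include <mpi.h>

#define N 1000000L

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank works only on its own slice of the index space. */
    long chunk = N / size;
    long start = (long)rank * chunk;
    long end   = (rank == size - 1) ? N : start + chunk;

    double local = 0.0;
    for (long i = start; i < end; i++)
        local += 2.0 * (double)i;

    /* Explicit communication: combine the partial sums on rank 0. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f\n", total);

    MPI_Finalize();
    return 0;
}

Nothing in the code cares whether the ranks live on one SMP node or are spread across many distributed nodes, which is exactly the observation that follows.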
In the early 2000s, people realized that they could write code for DMP, and it would run just as nicely on SMP. So … to a degree, the game is over. Just write MPI and be done with it.
Sort of.
