We need a systems support engineer. Have a look at our career page for more info.
I had heard that there were some … er … issues with the latest round of Microsoft patches. I think I have a backup of this VM, so I can roll back the changes. Sheesh. And yes, bringing up repair does in fact hang it hard. [sigh]
Stent is out, kidney stones should be gone (modulo lithotripsy). Anesthetic really took it out of me on Monday, and I’d argue, on Tuesday. Happily, after Monday night, I was off of pain meds. I don’t like stuff that messes with my head, and what they gave me definitely messed with my head. I had …
one siCluster, one specialty computing system. Added 1/8th of a petabyte to our shipped storage. I do apologize, we’ve been busy. And, by all indications, we haven’t seen nuthin yet. Lots of business queued up for Q2, including several siClusters, several specialist computing clusters, and a number of deskside supers in the CX1 and Pegasus …
From this article one gets the impression that Windows will not be supporting Itanium anymore.
Way back during the initial marketing onslaught of Itanium, it was said to be Intel’s architecture for the next 25 years. That was a decade ago. It seems to be losing software support fairly rapidly, though. It’s hard to see this lasting another 15 years … let alone 5 years.
Linux still has Itanium support for now, but fewer and fewer users are out there. Important subsystems (like accelerated video drivers) aren’t being built for it anymore … there is no real market for them on Itanium.
We still have an Itanium2 box in the lab. Haven’t turned it on in more than a year. We no longer have any customers with these systems. Not that we have lost customers, just that the customers have thrown away these systems.
One of the things smaller companies want to do is to build alliances that are mutually beneficial … be they reseller relationships, or partnerships where the sum of the two partners offerings provides significant tangible benefits for customers. Enhance offerings, provide more value to customers. These need to be two way streets … they can’t be a one way flow, if they are to have real value.
We’ve built some partnerships over the past few years, some very good, some not as good, ranging from one-way “tell us what you will do for us” scenarios to what we thought were bilateral efforts at promoting mutual business.
We’ve had some folks work with us in a reseller mode … we offer very good, fast/reliable systems, and offer aggressive … over the top … support to our customers.
Our partnerships have varied in quality. Some are mutual, opportunities flow both ways … some are ad-hoc … a middleman is needed thanks to the way the customer has restricted themselves to doing business … some are opportunistic … we see ways to pursue business with a potential partner.
It’s that last bit I want to talk about.
We’ve had a chance to do a compare/contrast in recent months between GlusterFS and Lustre. Way back in the 1.4 Lustre time period, we helped a customer get up and going with it. I seem to remember thinking that this was simply not something I felt comfortable leaving at a customer site without a dedicated file system engineer monitoring it/dealing with it 24×7. Seriously, it needed lots of hand-holding then.
We have a recent 1.8.2 installation … and I have the same indelible impression … concern over whether or not the customer has the interest/man-power to really maintain this. Lustre is not for the faint of heart. It requires serious over-engineering of resources in order to prevent some of its myriad issues from leaping up and interrupting you (yeah, we should be able to tune these issues away, but …). If you don’t have the luxury of over-engineering these resources, you’d better get ready to dedicate a person or more. It can easily become a full-time job for someone.
I don’t consider that a benefit, and I don’t see this problem improving soon.
About a decade or more ago, there was a “fight” if you will, for the future of high performance computing systems application level programming interfaces. This fight was between proponents of SMP and shared memory systems in general, and DMP shared-nothing approaches.
In the ensuing years, several important factors influenced the trajectory of application development. Shared memory models are generally easier to program. That is, it’s not hard to create something that operates reasonably well in parallel. But it is still hard to get great (near theoretical maximum) performance out of these systems. And, back in that day, shared memory buses, for single core CPUs, became more expensive as you added more CPUs to them. That is, going from 4 processors to 8 processors involved a great deal more wire, motherboard lands, chipset support, and other things like this.
DMP (distributed memory parallel) shared-nothing approaches were and are harder to program. This hasn’t changed. MPI exists and it works. But it is quite easy to get yourself into trouble with it. MPI itself isn’t terribly complex, but it allows complex interactions to be created, and behaviors to emerge. These behaviors can have performance impacts, usually not the ones you want.
In the early 2000s, people realized that they could write code for DMP, and it would run just as nicely on SMP. So … to a degree, the game was over. Just write MPI and be done with it.
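To make that concrete: the shared-nothing style means each rank owns its own data and exchanges it only by explicit messages, and that same code runs fine on a single shared-memory box. Real MPI needs a cluster runtime, so here is a minimal sketch of the pattern in plain Python using `multiprocessing` pipes as a stand-in for send/recv … the function names (`_worker`, `parallel_sum`) are my own, purely illustrative, not anything from the post:

```python
# Sketch of the DMP shared-nothing style: each "rank" holds only its
# own chunk of the data and communicates by explicit messages, the
# discipline MPI enforces. Runs unchanged on one shared-memory machine.
from multiprocessing import Process, Pipe

def _worker(rank, conn, chunk):
    # Each rank computes on its private data, then sends the result
    # back over a pipe -- loosely analogous to an MPI_Send to rank 0.
    partial = sum(chunk)
    conn.send((rank, partial))
    conn.close()

def parallel_sum(data, nprocs=4):
    # Scatter: round-robin the data across nprocs private chunks.
    chunks = [data[i::nprocs] for i in range(nprocs)]
    procs, parents = [], []
    for rank in range(nprocs):
        parent, child = Pipe()
        p = Process(target=_worker, args=(rank, child, chunks[rank]))
        p.start()
        procs.append(p)
        parents.append(parent)
    # Gather/reduce: collect each rank's partial sum, loosely
    # analogous to an MPI_Reduce onto rank 0.
    total = sum(parent.recv()[1] for parent in parents)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum(list(range(100))))  # 4950
```

Because nothing is shared, the same send/receive structure works whether the ranks sit on one SMP node or are spread across a cluster … which is exactly why “just write MPI” won.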