Rumor: Crosswalk is done

Robin at StorageMojo (great blog, read it religiously) says he has heard a rumor. Yeah, I should invoke the 24-hour rule. If you are at Crosswalk, and it is still going, please let him (and me) know.

Robin makes a great point there:

The High Performance Computing (HPC) focus is questionable. My experience is that folks who start with HPC stay there, because each HPC customer has so many interesting requirements that engineers love to solve and that will never make a dime for the company. Performance-driven customers ask for all kinds of enhancements that most commercial customers will never notice.

Well, ok, I take a little issue with it in that the performance-driven folks now realize that stability is critical to performance. Scaling up on the bleeding edge is a sure way to have lots of downtime. But apart from that minor nit-picking on my part, he is dead on the money.
HPC as a business is good. It is growing fast and hard. It is not small. It is just very … economically … challenging. Value is often dictated to be inversely related to the price of a system, with little consideration for applicability. Things like Linux alter the equation somewhat, in that price per node is now more or less fundamentally a cost-of-hardware-and-hardware-support issue rather than a TCO of OS + accouterments. And many of us have argued that the compute node hardware is fundamentally disposable. Not the infrastructure, mind you, just the nodes. Or at least with the right software atop this (think Tiburon).
HPC storage is growing rapidly. Again, not small. Again, somewhat challenging economically. Value is again a function of price, but it is also a function of “tell me how you are not going to lose my data” as well as “tell me how quickly you will get my data”. These are weighted a little differently than in your standard enterprise storage, but HPC shops have similar problems (home directory storage, results, notebooks, etc.). Fast storage in an HPC system? Local disk is (with very few exceptions) always fastest. You can create block storage and mete out bits to compute nodes on the fly. Or not. Shared scratch space can come from very fast servers. Again, use IB, 10GbE, or GbE. The problem is that the time cost of moving large volumes of data in clusters is large. There are some commercial tools (Exludus) which help with one aspect, and open source tools (a new xcp coming) which help with others.
The point being that in HPC, data motion is hard. Not so curiously, as HPC diffuses outward into many other areas, people are discovering that moving data is hard. Storing data requires some data motion. If you are going to go to all the trouble of creating the data, and moving it around, wouldn’t it be nice to store it safely as well?
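To put a rough number on that time cost, here is a back-of-the-envelope sketch in Python. The link names and nominal rates are my illustrative assumptions (sustained real-world throughput will be lower once protocol overhead and storage bottlenecks kick in), not figures from any particular vendor:

```python
# Back-of-the-envelope: how long does it take to move a big dataset
# across common cluster interconnects? Rates are nominal signaling
# rates converted to bytes/sec -- an optimistic upper bound.

TB = 1e12  # bytes (decimal terabyte)

# Assumed nominal bandwidths in bytes/sec
links = {
    "GbE":   1e9 / 8,    # ~125 MB/s
    "10GbE": 10e9 / 8,   # ~1.25 GB/s
    "IB":    16e9 / 8,   # ~2 GB/s (DDR-era data rate, illustrative)
}

def transfer_hours(size_bytes, bw_bytes_per_s):
    """Hours to move size_bytes at a sustained bandwidth of bw_bytes_per_s."""
    return size_bytes / bw_bytes_per_s / 3600.0

for name, bw in links.items():
    print(f"{name:>6}: {transfer_hours(10 * TB, bw):6.1f} hours for 10 TB")
```

Even at these best-case rates, 10 TB over plain GbE is the better part of a day, which is why scratch placement and tools that minimize data motion matter so much.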
This is where HPC storage startups want to play. And so do some of the bigger fish: EMC, NetApp, …
But as on the computing side of HPC, customers expect storage pricing to drop while capabilities grow. Which puts pressure on the storage folks. Lots of pressure.
Hopefully they aren’t done. I have heard numerous rumors about others going under or laying off people in the last two weeks. Some have proven true (Open Source Servers); others have not.