Or the businesses that have grown dependent upon simulation? Well, not all of them are doing well. Ford’s troubles are fairly well known, but those are a business-conditions issue, not a supercomputing issue. DreamWorks and Boeing aren’t in trouble; they are doing well, as are many of the others who attended this meeting. All of them indicate that they need more computing power, and more software that can take advantage of that power.
If you ask some of the attendees, you may hear pining for the vector machines of old. That architecture has advantages for some problems, and it is at least moderately amusing, or somewhat ironic, that years after microprocessor-based systems effectively displaced vector machines in the HPC marketplace, microprocessors are starting to sport multiple parallel functional units: tightly coupled SIMD SSE “vector” registers, and loosely coupled “massively parallel” multi-cores.
Their concerns about software are valid. Why build a 1024-processor system if you will only be able to use 16 of those processors? Why use such a system if it is simply not cost effective to use so many processors? Parallel efficiency has direct bottom-line impacts: if the resource is used inefficiently, it is less productive, and it may bottleneck critical processes.
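The 1024-versus-16 question above is Amdahl’s law in action. A minimal sketch, with an illustrative serial fraction (the post does not cite specific numbers):

```python
# Amdahl's law: speedup on n processors when a fraction p of the work
# parallelizes perfectly. p = 0.95 is an assumed, illustrative value.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (16, 1024):
    s = speedup(0.95, n)
    print(f"{n:5d} processors: speedup {s:6.1f}x, efficiency {s / n:.1%}")
```

With even 5% serial work, 1024 processors deliver under 20x the speedup of one, at roughly 2% efficiency; the extra 1008 processors buy little, which is exactly why unscalable software makes large systems hard to justify.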
Unfortunately, the people who hold the purse strings in companies and governments are simply not willing to pay the costs of scalable software. Remember, the software vendors will follow the markets they are in, or have created. If their customers are under pressure to buy smaller systems and skip the larger ones, then the software vendors will follow them onto those smaller systems. Grumbling about the software vendors isn’t going to solve the problem.
The issue is the cost of access to large-cycle systems. Assume pricing around $1000 per core for reasonable systems, add in about $1000 per port of low-latency interconnect per 4 cores, and we are talking about $5000 per 4-core node with a low-latency fabric. You want to get to 1000 processors? That’s 250 nodes, or roughly $1.25M of capital outlay. Sure, you can lease it and get it onto your expense budget, paying for the financing of the depreciation.
Unless you are Google or Microsoft, $1.25M is a rather large hurdle to clear. It is a barrier you must cross before you can start working. And then you have the infrastructure costs to pay for: 40 of these nodes consume about 15 kW of power, so 250 of them consume about 94 kW. At about 3 tons of air conditioning per rack, the six and a quarter racks will require about 20 tons of AC.
All of those raise the price of the system. Now, an easy-to-install, easy-to-manage OS can keep your costs down. Or you can pay $469/node and add about $0.12M, roughly 10%, to the cost of the system. Your choice.
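The back-of-the-envelope arithmetic above can be collected in one place. All figures are the post’s own assumptions ($1000/core, $1000/port of interconnect per 4-core node, 15 kW per 40 nodes, $469/node for the OS):

```python
# Cost model for a 1000-core cluster, using the figures quoted in the post.
CORES = 1000
CORES_PER_NODE = 4
nodes = CORES // CORES_PER_NODE                   # 250 nodes
node_cost = 1000 * CORES_PER_NODE + 1000          # $5000: cores + fabric port
hardware = nodes * node_cost                      # $1.25M capital outlay
power_kw = nodes * (15.0 / 40)                    # ~94 kW (15 kW per 40 nodes)
os_cost = nodes * 469                             # ~$117k OS licensing
print(f"{nodes} nodes, ${hardware:,} hardware, "
      f"{power_kw:.0f} kW, ${os_cost:,} OS ({100 * os_cost / hardware:.0f}%)")
```

The OS line item alone adds roughly a tenth of the hardware cost, which is the point: every added cost raises the barrier to entry.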
Ok, back to the topic. All of these things raise the barrier to using the resource, and you have to pay it before you can start.
Unfortunately, end users’ needs are not necessarily well coupled to their budgets. If they were, large supercomputers would be the norm, not smaller ones.
There is a perpetual drive to push costs down. This is a good thing in general, but as with all change, there are consequences: smaller systems, made of less expensive components, which are able to do more than their predecessors.
These changes, in aggregate, drive computing toward different models over time: different technological models, as vector machines gave way to the super-micros, which gave way to commodity processors. We have a pretty good idea of what comes next on the technological roadmap.
The business models will change as well. The acquisition barrier will need to drop for supercomputing to be more widely adopted. We believe this will happen with the right business model, and we think we know what that model is.
These are indeed interesting times. Not in the sense of the Chinese curse, but genuinely interesting times in which change will be rapid, enabling, and hopefully not disruptive to end users. End users have indicated a monotonically increasing desire to consume supercomputing resources. It is the coupling of that desire, the ability to pay for what you need, and access to resources of the appropriate type that will bring supercomputing to the masses. That doesn’t mean everyone will buy big machines; something else looks far more likely.