Our market is often inundated with buzzwords, and fads sweep through organizations looking for silver bullets for their very hard problems. Some of these problems are self-inflicted … some result from growth, or needed infrastructure change.
One of the biggest problems with HPC (and to a degree, storage) has been the high up-front costs to build what you need. You have to lay down capital to buy something, which may or may not have an ROI adequate to pay for it.
Many people have made very convincing cases for the ROI. It's really not much of a risk, and if used wisely, will provide returns significantly above the investment. That is, it usually pays off if you use it well.
But it still requires that large initial up-front investment. Clusters and higher-end SMPs have not (until recently) been inexpensive, though they are less expensive than what preceded them.
I’ve argued for decades that HPC goes down market … that stuff you could do 10 years ago on a massive super some distance away, you can probably do on your desktop today. The value flows down market, which opens the market up to a much wider audience.
The “As-A-Service” (AAS) model does a good job of lowering the up-front cost of obtaining this capability. No decision is without costs, though, and those costs are very much worth considering.
What I’ve been absolutely right about is that HPC evolves toward lower price (i.e. a lower barrier to entry) for users, and expands its horizons. There are many more HPC users now than in the past. Moreover, these users want an easier HPC model … setting up and managing clusters isn’t really it.
The AAS models allow end users to light up clusters and HPC resources, on demand. This is very good for some use cases, as Doug Eadline points out in his article.
Remember, we’ve had many buzzwords and fads run through this market, and fundamentally, you need to be careful of anyone and any organization pushing this as the solution to all problems. It’s not. It solves very specific needs, and does an OK job with them. It doesn’t solve other needs, and those are (and will likely remain) the central needs in the HPC realm going forward.
Moreover, the cost you pay on the back end, for not paying the cost up front, is something of a killer for the AAS business model. If you have high utilization of your resource (which HPC shops should have), then the cost of a local cluster, or private cloud, would be FAR lower than a public cloud … in some cases by more than an order of magnitude. I expect these costs to drop some, but not by a huge amount. And you shouldn’t begrudge these costs; they are part of the value of the service. How much capital would it take for you to spin up a 10k-core cluster for a single job that might last a day or two?
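To make the utilization argument concrete, here is a rough break-even sketch. Every price in it is a hypothetical placeholder, not a real vendor figure; the point is only the shape of the math — capex amortized over the hours you actually use versus a pay-as-you-go rate.

```python
# All numbers below are hypothetical placeholders, not real vendor pricing.
CLOUD_PER_CORE_HOUR = 0.10        # $/core-hour, on-demand (assumed)
OWNED_CAPEX_PER_CORE = 500.0      # $ up-front per core (assumed)
OWNED_OPEX_PER_CORE_HOUR = 0.01   # power/cooling/admin per core-hour (assumed)
LIFETIME_HOURS = 3 * 365 * 24     # amortize over 3 years

def cost_per_core_hour_owned(utilization):
    """Effective $/core-hour of an owned cluster at a given utilization."""
    used_hours = LIFETIME_HOURS * utilization
    return OWNED_CAPEX_PER_CORE / used_hours + OWNED_OPEX_PER_CORE_HOUR

# A busy HPC shop (90% utilization): owned wins by ~3x over the assumed cloud rate.
print(cost_per_core_hour_owned(0.90))  # ~ $0.031/core-hour vs $0.10
# A sporadic user (5% utilization): the cloud wins handily.
print(cost_per_core_hour_owned(0.05))  # ~ $0.39/core-hour vs $0.10
```

With these (made-up) inputs the crossover sits somewhere around 25-30% utilization; the exact point depends entirely on your real numbers, but the high-utilization conclusion is robust.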
If your need is more bursty and sporadic, an AAS-first posture does make some sense. If your need is sustained with infrequent bursts, maybe less so … though … you can do some very interesting things by spinning up an augmentation cluster … basically adding cores to your cluster by hosting them in the cloud. This was a business model we were trying to get funding to build right about the time EC2 launched (it wasn’t just accelerators we were playing with). We had as much success raising capital for that as we did with the accelerators. Obviously, our success rate for capital raises in the past appears to be an inverse function of how good our ideas are 🙁 . Hopefully that will change for the new work, as our idea is IMO quite a bit better than other stuff we’ve done before. But I can’t go into it now 🙂
All this said, the business model and pricing bits are not the only issues. One of the bigger (unsolved) issues is the cost and speed of the data pipe between the user and their computing hardware.
This is a non-trivial issue. It’s a very hard issue. Network cost scales (very) nonlinearly with performance. There is little … almost no … competition between networking providers here in the US, and pipe costs for reasonable bandwidths are huge.
100 Mb/s bandwidth (asymmetric, with 10-15 Mb/s up) costs ~$300/month. That is about 12.5 MB/s down, and about 1.25 MB/s up.
Let’s haul out our storage bandwidth wall now.
Time to transfer 1 TB = startup latency + (1 TB / pipe bandwidth)
For 100 Mb/s, B = 12.5 MB/s. So the height of the wall (for TB-sized transfers, which are becoming the common order of magnitude for our customers), assuming negligible startup latency, is:
1000 GB / (0.0125 GB/s) = 80,000 seconds to transfer 1 TB. One day is 86,400 seconds, so 1 TB/day at this rate is a rough rule of thumb.
But … this is download. Upload is 1/10th of this. So 1TB/10days.
What we need is Gigabit or higher speed, commonly everywhere. At Gigabit speed, 1TB takes a bit over 2 hours to transfer. And it needs to be symmetric.
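The bandwidth-wall arithmetic above is simple enough to capture in a few lines. A minimal sketch, using the same formula and the same pipe speeds as in the text:

```python
def transfer_time_seconds(size_gb, bandwidth_mb_s, startup_latency_s=0.0):
    """Time to move size_gb gigabytes over a pipe sustaining bandwidth_mb_s MB/s."""
    return startup_latency_s + (size_gb * 1000.0) / bandwidth_mb_s

# 1 TB over the 100 Mb/s (12.5 MB/s) downlink: 80,000 s, about a day.
print(transfer_time_seconds(1000, 12.5) / 3600)    # ~22.2 hours
# 1 TB over the 10 Mb/s (1.25 MB/s) uplink: ten times longer.
print(transfer_time_seconds(1000, 1.25) / 86400)   # ~9.3 days
# 1 TB over symmetric Gigabit (125 MB/s): the "bit over 2 hours" figure.
print(transfer_time_seconds(1000, 125) / 3600)     # ~2.2 hours
```

This ignores protocol overhead and assumes the pipe is fully yours, so real transfers will be somewhat slower; the orders of magnitude are what matter.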
Give users that kind of pipe, and those with large storage requirements for their HPC jobs (many HPC users have them) would look very closely at using remote resources. To point back to James Cuff’s post, storage as a service is hard these days, largely because of the bandwidth to the remote storage.
But that’s not all.
HPC users often need massively parallel access to storage. Look at figure 1 in Doug’s article. Not all users need this, but many/most do. A 100 MB/s pipe to disk for an HPC system is absolutely ludicrous. It’s too low by orders of magnitude. Multiple orders of magnitude.
But most clouds aren’t architected for HPC, or, as it turns out, for Big Data (which is something akin to the business version of HPC … more in a future post). Their major use cases are for web and mail servers. Backend stuff. Things that really don’t care if you can do 1GB/s to the disk.
Well, there are a few players doing something akin to HPC in the cloud. Deepak over at Amazon has been working with people doing some amazing things on the cycle engines. But they are still intrinsically bandwidth- and IOP-limited, and this is in part due to their design and implementation. Amazon introduced a new (expensive) high-IOP target that increases the real usable IOP rate by about a factor of 10.
Note: I purposely ignore marketing benchmarks, and only look at use cases. If you want to know the BS (Benchmark Significance) scale factor, recent experience has shown that the BS ratio for SSD is about 10, and for PCIe flash it’s about 2-4. Though some are far worse than others. We report what we measure, which often gets us “hey, you are coming in slower than the competition,” and our response is “let’s put that to a real use-case test, shall we?”
That’s for IOPS, BTW. For streaming, our experience is that (most) SSDs can now nearly fill the read tests, while sucking wind hard on the write tests. They are still far better than the high-IOP disk drives, but they aren’t anywhere near where the marketing numbers claim them to be.
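Applying that derating is a one-liner. A tiny sketch using the rough BS ratios above — these factors are empirical and use-case dependent, not anything a vendor publishes:

```python
def usable_iops(marketed_iops, bs_ratio):
    """Derate a marketing IOPS number by an empirical 'BS ratio'."""
    return marketed_iops / bs_ratio

# SSD: BS ratio ~10, so a "500k IOPS" claim is more like 50k in real use.
print(usable_iops(500_000, 10))
# PCIe flash: BS ratio ~2-4; take the middle of that range.
print(usable_iops(500_000, 3))
```

Crude, but it gets you a far more honest starting point for capacity planning than the data-sheet number.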
So you can take a few, shove ’em in a box, and they’ll give you something that doesn’t do a terrible job. Though if you want good performance, you need to go to the well-designed units. Higher-performance units mean you need to buy fewer of them to achieve specific performance goals. Which lowers your costs.
Unfortunately, it doesn’t appear that most of the HPC cloud types have gone this route; instead they have drunk the kool-aid where 1 GB/s is “fast” and “500k” IOPS (really 50k IOPS, but what’s a slight exaggeration amongst friends?).
The point is that most HPC clouds … really … aren’t. Ones that are designed for HPC include Sabalcore, Penguin on Demand, R-Systems, and a few others (CRL). On those designed for HPC, you will get good performance for many HPC apps. For large collections of virtualized web servers … mebbe not so much.
But as Doug noted, for a sizable fraction of users, this will work well. He posits 24% or so. I think that might be high, but of a reasonable order of magnitude.
One of the more painful aspects of the AAS models is customer experience. We’ve got lots of customers who are interested in high performance on their systems (or they wouldn’t be talking to us). Many start out telling us how some other system has better performance (at least on paper). A few months down the road of using it, they realize it really isn’t better. It’s much worse. We’ve had customers try this on various cloud vendors’ gear, telling us how much better it was, then come back later and say “let’s build a private cloud.”
Fads suck. They waste time, effort, and resources. They slow projects down. Unfortunately, fads are hype magnets. HPC as a service is not a fad. But with all the hype around AAS models, it’s pretty close to being tarnished by similar failures.
Worse are groups that push these fads upon others. This is very annoying to us. Same issue when we are hired to “field re-engineer” resources (e.g. the original folks didn’t really have a clue, left the customer in an awkward/broken state, and the customer needs to have stuff working). How do you explain to someone that the system they just spent months and very large sums of money on, really won’t work well for the intended mission? This is why fads suck. They blind people to accurate cost benefit and risk analyses.
HPC in the cloud can work for some use cases, and for some subset of customers. Not all customers, but some. It would be unwise to ignore it. Some subset of customers are unusually vulnerable to fads. We’ve seen this in multiple markets, and it’s caused me to remark on at least one occasion that there is no such thing as the X market (for suitable values of X), that it’s all fad/hype and disillusionment later on.
Note that we have a cloud bit. I am not talking smack about them. I am talking soberly about them. You have to be upfront and forthright about the risks.
AAS models increase operational availability risk due to many more moving parts between you and the computing system. They increase performance risks as you cannot guarantee you will always get the deterministic performance you might need. If you are going to work with them on business critical HPC and storage, you really … REALLY … need to replicate the system and data somewhere else, preferably with another provider. Say Amazon and Joyent, or Sabalcore and R-Systems. Or …
You have risks in terms of being able to move enough data as quickly as possible. You’ve got a hack around that in terms of Fedex-Net, the highest bandwidth net in the world.
You have to consider the potential cost of these risks before you embark on a critical project. Some may say “damn the torpedoes, full steam ahead.” Chances are, if they run head first into one of the many issues, they will regret that viewpoint.
AAS models provide good flexibility, and an “instant on” that you really can’t get by moving atoms. Are we at a crossover point that I alluded to in the past? No … not yet.
So let me conclude by noting that Google has done two things of late. First, they’ve set up a high-performance, reasonably priced network test bed in Kansas City. Second, they’ve opened up GCE.
Google gets it. I’d expect Amazon to be scrambling behind the scenes as well (if they aren’t already) to either buy a network provider or build their own.
So I expect the storage bandwidth wall (bandwidth of the smallest pipe) issue to go away in 2-5 years. They will still need very fast storage in their data centers … far … far … faster than what they have now. But once that barrier comes down, it opens the door to many new possibilities. And as I noted, Google appears to get this.