Irresistible force? Meet immovable object …
By joe
- 5 minutes read - 856 words

There is a strong push (well, at least the articles tell us so, and you know, it's not like they are ever wrong … nosiree) to move computing into a cloud. This is sometimes a good idea; there are specific usage profiles which fit the cloud paradigm well. Quite a few profiles, actually. But there are some speed bumps. Literally.

Bandwidth has been, and will be, an issue for the foreseeable future. Clouds have limited bandwidth in and out. End users have limited bandwidth in and out. You can "solve" this with FedEx-Net or UPS-Net: encrypt disks in transit, require multi-factor decryption keys, and ship a couple of terabytes back and forth in a handsome carrying case.

Clouds are, if you believe the press and articles out there, the greatest thing since sliced bread. They are the future. They are … an irresistible force. This may be. But apart from bandwidth, they are running hard into immovable objects.
Software licenses.

If you are lucky enough to use open source for your application, your cost of elasticity is merely the marginal cost of adding the next chunk of hardware, which the cloud providers do a very good job of making as low as possible for you. If you use ISV software with per-seat licensing, adding the (N+1)th CPU adds a (usually very significant) cost. So your cost of elasticity is "merely" the marginal cost of adding additional permanent licenses. Which completely dwarfs the marginal cost of adding the next chunk of hardware. Or buying hardware, for that matter.

This is in part because the current business models of the ISVs revolve around selling seats. Which provides a very high barrier to usage of the tool, limits the tool's usage, and provides some non-elastic revenue for the ISV.

Suppose they wanted to adopt a utilization cost model. Demark time in quanta of hours, starting with 1 year being 8760 hours. Use a utilization fraction U, which represents the fraction of the year that the license is in use on average, and L as the license cost per CPU per year. Your cost is then L/(8760U) on a per-utilized-hour basis.

Here is an example. With U = 25% (1/4 of the time, the customers utilize the license) and L = $3000 (cost per CPU per year), your utilized cost per hour is $1.37. If you want to charge a premium for this, apply a margin M, so that your billing rate per CPU-hour is (1+M) times the utilized cost per hour. For a 100% margin (being greedy here), this is $2.74/CPU-hour. So a 16-CPU run, run for 12 hours, would cost $526.08 plus the cost of hardware access. One of our partners provides bare-metal systems with 4 CPUs at $0.50/hour, so 16 CPUs would be $2/hour; for 12 hours, this would be $24 of hardware rental cost.

So if you sell 10000 CPU licenses per year, for $30M in license revenue, how many CPU-hours would you need to replace this? For the above example, this would be 10.9M CPU-hours.
Now assume that each small group does 3 runs per week of the size indicated above. That's 3 × 192 CPU-hours = 576 CPU-hours per group per week, or 28800 CPU-hours per group per year. This is interesting, as you only need about 380 customers of this size to completely replace that revenue. And you likely have more than 380 customers. So you get the idea. I have a nice little spreadsheet that goes through this model.

But replacement is not the only model. Augmentation is a good idea: enable people to elastically enhance their simulation capability, on the fly, paying for what they use. This lowers the access barrier, enabling many more users to access the code. It enables users to occasionally scale usage up. Even if they can't afford the capital for the hardware, they can usually afford the software cost as an expense for a run.

But this requires a sea change in the way companies bill for their product. And one thing we see is that companies with business models that "work" are reluctant to change. Even if they don't grasp that what they perceive to be "working" may not be what their customers perceive to be working. They are the immovable object in this equation.

I do suspect that the smart organizations are going to figure out how to do the cloud model. And we would be happy to speak with anyone who wants to pilot test this, on the ISV side as well as on the customer side. It's those organizations who adapt to this model who are going to grow their businesses something fierce. Likely at the expense of those who don't.

I am not playing cloud booster here; I am pointing out that cloud computing represents a new way for HPC users to acquire processing capability, in an elastic manner. This allows for easier decisions, and better control over their costs, with less up-front expenditure on capital equipment. The vendors who understand that this represents a huge opportunity, and who choose to act upon it, will be the ones to thrive.
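The revenue-replacement arithmetic can be checked the same way. The 28800 CPU-hours/year figure from the text implies 50 billable weeks per group, which is the assumption made explicit here; the $2.74 rate and $30M target carry over from the earlier example:

```python
# How many customers of the size described would replace $30M of
# seat-license revenue? Assumptions from the text: $2.74/CPU-hour,
# 3 runs/week of 16 CPUs x 12 hours, and (implied) 50 weeks/year.

RATE = 2.74                       # billed $/CPU-hour
cpu_hours_per_run = 16 * 12       # 192 CPU-hours per run
weekly = 3 * cpu_hours_per_run    # 576 CPU-hours per group per week
annual = weekly * 50              # 28800 CPU-hours per group per year

target_revenue = 30_000_000       # 10000 seats x $3000/seat/year
total_cpu_hours = target_revenue / RATE            # total hours to sell
customers_needed = target_revenue / (annual * RATE)

print(round(total_cpu_hours / 1e6, 1))   # ~10.9 (million CPU-hours)
print(round(customers_needed))           # ~380 customers
```

This confirms the figures in the text: about 10.9M CPU-hours overall, or roughly 380 groups running at this modest cadence.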