Robin at Storagemojo tears into the latest buzzword-enabled marketing phrase, cloud computing. Robin’s thesis is that there are impediments to moving to the cloud, those being bandwidth and the “non-magic” nature of Google’s infrastructure.
I don’t agree with his ascribing blame for the bandwidth issue to Cisco. It really is not their issue. Bandwidth providers in the US are the primary culprits … we have been behind the curve for quite some time in terms of bandwidth delivered to businesses and homes. The reason for this is the same as the reason for the RIAA suits against their end-user base … protecting a business model and an existing investment by avoiding newer technology that would require significant capital expenditure.
Like it or not, bandwidth costs money. We can lay fibre to every business and home. Now everyone has 100 Mb to their demarc. Great. Now they start using it. Let’s ask the backbone people how much infrastructure they have to handle this … Remember, infrastructure costs money.
Understand also, I am not defending their decisions/actions. I would like 100 Mbit to my business/house. Affordable 100 Mb. It is rightly viewed as transformative, and this is why the SaaS folks are all salivating over the prospect of remote delivery of “stuff”. Their model absolutely depends upon bandwidth being cheap/readily available.
But the bandwidth is being provided by private companies on the backbone, and the last thing they want to do is to drive down the revenue their (fairly sizable) capital expenditure is generating. So the likelihood of “cheap” bandwidth emerging broadly is low unless a competitive business model emerges that makes this sensible for one of the larger players. AT&T and Verizon are experimenting with this. The technology works; they are simply trying it out to see if the business model works. That is their right.
We don’t have to like it. But it is most assuredly not Cisco’s issue. So I disagree with Robin on that point.
On the Google non-magic machines, well, I think he is touching on only one side of the issue.
There is a parallel development which shows why this “could” work (again, with enough bandwidth) for companies. It could massively reduce capital equipment costs, while massively increasing recurring rental/expense costs. A net wash or, more likely, a net loss overall, but it gets rid of that capital equipment, which makes CFOs very happy.
Remember, our (US) accounting rules make it very expensive for a business to buy things (the purchase goes on the balance sheet as a depreciating asset, and you have to write off the cost over N years, not right away), versus “renting” things (the rent hits the books as an expense, which is written off immediately). This effectively forces many CFOs, looking to make the company look financially better, to forbid capital equipment purchases, or at least make them hard.
Big computers, which the business needs, are capital equipment. Oops.
So there is a built in bias towards renting things.
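To make that bias concrete, here is a small sketch of the year-one accounting difference. The prices, the straight-line depreciation schedule, and the rental rate are all made-up illustrative numbers, not anything from a real filing:

```python
# Illustrative only: year-one expense treatment of buying vs. renting gear.
# All numbers are hypothetical; real depreciation schedules vary.

purchase_price = 100_000          # capital purchase, written off over N years
depreciation_years = 5
annual_rent = 30_000              # renting the same capability, expensed as paid

# Straight-line depreciation: only 1/N of the purchase is expensed in year one;
# the rest sits on the balance sheet as a depreciating asset.
buy_year1_expense = purchase_price / depreciation_years   # 20,000
rent_year1_expense = annual_rent                          # 30,000, fully expensed

print(f"buy:  year-1 expense = {buy_year1_expense:,.0f}, "
      f"asset on balance sheet = {purchase_price - buy_year1_expense:,.0f}")
print(f"rent: year-1 expense = {rent_year1_expense:,.0f}, "
      f"asset on balance sheet = 0")
```

The rent is fully deductible in the year it is paid and leaves nothing sitting on the balance sheet, which is exactly the profile the CFO wants, even when the total spend is higher.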
Last decade, and early into this decade, this was accomplished by leasing machines. You leased the hardware rather than buying it, and though it actually cost you *more* over the term of the lease … you are paying the cost of money, plus the cost minus residual, plus the leasing company’s profit … it all came off the books that year as an expense. This doesn’t work for commodity machines … ok, it can work, as long as you don’t mind the large sucker stamp on your forehead … the machines don’t cost very much to begin with, and have effectively no residual value at the end of the process. So leasing these machines means you are voluntarily paying more for the same hardware, which will be thrown away at the end of the lease. Your CFO will actually be happier paying more money for the same gear, though. Go figure.
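The sucker-stamp arithmetic looks roughly like this. The interest rate, margin, and residual are assumptions for illustration; real lease terms differ, but the shape of the result does not:

```python
# Hypothetical lease-vs-buy total cost for a commodity machine with ~zero
# residual value, per the argument above. Rates are made-up for illustration.

price = 10_000            # commodity server, bought outright
residual = 0              # effectively worthless at end of lease
money_rate = 0.08         # lessor's implied annual cost of money (assumed)
margin = 0.10             # lessor's profit margin (assumed)
term_years = 3

# The lessor recovers (price - residual), plus cost of money, plus profit.
financed = price - residual
lease_total = financed * (1 + money_rate * term_years) * (1 + margin)

print(f"buy outright:                 {price:,}")
print(f"lease total over {term_years} years:     {lease_total:,.0f}")
# More total cash out for the same hardware -- but booked as expense, not capital.
```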
This gets into the ASP model. ASPs, the forerunners of SaaS, had the concept of “rent-a-cycle”. Really it is not that; it is more of a “rent-an-app-and-a-machine-to-run-it-on-and-the-connectivity” model. You rent applications, machines, licenses, bandwidth. The pitch is that it is less expensive for them to run it than for you, and you pay for what you use.
Which tends to ignore the usual rule of IT … the work tends to (over)fill the available resource on a shorter time scale than your purchase/approval cycle. Take a look at your IT staff’s whiteboards. You will see triage in progress. They can’t solve everything, so they figure out which fires to battle, and which elements to put their efforts behind.
Which means that ASP usage climbs. And climbs. And climbs. And so do the bills. Which makes the CFO start asking hard questions. That there will be disconnects between thought processes, expectations, and reality, shouldn’t surprise anyone. CFOs genuinely think that when they hear “cheaper” it really means “it will cost less”. Not that “it will cost you less so you will use more and therefore pay more”. CIOs have similar concerns.
But what serves to alter this ASP model (which, for a number of reasons, is actually broken out of the gate) is virtualization. Fire up your VM with your application, connecting to your license server. You don’t care (for the most part) what the machine is on the far end. As long as you can connect to it and fire off your VM.
This creates (effectively) a market for cheap cycles. That is, you create a VM, upload it, and run it. You get charged by the processor hour with bandwidth and storage. You don’t like their prices/service? Go move your VM.
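The billing model for that market is simple to sketch. The per-unit rates below are hypothetical placeholders, not any provider’s actual prices:

```python
# A sketch of the per-cycle billing model described above: charge by CPU-hour,
# plus bandwidth and storage. All rates are assumed, for illustration only.

def vm_monthly_cost(cpu_hours, gb_transferred, gb_stored,
                    rate_cpu=0.10, rate_bw=0.15, rate_store=0.20):
    """Monthly bill (USD) for one VM under assumed per-unit rates."""
    return (cpu_hours * rate_cpu
            + gb_transferred * rate_bw
            + gb_stored * rate_store)

# One VM running flat-out for a 30-day month, with modest bandwidth/storage:
bill = vm_monthly_cost(cpu_hours=720, gb_transferred=100, gb_stored=50)
print(f"monthly bill: ${bill:.2f}")   # 720*0.10 + 100*0.15 + 50*0.20 = 97.00
```

The point of the portability is that every term in that function is now subject to competition: if another provider quotes lower rates, you move the VM and the formula, not your application, is all that changes.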
One of my companies’ mantras is “lowering barriers”. This is something we are huge believers in. Virtualization lowers barriers. Yes, you pay for it in performance. But what do you get in return? Literally, electronic machine portability. Move your (virtual) box anywhere. Now the large collections of processors become a “cloud” of machines upon which you can run your applications, and there may (eventually) be an emerging market trading in computing cycles as a commodity … which is exactly what virtualization turns them into.
Another thing that lowers barriers is Linux, as it reduces the cost and effort of wide-scale system installation, which is absolutely needed for this model of computing to work.
Google’s relation to this is tangential. Their cluster/cloud/grid is huge enough that you could do this on their systems (if they let you). Amazon (some really smart people seem to work there, and they grasp this) is doing this. Google, from rumors in various rags, seems to also be readying something. Microsoft appears to want in. Sun fancies itself a member of this group (it isn’t).
Cloud computing, as it were, demands that you minimize cost per node. Of hardware, of software, of licenses. It makes no sense to scale a cloud to N machines if your software license costs scale with N machines (see the above note on why Linux is perfect for this). You don’t need to pay for VM execution software for clouds; just use VMware Player or similar.
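The scaling argument above can be put in numbers. Here is a sketch under assumed (hypothetical) hardware and license prices, comparing per-node licensing against a flat fee that amortizes away:

```python
# Why per-node license fees break cloud economics: a per-node fee keeps the
# cost per node flat forever, while a fixed fee (or a free OS like Linux)
# amortizes toward zero as the cloud grows. All prices are assumed.

def cost_per_node(n_nodes, hw_per_node=2_000,
                  per_node_license=1_500, flat_license=0):
    """Total cost divided by node count, under the two licensing models."""
    total = n_nodes * (hw_per_node + per_node_license) + flat_license
    return total / n_nodes

# Per-node licensing: cost per node never drops below hw + license fee.
print(cost_per_node(1_000))                                          # 3500.0
# Flat-fee licensing: the license is noise at scale.
print(cost_per_node(1_000, per_node_license=0, flat_license=10_000))  # 2010.0
```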
Now this is interesting from a SaaS point of view. To do SaaS you need an infrastructure. What if you can lower your cost of infrastructure by moving your backend where you get the best cost per cycle? That is, cloud computing actually makes SaaS feasible.
This of course requires that a competitive market emerges, one that doesn’t let providers simply charge what the market will bear. This is part of what killed ASPs, the other part being, of course, the huge up-front capital costs.
So, I disagree with Robin that it is mere smoke and mirrors (he didn’t say this, but this is my interpretation of what he said, in a pithy wrapping). Whether or not a real market will emerge remains to be seen. We would like it to, as we also believe that storage is a commodity, and we have some of the fastest, least-expensive units out there.