Ok, I know that this is dangerous. Randy is a very smart guy and he has a lot more experience on the public cloud side than I probably ever will have. But I do feel compelled to respond to his recent “Elasticity is NOT #Cloud Computing …. Just Ask Google” post.
On many of the key points – such as elasticity being a side-effect of how Amazon and Google built their infrastructure – I totally agree. We have defined cloud computing in our business much as patients define their conditions – by the symptoms (runny nose, fever, headache) rather than the underlying cause (caught the flu because I didn’t get the vaccine…). Sure, the result of the infrastructure that Amazon built is that it is elastic, can be automatically provisioned by users, scales out, etc. But the reasons they have this type of infrastructure are their underlying drivers – the need to scale massively, at a very low cost, while achieving high performance.
Here is the diagram from Randy’s post. I put it here so I can discuss it, and then provide my own take below.
My big challenge with this is how Randy characterizes the middle tier. Sure, Amazon and Google needed unprecedented scale, efficiency and speed to do what they have done. How they achieve this is through the tactics, tools and methods exposed in the middle tier. The cause and the results are the same – scale because I need to; efficiency because it has to be. These are the requirements. The middle layer here is not the results – it is the method chosen to achieve them. You could successfully argue that achieving their level of scale with different contents in the grey boxes would not be possible – and I would not disagree. Few need to scale to 10,000+ servers per admin today.
However, I believe that what makes an infrastructure a “cloud” is far more about the top and bottom layers than about the middle. The middle, especially the first row above, affects the characteristics of the cloud – not its definition. Different types of automation and infrastructure will change the cost model (negatively impacting efficiency). I can achieve an environment that is fully automated from bare metal up, uses classic enterprise tools (BMC) on branded (IBM) heterogeneous infrastructure (within reason), and is built with the underlying constraints of assumed failure, distribution, self-service and some level of over-built environment. And this second grey row is the key – without these core principles, I agree that what you might have is a fairly uninteresting model of automated VM provisioning. Too often, as Randy points out, this is the case. But if you do build to these row-2 principles…?
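To make that distinction concrete, here is a minimal, hypothetical sketch (all names and numbers invented for illustration) of how the row-2 principles – self-service, distribution, over-built capacity, and assumed failure – can be expressed as provisioning policy, entirely independent of which vendor’s tooling executes the request underneath:

```python
from dataclasses import dataclass
from itertools import cycle

# Hypothetical illustration: the "core principles" row expressed as
# provisioning policy, regardless of the tool (AWS, BMC, etc.) underneath.

ZONES = ["zone-a", "zone-b", "zone-c"]   # distribution: spread placements
OVERBUILD_FACTOR = 1.25                  # over-built: provision extra headroom
ASSUMED_FAILURE_SPARES = 1               # assumed failure: always keep a spare

@dataclass
class PlacementPlan:
    instances: list   # (zone, instance-name) pairs
    spares: int       # capacity beyond what the user asked for

def self_service_request(app: str, count: int) -> PlacementPlan:
    """A user-initiated (self-service) request; no admin in the loop."""
    total = int(count * OVERBUILD_FACTOR) + ASSUMED_FAILURE_SPARES
    placements = [(zone, f"{app}-{i}")
                  for i, zone in zip(range(total), cycle(ZONES))]
    return PlacementPlan(instances=placements, spares=total - count)

plan = self_service_request("billing", 4)
# 4 requested -> 6 provisioned (1.25x headroom + 1 spare), spread across zones
```

The point of the sketch is that the policy layer, not the brand of automation tool, is what encodes the cloud-ness.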
Below I have switched the middle tier around to put the core principles as the hands that guide the methods and tools used to achieve the intended outcome (and the side effects).
The core difference between Amazon and an enterprise IaaS private cloud is now the grey “methods/tools” row. Again, I might use a very different set of tools here than Amazon (e.g. BMC, et al). This enterprise private cloud model may not be as cost-efficient as Amazon’s, or as scalable as Google’s, but it can still be a cloud if it meets the requirements, core-principles and side-effects components. In addition, the enterprise methods/tools have other constraints that Amazon and Google don’t have at such a high priority – internal governance and risk issues, the fact that I might have regulated data, or perhaps that I already have a very large investment in the processes, tools and infrastructure needed to run my systems.
Whatever my concerns as an enterprise, the fact that I chose a different road to reach a similar (though perhaps less lofty) destination does not mean I have not achieved an environment that can rightly be called a cloud. Randy’s approach of dev/ops and homogeneous commodity hardware might be more efficient at scale, but it is simply not the case that an “internal infrastructure cloud” fails to qualify as a cloud by default.