The End of Over-Provisioning

One part of the cloudonomics debate that often gets overlooked is the effect of over-provisioning. Many people look at the numbers and say they can run a server for less money than they can buy the same capacity in the cloud. And, assuming that you optimize the utilization of that server, that may be true for some of us. But that's a very big and risky assumption.

People are optimists – well, at least most of us are. We naturally believe that the application we spend our valuable time creating and perfecting will be widely used. That holds true whether the application is internal- or external-facing.

In IT projects, such optimism can be very expensive because we purchase many more servers than we actually need. With the typical lead time of many weeks or even months to provision new servers and storage in a traditional IT shop, it's also important not to get caught with too little infrastructure. Nothing kills a new system's acceptance faster than poor performance or significant downtime due to overloaded servers. The result is that new systems typically get provisioned with far more infrastructure than they really need. When in doubt, it's better to have too much than too little.

As proof of this, it is typical for an enterprise to have server utilization rates below 15%. Taken at face value, that means that, on average, 85% of the money companies spend with IBM, HP, Dell, EMC, NetApp, Cisco and other infrastructure providers is wasted. Most would peg ideal utilization somewhere in the 70% range (performance degrades above a certain level), so even measured against that realistic target, somewhere between $5 and $6 of every $10 we spend on hardware only enriches the vendors and adds no value to the enterprise.
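
To make that arithmetic concrete, here is a minimal sketch of the waste calculation using the 15% actual and roughly 70% ideal utilization figures above; the per-$10 framing is purely illustrative.

```python
# Back-of-the-envelope waste estimate from the utilization figures above.
actual_utilization = 0.15   # typical enterprise server utilization
ideal_utilization = 0.70    # practical ceiling before performance degrades

# Capacity you could reasonably have used but left idle, per $10 of hardware spend.
idle_but_usable = ideal_utilization - actual_utilization   # 0.55
wasted_per_10_dollars = 10 * idle_but_usable

print(f"Roughly ${wasted_per_10_dollars:.2f} of every $10 adds no value")
# -> Roughly $5.50 of every $10 adds no value
```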

Even with virtualization we tend to over-provision. It takes a lot of discipline, planning and expense to drive utilization above 50%, and like most things in life, it gets harder the closer we are to the top. And more expensive. The automation tools, processes, monitoring and management of an optimized environment require a substantial investment of money, people and time. And after all, are most companies even capable of sustaining that investment?

I haven’t even touched on the variability of demand. Very few systems have a stable demand curve. For business applications, there are peaks and valleys even during business hours (10-11 AM and 2-3 PM tend to be peaks while early, late and lunchtime are valleys). If you own your infrastructure, you’re paying for it even when you’re not using it. How many people are on your systems at 3:00 in the morning?

If a company looks at its actual utilization rate for infrastructure, is it still cheaper to run it in-house? Or does the cloud look more attractive? Consider that cloud servers are on-demand, pay-as-you-go. The same is true for storage.

If you build your shiny new application to scale out – that is, to use a larger number of smaller commodity servers when demand is high – and you enable the auto-scaling features available in some clouds and cloud tools, your applications will always use what they need, and only what they need, at any time. For example, at peak you might need 20 front-end Web servers to handle the load of your application, but perhaps only one in the middle of the night. In this case a cloud infrastructure will be far less costly than in-house servers. See the demand chart below for a typical application accessed from only one geography.
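
As a sketch of what that scale-out sizing looks like hour by hour, the snippet below maps a hypothetical single-geography demand curve to the number of front-end servers needed in each hour. The demand values and the per-server capacity are invented for illustration, chosen so the totals line up with the 20-server peak and the server-hour figures discussed next.

```python
import math

# Hypothetical hourly demand (requests/sec) for an app used from a single
# geography: quiet overnight, peaks mid-morning and mid-afternoon.
# All values are invented for illustration.
hourly_demand = [
    50, 40, 30, 30, 40, 80,              # 00:00-05:00
    150, 450, 850, 1450, 2000, 1550,     # 06:00-11:00
    1150, 1450, 1850, 1550, 1150, 750,   # 12:00-17:00
    450, 350, 250, 250, 150, 120,        # 18:00-23:00
]

CAPACITY_PER_SERVER = 100  # requests/sec one commodity web server handles (assumed)

# With auto-scaling, each hour runs only as many servers as that hour needs
# (always keeping at least one).
servers_per_hour = [max(1, math.ceil(d / CAPACITY_PER_SERVER)) for d in hourly_demand]

print("Peak servers:      ", max(servers_per_hour))   # 20, at the 10-11 AM peak
print("Overnight servers: ", min(servers_per_hour))   # 1
print("Server-hours used: ", sum(servers_per_hour))   # 174
```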

So, back to the point about over-provisioning. If you buy for the peak plus some % to ensure availability, most of the time you’ll have too much infrastructure on hand. In the above chart, assume that we purchased 25 servers to cover the peak load. In that case, only 29% of the available server hours in a day are used: 174 hours out of 600 available hours (25 servers x 24 hours).

Now, if you take the simple math a step further, you can see that if your internal cost per hour is $1 (for simplicity), then the cloud cost would need to reach $3.45 per hour to be equivalent ($1 / 0.29). A well-architected application that uses autoscaling in the cloud can run far more cheaply than in a traditional environment.
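
Here is the same arithmetic as a minimal sketch, using the numbers from the example above: 25 servers bought for the peak, 174 server-hours actually used, and the simplifying $1/hour internal cost.

```python
# Utilization and break-even price for the over-provisioned in-house fleet.
servers_purchased = 25                      # sized to cover the peak load
available_hours = servers_purchased * 24    # 600 server-hours per day
used_hours = 174                            # from the demand curve above

utilization = used_hours / available_hours          # 0.29
internal_cost_per_hour = 1.00                       # simplifying assumption ($)

# What the cloud could charge per server-hour and still match the effective
# in-house cost per *useful* hour.
break_even_cloud_price = internal_cost_per_hour / utilization   # ~$3.45

print(f"Utilization: {utilization:.0%}")
print(f"Break-even cloud price: ${break_even_cloud_price:.2f}/server-hour")
```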

Build your applications to scale out, and take advantage of autoscaling in a public cloud, and you’ll never have to over-provision again.

Follow me on Twitter for more cloud conversations: http://twitter.com/cloudbzz

Notice: This article was originally posted at http://CloudBzz.com by John Treadway.

(c) CloudBzz / John Treadway

12 thoughts on “The End of Over-Provisioning”

  1. Isn't under-provisioning more the concern? There's a great scene in some movie where there's an explosion in NYC and then everyone's cellphone starts ringing one after another in some meeting. It seems to me that's the big risk we face in the future, when everyone's expansion strategy is to use the same cloud capacity.

  2. Ahh, that's the ideal. However, the reality is that companies using public clouds routinely over-provision machines through carelessness, lack of coordination and poor visibility into what's going on in their public cloud. Of the companies that use cloud, how many know what percentage of their cloud use is waste?

  3. You make a good point, but if all of your applications used autoscaling then they would not be over-provisioned. If you rely on people to manage the size of your cloud footprint, there's no way to manage it closely enough.

  4. Well, I suppose that could be a concern in a hypothetical case. But the reality is that over-provisioning is very common and costs companies millions in unnecessary expense.

  5. Is it realistic to expect that time zone differences can be used to smooth the cloud provider's daily usage curve? IMHO no, because if I am in Europe, I do not want my VMs to be hosted in the US or Asia for latency reasons. Since most of Europe is on the same time zone +/- 1h, all the European datacenters will reproduce the curve you show, so the cloud providers will not be able to pass the economic advantage on to their customers. More realistic is equalization within a time zone between business hours (professionals) and leisure hours (individuals). Are there customers willing to buy cheap compute time nightly or during the weekend? Big question, since the hardware has to be amortized 24×7.

  6. Note that this post was from the user/customer perspective, not the cloud provider's. Cloud providers do have the challenge of managing the fixed asset investment and optimizing 24×7 usage, as you say. This is why Amazon has added spot pricing. Surprisingly, many web sites from European companies are running in Amazon in the US (to take advantage of the big market), so latency is not an issue for all. In any event, I would expect over time that more clouds will provide incentives for off-hours processing in order to increase their asset utilization rates, which will drive down usage fees overall. Power companies do the same, and for the same reason: rates are higher during the day than at night, particularly for commercial customers.

  7. John, for cloudonomics – is it enough to build applications to scale out? Old habits die hard, and so do server utilization rates. Assuming you run the application on a cloud instance, I believe there is going to be a similar trend for a typical cloud server instance in terms of utilization. Refer to the study by Accenture where they benchmark a web server running on Amazon using SPECweb2005. The focus of apps such as web servers is handling incoming traffic, so their peak requirement is not CPU but is bounded by the network traffic capacity of an Amazon instance. Typical CPU utilization for such an instance is barely 20%. So now I think we are in an era where people pay by the hour assuming they are paying for 100% of the resources in the cloud, whereas they are actually utilizing only 20% of the CPU (though perhaps full network capacity and memory). Thoughts?

  8. Would this network bandwidth constraint issue be any different in a corporate environment? Also, some “enterprise” cloud providers are using higher-grade networking gear and topologies with 10gigE SLAs on the network. That may have an impact.

  9. The issue is no different than in a corporate environment, but with cloud instances the server-utilization issue becomes larger for the end user, not so much for the cloud provider. Earlier, you would buy a server, and it was paid for whether you used 15% of its resources or less; that depended on your corporate efficiency. But with a cloud instance you pay 100% for all resources – be it CPU, network or memory – by the hour, yet a typical application actually utilizes less than what you paid for. There are technologies which can help improve server utilization without impacting any running applications – be it a physical, virtual or cloud instance server. Take a look at http://silverline.librato.com.
