After my recent post on EC2 Micro instances, I received a great comment from Robert Jenkins over at CloudSigma regarding the “false construct” of fixed instance sizes. There’s no reason why an EC2 Small has to have 1.7GB RAM, 1 VPU and 160GB of local storage. The underlying virtualization technology allows for fairly open configurability of instances. What if I want 2.5GB of RAM, 2 VPUs and 50GB of local storage? I can’t get that from Amazon – but the Xen hypervisor they use doesn’t prohibit it. You’re never going to use exactly 160GB of storage, and Amazon is counting on the fact that most customers won’t use more than 50 or 60GB – showing you what a deal you get on something they never have to provide.
The same is true for most cloud providers. Rackspace lets you go down to 256MB RAM, a 10GB disk and a 10Mbps bandwidth limit. You can use more bandwidth and disk; you just pay for it.
Perhaps customers like the “value meal” approach and pre-configured instance types simply sell better. Perhaps Amazon likes being able to release a new instance type every quarter as a way to generate news and blog posts. Perhaps their ecommerce billing systems can’t handle the combinatorial complexity of variable memory, storage, bandwidth and VPUs. Whatever the reason, these fixed instance types limit user choice.
They’re dumb because they’re unnecessary.
No, you are completely ignoring the complexity of the problem. If anyone could choose the desired amount of CPU, RAM, storage, …, finding the best resource allocation becomes quite a complex problem (at the very least, it is a generalization of the knapsack problem, which is itself NP-hard).

You can read this (my) article on the problem: A General Model for Virtual Machines Resources Allocation in Multi-tier Distributed Systems, published in the ICAS 2009 Conference Proceedings.
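To make Paolo’s point concrete, here is a minimal sketch (not the model from the ICAS 2009 paper) of why arbitrary sizing turns placement into a packing problem: a naive first-fit heuristic placing VMs with customer-chosen CPU/RAM shapes onto hosts. The host capacity and VM requests below are made-up illustrative numbers.

```python
# Toy first-fit placement of arbitrarily sized VMs onto fixed-capacity hosts.
# Host capacity and VM shapes are assumptions for illustration only.

HOST_CAPACITY = {"cpu": 16, "ram_gb": 64}

def first_fit(vm_requests):
    """Place each VM on the first host with enough spare CPU and RAM."""
    hosts = []  # each entry tracks a host's remaining capacity
    for vm in vm_requests:
        for host in hosts:
            if host["cpu"] >= vm["cpu"] and host["ram_gb"] >= vm["ram_gb"]:
                host["cpu"] -= vm["cpu"]
                host["ram_gb"] -= vm["ram_gb"]
                break
        else:
            # no existing host fits, so open a new one
            new_host = dict(HOST_CAPACITY)
            new_host["cpu"] -= vm["cpu"]
            new_host["ram_gb"] -= vm["ram_gb"]
            hosts.append(new_host)
    return len(hosts)

# Customer-chosen shapes, e.g. the 2.5GB / 2 VPU server from the post
requests = [
    {"cpu": 2, "ram_gb": 2.5},
    {"cpu": 1, "ram_gb": 12},
    {"cpu": 8, "ram_gb": 4},
    {"cpu": 4, "ram_gb": 48},
]
print(first_fit(requests), "hosts needed")
```

First-fit is only a heuristic; finding the provably minimal number of hosts across both dimensions is where the NP-hardness bites.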
John,

The fixed instance size is a step along the way, but I’d like to see a real pay-for-what-you-use model (e.g. clock-cycle-based pricing myself).

Thanks for the retweet on Top 10 Cloud Computing Startups in Boston (http://bit.ly/d8bDjT).

Joe
I’m glad my earlier comment has sparked off some debate!

Just to respond to Paolo’s comments regarding the n-dimensional packing problem (i.e. filling up physical hosts efficiently): in reality we find that these things don’t have a significant impact. We have high-RAM users, we have high-CPU users and everything in between; it becomes white noise over the whole cloud. If we find we need to shift the structural balance of our overall cloud with regard to CPU/RAM/storage ratios, we can do so by physically adjusting it. Alternatively, we can use our price mechanism to adjust (because we have independent resource-based pricing).

That ties in with Joe’s comments, which I totally agree with. The second half of eliminating server instance sizes is moving to transparent resource-unit-based time pricing, which we have. Apart from over-selling certain resources, the other reason why many IaaS vendors use instance sizes is that it is a great way to obscure prices. Amazon even goes so far as to quote CPU in rough EC2 compute units. Try making a meaningful comparison on a per-resource basis between Rackspace and Amazon (warning: it takes a while!).

By moving to resource-based pricing, users buy resources in aggregate and then form their infrastructure into however many nodes/servers they wish, without regard to instance sizes. That’s the sort of efficiency gain that IaaS should and can be delivering.

Best wishes,

Robert

--
Robert Jenkins
Co-founder
CloudSigma
http://www.cloudsigma.com
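For readers unfamiliar with resource-unit-based time pricing, here is a rough sketch of the billing model Robert describes. The unit prices are invented for illustration; they are not CloudSigma’s or Amazon’s actual rates.

```python
# Illustrative resource-unit pricing: charge per resource-unit-hour and sum,
# instead of quoting a fixed instance size. All prices below are assumptions.

UNIT_PRICES = {          # hypothetical $ per unit per hour (bw per GB moved)
    "cpu_ghz": 0.010,
    "ram_gb":  0.015,
    "disk_gb": 0.0002,
    "bw_gb":   0.08,
}

def hourly_cost(usage):
    """Sum the cost of each independently metered resource for one hour."""
    return sum(UNIT_PRICES[res] * amount for res, amount in usage.items())

# A server shaped exactly how the user wants it: 2GHz CPU, 2.5GB RAM, 50GB disk
server = {"cpu_ghz": 2.0, "ram_gb": 2.5, "disk_gb": 50, "bw_gb": 1.0}
print(f"${hourly_cost(server):.4f} per hour")
```

Because each resource is priced separately, comparing two providers reduces to comparing their unit-price tables rather than reverse-engineering bundled instance types.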
Ah, the old MIPS model.
I don’t think so… not really an old model.