Market Parallels – Cloud and Open Source?

Any new technology market has its own lifecycle and rhythm. From mainframes to smartphones, there are the early years, the rapid growth, a slowdown, and eventually a decline. Some technologies never go away completely (e.g. mainframes), while others never really get a foothold (insert your own example here).

Open source was a software movement that began as an idea and now dominates how many new software offerings are marketed and sold. Open source is not a technology, but a business and legal framework within which technology is propagated. Still, the biggest companies in software are largely closed source – Oracle, SAP, etc. Nearly all specialty vertical apps (e.g. trading systems) are closed source, whereas most new development technologies, including databases and tools, are open source. Given that open source is more a legal construct that bleeds into sales and marketing, it’s highly likely that open and closed source models will co-exist for the foreseeable future. Further, open source shrinks the size of the industry from a revenue perspective by default (though paradoxically, software spending is up this year even in this economy).

What about cloud computing? Will there still be a need for the cloud modifier in the future? In the past, most infrastructure was sold directly to users under a capex spending model – servers, databases, operating systems, etc. Of total infrastructure spending 20 years from now, how much will still be on-premise capex, and how much cloud opex? Will economies of scale drive infrastructure into the cloud to the point where the infrastructure market shrinks in both real and nominal terms?

Will the purveyors of servers, networking and core infrastructure software sell 90% of their wares to cloud companies? Will what we currently call “cloud computing” be just plain “computing” in the future? Time will tell, and it will be a long time before the cloud distinction becomes superfluous, but it will be interesting to watch.

Private Cloud for Interoperability, or for “Co-Generation?”

There has been a lot of good discussion lately about the semantics of private vs. public clouds. The debate generally revolves around elasticity. It goes something like this: “If you have to buy your own servers and deploy them in your data center, that’s not very elastic and therefore cannot be cloud.” Whether or not you buy into the distinction, private clouds (if you want to call them that) do suffer from inelasticity. Werner Vogels, in his VPC blog post, dismisses the private cloud as not the real thing:

“Private Cloud is not the Cloud

These CIOs know that what is sometimes dubbed “private cloud” does not meet their goal as it does not give them the benefits of the cloud: true elasticity and capex elimination. Virtualization and increased automation may give them some improvements in utilization, but they would still be holding the capital, and the operational cost would still be significantly higher.”

What if we were to look at the private cloud concept as an interoperability play? If someone implements cloud-like automation, provisioning and management infrastructure in their data center to gain many of the internal business process benefits of cloud computing (perhaps without the financial benefits of opex vs. capex and elastic “up/down scaling”), it can still be a very valuable component of a cloud computing strategy. It’s not “the Cloud,” as Werner points out. It’s just part of the cloud.

To realize this benefit requires a certain degree of interoperability and integration between my “fixed asset cloud” and the public “variable expense cloud,” such that I can use and manage them as a single elastic cloud (this is what is meant by “hybrid cloud”). Remember that enterprises will always need some non-zero, non-trivial level of computing resources to run their business. It is possible that these assets can be acquired and operated over a 3-5 year window at a lower TCO than public cloud equivalents (in terms of compute and storage resources).
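To make that TCO argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (server price, operating cost, hourly cloud rate, amortization window) is a hypothetical placeholder, not a quote from any vendor or provider – the point is only the shape of the comparison, not the result.

    # Hypothetical back-of-the-envelope TCO comparison: an owned "fixed asset cloud"
    # vs. renting equivalent steady-state capacity from a public "variable expense cloud".
    # Every figure below is an assumption for illustration only.

    YEARS = 5                      # amortization window (the 3-5 years mentioned above)
    HOURS_PER_YEAR = 24 * 365

    # Owned baseline capacity (capex plus ongoing opex)
    server_capex = 6_000           # purchase price per server (USD, hypothetical)
    server_opex_per_year = 1_500   # power, cooling, space, admin (USD, hypothetical)
    servers = 100

    owned_tco = servers * (server_capex + server_opex_per_year * YEARS)

    # Equivalent public cloud capacity (pure opex)
    cloud_rate_per_hour = 0.40     # per comparable instance-hour (USD, hypothetical)
    baseline_instances = 100       # steady-state baseline load

    cloud_tco = baseline_instances * cloud_rate_per_hour * HOURS_PER_YEAR * YEARS

    print(f"Owned baseline over {YEARS} years: ${owned_tco:,.0f}")
    print(f"Cloud baseline over {YEARS} years: ${cloud_tco:,.0f}")
    # Even when the owned baseline wins, the elastic peaks above that baseline
    # are still best served from the public side of the hybrid cloud.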

Managing these fixed + variable hybrid cloud environments in an interoperable way requires tools such as cloud brokers (RightScale, CloudKick, CloudSwitch, etc.). It also requires your internal cloud management layer to be compatible with these tools. Enterprise outsourcers like Terremark, Unisys and others may also provide uniform environments for their clients to operate in this hybrid world. In a hybrid model you get the benefits of full elasticity, since your view of the data center includes the public cloud providers you have enabled. You may choose to stop all new capex going forward while leveraging the value of prior capex (sunk costs) you’ve already made. In this context, private cloud is very much part of your cloud computing strategy. A purely walled-off private cloud with no public cloud interoperability is really not a cloud computing strategy – on this point I agree with Vogels.
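As a rough illustration of what that broker layer has to decide, here is a minimal Python sketch of hybrid placement: fill the private (sunk-cost) capacity first, then burst the overflow to whichever public provider you have enabled. The class, pool names and function are invented for this example and do not correspond to any real broker’s API.

    # Minimal hybrid-placement sketch: prefer already-paid-for private capacity,
    # burst overflow to an enabled public provider. Names are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Pool:
        name: str
        capacity: int      # total instance slots
        in_use: int = 0

        def free(self) -> int:
            return self.capacity - self.in_use

    def place(request: int, private: Pool, public: Pool) -> dict:
        """Split an instance request between the private and public pools."""
        from_private = min(request, private.free())
        from_public = min(request - from_private, public.free())
        if from_private + from_public < request:
            raise RuntimeError("not enough capacity across the hybrid cloud")
        private.in_use += from_private
        public.in_use += from_public
        return {private.name: from_private, public.name: from_public}

    internal = Pool("internal-dc", capacity=200, in_use=180)   # fixed asset cloud
    provider = Pool("public-provider", capacity=10_000)        # variable expense cloud

    print(place(50, internal, provider))
    # -> {'internal-dc': 20, 'public-provider': 30}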

Co-Generation: Selling Your Private Cloud By the Drink

Now, assuming you’ve built a really great data center capability and implemented a full hybrid cloud environment with interoperability and great security, what’s to stop you from turning around and selling off any excess capacity to the public cloud? Think about it – if you can provide a fully cloud-compatible environment on great hardware that’s superbly managed, virtualized, and secured, why can’t you rent out any unused capacity you have at any given time? Just like electricity co-generation: when I need more resources I draw from the cloud, but when I have extra resources I can sell them to someone else who has the need.
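Extending the placement sketch above, the co-generation idea boils down to a simple rule evaluated against your own utilization. The threshold and function name here are hypothetical, just to make the electricity analogy concrete.

    # Hypothetical co-generation rule: buy from the public market when short,
    # offer surplus capacity to the market when utilization drops.

    def cogeneration_action(capacity: int, demand: int, sell_threshold: float = 0.6) -> str:
        if demand > capacity:
            return f"buy {demand - capacity} instances from the public market"
        if demand < capacity * sell_threshold:
            return f"offer {capacity - demand} instances to the public market"
        return "run entirely on internal capacity"

    print(cogeneration_action(capacity=200, demand=250))  # burst out
    print(cogeneration_action(capacity=200, demand=90))   # sell surplus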

You might say that if your cloud environment is truly elastic, you’ll never have excess capacity.  Sorry, but things are never that easy.  Today large enterprises typically have very poor asset utilization, but for financial and other reasons dumping this capacity on eBay does not always make sense.  So, what about subletting your computing capacity to the cloud?

Now take all of the big corporate data centers in the world and weave them into this open co-generation market: instead of buying instances from Amazon, Citigroup can buy them from GE or Exxon. And if you need a configuration that is common in the enterprise but not in the cloud (e.g. a true enterprise-class analytic server with 100TB capacity), perhaps you can rent one for a few days. It may be more cost-effective than running the same job on 300 EC2 instances over the same timeframe.
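A quick sketch of that comparison, with entirely made-up rates – neither the rental price of such a server nor the commodity instance figure is a real quote from anyone:

    # Made-up numbers, for illustration only: renting one big analytic server
    # for a few days vs. running the same job on 300 commodity cloud instances.

    DAYS = 3
    HOURS = DAYS * 24

    big_server_rate = 120     # USD/hour for a 100TB-class analytic server (hypothetical)
    instance_rate = 0.50      # USD/hour per commodity instance (hypothetical)
    instances = 300

    rent_big_server = big_server_rate * HOURS
    rent_instances = instance_rate * instances * HOURS

    print(f"One rented analytic server for {DAYS} days: ${rent_big_server:,.0f}")
    print(f"{instances} cloud instances for {DAYS} days:  ${rent_instances:,.0f}")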

There may be many reasons why the co-generation cloud computing market may never evolve, but those reasons are not technical. Doing this is not rocket science.