theCUBE Interview at EMC World 2013

Here is a video of me live inside theCUBE with Wikibon’s John Furrier and Stu Miniman, discussing the current state of cloud providers and where the industry is going, from the floor of EMC World 2013 in Las Vegas.

Thank you to ServiceMesh for inviting me to speak.

(c) 2013 John Treadway / CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow me on Twitter @CloudBzz.

Measuring the Business Value of Cloud Computing

My favorite and least favorite question I get is one and the same: “Can you help me build a business case and ROI for cloud computing?”

Well, yes… and no. The issue is that cloud computing has such a massive impact on how IT is delivered that the metrics and KPIs typically used at many enterprises don’t capture it. How do you really capture Agility?

In the past I have broken this down into three buckets. Yes, some people have more, but these are the big three…

Agility

Agility means reducing cycle time from ideation to product (or system delivery). It is incredibly difficult to measure, because apples-to-apples comparisons are hard when every product and project is unique. You can approximate it with Agile methodology task points: the average number of points delivered per fixed-timeframe sprint, tracked over time. But most IT shops do not measure developer productivity in any way at the moment, so it is hard to establish the baseline, let alone track changes. I have done some work on quantifying developer downtime and productivity, but Agility is almost something you have to take on faith. It is the real win for cloud computing, no matter how else you slice it.
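As a minimal sketch of that baseline idea (all of the sprint numbers below are invented for illustration), velocity can be tracked as average story points delivered per fixed-length sprint, before and after self-service provisioning arrives:

```python
# Hypothetical sprint history: story points completed per fixed-length
# sprint, before and after cloud self-service provisioning.
before = [21, 19, 24, 22, 20, 23]   # points per sprint, pre-cloud
after = [26, 29, 31, 28, 33, 30]    # points per sprint, post-cloud

def velocity(points):
    """Average story points delivered per sprint."""
    return sum(points) / len(points)

baseline = velocity(before)
current = velocity(after)
uplift_pct = 100 * (current - baseline) / baseline
print(f"baseline: {baseline:.1f} pts/sprint, "
      f"current: {current:.1f} pts/sprint, uplift: {uplift_pct:.0f}%")
```

Even this crude measure only works if the team was already recording points per sprint before the change, which is exactly the baseline most shops lack.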

Efficiency

In a highly automated cloud environment with resource lifecycle management and open self-service on-demand provisioning, the impetus for long-term hoarding of resources is eliminated. Reclamation of resources, only using what you need today because it’s like water (cheap and readily available), coupled with moving dev/test tasks to public clouds when at capacity (see Agility above) can reduce the dev/test infrastructure footprint radically (50% or more). Further, elimination of manual processes will reduce labor as an input to TCO for IT. In a smaller dev/test lab I know of, with only 600 VMs at any given time, 4 FTE onshore roles were converted to 2 FTE offshore resources.
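A back-of-the-envelope model makes those levers concrete. Only the 600-VM lab size, the 50% footprint reduction, and the 4-to-2 FTE conversion come from the text above; every unit cost in this sketch is an assumption I have made for illustration:

```python
# Illustrative dev/test savings model. Per-VM and per-FTE costs are
# assumed figures, not numbers from the post.
vms = 600                      # VMs in the lab at any given time
cost_per_vm_year = 1200.0      # assumed fully loaded infra cost, USD/yr
footprint_reduction = 0.50     # reclamation + bursting to public cloud

onshore_fte = 4
offshore_fte = 2
onshore_rate = 120_000.0       # assumed fully loaded cost, USD/yr
offshore_rate = 45_000.0       # assumed fully loaded cost, USD/yr

infra_savings = vms * cost_per_vm_year * footprint_reduction
labor_savings = onshore_fte * onshore_rate - offshore_fte * offshore_rate
print(f"infrastructure: ${infra_savings:,.0f}/yr, "
      f"labor: ${labor_savings:,.0f}/yr")
```

With these assumed rates the labor line alone rivals the infrastructure line, which is why eliminating manual process matters so much to TCO.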

There’s a very deep book on this topic that came out recently from Joe Weinman called Cloudonomics (www.cloudonomics.com). One of its key points is being able to calculate the economics of a hybrid model, where your base-level requirements are met with fixed infrastructure and the variable demand above that base is met with an elastic model. A key quote (paraphrased): “A utility model costs less even though it costs more.”

The book is based on this paper — http://joeweinman.com/Resources/Joe_Weinman_Inevitability_Of_Cloud.pdf

And can be summarized as…

[Figure: cost comparison of fixed, utility, and hybrid capacity models. Source: Joe Weinman, “Cloudonomics”]

A hybrid model is the most cost-effective – which is “obvious” on the surface but now rigorously proven (?) by the math.

P = Peak.  T = Time.  U = the utility price premium.
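Weinman’s comparison can be sketched numerically. In this toy model (the demand curve, flat unit rates, and the choice of the minimum as the owned base are all my assumptions, not figures from the book), a fixed environment must be sized to the peak P for the whole duration, while a utility bills only for actual usage at a premium U, and a hybrid owns the base load and rents the spikes:

```python
# Toy cost comparison of fixed, utility, and hybrid capacity models.
demand = [40, 55, 70, 100, 65, 45, 50, 90]  # hypothetical load per period
P = max(demand)                 # peak demand
A = sum(demand) / len(demand)   # average demand
U = 1.5                         # utility price premium vs. owned capacity

fixed_cost = P * len(demand)                  # own peak capacity always
utility_cost = U * sum(demand)                # pay per use, at a premium
base = min(demand)                            # hybrid: own the base load
hybrid_cost = base * len(demand) + U * sum(d - base for d in demand)

print(f"fixed: {fixed_cost}, utility: {utility_cost}, hybrid: {hybrid_cost}")
```

With this demand curve the peak-to-average ratio P/A is about 1.55, which exceeds the premium U = 1.5, so pure utility already undercuts pure fixed and the hybrid beats both: the utility “costs less even though it costs more” per unit. Picking the true optimal base level depends on the demand distribution; the minimum is just the simplest choice for illustration.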

If you add the utility pricing model in Joe Weinman’s work to some of the other levers I listed above, you get an interesting set of metrics. Most IT shops will focus on these alone to prove the ROI, and they are the ones missing the key point on Agility. However, I do understand the project budgeting dance: if you can’t show an ROI that the CFO will bless, you might not get the budget unless the CEO is a true believer.

Quality

What is the impact of removing human error (even though automation initially inserts systematic error until you work it through)? Many IT shops still provision security manually in their environments, and there are errors. How do you quantify the reputation risk of allowing an improperly secured resource to be used to steal PII data? It’s millions, or worse. You can quantify the labor savings (Efficiency, above), but you can also show the reduction in operational risk in IT through improved audit performance and easier regulatory compliance certification. Again, this is all through automation.

IT needs to get on the bandwagon and understand the fundamental laws of nature here — for 50-80% of your work even in a regulated environment, a hybrid utility model is both acceptable (risk/regulation) and desirable (agility, economics, and quality).

Do a Study?

The only way to break all of this down financially is to do a Value Engineering study and use this to do the business case. You need to start with a process review from the outside (developer) in (IT) and the inside (IT) out (production systems). Show the elimination of all of the manual steps.  Show the reduced resource footprint and related capex by eliminating hoarding behavior. Show reduced risk and lower costs by fully automating the provisioning of security in your environment. Show the “cloudonomics” of a hybrid model to offset peak demand and cyclicality or to eliminate or defer the expense of a new data center (that last VM with a marginal cost of $100 million anybody?).

History Lesson

In 1987 the stock market crashed and many trading floors could not trade because they lacked real-time position keeping systems. Traders went out and bought Sun workstations, installed Sybase databases, and built their own.  They didn’t wait for IT to solve the problem – they did it themselves.  That’s what happens with all new technology innovation.

The same thing happened with Salesforce.com. Sales teams just started using it and IT came in afterwards to integrate and customize it. It was obviously a good solution because people were risking IT’s displeasure by using it anyway.

If you really want to know if cloud computing really has any business value, take a look at your corporate credit card expenses and find out who in your organization is already using public clouds – with or without your permission. It’s time to stop calculating possible business value and start realizing actual business value from the cloud.

(c) 2012 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

IaaS Cloud Litmus Test – The 5 Minute VM

I will make this simple.  There is only one question you need to ask yourself or your IT department to determine if what you have is really an Infrastructure-as-a-Service cloud.

Can I get a VM in 5-10 minutes?

Perhaps a little bit more detailed?

Can a properly credentialed user, with a legitimate need for cloud resources, log into your cloud portal or use your cloud API, request a set of cloud resources (compute, network, storage), and have them provisioned for them automatically in a matter of a few minutes (typically less than 10 and often less than 5)?

If you can answer yes, congratulations – it’s very likely a cloud. If you cannot answer yes, it is NOT cloud IaaS. There is no wiggle room here.

Cloud is an operating model supported by technology.  And that operating model has as its core defining characteristic the ability to request and receive resources in real-time, on-demand. All of the other NIST characteristics are great, but no amount of metering (measured service), resource pooling, elasticity, or broad network access (aka Internet) can overcome a 3-week (or worse) provisioning cycle for a set of VMs.

Tie this to your business drivers for cloud.

  • Agility? Only if you get your VMs when you need them.  Like NOW!
  • Cost? If you have lots of manual approvals and provisioning, you have not taken the cost of labor out. The 5 Minute VM requires 100% end-to-end automation with no manual approvals.
  • Quality? Back to manual processes – these are error prone because humans suck at repetitive tasks as compared to machines.
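The litmus test itself is easy to automate. This is a hypothetical harness, not any real SDK: `request_vm` and `is_ready` stand in for whatever your cloud’s portal API actually provides, and the point is simply that the clock starts at the request and stops when the resource is usable:

```python
# Hypothetical 5 Minute VM litmus test: request a VM through the cloud
# API and time how long until it is ready. The two callables are
# placeholders for a real provider SDK.
import time

def five_minute_vm(request_vm, is_ready, limit_s=300, poll_s=10):
    """Return True if a freshly requested VM is ready within limit_s."""
    start = time.monotonic()
    vm_id = request_vm()                 # self-service, no manual approvals
    while time.monotonic() - start < limit_s:
        if is_ready(vm_id):
            return True                  # provisioned end-to-end in time
        time.sleep(poll_s)
    return False                         # by this test, it is not a cloud
```

If any human approval sits between `request_vm` and `is_ready`, the timer makes that painfully visible.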

Does that thing you call a cloud give you a 5 Minute VM?  If not, stop calling it a cloud and get serious about building the IT Factory of the Future.

“You keep using that word [cloud].  I do not think it means what you think it means.”

– The Princess Cloud

(c) 2012 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

Open Call to VMware – Commercialize Cloud Foundry Software!

After spending time at VMware and Cloud Expo last week, I believe that VMware’s lack of full backing for Cloud Foundry software is holding back the entire PaaS market in the enterprise.

Don’t get me wrong, there’s a lot of momentum in PaaS despite how very immature the market is. But this momentum is in pockets and largely outside of the core of software development in the enterprise. CloudFoundry.com might be moving along, but most enterprises don’t want to run the bulk of their applications in a public cloud. Only through the Cloud Foundry software layer will enterprises really be able to invest. And invest they will.

PaaS-based applications running in the enterprise data center are going to replace (or envelope) traditional app server-based approaches. It is just a matter of time due to productivity and support for cloud models. Cloud Foundry has the opportunity to be one of the winners but it won’t happen if VMware fails to put their weight behind it.

Some nice projects like Stackato from ActiveState are springing up around Cloud Foundry, but the enterprises I deal with every day (big banks, insurance companies, manufacturers) will be far more likely to commit to PaaS if a vendor like VMware gets fully behind the software layer. Providing an open source software support model is fine and perhaps a good way to start. However, this is going to be a lot more interesting if VMW provides a fully commercialized offering with all of the R&D enhancements, etc.

This market is going to be huge – as big or bigger than the traditional web app server space. It’s just a matter of time. Cloud Foundry is dominating the current discussion about PaaS software but lacks the full support of VMware (commercial support, full productization). This is just holding people back from investing.  VMware reps ought to be including Cloud Foundry in every ELA, every sales discussion, etc. and they need to have some way to get paid a commission if that is to happen. That means they need something to sell.

VMware’s dev teams are still focused on making Cloud Foundry more robust and scalable. Stop! It’s far better to release something that’s “good enough” than to keep perfecting and scaling it.
“The perfect is the enemy of the good.” – Voltaire

It’s time for VMware to get with the program and recognize what they have and how it can be a huge profit engine going forward – but they need to go all in starting now!

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

VMware’s OpenStack Hook-up

VMware has applied to join the OpenStack Foundation, potentially giving the burgeoning open source cloud stack movement a huge dose of credibility in the enterprise. There are risks to the community in VMware’s involvement, of course, but on the balance this could be a pivotal event. There is an alternative explanation, which I will hit at the end, but it’s a pretty exciting development no matter VMware’s true motivations.

VMware has been the leading actor for cloud computing in the enterprise. Most “private clouds” today run vSphere, and many service providers have used their VMware capabilities to woo corporate IT managers. While the mass-market providers like Amazon and Rackspace are built on open source hypervisors (typically Xen though KVM is becoming more important), the enterprise cloud is still an ESXi hypervisor stronghold.

Soapbox Rant: Despite the fact that most of the enterprise identifies VMware as their private cloud supplier, a very large majority of claimed “private clouds” are really nothing more than virtualized infrastructure. Yes, we are still fighting the “virtualization does not equal cloud” fight in the 2nd half of 2012. On the “Journey to the Cloud,” most VMware private clouds are still in Phase I or early Phase II, nowhere near a fully elastic, end-to-end automated environment driven by a flexible service catalog.

VMware’s vCloud program includes a lot of components, old and new, anchored by the vCloud Director (“vCD”) cloud management environment. vCD is a fairly rich cloud management solution, with APIs, and several interesting features and add-ons (such as vCloud Connector).

vCD today competes directly with OpenStack Compute (Nova) and related modules. However, it is not really all that widely used in the enterprise (I have yet to find a production vCD cloud, though I know they exist). Sure, there are plenty of vCD installations out there, but I’m pretty sure that adoption has been nowhere near what VMware had hoped (cue the VMware fanboys).

From early days, OpenStack has supported the ESXi hypervisor (while giving Microsoft’s Hyper-V a cold shoulder). It’s a simple calculus – if OpenStack wants to operate in the enterprise, ESXi support is not optional.

With VMware’s overtures to the OpenStack community, if that is what this is, it is possible that the future of vCloud Director could be very tied to the future of OpenStack. OpenStack innovation seems to be rapidly outpacing vCD, which looks very much like a project suffering from bloated development processes and an apparent lack of innovation. At some point it may have become obvious to people well above the vCD team that OpenStack’s momentum and widespread support could no longer be ignored in a protectionist bubble.

If so, VMware should be commended for their courage and openness to support external technology that competes with one of their strategic product investments from the past few years. VMware would be joining the following partial list of OpenStack backers with real solutions in (or coming to) the market:

  • Rackspace
  • Red Hat
  • Canonical
  • Dell
  • Cloudscaling
  • Piston
  • Nebula
  • StackOps

Ramifications

Assuming the future is a converged vCD OpenStack distro (huge assumption), and that VMware is really serious about backing the OpenStack movement, the guys at Rackspace deserve a huge round of applause. Let’s explore some of the potential downstream impacts of this scenario:

  • The future of non-OpenStack cloud stacks is even more in doubt. Vendors currently enjoying some commercial success that are now under serious threat of “nichification” or irrelevancy include: Citrix (CloudStack), Eucalyptus, BMC (Cloud Lifecycle Management), and… well, is there really anybody else? You’re either an OpenStack distro, an OpenStack extension, or an appliance embedding OpenStack if you want to succeed. At least until some amazingly new innovation comes along to kill it. OpenStack is to CloudStack as Linux is to SCO? Or perhaps FreeBSD?
    • Just weigh the non-OpenStack community against OpenStack’s “who’s who” list above. If you’re a non-OpenStack vendor and you are not scared yet, you may be already dead but just not know it.
    • As with Linux v. Unix, there will be a couple of dominant offerings and a lot of niche plays supporting specific workload patterns.  And there will be niche offerings that are not OpenStack.  In the long run, however, the bulk of the market will go to OpenStack.
  • The automation vendors (BMC, IBM, CA, HP) will need to embrace and extend OpenStack to stay in the game. Mind you, there is a LOT of potential value to what you can do with these tools. Patch management and compliance is just scratching the surface (though you can use Chef for that too, of course). Lots of governance, compliance, integration, and related opportunities for big markets here, and potentially all more lucrative and open to differentiated value. I’ve been telling my friends at BMC this for the past couple of years – perhaps I’ve got to get a bit more vociferous…
  • The OpenStack startups are in a pretty tough position right now. The OpenStack ecosystem has become its own pretty frothy and shark-filled “red ocean,” and the noise from the big guys – Rackspace, Red Hat, VMware, Dell, etc. – will be hard to overcome. I foresee a handful of winners, some successful pivots, and the inevitable failures (VCs invest in risk, right?). There are a lot of very smart people working at these startups, and at cloudTP we work with several of them, so I wouldn’t count any of them out yet. But in the long run, if the history of open source is any indicator, the market can’t support 10+ successful OpenStack software vendors.
  • Most importantly, it is my opinion that OpenStack WILL be the enterprise choice in the next 2-3 years. Vendors who could stop this – including VMware and Microsoft – are not getting it done (Microsoft is particularly missing the boat on the cloud stack layer). We’ll see the typical adoption curve with the most aggressive early adopters deploying OpenStack today and driving ecosystem innovation.
  • Finally, with the cloud stack battle all but a foregone conclusion, the battle for the PaaS layer is ripe for a blowout. And unlike the IaaS stack layer, the PaaS market will be a lot less commoditized in the near future.  There is so much opportunity for differentiation and innovation here that we will all have a lot to keep track of in the coming years.

Alternative Explanations

Perhaps I am wrong and the real motivation here for VMware is to tactically protect their interests in the OpenStack project – ESXi integration, new features tied to the vSphere roadmap, etc. The vCD team may also be looking to leverage the OpenStack innovation curve and liberal licensing model (Apache) to find and port new capabilities to the proprietary VMware stack – getting the benefit of community development efforts without having to invent them.

My gut tells me, however, that this move by VMware will lead to a long-term and strategic commitment that will accelerate the OpenStack in the Enterprise market.

Either way, VMware’s involvement in OpenStack is sure to change the dynamic and market for cloud automation solutions.

——-

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

Google Compute Engine – Not AWS Killer (yet)

GCE Logo
(c) Google

Google launched their new “Google Compute Engine” yesterday at I/O. Here’s more info about GCE on the Google Developer’s Blog, and a nice analysis by Ben Kepes on CloudAve.  If “imitation is the sincerest form of flattery” then it’s clear the guys at Google hold the Amazon Web Services EC2 in very high regard. In many ways, GCE is a really good copy of EC2 circa 2007/2008. There are some differences – like really great encryption for data at rest and in motion – but essentially GCE is a copy of EC2 4-5 years ago.

GCE is missing a lot of what larger enterprises will need – monitoring, security certifications, integration with IAM systems, SLAs, etc. GCE also lacks some of the things that really got people excited about EC2 early on – like an AMI community, or even the AMI model itself, so I could create an image from my own server.

One of the key selling points that people are jumping on is pricing. Google claims 50% lower pricing, but that doesn’t hold up against reserved instances at Amazon, which are actually cheaper over time than GCE. And price is rarely the primary factor in enterprise buying anyway. Plus, you have to assume that Amazon is readying a pricing response, so whatever perceived advantage Google might have there will quickly evaporate.

Other missing features that AWS provides today:

  • PaaS components – Relational Database Service (MySQL, SQL Server and Oracle), Elastic Map Reduce, CloudFront CDN, ElastiCache, Simple Queue Service, Simple Notification Service, Simple Email Service
  • Direct Connect – ability to run a dedicated network segment into AWS back to your data center
  • Virtual Private Cloud – secure instances that are not visible to public internet
  • Deployment tools – IAM, CloudWatch, Elastic Beanstalk, CloudFormation
  • Data Migration – AWS Import/Export via portable storage devices (e.g. sneaker net) for very large data sets
  • and others

Bottom line is that GCE is no AWS Killer. Further, I don’t think it ever will be. Even more – I don’t think that should be Google’s goal.

What Google needs to consider is how to create the 10x differentiation bar that any new startup must have.  Google Search was that much better than everybody else when it launched.  GMail crushed Yahoo Mail with free storage, conversation threads and amazingly fast responsiveness. Google Maps had AJAX, which blew away MapQuest and the others. And so on. You can’t just be a bit better and win in this market. You need to CRUSH the competition – and that ain’t happening in this case.

What would GCE have to offer to CRUSH AWS?

  • Free for production workloads up to a fairly robust level (like free GBs of GMail vs. Yahoo’s puny MBs, the ability to run most small apps for no cost at all would be highly disruptive to Amazon)?
  • A vastly superior PaaS layer (PaaS is the future – If I were rewriting The Graduate… “just one word – PaaS”)?
  • A ginormous data gravity well – think of what would happen if Google built a data store of every bit of real-time market data, trade executions, corporate actions, etc. – they’d disrupt Bloomberg and Thomson Reuters too! Or other data – what is the data that they can own (like GIS, but more broadly interesting) that can drive this?
  • Enterprise SaaS suite tied to GCE for apps and extensions – what if Google bought SugarCRM, Taleo, ServiceNow and a dozen other SaaS providers (or built their own Google-ized versions of these solutions), disrupted the market (hello, Salesforce-like CRM but only free), and then had a great compute story?
  • A ton of pre-built app components (whatever they might be) available in a service layer with APIs?

No matter what the eventual answer needs to be, it’s not what I see on the GCE pages today. Sure, GCE is mildly interesting and it’s great that Google is validating the last 6 years of AWS with their mimicry, but if there’s ever going to be an AWS killer out there – this ain’t it.

Open Clouds at Red Hat

Red Hat has been making steady progress toward what is shaping up as a fairly interesting cloud strategy.  Building on their Deltacloud API abstraction layer and their CloudForms IaaS software, a hybrid cloud model is starting to emerge. Add to this their OpenShift PaaS system, and you can see that Red Hat is assembling a lot of key components. Let’s add the fact that Red Hat has gotten very involved with OpenStack, providing an interesting dynamic with CloudForms.

Red Hat is the enterprise king in Linux (RHEL), strong in application servers (JBoss), and has a lot of very large customers.  Their VM environment, RHEV (aka KVM) won’t displace VMware in the enterprise space any time soon, but it is pretty interesting in the service provider space.

Red Hat’s community open source model will be very appealing to the market.  In fact, any of the OpenStack distro providers should be at least a bit worried that Red Hat might leapfrog them.  With their OpenStack move, CloudForms is being repositioned as a hybrid cloud management tool.  Now their competition in the future might be more along the lines of RightScale and enStratus.  What I’ve seen so far of CloudForms shows a lot of promise, though it’s still pretty immature.

Red Hat is pushing a message about “open clouds” – which is less about open source than it is about avoiding vendor lock in with cloud providers.  That’s something that CloudForms is intending to address.  It’s also why OpenShift has been released as an open source project (Apache 2.0 – yay) that can be deployed on other clouds and non-cloud infrastructures.

The big opportunity, IMO, is for Red Hat to go very strong on the OpenStack path for IaaS (e.g. release and support an enhanced Red Hat distro), really push their OpenShift conversation vs. Cloud Foundry based on their ability to drive community (along with its deep integration with JBoss), and move CloudForms further up the stack to a governance and multi-cloud management framework (their messaging on this is not very strong).  It’s this model of openness – any cloud, any app – that will make their “Open Cloud” vision a reality.

RACI and PaaS – A Change in Operations

I have been having a great debate with one of my colleagues about the changing role of the IT operations (aka “I&O”) function in the context of PaaS. Nobody debates that I&O is responsible and accountable for infrastructure operations.

Application developers (with or without the blessing of Enterprise Architecture) select platform components such as application servers, middleware etc.  I&O keeps the servers running – probably up to the operating system.  The app owners then manage their apps and the platform components.  I&O has no SLAs on the platform, etc.

In the PaaS era, I think this needs to change.  IT Operations (I&O) needs to have full accountability and responsibility for the OPERATION of the PaaS layer. PaaS is no longer a part of the application, but is now really part of the core platform operated by IT.  It’s about 24×7 monitoring, support, etc. and generally this is a task that I&O is ultimately best able to handle.

Both teams need to be accountable and responsible for the definition of the PaaS layer to ensure it meets the right business and operational needs.  But when it comes to operations, I&O now takes charge.

The implication of this will be a need for PaaS operations and administration skills in the I&O business.  It also means that the developers and application ownership teams need only worry about the application itself – and not the standard plumbing that supports it.

Result?  Better reliability of the application AND better agility and productivity in development.  That’s a win, right?

Cloud API Standardization – It’s Time to Get Serious

UPDATE 6/2

Given Oracle’s recent losses to Google in their Java copyright farce, it looks like using the AWS APIs as an industry standard could actually work. Anybody want to take the lead, set up a Cloud API standards body, and publish an AWS-compatible API spec for everybody to use?

——

Okay – this is easy… or is it?

Lots of people continue to perpetuate the idea that the AWS APIs are a de facto standard, so we should all just move on. At the same time, everybody seems to acknowledge that Amazon has never indicated that they want to be a true standard. In fact, they have played quite the coy game, keeping silent and luring potential competitors into a false sense of complacency.

Amazon has licensed their APIs to Eucalyptus under what I and others broadly assume to be a hard and fast restriction to the enterprise private cloud market. I would not be surprised to learn that the restrictions went further – perhaps prohibiting Eucalyptus from offering any other API or claiming compatibility with other clouds.

Amazon Has ZERO Interest in Making This Easy

Make no mistake – Amazon cares deeply about who uses their APIs and for what purpose.  They use silence as a way to freeze the entire market.  If they licensed it freely and put the API into an independent governance body, we’d be done.  But why would they ever do this and enable easy portability to other public cloud providers?  You’re right – they wouldn’t. If Amazon came out and told everybody to bugger off, we’d also be done – or at least unstuck from the current stupidly wishful thinking that permeates this discussion.  Amazon likes us acting like the deer-in-the-headlights losers we all seem to be. Why? Because this waiting robs us of our will and initiative.

It’s Time to Create A Cloud API Standard

Do I know what this is or should be? Nope. It could be the OpenStack API. It won’t be the vCloud API. It doesn’t freaking matter. Some group of smart cloud platform providers out there should just define, publish, freely license, and fully implement a new standard cloud API.

DO NOT CREATE A CLOUD API STANDARDS ORG OR COMMITTEE. Just go do it: publish it, license it under Apache, commit to it, and go. AFTER it gets adopted and there’s some need for governance going forward, then create a governance model (or just put it under the Apache Foundation). Then every tool or system that needs to access cloud APIs only has to do it twice: once for Amazon and once for the true standard.

Even give it branding value, like Intel Inside, and make it an evaluation criterion in bids and RFPs. I don’t care – just stop treating the AWS API as anything other than a tightly controlled proprietary API from the dominant cloud provider, one that you should NOT USE EVER (once there is a standard).

Take it one step forward – publish a library to translate the Standard API to AWS under an Apache license and get people to not even code AWS API into their tools.  We need to isolate AWS API behind a standard API wall.  Forever.
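That translation library is essentially a thin adapter layer. Everything in this sketch is hypothetical – the neutral interface, the method names, and the client it wraps – but it shows the shape of the wall: tools code against the standard interface, and provider-specific vocabulary appears in exactly one place:

```python
# Sketch of the "standard API wall". All names here are hypothetical;
# the AWS-style calls mimic EC2's RunInstances/TerminateInstances shape.

class StandardCompute:
    """The neutral interface every tool would target."""
    def create_instance(self, image, size):
        raise NotImplementedError
    def destroy_instance(self, instance_id):
        raise NotImplementedError

class AWSAdapter(StandardCompute):
    """Confines AWS-specific vocabulary behind the standard interface."""
    def __init__(self, aws_client):
        self._aws = aws_client  # e.g. an EC2-style client object

    def create_instance(self, image, size):
        # Translate neutral arguments into AWS parameter names.
        return self._aws.run_instances(ImageId=image, InstanceType=size)

    def destroy_instance(self, instance_id):
        return self._aws.terminate_instances(InstanceIds=[instance_id])
```

Swap in an adapter per provider and no tool ever codes the AWS API directly, which is exactly the isolation argued for above.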

Then, and only then, perhaps we can get customers together and get them to force Amazon to change to the standard (which they will do if they are losing enough business but only then).

Eucalyptus and AWS – Much Ado About Nothing

UPDATED:  Eucalyptus announced a $30M financing round from a great group of VCs.  That will buy them some room but if they want to get a good return on the $55.5M they’ve raised, they’re going to need to hit it out of the park.  At least they’ll be busy spending all that green.

Yes, this is a delayed post.  But hey, I’m busy.

The Eucalyptus – AWS announcement last week was really a great case of Much Ado About Nothing.  Marten Mickos is a great marketer, and the positioning in this story was almost magical.  For a while there it seemed that Amazon had truly anointed Eucalyptus as “the private cloud” for the enterprise.

Here is 100% of the content behind this story as far as I can tell:  Amazon granted Eucalyptus a license to the AWS API and might provide some technical assistance.  That’s it – there is no more.

I got the release from Amazon and managed to have a quick email Q&A with an Amazon spokesperson (below).

1.  What’s new here?  Eucalyptus has had AWS-compatible APIs since the beginning.

This agreement is about making it simple to move workloads between customers’ on-premise infrastructure running Eucalyptus and AWS—all the while being able to use common tools between the environments.  That’s what Eucalyptus and AWS are working to make easier for customers.

As part of this agreement, AWS will be working with Eucalyptus as they continue to extend compatibility with AWS APIs and customer use cases.

2. Does this mean that Eucalyptus has been granted a license to use the AWS APIs verbatim (e.g. a copyright license)?  And if so, does that mean that other cloud stacks would not be granted a license to legally use the AWS APIs?

Yes, to your first question.  Each situation is different and we’ll evaluate each one on its own merits.

3. Are Amazon and Eucalyptus collaborating on APIs going forward, or will Amazon release APIs and let Euca use them? Also, will Eucalyptus have advance visibility into API development so they can release simultaneously?

We’re not disclosing the terms of our agreement.

4. Is Amazon helping Eucalyptus develop new features to be more compatible across the AWS portfolio?  Examples might include RDS, SimpleDB, EMR, Beanstalk, etc.  Without support for the PaaS-layer components, Eucalyptus is only partly compatible, and migration between internal and external clouds would be restricted.

No.  This relationship is about making workloads simple to move between on premise infrastructure and AWS.

5.  Does “As part of this agreement, AWS will support Eucalyptus as they continue to extend compatibility with AWS APIs and customer use cases” imply that Amazon’s Premier Support offerings will be extended to Eucalyptus so a customer can get support for both from Amazon?  Or is this more about the AWS team supporting the Eucalyptus team in their quest to maintain API parity?

AWS will be working with Eucalyptus to assure compatibility, but will not be supporting Eucalyptus customers or installations.  Support will be provided directly by Eucalyptus to their customers, just as was the case before this agreement.

6. Will Amazon resell Eucalyptus?

No.

7. Will Eucalyptus resell Amazon?

No.

8. Will Eucalyptus-based private clouds be visible/manageable through the AWS Management Console, or through CloudWatch?

The AWS management console does not support Eucalyptus installations.

9.  Is this exclusive or will Amazon be open to other similar partnerships?

It is not exclusive.

Not exclusive – that means Eucalyptus is not “the anointed one.”  No operational integration (e.g. CloudWatch, etc.) means that “common tools” in the answer to Q1 is RightScale, enStratus etc. Here’s a question I didn’t ask and, based on the answer to Q3 above, I would not expect to be answered — What did Eucalyptus commit to in order to get the license grant (which is the only news here)?

I’m going to go out on a limb here and speculate that the license grant applies to Eucalyptus only when deployed in a private cloud environment. It would be my expectation that Amazon would not want to legitimize any use of their APIs by service providers against whom they would compete. It’s not in Amazon’s best interest to make the AWS API an open standard that would enable public cloud-to-cloud compatibility.  Eucalyptus only targets on-premise private clouds so that would have been an easy give.

Okay, so how much does it matter that your private cloud has the same API as Amazon?  On the margin, I suppose it’s a good thing.  But RightScale and enStratus both do a great job of encapsulating multiple cloud APIs behind their management interfaces.  Unless I’m building my own automation layer internally to manage both AWS and my private cloud, then as long as the feature sets are close enough then the API does not have to be the same.

There’s some info about the Citrix Apache CloudStack project and AWS API, but I have no information that Amazon has granted Citrix or the Apache Foundation a license.  Will update you when I learn more.

All in all, this turned out to be not that interesting.  I like Marten and have no dog in this hunt, but I don’t think that this announcement in any way improves the long-term market for Eucalyptus.  And after the Citrix CloudStack announcement today, I would say that things are looking cloudier than ever for the Eucalyptus team.