Talking Cloud in the Enterprise

Despite the fact that cloud is part of the daily conversation in many enterprises, I still find a significant gap in many places in terms of a true understanding of what it means. This is somewhat compounded by the reliance on standard definitions of cloud computing from NIST and other sources. These definitions are helpful in some respects, but they are far more focused on attributes than on business value – and the business value is what is truly needed in the enterprise to break through the barriers to cloud computing.

First, let’s divide the enterprise IT landscape into three buckets:

  • Infrastructure & Operations: this is the core group in IT responsible for operating the data centers, running the servers, keeping the lights on, ensuring adequate capacity, performing IT governance, etc. We’ll call them I&O.
  • Applications: whether custom application development by a dev team, or a COTS application licensed in, or a SaaS app running externally, the applications are where the core value of IT is created. As a general rule, I&O exists to serve applications – not the other way around (though I think we can all come up with situations in our past where the nature of that relationship has not been so clear).
  • The Business: these are the users and application owners. Developers build apps for “the business” and I&O hosts them. Oftentimes, especially in large enterprises, application development actually sits within the business under a business line CIO. So the app owners also control the application development and are the “customers” of I&O.

When talking about cloud, it’s really critical to have this context in mind. If you are talking to the business, they care about some very specific things related to their applications, and they have requirements that I&O needs to address. If you are talking to I&O, they have a related but very different set of issues and requirements and you need to address them in terms that are relevant to them. Let’s start with the business.

Talking Cloud to the Business

If you are speaking with the application owners within a business, they care about the following (generally unsatisfied) requirements with respect to their infrastructure and IT needs (the ordering is not important – it will be different for different businesses):

  1. Control – The pre-cloud world is one in which the business makes requests of I&O, often through an onerous and labor intensive service request workflow involving forms, approvals, emails, negotiation, rework, and more. The business puts up with this because they have to, or they just go outside and procure what they need from external vendors. As with many innovations, cloud computing first entered through the business and only later got adopted by IT. As a business, I really want to be able to control my IT consumption at a granular level, know that it will get delivered reliably and quickly with no errors, etc. This is the concept of “on-demand self-service.” Let me configure my requirements online, push a button, and get it exactly as I ordered with no fuss.
  2. Transparency – I heard a story once where a company had hired so many IT finance analysts that there were more people accounting for IT than actually producing it. It may be a myth, but I can see how it might actually happen. If I apply the management accounting principles of the shop floor to IT, I start to get into activity-based costing, very complex allocation formulas, etc. But even with that, IT costing is still viewed by the business as more of a black box than transparent. I sat with some IT folks from Massachusetts a couple of years ago and they all groused about how costs were allocated – with the exception of one guy at the table who knew he was getting a good deal. Winners and losers, right? What the business wants today is transparency. Let me know the cost per unit of IT, in advance, give me control (see 1) over how much I consume, and let me know what I've used and the costs incurred along the way. No surprises, no guesswork, no hassle. In the NIST cloud world this is called “measured service” (a simple metering sketch follows this list).
  3. Productivity & Innovation – Pre-cloud I&O processes are often so onerous that they significantly impact developer productivity. If it takes me several meetings, days of analysis, and hours of paperwork to properly size, configure and formulate a request for a VM, that's a huge productivity drain. Further, if I have to wait several days or even weeks before this VM is available to me, that slows me down. At one financial institution I spoke with, the VM request form was 4 packed pages long, required 12 approval steps, and each approval step had an SLA of 3 weeks. Yes, that's a potential of 36 weeks to return a VM and still hit their SLAs to the business. In reality it never took 36 weeks – but it often took 6-10 weeks for a VM. Seriously, why can't I just have a VM now, when I need it? That's what the business wants. Related to productivity, innovation is seriously stifled in most enterprise IT environments. Imagine if I'm on a roll, have this great idea, but need a VM to test it. Then imagine a series of hurdles – sizing, configuration, paperwork, approvals and waiting! Now, it may be a pretty cool idea, but unless it's part of my top-priority task list and was approved months ago, it just isn't going to happen. The business wants support for innovation too. That means it wants speed. This is the concept of “elasticity” in IT. Give me as much as I want/need now, and when I'm done, you can have it back.
  4. Cost – Last but often not least, the business wants a smaller bill from IT – and the benchmark is no longer in their peer group. The benchmark is Amazon, Google, Microsoft, Rackspace and others. The benchmark is the cloud. Why pay $800/month for a virtual machine when Rackspace will rent it to me for $100? Not only does the business want better IT – more control, transparency, productivity, and innovation – but they also want it at a lower cost. Easy right?
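
To make “measured service” concrete, here is a minimal metering sketch in Python. The unit rates and usage figures are hypothetical, made up for illustration, not drawn from any provider's price list.

```python
# Hypothetical unit rates, published to the business in advance.
RATES = {
    "vm_small_hour": 0.08,      # $ per VM-hour
    "storage_gb_month": 0.10,   # $ per GB-month
}

def monthly_bill(usage):
    """Turn metered usage into a transparent, line-item bill."""
    lines = {item: qty * RATES[item] for item, qty in usage.items()}
    return lines, sum(lines.values())

if __name__ == "__main__":
    # One small VM running for a month (720 hours) plus 500 GB of storage.
    usage = {"vm_small_hour": 720, "storage_gb_month": 500}
    lines, total = monthly_bill(usage)
    for item, cost in lines.items():
        print(f"{item}: ${cost:.2f}")
    print(f"Total: ${total:.2f}")
```

No allocation formulas, no surprises: the consumer can predict the bill before consuming, and reconcile it afterward.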

When engaging with the business application owners about their cloud needs (you do this, right?), and they are having a hard time articulating what is important to them and why they want cloud, ask them if they want more control, transparency, productivity & innovation, and lower cost.  If they don’t really want most of this, then perhaps they don’t want or need a cloud.

Talking Cloud to IT Infrastructure & Operations (I&O)

In short, I&O really would like to satisfy the requirements of the business listed above. Remember that I&O’s mission is to serve the business by serving their applications. When talking with the I&O side of the house (make no mistake, there are at least 2 sides here), talk in terms of the requirements of the business. Yup – control, transparency, productivity & innovation, and cost.

How? Be a cloud provider to the business, of course. But what does that mean? So many people I meet still think that a self-service portal in front of a vSphere cluster is all it takes to be a cloud. It's more than this – it's a completely automated, end-to-end operations model for delivering IT services. In order to meet all of the requirements above, including at a reasonable cost, everything that can be automated should be automated. So-called “enterprise clouds” that still require manual steps in the provisioning process cannot achieve the cost advantages of a fully automated environment (unless, of course, the cost of putting in the automation, divided by the number of units produced, is greater than the cost of doing it manually). This is no different than the making of products. Yet even in many heavily automated mass-production businesses such as auto manufacturing, IT is still done in a way where every VM and deployment landscape is an exception crafted by hand. That's a huge waste!

Cloud computing operating models (cloud is not a technology, it's the application of many technologies to change the way IT is operated) grew out of necessity. How could Google, Amazon or other large-scale web businesses possibly handle tens of thousands of servers without either armies of low-cost workers or “extreme” automation? They could not, and neither can you, even if your data center footprint is only in the hundreds of servers. Clearly the automation route is less expensive in the long run, at least for the vast majority of tasks and actions that are performed in a data center every day.

Now enterprise IT gets to apply many of the same techniques used by cloud providers in its own operations. With all of the software out there for building infrastructure (IaaS) and platform (PaaS) clouds, it's never been easier to envision and implement the “IT Factory of the Future” in any sized environment. Take OpenStack, BMC Cloud Lifecycle Management, VMware vCloud or another cloud stack and create your infrastructure factory. Then add Apprenda, Cloud Foundry, or one of dozens of other PaaS frameworks and create your application platform factory. If fully implemented and integrated, the variable labor cost for new units of IT production (a VM, a scaled front end, etc.) will approach zero.
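
As a rough illustration of what end-to-end automation buys you, here is a sketch of programmatic provisioning against a generic IaaS REST API. The endpoint, auth header and payload fields are hypothetical, not the interface of any of the products named above.

```python
import requests

API = "https://cloud.example.com/v1"         # hypothetical IaaS endpoint
HEADERS = {"X-Auth-Token": "REPLACE_ME"}     # hypothetical auth scheme

def provision_vm(name, flavor="small", image="ubuntu-lts"):
    """Request a VM from a standard catalog configuration -- no tickets, no waiting."""
    resp = requests.post(f"{API}/servers",
                         json={"name": name, "flavor": flavor, "image": image},
                         headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["server"]["id"]

def deprovision_vm(server_id):
    """Give the capacity back when the work is done (elasticity)."""
    requests.delete(f"{API}/servers/{server_id}", headers=HEADERS).raise_for_status()
```

When every step from request to running VM is a call like this rather than a workflow of forms and approvals, the marginal labor per unit of output really does approach zero.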

Let’s take this to an extreme. Some in IT have titles like VP or director of infrastructure. They run the I&O function. Let’s give them a new title – IT plant manager if they run one data center. Or VP of IT production if they run all I&O. Even if you don’t go that route, perhaps that’s how these people need to see themselves going forward.

Related Posts

“Putting Clouds in Perspective – Cloud Redefined”

“Don’t Mention the Cloud”

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

 

How Some Journalists Confuse People About Cloud

Simon Wardley and I had a quick exchange about the sloppy, factually inaccurate writing of Wired's Jon Stokes. Simon commented about a November post on Wired Cloudline as follows:

@swardley:  “This Wired post on cloud from Nov ’11 – where it isn’t wrong (repeating unfounded myths), it is tediously obvious – bit.ly/wWLbsL”

I piled on and Simon posted about another post here.

@swardley: “Oh dear, another of the wired author’s articles – http://bit.ly/vHWPZW – is so full of holes, well, no wonder people are confused.”

Stokes replied here.

@jonst0kes:  “@cloudbzz @swardley And I’d like to think that one of you could write a real takedown instead of slinging insults on twitter.”

Challenge accepted.

Let me just start by stating the obvious – when a respected editor like Stokes at a very respected zine like Wired puts up crap, misinformation and rubbish, it just confuses everybody. If very knowledgeable people like Simon Wardley are calling bullshit on someone's weak attempt at journalism, then you can bet that something is not right.

Wired Cloudline Post by Jon Stokes – “An 11th Law of Cloudonomics”

Stokes:  “Don’t build a public cloud–instead, build a private one with extra capacity you can rent out.”

I'm sorry, but if you're renting out your cloud, it's public – so you're building a public cloud and you had better damned well know what you're getting into. Anybody who has a clue about building clouds knows that there are tremendous differences in terms of requirements and use cases – depending on the cloud, the maturity of your ops team, and a whole bunch of other factors. Yes, you can build a cloud that is dual use, but it's rare and very difficult to reconcile the differing needs. I know of only one today – and it's in Asia, not in the U.S.

Stokes: “If you look at the successful public clouds—AWS, AppEngine, Force.com, Rackspace—you’ll notice that they all have one thing in common: all of them were built for internal use, and their owners then opted to further monetize them by renting out excess capacity.”

Garbage!  Amazon’s Bezos and CTO Werner Vogels have repeatedly disputed this.  Here is just one instance that Vogels posted on Quora:

Vogels: “The excess capacity story is a myth. It was never a matter of selling excess capacity, actually within 2 months after launch AWS would have already burned through the excess Amazon.com capacity.”

Rackspace built their public cloud as a public cloud, and never had any internal use case that I can come up with (they’re a hosting company at their core – what would they have a private cloud for internally??). For private clouds, they actually use a very different technology stack based on VMware, whereas their public Cloud Servers is built on Xen. But again, their private clouds are for their customers, not for their own internal use.

Stokes: “It's possible that in the future, OpenStack, Nimbula, and Eucalyptus will create a market for what we might call “AWS clones”—EC2- and S3-compatible cloud services that give you frictionless portability among competing clouds.”

Eucalyptus is the only stack that is remotely an AWS clone – and that’s how it started as a project at UC Santa Barbara. OpenStack is based on Rackspace and NASA Nebula – not AWS clones – and Nimbula is something built by former AWS engineers but is also not a clone. There are some features that are common to enable federation, but that’s hardly being a clone (we call it interoperability). And none of them give you frictionless portability between each other.

Stokes: “In that future, we could see a company succeed by building a public cloud solely for the purpose of making it an AWS clone.”

Huh? That’s about the least likely scenario for success I could dream of. If all I do is build an AWS clone to compete against Amazon with its scale, resources and brand, then I’m the biggest moron on the planet. That would be total #FAIL.

Stokes: “…attempts to roll out new public clouds and attract customers to them will fail because it’s too expensive to build a datacenter and then hang out a shingle hoping for drop-in business to pick up.”

Generally I agree with this, but not for the reasons Stokes gives. Most cloud providers don’t need to build a data center. You can get what you need from large DC providers (space, power, HVAC, connectivity) and build your cloud. But you need to have a reason for customers to consider your cloud, and the idea of “build it and they will come” is a truly lame strategy. I don’t know a single cloud provider today that is operating on that model.

Stokes: “And most cloud customers are drop-in customers at the end of the day.”

Most startups might “drop in” on a cloud. But most enterprises certainly are more mature than that. You don't drop in on IBM's cloud (which is pretty successful), or Terremark's, or Savvis's. Gartner MQ upstart BlueLock (a) is not even remotely an AWS clone, (b) is having really great success, and (c) does not want or allow “drop-in customers” at all (you need to call them and talk to a sales rep).

Going forward I expect better from Stokes and the folks at Wired.

 


PaaSing Comments – Data and PaaS

I’ve been looking at the PaaS space for some time now.  I spent some time with the good folks at CloudBees (naturally), and have had many conversations on CloudFoundry, Azure, and more with vendors, customers and other cloudy folks.

Krishnan posted a very good article over on CloudAve, and at one level I fully agree that PaaS will become more of a data-centric (vs. code-centric) animal over the next few years. To some degree that's generally true of all areas of IT – data, intelligence and action from data, etc. But there is a lot more to this.

Most PaaS frameworks have very few actual services – other than code logic containers, maybe one messaging framework, and some data services (structured and unstructured persistence and query).  You get some scale out, load balancing, and rudimentary IAM and operations services.  Over time as the enterprise PaaS market really starts to take off, we may find that these solutions are sorely lacking.

In the data and analytics space alone there are many types of services that PaaS frameworks could benefit from: data capture, transformation, persistence (which they have), integration, analytics and intelligence. But this is too one-dimensional. Is it batch or realtime, or high-frequency/low-latency? What is the volume of data, how does it arrive and in what format? What is the use case for the data services? Is the data structured or unstructured? Realtime optimization of an individual user's e-commerce experience, or month-end financial reporting and trend analysis?

Many enterprises have multiple needs and different technologies to service them.  Many applications have the same – multiple data and analytical topologies and requirements.  Today’s complex applications are really compositions of multiple workload models, each with its own set of needs.  You can’t build a trading system with just one type of workload model assumption.  You need multiple.

A truly useful PaaS environment is going to need a “specialty engine” app store model that enables developers to mix and match and assemble these services without needing to break out of the core PaaS programming model. They need to be seamlessly integrated into a core services data model so the interfaces are consumed in a consistent manner and behave predictably.
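
To illustrate the consistent-interface idea, here is a hypothetical sketch of two “specialty engines” exposed behind a single service contract. The class and method names are illustrative only, not any vendor's PaaS API.

```python
from abc import ABC, abstractmethod

class DataService(ABC):
    """One contract for every specialty engine bound to the platform."""

    @abstractmethod
    def write(self, record: dict) -> None: ...

    @abstractmethod
    def query(self, criteria: dict) -> list: ...

class RealtimeStream(DataService):
    """Low-latency engine, e.g. for per-user optimization."""
    def __init__(self):
        self.buffer = []
    def write(self, record):
        self.buffer.append(record)   # stand-in for a streaming ingest call
    def query(self, criteria):
        return [r for r in self.buffer
                if all(r.get(k) == v for k, v in criteria.items())]

class BatchWarehouse(DataService):
    """Bulk engine, e.g. for month-end reporting and trend analysis."""
    def __init__(self):
        self.rows = []
    def write(self, record):
        self.rows.append(record)     # stand-in for a bulk load
    def query(self, criteria):
        return [r for r in self.rows
                if all(r.get(k) == v for k, v in criteria.items())]

# An application composes whichever engines its workload models require,
# but consumes them all the same way.
services = {"clickstream": RealtimeStream(), "finance": BatchWarehouse()}
services["clickstream"].write({"user": "u1", "event": "view"})
print(services["clickstream"].query({"user": "u1"}))
```

The point is the shape of the contract, not the toy implementations: a developer mixes a realtime stream and a batch warehouse without ever leaving the platform's core programming model.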

Data-centricity is one of the anchor points. But so is integration. And messaging. And security, in all its richness and diversity of need.

This gets back to the question of scale.  Salesforce has the lead, but they also have a very limiting computational model which will keep them out of the more challenging applications.  Microsoft is making strides with Azure, and Amazon continues to add components but in a not-very-integrated way.  But will a lot of other companies be able to compete?  Will enterprises be able to build and operate such complex solutions (they already do, but…)?

This is a great opportunity and challenge, and I have great expectations that we will be seeing some exciting innovations in the PaaS market this year.

Cloudy Implications and Recommendations in Megaupload Seizure

The FBI seized popular upload site Megaupload.com yesterday.  They took the site down and now own the servers.

I am not an attorney, and I have no opinion on whether or not the MegaUpload guys were breaking laws or encouraging users to violate copyrights through illegal uploading and streaming of movies, recordings, etc.  Right or wrong, the FBI did it and now we need to deal with the fallout.

The challenge is that there were very likely many users who were not breaking any laws.  People backing up their music, photos, websites, documents and who knows what else.  I highly doubt any large corporations would want to use such a site, but I bet a lot of small businesses did.  My focus here is on the ramifications to the enterprise, and how to protect yourself from being impacted by this.

What if the offending site was using Amazon, Google or Microsoft to store their bad content?  I’m sure that the Feds would have had no problems getting the sites shut down through these companies without needing to resort to taking them offline.  But legally could they have gone in and seized the AWS data centers?  Or some of the servers?  Maybe legally, but perhaps not easily for both technical and legal reasons (Amazon has lots of money for lawyers…).

What if the cloud provider was someone smaller, without the financial ability to challenge the FBI?  I mean, those guys usually don’t call ahead — they just bust in the door and start taking stuff.  The point is that IT needs to take some steps that protect themselves from getting caught up in an aggressive enforcement action, legitimate or not.

Recommendations to IT

  1. Stick with larger, more legitimate vendors that have the ability to square up with the Feds when necessary – not that this will stop them, but it could slow them down enough to let you get your data.
  2. Encrypt your data using your own keys so that even if your servers get taken, your data is secured (of course, that's just a good idea in general – a minimal sketch follows this list).
  3. Back up your data to another cloud or your own data center. Having all of your eggs in one basket is just stupid (and that goes for consumers too, who are likely to trust a single backup provider like Carbonite – which stated in its S-1 offering docs that it expected to lose data and that the consumer's PC was assumed to be the primary copy!).
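
Here is a minimal sketch of recommendation 2, assuming Python's cryptography package and its Fernet recipe: if the keys stay with you, a seized or copied disk is just ciphertext.

```python
from cryptography.fernet import Fernet

# Generate once; store the key in your own key management system,
# never with the cloud provider that holds the data.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"quarterly-financials contents"
ciphertext = f.encrypt(plaintext)   # this is what gets uploaded to the cloud

# Only someone holding the key can recover the data.
assert f.decrypt(ciphertext) == plaintext
```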

Feds, Please Consider Doing it Differently

Perhaps we need some legislation to protect the innocent legitimate users from the enforcement fallout caused by people who are clearly breaking laws.  I don’t understand why, for example, the FBI could not have copied off all of the files, logs, databases etc. but left the site running.  Even watching the traffic that occurred after the announcement could have given the FBI some interesting insights into some of the illegal usage.

Bottom Line – protect yourself because this is a story that could be coming to your preferred cloud someday.

Cloud Stack Red Ocean Update – More Froth, but More Clarity Too

The cloud stack market continues to go through waves and gyrations, but the future is becoming clearer. As I have been writing about for a while, the number of competitors in the market for “cloud stacks” is totally unsustainable. There are really only four “camps” in the cloud stack business that matter now.

The graphic below shows only some of the more than 40 cloud stacks I know about (and there are many I surely am not aware of).

VMware is really on its own. Not only do they ship the hypervisor used by the vast majority of enterprises, but with vCloud Director and all of their tools, they are really encroaching on the traditional data center/systems management tools vendors. They have great technology, a huge lead in many ways, and will be a force to be reckoned with for many years. Many customers I talk with, however, are very uncomfortable with the lack of openness in the VMware stack, the lack of support for non-virtualized environments (or any other hypervisor), and a very rational fear of being monopolized by this machine.

Data center tools from the big systems management vendors have all been extended with cloud capabilities for use in both private and public clouds. Late to the party, these vendors are investing heavily and have shown fairly significant innovation in recent releases. Given that the future of the data center is a cloud, this market is both a huge opportunity and an existential threat. Deep hooks into the data center with service desks, service catalogs, automation and orchestration capabilities provide near-term protection. There are just too many trained resources with too much invested for most IT organizations to just walk away.

Unlike the VMware approach, all of these vendors support a more heterogeneous environment – especially CA and BMC.  Most support some combination of Xen, KVM and Hyper-V in addition to VMware hypervisors.  They are also moving up-stack, supporting integration with public clouds such as Amazon and others, application-level functionality, and more.

OpenStack is the new 800-lb gorilla.  In less than 18 months OpenStack has emerged as the most vibrant, innovative and fast-moving segment of this market.  Evidence of progress includes contributed code from over 1,000 developers, more than 128 companies in the community, a growing list of commercial distributions from  incredibly smart teams, and a maturing technology base that is starting to gain traction in the enterprise. It’s still very early days for OpenStack, but it very much feels like the counterweight to VMware’s controlling influence.

The froth in this market is coming from an increasing number of very cool (and occasionally well-funded) OpenStack commercialization efforts. As with most markets, there will be winners and losers, and some of these efforts will not make it. This market is so new that whatever shakeout may occur, it won't happen for a few years.

Other solutions are going to find the going tougher and tougher.  Some may be doing well and growing today, but ultimately the market will shake out as it always does and many of these current solutions will either find new use-cases and missions, or they will be shuttered. I have unconfirmed reports of at least two of the currently available stacks on my list being withdrawn from the market for lack of sales.  Is this the start of a “great cloud stack shakeout?”

Where are we heading?

The majority of the market in 3 years will have coalesced into three big buckets, and it’s starting to happen now.  vCloud, OpenStack and the big data center vendors will rule the roost at the core stack level going forward.  The graphic below is not intended to show the size of these markets.

The guys in the “other” category reading this post are probably not ready to hear the bad news, but this is what I believe to be the ultimate state. There will be niche survivors, some who will migrate to the OpenStack island (rumors abound), and others who may pivot to new markets or solution designs.  Some are just focusing on Asia, especially China, since it’s more of a wild west scenario and just showing up is guaranteed to generate deals.  However, many of them will have gone out of business by 2015 or be barely scraping by. Such is the nature of new markets.

One key distinction with the “big four” data center/systems management tools vendors is that they are not going to be the same kind of open and vibrant ecosystems as OpenStack or vCloud.  With their huge sales organizations and account presence, they don’t necessarily need the leverage that an ecosystem provides. Some in the #clouderati community might conclude that they are toast.  I’ve heard several say that there will be only two choices in the coming years, but I disagree and do think that the DC tools guys get it now and have a lot of money to invest.

I have this opinion because I spend most of my days working with large enterprises and governments that have millions invested in these vendors, and I expect a fair bit of enterprise cloud infrastructure – especially for more mission-critical applications – to be a real long-term opportunity for the big guys. vCloud and OpenStack will certainly hurt them in their core markets, however, and there will be lots of pivots and new initiatives from these mega vendors to ensure their relevancy for a long time to come.

Bottom line?

The market is starting to form up, and it looks like there will be three big segments going forward (and a small market of “other”). If you're not solidly in one of them, you'll be doing something else in a few years. There just won't be enough revenue to support 40+ profitable and viable vendors. How many will survive? That's a tough question, but here's my prediction for the market breakdown in 2018.

VMware:  1

OpenStack commercial distributions:  4 viable, 1 or 2 that are clear leaders

DC Tools:  4 main and a couple smaller guys

Other: at most 3, mainly in niche markets

Total:  12 viable cloud stack businesses in 2018

What do you think?

 

 


Don’t Mention the Cloud

"I mentioned it once, but I think I got away with it alright."

The “cloud” term has started to turn like the leaves on the trees outside my window. It's yellowing, drying out and about to fall to earth to be raked up and composted into fertilizer if something isn't done to stop it.

Where once it was the magic phrase that opened any door, the term “cloud” is now considered persona non grata in many meetings with customers. When everything’s a cloud – and today “cloud washing” is an epidemic on an unprecedented scale – the term loses meaning.

When everything’s a cloud, nothing is.

In fact, not only does “cloud” mean less today than a year ago, what it does mean is not good. For many customers, “cloud” is just a pig with cloud lipstick. And whose fault is this? It's ours – all of ours in the IT industry. We've messed it up – potentially killing the golden goose.

A Vblock is not a cloud (not that a Vblock is a pig). It's just a big block of “converged infrastructure.” Whatever its merits, it ain't a cloud. You can build a cloud on top of a Vblock, which is great, but without the cloud management environment from CA, BMC, VMware (vCloud) or others, it's just hardware.

A big EMC storage array is not a cloud either, but that doesn't stop EMC from papering airports around the globe with “Journey to the Private Cloud” banners. Nothing against EMC. And VMware too often still conflates your progress toward cloud with the percentage of your servers that are virtualized. Virtualization is not cloud. Virtualization is not even a requirement for cloud – you can cloud without a VM.

A managed hosting service is not a cloud.

Google AdWords is not cloud “Business Process as a Service” as Gartner would have you believe. It's advertising! Nor is ADP Payroll a cloud (sorry again, Gartner), even if it's hosted by ADP. It's payroll. By their logic, Gartner might start to include McDonald's in their cloud definition (FaaS – Fat as a Service?). I can order books at Amazon and they get mailed to my house. Is that “Book Buying as a Service” too? Ridiculous!

And then there’s Microsoft’s “To the Cloud” campaign with a photo app that I don’t believe even exists.

It’s no wonder, then, that customers are sick and tired and can’t take it (cloud) anymore.  Which is why it’s not surprising when many customer “cloud” initiatives are actually called something else.  They call it dynamic service provisioning, or self service IT, or an automated service delivery model.  Just don’t use the “cloud” term to describe it or you might find yourself out in the street quicker than you can say “resource pooling.”

There’s also that pesky issue about “what is a cloud, anyway?” that I wrote about recently. For users, it’s a set of benefits like control, transparency, and productivity.  For providers, it’s Factory IT – more output at higher quality and lower cost.

When talking about “cloud computing” to business users and IT leaders, perhaps it’s time to stop using the word cloud and start using a less ambiguous term. Perhaps “factory IT” or “ITaaS” or some other term to describe “IT capabilities delivered as a service.”

No matter what, when speaking to customers be careful about using the “cloud” term.  Be precise and make sure you and your audience both know what you mean.

The Red Ocean of Cloud Infrastructure Stacks (updated)

Update: I am still revising this… Reposting now – but send me your comments via @CloudBzz on Twitter if you have them.

It seems like every day there’s a new company touting their infrastructure stack.   I’m sure I’m missing some, but I show more than 30 solutions for building clouds below, and I am sure that more are on their way.  The market certainly can’t support so many participants!  Not for very long anyway.  This is the definition of a “red ocean” situation — lots of noise, and lots of blood in the water.

This is the list of the stacks that I am aware of:

I. Dedicated Commercial Cloud Stacks

II.  Open Source Cloud Stacks

III.  IT Automation Tools with Cloud Functionality

IV.  Private Cloud Appliances

I hope you'll pardon my skepticism, but I can't possibly understand how most of these will survive. Sure, some will because they are big, and others because they represent great leaps forward in technology (though I see only a bit of that now). There are three primary markets for stacks: enterprise private clouds, provider public clouds, and public sector clouds. In five years there will probably be at most 5 or 6 companies that matter in the cloud IaaS stack space, and the rest will have gone away or taken different routes to survive and (hopefully) thrive.

If you’re one of the new stack providers – think long and hard about this situation before you make your splash.  Sometimes the best strategy is to pick another fight.  If you swim in this red ocean, you might end up as shark bait.

Putting Clouds in Perspective – Cloud Redefined


You’d think as we head into the waning months of 2011 that there’d be little left to discuss regarding the definition of cloud IT.  Well, not quite yet.

Having spent a lot of time with clients working on their cloud strategies and planning, I’ve come to learn that the definition of cloud IT is fundamentally different depending on your perspective.  Note that I am using “cloud IT” and not “cloud computing” to make it clear I’m talking only about IT services and not consumer Internet services.

Users of cloud IT – those requesting and getting access to cloud resources – define clouds by the benefits they derive.  All those NIST-y terms like resource pooling, rapid elasticity, measured service, etc. can sound like gibberish to users.  Self-service is just a feature – but users need to understand the benefits.  For a user – cloud IT is about control, flexibility, improved productivity, (potentially) lower costs, and greater transparency. There are other benefits, perhaps – but these are commonly what I hear.

For providers – whether internal IT groups or commercial service providers – cloud IT means something entirely different.  First and foremost, it’s about providing services that align with the benefits valued by users described above.  Beyond that, cloud IT is about achieving the benefits of mass production and automation, a “factory IT” model that fundamentally and forever changes the way we deliver IT services.  In fact, factory IT (McKinsey blog) is a far better term to describe what we call cloud today when you’re talking to service providers.

Factory IT standardizes on a reasonable number of standard configurations (a service catalog), automates repetitive processes (DevOps), and then manages and monitors ongoing operations more tightly (management). Unlike typical IT, with its heavily manual processes and hand-crafted custom output, factory IT generates economies of scale that produce more services in a given time period, at a far lower marginal cost per unit of output.
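
A minimal sketch of the standardization point: requests that match a catalog entry are fulfilled automatically, and anything else is rejected rather than hand-crafted. The catalog entries below are hypothetical.

```python
# Hypothetical standard configurations -- the whole point is that there are few of them.
CATALOG = {
    "web-small":   {"vcpus": 2, "ram_gb": 4,  "disk_gb": 40},
    "web-large":   {"vcpus": 8, "ram_gb": 16, "disk_gb": 80},
    "db-standard": {"vcpus": 4, "ram_gb": 32, "disk_gb": 500},
}

def fulfill(request_name):
    """Serve only what the factory mass-produces; no hand-crafted exceptions."""
    spec = CATALOG.get(request_name)
    if spec is None:
        raise ValueError(f"{request_name!r} is not in the catalog")
    # In a real factory this would hand off to the automation layer;
    # here we just return the standard spec.
    return spec

print(fulfill("web-small"))
```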

Delivering these economies end-to-end is where self-service comes in.  Like a vending machine, you put your money (or budget) in, make a selection, and out pops your IT service.  Without factory IT, self service – and the control, transparency, productivity and other benefits end users value – would not be possible.

Next time someone asks you to define cloud, make sure you understand which side of the cloud they are standing on before you answer.


CloudFloor Drives the Cloud To Achieve Business Results

cloudfloor logo

CloudFloor (Waltham, MA) is getting close to starting the beta program for CloudControl, their system to tie cloud usage to measurable business metrics. I had an interesting call with co-founder and CTO Imad Mouline last week to learn more about this innovative system. There are a couple of ways to approach the concept of CloudFloor. The most obvious one deals with controlling costs by shutting down instances when they are no longer needed, but it's also the least interesting approach – and there are already companies, such as Cloud Cruiser, addressing the cost management and cloud chargeback business.

The CloudFloor guys started seeing big uptake in cloud usage a while ago and were able to glean some pretty interesting insights from their performance data.  Insights such as the “noisy neighbor” problem in a multi-tenant environment (it’s real), seeing users deploy lots of VMs but not shut them down when no longer needed, etc.  They saw a lot of large enterprises overspending on cloud but also getting application performance blocked by simple and easily remedied mistakes.  CloudFloor was formed to address these issues and beyond.

What struck me as most interesting was the sophistication of how they tie non-cost business metrics into the equation.  Think about any business and the key metrics that drive their success.  As Imad pointed out, companies can track many metrics today but very few are core and critical to their business.  For example, at an auction site like eBay they know that the two most important metrics are number of listings and number of bids at any given point in time.

If you’re in a primarily online business, metrics are heavily influenced by the amount of infrastructure you have deployed at any given time.  Too much and you’re losing money.  Too little and you’re losing money… Like Goldilocks and the Three Bears, the trick is to get it “just right.”

One of my previous startups was in the digital imaging space.  The number of images uploaded at any given point directly correlated with print and gift orders. Having sufficient infrastructure to handle the upload loads at any given time was critical.  Having too much was wasteful – and since we started this pre-cloud we were over-provisioned a majority of the time.  However, at the very biggest peak times we sometimes were under-provisioned.  This caused uploads to slow or fail which in turn resulted in lost revenues.

Had I had a reason to do so (i.e., had I been using cloud), it would have been pretty easy for me to create a formula that calculated the marginal cost of additional infrastructure vs. the marginal gross profit enabled by provisioning additional instances. Given that formula, I could then maximize my profit by having a system that intelligently managed the balance to a point where – in theory – an extra $1.00 spent on cloud would result in at least an extra $1.00 in gross profit (all other costs being equal). Beyond that, I'd see diminishing returns. Of course, it would never be exactly that precise, but it could be close.
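
Here is a rough sketch of that formula as a provisioning rule: keep adding instances while the marginal gross profit from the next one at least covers its marginal cost. All of the numbers are made up for illustration.

```python
def instances_to_run(expected_uploads, uploads_per_instance_hour,
                     gross_profit_per_upload, instance_cost_per_hour):
    """Add capacity while the next instance still pays for itself."""
    n = 0
    while True:
        served_now = min(expected_uploads, n * uploads_per_instance_hour)
        served_next = min(expected_uploads, (n + 1) * uploads_per_instance_hour)
        marginal_profit = (served_next - served_now) * gross_profit_per_upload
        if marginal_profit < instance_cost_per_hour:  # diminishing returns -- stop here
            return n
        n += 1

# Illustrative numbers only: 10,000 uploads expected this hour, each instance
# handles 1,500, each upload is worth $0.02 in gross profit, and an instance
# costs $0.50/hour.
print(instances_to_run(10_000, 1_500, 0.02, 0.50))
```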

Of course, you can also have metrics that may not so easily tie to microeconomics. If you've promised a certain SLA level for transactions (e.g., cart page load in 1.5 seconds, purchase-to-confirmation in 4 seconds, etc.), CloudControl can optimize the amount of cloud infrastructure you have deployed to meet the SLAs. This is what BSM – Business Service Management – is all about.

They also can do things like manage geographic load balancing, traffic shaping and more.  There is a pretty sophisticated vision at play here.

So, how does it work?

Their core “Principles Engine” (“PE”) accepts data from a number of different feeds – could be Google Analytics, data generated from the application, or other information.  PE then turns that data into visibility and insights.  If that’s all you need, you’re golden — CloudControl is free for the visibility bits (you pay if you want their automation to control cloud resources).  See the graphic below.


Then you provide your goals and principles for CloudControl to manage.  CloudControl then manages global traffic, cloud instances and more (can call out to any service).  All of this goes towards hitting the business metrics established in the Principles Engine.

One of the things they realized early on is that a holistic approach to cloud BSM would have to go broader than the capabilities of individual clouds. Geographic load balancing, failover and other Internet-level traffic-shaping techniques are absolutely critical to hitting the metrics in many cases. This might also include managing across different vendors' clouds and even internal clouds (which requires very complicated DNS management).

What they needed, then, was a platform on which to manage these capabilities, so they went out and acquired a small but growing DNS provider (Microtech Ltd from the UK) and are now in the DNS management business too. DNS is important to performance, security and availability – which is why CloudFlare is able to do what it does (protect and speed up web sites). They still sell the DNS services standalone, but the strategic rationale for the acquisition was the breadth of the vision for business service management. This was a really smart play and will set them apart from many potential competitors.

CloudFloor has taken a very sophisticated approach to tie cloud usage, costs and capabilities to the business metrics you care about most.  They are going to beta soon and it should be very interesting to see where they take the platform.


Dell (and HP) Join OpenStack Parade to the Enterprise…


Update:  HP also announced support for OpenStack on its corporate blog.  And the beat goes on…

 

The OpenStack Parade is getting bigger and bigger. As predicted, enterprise vendors are starting to announce efforts to make OpenStack “Enterprise Ready.”  Today Dell announced their support for OpenStack through their launch of the “Dell OpenStack Cloud Solution.”  This is a bundle of hardware, OpenStack, a Dell-created OpenStack installer (“Crowbar”), and services from Dell and Rackspace Cloud Builders.

Dell joins Citrix as a “big” vendor supporting OpenStack for its customers. Startups such as Piston are also targeting the OpenStack space, with a focus on the enterprise.

Just one year old, the OpenStack movement is a real long-term competitor to VMware's hegemony in the cloud. I fully expect to see IBM, HP and other vendors jumping into the OpenStack Parade in the not-too-distant future.