It’s All About SDN

By Ben Grubin

HP’s announcement last week at Interop that they are shipping their SDN SDK and SDN App Store is merely one of the first salvos of a war that will likely heat up over the next 24 months. What was once the purview of marketing and start-ups, Software Defined Networking has now become the dominant strategy of HP, VMware and others to truly disrupt the current state of data center network architecture.

HP SDN Ecosystem

VMware, as announced a few months ago, is going further with its NSX-based SDDC (Software Defined Data Center) concept, which essentially treats the entire underlying network infrastructure as dumb pipes.

In this new world of SDN and SDDC, the never-ending list of features that Cisco, HP, Dell, Huawei, and others have used as the lynchpin of their competitive strategy in the Ethernet switching and routing markets is nearly irrelevant. Instead, what these new technologies demand is simplicity and speed, something incompatible with layering on hundreds of unnecessary features into the software that drives Ethernet switches.

In fact, layering is the underlying story here. While most network architects have tried to avoid networking overlays due to complexity and losing visibility into layer 2 and 3 architecture, SDN and SDDC are truly a network overlay that abstracts away the entirety of the underlying physical network.

Implementing SDDC means only two basic requirements for the underlying network: it should have as few hops as possible between any two points, and as much “symmetric” capacity as possible – meaning the capacity should be equally large between any two points on the network. Only with this design do you enable the broadest possible freedom at the overlay SDDC layer.
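
These two requirements describe the leaf-spine (Clos) fabrics now common in large data centers: every leaf switch uplinks to every spine, so any cross-rack path is the same length. A minimal sketch of why that yields uniform hops and symmetric capacity (the fabric sizes below are illustrative, not from this post):

```python
from collections import deque

def build_leaf_spine(leaves, spines):
    """Every leaf switch links to every spine switch (a folded Clos)."""
    graph = {f"leaf{i}": set() for i in range(leaves)}
    graph.update({f"spine{j}": set() for j in range(spines)})
    for i in range(leaves):
        for j in range(spines):
            graph[f"leaf{i}"].add(f"spine{j}")
            graph[f"spine{j}"].add(f"leaf{i}")
    return graph

def hops(graph, src, dst):
    """Shortest-path length between two switches, by BFS."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append((nxt, dist + 1))

fabric = build_leaf_spine(leaves=8, spines=4)
distances = {hops(fabric, f"leaf{a}", f"leaf{b}")
             for a in range(8) for b in range(8) if a != b}
# distances == {2}: every leaf pair is exactly two hops apart, reachable
# over 4 equal-cost spine paths -- the "few hops, symmetric capacity" design
```

Because every leaf pair is equidistant and shares the same number of equal-cost spine paths, the overlay can place workloads anywhere without caring about the physical topology – exactly the freedom the SDDC layer wants.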

What don’t you need? VLANs, layer 3 routing protocols (OSPF, IGRP), and other such mainstays of the data center. All of this is handled inside the software layer and, with VMware NSX, inside the virtual infrastructure.

All in all, this is an exciting movement towards simplifying the network layer and making it more agile and responsive to the needs of business. Having per-VM virtualized network components such as load balancers, firewalls, and switches means less specialized equipment and less capital outlay in the racks.

Is all of this going to be in production tomorrow? No way. There are still some key hardware and software challenges to solve before the performance equation is equalized. However, if history is our guide, it won’t take long for those to be conquered.

Oracle of the Cloud – Seek and Ye Shall Find

Oracle & Cloud. Oil & Water. Never the twain shall mix. Or so it’s been until now.

Excluding SaaS offerings that were mostly acquired, Oracle has been largely absent from the cloud these past 7 years. However, one thing you can always count on from Larry & Co is an uncanny ability to adapt, embrace and compete like hell when it matters. Coming back from an 8-1 deficit to win eight straight America’s Cup races shows you just how much Ellison likes to win.

After years of ignoring or aggressively denying the importance of cloud computing, Oracle has finally demonstrated their credible progress with no less than 10 new offerings announced at Oracle Open World this week. There is still a fair amount of cloudwashing going on, but for the first time it is no longer fair to deride Oracle as cloud hype without substance. It was fun while it lasted though.

Oracle is embracing the public cloud with database, middleware, compute and storage offerings. Their compute solution, powered by the acquisition of Nimbula and Chris Pinkham, looks pretty reasonable at first glance. And storage built on OpenStack Swift is also pretty leading edge. Multiple DBaaS offerings and a cloud-extended database backup appliance will probably be well-received by Oracle’s customer base.

In the private cloud, Oracle is starting to make some progress as well. I wouldn’t use them to build private IaaS clouds at this point, but they are selling an IaaS-in-box “engineered system” that might get some users. What’s more interesting is their database consolidation play which is being offered to major enterprises through an Exadata DBaaS offering that can be run in customer data centers. A very solid customer case from UBS shows that this is real.

Another interesting area is in the middle tier with the availability of Dynamic Clusters in WebLogic 12c. Like a good PaaS environment (which this is not), the ability to seamlessly (and with preset constraints) perform horizontal scaling of workloads is pretty interesting. Application changes might be required, and I don’t believe that multi-geo scaling would work with their model without significant code changes, but it’s a good start at enterprise PaaS functionality.

I came to the Oracle [Open World] seeking truth and wisdom on the cloud but expecting very little. To Oracle’s credit, they have exceeded my expectations. If you are an Oracle client or partner, it’s time to take a look at their cloud story to see how it might fit with your plans. I’d still be wary of some of their claims and don’t believe that they will be able to meet all of your needs, but at least they are in the game and competing. And we all know what happens when Ellison chooses to compete.

Getting Ready for the Cloud

by Ben Grubin

Whether you have a handful of applications or thousands of them, if some are not already running in the cloud, the idea has likely been discussed. Most people agree there are large numbers of applications that should be relatively easy to migrate to cloud infrastructure, yet most still haven’t made the jump. Why?

A few years ago, I remember writing about the immaturity of public cloud services. My thinking then was that building a private cloud and migrating your applications to it internally would build the institutional knowledge (capabilities, policies, experience, etc.) necessary for migrating and operating applications in a public cloud, while radically simplifying storage and network issues. These days most companies still haven’t made it that far, even though the maturity of the public cloud has grown by leaps and bounds. In fact, public cloud maturity has come so far that the question is no longer whether to migrate applications to the public cloud, but how many and to which cloud.

In hindsight, it’s pretty easy to see that leaping into cloud (private OR public) a few years ago was a pretty risky and expensive proposition. Most enterprises made the right choice when they elected to sit tight, leverage virtualization to reduce wasted hardware and consolidate data centers (or at least reduce the growth of hardware), and keep a weather eye on this “cloudy” stuff. But now, with a maturing IaaS cloud market, is it time to jump in?

Sorta.

While public clouds are maturing, the question of which public cloud can be tricky. Yes, Amazon AWS currently has the lion’s share of the market, but the lower left corner of the Gartner Magic Quadrant for IaaS is very crowded, with new entrants daily. Furthermore, some IT behemoths are just piling into this market: see Tuesday’s announcement that Oracle is launching the Oracle Compute Cloud, intended as a competitive platform to AWS.

The answer may be to optimize your application for IaaS portability, rather than for a specific cloud environment. For example, decoupling services from the core application both helps an application become easier to scale horizontally, and frees you to change out underlying technologies in those services (like moving from sending your own email to using Amazon’s Simple Email Service).
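
One way to read “decoupling” here is to hide each service behind a narrow interface so the implementation can be swapped per cloud. A hedged sketch (the class and method names are mine for illustration, not from any SDK, and both providers are stubbed):

```python
from abc import ABC, abstractmethod

class Mailer(ABC):
    """The application codes against this interface only."""
    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> str: ...

class LocalSmtpMailer(Mailer):
    """Today: the in-house SMTP relay (stubbed for illustration)."""
    def send(self, to, subject, body):
        return f"smtp-relay accepted mail for {to}"

class SesMailer(Mailer):
    """After migration: Amazon SES via its API client (stubbed here)."""
    def send(self, to, subject, body):
        return f"ses accepted mail for {to}"

def notify_user(mailer: Mailer, user_email: str) -> str:
    # Application code never names a provider, so changing clouds means
    # swapping one constructor, not touching every call site.
    return mailer.send(user_email, "Welcome", "Thanks for signing up.")

before = notify_user(LocalSmtpMailer(), "a@example.com")
after = notify_user(SesMailer(), "a@example.com")
```

The same pattern applies to queues, object storage, and identity: each service boundary you carve out today is one less thing welded to a specific IaaS provider tomorrow.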

Making your applications ready for the cloud now positions you to take greater advantage of the growing diversity of the public cloud ecosystem. Tackling changes today will make it a lot easier to move your apps when the time is right.

So Much To Read, So Little Time…

Recently I was having a conversation with someone and we got on the topic of books and what we had read recently. We’ve all had this conversation many times, but it struck me that lately many people suffer from the same conundrum – too many books and not enough time. In addition to the stack of unread paperback and hardcover books on my nightstand, I also have a Kindle full of great reading.

I have read many of these books, but many others I have merely started to read before getting distracted and moving onto something else. I also have a stack of magazines (paper and digital), dozens of online publications and blogs, white papers and analyst reports, newspapers, and the daily missives of my friends on Facebook and my extended network on Twitter.

I read for many reasons – to gain knowledge, to get perspective, to escape into another world or time, to keep up with my profession, to laugh, to think, to live. I love to read, but I now wonder if my reading might be more satisfying if it were more purposeful.

What does that mean – more purposeful? What I’m thinking about is picking a small number of issues, topics or themes and focusing my reading in those areas. Exploring the depth of an idea might be interesting.

Today I could be accused of being a mile wide and an inch deep when it comes to my literary interests. In itself that is probably not a horrible thing. My interests are broad and that probably makes it easier for me to engage in superficially interesting conversations with many different people. I say “superficially interesting” because my knowledge and perspective on many of these topics is often exhausted in a matter of minutes. If I meet someone with a deep passion for a particular interest we hold in common, my surface-level exploration won’t allow me to sustain an extended and thoughtful conversation on the matter. That’s not totally true, I guess, because that encounter becomes a great opportunity for me to learn from the other. But I would have less to contribute.

I also wonder if I should read less and write more. It would likely be a positive development if I were to write more, create more, and otherwise get out and engage more fully with the world. It would stand to reason that my writing should follow my reading in terms of focus. As I gain deeper knowledge and understanding of a particular topic, my writing would hopefully reflect that and be more thought-provoking as a result.

Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking.

— Albert Einstein

Now I just need to figure out where my focus should be…

theCUBE Interview at EMC World 2013

Here is a video of me live inside theCUBE with Wikibon’s John Furrier and Stu Miniman, discussing the current state of cloud providers and where the industry is going, from the floor of EMC World 2013 in Las Vegas.

Thank you to ServiceMesh for inviting me to speak.

(c) 2013 John Treadway / CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow me on Twitter @CloudBzz.

Measuring the Business Value of Cloud Computing

My favorite and least favorite question I get is the same – “Can you help me build a business case and ROI for cloud computing?”

Well, yes… and no. The issue is that cloud computing has such a massive impact on how IT is delivered that many of the metrics and KPIs that are typically used at many enterprises don’t capture it.  I mean, how do you capture Agility – really?

In the past I have broken this down into 3 buckets. Yes, some people have more but these are the big three…

Agility

Agility is reducing cycle time from ideation to product (or system delivery). It is incredibly difficult to measure, because every product or project is unique and apples-to-apples comparisons are hard. You can approximate it in terms of Agile methodology task points completed per fixed-timeframe sprint, averaged over time. Most IT shops do not really measure developer productivity in any way at the moment, so it’s pretty hard to get the baseline, let alone track changes. I have done some work on quantifying developer downtime and productivity, but Agility is almost something you have to take on faith. It’s the real win for cloud computing, no matter how else you slice it.
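
The story-points idea above can at least give a crude baseline: average completed points per fixed-length sprint, before and after self-service provisioning arrives. A sketch with invented numbers (any real comparison would need many more sprints and stable teams):

```python
def velocity(points_per_sprint):
    """Average completed story points per fixed-length sprint -- a rough,
    imperfect proxy for delivery agility, but at least a baseline."""
    return sum(points_per_sprint) / len(points_per_sprint)

before = [21, 18, 24, 20]  # sprints before self-service provisioning
after = [29, 31, 27, 33]   # sprints after (hypothetical numbers)

baseline = velocity(before)                # 20.75
improved = velocity(after)                 # 30.0
uplift = (improved - baseline) / baseline  # ~0.446, i.e. ~45% more points/sprint
```

The point is not that the number is precise – it isn’t – but that without even a crude baseline like this, any agility claim for cloud is pure faith.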

Efficiency

In a highly automated cloud environment with resource lifecycle management and open self-service on-demand provisioning, the impetus for long-term hoarding of resources is eliminated. Reclamation of resources, only using what you need today because it’s like water (cheap and readily available), coupled with moving dev/test tasks to public clouds when at capacity (see Agility above) can reduce the dev/test infrastructure footprint radically (50% or more). Further, elimination of manual processes will reduce labor as an input to TCO for IT. In a smaller dev/test lab I know of, with only 600 VMs at any given time, 4 FTE onshore roles were converted to 2 FTE offshore resources.

There’s a very deep book on this topic that came out recently from Joe Weinman called Cloudonomics (www.cloudonomics.com). One of the key points is being able to calculate the economics of a hybrid model, where your base level requirements are met with fixed infrastructure and your variable demand above the base is met with an elastic model. A key quote (paraphrased): “A utility model costs less even though it costs more.”

The book is based on this paper — http://joeweinman.com/Resources/Joe_Weinman_Inevitability_Of_Cloud.pdf

And can be summarized as…

[Figure: cost curves comparing dedicated, utility, and hybrid capacity models]
Source: Joe Weinman in “Cloudonomics”

A hybrid model is the most cost-effective – which is “obvious” on the surface but now rigorously proven (?) by the math.

P = Peak.  T = Time.  U = the utility price premium.

If you add the utility pricing model in Joe Weinman’s work to some of the other levers I listed above, you get a set of interesting metrics. Most IT shops will focus on these to provide the ROI only; they are the ones missing the key point on Agility. However, I do understand the project budgeting dance, and if you can’t show an ROI that the CFO will bless, you might not get the budget unless the CEO is a true believer.
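
Weinman’s argument can be sketched numerically (the prices and demand series below are invented for illustration; see his paper for the formal treatment). With peak P, average A, and utility premium U, pay-per-use beats owning peak capacity whenever U < P/A, and a hybrid that owns the baseline and rents the spikes does better still:

```python
def dedicated_cost(demand, unit_price=1.0):
    """Own capacity sized to peak demand; pay for all of it every period."""
    return max(demand) * unit_price * len(demand)

def utility_cost(demand, unit_price=1.0, premium=2.0):
    """Pay per use at U times the owned unit price (U = premium)."""
    return sum(demand) * unit_price * premium

def hybrid_cost(demand, baseline, unit_price=1.0, premium=2.0):
    """Own the baseline; rent whatever spikes above it at the premium."""
    owned = baseline * unit_price * len(demand)
    rented = sum(max(d - baseline, 0) for d in demand) * unit_price * premium
    return owned + rented

# Spiky demand: peak P = 10, average A = 4, so P/A = 2.5 > U = 2.0 and
# the utility model "costs less even though it costs more" per unit.
demand = [2, 3, 2, 10, 3, 4]
d_cost = dedicated_cost(demand)           # 60.0
u_cost = utility_cost(demand)             # 48.0
h_cost = hybrid_cost(demand, baseline=3)  # 34.0 -- best of the three
```

Flatten the demand curve and the ordering flips, which is exactly why the calculation has to be run against your own workload profile rather than taken on faith.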

Quality

What is the impact of removing human error (though initially inserting systematic error until you work it through)? Many IT shops still provision security manually in their environments, and there are errors. How do you quantify the reputation risk of allowing an improperly secured resource to be used to steal PII data? It’s millions or worse. You can quantify the labor savings (Efficiency above), but you can also show the reduction in operational risk in IT through improved audit performance and easier regulatory compliance certification. Again, this is all through automation.

IT needs to get on the bandwagon and understand the fundamental laws of nature here — for 50-80% of your work even in a regulated environment, a hybrid utility model is both acceptable (risk/regulation) and desirable (agility, economics, and quality).

Do a Study?

The only way to break all of this down financially is to do a Value Engineering study and use this to do the business case. You need to start with a process review from the outside (developer) in (IT) and the inside (IT) out (production systems). Show the elimination of all of the manual steps.  Show the reduced resource footprint and related capex by eliminating hoarding behavior. Show reduced risk and lower costs by fully automating the provisioning of security in your environment. Show the “cloudonomics” of a hybrid model to offset peak demand and cyclicality or to eliminate or defer the expense of a new data center (that last VM with a marginal cost of $100 million anybody?).

History Lesson

In 1987 the stock market crashed and many trading floors could not trade because they lacked real-time position keeping systems. Traders went out and bought Sun workstations, installed Sybase databases, and built their own.  They didn’t wait for IT to solve the problem – they did it themselves.  That’s what happens with all new technology innovation.

The same thing happened with Salesforce.com. Sales teams just started using it and IT came in afterwards to integrate and customize it. It was obviously a good solution because people were risking IT’s displeasure by using it anyway.

If you really want to know if cloud computing really has any business value, take a look at your corporate credit card expenses and find out who in your organization is already using public clouds – with or without your permission. It’s time to stop calculating possible business value and start realizing actual business value from the cloud.

(c) 2012 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

IaaS Cloud Litmus Test – The 5 Minute VM

I will make this simple.  There is only one question you need to ask yourself or your IT department to determine if what you have is really an Infrastructure-as-a-Service cloud.

Can I get a VM in 5-10 minutes?

Perhaps a little bit more detailed?

Can a properly credentialed user, with a legitimate need for cloud resources, log into your cloud portal or use your cloud API, request a set of cloud resources (compute, network, storage), and have them provisioned for them automatically in a matter of a few minutes (typically less than 10 and often less than 5)?

If you can answer yes, congratulations – it’s very likely a cloud.  If you cannot answer yes it is NOT cloud IaaS. There is no wriggle room here.

Cloud is an operating model supported by technology.  And that operating model has as its core defining characteristic the ability to request and receive resources in real-time, on-demand. All of the other NIST characteristics are great, but no amount of metering (measured service), resource pooling, elasticity, or broad network access (aka Internet) can overcome a 3-week (or worse) provisioning cycle for a set of VMs.

Tie this to your business drivers for cloud.

  • Agility? Only if you get your VMs when you need them.  Like NOW!
  • Cost? If you have lots of manual approvals and provisioning, you have not taken the cost of labor out. 5 Minute VMs require 100% end-to-end automation with no manual approvals.
  • Quality? Back to manual processes – these are error prone because humans suck at repetitive tasks as compared to machines.
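
The litmus test itself is easy to automate against whatever portal or API you have. A sketch in which `provision_vm` is a stand-in for your cloud’s real provisioning call, not any specific product’s API:

```python
import time

def provision_vm(request):
    """Stand-in for a real cloud API; a genuine IaaS cloud returns a
    usable VM here with zero manual approvals in the loop."""
    time.sleep(0.1)  # simulated end-to-end automated provisioning
    return {"id": "vm-001", "state": "running", **request}

def five_minute_vm_test(provision, budget_seconds=300):
    """Pass only if a usable VM comes back inside the time budget."""
    start = time.monotonic()
    vm = provision({"cpus": 2, "ram_gb": 4, "network": "default"})
    elapsed = time.monotonic() - start
    return vm["state"] == "running" and elapsed <= budget_seconds

is_cloud = five_minute_vm_test(provision_vm)  # True for the stub; is it for yours?
```

Point the harness at your actual request path – ticket queues, approvals and all – and the clock, not the marketing, decides whether you have a cloud.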

Does that thing you call a cloud give you a 5 Minute VM?  If not, stop calling it a cloud and get serious about building the IT Factory of the Future.

“You keep using that word [cloud].  I do not think it means what you think it means.”

– The Princess Cloud

(c) 2012 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

Open Call to VMware – Commercialize Cloud Foundry Software!

After spending time at VMware and Cloud Expo last week, I believe that VMware’s lack of full backing for Cloud Foundry software is holding back the entire PaaS market in the enterprise.

Don’t get me wrong, there’s a lot of momentum in PaaS despite how very immature the market is. But this momentum is in pockets and largely outside of the core of software development in the enterprise. CloudFoundry.com might be moving along, but most enterprises don’t want to run the bulk of their applications in a public cloud. Only through the Cloud Foundry software layer will enterprises really be able to invest. And invest they will.

PaaS-based applications running in the enterprise data center are going to replace (or envelope) traditional app server-based approaches. It is just a matter of time due to productivity and support for cloud models. Cloud Foundry has the opportunity to be one of the winners but it won’t happen if VMware fails to put their weight behind it.

Some nice projects like Stackato from ActiveState are springing up around Cloud Foundry, but the enterprises I deal with every day (big banks, insurance companies, manufacturers) will be far more likely to commit to PaaS if a vendor like VMware gets fully behind the software layer. Providing an open source software support model is fine, and perhaps a good way to start. However, this is going to be a lot more interesting if VMware provides a fully commercialized offering with all of the R&D enhancements, etc.

This market is going to be huge – as big or bigger than the traditional web app server space. It’s just a matter of time. Cloud Foundry is dominating the current discussion about PaaS software but lacks the full support of VMware (commercial support, full productization). This is just holding people back from investing.  VMware reps ought to be including Cloud Foundry in every ELA, every sales discussion, etc. and they need to have some way to get paid a commission if that is to happen. That means they need something to sell.

VMware’s dev teams are still focused on making Cloud Foundry more robust and scalable. Stop! It’s far better to release something that’s “good enough” than to keep perfecting and scaling it.
“The perfect is the enemy of the good.” – Voltaire

It’s time for VMware to get with the program and recognize what they have and how it can be a huge profit engine going forward – but they need to go all in starting now!

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

VMware’s OpenStack Hook-up

VMware has applied to join the OpenStack Foundation, potentially giving the burgeoning open source cloud stack movement a huge dose of credibility in the enterprise. There are risks to the community in VMware’s involvement, of course, but on the balance this could be a pivotal event. There is an alternative explanation, which I will hit at the end, but it’s a pretty exciting development no matter VMware’s true motivations.

VMware has been the leading actor for cloud computing in the enterprise. Most “private clouds” today run vSphere, and many service providers have used their VMware capabilities to woo corporate IT managers. While the mass-market providers like Amazon and Rackspace are built on open source hypervisors (typically Xen though KVM is becoming more important), the enterprise cloud is still an ESXi hypervisor stronghold.

Soapbox Rant: Despite the fact that most of the enterprise identifies VMware as their private cloud supplier, a very large majority of claimed “private clouds” are really nothing more than virtualized infrastructure. Yes, we are still fighting the “virtualization does not equal cloud” fight in the 2nd half of 2012. On the “Journey to the Cloud,” most VMware private clouds are still in Phase I or early Phase II and nowhere near a fully elastic, end-to-end automated environment driven by a flexible service catalog.

VMware’s vCloud program includes a lot of components, old and new, anchored by the vCloud Director (“vCD”) cloud management environment. vCD is a fairly rich cloud management solution, with APIs, and several interesting features and add-ons (such as vCloud Connector).

vCD today competes directly with OpenStack Compute (Nova) and related modules. However, it is not really all that widely used in the enterprise (I have yet to find a production vCD cloud, though I know they exist). Sure, there are plenty of vCD installations out there, but I’m pretty sure adoption has been nowhere near where VMware had hoped (cue the VMware fanboys).

From early days, OpenStack has supported the ESXi hypervisor (while giving Microsoft’s Hyper-V a cold shoulder). It’s a simple calculus – if OpenStack wants to operate in the enterprise, ESXi support is not optional.

With VMware’s overtures to the OpenStack community, if that is what this is, it is possible that the future of vCloud Director could be very tied to the future of OpenStack. OpenStack innovation seems to be rapidly outpacing vCD, which looks very much like a project suffering from bloated development processes and an apparent lack of innovation. At some point it may have become obvious to people well above the vCD team that OpenStack’s momentum and widespread support could no longer be ignored in a protectionist bubble.

If so, VMware should be commended for their courage and openness to support external technology that competes with one of their strategic product investments from the past few years. VMware would be joining the following partial list of OpenStack backers with real solutions in (or coming to) the market:

  • Rackspace
  • Red Hat
  • Canonical
  • Dell
  • Cloudscaling
  • Piston
  • Nebula
  • StackOps

Ramifications

Assuming the future is a converged vCD OpenStack distro (huge assumption), and that VMware is really serious about backing the OpenStack movement, the guys at Rackspace deserve a huge round of applause. Let’s explore some of the potential downstream impacts of this scenario:

  • The future of non-OpenStack cloud stacks is even more in doubt. Vendors currently enjoying some commercial success that are now under serious threat of “nichification” or irrelevancy include: Citrix (CloudStack), Eucalyptus, BMC (Cloud Lifecycle Management), and… well, is there really anybody else? You’re either an OpenStack distro, an OpenStack extension, or an appliance embedding OpenStack if you want to succeed. At least until some amazingly new innovation comes along to kill it. OpenStack is to CloudStack as Linux is to SCO? Or perhaps FreeBSD?
    • Just weigh the non-OpenStack community against OpenStack’s “who’s who” list above. If you’re a non-OpenStack vendor and you are not scared yet, you may be already dead but just not know it.
    • As with Linux v. Unix, there will be a couple of dominant offerings and a lot of niche plays supporting specific workload patterns.  And there will be niche offerings that are not OpenStack.  In the long run, however, the bulk of the market will go to OpenStack.
  • The automation vendors (BMC, IBM, CA, HP) will need to embrace and extend OpenStack to stay in the game. Mind you, there is a LOT of potential value to what you can do with these tools. Patch management and compliance is just scratching the surface (though you can use Chef for that too, of course). Lots of governance, compliance, integration, and related opportunities for big markets here, and potentially all more lucrative and open to differentiated value. I’ve been telling my friends at BMC this for the past couple of years – perhaps I’ve got to get a bit more vociferous…
  • The OpenStack startups are in a pretty tough position right now. The OpenStack ecosystem has become its own pretty frothy and shark-filled “red ocean,” and the noise from the big guys – Rackspace, Red Hat, VMware, Dell, etc. – will be hard to overcome. I foresee a handful of winners, some successful pivots, and the inevitable failures (VCs invest in risk, right?). There are a lot of very smart people working at these startups, and at cloudTP we work with several of them, so I wouldn’t count any of them out yet. But in the long run, if the history of open source is any indicator, the market can’t support 10+ successful OpenStack software vendors.
  • Most importantly, it is my opinion that OpenStack WILL be the enterprise choice in the next 2-3 years. Vendors who could stop this – including VMware and Microsoft – are not getting it done (Microsoft is particularly missing the boat on the cloud stack layer). We’ll see the typical adoption curve with the most aggressive early adopters deploying OpenStack today and driving ecosystem innovation.
  • Finally, with the cloud stack battle all but a foregone conclusion, the battle for the PaaS layer is ripe for a blowout. And unlike the IaaS stack layer, the PaaS market will be a lot less commoditized in the near future.  There is so much opportunity for differentiation and innovation here that we will all have a lot to keep track of in the coming years.

Alternative Explanations

Perhaps I am wrong and the real motivation here for VMware is to tactically protect their interests in the OpenStack project – ESXi integration, new features tied to the vSphere roadmap, etc. The vCD team may also be looking to leverage the OpenStack innovation curve and liberal licensing model (Apache) to find and port new capabilities to the proprietary VMware stack – getting the benefit of community development efforts without having to invent them.

My gut tells me, however, that this move by VMware will lead to a long-term and strategic commitment that will accelerate the OpenStack in the Enterprise market.

Either way, VMware’s involvement in OpenStack is sure to change the dynamic and market for cloud automation solutions.

——-

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.