Don’t Mention the Cloud

"I mentioned it once, but I think I got away with it alright."

The “cloud” term has started to turn like the leaves on the trees outside my window.  It’s yellowing, drying out, and about to fall to earth to be raked up and composted into fertilizer if something isn’t done to stop it.

Where once it was the magic phrase that opened any door, the term “cloud” is now considered persona non grata in many meetings with customers. When everything’s a cloud – and today “cloud washing” is an epidemic on an unprecedented scale – the term loses meaning.

When everything’s a cloud, nothing is.

In fact, not only does “cloud” mean less today than a year ago, what it does mean is not good. For many customers, “cloud” is just a pig with cloud lipstick.  And whose fault is this?  It’s ours – all of ours in the IT industry.  We’ve messed it up – potentially killing the golden goose.

A Vblock is not a cloud (not that a Vblock is a pig). It’s just a big block of “converged infrastructure.” Whatever its merits, it ain’t a cloud. You can build a cloud on top of a Vblock, which is great, but without the cloud management environment from CA, BMC, VMware (vCloud) or others, it’s just hardware.

A big EMC storage array is not a cloud either, but that doesn’t stop EMC from papering airports around the globe with “Journey to the Private Cloud” banners. Nothing against EMC.  And VMware too often still conflates your progress toward cloud with the percentage of your servers that are virtualized.  Virtualization is not cloud.  Virtualization is not even a requirement for cloud – you can cloud without a VM.

A managed hosting service is not a cloud.

Google AdWords is not cloud “Business Process as a Service,” as Gartner would have you believe. It’s advertising!  Nor is ADP Payroll a cloud (sorry again, Gartner), even if it’s hosted by ADP.  It’s payroll.  By that logic, Gartner might as well include McDonald’s in its cloud definition (FaaS – Fat as a Service?). I can order books at Amazon and they get mailed to my house.  Is that “Book Buying as a Service” too?  Ridiculous!

And then there’s Microsoft’s “To the Cloud” campaign with a photo app that I don’t believe even exists.

It’s no wonder, then, that customers are sick and tired and can’t take it (cloud) anymore.  Which is why it’s not surprising when many customer “cloud” initiatives are actually called something else.  They call it dynamic service provisioning, or self service IT, or an automated service delivery model.  Just don’t use the “cloud” term to describe it or you might find yourself out in the street quicker than you can say “resource pooling.”

There’s also that pesky issue about “what is a cloud, anyway?” that I wrote about recently. For users, it’s a set of benefits like control, transparency, and productivity.  For providers, it’s Factory IT – more output at higher quality and lower cost.

When talking about “cloud computing” to business users and IT leaders, perhaps it’s time to stop using the word cloud and start using a less ambiguous term. Perhaps “factory IT” or “ITaaS” or some other term to describe “IT capabilities delivered as a service.”

No matter what, when speaking to customers be careful about using the “cloud” term.  Be precise and make sure you and your audience both know what you mean.

The Red Ocean of Cloud Infrastructure Stacks (updated)

Update: I’m still revising this… Reposting now – but send me your comments via @CloudBzz on Twitter if you have them.

It seems like every day there’s a new company touting their infrastructure stack.   I’m sure I’m missing some, but I show more than 30 solutions for building clouds below, and I am sure that more are on their way.  The market certainly can’t support so many participants!  Not for very long anyway.  This is the definition of a “red ocean” situation — lots of noise, and lots of blood in the water.

This is the list of the stacks that I am aware of:

I. Dedicated Commercial Cloud Stacks

II.  Open Source Cloud Stacks

III.  IT Automation Tools with Cloud Functionality

IV.  Private Cloud Appliances

I hope you’ll pardon my dubious take, but I can’t see how most of these will survive.  Sure, some will because they are big, and others because they are great leaps forward in technology (though I see only a bit of that now).  There are three primary markets for stacks:  enterprise private clouds, provider public clouds, and public sector clouds.  In five years there will probably be at most 5 or 6 companies that matter in the cloud IaaS stack space, and the rest will have gone away or taken different routes to survive and (hopefully) thrive.

If you’re one of the new stack providers – think long and hard about this situation before you make your splash.  Sometimes the best strategy is to pick another fight.  If you swim in this red ocean, you might end up as shark bait.

Putting Clouds in Perspective – Cloud Redefined

[Image: “A Change of Perspective” by kuschelirmel]

You’d think as we head into the waning months of 2011 that there’d be little left to discuss regarding the definition of cloud IT.  Well, not quite yet.

Having spent a lot of time with clients working on their cloud strategies and planning, I’ve come to learn that the definition of cloud IT is fundamentally different depending on your perspective.  Note that I am using “cloud IT” and not “cloud computing” to make it clear I’m talking only about IT services and not consumer Internet services.

Users of cloud IT – those requesting and getting access to cloud resources – define clouds by the benefits they derive.  All those NIST-y terms like resource pooling, rapid elasticity, measured service, etc. can sound like gibberish to users.  Self-service is just a feature – but users need to understand the benefits.  For a user – cloud IT is about control, flexibility, improved productivity, (potentially) lower costs, and greater transparency. There are other benefits, perhaps – but these are commonly what I hear.

For providers – whether internal IT groups or commercial service providers – cloud IT means something entirely different.  First and foremost, it’s about providing services that align with the benefits valued by users described above.  Beyond that, cloud IT is about achieving the benefits of mass production and automation, a “factory IT” model that fundamentally and forever changes the way we deliver IT services.  In fact, factory IT (McKinsey blog) is a far better term to describe what we call cloud today when you’re talking to service providers.

Factory IT standardizes on a manageable number of configurations (service catalog), automates repetitive processes (DevOps), then manages and monitors ongoing operations more tightly (management). Unlike typical IT, with its heavily manual processes and hand-crafted custom output, factory IT generates economies of scale that produce more services in a given time period, at a far lower marginal cost per unit of output.
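To make the “standard configurations” idea concrete, here is a minimal sketch of what catalog-driven provisioning might look like. The catalog entries, names and sizes are hypothetical illustrations, not any particular vendor’s offering.

```python
# A minimal sketch of a "factory IT" service catalog: a small, fixed menu of
# standard configurations that every request must map onto. All names and
# sizes here are invented for illustration.
SERVICE_CATALOG = {
    "web.small":  {"vcpus": 1, "ram_gb": 2,  "storage_gb": 50},
    "web.medium": {"vcpus": 2, "ram_gb": 4,  "storage_gb": 100},
    "db.large":   {"vcpus": 8, "ram_gb": 32, "storage_gb": 500},
}

def provision(offering: str, count: int) -> list[dict]:
    """Reject anything not in the catalog; hand standard configs to automation."""
    if offering not in SERVICE_CATALOG:
        raise ValueError(f"{offering} is not a standard configuration")
    spec = SERVICE_CATALOG[offering]
    # In a real factory-IT shop this would kick off automated workflows
    # (image build, network config, monitoring hookup) rather than return dicts.
    return [dict(spec, name=f"{offering}-{i}") for i in range(count)]
```

The point of the constraint is the economics: a short menu is what lets the automation and monitoring behind it be built once and reused.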

Delivering these economies end-to-end is where self-service comes in.  Like a vending machine, you put your money (or budget) in, make a selection, and out pops your IT service.  Without factory IT, self service – and the control, transparency, productivity and other benefits end users value – would not be possible.

Next time someone asks you to define cloud, make sure you understand which side of the cloud they are standing on before you answer.

—-

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/. You can follow CloudBzz on Twitter @CloudBzz.

CloudFloor Drives the Cloud To Achieve Business Results


CloudFloor (Waltham, MA) is getting close to starting the beta program for CloudControl, their system to tie cloud usage to measurable business metrics.  I had an interesting call with co-founder and CTO Imad Mouline last week to learn more about this innovative system.  There are a couple of ways to approach the concept of CloudFloor.  The most obvious one deals with controlling costs by shutting down instances when they are no longer needed, but it’s also the least interesting approach.  And there are already companies such as Cloud Cruiser addressing the cost management and cloud chargeback business.

The CloudFloor founders started seeing big uptake in cloud usage a while ago and were able to glean some pretty interesting insights from their performance data – insights such as the “noisy neighbor” problem in a multi-tenant environment (it’s real) and users deploying lots of VMs but never shutting them down when no longer needed.  They saw a lot of large enterprises overspending on cloud while also having application performance hampered by simple, easily remedied mistakes.  CloudFloor was formed to address these issues and beyond.

What struck me as most interesting was the sophistication of how they tie non-cost business metrics into the equation.  Think about any business and the key metrics that drive their success.  As Imad pointed out, companies can track many metrics today but very few are core and critical to their business.  For example, at an auction site like eBay they know that the two most important metrics are number of listings and number of bids at any given point in time.

If you’re in a primarily online business, metrics are heavily influenced by the amount of infrastructure you have deployed at any given time.  Too much and you’re losing money.  Too little and you’re losing money… Like Goldilocks and the Three Bears, the trick is to get it “just right.”

One of my previous startups was in the digital imaging space.  The number of images uploaded at any given point directly correlated with print and gift orders. Having sufficient infrastructure to handle the upload volume at any given time was critical.  Having too much was wasteful – and since we started this pre-cloud, we were over-provisioned a majority of the time.  However, at the very biggest peak times we were sometimes under-provisioned.  This caused uploads to slow or fail, which in turn resulted in lost revenues.

Had I had a reason to do so (i.e., had I been using cloud), it would have been pretty easy for me to create a formula that weighed the marginal cost of additional infrastructure against the marginal gross profit enabled by provisioning more instances.  Given that formula, I could then maximize my profit by having a system that intelligently managed the balance to the point where – in theory – an extra $1.00 spent on cloud would result in at least an extra $1.00 in gross profit (all other costs being equal).  Beyond that, I’d see diminishing returns.  Of course, it would never be exactly that precise, but it could be close.
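As a rough sketch of that rule (with entirely made-up numbers), the logic would be: keep adding instances only while the expected marginal gross profit of the next instance exceeds its marginal cost.

```python
# A rough sketch of the "marginal dollar in, marginal dollar out" rule described
# above. The cost, margin, and capacity numbers are hypothetical illustrations.
HOURLY_INSTANCE_COST = 0.50          # assumed cost per extra instance-hour
GROSS_MARGIN_PER_UPLOAD = 0.40       # assumed gross profit per completed upload

def instances_to_run(expected_uploads_per_hour: int, uploads_per_instance_hour: int) -> int:
    """Add capacity only while the next instance is expected to pay for itself."""
    instances = 0
    remaining = expected_uploads_per_hour
    while remaining > 0:
        served = min(remaining, uploads_per_instance_hour)
        marginal_profit = served * GROSS_MARGIN_PER_UPLOAD
        if marginal_profit < HOURLY_INSTANCE_COST:
            break                    # the next $1.00 of cloud would return < $1.00
        instances += 1
        remaining -= served
    return instances

print(instances_to_run(expected_uploads_per_hour=5000, uploads_per_instance_hour=800))  # -> 7
```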

Of course, you can also have metrics that don’t tie so easily to microeconomics.  If you’ve promised a certain SLA level for transactions (e.g., cart page load in 1.5 seconds, purchase-to-confirmation in 4 seconds, etc.), CloudControl can optimize the amount of cloud infrastructure you have deployed to meet those SLAs.  This is what BSM – Business Service Management – is all about.

They also can do things like manage geographic load balancing, traffic shaping and more.  There is a pretty sophisticated vision at play here.

So, how does it work?

Their core “Principles Engine” (“PE”) accepts data from a number of different feeds – Google Analytics, data generated from the application, or other information.  PE then turns that data into visibility and insights.  If that’s all you need, you’re golden — CloudControl is free for the visibility bits (you pay if you want their automation to control cloud resources).


Then you provide your goals and principles for CloudControl to manage.  CloudControl then manages global traffic, cloud instances and more (it can call out to any service), all in the service of hitting the business metrics established in the Principles Engine.
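A hedged sketch of that kind of principles-driven control loop is below. The data structures, thresholds and scaling rules are my own illustration of the concept, not CloudFloor’s actual API or engine.

```python
# A sketch of a metric-driven control loop: business metrics flow in, and the
# engine nudges cloud resources toward stated goals. Everything here is
# invented for illustration; it is not CloudControl's real interface.
from dataclasses import dataclass

@dataclass
class Principle:
    metric: str           # e.g. "cart_page_load_seconds"
    target: float         # the business goal, e.g. 1.5 seconds
    scale_up_step: int    # instances to add when the goal is missed

def evaluate(principles: list[Principle], observed: dict[str, float]) -> int:
    """Return how many instances to add (or, if negative, to remove)."""
    delta = 0
    for p in principles:
        value = observed.get(p.metric)
        if value is None:
            continue                       # no data feed for this metric yet
        if value > p.target:
            delta += p.scale_up_step       # goal missed: add capacity
        elif value < 0.5 * p.target:
            delta -= 1                     # comfortably under goal: trim waste
    return delta

principles = [Principle("cart_page_load_seconds", 1.5, 2)]
print(evaluate(principles, {"cart_page_load_seconds": 2.1}))   # -> 2, scale up
```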

One of the things they realized early on is that a holistic approach to cloud BSM would have to go broader than the capabilities of individual clouds. Geographic load balancing, failover and other Internet-level traffic-shaping techniques are absolutely critical to hitting the metrics in many cases.  This might also include managing across different vendor clouds and even internal clouds (which requires very complicated DNS management).

What they needed, then, was a platform on which to manage these capabilities, so they went out and acquired a small but growing DNS provider (Microtech Ltd from the UK) and are now in the DNS management business too.  DNS is important to performance, security and availability – which is why CloudFlare is able to do what it does (protect and speed up web sites).  They still sell the DNS services standalone, but the strategic rationale for the acquisition was the breadth of the vision for business service management.  This was a really smart play and will set them apart from many potential competitors.

CloudFloor has taken a very sophisticated approach to tie cloud usage, costs and capabilities to the business metrics you care about most.  They are going to beta soon and it should be very interesting to see where they take the platform.

———–

(c) 2011 CloudBzz / TechBzz Media, LLC. All rights reserved. This post originally appeared at http://www.cloudbzz.com/cloudfloor-drives-the-cloud-to-achieve-business-results/. You can follow CloudBzz on Twitter @CloudBzz.

 

Dell (and HP) Join OpenStack Parade to the Enterprise…

Update:  HP also announced support for OpenStack on its corporate blog.  And the beat goes on…

 

The OpenStack Parade is getting bigger and bigger. As predicted, enterprise vendors are starting to announce efforts to make OpenStack “Enterprise Ready.”  Today Dell announced their support for OpenStack through their launch of the “Dell OpenStack Cloud Solution.”  This is a bundle of hardware, OpenStack, a Dell-created OpenStack installer (“Crowbar”), and services from Dell and Rackspace Cloud Builders.

Dell joins Citrix as a “big” vendor bringing OpenStack to its customers.  Startups such as Piston are also targeting the OpenStack space, with a focus on the enterprise.

Just one year old, the OpenStack movement is a real long-term competitor to VMware’s hegemony in the cloud. I fully expect to see IBM, HP and other vendors jumping into the OpenStack Parade in the not too distant future.

Citrix + Cloud.com = OpenStack Leadership?

TechCrunch reported today that Citrix has acquired Cloud.com for more than $200M.  This is a great exit for a very talented team at Cloud.com, and I’m not surprised.  Cloud.com has had real traction in the market, especially in the last 12 months, both in the service provider space and for internal private clouds.  Great technology, solid execution.

Citrix has been a fairly active member in the OpenStack community, most recently with their Olympus Project announcement in May.  The stated goal there is…

a tested and verified distribution of … OpenStack, combined with a cloud-optimized version of XenServer.

Cloud.com has also been visible in OpenStack, though there has not been a lot of detail on their commitment. Cloud.com’s CloudStack is also a multi-hypervisor solution, with support for vSphere and KVM in addition to XenServer. I would assume that continues to be the case – selling in the enterprise means bowing to the reality of VMware’s dominant position. However, I would expect an ever-tighter value prop with XenServer, and that’s okay.

So will Citrix clarify the Cloud.com/OpenStack position?  That’s almost a given, and in fact I do expect a strong push to dominate commercial OpenStack based on the feature/function lead that Cloud.com gives them.  Given the support for other hypervisors, this does put more pressure on Piston as a startup targeting the OpenStack space.  However, the Piston team is very smart (led by Joshua McKenty) and I would not worry about them just yet.

No matter what happens from here, it has to be a great day for the Cloud.com team.  Savor it and enjoy – and then get back to work!

The Hybrid Enterprise – Beyond the Cloud

In the past few months we (at Unisys) have been rolling out a new strategic concept we call the Hybrid Enterprise. Normally I don’t use this forum to talk about Unisys, but as one of the lead authors of this strategy, in this case I’ll make an exception. The starting point for the hybrid enterprise concept is the realization that cloud data center capabilities don’t replace traditional IT – at least not in the foreseeable future. They just add new models and resulting complexity.

We started with two primary models of infrastructure delivery – internal data centers and outsourcing/managed services. Now we add at least three more – internal private clouds, hosted private clouds and public clouds.

But it gets worse from there. There are many types of public clouds with many different operating models. If my company starts using clouds from Unisys, Amazon, Rackspace and Microsoft – they are all very different. Yet, for IT to really have a leading role in this movement, they all need to be embraced for appropriate use. And there are impacts across several areas:  security, governance, application architectures and more.

The hybrid enterprise approach reflects the reality that end-user IT organizations are facing today.  Cloud doesn’t make it easier to run IT – quite the opposite.  But it’s still worth it.

 

    Forward PaaS: VMware’s Cloud Foundry First Down

    I know it’s baseball season, but there’s no passing in baseball and this post will just work better as a football analogy.

    VMware’s announcement this week of Cloud Foundry (twitter @cloudfoundry) has gotten a lot of attention from the cloud community, and for good reason. Just as hardware is a low-margin commodity business, hardware as a service (e.g. IaaS) is the same. Ultimately, price will be the core basis for competition in the IaaS space and a lot of high-cost “enterprise” clouds will struggle to compete for business without some real differentiation.

    For the past few years, PaaS offerings from Salesforce (Force.com), Microsoft (Azure), Google (App Engine) and newcomers like Heroku (now owned by Salesforce), Engine Yard and others have gained a lot of traction. Developers really don’t like sysadmin work as a rule, and provisioning EC2 instances is sysadmin work. Writing code that turns into applications, features, etc. that end users use is far more interesting to the developers I’ve worked with (and who’ve worked for me). PaaS, then, is for developers.

    But PaaS before this week meant lock-in. Developers, and the people who pay them, don’t like to be locked into specific vendor solutions. If you write for Azure, the fear (warranted or not) is that you can only run on Azure. Given that Microsoft has totally fumbled the opportunity to make Azure a partner-centric platform play, that means you need to run your Azure apps on Microsoft’s cloud. Force.com is even worse – with its own language, data model, etc., there’s not even the chance that you can run your code elsewhere without major rework. Force.com got traction primarily with people building extensions to Salesforce’s SFA and CRM offerings – though some people did do more with it. VMforce (Spring on Force.com) was supposed to change the openness issue by providing a framework for any Java app to run. Google App Engine is also proprietary in many respects, and when it launched with just a single language (Python!), a lot of developers shrugged. Even the proprietary PaaS components of AWS have been a problem. I could not get my developers to use SimpleDB back in 2008 because, as they rightly pointed out, we’d be stuck if we wanted to move off of EC2 at some point.

    Lots of flags on the field. Holding! Illegal receiver! Neutral zone infraction!

    There have been some attempts to publish PaaS frameworks that can run on other clouds, but they have failed to gain much traction. (carried off the field on a stretcher? yeah, that works).

    Along comes CloudFoundry by VMware and — INTERCEPTION!

    In fact, it’s like a whole new game just started. On their first possession VMware completed a perfectly executed forward PaaS. It’s 1st & 10 on their own 20 yard line. There’s a lot of field out there, and while the defense is in total disarray for the moment, it’s going to take a lot of perfect execution to score a CloudFoundry touchdown.

    The Cloud Foundry Playbook

    VMware really nailed it on the launch, with a very compelling playbook of offensive and defensive plays that should have most PaaS competitors reeling. Here’s their graphic that shows the core concepts:

    Shotgun Formation: Across the top you can see the three programming frameworks included at launch: Spring (Java – SpringSource is owned by VMware), Rails, and Node.js.  You can expect more frameworks to be supported – including Python and PHP.  Ideally they would add .NET too, though I’m not sure the licensing can work there (a huge chunk of corporate apps are Windows/.NET based).  They also added support for MongoDB, MySQL and Redis for data management.

    The Open Blitz: VMware did an incredibly good thing by launching the core Cloud Foundry project as an Apache-licensed open source project.  While I have some concerns around their lack of a community governance model, the fact that they went with Apache vs. a dual-license GPL/Commercial model like MySQL is incredibly aggressive.  I could, if I wanted to, grab Cloud Foundry code, create my own version (e.g. Bzz Foundry) and sell it for license fees with no need to pay VMware anything.  The reality is that I could, but I would not do that and VMware knows that their own development teams will be the key to long term sustainability of this solution.  That said, a cloud service provider that wants to add Cloud Foundry on top of their OpenStack-based cloud could do so without any licensing fees.  I can be part of the “Cloud Foundry Federation” without having to be a vCloud VSPP provider.

    Special Teams: Cloud Foundry is deployable in an enterprise private cloud, a public cloud, or what they call a “micro cloud” model (to run on a laptop for development).  I suspect they will have a very strong licensing and maintenance business for the enterprise versions of Cloud Foundry.  They’ll also get support and maintenance fees from many cloud service providers who see the value in paying for it.  Of course, CloudFoundry.com is a service itself, which may make it harder for other cloud service providers to join the federated model.  This is something they will need to think about – EMC Atmos Online eventually had to be closed to new customers based on pushback from other service providers who were also looking to be in the cloud storage business.  It’s hard to get service providers to use your stuff if you’re competing against them.

    Just over a year ago I argued that VMware should “Run a Cloud…” as one of their options.  In fact, I predicted that Spring would be the key to them becoming a cloud provider:

    Their alternative at that point is to offer their own cloud service to capture the value from their enterprise relationships and dominant position.  They can copy the vertically integrated strategy of Microsoft to make push-button deployment to their cloud service from both Spring and vCenter.

    Gartner’s Chris Wolf is following a similar line of thinking, especially when you add last week’s EMC -> VMware Mozy transfer.

    So where does that leave Team CloudFoundry?

    For now, they are on the field, in the game, and playing like winners.  Let’s see if they can march down the field before the defense gets into a position of strength.

    ——-

    (c) 2011 CloudBzz / TechBzz Media, LLC.  All rights reserved.  This post originally appeared at http://www.cloudbzz.com/seamicro-atom-and-the-ants/. You can follow CloudBzz on Twitter @CloudBzz.

    SeaMicro: Atom and the Ants

    How the Meek Shall Inherit The Data Center, Change The Way We Build and Deploy Applications, And Kill the Public Cloud Virtualization Market

    The tiny ant. Capable of lifting up to 50 times its body weight, an ant is an amazing workhorse with by far the highest “power to weight” ratio of any living creature. Ants are also among the most populous creatures on the planet. They do the most work as well – a bit at a time, ants can move mountains.

    Atom chips (and ARM chips too) are the new ants of the data center. They are what power our smartphones, tablets and ever more consumer electronics devices. They are now very fast, yet surprisingly thrifty with energy – giving them the highest ratio of computing power to energy consumed of any microprocessor.

    I predict that significantly more than half of new data center compute capacity deployed in 2016 and beyond will be based on Atoms, ARMs and other ultra-low-power processors. These mighty mites will change much about how application architectures will evolve too. Lastly, I seriously believe that the small, low-power server model will eliminate the use of virtualization in a majority of public cloud capacity by 2018. The impact in the enterprise will be initially less significant, and will take longer to play out, but in the end it will be the same result.

    So, let’s take a look at this in more detail to see if you agree.

    This week I had the great pleasure of spending an hour with Andrew Feldman, CEO and founder of SeaMicro, Inc., one of the emerging leaders in the nascent low-power server market. SeaMicro has had quite a run of publicity lately, appearing twice in the Wall Street Journal in connection with the launch of their second-generation product – the SM10000-64, based on a new dual-core 1.66 GHz 64-bit Atom chip created by Intel specifically for SeaMicro.

    SeaMicro: 512 Cores, 1TB RAM, 10 RU

    Note – the rest of this article is based on SeaMicro and their Atom-based servers.  Calxeda is another company in this space, but uses ARM chips instead.

    These little beasties, taking up a mere 10 rack units of space (out of 42 in a typical rack), pack an astonishing 256 individual servers (512 cores), 64 SATA or SSD drives, up to 160 Gbps of external network connectivity (16 x 10GigE), and 1.024 TB of DRAM. Further, SeaMicro uses ¼ of the power and ¼ of the space, and costs a fraction of the price, of a similar amount of capacity in a traditional 1U configuration. Internally, the 256 servers are connected by a 1.28 Tbps “3D torus” fabric modeled on the IBM Blue Gene/L supercomputer.

    The approach of using low-power processors in a data center environment is detailed in a paper by a group of researchers out of Carnegie Mellon University. In this paper they show that cluster computing using a FAWN (“Fast Array of Wimpy Nodes”) approach is, overall, “substantially more energy efficient than conventional high-performance CPUs” at the same level of performance.

    The Meek Shall Inherit The Earth

    A single rack of these units would boast 1,024 individual servers (1 CPU per server), 2,048 cores (total of 3,400 GHz of compute), 4.1TB of DRAM, and 256TB of storage using 1TB SATA drives, and communicate at 1.28Tbps at a cost of around half a million dollars (< $500 per server).

    $500/server – really? Yup.
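    For anyone who wants to check the arithmetic, the rack-level figures above fall straight out of the per-unit specs already quoted; the four-units-per-42U-rack packing and the roughly $500K rack price are the assumptions baked into this little sketch.

    ```python
    # Re-deriving the rack-level numbers from the SM10000-64 specs quoted above
    # (256 servers, 512 cores, 64 drives, 1.024 TB DRAM per 10 RU unit), assuming
    # four units per 42U rack and roughly $500K for the full rack.
    UNITS_PER_RACK = 42 // 10                  # 4 units of 10 RU each
    servers = 256 * UNITS_PER_RACK             # 1,024 individual servers
    cores = 512 * UNITS_PER_RACK               # 2,048 cores
    compute_ghz = cores * 1.66                 # ~3,400 GHz of aggregate clock
    dram_tb = 1.024 * UNITS_PER_RACK           # ~4.1 TB of DRAM
    storage_tb = 64 * UNITS_PER_RACK           # 256 TB with 1 TB SATA drives
    cost_per_server = 500_000 / servers        # ~$488, i.e. under $500 per server
    print(servers, cores, round(compute_ghz), dram_tb, storage_tb, round(cost_per_server))
    ```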

    Now, let’s briefly consider the power issue. SeaMicro saves power through a couple of key innovations. First, they’re using these low-power chips. But CPU power is typically only 1/3 of the load in a traditional server. To get real savings, they had to build custom ASICs and FPGAs to get 90% of the components off of a typical motherboard (which is now the size of a credit card, with 4 of them on each “blade”). Aside from capacitors, each motherboard has only three types of components – the Atom CPU, DRAM, and the SeaMicro ASIC. The result is 75% less power per server. Google has stated that, even at their scale, the cost of electricity to run servers exceeds the cost to buy them. Power and space consume more than 75% of data center operating expense. If you save 75% of the cost of electricity and space, these servers pay for themselves – quickly.

    If someone just gave you 256 1U traditional servers to run – for free – it would be far more expensive than purchasing and operating the SeaMicro servers.

    Think about it.

    Why would anybody buy traditional Xeon-based servers for web farms ever again? As the saying goes, you’d have to pay me to take a standard server now.

    This is why I predict that, subject to supply chain capacity, more than 50% of new data center servers will be based on this model in the next 4-5 years.

    Atoms and Applications

    So let’s dig a bit deeper into the specifics of these 256 servers and how they might impact application architectures. Each has a dual-core 1.66GHz 64-bit Intel Atom N570 processor with 4GB of DRAM. These are just about ideal Web servers and, according to Intel, deliver the highest performance per watt of any Internet-workload processor they’ve ever built.

    They’re really ideal “everyday” servers that can run a huge range of computing tasks. You wouldn’t run HPC workloads on these devices – such as CAD/CAM, simulations, etc. – or a scale-up database like Oracle RAC. My experience is that 4GB is actually a fairly typical VM size in an enterprise environment, so it seems like a pretty good all-purpose machine that can run the vast majority of traditional workloads.

    They’d even be ideal as VDI (virtual desktop) servers, where literally every running Windows desktop would get its own dedicated server. Cool!

    Forrester’s James Staten, in a keynote address at CloudConnect 2011, recommended that people write applications that use many small instances when needed vs. fewer larger instances, and aggressively scale down (e.g. turn off) their instances when demand drops. That’s the best way to optimize economics in metered on-demand cloud business models.

    So, with a little thought there’s really no need for most applications to require instances that are larger than 4GB of RAM and 1.66GHz of compute. You just need to build for that.
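    A toy sketch of what “building for that” means in practice: treat the small node as the unit of scale and vary the count with demand. The per-node throughput number below is invented purely for illustration, not a SeaMicro benchmark.

    ```python
    # A minimal sketch of scaling by count rather than by size: a small
    # fixed-shape node (4 GB RAM, dual-core 1.66 GHz here) is the unit of
    # scale, and the fleet grows or shrinks with demand.
    import math

    REQUESTS_PER_NODE_PER_SEC = 150     # assumed capacity of one small node

    def nodes_needed(requests_per_sec: float, headroom: float = 0.2) -> int:
        """Scale out (and back in) by whole small nodes, keeping a little headroom."""
        raw = requests_per_sec / REQUESTS_PER_NODE_PER_SEC
        return max(1, math.ceil(raw * (1 + headroom)))

    print(nodes_needed(2_000))   # peak traffic   -> 16 nodes
    print(nodes_needed(120))     # overnight lull -> 1 node; the rest shut off
    ```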

    And databases are going this way too. New and future “scale out” database technologies such as ScaleBase, Akiban, Xeround, dbShards, TransLattice, and (at some future point) NimbusDB can actually run quite well in a SeaMicro configuration, just creating more instances as needed to meet workload demand. The SeaMicro model will accelerate demand for scale-out database technologies in all settings – including the enterprise.

    In fact, some enterprises are already buying SeaMicro units for use with Hadoop MapReduce environments. Your own massively scalable distributed analytics farm can be a very compelling first use case.

    This model heavily favors Linux due to the far smaller OS memory footprint as compared with Windows Server. Microsoft will have to put Windows Server on a diet to support this model of data center or risk a really bad TCO equation. SeaMicro is adding Windows certification soon, but I’m not sure how popular that will be.

    If I’m right, then it would seem that application architectures will indeed be impacted by this – though in the scheme of things it’s probably pretty minor and in line with current trends in cloud.

    Virtualization? No Thank You… I’ll Take My Public Cloud Single Tenant, Please!

    SeaMicro claims that they can support running virtualization hosts on their servers, but for the life of me I don’t know why you’d want to in most cases.

    What do you normally use virtualization for? Typically it’s to take big honking servers and chunk them up into smaller “virtual” servers that match application workload requirements. For that you pay a performance and license penalty. Sure, there are some other capabilities that you get with virtualization solutions, but these can be accomplished in other ways.

    With small servers being the standard model going forward, most workloads won’t need to be virtualized.

    And consider the tenancy issue. Your 4GB 1.66GHz instance can now run on its own physical server. Nobody else will be on your server impacting your workload or doing nefarious things. All of the security and performance concerns over multi-tenancy go away. With a 1.28 Tbps connectivity fabric, it’s unlikely that you’ll feel any impact at the network layer either. SeaMicro claims 12x the available bandwidth per unit of compute compared with traditional servers. Faster, more secure – what’s not to love?

    And then there’s the cost of virtualization licenses. According to a now-missing blog post on the Virtualization for Services Providers blog (thank you, Google) written by a current employee of the VCE Company, the service provider (VSPP) cost for VMware Standard is $5/GB per month. On a 4GB VM, that’s $240 per year – which over three years comes to roughly 150% of the cost of the SeaMicro node! (VMware Premier is $15/GB, but in fairness you do get a lot of incremental functionality in that version.) And for all that, you get a decrease in performance from having the hypervisor between you and the bare-metal server.
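    Spelling out that math, using the figures quoted above and the earlier sub-$500-per-server estimate:

    ```python
    # The license math above, written out. The $5/GB/month VSPP rate and the
    # 4GB VM size come from the post; the per-server figure is the earlier
    # half-a-million-dollar-rack estimate divided by 1,024 servers.
    vspp_rate_per_gb_month = 5.00
    vm_ram_gb = 4
    yearly_license = vspp_rate_per_gb_month * vm_ram_gb * 12   # $240 per year
    three_year_license = yearly_license * 3                    # $720
    seamicro_server_cost = 500_000 / 1_024                     # ~$488 per server
    print(three_year_license / seamicro_server_cost)           # ~1.47, i.e. roughly 150%
    ```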

    Undoubtedly, Citrix (XenServer), RedHat (KVM), Microsoft (Hyper-V) and VMware will find ways to add value to the SeaMicro equation, but I suspect that many new approaches may emerge that make public clouds without the need for hypervisors a reality. As Feldman put it, SeaMicro represents a potential shift away from virtualization towards the old model of “physicalization” of infrastructure.

    The SeaMicro approach represents the first truly new approach to data center architectures since the introduction of blades over a decade ago. You could argue – and I believe you’d be right – that low-power super-dense server clusters are a far more significant and disruptive innovation than blades ever were.

    Because of the enormous decrease in TCO represented by this model, as much as 80% or more overall, it’s fairly safe to say that any prior predictions of future aggregate data center compute capacity are probably too low by a very wide margin. Perhaps even by an order of magnitude or more, depending on the price-elasticity of demand in this market.

    Whew! This is some seriously good sh%t.

    It’s the dawn of a new era in the data center, where the ants will reign supreme and will carry on their backs an unimaginably larger cloud than we had ever anticipated. Combined with hyper-efficient cloud operating models, information technology is about to experience a capacity and value-enablement explosion of Cambrian proportions.

    What should you do? Embrace the ants as soon as possible, or face the inevitable Darwinian outcome.

    The ants go marching one by one, hurrah, hurrah…

    ——————

    (c) 2011 CloudBzz / TechBzz Media, LLC.  All rights reserved.  This post originally appeared at http://www.cloudbzz.com/seamicro-atom-and-the-ants/. You can follow CloudBzz on Twitter @CloudBzz.

    BlueLock Takes an IT-Centric Cloud Approach to Hybrid Cloud

    A couple months back I had a chance to catch up with Pat O’Day, CTO at BlueLock. They are a cloud provider headquartered in Indianapolis with two data centers (a primary and a backup), and also cloud capabilities on Wall Street and in Hong Kong for specific customers.

    BlueLock has been a vCloud service provider for the past year and has taken an enterprise IT-centric approach to their cloud services. They are not going after the SMB web hosting market, and don’t want to sell to everybody. Their primary focus is on mid-tier enterprises looking for a provider that will deliver cloud in a way that integrates with customer environments – what you might expect from a managed services provider.

    Initially they just provided private clouds, really just dedicated VMware environments with a vCenter front end. Their clouds now are still mostly private, with the user able to control what level of multi-tenancy they want. They do this through three models:

    – Pay as you go multitenant
    – Reserved multitenant at a lower cost
    – Committed single-tenant dedicated infrastructure

     

    For multi-tenant users they implemented vCloud Director as the UI. When showing this to their customers, they got feedback that Director was too unfamiliar when compared to vCenter. This gave them the idea to create a plug-in to vCenter that would allow VMware administrators to control their cloud resources.

    Their plug-in was enabled by the fact that vCloud Director provides a full implementation of the vCloud API. This model has proven to be very popular with their customers. It was also very innovative.

    In addition to starting and stopping cloud instances, users can move applications to BlueLock’s cloud and back again. As O’Day explained it, a vCenter administrator can create vApps from workloads running in their data center and use vCenter to deploy them up to the cloud – and to repatriate them again if necessary.

    Contrast this with most cloud providers. Some, like Amazon and Rackspace, require you to package up your applications and move them to the cloud with a lot of manual processing. Amazon can now import VMDKs, but that only gets you instances – not whole apps. Other service providers, including most who target the enterprise, have “workload onboarding” processes that generally require IT to package up their VMware images and let the provider manage the import. Sometimes this is free; sometimes there may be an onboarding charge. BlueLock’s approach makes it easy for IT to migrate workloads and data in both directions, under its own control.

    VMware recently announced vCloud Connector to perform essentially the same function. But to my knowledge BlueLock remains one of the few – if not the only – production clouds with this type of capability deployed.

    While we all love to cite Amazon’s velocity of innovation, BlueLock has shown that even smaller providers can deliver very innovative solutions based on listening closely to customer requirements. While most people out there today are just talking about hybrid clouds, BlueLock is delivering.