A Vision of the Future Cloud Data Center

A new year is often a time for reflection on the past and pondering the future. 2010 was certainly a momentous year for cloud computing: an explosion of tools for creating clouds, a global investment rush by service providers, a Federal “cloud first” policy, and more. But in the words of that famous Bachman-Turner Overdrive song: “You ain’t seen nothin’ yet!”

In fact, I’d suggest that in terms of technological evolution, we’re really just in the Bronze Age of cloud. I have no doubt that at some point in the not-too-distant future, today’s cloud services will look as quaint as a historical village with no electricity or running water. The Wired article on AI this month is part of the inspiration for what comes next. After all, if a computer can drive a car with no human intervention, why can’t it run a data center?

Consider this vision of a future cloud data center.

The third of four planned 5 million square foot data centers quietly hums to life. In the control center, banks of monitors show data on everything from the number of running cores to network traffic to hot spots of power consumption. Over 100,000 ambient temperature and humidity sensors track environmental conditions, while three cooling towers vent excess heat generated by the massively dense computing and storage farm.

The hardware, made to exacting specifications and supplied by multiple vendors, uses liquid coolant instead of fans, making this one of the quietest and most energy-efficient data centers on the planet. The 500U racks reach 75 feet up into the cavernous space, and the ceiling is another 50 feet higher, where massive turbines draw cold air up through the floors. Temperature stays relatively steady all the way up the racks thanks to innovative ductwork that vents cold air every 5 feet of the climb.

Advanced robots wirelessly monitor the 10 GBps data stream emitted by all of the sensors, using their accumulated “knowledge and experience” to swap out servers and storage arrays before they fail. Specially designed connector systems let individual pieces, or even whole blocks of hardware, be snapped in and out like so many Lego bricks; no cabling required. All data moves on a fiber backbone at multiple terabytes per second.
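
To make that swap-before-failure idea concrete, here is a minimal sketch in Python: watch each unit’s sensor stream and flag it for replacement when a reading drifts far from its recent baseline. The window size, warm-up length, z-score threshold and sample data are all illustrative assumptions, not real operating values.

```python
from collections import deque
from statistics import mean, stdev

WINDOW = 60      # readings kept as the rolling baseline
WARMUP = 10      # minimum history before judging anything
Z_LIMIT = 4.0    # how many standard deviations counts as "failing"

def failure_watch(readings):
    """Yield True for each reading that looks like impending failure."""
    history = deque(maxlen=WINDOW)
    for value in readings:
        if len(history) >= WARMUP:
            mu, sigma = mean(history), stdev(history)
            yield sigma > 0 and abs(value - mu) / sigma > Z_LIMIT
        else:
            yield False  # not enough baseline yet
        history.append(value)

# Example: a drive temperature that suddenly spikes gets flagged.
temps = [38.0, 38.4, 37.8, 38.2] * 12 + [55.0]
print([i for i, failing in enumerate(failure_watch(temps)) if failing])
# -> [48]
```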

On the data center floor, there are no humans. The PDUs, cooling systems and even the robots themselves are maintained by robots, or shipped out of the data center to an advanced repair facility when needed. In fact, the control center is empty too; the computers are running the data center. The only people here are in the shipping bay, receiving the new equipment and shipping out the old and broken, and then only when needed. Most of them work for the shipping companies. The data center has no full-time employees. Even security and access control for the very few people allowed on the floor for emergencies is managed by computers attached to iris and handprint scanners.

The placement of storage and compute resources makes no sense to the human eye. In fact, the robots sometimes rearrange it based on changing demands placed on the data center, or on changes predicted from past computing needs. Often this is driven by the private computing needs of large corporate and government clients who want (and will pay for) increased isolation and security. The bottom line: this layout is optimized far beyond what a logical human would achieve.

Tens of millions of cores, hundreds of exabytes of data, no admins.  Sweet.

The software automation is no less impressive. Computing workloads and data are constantly optimized by the AI-based predictive modeling and management systems. Data and computing tasks are both portable, one moving to the other when needed. Where large data is required, the compute tasks move closer to the data. When only a small amount of data is needed, it makes the trip to the compute server. Of course, latency requirements also play a part. Much of the data in the cloud is kept in memory, automatically, based on demand patterns.
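
As a sketch of how that placement decision might look: ship whichever side (task or data) is cheaper to move, unless a latency budget forces colocation with the data. Every name and number here is an illustrative assumption, including the backbone throughput and the cost model.

```python
from dataclasses import dataclass

LINK_BYTES_PER_SEC = 2e12  # assumed backbone throughput: ~2 TB/s

@dataclass
class Task:
    image_bytes: float     # size of the task's image and inputs
    max_latency_ms: float  # latency budget against its data

@dataclass
class Dataset:
    size_bytes: float
    latency_ms_to_task: float  # latency between current locations

def place(task: Task, data: Dataset) -> str:
    # Latency budget trumps transfer cost: colocate with the data.
    if data.latency_ms_to_task > task.max_latency_ms:
        return "move task to data"
    # Otherwise ship whichever side is cheaper to move.
    move_data_secs = data.size_bytes / LINK_BYTES_PER_SEC
    move_task_secs = task.image_bytes / LINK_BYTES_PER_SEC
    return "move data to task" if move_data_secs <= move_task_secs \
        else "move task to data"

# A 2 GB task against a 500 TB dataset goes to the data.
print(place(Task(image_bytes=2e9, max_latency_ms=5.0),
            Dataset(size_bytes=5e14, latency_ms_to_task=1.0)))
```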

The security AI is in a constant, all-out running battle with the bots, worms and viruses targeting the data center. All server images are built with agents and monitoring tools, constantly updated, that track anomalies and attack patterns. Customers can subscribe to various security services, and the image management system automatically checks for compliance. Most servers are randomly re-imaged throughout the day, on the assumption that malware will eventually find a way in.
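
A minimal sketch of that randomized re-imaging policy, assuming a simple daily cycle: every server gets rebuilt from a known-good (“golden”) image at a random, unpredictable moment each day, so any implanted malware has a short lifetime. The fleet, the day length, and the rebuild step are placeholders, not a real control plane.

```python
import random

DAY_SECONDS = 24 * 3600
FLEET = [f"server-{i:05d}" for i in range(8)]

def plan_daily_reimage(fleet):
    """Assign every server a random rebuild time within the day."""
    return sorted((random.uniform(0, DAY_SECONDS), server) for server in fleet)

for offset, server in plan_daily_reimage(FLEET):
    # A real system would wait until `offset`, drain the server,
    # then rebuild it from the golden image.
    print(f"{server}: rebuild at +{offset / 3600:4.1f}h from golden image")
```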

Everything is virtualized – servers, storage, networking, data, databases, application platforms, middleware and more.  And it’s all as a service, with unlimited scale-out (and scale-in) of all components.  Developers write code, but don’t install or manage most application infrastructure and middleware components.  It’s all there and it all just works.

Component-level failure is assumed and has no impact on running applications.  Over time, as the AI learns, reliability of the software infrastructure underlying any application exceeds 99.999999%.
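
For a sense of what eight nines means, the allowable downtime works out to roughly a third of a second per year:

```python
# Quick arithmetic on the eight-nines figure: 99.999999% availability
# leaves about 0.32 seconds of allowable downtime per year.
availability = 0.99999999
seconds_per_year = 365.25 * 24 * 3600
print(f"{(1 - availability) * seconds_per_year:.2f} s of downtime per year")
# -> 0.32 s of downtime per year
```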

Everything is controllable through APIs, of course. And those APIs are all standards-based, so tools and applications are portable among clouds and between internal data centers and external clouds.

All application code and data is geographically dispersed, so even the failure of this mega data center has minimal impact on applications. Perhaps there is a short hiccup, but it lasts only seconds before the applications and data pick up and keep on running.

Speaking of applications, this cloud data center hosts thousands of SaaS solutions: ERP, CRM, e-commerce, analytics, business productivity and more. Horizontal and vertical applications too. All are exposed through Web services APIs so that new applications (mashups) can be created that combine them and their data in interesting new use cases. The barriers between IaaS, PaaS and SaaS are blurred, and operationally they barely exist at all.

All of this is delivered at a fraction of the cost of today’s IT model.

Large data center providers using today’s automation methods and processes are uncompetitive. Many are on the verge of going out of business and others are merging in order to survive.  A few are going into higher-level offerings – creating custom solutions and services.

The average enterprise data center budget is 1/10th of what it used to be. Only the applications that are too expensive to move, or that otherwise lack suitability for cloud deployment, are still managed in-house by an ever-dwindling pool of IT operations specialists (everybody else has been retrained in cloud governance and management, or has found other careers to pursue). Everything else is either a SaaS app or otherwise cloud-hosted.

Special-purpose clouds within clouds are easily created on the fly, and just as easily destroyed when no longer needed.

The future of the cloud data center is AI-managed, highly optimized, and incredibly powerful at a scale never before imagined. The demand for computing power and storage continues to grow at ever-increasing rates. Pretty soon, the data center described above will be considered commonplace, with scores or even hundreds of them sprinkled around the globe.

This is the future – will you be ready?


Enterprise Cloud Computing – We’ve Only Just Begun!

Way back in 2006, when Amazon released its first enterprise web service, S3 (Simple Storage Service), I immediately wanted in. At the time we were racking up huge data center and hardware bills (for a small company) and storing tens of terabytes of image files. We got into the early S3 beta but didn’t get far because we needed image processing to happen in the cloud to be cost-effective. My contact at Amazon hinted that a compute infrastructure was not long in coming.

When EC2 (Elastic Compute Cloud) launched in beta in 2006, I really wanted in. But the digital photo business was winding down, so there was no point. My 2008 venture did use EC2 and S3, and we saved many thousands of dollars and hours by not having to worry about hardware. While there were countless stories of Web startups using cloud services back then, only recently have I begun to hear of enterprise applications in the cloud.

I know what you’re thinking. What about SaaS vendors like Salesforce.com with their PaaS (platform as a service) models? Yes, these are in the cloud, and they fit most people’s definition of “cloud computing.” For me, however, the inflection point is that enterprises are now deploying any type of application, not just those built as ASP or SaaS offerings, or those that require you to build within a narrow framework like sforce. Truly custom apps written in any language are not what sforce was designed for.

Now I am hearing about life sciences companies putting the cloud to work in HPC environments for drug discovery and genetic mapping. And trading firms, like Majedie Asset Management, that own or manage no physical data center assets. All of their applications, from the most trivial to the most mission-critical, live in the cloud.

There are tools providers like RightScale and Stax building deployment management environments on top of Amazon and other cloud infrastructures. There are folks like Enomaly building cloud stacks for deployment by telcos and hosting providers. There are guys like Good Data building BI in the cloud.

Think of the opportunity! Over the next 10-15 years, millions of systems now operated in-house will be ported to the cloud. An increasing number of new applications will be built specifically for cloud environments. Vendors of systems management, security, identity and access control, databases, ERP, CRM, and most other types of IT technology will create cloud versions of their systems, and new vendors will emerge for this new environment.

There will be opportunities across all aspects of enterprise IT to profit from this wholesale shift. How will you participate?