Convergence


Convergence.  We hear this word over and over in our industry, and as I return from VMware Partner Exchange it is resonating with me more than ever.  In our world, convergence means the merging of distinct technologies, industries, or devices into a unified whole.  What does that actually mean?  Before we discuss what it means today, let's take a journey into IT past: the datacenter of old.

Rewind ten years, when IT was rapidly shifting from the mainframe to the client-server world.  The driving force was to give end users a far more powerful and intuitive way to consume applications and IT services, and to make them more productive.  What did the datacenter look like then?  It contained many physical servers, networking gear, and storage, each with its own management interface, and along with that came high CapEx and OpEx costs.  Managing these resources required rigid procedures and policies.  Technology silos grew larger and larger, with resources separated by technology and function.  Year after year new applications were released, and over time they grew, and continue to grow, like wildfire.  And what was born from the flames?  Virtualization: a new idea to consolidate workloads by enabling abstraction and pooling at the compute layer.  This fundamentally changed the way we built the compute layer of the datacenter.  A phase of rapid consolidation took place, and the savings to customers let the idea take off and become the core of the datacenter today.  This was the first phase of convergence.

Software that let us pool the resources of disparate hardware and bring them together into a single solution: semi-convergence.  Today it is the most common architecture out there, and in fact we still sell and deliver this type of solution to our customers daily.  But is it the right choice?  Let's not answer that question just yet.

A few years ago another idea took flight, and that was the "Cloud."  Don't worry, this is not a primer on all things cloud; I would hope that by now we all have an idea of what the term really means.  CIOs faced many challenges when this idea came to market.  Cloud promised many things, such as faster delivery of services and applications, an easy consumption model, and low operating costs, all of which businesses demanded once we saw total acceptance in the consumer space.  Services like webmail and other applications were shared and consumed at a rapid pace, and expectations changed in the business community.  To meet these demands, private cloud models were formed.  This gave rise to companies like VCE, providing full infrastructure stacks "in a box," tested and delivered to the customer as a complete solution with one support model.  Others followed suit: Dell with Active Infrastructure, HP with ConvergedSystem and CloudSystem, and IBM with Flex System and PureSystems.  These companies provide super-converged infrastructure with rapid deployment and complete support.  Others offered reference architectures, such as FlexPod from NetApp/Cisco and VSPEX from EMC, which made infrastructure deployments easier because they were tested and used as best practice during implementations, but without the single-number support model.  CIOs now had a choice, where applicable, to select the technologies that made sense for their organizations and to deliver quickly.

Fast forward to today and what you see is a mishmash of these two converged architectures, yet fundamental problems still have not been addressed.  We hear that IT budgets have remained flat all this time.  I call shenanigans on that, and here's why.  I managed a few data centers for ACS Xerox for five years.  Every year I was asked to cut staff and "trim the fat" from our OpEx and CapEx budgets, and this was when our organization had its best growth rates.  The fact is that IT budgets are shrinking because IT is looked at as a cost center, nothing more.  In the enterprise, CIOs are changing this model, turning IT into a charge center that provides chargeable services to the organization and bills them back, but they still need to do this smartly and at a low cost of entry.

Silos still exist today: the storage, network, and server teams are separated, and even worse, all three do not happily coexist with the application teams.  The time it takes to make all the changes needed to deliver a service or application to the business is still an issue.  For example, the server/virtualization team is asked to stand up an environment to support a new application that also requires storage and network changes.  In smaller environments this may happen quickly because all the resources are in the same room.  In larger environments it gets harder and usually stalls.  The storage team is supporting business unit X and can't get to it yet.  The network admin is working on an audit with business unit Y and is also delayed.  In the meantime the application teams are being pressured to deliver, and they still don't have the resources to do it.

Enter hyper-convergence.  The idea behind hyper-converged architectures comes from the very thing that got us out of the waste of the last decade: virtualization.  You may hear people today talk about this in the context of hardware, where the first phase being delivered is the integration of storage at the compute layer, but there's more, a lot more.  Hyper-convergence has everything to do with software and policy-driven infrastructure.  This is the second phase of moving to the software-defined data center (SDDC): collapsing the network and storage into the virtual stack, or at the very least providing an overlay of policy-driven management like ViPR from EMC.  We just went over some of the pitfalls that IT and business leadership still face today: diminishing budgets, business expectations of faster delivery, and a siloed IT support model.  Smaller organizations are dealing with little to no resources and budget.  They too want to take advantage of some of the things SDDC brings, but how do we enable them to get there?

Imagine an infrastructure that is easy to deploy, simple to consume, and in some cases backed by a single point of support.  An environment where the overhead of managing infrastructure is so small that time is spent assessing and delivering instead of reacting and catching up.  Now pair this with an operational and self-service support model, and you can truly deliver the software-defined data center vision.  Technologies like VSAN and ScaleIO, and companies like Nutanix and SimpliVity, are making this a reality at a low cost of entry.  CIOs are seeing the true value of policy-driven infrastructure.  These technologies are disruptive to our industry, and we must embrace them to move forward.  The benefits are too real to ignore.

It's no wonder Nutanix is the fastest-growing technology company out there today.
