The computer industry has progressed from mainframes to client-server computing to cloud computing.

Cloud computing is the delivery of shared computing resources as a service via the Internet. It is built on a virtualized infrastructure, which allows multiple instances of infrastructure resources to run on the same hardware. This on-demand service delivery model improves efficiency, agility and scalability. It also affects IT in other ways:

  • IT value shifts to the network: The cloud is a network-centric computing model. Over time, more IT resources will be virtualized, pooled and then allocated to applications and services as network resources.
  • The network is evolving to a software-defined model: As virtual computing becomes a reality, the network itself will become a more agile resource through software-defined networking. Additionally, many network services, such as routing, will become virtual functions, giving customers the ability to deploy these services on the fly (see the sketch after this list).
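To make that last point concrete, here is a minimal, hypothetical sketch of what deploying a virtual routing function "on the fly" can look like in a software-defined network: an application asks a controller's API to program the behaviour, rather than an engineer reconfiguring physical routers. The controller URL, endpoint and JSON schema below are illustrative assumptions, not any particular vendor's API.

```python
# Illustrative sketch only: the controller URL, endpoint and JSON schema
# are hypothetical assumptions, not a specific vendor's API.
import requests

CONTROLLER_API = "https://sdn-controller.example.net/api/flows"  # hypothetical endpoint

def deploy_virtual_route(tenant: str, src_cidr: str, dst_cidr: str, next_hop: str) -> dict:
    """Ask the SDN controller to program a routing rule for one tenant on demand."""
    rule = {
        "tenant": tenant,
        "match": {"src": src_cidr, "dst": dst_cidr},
        "action": {"forward_to": next_hop},
        "priority": 100,
    }
    resp = requests.post(CONTROLLER_API, json=rule, timeout=5)
    resp.raise_for_status()
    return resp.json()

# Example: bring up a routing function for tenant "blue" without touching a physical router.
# deploy_virtual_route("blue", "10.1.0.0/16", "10.2.0.0/16", "vrouter-blue-01")
```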

It all sounds great but how do we get there?

While virtualization is the underlying technology that enables the cloud, fabric technology is its most crucial component. But it doesn't come easy, and it isn't one-size-fits-all.

Concepts like the network fabric have been in the IT lexicon for a while now, but the challenge is how to implement them. Fabrics, after all, are simply flattened, federated network architectures that replace the point-to-point topologies of the past with far more dynamic and scalable multipoint-to-multipoint designs. Along the way, you greatly enhance data throughput and flexibility while reducing capital-intensive physical infrastructure and redundant layers of management and manpower.
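To put rough numbers on "flattened": in a classic three-tier tree (access, aggregation, core), traffic between servers in different pods crosses up to five switches, while in a two-tier leaf-spine fabric any two servers are at most three switch hops apart, whichever leaves they hang off. The simplified Python sketch below just encodes those hop counts; real designs vary with oversubscription and topology.

```python
def hops_three_tier(same_access: bool, same_pod: bool) -> int:
    """Switch hops between two servers in a classic access/aggregation/core tree."""
    if same_access:
        return 1   # both servers sit on the same access switch
    if same_pod:
        return 3   # access -> aggregation -> access
    return 5       # access -> aggregation -> core -> aggregation -> access

def hops_leaf_spine(same_leaf: bool) -> int:
    """Switch hops in a flattened, multipoint-to-multipoint leaf-spine fabric."""
    return 1 if same_leaf else 3   # leaf -> spine -> leaf for any pair of leaves

if __name__ == "__main__":
    print("worst case, three-tier tree:", hops_three_tier(False, False), "hops")
    print("worst case, leaf-spine     :", hops_leaf_spine(False), "hops")
```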

Sounds simple? In truth, it is, once you have a working fabric firmly in place. The journey from here to there, however, is anything but. There is the thorny issue of incorporating legacy infrastructure into the mix (you don't want to scrap your entire plant and start over, do you?). Then there is a range of issues around traffic management, protocol support and the like.

Network vendors have poured a lot of resources into fabric development and are actively guiding the transition to a flatter network environment. The fact is that, without a high degree of network flexibility, none of the many much-vaunted cloud platforms would get off the ground.

However, this is a never-ending game. Virtualization has enabled the sharing of resources, improving data centre efficiency by orders of magnitude, but IT usage is constantly increasing. Workloads of various types compete for these shared resources, sometimes creating unpredictable behaviour and application interference. So system designers started creating partitions to isolate different applications and business functions. With every function living in a separate cluster, performance is predictable but efficiency drops: at any given moment some servers are under-loaded and others are over-loaded, yet the fear of performance impact makes it hard to share. It is a vicious circle, and breaking it is non-trivial.
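The efficiency cost of partitioning is easy to illustrate with a toy model. The sketch below uses entirely synthetic, bursty load figures (an assumption for illustration only): sizing each application's cluster for its own peak demands far more capacity than sizing one shared pool for the peak of the combined load, because the bursts rarely coincide.

```python
import random

random.seed(42)
APPS, SAMPLES = 4, 1000  # four applications, 1000 time slots (synthetic data)

def sample_load():
    # Bursty workload: usually light, occasionally spiking towards full utilisation.
    return random.uniform(80, 100) if random.random() < 0.05 else random.uniform(5, 20)

# Load per application per time slot (arbitrary units).
loads = [[sample_load() for _ in range(SAMPLES)] for _ in range(APPS)]

# Partitioned: each isolated cluster must be sized for its own peak.
partitioned_capacity = sum(max(app) for app in loads)

# Pooled: the shared infrastructure only needs to cover the peak of the combined load.
pooled_capacity = max(sum(app[t] for app in loads) for t in range(SAMPLES))

print(f"capacity needed, partitioned: {partitioned_capacity:.0f}")
print(f"capacity needed, pooled:      {pooled_capacity:.0f}")
print(f"capacity saved by sharing:    {1 - pooled_capacity / partitioned_capacity:.0%}")
```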

So where does all this leave us? Data centre evolution is big business. We must remain flexible and adaptable in the face of rapidly changing data environments.

Andrew Salmon

A true digital veteran with over 20 years’ experience working in the UK and international digital markets. He has a proven track record as a successful business consultant and senior executive in strategic planning, development and execution.