Lippis Report 154: Is Networking Too Rigid?
Networking has become “rigid.” Yes, I know it’s almost absurd to attribute inflexibility to networking. Look what TCP/IP has done for us: nearly 2 billion people are connected to the internet, and according to Internet World Stats, internet usage grew by roughly 380% between 2000 and 2009. With 2 billion people and growing online, accessing a plethora of applications from a wide range of end-points, there is no doubt that the internet and TCP/IP have been a far bigger success than anyone would have imagined back in the early ’90s. But there has always been a give and take between computing and networking, where one drives and changes the other. Right now we are in a compute innovation cycle that is driving a fundamental change in networking, one that screams out for more flexibility.
Sure, networking has increased bandwidth, and the IETF has added new protocols and network services, but networking hasn’t kept up with compute innovation. As data centers pack more compute power and more operating systems (OSs) per physical server, thanks to virtualization, the need to move containers of OS plus applications and data around has skyrocketed. In addition, traffic patterns have shifted tremendously as client-server or north-south flows are layered on top of server-server or east-west flows. And yes, there are new networking approaches being offered by vendors and standards organizations, such as Cisco’s FlexPath, Juniper’s Stratus, Brocade’s VCS, Extreme’s Direct Attach, Force10’s Open Automation, Arista’s Multi-Chassis Link Aggregation, BLADE’s Unified Fabric Architecture, the IETF’s TRILL and LISP, and the IEEE’s 802.1aq, but these may be short-term solutions to a much bigger networking problem.
Computing has always driven network design. Mainframes drove SNA and analog multi-point wide area networks (WANs) during the ’70s. Mini-computers drove peer-to-peer networking protocols like DECnet, OSI and TCP/IP in the ’80s. Client-server computing drove LANs and TCP/IP into the mainstream in the early ’90s. The web drove the internet in the 2000s, and now server virtualization and cloud computing are once again changing fundamental networking requirements, demanding that networks become more flexible.
The rigid label is a powerful one, as it signals frustration with a network that neither addresses nor enables new business processes. Every time a network protocol or architecture was labeled too rigid, it was replaced, and in the process a new market emerged on the scale of tens of billions of dollars. SNA was labeled too rigid to support peer-to-peer networking. The T1 multiplexer market of the late ’80s and early ’90s was too rigid to support data traffic, and thus routing replaced it. The PSTN and TDM were too rigid, doling out bandwidth in 56-Kbps chunks and unable to support internet and VoIP traffic. The national entertainment network is rigid too, as it doesn’t support two-way communications, and it will also be replaced, slowly but surely.
So where is networking not flexible enough? In virtualized data centers. Some analyst groups estimate that 30% of workloads are virtualized, and the share is increasing. Since the virtual machine (VM) is the new atomic unit of the data center, networking is falling short in public as well as private clouds. Ideally, all resources (compute, storage and networking) would be pooled, with services dynamically drawing from the pools to meet demand. Virtualization techniques have succeeded in enabling processes to be moved between machines, but constraints in the data center network continue to create barriers that prevent agility: VLANs, ACLs, broadcast domains, load balancers, firewall/IPS security settings and service-specific network engineering.
The well-understood problem is that when a VM is moved from one physical machine to another, the network, load balancers, firewalls/IPS, broadcast domains, etc., have to be reconfigured. There is no automation in place, meaning that the network is not flexible or agile enough to make the changes required. And this problem scales, as it is a growing requirement of both IT executives managing corporate IT assets and service/cloud providers.
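To make the missing automation concrete, here is a minimal sketch of what a migration hook would have to do: carry a VM’s network profile (VLAN, ACLs) from the source switch port to the destination port when the VM moves. All class and function names here are illustrative, not drawn from any vendor’s product.

```python
class Switch:
    """Toy switch model: maps a port name to its network profile."""
    def __init__(self, name):
        self.name = name
        self.port_profiles = {}

    def apply_profile(self, port, profile):
        self.port_profiles[port] = profile


def migrate_vm(vm_name, src, src_port, dst, dst_port):
    """Move a VM's network profile along with the VM itself.

    In a real system the hypervisor moves the VM; this hook only
    handles the network side, which is the step the article says
    is done by hand today.
    """
    profile = src.port_profiles.pop(src_port)   # release the old port config
    dst.apply_profile(dst_port, profile)        # reconfigure the new port
    return dst.port_profiles[dst_port]


# Example: a web-tier VM on VLAN 10 with one ACL moves between racks.
rack_a, rack_b = Switch("rack-a"), Switch("rack-b")
rack_a.apply_profile("eth1", {"vlan": 10, "acl": ["permit tcp any any eq 80"]})
moved = migrate_vm("web-01", rack_a, "eth1", rack_b, "eth7")
print(moved["vlan"])  # → 10
```

The point of the sketch is the pattern, not the code: the profile follows the VM, so no human has to touch VLANs, ACLs or firewall settings after a move.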
There are market solutions available today, and more are coming, that address “network automation,” enabling the network to reconfigure itself as a VM and/or workload is moved within a data center. Cisco’s Nexus 1000V, HP’s Network Automation software and its Virtual Connect approach, Force10’s Open Automation, Blade Network Technologies’ VMReady Network Virtualization, Arista Networks’ Virtualized Extensible Operating System or vEOS, and others are addressing the problem of network agility, or the lack thereof, in virtualized environments.
But the problem gets bigger and more complex when distance and cloud provider entities become involved. None of the solutions above addresses moving a VM from one physical server to another over large distances, be it across town, across state lines, across the country or around the globe. Some are using IF-MAP as a registry, a sort of Facebook for computers, in which machines publish their resources; this information is then used to automate network configuration to support long-distance VM moves.
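The registry idea can be sketched in a few lines: hosts publish their capabilities to a shared metadata store, and an orchestrator queries it to find a suitable destination for a long-distance move. The API below is invented for illustration; real IF-MAP is a publish/search metadata protocol defined by the Trusted Computing Group, not this toy interface.

```python
class ResourceRegistry:
    """Toy IF-MAP-style store: hosts publish metadata, movers search it."""
    def __init__(self):
        self._entries = {}

    def publish(self, host, metadata):
        self._entries[host] = metadata

    def search(self, **criteria):
        # Return every host whose published metadata matches all criteria.
        return [host for host, md in self._entries.items()
                if all(md.get(k) == v for k, v in criteria.items())]


registry = ResourceRegistry()
registry.publish("nyc-host-1", {"site": "nyc", "vlan": 10, "free_cores": 4})
registry.publish("sfo-host-9", {"site": "sfo", "vlan": 10, "free_cores": 16})

# Find a host across the country that already carries the VM's VLAN,
# so the move needs no manual network re-engineering at the far end.
candidates = registry.search(site="sfo", vlan=10)
```

A usage note: the value is in the lookup step, because the destination network configuration is discovered rather than hand-provisioned before each move.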
The problem gets larger yet when workloads move from a private cloud to a public cloud. (Definition note: there is no single definition of a workload, so for my purposes here I assume a container, comprising a VM plus associated applications and data, that can be moved as simply as a drag and drop or some other string of instructions. In short, all the software needed to compile and run an application for a set of users is a workload.) The network inflexibility problem grows even larger when moving workloads between public clouds.
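One way to make that working definition concrete is as a self-describing container: the VM image, the applications, the data volumes and the network profile needed to reattach it somewhere else. The field names below are illustrative, not taken from any standard.

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    """A movable unit per the article's definition: VM + apps + data,
    plus the network state that must travel with it."""
    vm_image: str
    applications: list
    data_volumes: list
    network_profile: dict = field(default_factory=dict)

# Hypothetical example: everything a payroll service needs to run,
# bundled so it can be handed to a hypervisor or a cloud provider.
payroll = Workload(
    vm_image="rhel5-payroll.img",
    applications=["payroll-app-2.3"],
    data_volumes=["/vol/payroll-db"],
    network_profile={"vlan": 20, "ip": "10.1.20.5"},
)
```

The design point is that the network profile is part of the workload, not part of the site: that is what lets a move between clouds avoid manual re-engineering.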
Now, is this a real problem? You bet it is. Consider the value of portable or mobile workloads to enterprises and service providers. Workload mobility means capacity on demand, business continuance, disaster recovery and more. In addition, as IT leaders explore public and private cloud alternatives, they will want to move workloads from their data center to a provider’s and move them back when and if required. For reasons of security and trust, IT business leaders will demand mobility. If your cloud provider goes bankrupt, you will want to move your workload out quickly. If your cloud provider’s performance drops repeatedly, you could move your workload out. If your cloud provider is the target of a terrorist attack or is turned into a large botnet, you can move your workload out.
In addition to security and peace of mind, mobile workloads will fundamentally change IT delivery, capital structure and, most importantly, business models and processes. Once IT can move a workload anywhere in its data center, across its data centers or to a provider it has tiered with, the question becomes when and how fast does IT move workloads? If IT can perform all the provisioning in software and enable workload moves to occur transparently and safely, with address, identity and security preservation, enabled trust, control and interoperability across providers, then the question is when does IT need to move a workload? This level of mobility is an industry-wide initiative, as it offers significant and material business value. Business value is created as IT could move workloads in a follow-the-sun model, following the lowest cost per kilowatt-hour; workloads could move to avoid a disaster, for capacity on demand, for the lowest cost of workload execution and so on.
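At its simplest, the follow-the-sun, lowest-cost placement described above reduces to picking the site with the cheapest power at the moment of the move. The prices below are made up for illustration; a real placement engine would also weigh latency, capacity and compliance.

```python
# Hypothetical $/kWh at three sites right now.
power_price = {"virginia": 0.068, "oregon": 0.052, "frankfurt": 0.091}

def cheapest_site(prices):
    """Return the site with the lowest current power price."""
    return min(prices, key=prices.get)

print(cheapest_site(power_price))  # → oregon
```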
So how can data center networks become more flexible? A key element of the solution is agility: the ability to dynamically grow and shrink resources to meet demand and to draw those resources from the optimal location. Today, the network stands as a barrier to agility and increases the fragmentation of resources, which leads to low server utilization and prevents portable or mobile workloads.