Lippis Report 148: What’s Driving The Multi-Billion-Dollar Data Center Ethernet Market
During last week’s Cisco Q3 FY10 quarterly financial conference call, John Chambers, Cisco’s CEO, said something that impressed and shocked me. The company had been quiet about growth rates for its Nexus line of data center switches until this call. What shocked me was that the Nexus 7000 is now on an annualized run rate of $1B; yes, that’s Billion with a B! I remember being interviewed by John Markoff of the NY Times in Jan ’08 about Cisco’s Nexus and Juniper’s yet-to-be-announced Ethernet switches. In just 27 short months, the Nexus product line, including the 7000, 5000 and 2000, has reached a $1.4B annualized revenue run rate for Cisco. Another insight gained from this ramp-up is that the data center networking trends we’ve discussed here in various Lippis Report Research Notes are powerful demand drivers for Cisco and the other companies participating in this lucrative emerging market, and it’s just starting! Companies such as Arista Networks, Force10 Networks, Blade Network Technologies, HP/3Com/H3C, Voltaire, Avaya, Brocade, Juniper, et al, hold unique positions and offerings as participants in this burgeoning market. In this Lippis Report Research Note, we review the mega trends driving this high market growth; we save a product review of each supplier for our next Lippis Report Research Note.
In addition to the run rate numbers above, Cisco also posted a milestone of 1 million 10GbE ports shipped, a strong indicator that the 10GbE market is nearing a tipping point to high volume as pricing drops and use accelerates. The following are the mega trends driving this tremendous market growth. Traffic demand drives bandwidth, and that’s the first mega trend.
Traffic Profile Changes: Gone are the days when data center networks primarily shuffled asymmetric email messages and low-bandwidth client-server application traffic between endpoints and servers. Best-effort data delivery, where latency was secondary to delivering data accurately, has given way to designs in which latency is a paramount element and 10 milliseconds can mean the difference between losing a customer and capturing revenue. Traffic is now highly mixed, moving around a data center in near-Brownian motion between servers, storage, internet and intranet, thanks to a plethora of old and new applications such as mash-ups, VoIP, search, backups, storage access, emerging converged I/O, etc. In addition to Brownian-motion traffic flows and low latency requirements, the volume of traffic continues to skyrocket and shows no sign of abating. Remember when the Dow dropped by 1,000 points in early May of this year? Financial services firms saw an average of 40 times the normal amount of traffic in their data centers as traders responded to the drop. There is no better driver of traffic volume than financial markets in turmoil. The traditional model of oversubscribing data center bandwidth by as much as 80:1 is still the norm, and IT business leaders are looking for a more efficient model.
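To make the 80:1 figure concrete, here is a minimal sketch of the oversubscription arithmetic. The port counts below are hypothetical examples, not figures from any vendor; oversubscription is simply offered edge bandwidth divided by available uplink bandwidth.

```python
def oversubscription(edge_ports: int, edge_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of total edge-facing bandwidth to total uplink bandwidth."""
    return (edge_ports * edge_gbps) / (uplink_ports * uplink_gbps)

# A legacy access design: 80 x 1 GbE server ports sharing one 1 GbE uplink.
legacy = oversubscription(80, 1, 1, 1)    # 80.0 -> the 80:1 norm cited above

# A more efficient design: 48 x 1 GbE server ports with 4 x 10 GbE uplinks.
modern = oversubscription(48, 1, 4, 10)   # 1.2 -> close to line rate

print(f"legacy {legacy:.0f}:1, modern {modern:.1f}:1")
```

The same arithmetic explains why near-Brownian server-to-server traffic breaks the old model: heavily oversubscribed uplinks assume most traffic stays local, which is no longer true.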
Workload Mobility: With the advent of server virtualization, IT leaders are able to decouple an operating system from its underlying server hardware and run multiple operating system instances on a single server. Server virtualization reduced the number of physical servers needed and, in the process, reduced energy and cooling requirements. Now that an operating system only needs to know which hypervisor it’s running on, that operating system instance and the applications it services can be moved from one physical server to another in near real time with the click of a mouse, providing workload mobility, or portability, as well as a rapid application provisioning tool.
So what does all of this have to do with networking? A lot. First, moving these workloads around a data center consumes huge bandwidth and imposes low latency requirements, driving up raw bandwidth demand. Second, and most importantly to the industry, networking, or should I say the rigid structure of IP addressing, VLANs, etc., is impeding the automation of these workload moves. In short, the data center network needs to be reconfigured when VMs are moved from one physical server to the next within the same data center, and it simply does not work if a VM is moved between data centers separated over distance, between a data center and a cloud provider, or between cloud providers. This is the focus of the Infrastructure 2.0 working group.
Doug Gourlay said it best in his recent Network World post.
“When moving VMs between machines there is a caveat: if you want your TCP connections and IP addressing to stay intact the receiving physical host must be capable of supporting the same IP address that the VM moving to it is actively using. This means that both physical hosts have to be in the same subnet or in the same VLAN depending which layer of the network you are looking at. Since the largest number of physical servers that can be supported doing this is around 64 it doesn’t change the addressing architecture too much, unless the servers are in different data centers, or are connected to different access layer switches that talk to different aggregation layer switches. If this is the case the network architecture all of a sudden starts dramatically impeding the movement of VMs: either VM mobility is impeded, or the network is redesigned.
Some people often ask me, ‘can’t I do this with DNS?’ In short, no. DNS is cached at many client sites, ignoring your TTL. Additionally, DNS is cached on many PCs for the life of an application session. If you try to change the IP address of your backup server while you are in the middle of a 2GB backup do not expect the connection to continue. TCP doesn’t work this way.”
Increased Density: It’s no secret that data centers are bursting at the seams, as the economic downturn kicked large IT capital outlays down the road until conditions improved. Business leaders have postponed adding data center space, that is, square footage, even as power density has grown exponentially and cooling requirements have increased unabated. Power and cooling capacity are the primary constraints to data center expansion. To deal with these realities, IT business leaders are left with one option: appropriate capital either to upgrade power and cooling systems or to build a new data center. The impact of high energy densities is that server hardware is no longer the primary cost component of a data center. The purchase price of a new 1U server is now exceeded by the capital cost of the power and cooling infrastructure needed to support it, and will soon be exceeded by that server’s lifetime energy costs alone. In short, energy costs are on their way to dominating data center economics.
To help mitigate these trends, the new data center switches offer increased server connection density at lower energy consumption levels. In addition, their own energy consumption to shuffle packets around has been reduced, for some by as much as 50%. To connect an ever denser set of servers, the new generation of data center switches boasts a two-tier network architecture that can support thousands to tens of thousands to hundreds of thousands of servers. To deal with high server connection density, servers attach to leaf switches, while leaf switches and storage connect to modular spine switches. The two-tier approach offers efficient connectivity density, low latency (although this depends heavily on internal switch design), and readiness for consolidated I/O.
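A rough sizing sketch shows how the two-tier leaf-spine design scales. The port counts below are hypothetical, chosen only to illustrate the arithmetic: each leaf uplink attaches to a distinct spine, so the spine’s port count caps the number of leaves, and server capacity scales with the leaf count.

```python
def leaf_spine_capacity(leaf_server_ports: int, leaf_uplinks: int,
                        spine_ports: int) -> dict:
    """Capacity of a simple two-tier fabric: one spine per leaf uplink,
    one leaf per spine-switch port."""
    leaves = spine_ports
    return {
        "spines": leaf_uplinks,
        "leaves": leaves,
        "servers": leaves * leaf_server_ports,
    }

# Hypothetical fabric: leaves with 48 server ports and 4 uplinks,
# modular spines with 128 ports each.
fabric = leaf_spine_capacity(48, 4, 128)
print(fabric)   # {'spines': 4, 'leaves': 128, 'servers': 6144}
```

Scaling to the "hundreds of thousands of servers" the article mentions means larger spines, more uplinks per leaf, or multi-stage spines; the two-tier arithmetic stays the same.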
Consolidated I/O, while early in its adoption cycle, will go a long way toward reducing the power consumption of servers, as they will have a single network interface for both storage and networking. In addition, consolidated I/O promises to eliminate the need for a separate storage switch, again reducing capital, energy and cooling costs.
Back to server density. Server density will only get, well, denser. If the industry trajectory of cloud computing comes anywhere near what conventional wisdom dictates, then there will be more and more highly dense cloud computing sites supporting an ever increasing number of enterprise, government and consumer applications. How many cloud computing sites does the US need to support all IT applications? With nearly 16 million servers installed nationwide, according to IDC, and with each cloud computing site supporting hundreds of thousands of servers, the number of cloud computing sites would perhaps be in the hundreds. While it’s unrealistic that all US enterprises and governments will be hollowed out of their data centers and applications via cloud computing with today’s technology and business-control beliefs, the trend line is clear: there will be a smaller number of very large cloud providers delivering applications to a wide range of customers. Much as a supernova collapses into a black hole, applications will not be able to escape the gravitational pull of the scale and economics of cloud computing if the industry gets anywhere near this size.
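The "hundreds of sites" estimate follows directly from the article’s figures: roughly 16 million installed US servers (per IDC) divided by sites holding hundreds of thousands of servers each. The per-site sizes below are illustrative assumptions, not IDC data.

```python
installed_servers = 16_000_000   # approximate US installed base, per IDC

for servers_per_site in (100_000, 200_000, 400_000):
    sites = installed_servers // servers_per_site
    print(f"{servers_per_site:,} servers/site -> {sites} sites")
```

At 100,000 servers per site the answer is 160 sites; at 400,000, only 40 — so even under generous assumptions, nationwide demand fits in hundreds of mega-sites or fewer.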
The networking industry has been busy adapting to these powerful trends with new internal switching architectures, data center network architectures and automation. Internal switching architectures are being designed with high internal switching capacity in the terabit range, lower energy consumption in the 10W/port range, low latency and, of course, high port density. The data center network architecture most are progressing toward is the two-tier leaf-spine approach mentioned above. These switches possess the highest levels of reliability, serviceability and redundancy, as networking is at the center of this massive server connectivity density.
Network automation is another area of investment where VMs can be moved within and between data centers, as well as between data centers and cloud providers, plus between cloud providers. A few companies are addressing network automation, but this is a huge issue that the industry needs to wrap its arms around and provide a scalable solution.
In the next Lippis Report Research Note, we’ll review Cisco, Arista Networks, Force10 Networks, Blade Network Technologies, HP/3Com/H3C, Voltaire, Avaya, Brocade, Juniper, et al, and highlight their unique positions and offerings as participants in this burgeoning market.