Integrating VXLAN with Avaya VENA Fabric Connect

Get the White Paper

January 7th, 2013

by Avaya

VXLAN, VMware’s attempt at creating a next-generation VLAN technology, is intended to help businesses maximize the effectiveness of their server virtualization activities. Officially, “Virtual Extensible LAN (VXLAN) works by creating Layer 2 logical networks that are encapsulated in standard Layer 3 IP packets. A ‘Segment ID’ in every frame differentiates the VXLAN logical networks from each other without any need for VLAN tags. This allows very large numbers of isolated Layer 2 VXLAN networks to co-exist on a common Layer 3 infrastructure.” The intent is to build virtual domains on top of a common networking and virtualization infrastructure, domains that are completely isolated from each other and from the underlying network. That is the theory, anyway. However, the initial VXLAN specification was based on rather conventional networking concepts and made no allowance for the groundbreaking work already undertaken within the IEEE in defining Shortest Path Bridging (SPB).
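To make the encapsulation concrete, here is a minimal sketch of packing and parsing the 8-byte VXLAN header that carries the Segment ID (the 24-bit VNI), following the header layout in the VXLAN specification. The function names are illustrative, not part of any vendor's API.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid
VXLAN_UDP_PORT = 4789        # IANA-assigned UDP port for VXLAN

def build_vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header: flags(1) + reserved(3) + VNI(3) + reserved(1)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI (Segment ID) must fit in 24 bits")
    # First 32-bit word: flags in the top byte, 24 reserved bits.
    # Second 32-bit word: 24-bit VNI in the top three bytes, low byte reserved.
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Recover the 24-bit Segment ID from a VXLAN header."""
    flags_word, vni_word = struct.unpack("!II", header[:8])
    if not (flags_word >> 24) & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VNI-valid flag not set")
    return vni_word >> 8

hdr = build_vxlan_header(5000)
assert len(hdr) == 8
assert parse_vni(hdr) == 5000
```

On the wire, this header sits between an outer UDP datagram (destination port 4789) and the encapsulated inner Ethernet frame; the 24-bit VNI is what lets roughly 16 million logical networks share one Layer 3 underlay, versus 4,094 usable VLAN IDs.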

The Benefits of Building Networks With Shortest Path Bridging

Listen to the Podcast

July 23rd, 2012

Shortest Path Bridging, or SPB, was ratified by the IEEE in March 2012. It’s an active-active link protocol that replaces the older Spanning Tree Protocols. SPB is touted as a means to simplify the creation and configuration of carrier, enterprise and cloud networks by virtually eliminating human configuration error. In short, SPB is designed to scale. Avaya, an SPB leader, has implemented it within its data center and campus networking products in the hope of drastically simplifying the configuration of enterprise-wide virtual networks. Paul Unbehagen, an SPB co-author and Avaya director working on next-generation fabric standards and implementations, joins me to discuss SPB and the value it brings to enterprise and data center network design.

Lippis Report 169: Making Sense of Data Center Switching Fabrics

March 28th, 2011

In the Lippis Report, we have discussed the fundamental changes shaping a new data center network architecture. These drivers include massive virtualization; a sea change in traffic patterns, which are now dominated by east-west flows on top of existing north-south traffic; ultra-low latency; and the emergence of cloud-spec data centers. As a result, data center networking attributes are changing, with requirements for traffic steering in virtualized infrastructure, avoiding manual network changes as VMs move, removing the oversubscription imposed by spanning tree, streamlining network tiers to hasten east-west traffic flows, and more. The industry is responding to these changes and requirements with new approaches to data center networking, such as the Open Networking Foundation, Cisco’s FabricPath, Juniper’s QFabric, Brocade’s VCS, Avaya’s VENA and Nicira Networks’ network virtualization software. In this Lippis Report Research Note, we explore a key technology for enabling two-tier network fabrics: link aggregation and its various approaches, including Multi-Chassis Link Aggregation Group, Transparent Interconnection of Lots of Links (TRILL) and Shortest Path Bridging (SPB).
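All of these link aggregation approaches share one mechanic: spreading traffic across multiple active links while keeping each flow pinned to a single link so packets arrive in order. A minimal illustrative sketch of that per-flow hashing, with a made-up function name and CRC32 standing in for whatever hash a given switch ASIC actually uses:

```python
import zlib

def pick_lag_member(src_ip: str, dst_ip: str, src_port: int,
                    dst_port: int, num_links: int) -> int:
    """Hash a flow's 4-tuple onto one member link of an aggregation group.

    A deterministic per-flow hash keeps every packet of a flow on the
    same physical link, preserving ordering; this is why MC-LAG, TRILL
    and SPB all load-balance per flow rather than per packet.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_links

# Every packet of a given flow maps to the same member link.
link = pick_lag_member("10.0.0.1", "10.0.0.2", 49152, 80, 4)
assert link == pick_lag_member("10.0.0.1", "10.0.0.2", 49152, 80, 4)
```

The approaches differ mainly in scope: classic LAG hashes across ports on one chassis, MC-LAG across two chassis, while TRILL and SPB extend multipathing across the whole fabric.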


It’s Not Your Father’s Network

Get the White Paper

February 14th, 2011

By Ken Won, Director of Product Marketing at Force10 Networks

Server and storage environments have seen substantial change over the past ten years, while developments in networking have remained fairly static. Now, the demands of virtualization and network convergence are driving significant changes in the data center network. Networks have long been treated as plumbing that connects servers and storage, but new, dynamic switches are changing the network’s role in the overall data center. It’s not your father’s network anymore, and savvy data center managers need to understand and plan for the changes that are coming.

This white paper discusses new network technologies, explains what they are, and suggests how to plan for them in future data center architectures.