Lippis Report 126: Unified Fabric Options Are Finally Here

Nick Lippis

Data center IT pros live in interesting times: they have not seen design changes this sweeping since IBM introduced the System/360 architecture in the early 1960s. While Moore's Law maps out a hardware trajectory of higher capacity, greater density and lower pricing, a new software approach to computing, networking and storage has been building over the past few years that accelerates the effect of Moore's Law and fundamentally changes data center design and IT delivery. At the center of this change is the virtualization of computing, storage and networking, which is starting to expand beyond the data center all the way to client endpoints. The economics and utility of virtualization are well documented: power, cooling and server counts drop as the number of applications running on each server rises. And while the industry readies for a second generation of virtualized data centers built on VMware's vSphere 4, another data center innovation is finally taking shape, one that consolidates LAN and SAN switches, reduces cabling requirements and cost, and increases performance. This innovation is called a unified fabric. In this Lippis Report Research Note we discuss the unified fabric (UF) from architecture, maturity and value proposition points of view.

I ran a little experiment using social networking via Twitter to get a pulse on the UF market. I used http://monitter.com, a Twitter monitor that lets you "monitter" the Twitter world for a set of keywords and watch what people are saying. I monitored "unified fabric" for an entire day, just letting it run. There was not one UF posting, meaning that few are discussing UF. Translation: UF is early to market, although there are now UF products and/or support from a wide range of companies including Cisco, HP, IBM, Brocade, QLogic, Intel, EMC, Sun, Mellanox, Fusion-io, Xsigo, Emulex and many others.

The fascinating aspect of UF is the simplicity and cost reduction it offers to data center network and storage design. The goal of UF is to consolidate LAN and SAN networks into one. The implication of UF is simplification: only one network adapter is needed in a server to support storage, IP and inter-processor communication (IPC) flows. In short, the expensive Host Bus Adapters (HBAs) are virtualized on a network interface card (NIC) or on the server itself. In some cases all I/O is virtualized, with HBA, network and clustering drivers virtualized on the server. Further, only one cable and server connection is needed for both storage and network traffic, which reduces server-to-network and server-to-storage cabling within the rack by over 50%. There is no need for both a storage switch and a network switch; a converged network switch will suffice, reducing the number of switches by over 50%. UF equates to lower cost, complexity, equipment count, power and cooling requirements, and increased performance too.
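To make that consolidation arithmetic concrete, here is a minimal back-of-envelope sketch counting cables, adapters and top-of-rack switches for a single rack before and after UF. The rack size and per-server adapter counts are assumptions chosen for illustration, not figures from this note.

```python
# Back-of-envelope model of per-rack cabling and switch consolidation
# under a unified fabric. All counts below are illustrative assumptions.

servers_per_rack = 20

# Traditional design: separate LAN and SAN connectivity per server.
nics_per_server = 2                       # dual-homed Ethernet NICs
hbas_per_server = 2                       # dual-homed FC HBAs
traditional_cables = servers_per_rack * (nics_per_server + hbas_per_server)
traditional_switches = 2 + 2              # 2 Ethernet + 2 FC top-of-rack switches

# Unified fabric design: one pair of converged network adapters (CNAs)
# per server carries storage, IP and IPC flows over 10 Gbps Ethernet.
cnas_per_server = 2
unified_cables = servers_per_rack * cnas_per_server
unified_switches = 2                      # 2 converged top-of-rack switches

cable_reduction = 1 - unified_cables / traditional_cables
print(f"Cables per rack: {traditional_cables} -> {unified_cables} "
      f"({cable_reduction:.0%} reduction)")
print(f"Top-of-rack switches: {traditional_switches} -> {unified_switches}")
```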

So why haven't IT departments flocked to deploy UF? It has taken a while to develop all the standards needed to implement UF, and in some areas the standards are still under development, but by the end of 2009 UF standards should be ratified. The fall of 2009 should kick off the UF market, with 2010 being the year of widespread experimentation and data center piloting. UF deployment will be a multi-year rollout with significant revenues generated in 2010.

Unified Fabric Architecture

What's intriguing about UF is its architecture and its attributes of increased server performance and lower cost and complexity. UF is the ability of a switch and host adapter to use the same physical infrastructure to carry different types of traffic that typically have very different characteristics and handling requirements. While most UF is based upon 10 Gbps Ethernet as its foundation, InfiniBand, thanks to its high data link speeds and low latency, is also being used.

UF is comprised of three primary hardware components: a converged network adapter (CNA), a 10 Gbps Ethernet link, preferably the twinax SFP+ variety, and a 10 Gbps UF switch that supports storage, inter-processor communication (IPC) and IP data packets. As you can guess, this 10 Gbps Ethernet link is special, as it needs to support storage and IPC traffic flows, which are not forgiving of the dropped packets that TCP/IP was designed to tolerate. In short, UF calls for Ethernet to be partitioned into lossless and lossy logical links, accomplished by extending the IEEE 802.1Q priority and IEEE 802.3x Pause concepts in what has been named Converged Enhanced Ethernet (CEE, pronounced "sea"). The 10 Gbps Ethernet UF switches need to ensure strict bandwidth scheduling for storage, IPC and IP traffic, plus automated configuration and forwarding of lossless and lossy traffic flows, which is the job of Data Center Bridging (DCB). DCB is close to being standardized in IEEE P802.1Qbb, IEEE P802.1Qau and IEEE P802.1Qaz. Just this May, the University of New Hampshire InterOperability Laboratory hosted a DCB plugfest that demonstrated interoperability among DCB vendors including Cisco, Dell, QLogic, Intel, NetApp, Fulcrum Microsystems and Finisar.
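To illustrate the bandwidth-scheduling piece of DCB, here is a minimal sketch of ETS-style allocation (the idea behind IEEE P802.1Qaz) on a 10 Gbps converged link: each traffic class receives a guaranteed minimum share, and classes with unmet demand may borrow whatever the others leave idle. The class names, percentage shares and offered loads are illustrative assumptions, not a vendor configuration.

```python
# Illustrative model of ETS-style bandwidth scheduling on a 10 Gbps
# converged link. Class names, shares and offered loads are assumptions.

LINK_GBPS = 10.0

# Guaranteed minimum share per traffic class (sums to 100%).
ets_shares = {"fcoe_lossless": 40, "ipc_lossless": 20, "ip_lossy": 40}

# Current offered load per class, in Gbps.
offered = {"fcoe_lossless": 2.0, "ipc_lossless": 1.0, "ip_lossy": 9.0}

def ets_allocate(shares, offered, link=LINK_GBPS):
    """Give each class its guarantee (capped at its demand), then let
    classes with unmet demand share the leftover bandwidth."""
    alloc = {c: min(offered[c], link * pct / 100) for c, pct in shares.items()}
    leftover = link - sum(alloc.values())
    unmet = {c: offered[c] - alloc[c] for c in shares if offered[c] > alloc[c]}
    total_unmet = sum(unmet.values())
    for c, gap in unmet.items():
        alloc[c] += leftover * gap / total_unmet if total_unmet else 0
    return alloc

for cls, gbps in ets_allocate(ets_shares, offered).items():
    print(f"{cls:15s} {gbps:4.1f} Gbps")
```

In this example the lossy IP class borrows the bandwidth the lossless classes are not using, ending up with 7 Gbps while the storage and IPC guarantees remain intact.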

Three Main Storage Architectures

With three primary storage architectures, UF gets a little messy or rich, depending on your perspective. The three are iSCSI, Fibre Channel (FC) and InfiniBand (IB). IB is used to connect servers to storage in high-performance data centers, as its architecture boasts quality of service, low latency and failover, and it is scalable from 2 Gbps to 96 Gbps. Most IB implementations are running at 20 Gbps, moving to 40 Gbps. FC represents some 20% of all server-storage connections thanks to its buffer-to-buffer (B2B) link credit mechanism, which ensures lossless operation; it scales from 1 to 12 Gbps, with 2, 4 and now 8 Gbps speeds commercially available. Dell'Oro pegs the FC switch and HBA market at approximately $2.7B. iSCSI utilizes TCP, which ensures reliable delivery, and scales with Ethernet from 1 Gbps to 10 Gbps and above. iSCSI is the fastest growing category in the storage market, with revenue growth of 76% between 2005 and 2010, according to IDC. IDC forecasts iSCSI to be a $5B market in 2010, representing nearly 20% of the external disk storage market, up from 3% in 2005. These numbers tell the story of why HP bought LeftHand Networks and Dell bought EqualLogic, both of which are iSCSI providers. 10 Gb Ethernet is a boon for UF as it starts to offer the bandwidth to support FC, IB and/or iSCSI storage flows. Over time 40 and 100 Gb Ethernet will be available, but with the dominant Ethernet speed in data centers being 1 Gbps, 10 Gb Ethernet is a sure bet for UF over the next several years.

iSCSI

iSCSI runs over IP today without the need for a special CNA. iSCSI can run over Ethernet, IB, ATM, Frame Relay, MPLS, et al. But iSCSI's reliance on TCP for reliable transport has caused many data center managers to pause, thanks to concerns over jitter, latency and reliability at 1 Gbps Ethernet speeds. The vast majority of iSCSI users build separate Ethernet networks to support iSCSI and IP traffic, with a few segmenting the traffic via VLANs. 10 Gbps Ethernet potentially removes the pause, as the higher speed may mitigate previous concerns, with iSCSI and IP traffic flowing over a single 10 Gbps network interface card (NIC). If this pans out, then iSCSI may realize a surge in popularity, as it is widely supported by all the major server vendors. It's interesting that Solid-State Drive (SSD) innovator Fusion-io offers iSCSI over Ethernet via a PCI Express adapter to access its SSDs, meaning that the SSD performance leader feels comfortable using iSCSI and Ethernet for SSD access.
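Because iSCSI rides on ordinary TCP/IP, a server needs nothing more exotic than a standard NIC and a socket to reach a target; the well-known iSCSI port is TCP 3260. Here is a minimal sketch that simply checks whether a target portal is reachable before a real initiator would log in; the portal address is a placeholder, not a real target.

```python
# Minimal reachability check for an iSCSI target portal over plain TCP.
# The portal address below is a placeholder; 3260 is the well-known
# iSCSI port. A real initiator (e.g. the OS iSCSI stack) handles login
# and SCSI command encapsulation on top of this same TCP transport.
import socket

ISCSI_PORT = 3260

def portal_reachable(host: str, port: int = ISCSI_PORT, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    target_portal = "192.0.2.10"   # placeholder address (TEST-NET-1)
    print(f"iSCSI portal {target_portal}:{ISCSI_PORT} reachable:",
          portal_reachable(target_portal))
```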

IB

There are many IB providers, such as Voltaire, Mellanox, et al., but only a few are using IB as a UF. For example, Xsigo Systems uses IB as a UF while servers see virtualized NICs and HBAs; the server is completely unaware that it is using IB. The administrator can create vNICs and vHBAs on the I/O Director, and these show up as Ethernet interfaces or HBAs on the server while the I/O Director gateways into Ethernet and even FC networks. Accenture Software Utility Services uses Xsigo's VP780 I/O Director and provides data center services to such firms as Best Buy, Mass Mutual, Continental Airlines, Virgin Blue, JetBlue, Net2Phone, et al., proving IB UF viability. IB is used as a UF construct here, connecting server-to-storage and server-to-server links with gateways into Ethernet and FC LANs/SANs. IB providers are moving down market too, from their High Performance Computing (HPC) heritage, in an effort to broaden IB's appeal to data center professionals as a UF. There are also proposals for IB over Ethernet (IBoE). However, the bulk of IT suppliers are either offering or announcing FCoE or iSCSI UF solutions.
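The claim that the server is unaware of the InfiniBand transport can be checked from the host side: vNICs presented by an I/O director appear to the operating system as ordinary network interfaces. A minimal sketch, assuming a Linux host where interfaces are exposed under /sys/class/net; interface names and the presence of vNICs depend entirely on the deployment.

```python
# List network interfaces exactly as the server OS sees them. On a host
# attached to an I/O director, vNICs show up here like any other
# Ethernet device; the underlying IB transport is invisible to applications.
import os

SYSFS_NET = "/sys/class/net"   # standard Linux sysfs location

def list_interfaces():
    for name in sorted(os.listdir(SYSFS_NET)):
        addr_path = os.path.join(SYSFS_NET, name, "address")
        try:
            with open(addr_path) as f:
                mac = f.read().strip()
        except OSError:
            mac = "unknown"
        print(f"{name:12s} {mac}")

if __name__ == "__main__":
    list_interfaces()
```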

Fibre Channel Over Ethernet

In the FCoE switch UF market a few companies dominate, namely Cisco Systems and Brocade. Cisco offers its Nexus 5020 FCoE switch, while Brocade has recently introduced its 8000 FCoE switch. EMC also provides an FCoE switch, the Connectrix NEX-5020, which is a rebadged Cisco Nexus 5020. Converged network adapters (CNAs) that combine the functionality of an Ethernet NIC and an FC HBA are available from Emulex, QLogic, Intel and Brocade. Native FCoE support on NetApp SAN storage arrays has been announced, while EMC's new Symmetrix V-Max supports native FCoE. Look for HDS, IBM, HP, Compellent, Dell, Sun, Pillar, Fujitsu, et al., to announce native FCoE during the fall of '09. Many of these firms are working with QLogic to use its CNA ASIC on their array controller boards, which would provide native FCoE support and connect directly to 10 Gb Ethernet switches.
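On the wire, FCoE traffic shares the converged link with IP but is identified by its own EtherType, 0x8906 (with 0x8914 used by the FCoE Initialization Protocol, FIP). A minimal sketch of classifying frames by EtherType, assuming untagged Ethernet II framing for simplicity:

```python
# Classify Ethernet frames on a converged link by EtherType.
# Assumes untagged Ethernet II framing; 802.1Q-tagged frames would
# carry the EtherType four bytes later in the header.
import struct

ETHERTYPE_NAMES = {
    0x0800: "IPv4",
    0x86DD: "IPv6",
    0x8906: "FCoE",
    0x8914: "FIP (FCoE Initialization Protocol)",
}

def classify(frame: bytes) -> str:
    if len(frame) < 14:
        return "runt frame"
    (ethertype,) = struct.unpack("!H", frame[12:14])
    return ETHERTYPE_NAMES.get(ethertype, f"other (0x{ethertype:04x})")

# Example: a synthetic frame whose EtherType field says FCoE.
sample = bytes(12) + struct.pack("!H", 0x8906) + bytes(48)
print(classify(sample))   # -> FCoE
```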

Increasing Server Performance

Beyond component availability, UF has a large role to play in increasing server performance. With the advent of 10 Gb and soon 40 to 100 Gb Ethernet, networking speeds are now outpacing CPU speeds, which means servers will have to work harder to keep up. When servers participate in network processing, application performance suffers, as a large amount of CPU time is spent in the TCP/IP stack copying data and managing buffers. To increase application performance, especially in server-to-server and inter-processor communications, Remote Direct Memory Access (RDMA) was developed to allow computers in a network to exchange data in main memory without involving the processor, cache or operating system of either computer. Further, the IETF developed the Internet Wide Area RDMA Protocol (iWARP) to carry RDMA over standard TCP/IP networks.
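A back-of-envelope sketch of why this matters at 10 Gbps line rate: on a conventional TCP receive path the payload crosses the memory bus several times (NIC DMA into a kernel buffer, then a CPU copy into the application buffer), while an RDMA transfer lands directly in registered application memory. The crossing counts and the memory bandwidth figure below are assumptions for illustration, not measurements.

```python
# Rough model of memory-bus traffic generated by sustained 10 Gbps
# receive traffic. Crossing counts and memory bandwidth are
# illustrative assumptions.

LINE_RATE_GBPS = 10.0
MEMORY_BW_GBPS = 100.0        # assumed usable memory bandwidth

def memory_traffic(crossings_per_byte: int) -> float:
    """Gbps of memory-bus traffic for a given number of times each
    received byte crosses the memory bus."""
    return LINE_RATE_GBPS * crossings_per_byte

# Conventional TCP receive path: NIC DMA write into a kernel buffer (1),
# CPU read of that buffer (2), CPU write into the application buffer (3).
tcp_gbps = memory_traffic(3)

# RDMA receive path: NIC DMA writes directly into registered
# application memory (1 crossing), bypassing the kernel copy.
rdma_gbps = memory_traffic(1)

for label, gbps in [("TCP stack", tcp_gbps), ("RDMA", rdma_gbps)]:
    print(f"{label:9s}: {gbps:.0f} Gbps of memory traffic "
          f"({gbps / MEMORY_BW_GBPS:.0%} of assumed memory bandwidth)")
```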

But alas, there is disagreement on which standard to use: RDMA over Ethernet or iWARP over Ethernet. For example, Intel boasts that it will support iWARP over Ethernet on every motherboard, while others support RDMA over Ethernet. The reason this discussion is relevant to UF is that 10 Gbps Ethernet is a fundamental UF technology; at line rates of 10 Gbps and higher, non-RDMA network transfers consume significant amounts of the available memory bandwidth and result in system CPUs stalling on memory accesses. In short, RDMA allows servers to keep up with network speeds, and since UF is enabled by 10 Gb Ethernet, RDMA needs to be included in a UF solution. The OpenFabrics Alliance, a consortium of IT suppliers and government and corporate IT professionals, is working to deliver a unified, cross-platform, transport-independent software stack for RDMA, architected for high performance, low latency and maximum efficiency. The OpenFabrics software is being bundled with VMware, HP blade servers and Red Hat Linux, with a Windows version available for IB. It needs to accelerate its work for FC and iSCSI.

Status and Issues

At this point in time, there is clear momentum behind FCoE, as it enjoys the widest support across IT suppliers, and it's likely this momentum will only increase as we enter the fall of '09. While there are only a few FCoE switches available today, every major Ethernet and SAN switch supplier will offer an FCoE switch either by the end of '09 or in early '10. So for those with FC storage infrastructure, it's time to start experimenting and piloting a small FCoE island to gain skills and comfort with the technology.

There are issues with FCoE too. Currently, there are only two switches to choose from, limiting choice. In addition, where FCoE termination occurs will change over time: FCoE is terminated in the FCoE switch today, but over time it will be terminated in disk arrays, forcing a transition from FCoE termination in top-of-rack or end-of-row switches to the arrays themselves. Eventually one can imagine a pure Ethernet switch with DCB forwarding FCoE, IPC and IP packets to their destinations. Also, there are no RDMA options for FCoE today. Another issue is that FC links can run up to 8 Gbps, which would leave only 2 Gbps of a 10 Gbps converged link for IP and IPC traffic. Finally, there are FCoE switch suppliers, such as Brocade, who have developed their own CNAs and don't currently interoperate with third-party CNAs.

For those with large investments in IB and strict latency requirements that only IB can meet, IB as a UF is being proved out, as Accenture Software Utility Services shows. It's unclear how far IB will move down market, and it's hard, if not foolish, to bet against Ethernet as a UF transport. IB enjoys the widest RDMA support, with the OpenFabrics Alliance supporting Linux, Windows and VMware over IB. Also, the HP, Sun, IBM and Dell blade server systems all offer IB options as their high-performance solutions.

iSCSI will benefit from 10 Gb Ethernet and DCB in high-performance Ethernet switches. In this model, a single 10 Gb Ethernet NIC would support both iSCSI and IP traffic, with Layer 2 or Layer 3 segmentation separating storage and IP traffic flows. For the mass UF market, at this time, it seems that iSCSI and FCoE are the two main options, with IB as the UF option in the HPC segment.

A final note: it's clear that Cisco, HP and IBM are putting their significant weight and influence behind FCoE. Moreover, FCoE is a core component of Cisco's Unified Computing System, and Cisco has thought through the system-level issues associated with integrating computing, networking and storage, which should give it a learning-curve edge on next-generation virtualized data center design. In short, Cisco has embraced UF to a much larger extent than its data center competitors, offering a safe harbor in which to experiment with UF. And, as with many technology transitions before, it's not necessarily the technology but the companies behind it that decide which option wins or loses.

11 Debates over Lippis Report 126: Unified Fabric Options Are Finally Here

  1. Ken Oestreich said:

    Using a Unified Fabric has been our standard operating procedure as far back as 2001. Putting transport aside for a minute (whether ATM, FCoE, IB or Enet) the approach yields massive simplification and flexibility. Fewer I/O pieces, simpler re-configuration & repurposing, and overall cost/time reduction. It’s just plain elegant. And, by being able to reconfigure compute, I/O, network, etc. so quickly, it allows for simpler scaling, failover, etc. I’m hoping the rest of the industry acquiesces soon :)

  2. Ken Oestreich said:

    Unified Fabrics are here in a big way; Egenera, Cisco, others, plus lots of IHV support http://tinyurl.com/okjww9 thanks @nicklippis

  3. Christine Crandell said:

    RT @Fountnhead: Unified Fabrics are here in a big way; Egenera, Cisco, others, plus lots of IHV support http://tinyurl.com/okjww9

  4. dallison said:

    Good article, however I’d like to point out that the Xsigo solution is not IPoIB. Xsigo does indeed use IB for its fabric, but the servers see regular NICs and HBAs. The server is completely unaware that it is using IB. The administrator can create vNICs and vHBAs on the I/O Director and these show up as ethernet interfaces or HBAs on the server.

  5. Nick Lippis said:

    Lippis Report 126: Unified Fabric Options Are Finally Here http://tinyurl.com/of84mv

  6. egenera said:

    RT @Fountnhead: Unified Fabrics are here in a big way; Egenera, Cisco, others, plus lots of IHV support http://tinyurl.com/okjww9

  7. Nick Lippis said:

    Thanks, we spun another version of LR 126 that will be distributed on Tuesday which clarifies Xsigo’s value prop.

  8. Venky said:

    Hi Dallison,

    As I understand it, the Xsigo solution needs IB adapters in the server. The vNICs and vHBAs created on the I/O Director are then mapped onto the IB adapters as logical NICs and HBAs. Is this correct?

    Is Xsigo planning to support Ethernet adapters (10Gig) in server ?

  9. dallison said:

    Hi Venky,

    The Xsigo solution does indeed use IB HCAs in the servers. It uses the IB purely as a reliable, high speed transport. The NICs and HBAs are software entities running on top of the IB stack. For example, on Linux you can see vNICs using the ‘ifconfig’ command. vHBAs provide LUNs that appear as SCSI disks (/dev/sde for example). Xsigo installs special drivers on the servers to support this.

    As for Ethernet adapter support, unfortunately I cannot comment on future product directions.

  10. Nick Lippis said:

    New Lippis Report Research Note on Unified Fabrics is available http://bit.ly/2SJ9jD

  11. Cloud Computing – A Primer « Internet Protocol Forum said:

    [...] “Unified Fabric Options Are Finally Here,” The Lippis Report, 126, http://lippisreport.com/2009/05/lippis-report-126-unified-fabric-options-are-finally-here/ [...]
