Lippis Report 220: How Open Is Cisco’s ACI?

Cisco’s acquisition of Insieme Networks enables the next generation of data center automation, led by Application Centric Infrastructure (ACI) and the Nexus 9000 family of switches, which comprises the Nexus 9300 series fixed switches and the Nexus 9500 series modular switches. In addition to offering rich programmability features, industry-leading layer 2 and layer 3 forwarding, and advanced behaviors such as VXLAN routing, the Nexus 9000 switches run in a fabric mode commonly referred to as ACI. ACI is an architecture that enables the automated deployment of applications across the network using Application Network Profiles. These profiles use a common declarative policy language that automates network configuration. The policy creates an application dependency map that spans applications, compute, storage and networking across data center, campus and WAN to enable cross-functional application troubleshooting and performance optimization. Critical to expanding ACI is the new Opflex protocol, which enables an open source community to leverage ACI for policy-based automation of all data center devices (more on this below). ACI delivers the complete ability to provision, manage and monitor applications in a fully automated workload creation world. This is the Holy Grail of IT, as it enables business resiliency and lowers OpEx. But the two big questions that many have been asking are: “How open is ACI, and can I trust it?”

Applications and Networks Don’t Talk to Each Other

As the requirements for IT agility increase, administrators are constantly pressed to increase the speed with which they can deploy, manage and troubleshoot applications across compute, storage and networking domains. In the network space, a number of software-based overlay technologies were developed to approach this problem. These tools created issues in performance, integration with physical devices and management across overlay and underlay domains. Most critically, though, they focused on virtualizing the network as it is today rather than refocusing on the needs and requirements of applications. For instance, they are of little help in building a dependency map for an application across firewalls, IPS, load balancers, switches, virtual switches, virtual machines, bare metal servers, routers, WAN links, storage, VLANs, etc., which is required to deliver and service applications. They also hinder visibility by separating underlay and overlay environments. With no map of these dependencies and limited visibility, how are IT professionals to conduct change management, troubleshooting, monitoring, performance optimization, etc.? How is application behavior assurance realized when various application responsibility dependencies are shared across administrative domains of compute, network, storage, DevOps, etc.? To create an application dependency map today, NetOps has only a manual sniffer in its tool chest. What’s really needed is a “self-documenting policy,” which is what ACI offers.

ACI: Policy-Driven Application and Network Connection

Cisco’s ACI was purposely built to address these issues. The solution, built around the Application Policy Infrastructure Controller (APIC) and Nexus 9000, offers a combined overlay and underlay that provides line rate performance at scale with full real-time visibility across physical and virtual domains. Also, unlike traditional network overlay solutions, ACI is based around an innovative, declarative policy model that allows users to naturally describe application requirements and automate their deployment across the network. This Declarative Policy approach is a key differentiator for the ACI solution. It uses an abstract policy model to describe a future state of the infrastructure and relies on a set of intelligent devices capable of rendering this policy into specific device capabilities. By distributing complexity to the edges of the infrastructure, this approach promises excellent scale characteristics. Additionally, its use of abstract policy allows broad interoperability across devices without limiting a vendor’s ability to expose a differentiated feature set. Abstract policies also give application administrators a self-documenting, portable way of capturing their infrastructure requirements.

Finally, Cisco’s ACI solution was designed around open APIs. The APIC exposes a comprehensive REST API based on its policy model that supports integration with automation, enterprise monitoring and orchestration frameworks plus hypervisor and systems management. Each REST API call can directly configure multi-tenant policies, which the APIC communicates to network and service infrastructure via southbound APIs. Network and service infrastructure may include physical and virtual switches, as well as firewalls, load balancers, IPS, etc. Clients of the REST API include Puppet, Chef, CFEngine and Python for automation; OpenStack, CloudStack, VMware, Cloupia for orchestration; KVM, Xen, VMware, Oracle OVM and Hyper-V hypervisors, and system management tools, such as IBM, CA, HP, BMC; and enterprise monitoring tools, such as Splunk, NetScout, CA and NetQoS.
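As a sketch of what such a REST call looks like, the snippet below builds the JSON bodies for an APIC login and a tenant-creation request. The controller address is a placeholder, and the payload shapes follow Cisco’s published object-model conventions (aaaUser for authentication, fvTenant under the policy universe “uni”); treat the details as illustrative rather than authoritative.

```python
import json

APIC_URL = "https://apic.example.com"  # placeholder controller address

def login_payload(user, password):
    """Body for POST {APIC_URL}/api/aaaLogin.json, which returns a session token."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": password}}}

def tenant_payload(tenant_name):
    """Body for POST {APIC_URL}/api/mo/uni.json, creating a tenant under the
    policy universe ('uni') in the APIC managed-object tree."""
    return {"fvTenant": {"attributes": {"name": tenant_name, "status": "created"}}}

# An automation tool (Puppet module, Python script, etc.) would POST these
# bodies over HTTPS; here we only print the tenant request for inspection.
print(json.dumps(tenant_payload("example-tenant"), indent=2))
```

A real client would first POST the login payload, capture the returned token cookie, and attach it to subsequent policy calls.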

ACI Architecture

For ACI to deliver on its promise, it needs to sit between management, automation, monitoring and orchestration tools on one side and network/service infrastructure on the other. ACI is based upon a set of open standards and initiatives, the most obvious of which is REST for northbound communications. For ACI to enable applications to request network services governed through policy, it needs to connect to any network/service infrastructure, not just the new Nexus 9300 and 9500 switches with custom ASICs. Between ACI and network/service infrastructure on the southbound side, Cisco is proposing a new protocol called “Opflex,” which it plans to submit to the IETF for standardization.

Opflex is designed as a generic, extensible policy resolution protocol. It was designed to function in a Declarative Control system, such as ACI, where a Policy Authority (PA) interacts with a number of distributed Policy Elements (PEs), such as physical or virtual switches, firewalls, ADCs, etc. In a Declarative Control system, the PA represents a logically centralized location where policies are specified, while PEs instrument and enforce these policies across a control fabric. Policies are passed over an Opflex channel as managed objects that can be interpreted by both the PA and PEs.

A concept called Promise Theory, which provides the logical foundation for Opflex, is commonly employed in policy-driven control systems. In a Promise Theory model, individual PEs make promises to a PA, agreeing to autonomously enforce policies that are “in scope” without detailed instruction from the PA. Scope in this case is defined by operational circumstances and the endpoints on whose behalf policies are enforced.

Opflex natively supports bidirectional communication, which is necessary for declarative policy resolution. To support a wide range of network/service infrastructure PEs, Opflex carries abstract policies rather than device-specific configuration. It enables flexible, extensible definitions of policy using XML/JSON, and through an Opflex agent, Cisco states that it can support any device: vSwitch, physical switch, network services, servers, etc.
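The PA/PE interaction described above can be sketched in a few lines of Python. This toy model is purely illustrative: the class and method names are invented, and the “policy” is just a dictionary standing in for an Opflex managed object. The key behavior it captures is that a PE pulls only the policy in scope for endpoints it hosts and then enforces it autonomously from a local cache.

```python
class PolicyAuthority:
    """Logically centralized store of abstract policy (the PA role)."""
    def __init__(self):
        self.policies = {}          # endpoint-group name -> abstract policy

    def declare(self, group, policy):
        self.policies[group] = policy

    def resolve(self, group):
        """Answer a PE's resolution request for one endpoint group."""
        return self.policies.get(group)

class PolicyElement:
    """A switch, firewall, ADC, etc. that enforces policy locally (the PE role)."""
    def __init__(self, name, authority):
        self.name = name
        self.authority = authority
        self.cache = {}             # locally held, autonomously enforced policy

    def endpoint_attached(self, group):
        """On endpoint attach, pull (not push) the in-scope policy from the PA."""
        policy = self.authority.resolve(group)
        if policy is not None:
            self.cache[group] = policy   # the PE's "promise" to enforce it
        return policy

pa = PolicyAuthority()
pa.declare("web", {"allow_from": ["app"], "port": 80})
pe = PolicyElement("leaf-101", pa)
print(pe.endpoint_attached("web"))
```

The pull model is what distributes complexity to the edges: the PA never needs to know how each PE renders the abstract policy into its own device capabilities.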

An Opflex agent would sit on a firewall, vSwitch, ADC, switch, router, etc. The APIC or another Opflex-enabled controller communicates directly with devices equipped with Opflex agents as it distributes configuration instructions requested by applications, monitoring systems, systems/hypervisor management, orchestration systems and custom automation scripts. Policies may be directives, such as specifying which servers can connect directly to each other, or stating that web servers can connect to application servers. These policies are then translated into network/service configuration changes.
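To make that last translation step concrete, here is a minimal, vendor-neutral sketch of rendering one abstract group-to-group policy (“web servers may reach app servers on port 8080”) into device-level permit rules. The ACL-like syntax, group names and addresses are invented for illustration and do not represent any specific vendor’s CLI.

```python
def render_rule(rule, endpoints):
    """Expand an abstract group-to-group rule into per-address permit lines,
    the kind of device-specific configuration a local agent would install."""
    lines = []
    for src in endpoints[rule["from_group"]]:
        for dst in endpoints[rule["to_group"]]:
            lines.append(f"permit tcp {src} {dst} eq {rule['port']}")
    return lines

# Current endpoint membership, as the agent would learn it at attach time.
endpoints = {"web": ["10.0.1.10"], "app": ["10.0.2.20", "10.0.2.21"]}
rule = {"from_group": "web", "to_group": "app", "port": 8080}

for line in render_rule(rule, endpoints):
    print(line)
```

Note that the abstract rule never changes when endpoints come and go; only the rendered output does, which is why the policy is portable and self-documenting.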

Opflex is fundamental to ACI. It’s in Cisco’s best interest that a wide range of networking and hypervisor firms support it. Opflex has been published as an informational draft at the IETF, and Cisco intends to open source an Opflex agent that can be used by any hypervisor switch, physical switch or L4-7 device, and even to extend its use to campus and WAN environments.

In addition to Opflex, Cisco has developed a scripting API for layer 4-7 devices based upon Python/CLI and XML. The scripting API, which Cisco positions as a first in the SDN ecosystem, is designed to support network service insertion and chaining of L4-7 devices into application flows without requiring any changes to network devices. There is a “Device Specification,” an object model for a L4-7 device, and a “Device Script,” an integration script that uses a L4-7 device’s existing APIs. Device packages for most major vendors, such as F5, Citrix, etc., will be released as open source code on Cisco’s website.
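The device script idea can be illustrated with a hypothetical skeleton: the controller invokes hooks in the script, and the script drives the L4-7 device’s own management API. Nothing below reflects the actual device-package interface; the hook name, the fake ADC API and its methods are all assumptions made for the sketch.

```python
def configure_vip(device_api, vip, port, pool_members):
    """Hypothetical device-script hook: render a load-balancer service node
    as one virtual server fronting a pool of real servers, using only the
    device's own management API."""
    pool_name = "pool-" + vip
    device_api.create_pool(pool_name, pool_members)
    device_api.create_virtual_server(vip, port, pool_name)

class FakeAdcApi:
    """Stand-in for a real ADC's management API, recording calls for testing."""
    def __init__(self):
        self.calls = []
    def create_pool(self, name, members):
        self.calls.append(("pool", name, tuple(members)))
    def create_virtual_server(self, vip, port, pool):
        self.calls.append(("vs", vip, port, pool))

adc = FakeAdcApi()
configure_vip(adc, "10.0.0.100", 443, ["10.0.2.20", "10.0.2.21"])
print(adc.calls)
```

The point of the pattern is that service insertion happens entirely through the device’s existing APIs, so the network devices in the path need no changes.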

Object Model Exposed via Northbound REST APIs

The northbound API is REST and based upon an open ACI object model, which is the foundation for integration with open source and third-party tools. Northbound communications use JSON or XML over HTTP as the data format and transport. The northbound API promises to offer a wide range of service integration into ACI and is, thus, the most complex. Expect northbound vendor and open source package support to roll out over time. This is an ecosystem in development, and as such, Cisco has published its object model and documentation. It will make available a simulator environment, and it will open source a Python SDK, a Neutron plugin for OpenStack and cookbooks for DevOps tools, such as Puppet and Chef.
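As an example of working against the published object model, the helper below builds a class-level read query of the kind the APIC REST interface supports (GET /api/class/<class>.json with filter parameters). The controller hostname is a placeholder and the filter string is illustrative; the query-parameter names follow Cisco’s documented conventions but should be checked against the official API reference.

```python
from urllib.parse import urlencode

def class_query_url(apic, mo_class, **filters):
    """URL for 'return every managed object of this class', optionally
    filtered, e.g. query-target-filter=eq(fvTenant.name,"prod")."""
    url = f"https://{apic}/api/class/{mo_class}.json"
    if filters:
        url += "?" + urlencode(filters)
    return url

# Read back all tenants named "prod" (hostname and filter are examples).
print(class_query_url("apic.example.com", "fvTenant",
                      **{"query-target-filter": 'eq(fvTenant.name,"prod")'}))
```

Because every object in the tree is reachable this way, monitoring tools such as Splunk or NetScout can poll the same model the automation tools write to, which is what keeps the policy self-documenting.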

Cisco is also working with the community to build a unified view of policy across a number of open source projects. The goal here is to ensure that application policy can be portable across a wide array of technologies and implementations, and utilized even without the APIC and Nexus 9000. For example, Open Daylight has recently created a “Group Policy” project intended to build a policy abstraction API. It is supported by a number of companies, including Cisco, Plexxi, Midokura and IBM. Additionally, Cisco is engaged with the OpenStack Neutron community to expose a similar policy API. Cisco has also committed to work closely with the Open vSwitch ecosystem to support compatible policy primitives.

ACI offers a solution to today’s uncontrolled growth of network operational cost. It’s fundamental to establish the link between applications and networks without manual NetOps intervention. If ACI is able to deliver on its promise, then it will offer IT executives a powerful tool for on-demand service creation and deletion, a requirement shared by the Global 2000 and smaller enterprises alike.

ACI: Trust and Openness

For Cisco to realize ACI’s promise, it will have to demonstrate that ACI is open and interoperable with non-Cisco switches, routers, load balancers, firewalls, etc. On this front, it has done well in announcing support from partners, including Microsoft, Red Hat, Canonical and Citrix on the hypervisor side, as well as services products from Citrix, F5, Embrane and others to be announced. Opflex could become an industry or de facto standard if this momentum continues. Furthermore, Cisco’s commitment to open source an Opflex agent under a liberal license, such as Apache 2.0, and its plans to submit Opflex to the IETF for standardization are positive signals. However, the southbound API is where Cisco has to work hard for industry adoption. That’s where it will have to answer the key questions of openness and trust.

ACI is at the epicenter of IT infrastructure, which requires great trust that the technology will securely configure network and service infrastructure on behalf of applications. In Cisco’s favor is its long track record of supporting technologies for the long haul. It has the economic resources and product engineering to create the programs necessary to navigate ACI to success. But with ACI at the epicenter, some feel there is strong potential for Cisco lock-in, as ACI will control how applications use network infrastructure. Cisco’s commitment to building ACI via open standards will be critical to its success. If ACI can support a wide range of network vendors’ equipment via Opflex, northbound systems and interoperable policy management that allow true plug-and-play without lock-in, then ACI will be highly successful and Cisco justly rewarded.
