Lippis Report 223: An Open Approach to Network Automation

Modern-day networking is labor intensive. Configuration, monitoring and change management are, for the most part, manual processes. In fact, at the last Open Networking User Group (ONUG) this past Fall at Credit Suisse, most IT business leaders said that one network engineer supports approximately 120 networking devices such as routers and switches. Compare this to the 20,000 servers that a single engineer manages at a hyperscale firm, and it is clear that networking needs automation. Manual networks are not helping IT business leaders, who are feeling pressure from business unit managers demanding self-service IT delivery on par with cloud providers such as Amazon, Azure, etc., but without the implied loss of security, visibility and control. Case in point: at ONUG in Boston, hosted by Fidelity Investments, large financial services firms showed what happens when they offer business unit managers on-demand virtual machine (VM) creation and deletion; the trend lines showed exponential growth, demand and consumption!

Self-service VM and container orchestration is a proxy for on-demand IT delivery. That is, application developers want network infrastructure to configure itself securely and automatically to their intent. Business unit managers want to launch applications quickly to address competitive market opportunities or changes. Compare the seconds it takes to spin up a VM with the 150 days it took one IT executive to order a new server, have it installed in a rack and loaded with an operating system. It took another six weeks for the network to be configured and firewall policies to be added. That was nearly 200 days from server order to application rollout. Note that this did not include the time to develop the application, only the time to procure, configure and deploy the infrastructure.

There needs to be a way to hasten the time to deliver IT services and automate the manual configuration, monitoring and change management aspects of networking. Many in the industry are looking toward a policy-based model to automate networking tasks. Group-Based Policy, or GBP, delivers open source software to fulfill this need, offering a means to automate infrastructure configuration to application developers’ intended requirements. GBP delivers intent to Software-Defined Networking (SDN) controllers in the form of communication rules between application tiers.

User or Developer Intent

In addition to the demand for self-service IT delivery, there are many chefs in the infrastructure kitchen, each adding their own spices to the recipe as they build out an infrastructure to support an application. With mixed objectives and little time to deploy an application, the end result is often far from the application developer's intent. As demands increase for speed, scale, security, agility and flexibility in cloud infrastructure environments, a policy-driven approach is becoming an important area of development in the open source community. Today's cloud infrastructure is often overwhelmed by inputs from different teams with differing objectives:

Developers want to quickly and easily deploy their applications;

Infrastructure teams need to deliver on operational requirements;

Business teams need to impose governance, cost or compliance constraints.

The end result is a system that muddles what the application owner wants with how the infrastructure actually works. This lies at the root of many of the problems that make cloud infrastructure hard to build, operate and scale.

New approaches to policy-driven infrastructure aim to change this status quo by separating user or application developer intent from the procedures through which that intent is implemented. GBP introduces a new taxonomy designed to capture the requirements of applications in a way that is separate from the infrastructure behind it. This policy language is maturing within the open source community. While GBP is initially targeted at networking use cases, its approach can be generalized across storage and compute as well.

The focus on separating user intent from infrastructure is an important new insight into how cloud infrastructure should be built and run. To drive forward this approach, GBP is currently being developed for both OpenStack and OpenDaylight open source projects.

What Is Group-Based Policy?

GBP aims to address these issues by offering a simple, abstract API designed to capture user intent. It is based on the following main concepts (Figure 1); a minimal code sketch of these concepts follows the figure:

● Groups: GBP introduces the concept of a group that represents a collection of network endpoints and fully describes their properties. Everything in the same group must be treated the same way (that is, it has the same policy). This approach is a simple but powerful generalization of the constructs in OpenStack today and maps well to the scalable application tiers used by most developers.

● Reusable policy rule sets: GBP introduces rule sets to describe secure connectivity between groups. Rule sets may imply switching or routing behaviors, but they offer a simple way to describe how sets of machines can communicate in non-networking terms.

  • They are reusable. The same rule set can be used for different combinations of groups. This reusability reduces the number of places that must be updated as policies change, thus improving agility, security and consistency.
  • They capture dependencies between multiple groups so it’s easy for different parts of the application to evolve in parallel.

● Policy layering: GBP is designed to allow policies to be layered based on different roles in an organization. For instance, layering allows application owners to specify the policy pertaining to an application, while infrastructure owners can prescribe security requirements such as redirection of traffic to a chain of firewall and intrusion-detection system (IDS) solutions before the traffic is sent to the application. Both policies can coexist and be described using nested primitives.

● Network services: The GBP model also supports a redirect operation that makes complex network service chains and graphs easy to abstract and consume. Network service chaining is a mechanism for connecting multiple Layer 4 through 7 services, such as load balancers and firewalls. The GBP API thus allows application developers to specify these requirements as components for a combination of groups rather than through switching or routing configuration.

Figure 1: The Group-Based Policy model.
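
To make these concepts concrete, here is a minimal sketch in Python. The class and field names (Classifier, Rule, RuleSet, Group) are hypothetical, invented for illustration; they mirror the GBP taxonomy but are not the actual OpenStack GBP client API.

```python
# A minimal, illustrative sketch of the GBP taxonomy. All names here are
# hypothetical and are not the actual OpenStack GBP client API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Classifier:
    """What traffic a rule matches."""
    protocol: str
    port: int
    direction: str          # "in", "out" or "bi"

@dataclass
class Rule:
    """A classifier paired with an action such as allow or redirect."""
    classifier: Classifier
    action: str

@dataclass
class RuleSet:
    """A reusable contract describing how groups may communicate."""
    name: str
    rules: List[Rule]

@dataclass
class Group:
    """A collection of endpoints (an application tier) treated uniformly."""
    name: str
    provides: List[RuleSet] = field(default_factory=list)
    consumes: List[RuleSet] = field(default_factory=list)

# A simple application expressed as intent rather than as switch, router
# or firewall configuration.
web_contract = RuleSet("web-in", [Rule(Classifier("tcp", 443, "in"), "allow")])
db_contract = RuleSet("db-in", [Rule(Classifier("tcp", 3306, "in"), "allow")])

web = Group("web-tier", provides=[web_contract])
db = Group("db-tier", provides=[db_contract])
app = Group("app-tier", consumes=[db_contract])   # the app tier talks to the db tier
```

Because the rule sets are separate objects, the same db_contract could also be consumed by, say, a reporting tier without touching any other tier's policy, which is exactly the reusability the model is after.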

GBP: A Means to Automate Infrastructure Orchestration

To create an infrastructure that can deploy applications quickly and at scale, automation is fundamental. Today's infrastructure automation solutions are built on low-level APIs that unintentionally increase complexity; they also tend to be fragile, because every low-level API behavior becomes part of the automation system. In short, low-level API-based automation does not scale.

One of the key ideas behind inserting GBP, or a policy layer, above low-level APIs is to make it easier to build an automated, orchestrated infrastructure at scale. Infrastructure is made up of a set of tiers that connect to each other. Each tier can scale independently. When components are added to a tier, they inherit all the policies governing how the tier interacts with the rest of the infrastructure, because those policies are already described as properties of the underlying infrastructure policy.

There are many devices, both virtual and physical, to configure within an infrastructure, but organizing infrastructure into tiers makes automation easier and more scalable, and makes applications faster to deploy. GBP hides the very low-level APIs from the automation orchestrator, making it possible to build infrastructure that is “stack-scalable.”
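
As a rough illustration of that inheritance, the following self-contained Python sketch (hypothetical names, not the GBP API) shows a tier modeled as a group whose members pick up the tier's rule sets the moment they are added:

```python
# Illustrative only: a tier modeled as a group; new members inherit the
# group's policy instead of being configured one device at a time.
class Group:
    def __init__(self, name, rule_sets):
        self.name = name
        self.rule_sets = rule_sets   # policy attached to the tier as a whole
        self.members = []

    def add_member(self, endpoint):
        # Nothing endpoint-specific is pushed; the endpoint is simply
        # governed by the rule sets already attached to the tier.
        self.members.append(endpoint)
        return self.rule_sets

web_tier = Group("web-tier", rule_sets=["allow-https-in"])
inherited = web_tier.add_member("vm-web-07")
print(inherited)   # ['allow-https-in']
```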

Separation of Concerns

Separation of concerns is another important GBP concept, focused on separating operator concerns from the concerns of the different application teams. IT organization design has evolved to be built around network, compute, storage, security, virtualization and other silos. In cloud infrastructure, the IT organization model is more horizontal; that is, there are no silos, just operations engineers who manage the entire infrastructure. In cloud infrastructure, there are essentially two kinds of teams: application teams and the infrastructure team, each operating independently of the other.

In this IT organization structure, orchestration automation, done properly, works through APIs that separate the concerns of the infrastructure operator from those of the different application teams. The goal is to allow application and operator teams to work independently, offering the greatest flexibility.

This separation of concerns is important because it enables the different teams to work at different paces and with different priorities without impacting each other's progress. That is, application teams don't have to be in lockstep with the infrastructure. Application teams can define interfaces to their tier of the infrastructure, and the infrastructure team can align to those interfaces without the two teams having to coordinate. For example, operators don't need to say, “make sure the firewall isn't updated at the same time the application team is pushing updates.” The bottom line is that both teams need to operate independently and continue to evolve their work to hasten the pace of IT delivery.

GBP has been integrated into OpenStack and OpenDaylight. The concept of service chaining is of particular importance to both open source projects as it’s the building block of Network Service Virtualization (NSV) and Network Function Virtualization (NFV).

Service Chaining

Service chaining within GBP is essentially a model by which groups describe how traffic flows to and between network services, be they L2 or L3 services. GBP incorporates a service chaining API, which is also an important security component, as service chaining is performed in a policy context that provides inherent security attributes. GBP is not an implementation of service chaining but a mapping mechanism onto existing network services, which can now be composed via policy. The GBP service chain API can be used with existing open source driver implementations, and the beauty of it is that there is nothing vendor-specific: it can map onto existing load balancers, IPS, firewalls and other network services that exist in OpenStack's Neutron, for example.
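
As a sketch of the idea, the snippet below (hypothetical names, not the actual GBP service chaining API) shows a redirect action pointing traffic that matches a classifier through an ordered chain of L4-L7 services:

```python
# Illustrative sketch: a redirect action steers matching traffic through an
# ordered list of L4-L7 services as a matter of policy, not routing config.
from dataclasses import dataclass
from typing import List

@dataclass
class ServiceChainSpec:
    name: str
    services: List[str]        # ordered L4-L7 services

@dataclass
class RedirectAction:
    chain: ServiceChainSpec

# Inbound HTTP destined for the web tier is redirected through a firewall
# and then a load balancer before it reaches the group's endpoints.
web_chain = ServiceChainSpec("web-chain", ["firewall", "load-balancer"])
rule = {
    "classifier": {"protocol": "tcp", "port": 80, "direction": "in"},
    "action": RedirectAction(web_chain),
}
```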

Logical Nodes

GBP incorporates the idea of a “logical node,” which is essentially a logical instantiation of one or more network devices. The service chaining spec, or API, operates on these logical nodes, and the spec is reusable. In short, GBP models a device in a logical way: the service chain is modeled around these logical nodes in a reusable, template format, and the chain can then be instantiated in multiple places. GBP service chaining within OpenStack and OpenDaylight was designed to be generic enough to acknowledge that there will be a mix of virtualized and physical services present, with both needing to be managed and inserted as part of the application development process.
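
A rough sketch of that template idea, again with hypothetical names: a chain spec is an ordered list of logical nodes, and instantiating it binds each node to a concrete virtual or physical appliance for a particular deployment.

```python
# Illustrative only: a reusable chain spec of logical nodes, instantiated
# twice against different concrete services.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class LogicalNode:
    role: str                      # e.g. "firewall", "ids"

@dataclass
class ChainSpec:
    name: str
    nodes: List[LogicalNode]       # ordered and implementation-agnostic

def instantiate(spec: ChainSpec, bindings: Dict[str, str]) -> List[str]:
    """Resolve each logical node to a concrete service instance."""
    return [bindings[node.role] for node in spec.nodes]

secure_chain = ChainSpec("secure-in", [LogicalNode("firewall"), LogicalNode("ids")])

# The same spec rendered once onto virtual services and once onto physical gear.
dev = instantiate(secure_chain, {"firewall": "vFW-01", "ids": "vIDS-01"})
prod = instantiate(secure_chain, {"firewall": "fw-rack3", "ids": "ids-appliance-2"})
```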

OpenStack GBP: Neutron Mapping or Native Driver Approach

GBP for OpenStack offers two possible backend implementations. It was designed to run on top of Neutron and render the policy into the Neutron API, working with any plugins or ML2 drivers already present in OpenStack; in this approach, GBP is completely backwards compatible. The other approach is based on native drivers, where a set of plugins handles the policy directly without first converting it to the Neutron API.

The GBP Neutron Mapping Driver

GBP for OpenStack includes a Neutron Mapping driver. In this mode, the GBP API is decomposed into Neutron calls, and GBP manages the potentially complex relationships between different parts of an application, such as multiple security groups, networks, etc. GBP for OpenStack is essentially modeled through Neutron networks, routers and security groups. Any plugin written for Neutron works with GBP. All network service APIs present in OpenStack are fully compatible with GBP, enabling the use of any existing plugins.
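
The fan-out from one policy-level call to several Neutron resources might look roughly like the sketch below. It assumes a python-neutronclient-style client object; the function name and structure are invented for illustration and are not the actual mapping driver code.

```python
# Rough illustration of the mapping idea: one group at the policy layer
# implies several Neutron primitives (network, subnet, security group).
# Assumes `neutron` behaves like a python-neutronclient Client instance.
def map_group_to_neutron(neutron, group_name, subnet_cidr):
    network = neutron.create_network({"network": {"name": group_name}})
    subnet = neutron.create_subnet({"subnet": {
        "network_id": network["network"]["id"],
        "cidr": subnet_cidr,
        "ip_version": 4,
    }})
    sec_group = neutron.create_security_group(
        {"security_group": {"name": group_name + "-sg"}})
    return network, subnet, sec_group
```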

GBP: Native Driver Approach

The Native Driver approach eliminates the limitations of pushing policy through Neutron calls. In Native Driver mode, GBP gives an SDN controller complete flexibility to implement and enforce policy; native drivers take the policy directly and implement it in whatever way is needed.

Examples of GBP Native Driver support include OpenDaylight, Cisco APIC, Nuage Networks and One Convergence, with more to come. In short, GBP is being embraced both in open source projects and in vendor implementations in Native Driver mode. These direct vendor GBP plugins are available in GBP's first release.

GBP Implementation Status

Group-Based Policy exists as a project in OpenStack's StackForge, the home for forward-looking OpenStack projects, and the development team has completed a Juno release. Red Hat, Mirantis and Ubuntu OpenStack will be packaging or supporting GBP, with various vendor drivers, in their versions of Juno.

GBP ushers in a new decade of policy-based infrastructure that promises to make infrastructure responsive to the intent of application developers and, in the process, automate secure infrastructure orchestration at scale. GBP aggregates many low-level network automation APIs so that networks can be grouped under policies through which their configuration, monitoring and change management can be automated. For new applications written with GBP in mind, the benefits are huge: separation of concerns, an automatically created application dependency map, automated infrastructure configuration, faster IT delivery, scale, lower operational cost and, most importantly, a more competitive business.
