2015-05-18

It is helpful to view Neutron simply as a software-defined networking (SDN) application: it provides network virtualization on top of OpenStack. OpenStack itself is a set of loosely coupled, related projects for building advanced cloud services. Each of these services is an individual project, driven both by the community and by contributions from many vendors and companies. There are 12 integrated projects in the OpenStack Kilo release:

Nova (compute): provides virtual servers/machines for cloud users on demand.

Neutron (networking): provides networking as a service (virtual networking services).

Swift (object storage): provides API-accessible storage and retrieval of data such as images, files and documents.

Cinder (block storage): provides persistent block storage to the user’s VM.

Glance (images): provides a registry of virtual disk images to the compute nodes, which use them to launch VMs.

Horizon (dashboard): provides a web-based graphical user interface (GUI) through which administrators and tenants (users) manage OpenStack.

Keystone (identity): stores information for providing authentication and authorization for OpenStack services.

Ceilometer (telemetry): monitors and measures OpenStack cloud usage for the purpose of billing, benchmarking and statistics.

Heat (orchestration): provides an orchestration service for managing cloud applications by using appropriate API calls.

Ironic (bare-metal provisioning): provisions bare-metal machines instead of virtual machines; forked from the Nova baremetal driver.

Sahara (Big Data as a service): provides a simple means to provision a data-intensive application cluster (Hadoop or Spark) on top of OpenStack.

Trove (Database as a service): provides cloud database-as-a-service provisioning for both relational and non-relational database engines.

Virtual networks are created by tenants and administrators to provide networking capability between VMs managed by OpenStack compute. Neutron is a network management service that exposes an extensible set of APIs for creating and managing virtual networks.

Prior to Neutron, OpenStack had a simple, flat networking environment without L3 or firewall support. Networking was handled within Nova itself, which made it difficult to accommodate the changes that were happening in networking.

Neutron was introduced to treat networking as a separate service and to allow different implementation choices for its abstractions: the Neutron server defines and manages the abstractions, while their actual implementation is realized by plugins. This plugin-based, multi-tenancy-supporting framework is argued to be technology-agnostic and modular. We should note that Neutron is a stand-alone service: it can run autonomously, exposing its APIs, while different vendors provide the implementations and any appropriate extensions.

The API categories and the supported operations under each sub-category are summarized below; a short client sketch follows the list. The operations are abbreviated as CRUD for create, read, update and delete. The core APIs cover the basic and necessary network operations, whereas the extension and attribute APIs cover the needs of feature-rich virtual networks.

Core API Operations

Network (CRUD)

Subnet (CRUD)

Port (CRUD)

Extension and Attribute API Operations

Quotas (RUD)

Network providers extended attributes (CRUD)

Network multiple providers extension (CR)

Ports binding extended attributes (CRU)

Security groups and rules (CRD)

Layer 3 networking (CRUD)

Metering labels and rules (CRD)

Load balancer as a service (LBaaS) (CRUD)
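
As a minimal illustration of the core CRUD operations, the sketch below uses the python-neutronclient library; the endpoint, credentials and resource names are placeholders, and a Keystone v2 auth URL is assumed.

from neutronclient.v2_0 import client

# Illustrative credentials and endpoint; adjust to your deployment.
neutron = client.Client(username='demo',
                        password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# Create a network, a subnet on it, and a port (the "C" in CRUD).
net = neutron.create_network(
    {'network': {'name': 'demo-net', 'admin_state_up': True}})
net_id = net['network']['id']

subnet = neutron.create_subnet(
    {'subnet': {'network_id': net_id, 'ip_version': 4,
                'cidr': '10.0.0.0/24', 'name': 'demo-subnet'}})

port = neutron.create_port(
    {'port': {'network_id': net_id, 'name': 'demo-port'}})

# Read, update and delete round out the CRUD set.
print(neutron.list_networks())
neutron.update_network(net_id, {'network': {'name': 'demo-net-2'}})
neutron.delete_port(port['port']['id'])
neutron.delete_subnet(subnet['subnet']['id'])
neutron.delete_network(net_id)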

Neutron Architecture

Figure one below describes the OpenStack Neutron architecture, which comprises the following elements:

Neutron Server

A Python daemon that is the main process of OpenStack networking; it typically runs on the controller node (a term used in OpenStack deployments). It exposes the APIs, enforces the network model, and passes requests to the Neutron plugin.
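
Since the server exposes a REST API (by default on port 9696), it can also be exercised directly. A rough sketch using Python's requests library, with a placeholder endpoint and a token previously obtained from Keystone:

import requests

# Placeholder endpoint and token; obtain a real token from Keystone first.
NEUTRON_URL = 'http://controller:9696'
TOKEN = 'REPLACE_WITH_KEYSTONE_TOKEN'

# List all networks visible to the token's tenant.
resp = requests.get(NEUTRON_URL + '/v2.0/networks',
                    headers={'X-Auth-Token': TOKEN})
print(resp.status_code, resp.json())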

Plugins

Plugins can be either core or service. Core plugins implement the “core” Neutron API — L2 networking and IP address management. Service plugins provide “additional” services, such as the L3 router, load balancing, VPN, firewall and metering. These network services can also be provided by the core plugins by realizing the relevant API extensions. In short, plugins run on the controller node and implement the networking APIs, which interact with the Neutron server, database and agents.
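
Which core plugin and which service plugins the server loads is set in neutron.conf on the controller node; a hedged sketch (plugin aliases vary by release):

[DEFAULT]
core_plugin = ml2
# Additional service plugins (LBaaS, VPNaaS, FWaaS, metering, ...) are appended here.
service_plugins = router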



Figure one: OpenStack Neutron Architecture

Plugin Agents

These agents are specific to the Neutron plugin being used. They run on compute nodes and communicate with the Neutron plugin to manage virtual switches. These agents are optional in many deployments and perform local virtual switch configurations on each hypervisor.
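
As an illustration, an Open vSwitch plugin agent on a compute node is pointed at its local tunnel endpoint through configuration roughly like the following; the file location and the IP address are deployment-specific placeholders:

[ovs]
local_ip = 10.0.1.21

[agent]
tunnel_types = vxlan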

Message Queue

OpenStack components, including Neutron, use the Advanced Message Queuing Protocol (AMQP) for internal communication. The AMQP broker, typically RabbitMQ, sits between any two internal components of Neutron and allows them to communicate in a loosely coupled fashion; that is, Neutron components use remote procedure calls (RPC) to communicate with one another.
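
In a Kilo-style deployment, the RabbitMQ transport is typically configured in neutron.conf roughly as follows; the host name and credentials are placeholders:

[DEFAULT]
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS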

Database

Almost all plugins need a database to maintain a persistent network model; hence, the schema is defined by the configured core and service plugins.
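
An illustrative [database] section in neutron.conf; the user, password and host are placeholders:

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron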

DHCP Agent

This agent is a part of Neutron and provides DHCP services to tenant networks. It maintains the required DHCP configuration and is the same across all plugins.
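
A minimal dhcp_agent.ini sketch, assuming Open vSwitch interface wiring and the default dnsmasq driver (option names as in the Kilo release):

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq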

L3 Agent

This agent provides Layer 3 routing and NAT forwarding so that virtual machines on tenant networks can gain external access.
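
A minimal l3_agent.ini sketch, again assuming Open vSwitch interface wiring and an external bridge named br-ex (both are deployment-specific assumptions):

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex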

Modular Layer 2 Core Plugin

Modular Layer 2 (ML2) is Neutron’s core plugin. When it was introduced (in the Havana release of OpenStack), ML2 replaced the existing monolithic plugins (e.g., the Open vSwitch and Linux Bridge plugins; their agents remain in use) to eliminate redundant code and to reduce development and maintenance effort. As the authors of ML2 put it, “The Modular Layer 2 (ML2) plugin is a framework allowing OpenStack Neutron to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers.”



Figure two: ML2 Plugin Architecture

ML2 achieves modularity through its driver model. As seen in the figure, it includes two categories of drivers: type and mechanism. Type drivers (such as flat, VLAN, GRE and VXLAN) define a particular L2 network type, and each available network type is managed by a corresponding type driver. The type driver maintains type-specific state information, realizes isolation among tenant networks, and validates provider networks.

On the other hand, the mechanism drivers, which are typically vendor-specific (such as OVS, and drivers from ODL, Cisco, NEC, etc.), use the enabled type drivers to support creating, updating and deleting network, subnet and port resources. We should note that vendors may implement a complete plugin of their own (comparable to ML2) or just an ML2 mechanism driver. The talk by Salvatore Orlando and Armando Migliaccio helps make this decision easier.
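
Putting the two driver categories together, a hedged ml2_conf.ini sketch might enable several type drivers and a single mechanism driver; the driver names and the VNI range are illustrative:

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1:1000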

OpenStack and SDN Controller: The Big Picture

Software-defined networking was introduced both to overcome the deficiencies of Neutron and to provide support for multiple network virtualization technologies (a centralized control plane creating isolated tenant virtual networks) and approaches (see “Software Defined Networking (SDN) and OpenStack” by Christian Koenning of F5 Networks). With the integration of SDN, Neutron is expected to support the dynamic nature of large-scale, high-density, multi-tenant cloud environments.

OpenStack Neutron, with its plugin architecture, provides the ability to integrate SDN controllers into OpenStack. This integration of SDN controllers into Neutron through plugins provides centralized management and facilitates the programmability of OpenStack networking through APIs.

SDN controllers like OpenDaylight, Ryu, and Floodlight use either specific plugins or the ML2 plugin with the corresponding mechanism drivers, to allow communication between Neutron and the SDN controller. The big picture, showing the integration of OpenStack with SDN controllers, is shown in figure three below.

In our articles on SDN controllers, we have seen that network operating systems such as OpenDaylight and Ryu, among others, are responsible for providing a complete view of the network (topology) and its current, consistent state. We have also seen that the controller is responsible for “managing” (applying, enforcing and ensuring) the necessary changes to the network by translating requirements into the configuration (and monitoring) of the network elements, physical and virtual. Typically, these changes to the underlying network (and network elements) come from network applications running on top of SDN controllers, using the northbound APIs.

With this integration of OpenStack Neutron and SDN controllers, changes to the network and network elements can also be triggered by the OpenStack user; these are translated into Neutron API calls, handled by the Neutron plugins, and passed on to the corresponding SDN controller. For example, OpenDaylight interacts with Neutron through the ML2 plugin, using ODL’s northbound REST API. When an OpenStack user performs any networking-related operation (create/update/delete/read on network, subnet and port resources), the typical flow is as follows (a configuration sketch follows the list):

The user operation on the OpenStack dashboard (Horizon) will be translated into a corresponding networking API and sent to the Neutron server.

The Neutron server receives the request and passes it to the configured plugin (assume ML2 is configured with an ODL mechanism driver and a VXLAN type driver).

The Neutron server/plugin will make the appropriate change to the DB.

The plugin invokes the corresponding REST API call on the SDN controller (ODL in this example).

ODL, upon receiving this request, may perform necessary changes to the network elements using any of the southbound plugins/protocols, such as OpenFlow, OVSDB or OF-Config.
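
Assuming the networking-odl package supplies the OpenDaylight mechanism driver, the ML2 configuration that wires Neutron to ODL looks roughly like the following; the URL, port and credentials are placeholders:

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = opendaylight

[ml2_odl]
url = http://odl-controller:8080/controller/nb/v2/neutron
username = admin
password = admin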



Figure three: OpenStack and SDN Controllers, the Big Picture

We should note that there exist different options for integrating an SDN controller with OpenStack. For example: a) one can completely eliminate RPC communication between the Neutron server and the agents on the compute nodes, with the SDN controller being the sole entity managing the network; or b) the SDN controller manages only the physical switches, while the virtual switches are managed directly from the Neutron server.

Food for Thought: SDN Controller Deployment Options and OpenStack Integration

We want to end this introductory article by sharing with our readers one of the many challenges of SDN adaptability.

SDN controller deployments can take different forms, as summarized in the three groups of options below. We should note that it is possible to deploy different permutations and combinations of these options. For example, we can have a non-virtualized, integrated, single/redundant controller in a data center managing all of the data center’s network elements.

Non-Virtualized: a complete controller instance running on a single system (a physical machine).
Virtualized: a controller instance running in a virtualized environment (as a VM).

Integrated: all the SDN controller functions running under a single instance.
Distributed: the SDN controller functions are distributed.

Single/Redundant: a single controller (or one with redundancy) for the network.
Hierarchical: a hierarchy of controllers, possibly with client/server relationships between them.

The benefits of virtualization of an SDN controller include the ability to better scale up and scale out – dynamically adding more resources (such as storage) to an existing SDN controller. In a virtualized and distributed deployment — when the SDN controller is implemented as a set of collaborating virtual machines — additional VM instances can be added in response to the increased workload.

Consider a scenario where the SDN controller is virtualized and integrated/distributed, and the SDN network elements range from virtual to physical. In addition, management of these virtual infrastructures in the data center environment should fit within the current orchestration model — integrate with current VIMs (virtualized infrastructure managers) such as OpenStack. To achieve this, one has to overcome various challenges, such as performance and dynamic service management. The reader is encouraged to think of different options in creating end-to-end solutions in such scenarios.

Sridhar received his Ph.D. in computer science from the National University of Singapore in 2007, his M.Tech. degree in computer science from KREC, Suratkal, India in 2000, and his B.E. degree in instrumentation and electronics from SIT, Tumkur, Bangalore University, India, in 1997. He worked as a research lead at the SRM Research Institute, India; a post-doctoral fellow at the Microsoft Innovation Center, Politecnico Di Torino, Turin, Italy; and as a research fellow at the Institute for Infocomm Research (I2R), Singapore. He has worked on various development and deployment projects involving ZigBee, WiFi, and WiMax. Sridhar is currently working as a group technical specialist with NEC Technologies India Limited. Sridhar’s research interests are mainly in the domain of next-generation wired and wireless networking, such as OpenFlow, software-defined networking, software-defined radio-based systems for cognitive networks, Hotspot 2.0 and the Internet of Things.

Rackspace is a sponsor of The New Stack.

Feature image via Flickr Creative Commons.

