2015-05-14



When Ben Pfaff pushed to the ovn branch the last of the changes needed to make OVN functional, he dubbed it the “EZ Bake milestone”.  The analogy is both humorous and somewhat accurate.  We’ve reached the first functional milestone, which is quite exciting.

In previous posts I have gone through and shown components of the system as it has been built.  Now that it’s functional, I will go through a working demonstration of OpenStack using OVN.

DevStack

For this test environment we’ll stand up two hosts using DevStack.  Both hosts will be VMs running Fedora 21 that have 2 vCPUs and 4 GB of RAM.  We will refer to them as ovn-devstack-1 and ovn-devstack-2.

Each VM needs to have git installed and a user created that has sudo access.  This user will be used to run DevStack.

Setting up ovn-devstack-1

The first DevStack host will look like a typical single node DevStack install that runs all of OpenStack.  It will be using OVN to provide L2 network connectivity instead of the default OVS ML2 driver and the neutron OVS agent.  It will still make use of the L3 and DHCP agents from Neutron as the equivalent functionality has not yet been implemented in OVN.

Start by cloning DevStack and networking-ovn:
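
At the time, both repositories were hosted on git.openstack.org:

(ovn-devstack-1)$ git clone https://git.openstack.org/openstack-dev/devstack.git
(ovn-devstack-1)$ git clone https://git.openstack.org/openstack/networking-ovn.git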

networking-ovn comes with some sample configuration files for DevStack.  We can use the main sample for this host without any modifications.
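
Assuming the sample is devstack/local.conf.sample in the networking-ovn tree, putting it in place looks like this:

(ovn-devstack-1)$ cp networking-ovn/devstack/local.conf.sample devstack/local.conf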

After the DevStack configuration is in place, run DevStack to set up the environment.
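
That is, from the devstack checkout:

(ovn-devstack-1)$ cd devstack
(ovn-devstack-1)$ ./stack.sh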

This takes several minutes to complete.  Once it has completed successfully, stack.sh prints a summary of the environment that it set up.

Setting up ovn-devstack-2

The second DevStack host runs a minimal set of services needed to add an additional compute node (or hypervisor) to the existing DevStack environment.  It needs to run the OpenStack nova-compute service for managing local VMs, and ovn-controller to manage the local Open vSwitch configuration.

Setting up the second DevStack host is a very similar process.  Start by cloning DevStack and networking-ovn.
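
The same clones as on the first host:

(ovn-devstack-2)$ git clone https://git.openstack.org/openstack-dev/devstack.git
(ovn-devstack-2)$ git clone https://git.openstack.org/openstack/networking-ovn.git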

networking-ovn provides an additional sample configuration file for DevStack that is intended to be used for adding additional compute nodes to an existing DevStack environment.  You must set the SERVICE_HOST configuration variable in this file to be the IP address of the main DevStack host.
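
Assuming the compute node sample is devstack/computenode-local.conf.sample in the networking-ovn tree (the exact filename may differ), the setup looks like this:

(ovn-devstack-2)$ cp networking-ovn/devstack/computenode-local.conf.sample devstack/local.conf
(ovn-devstack-2)$ vi devstack/local.conf    # set SERVICE_HOST=<IP address of ovn-devstack-1>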

Once the DevStack configuration is ready, you can run DevStack to set up the new compute node.  It should take less time to complete than the first DevStack host.
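
As before:

(ovn-devstack-2)$ cd devstack
(ovn-devstack-2)$ ./stack.sh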

Once it completes, stack.sh again prints a summary of what was set up.

The Default Environment

DevStack is now running on two hosts.  Let’s take a look at the default state of this environment before we start creating VMs.  We’ll run various OpenStack command line tools to interact with the OpenStack APIs.  By default, these tools get credentials from environment variables.  DevStack comes with a file called openrc that makes it easy to switch between admin (the cloud administrator) and demo (a regular cloud user) credentials.
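
For example, from the devstack directory on ovn-devstack-1:

(ovn-devstack-1)$ . openrc admin    # switch to cloud administrator credentials
(ovn-devstack-1)$ . openrc demo     # switch back to regular user credentials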

We can start by making sure that Nova sees two hypervisors.  This API requires admin credentials.
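
For example:

(ovn-devstack-1)$ . openrc admin
(ovn-devstack-1)$ nova hypervisor-list    # should list both ovn-devstack-1 and ovn-devstack-2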

DevStack also has a default network configuration.  We can use the neutron command line tool to list the default networks.
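
For example:

(ovn-devstack-1)$ . openrc demo
(ovn-devstack-1)$ neutron net-list    # shows the default private and public networks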

The Horizon web interface also provides a visual representation of the network topology.



The default environment also creates four Neutron ports.  Three are related to the router and can be seen in the diagram above.  The fourth (not shown) is for the DHCP agent providing DHCP services to the private network.
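
The ports can be listed with the Neutron CLI (admin credentials show ports for all tenants):

(ovn-devstack-1)$ . openrc admin
(ovn-devstack-1)$ neutron port-list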

These default networks and ports can also be seen in OVN. OVN has a northbound database (OVN_Northbound) that serves as the public interface to OVN.  The Neutron driver updates this database to indicate the desired state.  OVN comes with a command line utility, ovn-nbctl, which can be used to view or update the OVN_Northbound database.  The show command gives a summary of the current configuration.
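
For example (sudo may or may not be required, depending on database socket permissions):

(ovn-devstack-1)$ sudo ovn-nbctl show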

Launching VMs

Now that the environment is ready, we can start launching VMs.  We will launch two VMs so that one will end up on each of our compute nodes.  We’ll verify that the data path is working and then inspect what OVN has done to make it work.

We want our VMs to have a single vNIC attached to the private Neutron network.

DevStack automatically imports a very small test image, CirrOS, which suits our needs.
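
We can confirm that the image is available:

(ovn-devstack-1)$ nova image-list    # the CirrOS image should appear here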

We’ll use the m1.nano flavor, as minimal resources are sufficient for our testing with these VMs.

We also need to create an SSH keypair for logging in to the VMs we create.
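
One way to do this is to have Nova generate a keypair and save the private key locally:

(ovn-devstack-1)$ nova keypair-add demo > id_rsa_demo
(ovn-devstack-1)$ chmod 600 id_rsa_demo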

We now have everything needed to boot some VMs. We’ll create two of them, named test1 and test2.
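
A sketch of the boot commands; IMAGE and PRIVATE_NET_ID are placeholders for the CirrOS image name from nova image-list and the private network UUID from neutron net-list:

(ovn-devstack-1)$ nova boot --image $IMAGE --flavor m1.nano --key-name demo --nic net-id=$PRIVATE_NET_ID test1
(ovn-devstack-1)$ nova boot --image $IMAGE --flavor m1.nano --key-name demo --nic net-id=$PRIVATE_NET_ID test2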

We can use admin credentials to see which hypervisor each VM ended up on. This is just to show that we now have an environment with two VMs on the private Neutron virtual network that spans two hypervisors.
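
One way to check is the admin-only hypervisor_hostname field on each server:

(ovn-devstack-1)$ . openrc admin
(ovn-devstack-1)$ nova show test1 | grep hypervisor_hostname
(ovn-devstack-1)$ nova show test2 | grep hypervisor_hostname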

When we first issue the boot requests, the status of each VM is BUILD.  Once a VM is running on its hypervisor, its status switches to ACTIVE.

Testing and Inspecting the Network

Our two new VMs have resulted in two more Neutron ports being created.  This is shown in Horizon’s visual representation of the network topology.

We can also get all of the details from the Neutron API:
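
For example:

(ovn-devstack-1)$ neutron port-list
(ovn-devstack-1)$ neutron port-show <port-id>    # full details for a single port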

The Ping Test

Now let’s verify that the network works as we expect.  In this environment we can connect to the private network from ovn-devstack-1.  We can start with a quick check that we can ping both VMs, and that the VMs can ping each other.
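
Assuming test1 received 10.0.0.3 and test2 received 10.0.0.4 (as they did in this environment), the checks look like this:

(ovn-devstack-1)$ ping -c 2 10.0.0.3
(ovn-devstack-1)$ ping -c 2 10.0.0.4
(ovn-devstack-1)$ ssh -i id_rsa_demo cirros@10.0.0.3
$ ping -c 2 10.0.0.4    # from test1 to test2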

It works!

OVN Northbound Database

Now let’s take a closer look at what OVN has done to make this work. We looked at the OVN_Northbound database earlier. It now includes the two additional ports for the VMs in its configuration for the private virtual network.
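
Running the same summary command as before shows the new logical ports:

(ovn-devstack-1)$ sudo ovn-nbctl show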

When we requested a new VM from Nova, Nova asked Neutron to create a new port on the network we specified. As the port was created, the Neutron OVN driver added this entry to the OVN_Northbound database. The northbound database is the desired state of the system. As it gets changed, the rest of OVN gets to work to implement the change.

OVN Chassis

OVN has a second database, OVN_Southbound, that is used internally to track the current state of the system. The Chassis table of OVN_Southbound is used to keep track of the different hypervisors running ovn-controller and how to connect to them. When ovn-controller starts, it registers itself in this table.
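
One way to inspect this table is to dump the southbound database directly with ovsdb-client; this sketch assumes the DevStack ovsdb-server is hosting OVN_Southbound on its default socket:

(ovn-devstack-1)$ sudo ovsdb-client dump OVN_Southbound    # look for the Chassis table in the output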

OVN Bindings

As logical ports get added to OVN_Northbound, the ovn-northd service creates entries in the Binding table of OVN_Southbound. This table is used to keep track of which physical chassis a logical port resides on. At first, the chassis column is empty. Once ovn-controller sees a port plugged into the local br-int with an iface-id that matches a logical port, ovn-controller will update the chassis column of that logical port’s Binding row to reflect that the port resides on that chassis.
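
The iface-id that ovn-controller matches against is stored in the external_ids column of the OVS Interface record.  For example, using the VM port on ovn-devstack-2 that appears later in this post:

(ovn-devstack-2)$ sudo ovs-vsctl get Interface tap10964198-b2 external_ids:iface-id    # prints the Neutron port UUID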

OVN Pipeline

Another function of the ovn-northd service is defining the contents of the Pipeline table in the OVN_Southbound database. Each row in the Pipeline table represents a logical flow. ovn-controller on each chassis is responsible for converting the logical flows into OpenFlow flows appropriate for that node. We will go through annotated Pipeline contents for the current configuration. The output has been reordered to make it easier to follow. It’s sorted by datapath (the logical switch the flows are associated with), then table_id, then priority.

The Pipeline table has a similar format to OpenFlow.  For each logical datapath (logical switch), processing starts at the highest priority match in table 0.  A complete description of the syntax for the Pipeline table can be found in the ovn-sb(5) document.

Table 0 starts by dropping anything with an invalid source MAC address.  It also drops anything with a logical VLAN tag, because there is no concept of logical VLANs.

The next 5 rows correspond to the five logical ports on this logical network. If the packet came in from one of the logical ports and its source MAC address is one that is allowed, processing will continue in table 1.

Finally, if the packet did not match any higher priority flows, it just gets dropped.

The highest priority flow in table 1 matches packets with a broadcast destination MAC address. In that case, processing continues in table 2 several times (once for each logical port on this network) with the outport variable set.

The next 5 flows match when the destination MAC address is a MAC address assigned to one of the logical ports. In that case, the outport variable gets set and processing continues in table 2.

Table 2 does nothing important in this environment. It will eventually be used to implement ACLs. In the context of Neutron, security groups will get translated into OVN ACLs and those ACLs will be reflected by flow entries in this table.

Table 3 is the final table.  The first flow matches a broadcast destination MAC address.  The action is output;, which sends the packet to the logical port identified by the outport variable.

The following 5 flows are associated with the 5 logical ports on this network. They will match if the outport variable matches a logical port and the destination MAC address is in the set of allowed MAC addresses.

All of the flows above are associated with the private network.  The remaining flows follow the same pattern, but are for the public network.

The Integration Bridge

Part of the configuration for ovn-controller is which bridge to use as its integration bridge.  By default, this is br-int.  Let’s start by looking at the configuration of br-int on ovn-devstack-2, as it is a bit simpler than ovn-devstack-1.
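
The bridge configuration can be viewed with ovs-vsctl:

(ovn-devstack-2)$ sudo ovs-vsctl show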

The port tap10964198-b2 is the port associated with the VM running on this compute node (test2, 10.0.0.4).  The other port, ovn-b29ae3-0, is for sending packets over a Geneve tunnel to ovn-devstack-1.

Now we can look at the configuration of br-int on the other host, ovn-devstack-1. The setup is very similar, except it has some additional ports that are associated with the default Neutron setup done by DevStack.
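
The same command shows it:

(ovn-devstack-1)$ sudo ovs-vsctl show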

OpenFlow

ovn-controller on each compute node converts the logical pipeline into OpenFlow flows. The processing maps conceptually to what we went through for the Pipeline table. Here are the flows for br-int on ovn-devstack-1.

(ovn-devstack-1)$ sudo ovs-ofctl -O OpenFlow13 dump-flows br-int
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x0, duration=15264.413s, table=0, n_packets=28, n_bytes=3302, priority=100,in_port=1 actions=set_field:0x1->metadata,set_field:0x1->reg6,resubmit(,16)
cookie=0x0, duration=15264.413s, table=0, n_packets=1797, n_bytes=294931, priority=100,in_port=2 actions=set_field:0x1->metadata,set_field:0x2->reg6,resubmit(,16)
cookie=0x0, duration=15264.413s, table=0, n_packets=12857, n_bytes=1414286, priority=100,in_port=3 actions=set_field:0x1->metadata,set_field:0x4->reg6,resubmit(,16)
cookie=0x0, duration=15264.413s, table=0, n_packets=1239, n_bytes=143548, priority=100,in_port=5 actions=set_field:0x1->metadata,set_field:0x5->reg6,resubmit(,16)
cookie=0x0, duration=15264.413s, table=0, n_packets=20, n_bytes=1940, priority=50,tun_id=0x1 actions=output:1
cookie=0x0, duration=15264.413s, table=0, n_packets=237, n_bytes=23848, priority=50,tun_id=0x2 actions=output:2
cookie=0x0, duration=15264.413s, table=0, n_packets=14, n_bytes=1430, priority=50,tun_id=0x4 actions=output:3
cookie=0x0, duration=15264.413s, table=0, n_packets=75, n_bytes=8516, priority=50,tun_id=0x5 actions=output:5
cookie=0x0, duration=15264.413s, table=16, n_packets=0, n_bytes=0, priority=100,metadata=0x1,vlan_tci=0x1000/0x1000 actions=drop
cookie=0x0, duration=15264.413s, table=16, n_packets=0, n_bytes=0, priority=100,metadata=0x2,vlan_tci=0x1000/0x1000 actions=drop
cookie=0x0, duration=15264.413s, table=16, n_packets=0, n_bytes=0, priority=100,metadata=0x1,dl_src=01:00:00:00:00:00/01:00:00:00:00:00 actions=drop
cookie=0x0, duration=15264.413s, table=16, n_packets=0, n_bytes=0, priority=100,metadata=0x2,dl_src=01:00:00:00:00:00/01:00:00:00:00:00 actions=drop
cookie=0x0, duration=15264.413s, table=16, n_packets=28, n_bytes=3302, priority=50,reg6=0x1,metadata=0x1,dl_src=fa:16:3e:76:12:96 actions=resubmit(,17)
cookie=0x0, duration=15264.413s, table=16, n_packets=1797, n_bytes=294931, priority=50,reg6=0x2,metadata=0x1,dl_src=fa:16:3e:b1:34:ed actions=resubmit(,17)
cookie=0x0, duration=15264.413s, table=16, n_packets=0, n_bytes=0, priority=50,reg6=0x3,metadata=0x2,dl_src=fa:16:3e:8c:d0:a8 actions=resubmit(,17)
cookie=0x0, duration=15264.413s, table=16, n_packets=12857, n_bytes=1414286, priority=50,reg6=0x4,metadata=0x1,dl_src=fa:16:3e:b7:cd:77 actions=resubmit(,17)
cookie=0x0, duration=15264.413s, table=16, n_packets=1239, n_bytes=143548, priority=50,reg6=0x5,metadata=0x1,dl_src=fa:16:3e:24:46:3a actions=resubmit(,17)
cookie=0x0, duration=15264.413s, table=16, n_packets=0, n_bytes=0, priority=50,reg6=0x6,metadata=0x1,dl_src=fa:16:3e:50:01:91 actions=resubmit(,17)
cookie=0x0, duration=15264.413s, table=16, n_packets=0, n_bytes=0, priority=0,metadata=0x1 actions=drop
cookie=0x0, duration=15264.413s, table=16, n_packets=0, n_bytes=0, priority=0,metadata=0x2 actions=drop
cookie=0x0, duration=15264.413s, table=17, n_packets=12978, n_bytes=1420946, priority=100,metadata=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=set_field:0x6->reg7,resubmit(,18),set_field:0x1->reg7,resubmit(,18),set_field:0x2->reg7,resubmit(,18),set_field:0x4->reg7,resubmit(,18),set_field:0x5->reg7,resubmit(,18)
cookie=0x0, duration=15264.413s, table=17, n_packets=0, n_bytes=0, priority=100,metadata=0x2,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=set_field:0x3->reg7,resubmit(,18)
cookie=0x0, duration=15264.413s, table=17, n_packets=7, n_bytes=552, priority=50,metadata=0x1,dl_dst=fa:16:3e:76:12:96 actions=set_field:0x1->reg7,resubmit(,18)
cookie=0x0, duration=15264.413s, table=17, n_packets=1064, n_bytes=129938, priority=50,metadata=0x1,dl_dst=fa:16:3e:b1:34:ed actions=set_field:0x2->reg7,resubmit(,18)
cookie=0x0, duration=15264.413s, table=17, n_packets=0, n_bytes=0, priority=50,metadata=0x2,dl_dst=fa:16:3e:8c:d0:a8 actions=set_field:0x3->reg7,resubmit(,18)
cookie=0x0, duration=15264.413s, table=17, n_packets=0, n_bytes=0, priority=50,metadata=0x1,dl_dst=fa:16:3e:b7:cd:77 actions=set_field:0x4->reg7,resubmit(,18)
cookie=0x0, duration=15264.413s, table=17, n_packets=1492, n_bytes=154092, priority=50,metadata=0x1,dl_dst=fa:16:3e:24:46:3a actions=set_field:0x5->reg7,resubmit(,18)
cookie=0x0, duration=15264.413s, table=17, n_packets=380, n_bytes=150539, priority=50,metadata=0x1,dl_dst=fa:16:3e:50:01:91 actions=set_field:0x6->reg7,resubmit(,18)
cookie=0x0, duration=15264.413s, table=18, n_packets=37895, n_bytes=4255421, priority=0,metadata=0x1 actions=resubmit(,19)
cookie=0x0, duration=15264.413s, table=18, n_packets=0, n_bytes=0, priority=0,metadata=0x2 actions=resubmit(,19)
cookie=0x0, duration=15264.413s, table=19, n_packets=34952, n_bytes=3820300, priority=100,metadata=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,64)
cookie=0x0, duration=15264.413s, table=19, n_packets=0, n_bytes=0, priority=100,metadata=0x2,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,64)
cookie=0x0, duration=15264.413s, table=19, n_packets=7, n_bytes=552, priority=50,reg7=0x1,metadata=0x1,dl_… (remaining output truncated)
