This setup allows us to experiment with OSP director, play with the existing Heat templates, create new ones, and understand how TripleO is used to install OpenStack, all from the comfort of your laptop.
The Professional version of VMware Fusion is used, but this will also work in VMware Workstation with virtually no changes, and in vSphere or VirtualBox with an equivalent setup.
This guide uses the official Red Hat documentation, in particular the Director Installation and Usage.
Architecture
Architecture diagram
A standard RHEL OSP 7 architecture with multiple networks, VLANs, bonding, and provisioning from the Undercloud (director) node via PXE.
Networks and VLANs
No special setup is needed to enable VLAN support in VMware Fusion; we just configure the VLANs and their networks in RHEL as usual.
DHCP and PXE
DHCP and PXE are provided by the Undercloud VM.
NAT
VMware Fusion NAT will be used to provide external access to the Controller and Compute VMs via the provisioning and external networks. The VMware Fusion NAT configuration below sets 10.0.0.2 on your Mac OS X host as the default gateway for the VMs, and this is the IP used as the default gateway in the TripleO templates.
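If you want to check or adjust this on the host, Fusion keeps a per-network NAT configuration file. The path and values below are an illustrative excerpt for this guide's addressing, not the full file; vmnet10 has an analogous file with 192.168.100.2 as the gateway:

```
# /Library/Preferences/VMware Fusion/vmnet9/nat.conf (illustrative excerpt)
[host]
# NAT gateway that the VMs will use as their default route
ip = 10.0.0.2
netmask = 255.255.255.0
```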
VMware Fusion Networks
The networks are configured in the VMware Fusion menu in Preferences, then Network.
The provisioning (PXE) network is set up in vmnet9, the rest of the networks in vmnet10.
The above describes the architecture of our laptop lab in VMware Fusion. Now, let’s implement it.
Step 1. Create 3 VMs in VMware Fusion
VM specifications
| VM | vCPUs | Memory | Disk | NICs | Boot device |
| --- | --- | --- | --- | --- | --- |
| Undercloud | 1 | 3000 MB | 20 GB | 2 | Disk |
| Controller | 2 | 3000 MB | 20 GB | 3 | 1st NIC |
| Compute | 2 | 3000 MB | 20 GB | 3 | 1st NIC |
Disk size
You may want to increase the disk size of the Controller to be able to test more or larger images, and of the Compute node to be able to run more or larger instances. 3 GB of memory is enough if you include a swap partition on the Compute and Controller nodes.
VMware network driver in .vmx file
Make sure the network driver for all NICs in the three VMs is vmxnet3 and not e1000, so that RHEL detects all of them:
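For example, the virtual NIC entries in each VM's .vmx file should look like this (one line per NIC, ethernet0 through ethernet2 as applicable):

```
ethernet0.virtualDev = "vmxnet3"
ethernet1.virtualDev = "vmxnet3"
ethernet2.virtualDev = "vmxnet3"
```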
ethX vs enoX NIC names
By default, the OSP director images have the kernel boot option net.ifnames=0, which names the network interfaces ethX instead of enoX. This is why the Undercloud (a regular RHEL install with the default net.ifnames=1) has the interface names eno16777984 and eno33557248, while the Controller and Compute VMs have eth0, eth1 and eth2. This may change in RHEL OSP 7.2.
Undercloud VM Networks
This is the mapping of VMware networks to OS NICs. An OVS bridge, br-ctlplane, will be created automatically by the Undercloud installation.
| Network | VMware Network | RHEL NIC |
| --- | --- | --- |
| External | vmnet10 | eno33557248 |
| Provisioning | vmnet9 | eno16777984 / br-ctlplane |
Copy the MAC addresses of the controller and compute VMs
Make a note of the MAC addresses of the first vNIC in the Controller and Compute VMs.
Step 2. Install the Undercloud
Install RHEL 7.1 in your preferred way in the Undercloud VM and then configure it as follows.
Network interfaces
First, set up the network: 192.168.100.10 will be the external IP on eno33557248 and 10.0.0.10 the provisioning IP on eno16777984.
In /etc/sysconfig/network-scripts/ifcfg-eno33557248
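A minimal static configuration could look like this (the gateway assumes the usual Fusion NAT convention of .2 on vmnet10):

```
DEVICE=eno33557248
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
GATEWAY=192.168.100.2
```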
And in /etc/sysconfig/network-scripts/ifcfg-eno16777984
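Similarly, for the provisioning NIC (no gateway here; the default route stays on the external interface):

```
DEVICE=eno16777984
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.10
NETMASK=255.255.255.0
```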
Once the network is set up, SSH from your Mac OS X host to 192.168.100.10 and not to 10.0.0.10, because the latter will be automatically reconfigured during the Undercloud installation to become the IP of the bridge called br-ctlplane, and you would lose access during the reconfiguration.
Undercloud hostname
The Undercloud needs a fully qualified domain name, which also needs to be present in the /etc/hosts file. For example:
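Using a hypothetical FQDN of undercloud.example.com:

```
sudo hostnamectl set-hostname undercloud.example.com
sudo hostnamectl set-hostname --transient undercloud.example.com
```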
And in /etc/hosts:
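Add an entry resolving the FQDN, for example:

```
127.0.0.1    localhost localhost.localdomain
10.0.0.10    undercloud.example.com undercloud
```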
Subscribe RHEL and Install the Undercloud Package
Now, subscribe the RHEL OS to Red Hat’s CDN and enable the required repos.
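A sketch of the registration steps; the pool ID is your own, and the repository names are the ones used by the OSP 7 director documentation:

```
sudo subscription-manager register
sudo subscription-manager attach --pool=<pool_id>
sudo subscription-manager repos --disable='*'
sudo subscription-manager repos \
  --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-extras-rpms \
  --enable=rhel-7-server-rh-common-rpms \
  --enable=rhel-7-server-openstack-7.0-rpms \
  --enable=rhel-7-server-openstack-7.0-director-rpms
sudo yum update -y
```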
Then, install the OpenStack client plug-in that will allow us to install the Undercloud:
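In OSP 7 the plug-in is packaged as python-rdomanager-oscplugin:

```
sudo yum install -y python-rdomanager-oscplugin
```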
Create the user stack
After that, create the stack user, which we will use to install the Undercloud and, later, to deploy and manage the Overcloud.
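Create the user and give it passwordless sudo, as the director documentation does:

```
sudo useradd stack
sudo passwd stack
echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
sudo chmod 0440 /etc/sudoers.d/stack
```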
Configure the director
The following undercloud.conf file is a working configuration for this guide, which is mostly self-explanatory.
For a reference of the configuration flags, there’s a documented sample in /usr/share/instack-undercloud/undercloud.conf.sample
Become the stack user and create the file in its home directory.
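A sketch of the undercloud.conf used here; the DHCP and discovery ranges are illustrative choices, and the rest matches the addressing of this guide:

```
[DEFAULT]
image_path = /home/stack/images
local_ip = 10.0.0.10/24
local_interface = eno16777984
masquerade_network = 10.0.0.0/24
network_cidr = 10.0.0.0/24
network_gateway = 10.0.0.2
dhcp_start = 10.0.0.100
dhcp_end = 10.0.0.120
discovery_interface = br-ctlplane
discovery_iprange = 10.0.0.130,10.0.0.150
```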
The masquerade_network config flag is optional, as VMware Fusion already provides NAT as explained above, but it might be needed if you use VirtualBox.
Finally, get the Undercloud installed
We will run the installation as the stack user we created:
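As the stack user:

```
su - stack
openstack undercloud install
```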
Step 3. Set up the Overcloud deployment
Verify the undercloud is working
Load the environment first, then run the service list command:
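The installation creates a stackrc file in the stack user's home directory with the Undercloud credentials:

```
source ~/stackrc
openstack service list
```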
Configure the fake_pxe Ironic driver
Ironic doesn't have a driver for powering VMware Fusion VMs on and off, so we will do that manually. We need to configure the fake_pxe driver for this.
Edit /etc/ironic/ironic.conf and add it:
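Append fake_pxe to the existing enabled_drivers line in the [DEFAULT] section (the other drivers listed are whatever your installation already has):

```
enabled_drivers = pxe_ipmitool,pxe_ssh,fake_pxe
```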
Then restart ironic-conductor and verify the driver is loaded:
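```
sudo systemctl restart openstack-ironic-conductor
ironic driver-list
```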
Upload the images into the Undercloud’s Glance
Download the images that will be used to deploy the OpenStack nodes to the directory specified by image_path in the undercloud.conf file, in our example /home/stack/images. Get the images and untar them as described here. Then upload them into Glance in the Undercloud:
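The client uploads all the images from that directory in one step, and we can list the result:

```
openstack overcloud image upload --image-path /home/stack/images
openstack image list
```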
Define the VMs into the Undercloud’s Ironic
TripleO needs to know about the nodes, in our case the VMware Fusion VMs. We describe them in the file instackenv.json which we’ll create in the home directory of the stack user.
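A sketch of instackenv.json for our two VMs; replace the placeholder MAC addresses with the ones you noted earlier, and note that with fake_pxe no power-management credentials are needed:

```
{
  "nodes": [
    {
      "pm_type": "fake_pxe",
      "mac": ["00:50:56:xx:xx:01"],
      "cpu": "2",
      "memory": "3000",
      "disk": "20",
      "arch": "x86_64"
    },
    {
      "pm_type": "fake_pxe",
      "mac": ["00:50:56:xx:xx:02"],
      "cpu": "2",
      "memory": "3000",
      "disk": "20",
      "arch": "x86_64"
    }
  ]
}
```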
Notice that this is where we use the MAC addresses we took from the two VMs.
Import them to the undercloud:
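```
openstack baremetal import --json ~/instackenv.json
```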
The command above adds the nodes to Ironic, which we can verify:
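```
ironic node-list
```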
To finish the registration of the nodes we run this command:
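```
openstack baremetal configure boot
```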
Discover the nodes
At this point we are ready to start discovering the nodes, i.e. having Ironic power them on, boot them with the discovery image that was uploaded before, and shut them down after the relevant hardware information has been saved in the node metadata in Ironic. This process is called introspection.
Note that as we use the fake_pxe driver, Ironic won't power on the VMs, so we do it manually in VMware Fusion. We wait until the output of ironic node-list tells us that the power state is on, and then we run this command:
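```
openstack baremetal introspection bulk start
```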
Assign the roles to the nodes in Ironic
There are two roles in this example, compute and control. We will assign them manually with Ironic.
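Using the node UUIDs from ironic node-list (the UUIDs below are placeholders):

```
ironic node-update <controller-uuid> add properties/capabilities='profile:control,boot_option:local'
ironic node-update <compute-uuid> add properties/capabilities='profile:compute,boot_option:local'
```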
Create the flavors in Glance and associate them with the roles in ironic
This consists of creating the flavors matching the specs of the VMs and then adding a control or compute property to the corresponding flavor, matching what we set in Ironic in the previous step.
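For example (the 2048 MB swap size is an assumption; adjust to taste):

```
openstack flavor create --id auto --ram 3000 --disk 19 --vcpus 2 --swap 2048 control
openstack flavor create --id auto --ram 3000 --disk 19 --vcpus 2 --swap 2048 compute
```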
TripleO also needs a flavor called baremetal (which we won’t use):
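```
openstack flavor create --id auto --ram 3000 --disk 19 --vcpus 2 baremetal
```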
Notice the disk size is 1 GB smaller than the VM's disk. This is a precaution to avoid a "No valid host found" error when deploying with Ironic, which sometimes is a bit too sensitive.
Also, notice that I added swap because 3 GB of memory is not enough and the out-of-memory killer could be triggered otherwise.
Now we make the flavors match with the capabilities we set in the Ironic nodes in the previous step:
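This follows the pattern from the director documentation, tying each flavor to the matching profile capability:

```
openstack flavor set --property "cpu_arch"="x86_64" \
  --property "capabilities:boot_option"="local" \
  --property "capabilities:profile"="control" control
openstack flavor set --property "cpu_arch"="x86_64" \
  --property "capabilities:boot_option"="local" \
  --property "capabilities:profile"="compute" compute
```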
Step 4. Create the TripleO templates
Get the TripleO templates
Copy the TripleO heat templates to the home directory of the stack user.
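For example (the nic-configs and firstboot subdirectories are where we will place our customized templates later in this step):

```
mkdir -p ~/templates/nic-configs ~/templates/firstboot
cp -r /usr/share/openstack-tripleo-heat-templates/* ~/templates/
cp ~/templates/network/config/bond-with-vlans/*.yaml ~/templates/nic-configs/
```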
Create the network definitions
These are our network definitions:
| Network | Subnet | VLAN |
| --- | --- | --- |
| Provisioning | 10.0.0.0/24 | VMware native |
| Internal API | 172.16.0.0/24 | 201 |
| Tenant | 172.17.0.0/24 | 204 |
| Storage | 172.18.0.0/24 | 202 |
| Storage Management | 172.19.0.0/24 | 203 |
| External | 192.168.100.0/24 | VMware native |
To allow creating dedicated networks for specific services, we describe them in a Heat template that we can call network-environment.yaml.
More information about this template can be found here.
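A sketch of network-environment.yaml matching the tables above; the allocation pools and DNS server are illustrative choices, and the NodeUserData entry registers the first-boot swap script we create later in this step:

```
resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::NodeUserData: /home/stack/templates/firstboot/firstboot.yaml

parameter_defaults:
  # Provisioning (control plane) network, native on vmnet9
  ControlPlaneSubnetCidr: "24"
  ControlPlaneDefaultRoute: 10.0.0.2
  EC2MetadataIp: 10.0.0.10
  # Isolated networks and VLANs on vmnet10
  InternalApiNetCidr: 172.16.0.0/24
  TenantNetCidr: 172.17.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  ExternalNetCidr: 192.168.100.0/24
  InternalApiNetworkVlanID: 201
  StorageNetworkVlanID: 202
  StorageMgmtNetworkVlanID: 203
  TenantNetworkVlanID: 204
  InternalApiAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  TenantAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
  ExternalAllocationPools: [{'start': '192.168.100.100', 'end': '192.168.100.150'}]
  ExternalInterfaceDefaultRoute: 192.168.100.2
  DnsServers: ['10.0.0.2']
  BondInterfaceOvsOptions: "bond_mode=active-backup"
```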
Configure the NICs of the VMs
There are examples of NIC configurations for multiple networks and bonding in /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/. We will use them as a template to define the Controller and Compute NIC setup. Notice that they are called from the previous template, network-environment.yaml.
Controller NICs
We want this setup in the controller:
| Bonded Interface | Bond Slaves | Bond Mode |
| --- | --- | --- |
| bond1 | eth1, eth2 | active-backup |

| Network | VMware Network | RHEL NIC |
| --- | --- | --- |
| Provisioning | vmnet9 | eth0 |
| External | vmnet10 | bond1 / br-ex |
| Internal | vmnet10 | bond1 / vlan201 |
| Tenant | vmnet10 | bond1 / vlan204 |
| Storage | vmnet10 | bond1 / vlan202 |
| Storage Management | vmnet10 | bond1 / vlan203 |
We only need to modify the resources section of the ~/templates/nic-configs/controller.yaml to match the configuration in the table above:
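A sketch of the network_config inside that resource, following the bond-with-vlans example; the parameter names come from the standard templates, and only the External network sits natively on the bridge here:

```
network_config:
  # Provisioning network on the first NIC (vmnet9)
  - type: interface
    name: nic1
    use_dhcp: false
    addresses:
      - ip_netmask:
          list_join: ['/', [{get_param: ControlPlaneIp}, {get_param: ControlPlaneSubnetCidr}]]
    routes:
      - ip_netmask: 169.254.169.254/32
        next_hop: {get_param: EC2MetadataIp}
  # External bridge with an active-backup bond over the second and third NICs
  - type: ovs_bridge
    name: {get_input: bridge_name}
    dns_servers: {get_param: DnsServers}
    addresses:
      - ip_netmask: {get_param: ExternalIpSubnet}
    routes:
      - ip_netmask: 0.0.0.0/0
        next_hop: {get_param: ExternalInterfaceDefaultRoute}
    members:
      - type: ovs_bond
        name: bond1
        ovs_options: {get_param: BondInterfaceOvsOptions}
        members:
          - type: interface
            name: nic2
            primary: true
          - type: interface
            name: nic3
      - type: vlan
        device: bond1
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: InternalApiIpSubnet}
      - type: vlan
        device: bond1
        vlan_id: {get_param: TenantNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: TenantIpSubnet}
      - type: vlan
        device: bond1
        vlan_id: {get_param: StorageNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: StorageIpSubnet}
      - type: vlan
        device: bond1
        vlan_id: {get_param: StorageMgmtNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: StorageMgmtIpSubnet}
```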
Compute NICs
On the compute node we want this setup:
| Bonded Interface | Bond Slaves | Bond Mode |
| --- | --- | --- |
| bond1 | eth1, eth2 | active-backup |

| Network | VMware Network | RHEL NIC |
| --- | --- | --- |
| Provisioning | vmnet9 | eth0 |
| Internal | vmnet10 | bond1 / vlan201 |
| Tenant | vmnet10 | bond1 / vlan204 |
| Storage | vmnet10 | bond1 / vlan202 |
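The ~/templates/nic-configs/compute.yaml follows the same pattern, minus the External network. A sketch of its network_config; since there is no External network here, the default route points at the provisioning NAT gateway:

```
network_config:
  - type: interface
    name: nic1
    use_dhcp: false
    addresses:
      - ip_netmask:
          list_join: ['/', [{get_param: ControlPlaneIp}, {get_param: ControlPlaneSubnetCidr}]]
    routes:
      - ip_netmask: 169.254.169.254/32
        next_hop: {get_param: EC2MetadataIp}
      - default: true
        next_hop: {get_param: ControlPlaneDefaultRoute}
  - type: ovs_bridge
    name: {get_input: bridge_name}
    members:
      - type: ovs_bond
        name: bond1
        ovs_options: {get_param: BondInterfaceOvsOptions}
        members:
          - type: interface
            name: nic2
            primary: true
          - type: interface
            name: nic3
      - type: vlan
        device: bond1
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: InternalApiIpSubnet}
      - type: vlan
        device: bond1
        vlan_id: {get_param: TenantNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: TenantIpSubnet}
      - type: vlan
        device: bond1
        vlan_id: {get_param: StorageNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: StorageIpSubnet}
```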
Enable Swap
Enabling the swap partition has to be done from within the OS; Ironic only creates the partition as instructed by the flavor. We can do this with the templates that allow running first-boot scripts via cloud-init.
First, the template for running at cloud-init userdata /home/stack/templates/firstboot/firstboot.yaml
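A minimal sketch, assuming it is registered as OS::TripleO::NodeUserData (as in the network-environment.yaml above) and injects the cloud-init configuration from userdata.yaml:

```
heat_template_version: 2014-10-16
description: First-boot configuration that enables the swap partition via cloud-init

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
        - config: {get_file: userdata.yaml}

outputs:
  OS::stack_id:
    value: {get_resource: userdata}
```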
Then, the actual script for enabling swap /home/stack/templates/firstboot/userdata.yaml
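A sketch of the cloud-config that looks up the swap partition created by Ironic, adds it to /etc/fstab and enables it:

```
#cloud-config
runcmd:
  - |
    # Find the swap partition that Ironic created according to the flavor
    swap=$(blkid -t TYPE=swap -o device | head -n1)
    if [ -n "$swap" ]; then
      echo "$swap swap swap defaults 0 0" >> /etc/fstab
      swapon "$swap"
    fi
```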
Step 5. Deploy the Overcloud
Summary
We have everything we need to deploy now:
- The Undercloud configured.
- Flavors for the compute and controller nodes.
- Images for the discovery and deployment of the nodes.
- Templates defining the networks in OpenStack.
- Templates defining the nodes' NIC configuration.
- A first-boot script used to enable swap.
We will use all this information when running the deploy command:
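A sketch of the deploy command for this setup; remember to power the VMs on manually once Ironic starts deploying them, since fake_pxe does not control power:

```
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/templates/network-environment.yaml \
  --control-flavor control --compute-flavor compute \
  --neutron-network-type vxlan --neutron-tunnel-types vxlan \
  --ntp-server pool.ntp.org
```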
After a successful deployment, the command reports the Overcloud endpoint and finishes with a success message.
An overcloudrc file with the environment credentials is created for you to start using the new OpenStack environment deployed on your laptop.
Step 6. Start using the Overcloud
Now we are ready to start testing our newly deployed platform:
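For example, load the Overcloud credentials and query the services with the nova and neutron CLIs shipped with OSP 7:

```
source ~/overcloudrc
nova service-list
neutron net-list
```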