We are getting a lot of requests about how to deploy OpenStack in proof of concept (PoC) or production environments because, let's face it, setting up an OpenStack infrastructure from scratch without the aid of a deployment tool is not particularly suitable for faint-hearted newcomers.
DevStack, a tool that targets development environments, is still very popular for building proofs of concept as well, although the results can be quite different from deploying a stable release version. Here's an alternative that provides a very easy way to get OpenStack up and running, using the latest OpenStack stable release.
RDO and Packstack
RDO is an excellent solution to go from zero to a fully working OpenStack deployment in a matter of minutes.
RDO is simply a distribution of OpenStack for Red Hat Enterprise Linux (RHEL), Fedora and derivatives (e.g.: CentOS).
Packstack is a Puppet based tool that simplifies the deployment of RDO.
There's quite a lot of documentation about RDO and Packstack, but it's mostly related to so-called all-in-one setups (a single server), which are IMO too trivial to be considered for anything beyond the most basic PoC, let alone a production environment. Most real OpenStack deployments are multi-node, which is quite natural given the highly distributed nature of OpenStack.
Some people might argue that limiting the effort to an all-in-one setup is mandated by the available hardware resources. Before making a decision in that direction, consider that you can run the scenarios described in this post entirely on VMs. For example, I'm currently employing VMware Fusion virtual machines on a laptop, nested hypervisors (KVM and Hyper-V) included. This is quite a flexible scenario, as you can simulate as many hosts and networks as you need without the constraints of a physical environment.
Let's start by describing what the OpenStack Grizzly multi-node setup that we are going to deploy looks like.
Controller
This is the OpenStack "brain", running all Nova services except nova-compute and nova-network, plus quantum-server, Keystone, Glance, Cinder and Horizon (you can also add Swift and Ceilometer).
I typically assign 1GB of RAM and 30GB of disk space to this host (add more if you want to use large Cinder LVM volumes or big Glance images). On the networking side, only a single NIC (eth0) connected to the management network is needed (more on networking soon).
Network Router
The job of this server is to run OpenVSwitch to provide networking between your virtual machines and the Internet (or any other external network that you might define).
Besides OpenVSwitch, this node will run quantum-openvswitch-agent, quantum-dhcp-agent, quantum-l3-agent and quantum-metadata-proxy.
1GB of RAM and 10GB of disk space are enough here. You'll need three NICs, connected to the management (eth0), guest data (eth1) and public (eth2) networks.
Note: If you run this node as a virtual machine, make sure that the hypervisor’s virtual switches support promiscuous mode.
KVM compute node (optional)
This is one of the two hypervisors that we’ll use in our demo. Most people like to use KVM in OpenStack, so we are going to use it to run our Linux VMs.
The only OpenStack services required here are nova-compute and quantum-openvswitch-agent.
Allocate the RAM and disk resources for this node based on your requirements, considering especially the amount of RAM and disk space that you want to assign to your VMs. 4GB of RAM and 50GB of disk space can be considered a starting point. If you plan to run this host in a VM, make sure that the virtual CPU supports nested virtualization. Two NICs are required, connected to the management (eth0) and guest data (eth1) networks.
Hyper-V 2012 compute node (optional)
Microsoft Hyper-V Server 2012 is a great and completely free hypervisor; just grab a copy of the ISO from here. In the demo we are going to use it for running Windows instances, but besides that you can of course use it to run Linux or FreeBSD VMs as well. You can also grab a ready-made OpenStack Windows Server 2012 Evaluation image from here, so there's no need to learn how to package a Windows OpenStack image today. The required OpenStack services here are nova-compute and quantum-hyperv-agent. No worries, here's an installer that will take care of setting them up for you; make sure to download the stable Grizzly release.
As for the resources to allocate for this host, the same considerations discussed for the KVM node apply here as well; just consider that Hyper-V will require 16-20GB of disk space for the OS itself, including updates. I usually assign 4GB of RAM and 60-80GB of disk. Two NICs are required here as well, connected to the management and guest data networks.
Networking
Let’s spend a few words about how the hosts are connected.
Management
This network is used for management only (e.g. running nova commands or connecting to the hosts via SSH). It should definitely not be accessible from the OpenStack instances, to avoid any security issues.
Guest data
This is the network used by guests to communicate among each other and with the rest of the world. It’s important to note that although we are defining a single physical network, we’ll be able to define multiple isolated networks using VLANs or tunnelling on top of it. One of the requirements of our scenario is to be able to run groups of isolated instances for different tenants.
Public
Last, this is the network used by the instances to access external networks (e.g. the Internet), routed through the network host. External hosts (e.g. a client on the Internet) will be able to connect to some of your instances based on the floating IP and security group configuration.
Hosts configuration
Just do a minimal installation and configure your network adapters. We are using CentOS 6.4 x64, but RHEL 6.4, Fedora or Scientific Linux images are perfectly fine as well. Packstack will take care of getting all the requirements as we will soon see.
Once you are done with the installation, updating the hosts with yum update -y is a good practice.
Configure your management adapters (eth0) with a static IP, e.g. by editing directly the ifcfg-eth0 configuration file in /etc/sysconfig/network-scripts. As a basic example:
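A minimal example (the IP address and netmask below are placeholders, adjust them to your management network):

    DEVICE=eth0
    TYPE=Ethernet
    BOOTPROTO=none
    IPADDR=192.168.100.10
    NETMASK=255.255.255.0
    ONBOOT=yes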
General networking configuration goes in /etc/sysconfig/network, e.g.:
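For example (hostname and gateway are placeholders):

    NETWORKING=yes
    HOSTNAME=openstack-controller.localdomain
    GATEWAY=192.168.100.1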
And add your DNS configuration in /etc/resolv.conf, e.g.:
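For instance, using a public resolver as a placeholder:

    nameserver 8.8.8.8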
Nics connected to guest data (eth1) and public (eth2) networks don’t require an IP. You also don’t need to add any OpenVSwitch configuration here, just make sure that the adapters get enabled on boot, e.g.:
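A minimal ifcfg-eth1 example (repeat for ifcfg-eth2 on the network host):

    DEVICE=eth1
    TYPE=Ethernet
    BOOTPROTO=none
    ONBOOT=yes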
You can reload your network configuration with:
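    service network restart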
Packstack
Once you have setup all your hosts, it’s time to install Packstack. Log in on the controller host console and run:
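A sketch of the commands, assuming the RDO Grizzly release RPM is still available at the URL below (check the RDO documentation for the current location):

    yum install -y http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly.rpm
    yum install -y openstack-packstack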
Now we need to create a so-called "answer file" to tell Packstack how we want our OpenStack deployment to be configured:
A useful detail about the answer file is that it comes already populated with random passwords for all your services; change them as required.
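The file name is arbitrary; we'll reuse it in the next steps:

    packstack --gen-answer-file=packstack_answers.conf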
Here's a script to add our configuration to the answer file. Change the IP addresses of the network and KVM compute hosts, along with any of the Cinder or Quantum parameters, to fit your scenario.
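Here's a sketch of such a script, using placeholder IP addresses and Grizzly-era answer file keys (verify the key names against your generated file, as they can differ between Packstack versions):

    #!/bin/bash
    # Placeholder values, adjust to your environment
    ANSWER_FILE=packstack_answers.conf
    NETWORK_HOST=192.168.100.11
    KVM_COMPUTE_HOST=192.168.100.12

    # Run the Quantum L3, DHCP and metadata agents on the network host
    sed -i "s/^CONFIG_QUANTUM_L3_HOSTS=.*/CONFIG_QUANTUM_L3_HOSTS=$NETWORK_HOST/" $ANSWER_FILE
    sed -i "s/^CONFIG_QUANTUM_DHCP_HOSTS=.*/CONFIG_QUANTUM_DHCP_HOSTS=$NETWORK_HOST/" $ANSWER_FILE
    sed -i "s/^CONFIG_QUANTUM_METADATA_HOSTS=.*/CONFIG_QUANTUM_METADATA_HOSTS=$NETWORK_HOST/" $ANSWER_FILE

    # Run nova-compute on the KVM compute host only
    sed -i "s/^CONFIG_NOVA_COMPUTE_HOSTS=.*/CONFIG_NOVA_COMPUTE_HOSTS=$KVM_COMPUTE_HOST/" $ANSWER_FILE

    # Use VLAN tenant networks on br-eth1 (mapped to eth1, the guest data network)
    sed -i "s/^CONFIG_QUANTUM_OVS_TENANT_NETWORK_TYPE=.*/CONFIG_QUANTUM_OVS_TENANT_NETWORK_TYPE=vlan/" $ANSWER_FILE
    sed -i "s/^CONFIG_QUANTUM_OVS_VLAN_RANGES=.*/CONFIG_QUANTUM_OVS_VLAN_RANGES=physnet1:1000:2000/" $ANSWER_FILE
    sed -i "s/^CONFIG_QUANTUM_OVS_BRIDGE_MAPPINGS=.*/CONFIG_QUANTUM_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1/" $ANSWER_FILE
    sed -i "s/^CONFIG_QUANTUM_OVS_BRIDGE_IFACES=.*/CONFIG_QUANTUM_OVS_BRIDGE_IFACES=br-eth1:eth1/" $ANSWER_FILE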
Now all we have to do is run Packstack and wait for the configuration to be applied, including dependencies like MySQL Server and Apache Qpid (used by RDO as an alternative to RabbitMQ). You'll have to provide the password to access the other nodes only once; afterwards Packstack will deploy an SSH key to the remote ~/.ssh/authorized_keys files. As mentioned above, Puppet is used to perform the actual deployment.
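For example, reusing the answer file created above:

    packstack --answer-file=packstack_answers.conf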
At the end of the execution, Packstack will ask you to install a new Linux kernel, provided as part of the RDO repository, on the hosts. This is needed because the kernel provided by RHEL (and thus CentOS) doesn't support network namespaces, a feature needed by Quantum in this scenario. What Packstack doesn't tell you is that the 2.6.32 kernel it provides will create a lot more issues with Quantum. At this point, why not install a modern 3.x kernel instead?
My suggestion is to skip the RDO kernel altogether and install the 3.4 kernel provided as part of the CentOS Xen project (which does not mean that we are installing Xen; we only need the kernel package).
Let's update the kernel and reboot the network and KVM compute hosts from the controller (no need to install it on the controller itself):
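A possible way to do it, assuming the Xen4CentOS repository is available via the centos-release-xen package and reusing the placeholder host addresses from the answer file script (adapt both to your environment):

    for host in 192.168.100.11 192.168.100.12; do
        # Enable the CentOS Xen repository and install its 3.x kernel, then reboot
        ssh root@$host "yum install -y centos-release-xen && yum install -y kernel && reboot"
    done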
At the time of writing, there's a bug in Packstack that applies to multi-node scenarios: the Quantum firewall driver is not set in quantum.conf, causing failures in Nova. Here's a simple fix to be executed on the controller (the alternative would be to disable the security groups feature altogether):
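One possible form of the fix, using openstack-config from the openstack-utils package (double-check the file and section names against your Packstack version):

    openstack-config --set /etc/quantum/quantum.conf SECURITYGROUP firewall_driver \
        quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    service quantum-server restart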
We can now check if everything is working. First we need to set our environment variables:
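Packstack generates a keystonerc_admin file with the admin credentials in root's home directory on the controller:

    source ~/keystonerc_admin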
Let’s check the nova services:
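    nova-manage service list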
Here's a sample output. If you see xxx in place of one of the smiley faces, it means that there's something to fix.
Now we can check the status of our Quantum agents on the network and KVM compute hosts:
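    quantum agent-list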
You should get an output similar to the following one.
OpenVSwitch
On the network node we need to add the eth2 interface to the br-ex bridge:
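    ovs-vsctl add-port br-ex eth2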
We can now check if the OpenVSwitch configuration has been applied correctly on the network and KVM compute nodes. Log in on the network node and run:
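    # Show the OpenVSwitch bridge and port layout
    ovs-vsctl show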
The output should look similar to:
Notice the membership of eth1 in br-eth1 and of eth2 in br-ex. If you don't see them, you can just add them now.
To add a bridge, should br-eth1 be missing:
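    ovs-vsctl add-br br-eth1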
To add the eth1 port to the bridge:
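    ovs-vsctl add-port br-eth1 eth1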
You can now repeat the same procedure on the KVM compute node, considering only br-eth1 and eth1 (there’s no eth2).
What’s next?
Ok, enough for today. In the forthcoming Part 2 we’ll see how to add a Hyper-V compute node to the mix!