2015-11-30

In the previous article in this series we gave you a quick overview of why OpenStack and Windows Nano Server are among the most exciting elements in the current Windows ecosystem. In this article we are going to expand on those elements and give you the tools you need to deploy your own hyper-converged cloud, using our OpenStack Windows Liberty components along with Ubuntu's Linux-based OpenStack ones!

Why is everyone so excited about Windows Nano Server?

Nano Server is a new installation option for Windows Server 2016 that reduces the overall footprint to just a few hundred MB of disk space. The resulting OS is much faster to deploy and to boot, and drastically reduces the number of updates and reboots required during daily management. In short, it's an OS built for the cloud age and a huge leap forward compared to traditional GUI-based Windows deployments.

Nano images are designed to be purpose built for each deployment. That means that if you want a Windows Server that is just a hypervisor, you can build the image just with that role installed and nothing else. In this article we are going to focus on three main roles:

Compute (Cloudbase OpenStack components and Hyper-V)

Clustering

Storage (Storage Spaces Direct)

Storage Spaces Direct (S2D)

Aside from Nano itself, this is one of the features I am most excited about and a key element in enabling a hyper-converged scenario on Windows Server. Storage Spaces Direct is an evolution of the Storage Spaces feature introduced in Windows Server 2012, with one important difference: it allows you to use locally attached storage. This means you can use commodity hardware to build your own scale-out storage at a fraction of the cost of a typical enterprise storage solution. It also means we can create a hyper-converged setup where all the Hyper-V compute nodes are clustered together and become bricks in a scale-out storage system.



OK, OK… let's deploy already!

Before we begin, a word of warning: Windows Nano Server is in Technical Preview (it will be released as part of Windows Server 2016). The following deployment instructions have been tested and validated on the current Technical Preview 4 and are subject to change in upcoming releases.

Prerequisites

We want to deploy an OpenStack cloud on bare metal. We will use Juju for orchestration and MaaS (Metal as a Service) as the bare-metal provider. Here's our requirements list. We kept the number of resources to the bare minimum, which means that some features, like full component redundancy, are left for one of the next blog posts:

MaaS install

Windows Server 2016 TP4 ISO

A Windows 10 or Windows Server 2016 installation. You will need this to build MaaS images.

One host to be used for MaaS and controller-related services

Should have at least three NICs (management, data, external)

At least 4 hosts that will be used as Hyper-V compute nodes

Each compute node must have at least two disks

Each compute node should have at least two NICs (management and data)

As an example, our typical lab environment uses Intel NUC servers. They are great for testing and have been our trusty companions throughout many demos and OpenStack summits. These are the new NUCs that have one mSATA port and one M.2 port. We will use the M.2 disk as part of the Storage Spaces Direct storage pool. Each NUC has one extra USB 3 Ethernet NIC that acts as a data port.

Here's a detailed list of the node configurations.

Node 1

Ubuntu 14.04 LTS MaaS on bare metal with 4 VMs running on KVM.

3 NICs, each attached to a standard Linux bridge.

eth0 (attached to br0) is the MaaS publicly accessible NIC. It will be used by Neutron as an external NIC.

eth1 (attached to br1) is connected to an isolated physical switch. This will be the management port for nodes deployed using MaaS.

eth2 (attached to br2) is connected to an isolated physical switch. This will be the data port used for tenant traffic.

The MaaS node hosts 4 virtual machines:

VM01

tags: state

purpose: this will be the Juju state machine

NICs

eth0 attached to br1

Minimum recommended resources:

2 CPU cores (1 should do as well)

2 GB RAM

20 GB disk space

VM02

tags: s2d-proxy

purpose: This manages the Nano Server S2D cluster.

NICs

eth0 attached to br1

Minimum recommended resources:

2 CPU cores

2 GB RAM

20 GB disk

VM03

tags: services

purpose: OpenStack controller

NICs:

eth0 attached to br1 (maas management)

eth1 attached to br2 (isolated data port)

eth2 attached to br0 (external network)

Minimum recommended resources:

4 CPU cores

8 GB RAM

80 GB disk space

VM04

tags: addc

purpose: Active Directory controller

NICs:

eth0 attached to br1

Minimum recommended resources:

2 CPU cores

2 GB RAM

20 GB disk

Node 2,3,4,5

Each node has:

2 NICs available

one NIC attached to the MaaS management switch (PXE booting must be enabled and set as first boot option)

one NIC attached to the isolated data switch

2 physical disks (SATA, SAS, or SSD)

16 GB RAM (recommended, minimum 4GB)

tags: nano

purpose: Nano Server Hyper-V compute nodes with Cloudbase OpenStack components

Install MaaS

We are not going to go into too much detail here, as the installation process is covered very well in the official documentation. Just follow this article; it's simple and straightforward. Make sure to configure your management network for both DHCP and DNS. After installing MaaS, it's time to register your nodes. You can do so by simply powering them on once: MaaS will automatically enlist them.
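For reference, on Ubuntu 14.04 the installation boils down to a few commands (the admin username and email below are placeholders):

```shell
sudo apt-get update
sudo apt-get install -y maas
# Create the MaaS admin user (username and email are placeholders)
sudo maas-region-admin createadmin --username admin --email admin@example.com
```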

You can log into the simple and intuitive MaaS web UI, available at http://${MAAS_HOST}/MAAS, and check that your nodes are properly enlisted.

Assign tags to your MaaS nodes

Tags allow Juju to request hardware with specific characteristics from MaaS for specific charms. For example, the Nano Server nodes will have a "nano" tag. This is not necessary if your hardware is completely homogeneous. We listed the tags in the prerequisites section.

This can be done either in the UI, by editing each individual node, or with the following Linux CLI instructions.

Register a tag with MaaS:
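The original command listing is not reproduced here; with the MaaS 1.x CLI it looks like the following (the profile name `maas` and the API key are placeholders):

```shell
# Log in to the MaaS CLI first (profile name and API key are placeholders)
maas login maas http://${MAAS_HOST}/MAAS/api/1.0 ${MAAS_API_KEY}

# Register the "nano" tag used by the Nano Server compute nodes
maas maas tags new name=nano comment="Nano Server Hyper-V compute nodes"
```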

And assign it to a node:
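Again as a sketch with the MaaS 1.x CLI (the system_id comes from the node listing):

```shell
# Find the node's system_id, then attach the tag to it
maas maas nodes list | grep system_id
maas maas tag update-nodes nano add=${SYSTEM_ID}
```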

Build Windows images

With MaaS installed, we need to build the Windows images. For this purpose, we have a set of PowerShell cmdlets that will aid you in building them. Log into your Windows 10 / Windows Server 2016 machine and open an elevated PowerShell prompt.

First, let's download some required packages:
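The original command block was lost; as a stand-in, assuming Git is the prerequisite being installed, something like the following works:

```powershell
# Assumption: Chocolatey is available; any Git for Windows install works too
choco install git -y
```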

Download the required resources:
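Assuming the two repositories live under the Cloudbase GitHub organization (the exact URLs are an assumption; the folder names match the ones listed below):

```powershell
# Clone the two repositories into your home folder
cd $HOME
git clone https://github.com/cloudbase/windows-openstack-imaging-tools-experimental
git clone https://github.com/cloudbase/generate-nano-image
```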

You should now have two extra folders in your home folder:

generate-nano-image

windows-openstack-imaging-tools-experimental

Generate the Nano image

Let's generate the Nano image first:
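The exact script name and parameters live in the generate-nano-image repository; the invocation below is a hypothetical sketch (script name, ISO path, and output path are all placeholders):

```powershell
cd $HOME\generate-nano-image
# Hypothetical parameters -- check the repository README for the real ones
.\GenerateNanoImage.ps1 -IsoPath C:\ISO\WindowsServer2016TP4.iso `
                        -TargetPath $HOME\nano.raw.tar.gz
```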

Now, SSH into your MaaS node and upload the image in MaaS using the following commands:
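With the MaaS 1.x CLI the upload looks like this (the local image file name is a placeholder):

```shell
maas maas boot-resources create name=windows/win2016nano \
    title="Windows Nano Server 2016" architecture=amd64/generic \
    filetype=ddtgz content@=./nano.raw.tar.gz
```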

The name is important: it must be win2016nano. This is what Juju expects when requesting the image from MaaS for deployment.

Generate a Windows Server 2016 image

This will generate a MaaS compatible image starting from a Windows ISO, it requires Hyper-V:
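A sketch of the invocation, assuming the tools expose their helpers through a WinImageBuilder.psm1 module (the cmdlet name and parameters are assumptions; see the repository README for the real ones):

```powershell
cd $HOME\windows-openstack-imaging-tools-experimental
Import-Module .\WinImageBuilder.psm1
# Parameters are placeholders -- point them at your mounted Windows ISO
New-MaaSImage -WimFilePath E:\sources\install.wim `
              -ImageName "Windows Server 2016 SERVERSTANDARD" `
              -MaaSImagePath $HOME\win2016.raw.tar.gz
```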

Upload the image to MaaS:
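Same shape as the Nano upload, again with the local file name as a placeholder:

```shell
maas maas boot-resources create name=windows/win2016 \
    title="Windows Server 2016" architecture=amd64/generic \
    filetype=ddtgz content@=./win2016.raw.tar.gz
```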

As with the Nano image, the name is important. It must be win2016.

Setting up Juju

Now the fun stuff begins. We need to fetch the OpenStack Juju charms and the juju-core binaries, and bootstrap the Juju state machine. This process is a bit more involved, because it requires copying the agent tools to a web server (any will do). A simple solution is to just copy the tools to /var/www/html on your MaaS node, but you can use any web server at your disposal.

For the Juju deployment you will need an Ubuntu machine. We generally use the MaaS node directly in our demo setup, but if you are already running Ubuntu, you can use your local machine.

Fetch the charms and tools

For your convenience, we have compiled a modified version of the agent tools and client binaries that you need to run on Nano Server. This is currently necessary because we are still submitting the patches for Nano Server support upstream, so this step won't be needed by the time Windows Server 2016 is released.

From your Ubuntu machine:
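The original download commands were lost; a sketch, assuming a hyper-c repository matching the $HOME/hyper-c path used in the next section (the repository URL and binary name are assumptions):

```shell
cd $HOME
# Repository URL is an assumption; it must end up as $HOME/hyper-c
git clone https://github.com/cloudbase/hyper-c
# Put the patched juju client on the PATH
sudo cp $HOME/hyper-c/juju-core/juju /usr/local/bin/
juju version
```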

If everything worked as expected, the last command should give you the Juju version.

Configuring the Juju environment

If you look inside $HOME/hyper-c/juju-core you will see a folder called tools. You need to copy that folder to the web server of your choice; it will be used to bootstrap the state machine. Let's copy it to the MaaS node:
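For example (the remote username is an assumption):

```shell
# ${MAAS_HOST} is the address of your MaaS node
scp -r $HOME/hyper-c/juju-core/tools ubuntu@${MAAS_HOST}:~/
```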

Now, ssh into your MaaS node and copy these files in a web accessible location:
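On Ubuntu 14.04, /var/www/html is Apache's default web root:

```shell
ssh ubuntu@${MAAS_HOST}
# On the MaaS node, make the tools web accessible
sudo cp -r ~/tools /var/www/html/
```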

Back on your client machine, create the juju environments boilerplate:
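With the Juju 1.x client this is a single command:

```shell
juju generate-config
```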

This will create a folder $HOME/.juju. Inside it you will have a file called environments.yaml that we need to edit.

Edit the environments file:

We only care about the MaaS provider. Navigate to your MaaS server at http://${MAAS_HOST}/MAAS/account/prefs/ and retrieve the MaaS API key, as you did before.

Edit your environments.yaml so it looks like the following:
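A sketch of the relevant section; key names can vary slightly between Juju 1.x releases, and agent-metadata-url points at the tools folder you copied to the web server:

```yaml
environments:
  maas:
    type: maas
    maas-server: 'http://${MAAS_HOST}/MAAS/'
    maas-oauth: '<your MaaS API key>'
    admin-secret: '<a password of your choice>'
    bootstrap-timeout: 1800
    # Points at the tools folder copied to the web server earlier
    agent-metadata-url: 'http://${MAAS_HOST}/tools'
```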

Before you bootstrap the environment, it's important to know whether the newly bootstrapped state machine will be reachable from your client machine. For example, if you have a lab environment where all your nodes are in a private network behind MaaS, and MaaS is also the router for the network it manages, you will need to do two things:

enable NAT and ip_forward on your MaaS node

create a static route entry on your client machine that uses the MaaS node as a gateway for the network you configured in MaaS for your cluster

Enable NAT on MaaS:
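Assuming eth0 is the externally facing NIC on the MaaS node:

```shell
# On the MaaS node: enable forwarding and masquerade traffic leaving eth0
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```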

Add a static route on your client:
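For example (the subnet below is a placeholder for the network you configured in MaaS):

```shell
# Route the MaaS-managed network through the MaaS node
sudo ip route add 10.10.0.0/24 via ${MAAS_HOST}
```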

You are now ready to bootstrap your environment:
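A sketch of the bootstrap command, using a MaaS tag constraint to land on the right VM:

```shell
# "tags=state" steers the bootstrap node onto the VM tagged "state" in MaaS
juju bootstrap --constraints "tags=state" --debug
```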

Deploy the charms

You should now have a fully functional juju environment with Windows Nano Server support. Time to deploy the charms!

This is the last step in the deployment process. For your convenience, we have made a bundle file available inside the repository. You can find it in:

Make sure you edit the file and set whatever options apply to your environment. For example, the bundle file expects to find nodes with certain tags. Here is an example:
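The original bundle listing is not reproduced here; a hypothetical excerpt showing the shape such a bundle takes (charm names, unit counts, and options are assumptions; use the real bundle file from the repository):

```yaml
# Hypothetical excerpt -- edit the real bundle file from the repository
hyper-c:
  services:
    nano:
      charm: local:win2016nano/nano
      num_units: 4
      constraints: "tags=nano"
    s2d-proxy:
      charm: local:win2016/s2d-proxy
      num_units: 1
      constraints: "tags=s2d-proxy"
```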

Pay close attention to every definition in this file. It should precisely mirror your environment (tags, MAC addresses, IP addresses, etc). A misconfiguration will yield unpredictable results.

Use juju-deployer to deploy everything with just one command:
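The bundle file name below is an assumption; substitute the one shipped in the repository:

```shell
juju-deployer -c hyper-c.yaml
```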

It will take a while for everything to run, so sit back and relax while your environment deploys. One more thing worth mentioning: Juju has a gorgeous web GUI. It's not resource-intensive, so you can deploy it to your state machine. Simply:
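```shell
# --to 0 places the GUI on the state machine (machine 0)
juju deploy juju-gui --to 0
```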

You will be able to access it using the IP of the state machine. To get the IP, simply run:
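```shell
juju status juju-gui | grep public-address
```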

The user name will be admin and the password will be the value you set for admin-secret in Juju's environments.yaml.

At the end of this you will have the following setup:

Liberty OpenStack cloud (with Ubuntu and Cloudbase components)

Active Directory controller

Hyper-V compute nodes

Storage Spaces Direct

Access your OpenStack environment

Get the IP of your Keystone endpoint
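```shell
juju status keystone | grep public-address
```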

Export the required OS_* variables (you can also put them in your .bashrc):
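A typical set, assuming the admin defaults from the bundle (the password and Keystone address are placeholders for your own values):

```shell
# ${KEYSTONE_IP} is the address retrieved in the previous step
export OS_USERNAME=admin
export OS_PASSWORD=Passw0rd    # placeholder: use your deployment's admin password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://${KEYSTONE_IP}:5000/v2.0
```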

You can also access Horizon by fetching its IP from Juju and opening it in your web browser:
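Assuming the dashboard service is named openstack-dashboard in the bundle:

```shell
juju status openstack-dashboard | grep public-address
```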

What if something went wrong?

The great thing about automated deployments is that you can always destroy them and start over!
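With Juju 1.x (the environment name matches the one in environments.yaml):

```shell
juju destroy-environment maas --force
```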

From here you can run again:
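```shell
juju bootstrap --constraints "tags=state"
juju-deployer -c hyper-c.yaml    # bundle file name is an assumption
```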

What’s next?

Stay tuned: in the next posts we'll show how to add Cinder volumes on top of Storage Spaces Direct and how to easily add fault tolerance to your controller node (the Nano Server nodes are already fault tolerant).

You can also start deploying some great guest workloads on top of your OpenStack cloud, like SQL Server, Active Directory, SharePoint, Exchange, etc., using our Juju charms!

I know this has been a long post, so if you managed to get this far, congratulations and thank you! We are curious to hear how you will use Nano Server and Storage Spaces Direct!

The post Hyper-Converged OpenStack on Windows Nano Server – Part 2 appeared first on Cloudbase Solutions.
