2016-12-28



In the last post, we covered all the steps required to successfully develop and deploy a Django app on a single server. In this tutorial we will automate the deployment process with Fabric (v1.12.0) and Ansible (v2.1.3) to address these issues:

Scaling: When it comes to scaling a web app to handle thousands of daily requests, relying on a single server is not a good approach. Put simply, as the server approaches maximum CPU utilization, load times slow, which can eventually lead to server failure. To overcome this, the app must be scaled to run on more than one server so that the servers can cumulatively handle the incoming concurrent requests.

Redundancy: Deploying a web app manually to a new server means a lot of repeated work with more chances of human error. Automating the process is key.

Specifically, we will automate:

Adding a new, non-root user

Configuring the server

Pulling the Django app code from a GitHub repo

Installing the dependencies

Daemonizing the app

Setup and Config

Start by spinning up a new Digital Ocean droplet, making sure to use the Fedora 25 image. Do not set up a pre-configured SSH key; we will be automating this process later via a Fabric script. Since the deployment process should be scalable, create a separate repository to house all the deployment scripts. Make a new project directory locally, and create and activate a virtualenv using Python 2.7.x.

Why Python 2.7? Fabric does NOT support Python 3. Don’t worry: We’ll be using Python 3.5 when we provision the server.

Fabric Setup

Fabric is a tool used for automating routine shell commands over SSH, which we will be using to:

Set up the SSH keys

Harden user passwords

Install Ansible dependencies

Upgrade the server

Start by installing Fabric:
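With the virtualenv activated, pin Fabric to the version used in this post:

```sh
pip install fabric==1.12.0
```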

Create a new folder called “prod”, and add a new file called fabfile.py to it to hold all of the Fabric scripts:
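A starting point might look like the following. Only env.hosts, env.password, and env.full_name_user are called out in this post; the remaining names (env.user_group, env.user_name, env.ssh_keys_dir, env.ssh_keys_name) are assumptions used throughout the sketches below, so rename them to fit your setup:

```python
# prod/fabfile.py
from fabric.api import env, local, put, run
from fabric.contrib.files import append, sed

env.hosts = ['<server-ip-address>']   # your remote server's IP address
env.user = 'root'                     # connect as root until the non-root user exists
env.password = '<root-password>'      # update this after changing the root password
env.full_name_user = 'Your Name'      # full name for the new non-root user
env.user_group = 'deployers'          # assumed group name
env.user_name = 'deployer'            # assumed non-root user name
env.ssh_keys_dir = 'ssh-keys'         # local folder (relative to the project root) for the key pair
env.ssh_keys_name = 'django-deploy'   # assumed key file name
```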

Take note of the inline comments. Be sure to add your remote server’s IP address to the env.hosts variable. Update env.full_name_user as well. Hold off on updating env.password; we will get to that shortly. Look over all the env variables – they are completely customizable based on your system setup.

Set up the SSH keys

Add the following code to fabfile.py:
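A minimal sketch of that entry point, using the env values above (the task name bootstrap and the helper names are assumptions; the helpers themselves are filled in over the next sections):

```python
def bootstrap():
    # 1) Generate a local SSH key pair and stage the public key as authorized_keys
    key_path = '{0}/{1}'.format(env.ssh_keys_dir, env.ssh_keys_name)
    local('ssh-keygen -t rsa -b 2048 -f {0}'.format(key_path))
    local('cp {0}.pub {1}/authorized_keys'.format(key_path, env.ssh_keys_dir))

    # 2) Harden the remote sshd_config: no root login, no empty-password auth
    sed('/etc/ssh/sshd_config', '^#?PermitRootLogin.*', 'PermitRootLogin no')
    sed('/etc/ssh/sshd_config', '^#?PermitEmptyPasswords.*', 'PermitEmptyPasswords no')
    run('systemctl reload sshd')

    # 3) Hand off to the functions covered in the next sections
    create_deployer_group()
    create_deployer_user()
    upload_ssh_keys()
    install_ansible_dependencies()
    set_selinux_permissive()
    upgrade_server()
```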

This function acts as the entry point for the Fabric script. Besides triggering a series of functions, each explained in further steps, it explicitly-

Generates a new pair of SSH keys in the specified location within your local system

Copies the contents of the public key to the authorized_keys file

Makes changes to the remote sshd_config file to prevent root login and disable password-less auth

Preventing SSH access for the root user is an optional step, but it is recommended as it ensures that no one has superuser rights.

Create a directory for your SSH keys in the project root:
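For example, from the project root:

```sh
mkdir ssh-keys
```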

Harden user passwords

This step includes the addition of three different functions, each executed serially to configure SSH password hardening…

Create deployer group

Here, we add a new group called deployers and grant sudo permissions to it so that users can carry out processes with root privileges.
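Continuing the fabfile sketch, something along these lines (dropping a file into sudoers.d is one reasonable way to grant the group sudo rights):

```python
def create_deployer_group():
    # Create the deployers group and let its members run commands via sudo
    run('groupadd {0}'.format(env.user_group))
    append('/etc/sudoers.d/{0}'.format(env.user_group),
           '%{0} ALL=(ALL) ALL'.format(env.user_group))
    run('chmod 440 /etc/sudoers.d/{0}'.format(env.user_group))
```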

Create user

This function (see the sketch below)-

Adds a new user to the deployers user group, which we defined in the last function

Sets up the SSH directory for keeping SSH key pairs and grants permission to the group and the user to access that directory
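A sketch of such a function, again using the assumed env names from above:

```python
def create_deployer_user():
    # Create the user in the deployers group (you will be prompted for a password)
    run('adduser -c "{0}" -m -g {1} {2}'.format(
        env.full_name_user, env.user_group, env.user_name))
    run('passwd {0}'.format(env.user_name))

    # Set up the user's SSH directory and hand ownership to the user and group
    run('mkdir -p /home/{0}/.ssh'.format(env.user_name))
    run('chmod 700 /home/{0}/.ssh'.format(env.user_name))
    run('chown -R {0}:{1} /home/{0}/.ssh'.format(env.user_name, env.user_group))
```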

Upload SSH keys

Here (see the sketch below), we-

Upload the locally generated SSH keys to the remote server so that non-root users can log in via SSH without entering a password

Copy the public key and the authorized keys to the remote server in the newly created ssh-keys directory
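For example, assuming fab is run from the project root so the relative ssh-keys paths resolve:

```python
def upload_ssh_keys():
    # Copy the public key and authorized_keys into the new user's ~/.ssh,
    # then make sure the files belong to that user
    put('{0}/{1}.pub'.format(env.ssh_keys_dir, env.ssh_keys_name),
        '/home/{0}/.ssh/'.format(env.user_name))
    put('{0}/authorized_keys'.format(env.ssh_keys_dir),
        '/home/{0}/.ssh/'.format(env.user_name))
    run('chown -R {0}:{1} /home/{0}/.ssh'.format(env.user_name, env.user_group))
```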

Install Ansible dependencies

Add the following function to install the dependency packages for Ansible:
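On Fedora, Ansible’s dnf module needs the python-dnf bindings on the managed host, so something as small as this will do:

```python
def install_ansible_dependencies():
    # Required by Ansible's dnf module on the remote host
    run('dnf install -y python-dnf')
```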

Keep in mind that this is specific to the Fedora Linux distro, as we will be using the DNF module for installing packages, but it could vary by distro.

Set SELinux to permissive mode

The next function sets SELinux to permissive mode. This is done to overcome any potential Nginx 502 Bad Gateway errors.
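For example:

```python
def set_selinux_permissive():
    # Permissive mode for the current session only; it does not persist across reboots
    run('setenforce 0')
```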

Again, this is specific to the Fedora Linux distro.

Upgrade the server

Finally, upgrade the server:
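For example:

```python
def upgrade_server():
    # Apply all pending package updates; reboot manually afterwards if a new kernel landed
    run('dnf upgrade -y')
```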

Sanity check

With that, we’re done with the Fabric script. Before running it, make sure you SSH into the server as root and change the password:
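For example:

```sh
ssh root@<server-ip-address>
# once logged in, on the server:
passwd
```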

Be sure to update env.password with the new password. Exit the server and return to the local terminal, then execute Fabric:
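Assuming the entry-point task is named bootstrap, as in the sketch above, and that you run this from the project root:

```sh
fab -f ./prod/fabfile.py bootstrap
```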

If all went well, new SSH keys will be generated, and you will be asked to create a password (make sure to do this!):

A number of tasks will run. After the deployer user is created, you will be prompted to add a password for the user-

-which you will then have to enter when the SSH keys are uploaded:

After this script exits successfully, you will NO longer be able to log into the remote server as a root user. Instead, you will only be able to use the non-root user deployer.

Try it out:
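Attempting a root login should now be refused:

```sh
ssh root@<server-ip-address>
```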

This is expected. Next, try logging in as the non-root user.
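Using the key file name assumed in the fabfile sketch above:

```sh
ssh -i ./ssh-keys/django-deploy deployer@<server-ip-address>
```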

You should be able to log in just fine.

Ansible Primer

Ansible is a configuration management and provisioning tool used to automate deployment tasks over SSH.

You can fire individual Ansible tasks against the app servers from your shell remotely and execute tasks on the go. Tasks can also be combined into Playbooks – a collection of multiple plays, where each play defines certain specific tasks that are required during the deployment process. They are executed against the app servers during the deployment process. Playbooks are written in YAML.
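For example, once the inventory file described below is in place, an ad hoc module run looks like this (the key path matches the earlier fabfile assumptions):

```sh
ansible all -i prod/hosts -m ping -u deployer --private-key=./ssh-keys/django-deploy
```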

Playbooks

Playbooks consist of a modular architecture as follows:

Hosts specify all the IP addresses or domain names of our remote servers that need to be orchestrated. Playbooks always run on a targeted group of hosts.

Roles are divided into sub-parts. Let’s look at the main ones:

Tasks are the individual units of work (module calls) that need to be carried out during the deployment process.

Handlers provide a way to trigger a set of operations when a module makes a change to the remote server (best thought of as hooks).

Templates, in this context, are generally used for specifying some module-related configuration files – like nginx.

Variables are simply a list of key-value pairs where every key (a variable) is mapped to a value. Such variables can be used in the Playbooks as placeholders.

Sample Playbook

Now let’s look at a sample single-file Playbook:
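Something along these lines (the template file name nginx.conf.j2 is an assumption):

```yaml
---
- hosts: all
  vars:
    http_port: 80
    app_name: django_bootstrap
  tasks:
    - name: install nginx
      become: true
      dnf: name=nginx state=latest
    - name: set up the nginx config
      become: true
      template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
      notify:
        - restart nginx
  handlers:
    - name: restart nginx
      become: true
      service: name=nginx state=restarted
```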

Here, we defined the-

Hosts as hosts: all, which indicates that the Playbook will run on all of the servers that are listed in the inventory/hosts file

Variables http_port: 80 and app_name: django_bootstrap for use in a template

Tasks in order to install nginx, set up the nginx config (become indicates that we need admin privileges), and trigger the restart handler

Handler in order to restart the nginx service

Playbook Setup

Now let’s set up a Playbook for Django. Add a deploy.yml file to the “prod” directory:
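A sketch that matches the rest of this post (the deployer_user variable is defined in group_vars below, and the common role is set up shortly):

```yaml
---
- hosts: all
  remote_user: "{{ deployer_user }}"
  become: true
  become_method: sudo
  roles:
    - common
```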

The above snippet glues together the Ansible hosts, users, and roles.

Hosts

Add a hosts (plain text format) file to the “prod” directory and list the servers under their respective role names. We are provisioning a single server here:
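For example:

```ini
[common]
<server-ip-address>
```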

In the above snippet, common refers to the role name. Under the role name we list the IP addresses of the servers that need to be configured. Make sure to add your remote server’s IP address in place of <server-ip-address>.

Variables

Now we define the variables that will be used by the roles. Add a new folder inside “prod” called “group_vars”, then create a new file called all (plain text format) within that folder. Here, specify the following variables to start with:
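A minimal starting point; aside from the <path-to-your-ssh-keys> placeholder, the variable names here are assumptions that the role sketches below rely on:

```yaml
deployer_user: deployer
app_name: django_bootstrap
ssh_dir: <path-to-your-ssh-keys>
```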

Make sure to update <path-to-your-ssh-keys>. To get the correct path, within the project root, run:
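For example:

```sh
echo "$PWD/ssh-keys"
```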

With these files in place, we are now ready to coordinate our deployment process with all the roles that need to be carried out on the server.

Playbook Roles

Again, Playbooks are simply a collection of different plays, and all these plays are run under specific roles. Create a new directory called “roles” within “prod”.

Did you catch the name of the role in the deploy.yml file?

Then within the “roles” directory add a new directory called “common” – the role. Roles consist of “tasks”, “handlers”, and “templates”. Add a new directory for each.

Once done, your file structure should look something like this:
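Roughly, from the project root:

```
├── prod
│   ├── deploy.yml
│   ├── fabfile.py
│   ├── group_vars
│   │   └── all
│   ├── hosts
│   └── roles
│       └── common
│           ├── handlers
│           ├── tasks
│           └── templates
└── ssh-keys
```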

All of the role’s tasks are defined in the “tasks” directory, starting with a main.yml file. This file serves as the entry point for the role’s tasks. It’s simply a list of multiple YAML files that need to be executed in order.

Create that file now within the “tasks” directory, then add the following to it:
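Given the task files named in the sections that follow:

```yaml
##
# Order matters: each included file runs in sequence
##
- include: 01_server.yml
- include: 02_git.yml
- include: 03_postgres.yml
- include: 04_dependencies.yml
- include: 05_migrations.yml
- include: 06_nginx.yml
```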

Now, let’s create each task. Be sure to add a new file for each task to the “tasks” directory and add the accompanying code to each file. If you get lost, refer to the repo.

01_server.yml

Here, we list all the system packages that need to be installed.
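A sketch; the exact package list is an assumption, so trim or extend it to match your app:

```yaml
- name: install system packages
  become: true
  dnf: name={{ item }} state=latest
  with_items:
    - git
    - gcc
    - nginx
    - python3-devel
    - python-virtualenv
    - python-psycopg2
    - postgresql-server
    - postgresql-devel
    - postgresql-contrib
```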

02_git.yml

Add the following variables to the group_vars/all file:
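Only code_repository_url is named in this post; app_dir is an assumed helper variable used by the later plays:

```yaml
code_repository_url: https://github.com/<your-github-username>/django-bootstrap.git
app_dir: /home/{{ deployer_user }}/{{ app_name }}
```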

Make sure to fork then clone the django-bootstrap repo, then update the code_repository_url variable to the URL of your fork.
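With those variables in place, the play itself can be as small as this (using Ansible’s git module):

```yaml
- name: set up the application directory
  file: path={{ app_dir }} state=directory

- name: pull the latest code from GitHub
  git: repo={{ code_repository_url }} dest={{ app_dir }} accept_hostkey=yes
```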

03_postgres.yml
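A sketch of this play on Fedora, assuming the db_* variables defined just below and a handler named restart postgres:

```yaml
- name: initialize the postgres data directory
  become: true
  command: postgresql-setup --initdb
  args:
    creates: /var/lib/pgsql/data/pg_hba.conf

- name: make sure postgresql is started and enabled
  become: true
  service: name=postgresql state=started enabled=yes

- name: create the application database
  become: true
  become_user: postgres
  postgresql_db: name={{ db_name }}

- name: create the database user
  become: true
  become_user: postgres
  postgresql_user: db={{ db_name }} name={{ db_user }} password={{ db_password }} priv=ALL
  notify:
    - restart postgres
```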

Update group_vars/all with the database configuration needed for the playbook:
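For example:

```yaml
db_name: django_bootstrap
db_user: deployer
db_password: changeme   # pick something strong
```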

Update the db_password variable with a secure password.

Did you notice that we restart the postgres service, via a handler, in order to apply the changes after the database is configured? This is our first handler. Create a new file called main.yml in the “handlers” folder, then add the following:
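The handler name just needs to match the notify used in the postgres play above:

```yaml
- name: restart postgres
  become: true
  service: name=postgresql state=restarted
```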

04_dependencies.yml
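A sketch using Ansible’s pip and template modules; the variable names come from the group_vars additions just below:

```yaml
- name: install python dependencies into a virtualenv
  pip: requirements={{ requirements_file }} virtualenv={{ venv_dir }} virtualenv_python=python3

- name: copy the environment variables file
  template: src=env.j2 dest={{ env_file }}
```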

Update group_vars/all like so:
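These names are assumptions that line up with the play above:

```yaml
venv_dir: "{{ app_dir }}/env"
requirements_file: "{{ app_dir }}/requirements.txt"
env_file: "{{ app_dir }}/.env"
```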

Add a template called env.j2 to the “templates” folder, and add the following environment variables:
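The exact keys depend on what your Django settings read; something in this spirit, reusing the database variables:

```jinja
DEBUG=False
SECRET_KEY=<your-secret-key>
DB_NAME={{ db_name }}
DB_USER={{ db_user }}
DB_PASS={{ db_password }}
DB_HOST=localhost
DB_PORT=5432
```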

Be very careful with the environment variables and their values in env.j2 since these are used to get the Django Project up and running.

05_migrations.yml
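A sketch using Ansible’s django_manage module to apply migrations and collect static files inside the virtualenv:

```yaml
- name: run database migrations
  django_manage: command=migrate app_path={{ app_dir }} virtualenv={{ venv_dir }}

- name: collect static files
  django_manage: command=collectstatic app_path={{ app_dir }} virtualenv={{ venv_dir }}
```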

06_nginx.yml
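A sketch that renders an assumed nginx.conf.j2 template from the role’s “templates” folder and notifies the restart handler added below:

```yaml
- name: set up the nginx config
  become: true
  template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
  notify:
    - restart nginx

- name: make sure nginx is started and enabled
  become: true
  service: name=nginx state=started enabled=yes
```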

Add the following variable to group_vars/all:
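The variable name is an assumption; it would typically feed the server_name directive in the nginx template:

```yaml
server_ip: <remote-server-ip>
```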

Don’t forget to update <remote-server-ip>. Then add the handler to handlers/main.yml:
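Mirroring the postgres handler:

```yaml
- name: restart nginx
  become: true
  service: name=nginx state=restarted
```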
