Introduction
In the previous post we set out the main topic of this series. We’ll be talking about AWS CodePipeline and what it can do for you in terms of Continuous Delivery. The larger part of the post dealt with the differences between three related topics: Continuous Integration, Continuous Deployment and Continuous Delivery.
CodePipeline is an example of an automated Continuous Delivery tool which can help you with the often tedious steps of builds, test runs and deployments. The idea is that the developer can concentrate on the exciting stuff, such as writing some fantastically well-written code, which is then pushed into a code repository like GitHub. The rest of the steps are then handled by a CI/CD tool such as CodePipeline or Jenkins.
In this post we’ll take a quick visual tour of CodePipeline so that you get an idea of how to set up a new pipeline. We won’t yet add a custom job runner as that is not part of the initial setup process.
Visual demonstration of the setup process
After logging in you’ll find CodePipeline under the Developer Tools section:
Amazon has the notion of regions and endpoints, such as Ireland, Tokyo and Frankfurt. These represent the locations of the AWS data centres where the services are offered. E.g. the Elastic Compute Cloud (EC2) service is available in every AWS region. CodePipeline is still a relatively new service and it had only been rolled out to two regions at the time of writing this post:
US West (Oregon) which has the string ID us-west-2
US East (N. Virginia) which has the string ID us-east-1
The region can be selected in the upper right hand corner:
If you already have at least one pipeline in the selected region you’ll be presented with the pipelines overview page and a Create Pipeline button:
Otherwise you’ll only see the Create Pipeline button.
How to set up a new pipeline
Let’s see what steps we need to go through to create a pipeline.
Step 1 is to provide a name for the pipeline:
You’ll get a validation error if you enter invalid characters. E.g. spaces are currently not allowed.
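To see why a name like “my first pipeline” is rejected, here’s a small sketch in Python of the kind of check the console performs. The pattern below (letters, digits and the characters `.`, `@`, `-`, `_`, up to 100 characters) reflects the rules as I understand them at the time of writing and may change, so treat it as an approximation; the helper name is my own invention:

```python
import re

# Approximate character set CodePipeline accepted for pipeline names
# at the time of writing: letters, digits and . @ - _ (no spaces).
NAME_PATTERN = re.compile(r"^[A-Za-z0-9.@_-]{1,100}$")

def is_valid_pipeline_name(name: str) -> bool:
    """Hypothetical helper mirroring the console-side validation."""
    return bool(NAME_PATTERN.match(name))

print(is_valid_pipeline_name("my-first-pipeline"))  # → True
print(is_valid_pipeline_name("my first pipeline"))  # → False, spaces are rejected
```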
Step 2 is to specify the location of the source code. There are two options at the moment: Amazon S3 and GitHub. With GitHub you’ll be able to connect to GitHub and provide a repository. With S3 you can point to a location in S3. I’ll just provide an S3 source, a Java JAR file, for this demo:
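Under the hood each stage of the pipeline is described by a JSON structure that can also be submitted through the CodePipeline API. Here’s a minimal sketch of an S3 source stage as a Python dict; the bucket name and object key are placeholders I made up for this demo:

```python
# Sketch of a Source stage in the shape CodePipeline's API expects.
# Bucket and object key are placeholders, not real resources.
source_stage = {
    "name": "Source",
    "actions": [
        {
            "name": "S3Source",
            "actionTypeId": {
                "category": "Source",
                "owner": "AWS",
                "provider": "S3",
                "version": "1",
            },
            "configuration": {
                "S3Bucket": "my-demo-bucket",      # placeholder bucket
                "S3ObjectKey": "builds/demo.jar",  # placeholder JAR location
            },
            # The artifact name links this stage's output to later stages.
            "outputArtifacts": [{"name": "SourceOutput"}],
        }
    ],
}
```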
In Step 3 we can provide a build runner, but this is optional. There are only three options available at this point: No build, Jenkins and Solano CI. In general you won’t yet find too many readily available integrations with CodePipeline. I think it’s still quite early for the general public to be aware of this tool.
In Step 4, which is for some reason called Beta, we can select where to deploy the application. Currently there are two options: deploy the application to Elastic Beanstalk or via another AWS service called CodeDeploy. The idea is that your application, e.g. a web site, is hosted on AWS, for example on an Elastic Beanstalk server, and you can deploy your changes there.
If you make a selection in the drop-down list and place the cursor within the Application name text box, then after a short while you’ll see a list of the available Beanstalk or CodeDeploy applications to which you can deploy your code. The same is true for the Environment name. “Application name” refers to a Beanstalk application, which can differ from the actual application you’re trying to deploy.
Just to make this clear here’s a snapshot from our Elastic Beanstalk application and environment in the same region as the one where I’m setting up the new pipeline for this demo:
I’ve drawn an “A” for the application name and an “E” for each available environment. These are reflected in my choices in the below screenshot:
In summary, Step 4 allows us to deploy our application to an environment, like Dev or Production. Currently only AWS deployment targets are available though.
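The deploy stage follows the same JSON shape as the source stage. Here’s a sketch of a deploy stage targeting Elastic Beanstalk; the application and environment names are placeholders standing in for the “A” and “E” values shown in the screenshots:

```python
# Sketch of a Deploy stage targeting Elastic Beanstalk.
# Application and environment names are placeholders for this demo.
deploy_stage = {
    "name": "Beta",
    "actions": [
        {
            "name": "DeployToBeanstalk",
            "actionTypeId": {
                "category": "Deploy",
                "owner": "AWS",
                "provider": "ElasticBeanstalk",
                "version": "1",
            },
            "configuration": {
                "ApplicationName": "my-application",  # the "A" in the screenshot
                "EnvironmentName": "my-dev-env",      # one of the "E"s
            },
            # Consumes the artifact produced by the source stage.
            "inputArtifacts": [{"name": "SourceOutput"}],
        }
    ],
}
```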
Note that I’ll remove this step later on when the pipeline is ready. I seriously don’t want to deploy a random JAR file to a working development server.
In Step 5 we can select the IAM role under which the pipeline will be running. If you place the cursor within the Role name text box then the available IAM roles will be listed. Otherwise create a new role with the Create role button:
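For the role to work, CodePipeline must be allowed to assume it. If you create the role yourself rather than with the Create role button, its trust policy will need a statement along these lines; this is a minimal sketch of such a policy document:

```python
import json

# Minimal trust policy letting the CodePipeline service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "codepipeline.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# The JSON form is what you'd paste into the IAM console.
print(json.dumps(trust_policy, indent=2))
```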
Step 6 is only to review the options you’ve selected. If you’re satisfied then click the Create pipeline button:
The pipeline will be set up and you’ll be redirected to the list of pipelines where the new pipeline will also be shown. You can click on its name and you’ll see a visual representation of the pipeline with the defined steps and arrows showing the flow. Here’s an example showing one of our own pipelines:
The pipeline will start executing as soon as it has been set up. You can click on the downward-pointing arrows to disconnect the flow at some point in the chain.
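Disconnecting the flow can also be done programmatically via the `disable_stage_transition` API. A sketch with boto3 follows; the pipeline and stage names are placeholders, and the actual call is commented out since it needs valid AWS credentials for a region where CodePipeline is available:

```python
# Parameters for CodePipeline's DisableStageTransition API call.
# Pipeline and stage names are placeholders for this demo.
params = {
    "pipelineName": "my-first-pipeline",
    "stageName": "Beta",
    "transitionType": "Inbound",  # block the flow entering this stage
    "reason": "Pausing deployments for the demo",
}

# With credentials configured, the call would look like this:
# import boto3
# client = boto3.client("codepipeline", region_name="us-east-1")
# client.disable_stage_transition(**params)
```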
We haven’t yet seen where the custom job runners can be added. Those can only be added to an already functioning pipeline, i.e. they are not available during the setup process presented above. We’ll see how that works in the next post.
View all posts related to Amazon Web Services and Big Data here.