2014-01-21

OpenShift is an auto-scalable Platform as a Service. Auto-scalable means OpenShift can horizontally scale your application up or down depending on the number of concurrent connections. OpenShift supports the JBoss application server, which is a certified platform for Java EE 6 development. As an OpenShift user, you have access to both the community version of JBoss and JBoss EAP 6 (JBoss Enterprise Application Platform) for free. In this blog post, we will learn how to host a scalable Java EE 6 application using a JBoss EAP 6 server cluster running on OpenShift.

Prerequisites

Basic Java knowledge is required. Install the latest Java Development Kit (JDK) on your operating system. You can install either OpenJDK 7 or Oracle JDK 7; OpenShift supports OpenJDK 6 and 7.

Basic Java EE 6 knowledge is required.

Sign up for an OpenShift account. It is completely free, and Red Hat gives every user three free gears on which to run your applications. At the time of writing, the combined resources allocated to each user were 1.5GB of memory and 3GB of disk space.

Install the RHC client tool on your machine. RHC is a Ruby gem, so you need Ruby 1.8.7 or above. To install RHC, just type sudo gem install rhc on the command line. If you already have the RHC gem installed, make sure it is the latest version. To update RHC, execute the command sudo gem update rhc. For more assistance setting up the RHC command-line tool, see the following page: https://openshift.redhat.com/community/developers/rhc-client-tools-install.

Set up your OpenShift account using the command rhc setup. This command will help you to create a namespace and upload your SSH key to the OpenShift server.

Github Repository

The code for today's demo application is available on GitHub.

It is a simple Java EE 6 'to do' application that exposes three REST endpoints.

When a user makes a GET request to '/api/v1/ping', then the user gets a pong response.

When a user makes a POST request to '/api/v1/todos', then the user creates a new 'to do' item.

When a user makes a GET request to '/api/v1/todos/:id', then the user fetches the 'to do' item with the specified id.

Create a scalable JBoss EAP application

We will start by creating a new scalable application with the JBoss EAP 6 and PostgreSQL cartridges.
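If the rhc command-line tool is set up, a command along the following lines creates it; the repository URL is a placeholder for the demo code on GitHub, and the cartridge names can vary slightly between OpenShift releases:

    rhc app create todo jbosseap-6 postgresql-9.2 --scaling --from-code <git-url-of-the-demo-repository>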

Let's decipher the above command. It instructs OpenShift to create an application named todo, which should use the JBoss EAP and PostgreSQL 9.2 cartridges. The --scaling option tells OpenShift to create a scalable application. If you do not specify this, a non-scalable application will be created. The --from-code option instructs OpenShift to use the specified Git repository as the reference application. If you do not specify the --from-code option, then OpenShift will use a template application.

This command will create two gears. One gear will host the HAProxy load balancer and JBoss EAP application server, and the other gear will host a PostgreSQL database. OpenShift will also add the settings to allow communication between the gears. This is shown below.



In the above image, when a user makes a web request, that request first goes to HAProxy. The HAProxy cartridge sits between your application and the user and routes web traffic to the JBoss EAP cartridges. If the request involves writing data to or fetching data from the database, then the application running inside JBoss EAP will use its datasource configuration to work with the PostgreSQL database.

You can view the application running at http://todo-domainname.rhcloud.com. Please replace domainname with your OpenShift account domain name. At this stage, when you go to http://todo-domainname.rhcloud.com you will get a 503 Service Unavailable Error.

Fixing the Service Unavailable Error

To understand why you are getting this error, go to your application's HAProxy status page at http://todo-domainname.rhcloud.com/haproxy-status.



There is a lot of information on this page. We will look at it in detail in a later section, but for now the important thing to notice is that local-gear is down, shown in red. HAProxy performs periodic health checks to determine the health of gears. The default health check URL is configured to poll '/', i.e. the root context of the application. The application that we have deployed only exposes a few REST endpoints; there is no request handler for '/', so it returns a 404 (Page not found) HTTP response. HAProxy treats 2xx and 3xx responses as valid and all other responses as indicating a server failure.

This can be solved by configuring HAProxy to use a URL that returns a valid HTTP response. In our application, we have a very simple PingResource, which returns a 200 HTTP response code. The resource is available at http://todo-domainname.rhcloud.com/api/v1/ping.
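For reference, such a resource can be as small as the sketch below; the class name and exact annotations are illustrative rather than taken from the demo repository:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Assumes a JAX-RS Application class mapped to /api/v1 elsewhere in the project
    @Path("/ping")
    public class PingResource {

        // GET /api/v1/ping returns 200 OK with a small body, which is
        // enough for the HAProxy health check to treat the gear as healthy
        @GET
        @Produces(MediaType.TEXT_PLAIN)
        public String ping() {
            return "pong";
        }
    }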

HAProxy maintains its configuration in the haproxy.cfg file. OpenShift allows its users to modify this file; to make a change, you have to SSH into the application gear. To SSH:
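Assuming the application is named todo, as created earlier:

    rhc ssh todo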

Next, change into the haproxy/conf directory.
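On the gear, the HAProxy cartridge keeps its configuration under the haproxy directory in the gear's home directory, so the path below should work, but verify it against what you see on your gear:

    cd ~/haproxy/conf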

Now change the health check URL in haproxy.cfg so that HAProxy polls the ping endpoint instead of the root context.
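The relevant line is the httpchk option (its exact placement in haproxy.cfg may differ slightly between OpenShift releases). Change

    option httpchk GET /

to

    option httpchk GET /api/v1/ping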

Next, restart the HAProxy cartridge from your local machine using RHC.
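Assuming the application is named todo, the restart looks something like this:

    rhc cartridge restart haproxy -a todo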

Now refresh the HAProxy status page and you will see that local-gear is now shown in green. The green color means local-gear can now handle web requests.



Understanding the HAProxy Status Page

Let's spend some time understanding the HAProxy status page. HAProxy listens for all incoming requests to the application and proxies them to one of the preconfigured back ends.

The HAProxy status page shows two sections -- stats and express. The stats section is configured to listen to all the requests made to the HAProxy status page itself. Every time you refresh the http://todo-domainname.rhcloud.com/haproxy-status page, the total number of sessions, reported in the "Total" column under Sessions, will increment as shown below. The "Cur" column is the number of users currently accessing the status page, and the "Max" column is the maximum number of concurrent users. All these numbers are counted from when HAProxy was started; if you restart HAProxy, the stats are reset.

The express section is more interesting from the application's point of view. The local-gear row corresponds to the requests handled by JBoss EAP: the "Total" column shows the total number of sessions handled by the application, "Cur" the number of users currently accessing it, and "Max" the maximum number of concurrent users, again counted from when HAProxy was started. In the image shown above, we can see that local-gear has handled four requests, one at a time. When the application scales, more rows are added for the new gears.

Auto-scalable Java EE PaaS Features

When you create a scalable JBoss OpenShift application, you get JBoss clustering with an HTTP load balancer. This has the following benefits:

Auto-Scalability: OpenShift adds a new node to the cluster to service a higher client load. The algorithm for scaling up and scaling down is based on the number of concurrent requests to your application. OpenShift allocates 16 connections per gear: if HAProxy sees that you are sustaining 90% of your total connections, it adds another gear, and if your demand falls to 50% of your total connections for several minutes, it removes that gear.

HTTP Request Load Balancing: OpenShift uses HAProxy to load balance the HTTP requests. This makes sure each individual node only gets its fair share of the overall client load. HAProxy distributes client requests using the balance algorithm defined in its configuration. OpenShift configures HAProxy to use the leastconn algorithm. The leastconn algorithm makes sure that the server with the lowest number of connections receives the new connection. You can configure HAProxy to use any other balancing algorithm you prefer. We will cover this in a later section.

Session Replication: You get session replication with JBoss clustering. This ensures that if one of the nodes dies, your session data is still available, because it is replicated to the other nodes in the cluster. This is achieved by using a replicated, clustered, distributed cache; JBoss uses Infinispan to provide this. You have to add a distributable element to your web.xml to make use of session replication, as shown below.
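A minimal web.xml carrying the element could look like the following sketch; only the distributable element itself matters here:

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
        <!-- Marks the application as clusterable so JBoss replicates HTTP sessions across gears -->
        <distributable/>
    </web-app>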

Distributed Cache: In your application you can use Infinispan as a replicated, clustered, distributed cache. You can get hold of it with the @Resource annotation, as shown below.
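Here is a minimal sketch of that injection; the JNDI name of the cache container is an assumption, so check the Infinispan subsystem configuration of the JBoss EAP cartridge for the name that applies to your application:

    import javax.annotation.Resource;

    import org.infinispan.Cache;
    import org.infinispan.manager.CacheContainer;

    public class TodoCache {

        // The JNDI lookup name below is an assumption; adjust it to match
        // the cache container defined in your server configuration.
        @Resource(lookup = "java:jboss/infinispan/container/web")
        private CacheContainer container;

        public Cache<String, Object> cache() {
            // Returns the default cache of the injected cache container
            return container.getCache();
        }
    }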

You can then use the get() and put() methods to read entries from and write entries to the cache.

High Availability: Scaled apps will remain available even when a server instance fails. If a server instance fails, HAProxy will redirect the traffic to the healthy server instances.

Checking the Number of Requests

The HAProxy status page offers a lot of information related to your application. You can easily find out the number of requests handled by your application by looking at the Sessions tab under the express server configuration. All these figures are since HAProxy was started. Let's use Curl to create and read a few 'to do' items.

To create 'to do' items, we will use Curl as shown below.
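The JSON payload below is only illustrative; adjust the fields to whatever the demo application's 'to do' representation expects:

    curl -i -X POST -H "Content-Type: application/json" \
         -d '{"todo":"Write a blog post about OpenShift scaling"}' \
         http://todo-domainname.rhcloud.com/api/v1/todos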

To read 'to do' items, we will use Curl as shown below.
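Pass the id returned by the create call; the id 1 below is just an example:

    curl -i http://todo-domainname.rhcloud.com/api/v1/todos/1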

In total we have made four requests -- two POST and two GET requests. If you go to the status page, you can see the number of requests in the Sessions tab in the express server configuration, as shown below.

If at any point you want to clear the existing stats, just restart HAProxy and all the stats will be reset.

Auto-scaling in Action

To see auto-scaling in action, we will use Apache Benchmark (ab), a command-line utility that helps simulate concurrent user access.
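A run along the following lines matches the numbers discussed below; post_data.txt is a local file holding the JSON body for a single 'to do' item:

    # -n: total number of requests, -c: concurrency, -p/-T: POST body file and its content type
    ab -n 50000 -c 50 -p post_data.txt -T 'application/json' http://todo-domainname.rhcloud.com/api/v1/todos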

In the above command we are making 50000 requests with 50 concurrent requests at a time. You can think of it as 50 users making 1000 requests each. We are making a POST request to http://todo-domainname.rhcloud.com/api/v1/todos with data contained in a post_data.txt file. The post_data.txt file just contains JSON representing a 'to do' item.

When you run this test, OpenShift will start with one gear (shared by HAProxy and JBoss EAP), and proxy all the traffic to the co-located JBoss EAP instance.

HAProxy will fire a scale event, which we can view in the HAProxy logs.
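One way to watch for the scale event is to tail the application logs from your local machine; the scale-up messages appear in the HAProxy log, although the exact wording and log file layout can vary between OpenShift releases:

    rhc tail todo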

OpenShift checks that you have a free gear (out of your three free gears) and then creates another copy of your web cartridge on that new gear. The code in the Git repository is copied to each new gear, but the data directory begins empty. When the new cartridge copy starts it will invoke your build hooks and then HAProxy will begin routing web requests to it. If you push a code change to your web application, all of the running gears will get that update.

After the gear is started, you will start seeing load being distributed to both gears as shown below.

Apache Benchmark gave the following results. The application was able to process 86 requests per second.

To test the performance of the application with both gears already running, you can set the minimum number of gears using the command shown below.
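With recent versions of the rhc tool this is done by scaling the web cartridge; assuming the application is named todo and uses the jbosseap-6 cartridge:

    rhc cartridge scale jbosseap-6 -a todo --min 2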

We can check the performance of the application by running Apache Benchmark again.

As you can see, performance improved from 86 to 91 requests per second.

Let's now fix the number of gears to one, i.e. we will not scale beyond one gear.
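Again using the cartridge scale command, setting both the minimum and the maximum to 1 pins the application to a single gear:

    rhc cartridge scale jbosseap-6 -a todo --min 1 --max 1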

Apache Benchmark results

How to Use Round Robin

You can configure HAProxy to use any balance algorithm. By default, OpenShift configures HAProxy to use the leastconn balance algorithm. If you want to use roundrobin instead, SSH into the application gear and change the configuration as shown below. With the roundrobin algorithm, each server is used in turn, according to its weight.
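In haproxy.cfg the change amounts to switching the balance directive and adding a weight to each server line; the gear addresses and ports below are placeholders for whatever already appears in your file:

    balance roundrobin

    # Weights are relative: gear-1 receives twice as many requests as local-gear
    server gear-1     <gear-1-address>:<port> weight 2 check
    server local-gear <local-gear-address>:<port> weight 1 check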

As you can see above, we changed the algorithm to roundrobin. We also added weights to each gear, giving gear-1 a weight of 2 and local-gear a weight of 1. This ensures that gear-1 gets twice as many requests as local-gear.

That's it for today.

What's Next

Sign up for OpenShift Online

Get your own private Platform as a Service (PaaS) by evaluating OpenShift Enterprise

Need Help? Ask the OpenShift Community your questions in the forums

Showcase your awesome app in the OpenShift Developer Spotlight. Get in the OpenShift Application Gallery today.
