2013-11-18

We get questions from time to time about the technologies we use at Doodle. Since the last post on this topic was published quite a while ago, it deserves an update.

Most of the front-end logic is implemented in JavaScript, with the help of the usual frameworks such as jQuery, Backbone.js, and Bootstrap. The code is heavily modularized, and modules can be loaded dynamically thanks to the magic provided by Require.js. Dynamic page elements are generated using Mustache templates, which allow us to re-render parts of a page when the data changes. This combination of technologies enables us to do virtually anything the browser allows, without the constraints of a GUI framework such as JSF. Server-side front-end technologies (JSF in particular) are only used for templating on fairly static pages or to generate initial pages (usually without content, only boilerplate code), which serve as starting points for JavaScript execution.
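
To make the pattern concrete, here is a minimal sketch of such a module (the module paths, template name, and element ID are invented for illustration): a Require.js module pulls in a Mustache template via the text plugin and re-renders it whenever it is handed fresh data.

```javascript
// Hypothetical Require.js module: loads a Mustache template as text and
// renders it into the page; calling render() again re-renders the element.
define(['jquery', 'mustache', 'text!templates/poll.mustache'],
    function ($, Mustache, template) {
        function render(poll) {
            // Re-rendering a dynamic page element is just: run the template
            // again with the new data and swap the generated HTML in.
            $('#poll').html(Mustache.render(template, poll));
        }
        return { render: render };
    });
```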

The front-end communicates with the web application over a semi-public REST-like API using Ajax calls (and some form-POST hacks for file uploads and the like). One of the reasons why it’s only REST-*like* is that PUT and DELETE operations are often blocked by company proxies and firewalls, so we restrict ourselves to GET and POST. The web application itself is written in Java 7, runs in a Tomcat container, and uses countless third-party libraries. One example is Jersey, which we use for all of our internal and external REST APIs.
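
As an illustration of the GET/POST-only convention, an operation that would canonically be an HTTP DELETE is expressed as a POST to a dedicated URL instead (the URL and payload below are made up for the example):

```javascript
// Hypothetical Ajax call against the REST-like API: POST instead of DELETE,
// so the request also gets through proxies and firewalls that block
// anything other than GET and POST.
$.ajax({
    type: 'POST',
    url: '/api/polls/abc123/delete', // strict REST: DELETE /api/polls/abc123
    contentType: 'application/json',
    data: JSON.stringify({ adminKey: 'secret' }),
    success: function () { console.log('poll deleted'); },
    error: function (xhr) { console.log('delete failed: HTTP ' + xhr.status); }
});
```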

Data is stored in MongoDB. We have migrated away from MySQL for several reasons. First, schema changes were a huge pain with multi-GB databases, and the schema had to change with practically every release. Second, the document-style approach is a better fit for our data: no more huge tables with absurdly large indexes just to link two entities. To illustrate the point: with MySQL, we even adopted a document-based approach for some use cases by storing zlib-compressed JSON data in BLOBs… which is essentially what MongoDB does natively, only better. And last but not least, replica sets are much easier to use and maintain than MySQL’s replication mechanism. The mapping between MongoDB documents and Java classes is done with Morphia, which is not as sophisticated as JPA/Hibernate (all write operations have to be implemented manually), but it is easy to use and works well.
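
To see why the document model fits, consider what a poll might look like as a single MongoDB document (mongo shell, with field names invented for the example): the participants are embedded directly in the poll, so no link table and no additional index are needed to connect the two entities.

```javascript
// Hypothetical poll document: one self-contained document per poll,
// with participants embedded instead of living in a separate table.
db.polls.insert({
    _id: 'abc123',
    title: 'Team lunch',
    options: ['Mon 12:00', 'Tue 12:00'],
    participants: [
        { name: 'Alice', preferences: [true, false] },
        { name: 'Bob',   preferences: [false, true] }
    ]
});
```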

On the server side, we use Debian Linux running on standard server hardware. The servers are located in Switzerland and hosted by a local service provider (thanks, AtrilA).

The server setup consists of three tiers. Static content is handled by Apache servers (we have experimented with content delivery networks, but the performance gain was not big enough to warrant the cost and the added complexity). Load balancing and failover are done using round-robin DNS pointing to multiple virtual IP addresses, which automatically move between the Apache servers if necessary (e.g., when a server is shut down). Requests for dynamic content are forwarded to our application servers running Tomcat, again with load balancing and failover to cope with the loss of an application server. A Postfix installation on each application server is responsible for delivering all application-generated email (and that’s a lot). Finally, the application accesses the MongoDB replica set, where MongoDB automatically replicates all data between the set members and ensures the availability of the set.
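
For that last tier, setting up such a replica set is a one-time command in the mongo shell (set name and hostnames below are placeholders); from then on, MongoDB keeps the members in sync and elects a new primary if one of them fails.

```javascript
// Sketch of initiating a three-member replica set; afterwards replication
// and failover between the members are handled by MongoDB itself.
rs.initiate({
    _id: 'doodle',
    members: [
        { _id: 0, host: 'db1.example.com:27017' },
        { _id: 1, host: 'db2.example.com:27017' },
        { _id: 2, host: 'db3.example.com:27017' }
    ]
});
```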

Of course, there’s also the usual bunch of internal servers for build automation (Jenkins), repositories, testing, backups and the like. The server configuration is managed by Puppet, whose declarative language describes all aspects of a machine’s configuration and, as a side effect, also serves as documentation. The manifests are a lot of work to write, but being able to go from nothing to a production-ready server in 10 minutes, with every tiny configuration option exactly right, is just awesome!

By David Gubler, Senior Operations and Software Engineer
