2015-09-21

When we start thinking about static content in our applications, the first question that usually comes up is “where do we put stuff?” Specifically, we’re talking about things like images, documents, and other media files that are hosted as part of our application. The default answer is of course to place static files in our /public directory. This works for development purposes, but what about when we hit production?

Enter “the cloud.” That ethereal puff of moisture in the sky that somehow magically serves up our files to people around the world. Of course, we “insiders” know that the cloud is just someone else’s computer—or rather, a large collection of computers—that hosts copies of our data. What we want to know is…how do we make use of it in our Meteor applications? In this post, we’ll take a high-level look at the options we have for hosting static content, how we can get that content into the cloud, and how we can ultimately make use of it to build faster, more reliable applications.

Cloud services

The first decision we have to make is where in the cloud we want to store our files. There are a lot of different services that offer the same functionality—a place to store our files remotely—with various tradeoffs. Let’s take a quick look at what we have access to and try to understand the difference between our options.

Amazon S3

My personal favorite and, from observation, the crowd favorite as well. Amazon offers inexpensive cloud storage starting at $0.0300 per GB for your first 1TB of data. Translation: it will cost you about $30/mo. to store an entire terabyte of data. In addition to this, Amazon S3 also charges on a per-request (HTTP request) basis for moving data in and out: inbound requests (you moving files into Amazon S3, i.e. PUT, COPY, POST, and LIST) start at $0.005 per 1,000 requests, and outbound requests (someone accessing your files via GET) start at $0.004 per 10,000 requests.

All of these costs are on a rolling basis and you only pay for what you use. So, assuming we have 1TB of data stored (no uploads or changes by us) and our users make ~50,000 requests that month, our bill would come in around ~$30.02. Pretty cheap! Even better, Amazon's free tier covers your first year: up to 5GB of storage, 20,000 GET requests, and 2,000 PUT requests, 100% free.
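To make the arithmetic concrete, here's the estimate above as a quick back-of-the-napkin script (the rates are the ones quoted above; Amazon's pricing changes over time, so treat this as illustrative only):

```javascript
// Rough monthly estimate using the rates quoted above (illustrative only).
var storedGB    = 1000;   // ~1TB stored for the month
var getRequests = 50000;  // requests made by our users

var storageCost = storedGB * 0.03;                // $0.03 per GB      => $30.00
var requestCost = (getRequests / 10000) * 0.004;  // $0.004 per 10,000 => $0.02

console.log("~$" + (storageCost + requestCost).toFixed(2) + "/mo."); // ~$30.02/mo.
```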

The S3 service works like a traditional server, where your files are stored in a single location inside something called a “bucket” (think of this like a folder on a server). The difference is that Amazon allows you to specify where each bucket lives, independently of the others.

Beyond this, Amazon does offer a CDN (content delivery network) service, CloudFront, which helps you to make cached copies of your files available in different locations around the world (we’ll take a closer look at this in a bit). In other words, instead of our files being served from a single region (say, Tokyo), we can set up a CloudFront distribution that makes them accessible from the location closest to the user requesting them.

To make access to all of this a little easier, Amazon offers a web-based GUI, as well as a REST API and several SDKs (software development kits) for various platforms and languages. With respect to Meteor, they offer a Node SDK for server-side JavaScript interactions and a Browser SDK for client-side JavaScript interactions. Even better, a Meteor package wrapping these SDKs exists, so we can start working with them quickly.
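As a taste of what that looks like, here's a minimal, hypothetical sketch of uploading a file from the server, assuming the peerlibrary:aws-sdk package (which exposes the standard AWS JavaScript SDK as AWS on the server); the credentials and bucket name are placeholders:

```javascript
// Server-side only. Assumes `meteor add peerlibrary:aws-sdk`, which exposes the
// AWS JavaScript SDK as the global `AWS`. Credentials and bucket are placeholders.
var s3 = new AWS.S3({
  accessKeyId: "YOUR_ACCESS_KEY_ID",
  secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
  region: "us-east-1"
});

s3.putObject({
  Bucket: "tmc-example-bucket",
  Key: "hello.txt",
  Body: "Hello from the cloud!",
  ContentType: "text/plain"
}, function (error, result) {
  if (error) {
    console.log("Upload failed:", error);
  } else {
    console.log("Upload succeeded!", result);
  }
});
```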

Google Cloud Storage

Google’s Cloud Storage offering is fairly similar to Amazon S3, with pricing coming in a little bit under at $0.0260 per GB per month, or $26/mo. for 1TB stored. For transfer, Google is a bit different, charging a fee per GB transferred starting at $0.12, depending on destination. For a 1TB data store with customers outside of China and Australia requesting around ~500GB of data in a month, your GCS bill would come in around ~$86.00/mo.

This is quite a bit more than Amazon; however, consider that you get the functionality of a CDN by default (your files are served from Google’s Cloud Storage network, not just a single location). Also note that our sample size (1TB of data, 500GB of transfer) is a lot. Most applications (e.g. those not hosting content for customers) will not hit this scale, so your costs will be substantially lower and more affordable.

Costs aside, Google also offers a collection of libraries for interacting with their APIs (they offer both an XML-based API and a JSON-based API). As of this writing, Google does not offer a server-side JavaScript library for accessing their API; however, a Meteor package does exist for making API requests to all of Google’s services on both the client and the server.
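Because the JSON API is plain HTTPS, a quick sketch with Meteor's http package might look like this (the bucket name is a placeholder, and the OAuth access token is assumed to come from Google's OAuth flow, e.g. via a service account):

```javascript
// Server-side. Assumes `meteor add http` and an OAuth 2.0 access token obtained
// separately (e.g. via a Google service account). The bucket name is a placeholder.
var listBucketObjects = function (bucket, accessToken) {
  var response = HTTP.get(
    "https://www.googleapis.com/storage/v1/b/" + bucket + "/o",
    { headers: { Authorization: "Bearer " + accessToken } }
  );

  return response.data.items; // each item describes one object in the bucket
};
```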

Rackspace Cloud Files

Rackspace’s offering follows in step with Amazon and Google, with storage priced at $0.10 per GB per month for your first 1TB of data; storing 1TB will cost you $100/mo. For the first 10TB of transfer each month, Rackspace asks $0.12 per GB, so storage alone comes in around ~$100 per month plus transfer. However, similar to Google, Rackspace’s Cloud Files service also functions as a CDN, so you’re not incurring additional costs there.

As Rackspace puts it: “Cloud Files leverages infrastructure that is located throughout our global data centers, and in over 200 global content delivery network (CDN) edge locations.”

With respect to support for Meteor, Rackspace does offer a Node.js SDK, as well as a REST API for more custom implementations. As of this writing, a single Meteor package exists for interacting with Rackspace Cloud Files; however, it doesn’t have any documentation and sees little real usage.
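For what it's worth, that Node SDK is pkgcloud; a rough, hypothetical sketch of pushing a file into a Cloud Files container from the server might look like the following (the username, API key, region, and container name are all placeholders):

```javascript
// Server-side Node sketch using the pkgcloud SDK (npm install pkgcloud).
// Username, API key, region, and container name are placeholders.
var pkgcloud = require("pkgcloud");
var fs = require("fs");

var client = pkgcloud.storage.createClient({
  provider: "rackspace",
  username: "YOUR_USERNAME",
  apiKey: "YOUR_API_KEY",
  region: "IAD"
});

// `upload` is a writable stream; pipe the local file into it.
var upload = client.upload({
  container: "tmc-example-container",
  remote: "social.png"
});

upload.on("error", function (error) { console.log("Upload failed:", error); });
upload.on("success", function (file) { console.log("Uploaded!", file.name); });

fs.createReadStream("social.png").pipe(upload);
```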

Microsoft Azure

Microsoft’s Azure service is similar to Amazon, Google, and Rackspace; however, it offers different grades of storage (better accessibility of content for a higher price). Depending on which tier you select, your files will either be hosted in a single location or replicated across multiple facilities. To get replicated storage with Azure (branded as ZRS, or Zone Redundant Storage), pricing starts at $0.048 per GB per month for the first 1TB of storage. So, 1TB per month will cost ~$48/mo. For transfer, the deal is pretty sweet:

As Microsoft puts it: “We charge $0.0036 per 100,000 transactions for all Standard storage types. Transactions include both read and write operations to storage.”

With respect to interacting with Azure, Microsoft does offer an NPM package for the service as well as a REST API. A Meteor package also exists for interacting with their storage service on the server.
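Here's a rough sketch using that NPM package, azure-storage (the account name, account key, and container name are placeholders):

```javascript
// Server-side Node sketch using the azure-storage package (npm install azure-storage).
// Account name, account key, and container name are placeholders.
var azure = require("azure-storage");

var blobService = azure.createBlobService("YOUR_ACCOUNT_NAME", "YOUR_ACCOUNT_KEY");

blobService.createBlockBlobFromLocalFile(
  "tmc-example-container", // container to upload into
  "social.png",            // name of the blob in Azure
  "social.png",            // path to the local file
  function (error, result) {
    if (error) {
      console.log("Upload failed:", error);
    } else {
      console.log("Uploaded!", result.name);
    }
  }
);
```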

How to choose?

The easiest way to choose a service for hosting your files is to ask two questions: “how much do I want to pay?” and “how easy is it for me to integrate the service into my application?” The answers ultimately depend on the type of application you’re building and the political environment in which your work is being done. When it comes to Meteor, Amazon S3 is definitely the easiest option; however, depending on business concerns, you may need to work with something else.

For the remainder of this post, though, we’re going to take a closer look at getting set up using Amazon S3 and CloudFront to host our static files.

Personal Preference

This recommendation is based purely on personal preference and is not intended to be an advertisement for Amazon S3. I’ve had few to no issues with their service, so it’s become my de facto option for hosting static files in the cloud. Make sure to consider your own needs and not just my opinion when making a decision on a provider!

Using Amazon S3

Getting set up with Amazon is pretty easy. To get started, we need to make sure we have an Amazon AWS account. Keep in mind that this can be your existing Amazon account if you have one available; however, it’s best to keep your diaper and obscure-anime purchases separate from your hosting tools. To each their own, though!



Once you’re signed up, take a deep breath when you have the “holy crap!” moment at the number of services that Amazon offers. For our uses right now, we’re going to focus on the S3 service.

Once you’ve accessed the S3 service, you will see a list of your buckets (if any exist) along with the option to create a bucket. Go ahead and click on that option to get a new one set up. You’ll be prompted for two pieces of information: the name for your bucket and the region where you want it to be available. The latter option is important. Remember that Amazon S3 is tied to a specific location. This means that you should choose the region from the list that’s closest to you and the majority of your users. This will always be a bit of a guess, but think it through. Amazon is pretty quick these days, but every ms of latency counts!
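The console is the simplest way to create a bucket, but for completeness, the same thing can be sketched with the SDK from earlier; the credentials, bucket name, and region below are placeholders:

```javascript
// Server-side. Same AWS SDK setup as the earlier sketch; credentials, bucket
// name, and region are placeholders.
var s3 = new AWS.S3({
  accessKeyId: "YOUR_ACCESS_KEY_ID",
  secretAccessKey: "YOUR_SECRET_ACCESS_KEY"
});

s3.createBucket({
  Bucket: "tmc-example-bucket",
  CreateBucketConfiguration: { LocationConstraint: "eu-west-1" }
}, function (error, result) {
  if (error) {
    console.log("Could not create bucket:", error);
  } else {
    console.log("Bucket created at:", result.Location);
  }
});
```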

Once you’ve created your bucket, you’re all set! Depending on how your bucket needs to behave, you may also need to configure permissions for the bucket using Amazon’s Bucket Policy feature with an Access Control List. It’s a bit out of scope for this post, but an ACL lets you control who has access to your content and what actions they can perform on it (e.g. you want your content private to everyone but authenticated users of your application).

By default, without a specific bucket policy in place, all files uploaded to Amazon are private. You can, however, grant permissions on a per-file basis. This is how some of the image files are stored on The Meteor Chef. For example, if we wanted to set permissions for a single image, we can right-click it and select the “Make Public” option from the dropdown list.

Without setting this, requests for the file return an “Access Denied” error from S3; after making the file public, the file loads as expected.

Obviously, doing this by hand gets tedious quickly, so it’s worth implementing a more global bucket policy if you have a lot of content.
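For reference, a minimal public-read bucket policy looks something like this (the bucket name is a placeholder; you'd paste this into the bucket's policy editor in the S3 console):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::tmc-example-bucket/*"
    }
  ]
}
```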

Managing content from within our app

Of course, uploading files one-by-one is only acceptable if we have a handful of files. If we’re uploading a lot of static content, our best option is to rely on one of the packages available to us on Atmosphere. Depending on our needs, something simple like the edgee:slingshot package will do the trick—this gives us a simple API for uploading files to Amazon S3—or, for more advanced implementations, we can access the AWS JavaScript SDK via the peerlibrary:aws-sdk package.
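As a taste, a minimal edgee:slingshot setup might look something like the following; the directive name, bucket, restrictions, and authorization logic are placeholders, and the package expects your AWS credentials in Meteor.settings (AWSAccessKeyId and AWSSecretAccessKey):

```javascript
// Client and server: declare what can be uploaded through this directive.
Slingshot.fileRestrictions("staticContent", {
  allowedFileTypes: ["image/png", "image/jpeg", "image/gif"],
  maxSize: 5 * 1024 * 1024 // 5MB
});

// Server only: where uploads go and who may perform them.
Slingshot.createDirective("staticContent", Slingshot.S3Storage, {
  bucket: "tmc-example-bucket",
  acl: "public-read",
  authorize: function () {
    return !!this.userId; // only logged-in users may upload
  },
  key: function (file) {
    return this.userId + "/" + file.name;
  }
});

// Client only: hand a File object to the directive.
var file = document.querySelector("[type='file']").files[0];
var upload = new Slingshot.Upload("staticContent");

upload.send(file, function (error, downloadUrl) {
  if (error) {
    console.log("Upload failed:", error);
  } else {
    console.log("File available at:", downloadUrl);
  }
});
```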

Regardless of how we do it, once we have our content uploaded and permissions set, we will have cloud-based storage for our application! Using that content is as simple as including the Amazon S3 URLs in our application (yes, we can add custom domains to S3 to alias this), e.g. https://s3.amazonaws.com/tmc-example-bucket/social.png.

Like many features, how automated we make this is up to us. Fortunately, we have several packages that streamline this process, so it shouldn’t take too much time to get something working. A good rule of thumb is to invest time in automation in proportion to how heavily you’ll use the storage. If you need to store/manage fewer than 100 files, it may be worth doing it by hand. If your application allows users to store content—or you host a lot on your own—investing in building an uploading interface is time well spent.

Using Amazon CloudFront

Once we have our files up on Amazon S3, they are cloud based; however, they technically only exist in one location (the region we set for our bucket). To make our content available from multiple regions, Amazon offers a service called CloudFront, which acts as an intermediary, routing user requests to the cached copy of our content closest to their location. This is kind of confusing at first. Here are the steps:

1. Our domain name, e.g. http://files.themeteorchef.com, is configured as a CNAME record in our DNS (see the zone-file sketch after these steps).

2. That CNAME record points to an Amazon CloudFront address like https://<distribution-ID>.cloudfront.net.

3. When a request is made to a file at our domain name, e.g. http://files.themeteorchef.com/file.jpg, it’s pointed to https://<distribution-ID>.cloudfront.net/file.jpg.

4. From here, CloudFront locates the user and then points them to the cached copy of that file geographically closest to them.
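For step 1, a hypothetical DNS zone entry might look like this (the hostname and distribution ID are placeholders):

```
; Hypothetical DNS zone entry; hostname and distribution ID are placeholders.
files.themeteorchef.com.  IN  CNAME  d111111abcdef8.cloudfront.net.
```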

Getting this set up is pretty easy. First, we need to create a CloudFront distribution by accessing the service from our AWS panel.

When we go to create a distribution, we select the bucket from Amazon S3 where our content lives. There are a bunch of other options when creating a distribution, but the default settings will work in most cases; the more you rely on static content hosting, the more you’ll want to pay attention to and tweak these.

Once you’ve created your distribution, Amazon will create a CloudFront URL for you and start distributing the content in your bucket to its data centers around the world. When all is said and done, our new file will be available at our CloudFront URL (this URL is visible in the list of distributions in the CloudFront console).
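In the application itself, it can be handy to keep that CloudFront domain in one place, for example with a tiny helper like this (the domain here is a placeholder):

```javascript
// A small convenience helper; the CloudFront domain is a placeholder.
var CDN_URL = "https://d111111abcdef8.cloudfront.net";

var cdn = function (path) {
  return CDN_URL + "/" + path;
};

// Usage, e.g. in a template helper:
// cdn("social.png") => "https://d111111abcdef8.cloudfront.net/social.png"
```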

Going further

This is obviously a complex topic, but something that’s incredibly important to consider. Later this week, we’ll take a look at building an upload interface for Amazon S3 into our Meteor application. We’ll learn how to set up buckets, configure access policies, and build the actual uploader to make managing content a little bit easier on us!

How do you manage static content? Any tips or tricks for your fellow Meteor developers? Let us know in the comments!
