Hosting a Static Website on AWS S3

A static website is one that serves fixed content only, with no server-side computation required. AWS S3 provides a cheap way to host a static website.

This approach suits sites that do not require a lot of maintenance, or at least are managed by a small number of people who will not be changing content at the same time. Pages that don’t change often, such as a biography page or company contact information, are typical uses of static websites.

This post walks through the process of setting up a static website on S3 and creating a DNS entry in AWS Route 53.

Before getting started you need to sign up for the free tier of AWS. This allows a total of 5 GB of storage and 20,000 GET requests (at the time of writing), which should be more than enough to get a proof of concept up and running without incurring any cost. You should also have a registered domain. Both of these steps are beyond the scope of this post.

Note that Amazon Route 53 does incur costs: there is a charge per million DNS lookups, plus a hosted zone fee, which at the time of writing is €0.50 per month in the Europe region. It is worth checking that you are comfortable with this before following along.

Create Buckets

A bucket is the S3 term for a group of files, and each bucket has a set of permissions. In the case of a statically hosted website, you must give public read access to all files in the bucket. Files in a bucket are presented as if they lie in a folder structure similar to an operating system; however, the folders are simply labels that form part of the names identifying the files.

It is important to name the bucket so that it matches the domain name that you have registered. The AWS literature suggests creating a bucket called my-domain.com for the domain my-domain.com. This is where the files that make up the website will reside.

In this example, I am creating a website for the domain clarevilledaycentre.org. It is a simple informational site for a non-profit organisation that provides day facilities for older citizens and physically impaired individuals.

The first thing to ensure is that you are in the correct region. AWS has a large number of data centres across many different regions, and selecting the region where your main audience is based should reduce latency on requests. The region should default to the closest one based on your registration details and appears in the upper right corner of the AWS web console. In my case EU (Ireland) is selected.
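
If you also plan to use the AWS Command Line Interface (introduced below), it is worth confirming that its default region matches. A quick check, assuming the CLI is already installed and configured:

aws configure get region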

Log into the AWS console and select S3 from the myriad options on the Services tab. Click the Create Bucket button and enter the name of the bucket. You can click create immediately or explore the other tabs.

I won’t go through all the options, but the Tags option is interesting. If you are going to use AWS for multiple clients, you can use tags to get a breakdown of billing costs when they arise.

Tags are simple key-value pairs and you can use as many as necessary. For example, you could have tags for client and department to give the client a breakdown by the departments within their organisation.

For now I will just create the bucket, accepting all the defaults bar the tag I added. I will extend the permissions later.

Note that a bucket can also be created using the AWS Command Line Interface. See my recent post, AWS Command Line Interface and S3, for how to get set up with the CLI. The command would be:

aws s3 mb s3://clarevilledaycentre.org
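
Tags can also be applied from the CLI. This is a minimal sketch using hypothetical client and department tag values; note that put-bucket-tagging replaces any existing tag set on the bucket:

aws s3api put-bucket-tagging --bucket clarevilledaycentre.org --tagging 'TagSet=[{Key=client,Value=clareville},{Key=department,Value=website}]'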

I set up two buckets in total. The one above will be granted public access and host all the files related to the site. The other is named for the www subdomain and will redirect all requests to the main bucket. In this way it does not matter whether the user types the www when entering the address into a browser; the same page will show up in both cases.
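
If you are using the CLI, the second bucket can be created in the same way. A sketch, assuming the www-prefixed name matches your domain:

aws s3 mb s3://www.clarevilledaycentre.org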

Uploading the Content

All HTML, CSS, image and other files need to reside in S3 so that they can be served. The simplest way to upload them is the web console: click on the bucket name and use the “Upload” and “Create folder” buttons.

If you have set up the CLI, it is possible to copy all the files up to S3 in one command. In my case this was:

aws s3 cp --recursive site s3://clarevilledaycentre.org

There is also the sync command, which sends only incremental changes to S3 rather than copying everything each time.
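
A rough sketch, assuming the local site files live in a folder called site as above:

aws s3 sync site s3://clarevilledaycentre.org

Adding the --delete flag would also remove objects from the bucket that no longer exist locally.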

Public Access

In order to serve a static website, the bucket must allow anonymous, or public, access to all files. Amazon does not recommend this for most use cases, but it is necessary for a static website.

There are two steps: first, untick the protection that blocks all public access to the bucket; then assign a bucket policy (written in the IAM policy language; Identity and Access Management is a big topic for another day!) which grants read access to all the files in the bucket. From the S3 landing page, first click the bucket name and then the Permissions tab.

Go to the “Block public access” sub-tab and untick the “Block all public access” option. Hit save and follow the confirmation instructions.
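
The same change can be made from the CLI. A sketch that sets all four public-access-block flags to false for the bucket:

aws s3api put-public-access-block --bucket clarevilledaycentre.org --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false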

Next, go to the “Bucket Policy” sub-tab and add a new policy. This is a small piece of JSON that has the effect of allowing read access to any public user. Note that the Resource entry (line 11 below) needs to be updated to the ARN (Amazon Resource Name) for your bucket, with /* appended. The ARN should appear on the same page just above the policy editor window.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::clarevilledaycentre.org/*"
        }
    ]
}
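
The policy can also be applied from the CLI. A sketch, assuming the JSON above has been saved locally in a file called policy.json (a hypothetical file name):

aws s3api put-bucket-policy --bucket clarevilledaycentre.org --policy file://policy.json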

Configure Static Hosting

For each of the buckets in turn, we need to turn on static hosting. This is done from the Properties tab of each bucket. First click on the “Static website hosting” square and then enable the setting. Note that if your landing page is something other than index.html, it can be configured on this screen.

This is a good time to note that you should also configure an error page, to give a nicer message back than the default AWS 404 Not Found response.
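
The equivalent from the CLI is shown below as a rough sketch; the error.html document is an assumption and should match whatever error page you actually upload:

aws s3 website s3://clarevilledaycentre.org/ --index-document index.html --error-document error.html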

The configuration for the www bucket is a little different. Click the bucket name from the S3 page on the console and again select the “Static website hosting” square. This time tick the “Redirect requests” radio button and type in the name of the root domain.

All traffic on www will be redirected to the public bucket, so either address will work.
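
A CLI sketch of the same redirect configuration, using a RedirectAllRequestsTo rule on the www bucket:

aws s3api put-bucket-website --bucket www.clarevilledaycentre.org --website-configuration '{"RedirectAllRequestsTo":{"HostName":"clarevilledaycentre.org"}}'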

Configure DNS

AWS Route 53 provides the service to map your domain name to the S3 bucket hosting the website. Any DNS provider (I’ve used Cloudflare in the past) can do this, including the registrar that the name was purchased from; however, I thought I would keep this all in AWS for demonstration purposes.

Again, note that Amazon Route 53 incurs a cost that will be added to your bill once your hosted zone has been up for twelve hours. The charge works out at about fifty cent per month, assuming a low query volume.

The first step is to go to the AWS Route 53 console at console.aws.amazon.com/route53. To get started, click the Create Hosted Zone button. Enter the domain name, select “Public hosted zone” and then click the “Create” button.

Two records are created initially: an NS record listing four name servers that act as the authoritative DNS servers for the site, and an SOA record that, among other things, indicates which of these servers should be considered the primary name server.
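
The hosted zone can also be created from the CLI. A sketch; the caller reference is simply a unique string of your choosing, and the value shown here is hypothetical:

aws route53 create-hosted-zone --name clarevilledaycentre.org --caller-reference clareville-2020-01-01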

Within the new hosted zone, click “Create Record Set”. I’ve created an A record for clarevilledaycentre.org in this screenshot, first selecting that it is to be an Alias record and then selecting the bucket from the dropdown list. I left the rest of the defaults as is.

I also created a CNAME (canonical name) record which maps the www site to the A (address) record. DNS setup is beyond the scope of this post, but my setup appears to be the bare minimum required. Note that DNS propagation takes time, so be patient when testing each change.

In the above screenshot, the TTL is set to 300 seconds. This means that DNS servers can cache a result for up to that amount of time, so if you make a change it may take five minutes to propagate while the caches are cleared out.
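
For completeness, the same two records could be created from the CLI with a change batch. This is a sketch only: the hosted-zone-id is a placeholder for your own zone’s ID, and the alias target values (the S3 website endpoint and its fixed hosted zone ID for eu-west-1) should be checked against the AWS documentation for your region:

aws route53 change-resource-record-sets --hosted-zone-id <your-hosted-zone-id> --change-batch file://records.json

where records.json contains:

{
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "clarevilledaycentre.org",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z1BKCTXD74EZPE",
                    "DNSName": "s3-website-eu-west-1.amazonaws.com",
                    "EvaluateTargetHealth": false
                }
            }
        },
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "www.clarevilledaycentre.org",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    { "Value": "clarevilledaycentre.org" }
                ]
            }
        }
    ]
}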

Update Domain Registration to Use Route 53

In order for traffic to reach the AWS DNS servers, we have to enter the appropriate name servers into the settings at the domain registrar. If I had purchased the domain name from Amazon, this step would not be necessary, but I had registered it several months ago with Namecheap.

To find the name servers, go to the “Hosted Zones” page on AWS Route 53 and select the radio button beside the name of the hosted zone created above. A panel should spring up on the right hand side and there should be four name servers listed.
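
The same information is available from the CLI. A sketch, where the placeholder is your hosted zone ID (aws route53 list-hosted-zones will show it if you did not note it earlier); the DelegationSet section of the output lists the four name servers:

aws route53 get-hosted-zone --id <your-hosted-zone-id>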

Note that AWS assigns a set of four random name servers each time you create a new hosted zone, which may not be what you want depending on how many hosted zones you control. It is possible to configure AWS Route 53 so that the same name server set applies to every hosted zone by creating a reusable delegation set, but that is beyond the scope of this post.

Once you have noted the name servers, go to the site that the domain name was registered with and enter the details there. In my case this was on Namecheap and the setting is on the main dashboard for the domain.

The instructions will tell you that this process can take up to twenty-four hours, but it is usually faster than that. After everything was set up as above, I was able to reach the site at both the root domain and the www subdomain URLs.
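
A quick way to confirm that the registrar change has propagated is to query DNS directly. A sketch using dig, which is available on most Unix-like systems; the first command should list the four AWS name servers once propagation is complete, and the second resolves the site itself:

dig +short NS clarevilledaycentre.org
dig +short clarevilledaycentre.org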

Summary

In this post I demonstrated setting up a static web site using AWS S3 and Route 53. This included:

  1. Creating a bucket in S3 to host the files
  2. Unblocking public access
  3. Applying an access policy to the bucket
  4. Creating a bucket in S3 to redirect www requests to the first bucket
  5. Configuring the www bucket for redirection
  6. Setting up a Route 53 Hosted Zone
  7. Adding A and CNAME records so that the content is served

As always, if you have any comments, suggestions or questions, let me know at ronan@failedtofunction.com.