7 Steps to Hosting Your Website on AWS S3 and Route 53
AWS S3 (Amazon Simple Storage Service) is an affordable, easy, and safe option to host a statically-generated website.
By statically-generated, I mean a site that is generated without a server handling user requests. This could be a truly static site in plain HTML created by a static-site generator such as Jekyll, or a site built with any number of single-page application (SPA) frontend frameworks, such as React, Angular, or Vue. The opposite of a statically-generated site would be one backed by a server-side language such as PHP (think WordPress).
Given how sophisticated SPAs have become and how powerful modern browsers are now, it is quite feasible, even common, to have a statically-generated frontend that provides complex interactivity with dynamic data.
This simple step-by-step tutorial shows how to configure an AWS S3 bucket to host a statically-generated website, and how to route web traffic to a domain name using Route 53, Amazon’s DNS service.
In this post, I’ll use Route 53 to show an end-to-end process. But using Route 53 is optional. You can use any DNS service to resolve your domain traffic to S3.
Configure S3 Bucket for Website Hosting
We will create two S3 buckets: one where I will upload my static site, and one for redirecting www subdomain traffic.
1. Create Primary Bucket with Domain Name
My domain name is chienyihung.com. The bucket name must be exactly the same as the domain name. This is where I will upload my static site files.
In the next step, there are various options to configure. I chose to enable object versioning, but it is entirely optional; for most static hosting situations, you will not need it.
Click next to set permissions to enable public access.
Review all the choices and then hit Create bucket.
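If you prefer the command line, the same bucket can be created with the AWS CLI. A minimal sketch, assuming the CLI is installed and credentials are configured (the region is my choice; yours may differ):

```shell
# Create the primary bucket named after the domain (requires AWS credentials)
aws s3 mb s3://chienyihung.com --region us-east-2

# Optional: enable object versioning, matching the console choice above
aws s3api put-bucket-versioning \
  --bucket chienyihung.com \
  --versioning-configuration Status=Enabled
```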
2. Enable Static Website Hosting
From the dashboard, choose the chienyihung.com bucket and click on Properties. Click on Static website hosting to enable this feature.
Enter index.html as the Index Document. Note the endpoint, which is the direct URL of this S3 bucket’s website; I can use it later to verify a successful upload.
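This step also has a one-line CLI equivalent (requires AWS credentials; the error.html error page here is my own assumption and is optional):

```shell
# Enable static website hosting and set the index (and optional error) document
aws s3 website s3://chienyihung.com/ \
  --index-document index.html \
  --error-document error.html
```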
3. Add a Bucket Policy to Allow GetObject
In addition to enabling static hosting and making the bucket itself publicly accessible, I still need to attach a bucket policy, which grants permission at the object (file) level.
The policy should follow this JSON example from AWS:
{ "Version": "2012-10-17",
"Statement": [ {
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": [ "s3:GetObject" ],
"Resource": [ "arn:aws:s3:::example.com/*" ]
}]
}
Or you can use the handy AWS Policy Generator to quickly create the proper JSON.
Then, I copy and paste the JSON (with the Resource ARN changed to my own bucket) into the Bucket Policy for chienyihung.com.
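The same policy can also be attached from the command line. A minimal sketch, assuming the policy is first saved to a local file (the put-bucket-policy call itself requires AWS credentials, so it is shown commented out):

```shell
# Write the bucket policy to a local file
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::chienyihung.com/*"]
    }
  ]
}
EOF

# Sanity-check that the file is valid JSON before attaching it
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid JSON"

# Attach it to the bucket (requires AWS credentials):
# aws s3api put-bucket-policy --bucket chienyihung.com --policy file://policy.json
```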
We are now done with the primary bucket, and we have a destination for the domain chienyihung.com.
4. Create a Redirect S3 Bucket for Sub-domain
In order to make www.chienyihung.com work, I need to set up a redirect bucket for the www subdomain.
The steps are the same as in Step 1, except do NOT enable the public access option.
5. Enable Static Website Hosting Redirect
From the S3 dashboard, choose the www subdomain bucket, then click on Properties to edit. Choose Static website hosting, select Redirect requests, and enter the primary domain, chienyihung.com, as the target.
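The redirect can also be configured with the CLI. A sketch, assuming the www bucket already exists (requires AWS credentials):

```shell
# Configure the www bucket to redirect all requests to the primary domain
aws s3api put-bucket-website \
  --bucket www.chienyihung.com \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"chienyihung.com","Protocol":"http"}}'
```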
6. Upload Your Site from the Command Line
Finally! Now I can upload my site.
Instead of using the AWS console, I am going to configure the AWS credential profile to use with AWS CLI. This step can also be easily hooked up to your CI/CD pipeline if you have one.
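A minimal sketch of that credential setup, assuming a named profile (the profile name my-site is my own choice; the prompts ask for an access key and secret key created in the IAM console):

```shell
# Interactively store an access key, secret key, default region, and output
# format under a named profile
aws configure --profile my-site

# Later commands can then target that profile, e.g.:
# aws s3 sync source-folder s3://chienyihung.com --delete --profile my-site
```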
After the credential setup, I can copy/upload my static site files to the destination bucket with a single command like this:
aws s3 sync source-folder s3://<bucket-name-here> --delete
aws s3 sync will upload any changed or added files from the source to the target. The --delete flag deletes any files in the target that are not present in the source. The sync command is recursive by default.
Alternatively, you can use the cp command:
aws s3 cp source-folder s3://<bucket-name-here> --recursive
This will upload/replace all files from the source to the target recursively, but it will not delete any other files in the target location.
Read more about high-level S3 commands from the official doc.
After the upload, I can test with the endpoint. For example, I should see the basic index.html I uploaded by navigating to http://chienyihung.com.s3-website.us-east-2.amazonaws.com/index.html
Configure Route 53 for Resolving Domain Name
I now need to configure my domain name in Route 53 to begin routing traffic. If Route 53 is not an option or you have a preferred DNS manager, see this post on configuring DNS outside of AWS.
7. Create a Hosted Zone and A Records
In the Route 53 dashboard, create a new hosted zone for the domain, chienyihung.com.
Then on the dashboard, select this new zone and click Go To Record Sets to create new A records for the zone. I’ll create a total of two new A (IPv4 address) records: one for the domain and one for the www subdomain.
Each A record should have an alias target that is the corresponding S3 bucket.
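For completeness, an alias A record can also be created with the CLI. A sketch, where Z111111111111 is a placeholder for your own hosted zone ID; the AliasTarget HostedZoneId is the fixed, region-specific ID that AWS publishes for S3 website endpoints (Z2O1EMRO9K5GLX for us-east-2; check the AWS endpoints documentation for your region):

```shell
# Create an alias A record pointing the apex domain at the S3 website endpoint
# (requires AWS credentials)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z111111111111 \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "chienyihung.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2O1EMRO9K5GLX",
          "DNSName": "s3-website.us-east-2.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```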
This is where the Route 53 UI makes life easier: it is smart enough to pre-populate the Alias Target drop-down with the corresponding S3 bucket for me to choose.
Now, when I go to chienyihung.com or www.chienyihung.com, I should see my super basic index page!
(Well, neither of those URLs is accessible anymore, because I have since deleted the whole setup after writing 😃.)