After several years working in on-premises data centers, I’ve recently returned to the cloud. This inspired my return to blogging, as much to play around with the newest AWS technologies as to write. I decided to build this site as “cloud native” as possible. This, by definition, excluded solutions such as WordPress.
The goal is to have no servers to manage: servers are single points of failure and require maintenance, security patching, general upkeep, and so on.
I’ve long been a reader of Daring Fireball and appreciate that John Gruber’s site is served as static content. This simplifies matters considerably, and I’m a big fan of simplicity. While Daring Fireball is built on Movable Type, and MT serves static content to visitors, the publishing toolchain is quite complicated to configure, install, and maintain. To make it cloud native on the configuration-management side, I would need to write Terraform to set up the infrastructure and then a considerable amount of Ansible, Puppet, Salt, or Chef to configure the server. That is exactly what I am trying not to do.
Terraform is a great solution because it allows us to define our infrastructure as code. This, in turn, allows us to simply change a config file and re-run the tool to make any changes necessary. For instance, if I chose a new domain name, I would simply change the var.domain_name variable, re-run terraform apply, et voilà, my site would magically show up at the new domain.
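For reference, here is a minimal sketch of how those two variables might be declared in a variables.tf. The variable names match the configuration below; the default values are illustrative:

```hcl
# Sketch of variables.tf; defaults shown are examples, not requirements
variable "domain_name" {
  description = "Apex domain for the site, e.g. eremy.nl"
  type        = string
}

variable "host_name" {
  description = "Host label prepended to the domain, e.g. j for j.eremy.nl"
  type        = string
}
```

Leaving the defaults out forces Terraform to prompt for (or be passed) the values, which makes accidental applies against the wrong domain less likely.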
Jekyll is possibly the most popular open source static site generator at the moment, as it powers a large fraction of the GitHub Pages sites out there. Installation is relatively simple: it’s a single Ruby gem. It uses Markdown for formatting, which is a huge bonus.
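Getting a site scaffolded locally is roughly the following; the gem and commands are Jekyll’s standard ones, while the site name here is just a placeholder:

```shell
# Install Jekyll (and Bundler, which manages gem dependencies)
gem install jekyll bundler

# Scaffold a new site, then build it into the _site/ directory
jekyll new my-site
cd my-site
jekyll build
```

The contents of _site/ are plain static files, which is exactly what we want to push to S3.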
Infrastructure
The infra needed to run j.eremy is all cloud native and hosted on Amazon S3. I used Terraform to configure the cloud components. The basic building block is an S3 bucket, which I configured as follows:
data "aws_iam_policy_document" "website_policy" {
  statement {
    actions = [
      "s3:GetObject"
    ]
    principals {
      identifiers = ["*"]
      type        = "AWS"
    }
    resources = [
      "arn:aws:s3:::${var.host_name}.${var.domain_name}/*"
    ]
  }
}

resource "aws_s3_bucket" "website_bucket" {
  bucket = "${var.host_name}.${var.domain_name}"
  acl    = "public-read"
  policy = data.aws_iam_policy_document.website_policy.json

  website {
    index_document = "index.html"
    error_document = "index.html"
  }
}
By default, this makes a static website available at j.eremy.nl.s3-website-us-east-1.amazonaws.com. Of course, we’d rather it be available at our own domain name: j.eremy.nl. So we use Route53 to configure DNS. First we need to create the domain zone itself, and then the A record that points the j
name to the S3 bucket.
resource "aws_route53_zone" "eremy_nl" {
  name = var.domain_name
}

resource "aws_route53_record" "j_site" {
  zone_id = aws_route53_zone.eremy_nl.zone_id
  name    = "${var.host_name}.${var.domain_name}"
  type    = "A"

  alias {
    name                   = aws_s3_bucket.website_bucket.website_domain
    zone_id                = aws_s3_bucket.website_bucket.hosted_zone_id
    evaluate_target_health = false
  }
}
The only manual step, sadly, is updating my nameserver records at my registrar. In order to avoid spelunking through the AWS Route53 console to find these, I have Terraform output the list.
output "eremy_nl_nameservers" {
  value = aws_route53_zone.eremy_nl.name_servers
}
From this point, the files in the S3 bucket now appear at my domain name. It’s simply a matter of generating the site and uploading it to the bucket.
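That upload can be a one-liner with the AWS CLI. A sketch, assuming the site was built into _site/ and AWS credentials are already configured:

```shell
# Build the site, then mirror _site/ into the bucket,
# removing any remote objects that no longer exist locally
jekyll build
aws s3 sync _site/ s3://j.eremy.nl/ --delete
```

The --delete flag keeps the bucket an exact mirror of the generated output, so stale pages don’t linger after a rename.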
This doesn’t yet get us DDoS protection or HTTPS, but we build things one step at a time in the Derr Household. All in good time.
Next, we’ll break down the publishing process.