Posts
All Cloud, No Cattle

j.eremy.nl was a cute temporary name, but I’d wanted to start posting before I’d settled on a permanent moniker. After weeding out my shopping cart, I’ve settled on Impulsive Ventures.

I think this neatly summarizes my life in many ways. I famously struggle with impulse control in ways that, through both luck and hard work, seem to inevitably lead me into ventures, adventure, and general shenanigans.

Update your bookmarks, folks.

I’ll post an update later today explaining how I implemented this in Terraform.

I cheated a little bit in this post when I mentioned that I ride Tram 25 to work. The 25 will not actually exist until this Sunday. Until spring 2019, I rode Metro 51 on the exact same route. The 51 was a spur of the Amsterdam subway that was retired at that time so the railway could be renovated for the new Tram 25 that will replace it. In the interim, until the pandemic hit, I rode a temporary bus (Buslijn 55) that covered the same area but snaked through the south Amstelveen neighborhoods instead of running straight down the Beneluxbaan.

Today, the project posted a video of the new tram line’s journey in each direction. As I mentioned, I board at Meent station and travel to Amsterdam Zuid. I’ve cued the video to start on the approach to my stop. While the tram is stopped there, you can see a street stretching off to the right. Our house is in that direction.

There’s a similar (one-way) video of the Noord-Zuid Metro 52 here. My old office at Booking was at Vijzelgracht station at about the halfway point. My new office is at Noorderpark, the first stop after the video begins.

Yours truly, writing for Hundred Degree Hockey back in July:

I was quite keen to continue playing as we prepared for our relocation to the Netherlands, but I couldn’t find any organizations similar to this - not just in Amsterdam, but really anywhere in the country. The only ice hockey I could find was Nederlandse IJshockeybond, the national senior hockey circuit. That seemed a bit over my pay grade, so I sold my hockey gear before we moved and sadly figured that my hockey career was largely over.

I was more than a little wrong.

We only barely finished our 2019-20 campaign before the pandemic, and it’s looking like our 2020-21 season is going to be wiped out as well.

I converted back to goalie last season and went 5-0 in net. Before the second wave suspended the current season, I also won one match, 12-6 against Alkmaar.

Truly the end of an era, as Threadgill’s falls silent for the last time. It was a perennial favorite during my years in Austin; at one point I was eating there at least monthly, and it was my go-to spot for business lunches. The down-home eats and constant stream of live music made it perhaps the most gezellig place in all of Texas.

You will be missed, old friend. Your chicken fried pork chops will live on in my dreams.

I plan to keep the site avatars constrained to photos I’ve personally taken. The rodeo avatar I started with was a temporary placeholder.

No Entry Smiley

A graffiti artist tagged this No Entry sign on the Beneluxbaan bike path a year or so ago, and I couldn’t help but stop to take a snap of it. I framed it well enough that I’ve used it as my Slack avatar off and on ever since.

Commuting is quite a bit different here in the Netherlands than what I was used to back in Texas. I lived in north Round Rock, Texas, and commuted to an office on South Loop 360. The trip was about 25 miles (40 km) and, at rush hour, often took me 45 minutes to an hour. It often felt like the opening scene of Office Space. Humorously enough, that scene includes shots from two locations I’ve commuted through a lot in my life: Burnet Road and Mopac in North Austin, and I-635 in Dallas. The routine itself was simple:

  1. Get in my car
  2. Drive a while
  3. Get out of my car

Of course, there are also the attendant costs of owning a car, such as fuel, insurance, upkeep, and so on. Just getting to and from work probably cost me around $400 a month. That’s quite a lot, and ditching that cost has been a real godsend here in the Netherlands.

My Dutch Commute

Up until recently, I worked on Vijzelstraat in central Amsterdam. From my home in Amstelveen Zuid, it’s about 7 miles (11 km). Driving would take about 20 minutes, plus another 10 or 15 to park and walk to the office. Parking alone in the Centrum is outrageously expensive, often running around €500 ($590) a month for a spot. That makes owning a car not just “unattractive” but ruinously expensive for the purposes of commuting.

By Bike

When the weather is good in the spring and early fall, I simply bike it; the ride takes me about 40 minutes. Two-thirds of the trip, from Amstelveen to Amsterdam Zuid train station, is on smooth, nicely paved, protected bike lanes. Once I get into Amsterdam proper, around De Pijp, the streets begin to narrow and my path varies between protected bike paths and shared lanes. Car traffic there is restricted to 30 km/h (18 mph), so this isn’t a very big deal.

By Public Transport

For most of the year, though, I take public transit. The weather in the Netherlands is notoriously wet and windy, and riding through that for 7 miles can feel like an eternity (to say nothing of arriving at work soggy and winded). Instead, I ride my bike over to the nearby Meent tram stop, about a kilometer from the house. There I pick up Tram 25, a special spur of Amsterdam’s public transit system called the Amstelveenlijn. From Meent, it’s 15 minutes to Amsterdam Zuid, the major transportation hub on the south side of town.

I have about a 5 minute walk from the Zuid tram stop to the Metro station, where I pick up the subway for the last leg of my journey. Line 52 cuts through the center of Amsterdam (several other lines form a ring around the city). I take it a mere 3 stops to Vijzelgracht station, which lets out right at the foot of my office.

From leaving my house on bike to walking in the door is about 40 minutes, and I often sit quietly listening to podcasts, reading the news, or even dozing off along the way.

The OV-Chipkaart

To use public transit in the Netherlands, you must “check in” and “check out” of each vehicle or station that you use. You do this with an OV-chipkaart. The card can be loaded with money for fares, tickets to specific destinations, discount cards, and subscriptions. When you enter a major train or metro station, or step onto any bus or tram in the country, you check in with your card. When you check out at the end of your journey, the correct fare is charged automatically.

The same card is good for journeys on any public transit in the country, in any city, by any provider, company, or authority. One card to rule them all.

OV-chipkaarts can be loaded with money at terminals in many grocery and corner stores, and at major stations across the country. I subscribe to a transit plan with the Amsterdam transit operator, GVB, that allows me unlimited travel within specific districts each month for a flat rate (about €90). I do about 90% of my travel within this area, and the plan saves me about €15 a month over paying full fare for each journey. In the rare case that I travel outside that area, I pay the full fare.

With our basic infra set up and our content published, we now have our blog served as static files from S3 under our own domain name. We’re still missing two things that a modern, cloud native website should have: HTTPS and regional redundancy. Thankfully, AWS CloudFront gives us both. We’ll return to our Terraform code to move forward.

Again, the goal here is to use Cloud Native tools for every step, to the greatest extent possible.

We’ll start with AWS Certificate Manager (“ACM”), which will generate the SSL certificate we need to enable HTTPS. ACM requires you to prove that you control the certificate’s domain name, either by responding to an email or by creating a specific DNS validation record. As luck would have it, we already manage our domain’s DNS via Terraform, so we can automate this entire process.

For starters, we add an acm.tf to our Terraform code.

resource "aws_acm_certificate" "cert" {
  domain_name       = "${var.host_name}.${var.domain_name}"
  validation_method = "DNS"

  tags = {
    "site" = "j.eremy.nl"
  }

  lifecycle {
    create_before_destroy = true
  }
}

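# One DNS validation record for each domain_validation_option returned by ACM.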
resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = aws_route53_zone.eremy_nl.zone_id
}

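# Waits for the DNS records above to propagate and for ACM to issue the certificate.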
resource "aws_acm_certificate_validation" "cert" {
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}

This does three things with three resources. It creates the certificate as aws_acm_certificate.cert and configures it for DNS validation. It then creates the required DNS entries in Route53 via aws_route53_record.cert_validation. Lastly, aws_acm_certificate_validation.cert waits until validation is complete. Once we apply this code, we have an SSL certificate issued for the j.eremy.nl domain name.
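
For reference, var.host_name and var.domain_name are ordinary Terraform variables defined elsewhere in the project. A minimal sketch of what that variables.tf might look like (the defaults shown here are illustrative guesses, not a copy of my actual file):

variable "host_name" {
  description = "Host label for the site, e.g. \"j\""
  type        = string
  default     = "j"
}

variable "domain_name" {
  description = "Apex domain whose zone lives in Route53, e.g. \"eremy.nl\""
  type        = string
  default     = "eremy.nl"
}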

Now it’s time to set up the CloudFront distribution. There’s a lot going on here, and I’m not going to go through it line by line. The important things to know are that it creates the distribution, sets our original website S3 bucket as the origin, and uses our new ACM certificate. For good measure, I enable automatic HTTP-to-HTTPS redirection so that all visitors wind up at the secure site.

resource "aws_cloudfront_distribution" "prod_distribution" {
    origin {
        domain_name = "${var.host_name}.${var.domain_name}.s3.amazonaws.com"
        origin_id = "S3-${var.host_name}.${var.domain_name}"
    }
    aliases = ["${var.host_name}.${var.domain_name}"]
    # By default, show index.html file
    default_root_object = "index.html"
    enabled = true
    # If there is a 404, return index.html with a HTTP 200 Response
    custom_error_response {
        error_caching_min_ttl = 3000
        error_code = 404
        response_code = 200
        response_page_path = "/index.html"
    }
    default_cache_behavior {
        allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
        cached_methods = ["GET", "HEAD"]
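        # Must match the origin_id set in the origin block above. The bucket is
        # named after the site, so both interpolations resolve to the same string.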
        target_origin_id = "S3-${aws_s3_bucket.website_bucket.bucket}"
        # Forward all query strings, cookies and headers
        forwarded_values {
            query_string = true
            cookies {
              forward = "all"
            }
        }
        viewer_protocol_policy = "redirect-to-https"
        min_ttl = 0
        default_ttl = 3600
        max_ttl = 86400
    }
    # Distributes content to US and Europe
    price_class = "PriceClass_100"
    # Restricts who is able to access this content
    restrictions {
        geo_restriction {
            # type of restriction, blacklist, whitelist or none
            restriction_type = "none"
        }
    }
    # SSL certificate for the service.
    viewer_certificate {
      acm_certificate_arn = aws_acm_certificate.cert.arn
      ssl_support_method = "sni-only"
    }

    tags = {
      "site" = "j.eremy.nl"
    }

    depends_on = [
      aws_s3_bucket.website_bucket,
    ]
}

One of the attributes of a CloudFront distribution is its CloudFront domain name. The last thing we need to do is take that and update our j.eremy.nl Route53 record to point to it (the record currently points directly at the S3 bucket).

We change dns.tf to replace aws_route53_record.root with this:

resource "aws_route53_record" "root" {
  zone_id = aws_route53_zone.eremy_nl.zone_id
  name    = "${var.host_name}.${var.domain_name}"
  type    = "A"

  alias {
    name                   = replace(aws_cloudfront_distribution.prod_distribution.domain_name, "/[.]$/", "")
    zone_id                = aws_cloudfront_distribution.prod_distribution.hosted_zone_id
    evaluate_target_health = true
  }

  depends_on = [aws_cloudfront_distribution.prod_distribution]
}

After applying these changes, our site is now fronted by CloudFront, forces HTTPS, and is distributed to CDN edge locations around the globe.
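
If you want the distribution’s details handy after each apply, such as its assigned domain name or the distribution ID used for cache invalidations later on, a small outputs.tf along these lines would do the trick (an optional sketch, not something the setup above requires):

output "cloudfront_domain_name" {
  description = "The *.cloudfront.net domain name assigned to the distribution"
  value       = aws_cloudfront_distribution.prod_distribution.domain_name
}

output "cloudfront_distribution_id" {
  description = "The distribution ID, handy for cache invalidations"
  value       = aws_cloudfront_distribution.prod_distribution.id
}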

While trying to share some posts with colleagues, I noticed earlier today that the site’s tag page was displaying a garbled version of the homepage. Oddly enough, this error does not happen when the page is served directly from S3, nor even from the CloudFront domain. I chased down several rabbit holes on Stack Overflow (as you do).

Interestingly, the page rendered fine if I requested index.html directly, but not if I requested just the path and left index.html implicit. But again, it worked fine on S3 and on the CloudFront domain. While I initially wondered whether this was a CloudFront config issue, it dawned on me that the page is supposed to be rendered by some Chirpy JavaScript that interprets the path, and the only apparent difference (especially when using the CloudFront domain) is in fact the domain name in the request.

This line in the Jekyll config for Chirpy defines the site’s URL, and I had it set to https://j.eremy.nl as you might expect. I changed it to an empty string and now the site works as intended.
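
In _config.yml terms, the fix boils down to a single line (url is Jekyll’s standard site URL setting, which Chirpy reads):

url: ""   # was: url: "https://j.eremy.nl"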

Additionally, I had to configure CloudFront’s origin to be the website URL of the S3 bucket instead of the S3 content URL. It’s a subtle difference, but it mattered in the end.

Here’s the cloudfront.tf diff:

diff --git a/terraform/cloudfront.tf b/terraform/cloudfront.tf
index 4a36293..1ca7b41 100644
--- a/terraform/cloudfront.tf
+++ b/terraform/cloudfront.tf
@@ -1,7 +1,19 @@
 resource "aws_cloudfront_distribution" "prod_distribution" {
     origin {
-        domain_name = "${var.host_name}.${var.domain_name}.s3.amazonaws.com"
+        domain_name = "${var.host_name}.${var.domain_name}.s3-website-us-east-1.amazonaws.com"
         origin_id = "S3-${var.host_name}.${var.domain_name}"
+        custom_origin_config {
+                      http_port                = 80
+                      https_port               = 443
+                      origin_keepalive_timeout = 5
+                      origin_protocol_policy   = "http-only"
+                      origin_read_timeout      = 30
+                      origin_ssl_protocols     = [
+                       "TLSv1",
+                       "TLSv1.1",
+                       "TLSv1.2",
+                      ]
+                    }
     }
     aliases = ["${var.host_name}.${var.domain_name}"]
     # By default, show index.html file

In working to enable the CloudFront distribution and SSL for the site, I found that the styling was not consistent between the homepage and the post pages. Most glaringly, example code blocks were rendering as dark text on a dark background.

After some digging around, I found that a CSS file was not being linked on the homepage. This was easily fixed locally and uploaded to the bucket. Alas, the very nature of a content delivery network is that changes to files can take a very long time to show up on clients, or may never show up at all if the right conditions aren’t met. To force the fix through, I had to figure out which files had changed and invalidate the CloudFront cache for them. Looking back at the deploy output, these were the updated files:

[info] Deploying content/_site/* to j.eremy.nl
[succ] Updated sw.js (application/javascript)
[succ] Updated sitemap.xml (application/xml)
[succ] Updated feed.xml (text/html; charset=utf-8)
[succ] Updated tabs/about.html (text/html; charset=utf-8)
[succ] Updated tabs/archives.html (text/html; charset=utf-8)
[succ] Updated assets/css/home.css (text/css; charset=utf-8)
[succ] Updated assets/css/home.css.map (text/plain; charset=utf-8)
[info] Summary: Updated 7 files. Transferred 215.0 kB, 154.0 kB/s.

Invalidating these paths updated the styling, and code blocks are now legible on the main page.

Ultimately, I added this to my Makefile:

invalidate:
	AWS_PAGER="" aws cloudfront create-invalidation \
		--distribution-id E1B35UBTOYE07S \
		--paths '/*' --output yaml

and added invalidate to the all target.
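
For completeness, I’d expect the all target to now read something like this (inferred from the description above rather than copied from the actual Makefile):

all: build push invalidate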

Following up from this post. As noted, I use Jekyll for content management. The documentation for Jekyll is quite nice and you can see how to create and write posts here.

My general format is quite simple. As an example, this post up until this point looks like this:

---
title: "A Cloud Native j.eremy: Content"
date: 2020-12-05 12:00:00+0100
---

Following up from [this post][1]. As noted, I use [Jekyll][2] for content
management. The documentation for Jekyll is quite nice and you can see how to
create and write posts [here][3].

My general format is quite simple. As an example, this post up until this point
looks like this:

I have a quick and dirty Makefile that allows me to perform the most common operations:

  • make serve starts Jekyll’s local preview server so I can preview my changes.
  • make build generates the HTML static site files.
  • make push uses s3_website to upload the site to S3.
  • make all runs both build and push.

all: build push

serve:
	jekyll serve --future

build:
	jekyll build

push:
	s3_website push

You could just as well use the AWS CLI to upload, or any of a billion other tools that do the job.

My last task is to push my changes to GitHub, so I add the new post with git add and then commit and push. My ultimate goal is to not need to run Jekyll locally: I’ll push to GitHub, then have a system such as Codefresh automatically run make all and publish the changes for me. We’ll get to that later.