I recently deployed my portfolio website (this website) to AWS Amplify. The site is built with Next.js. Since Amplify’s UI is intuitive, the deployment process itself was fairly straightforward.
However, there was one issue that took me a while to figure out and is worth writing about: environment variables.
Environment Variables in Next.js on Amplify
When you deploy a Next.js app on AWS Amplify, the process looks roughly like this:
- Amplify runs a build step, similar to when you run `npm run build` locally.
- The output of that build is what gets published and served to users.
- When users later visit your site in the browser, the built output is served. This stage is called runtime.
When running your app locally with `npm run dev`, Next.js automatically loads environment variables from `.env.local` and makes them available via `process.env`.
When deploying on Amplify, you can also define environment variables using its UI. However, those variables are only available at build time.
At runtime, they are no longer available because the build process has already finished.
This is actually mentioned in the Amplify UI, but I somehow missed it, which cost me about an hour of debugging.
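The build-time vs. runtime distinction is easy to reproduce in a plain shell: a variable exported in the “build” shell simply doesn’t exist for a fresh “runtime” process. This is only a loose analogy for what Amplify does, and the variable name below is made up:

```shell
# Hypothetical variable, standing in for one defined in the Amplify UI.
export AMPLIFY_ONLY_VAR="set-in-amplify-ui"

# "Build time": the current shell sees the variable.
echo "build time: [$AMPLIFY_ONLY_VAR]"

# "Runtime": env -i starts a process with an empty environment,
# analogous to the server process after the build has finished.
env -i sh -c 'echo "runtime:    [$AMPLIFY_ONLY_VAR]"'
```

The second line prints an empty value: the new process never inherited the variable, just as the runtime never sees what existed only during the build.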
Making Environment Variables Available at Runtime
So what if you need environment variables at runtime?
The solution is to copy them into a runtime environment file during the build step.
Amplify allows you to customize the build process via the `amplify.yml` file. By adding the following line before `next build`, you can persist selected environment variables:
```
env | grep -e YOUR_ENVIRONMENT_VARIABLE >> .env.production
```
This command:
- Reads all environment variables available at build time
- Keeps only the one(s) you care about
- Appends them to `.env.production`, which Next.js can read at runtime
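In context, the customized `amplify.yml` might look roughly like this sketch. The surrounding structure follows Amplify’s usual build spec for a Next.js app, and `YOUR_ENVIRONMENT_VARIABLE` is still a placeholder:

```yaml
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        # Persist selected build-time variables for runtime use.
        - env | grep -e YOUR_ENVIRONMENT_VARIABLE >> .env.production
        - npm run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```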
Moving My Domain to Route 53
After deploying the app, I needed to point my domain, ledminh.dev, to the new site. At the time, it was still pointing to an older version of my website.
Following AWS’s recommendation, I created a hosted zone in Route 53 and updated my domain registrar to use Route 53’s name servers.
Once everything propagated, I noticed something interesting in the Route 53 UI:
- One row with:
  - Record name: ledminh.dev
  - Type: A
  - Value: a CloudFront distribution
- Another row with:
  - Record name: ledminh.dev
  - Type: NS
  - Value: four name server addresses
This sparked my curiosity. I asked ChatGPT about it, and after a long (and very educational) discussion, I finally built a clear mental model of how DNS resolution actually works.
What Happens When You Visit ledminh.dev
When I type "ledminh.dev" into my browser’s address bar and press Enter, here’s what happens.
First, the browser asks the operating system for the IP address of ledminh.dev. The OS handles this using a built-in component called the stub resolver.
The stub resolver is a minimal DNS client built into the OS networking stack. It does not resolve domain names by itself. Instead, it forwards DNS queries to configured recursive DNS resolvers.
On macOS, you can inspect this configuration by running:
```
scutil --dns
```
This shows the IP addresses of:
- Your local router (usually acting as a DNS forwarder)
- Your ISP’s recursive DNS resolver (Spectrum, Xfinity, AT&T, etc.)
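On Linux there’s no `scutil`, but in a traditional setup the stub resolver’s upstream servers are listed in `/etc/resolv.conf` (on systems using systemd-resolved, this file may instead point at a local stub on 127.0.0.53):

```shell
# Show the recursive resolvers the stub resolver forwards queries to.
cat /etc/resolv.conf
```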
The stub resolver forwards the query to a recursive DNS resolver operated by the ISP. This is where the real DNS work begins.
First, the recursive resolver queries a root DNS server: “Who is responsible for .dev domains?”
Root servers don’t know anything about ledminh.dev. They only know which servers manage top-level domains.
The root server replies with a list of .dev TLD servers.
TLD stands for Top-Level Domain, such as `.com`, `.org`, or `.dev`.
The .dev TLD is operated by Google, acting as the registry. This means Google maintains the authoritative list of which name servers are responsible for each .dev domain.
The recursive resolver then asks a .dev TLD server: “Who is authoritative for ledminh.dev?”
The TLD server responds with a list of four Route 53 name servers, the same ones I saw in the second row of the Route 53 UI.
Next, the recursive resolver queries one of those Route 53 name servers.
Route 53 looks into its hosted zone and finds an A record (Alias) that points ledminh.dev to a CloudFront distribution. This corresponds to the first row I saw in the Route 53 UI.
AWS then resolves the CloudFront distribution internally and returns IP addresses for a nearby CloudFront edge location.
The IP address is returned through the chain:
- Route 53 → recursive resolver
- Recursive resolver caches the result
- Recursive resolver → OS stub resolver
- Stub resolver → browser
At this point, DNS resolution is complete.
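The referral chain above can be caricatured in a few lines of shell. Everything here is hardcoded for illustration; a real recursive resolver performs each of these steps over the network:

```shell
# Toy model of recursive resolution: each "server" only knows
# whom to ask next, mirroring the referral chain described above.
ask_root()    { echo "ask the .dev TLD servers"; }
ask_tld()     { echo "ask the Route 53 name servers for $1"; }
ask_route53() { echo "$1 -> CloudFront edge IP (via the Alias A record)"; }

domain="ledminh.dev"
echo "root:     $(ask_root)"
echo "TLD:      $(ask_tld "$domain")"
echo "Route 53: $(ask_route53 "$domain")"
```

Each “server” returns either a referral (ask someone closer to the answer) or the final record, which is exactly why the root never needs to know about individual domains.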
Now that the browser has an IP address, it will open a TCP connection, perform a TLS (HTTPS) handshake, and send an HTTPS request to CloudFront. CloudFront serves cached content or fetches it from the origin. But that’s it for today. Maybe another day I’ll dig deeper into what happens during the HTTPS handshake and content delivery phase.