An Introduction to Terragrunt

Level 200 technical writeup; moderate knowledge of Terraform is expected

Nick Doyle
4 min read · Sep 29, 2021

Too late in my use of Terraform did I come across Terragrunt, a third-party wrapper around Terraform to make working with it easier and more effective.

Terragrunt’s primary stated purpose is to keep your TF code DRY. In this short writeup, however, I’ll explain the two benefits I actually valued most.

Auto Backend Provision & Init

With TF there’s a “chicken and egg” problem with the backend: you need a backend to store state, but how do you provision that backend (as code) when your IaC tool (Terraform) does not have a backend yet …

People solve this in different ways; some examples

  • Makefiles or scripts with targets / functions
  • CloudFormation (CFN) to initially deploy the backend S3 bucket and DynamoDB lock table
  • Possibly then committing your .terraform/terraform.tfstate to your repo (this begins to smell, right? …)

Apart from meaning that everyone solves this differently (lack of standardization), it also means backends are Toil, the result being that people either

  1. Expend more effort on undifferentiated tasks (setting up the backend), or
  2. Share backends between environments using TF Workspaces, possibly in a shared services account — something I recommend avoiding primarily for security and operational reasons.

With Terragrunt, I can simply put this in my terragrunt.hcl

remote_state {
  backend = "s3"
  config = {
    bucket         = "${get_aws_account_id()}-tf-backend"
    region         = "ap-southeast-2"
    key            = "terraform.tfstate"
    encrypt        = true
    dynamodb_table = "${get_aws_account_id()}-tf-locktable"
  }
}

And partially configure the backend in main.tf

terraform {
  backend "s3" {}
}

And so long as I’m currently authenticated to the target AWS account with permissions to create S3 and DynamoDB resources, it’ll auto-provision and init:

$ terragrunt apply
Remote state S3 bucket 747843067444-nickdoyle-tf-backend does not exist or you don't have permissions to access it. Would you like Terragrunt to create it? (y/n) y

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v3.60.0...
- Installed hashicorp/aws v3.60.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

<... snip rest of terraform apply run ...>

Standardized, Simple Per-Environment Configuration

It’s a very common use case with IaC to have different environments, e.g. dev/stage/prod, each in its own AWS account, with the same architecture but different configurations, e.g.

  • the dev env uses 1 x t3.micro instance for an ETL worker, whereas prod has 3 x m5.large
  • dev DynamoDB tables have lower provisioned throughput, while prod ones have higher

With standard Terraform this is done using variables: we define e.g. dev.tfvars and prod.tfvars, each with environment-specific values for TF variables (these can also be supplied as env vars, but that gets messy quickly), and run terraform apply -var-file=dev.tfvars
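
As a rough sketch, the per-environment tfvars files for the ETL worker example above might look like so (the variable names here are my own illustration, not from the demo repo)

# dev.tfvars
etl_worker_instance_type  = "t3.micro"
etl_worker_instance_count = 1

# prod.tfvars
etl_worker_instance_type  = "m5.large"
etl_worker_instance_count = 3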

To run the apply with the correct variables for the target environment, we’ll normally craft a Makefile or shell script that determines the current environment (e.g. by calling aws sts get-caller-identity or aws configure get region), then runs terraform apply with -var-file=<relevant tfvars file>

A few downsides with this approach

  • Everyone does it differently
  • It’s very undifferentiated
  • Can be surprisingly heavy lifting
    (especially in Makefiles, which I think many of us love/hate; getting/setting vars in global scope or within Makefile targets is a PITA)

Terragrunt neatly solves this by allowing you to include other Terragrunt files based on values such as the current AWS account ID.
Within these included files, one can then define env-specific input variables. Let’s say our TF variables.tf looks like so

variable "dynamodb_demo_read_capacity" {
type = number
}
variable "dynamodb_demo_write_capacity" {
type = number
}
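
For context, a minimal sketch of a table resource that could consume these variables might look like so (the table name, hash key and attribute here are my own illustration, not necessarily what the demo code uses)

resource "aws_dynamodb_table" "demo" {
  name           = "demo"
  billing_mode   = "PROVISIONED"
  hash_key       = "id"
  read_capacity  = var.dynamodb_demo_read_capacity
  write_capacity = var.dynamodb_demo_write_capacity

  attribute {
    name = "id"
    type = "S"
  }
}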

To our top-level terragrunt.hcl I’ve added a block

include {
  path = "env/${get_aws_account_id()}.hcl"
}

And created env/123456789012.hcl with an inputs block that sets values for those input variables

inputs = {
  dynamodb_demo_read_capacity  = 5
  dynamodb_demo_write_capacity = 5
}

Assuming I’m currently authenticated to AWS account ID 123456789012, those input values will be used when I run terragrunt apply. No env detection or -var-file argument required.
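
For reference, assembling the snippets above, the full top-level terragrunt.hcl would look roughly like so (see the demo repo linked below for the actual working version)

remote_state {
  backend = "s3"
  config = {
    bucket         = "${get_aws_account_id()}-tf-backend"
    region         = "ap-southeast-2"
    key            = "terraform.tfstate"
    encrypt        = true
    dynamodb_table = "${get_aws_account_id()}-tf-locktable"
  }
}

include {
  path = "env/${get_aws_account_id()}.hcl"
}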

Conclusion

In this post I’ve covered two of my favourite things about Terragrunt.

To be honest, these two were big enough wins for my productivity with Terraform that I haven’t explored Terragrunt further since, but I suspect it has even more to offer.

Source for the demo code in this post: https://github.com/rdkls/terragrunt-demo

Please feel free to reach out with any thoughts or suggestions on using either Terraform or Terragrunt; I’d love to hear from you. Happy Terraforming!

