Managing Terraform Environments for Consistent Infrastructure
Note: This blog post was enhanced with the help of AI to improve grammar and refine tone. All content and opinions are my own.
So here’s a gem I picked up from a colleague—shoutout to you, wise infra wizard 👋. It’s a neat Terraform pattern for managing multiple environments that actually keeps your sanity intact. If you’ve ever tried juggling QA, Staging, Sandbox, and Production and wondered, “Why is staging acting like a gremlin again?”—this post is for you.
The goal? Consistent infrastructure across environments, with just enough room to breathe for differences like instance sizes or bucket names. You want your environments to mirror production closely enough that testing isn’t just ceremonial. Let’s dive into three strategies I’ve used (and abused), each with examples and a reality check.
❌ Option 1: The Copy-Pasta Folder Explosion (Not Recommended)
You start simple. “I’ll just make a folder per environment. How bad could it be?” Famous last words.
infra/
  qa/
    main.tf
  staging/
    main.tf
  production/
    main.tf
Pros:
- Each environment is blissfully isolated. No chance of cross-contamination.
Cons:
- You’re now the human compiler for infra consistency.
- Tiny drift becomes big drama. “Wait, why is there no load balancer in QA?!”
- Copy-paste errors sneak in like raccoons into open trash bins.
Example:
# infra/qa/main.tf
resource "aws_s3_bucket" "example" {
  bucket = "example-qa-bucket"
}

# infra/staging/main.tf
resource "aws_s3_bucket" "example" {
  bucket = "example-staging-bucket"

  # Inline versioning works on AWS provider v3;
  # v4+ moved this to the aws_s3_bucket_versioning resource.
  versioning {
    enabled = true
  }
}
Now good luck figuring out why your integration tests pass in QA but blow up in staging. 🍀
⚖️ Option 2: The Almighty Shared Module (Solid Middle Ground)
This is the Terraform version of “don’t repeat yourself.” Create a central module, parameterize everything you can, and reuse it across environments.
infra/
  modules/
    infrastructure/
      main.tf
  qa/
    main.tf
  staging/
    main.tf
  production/
    main.tf
Example:
# modules/infrastructure/main.tf
resource "aws_s3_bucket" "example" {
  bucket = var.bucket_name
}

# qa/main.tf
module "infra" {
  source      = "../modules/infrastructure"
  bucket_name = "example-qa-bucket"
}

# staging/main.tf
module "infra" {
  source      = "../modules/infrastructure"
  bucket_name = "example-staging-bucket"
}
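One thing the module snippet glosses over: `var.bucket_name` has to be declared inside the module before Terraform will accept it. A minimal sketch of what that declaration might look like (the file name, description, and validation rule are my own additions, not from the original layout):

```
# modules/infrastructure/variables.tf
variable "bucket_name" {
  description = "Name of the S3 bucket for this environment"
  type        = string

  validation {
    condition     = length(var.bucket_name) > 0
    error_message = "bucket_name must not be empty."
  }
}
```

Declaring the variable with a type and validation also means a caller that forgets `bucket_name`, or passes garbage, fails at plan time rather than at apply.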
Pros:
- Reuse is great. Consistency is great.
- Easier to enforce structure than Option 1.
Cons:
- People can still sneak in one-off resources like “just a little cache” outside the module.
- Configs are spread across folders like peanut butter on too much toast.
✅ Option 3: One Codebase, Many Faces (Recommended)
Okay, this one’s my current favorite. Use Terraform workspaces to switch environments, and shove all the env-specific config into a single locals block. No duplication. No folder zoo.
infra/
  main.tf
  variables.tf
  locals.tf
Fire up your environments with:
terraform workspace new qa
terraform workspace new staging
terraform workspace new production
terraform workspace select production
Example:
# locals.tf
locals {
  config = tomap({
    qa = {
      bucket_name = "example-qa-bucket"
    }
    staging = {
      bucket_name = "example-staging-bucket"
    }
    production = {
      bucket_name = "example-prod-bucket"
    }
  })

  current_env = terraform.workspace
  env_config  = local.config[local.current_env]
}
# main.tf
resource "aws_s3_bucket" "example" {
  bucket = local.env_config.bucket_name
}
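The same map works for knobs beyond names. Here's a sketch, with the `versioning_enabled` flag and the `aws_s3_bucket_versioning` wiring being my own illustration (not part of the original config), of how Option 1's "staging has versioning, QA doesn't" drift becomes an explicit, reviewable setting:

```
# locals.tf (extended with a per-environment feature flag)
locals {
  config = {
    qa         = { bucket_name = "example-qa-bucket", versioning_enabled = false }
    staging    = { bucket_name = "example-staging-bucket", versioning_enabled = true }
    production = { bucket_name = "example-prod-bucket", versioning_enabled = true }
  }

  env_config = local.config[terraform.workspace]
}

# main.tf
resource "aws_s3_bucket" "example" {
  bucket = local.env_config.bucket_name
}

# Versioning only exists where the flag says so (AWS provider v4+).
resource "aws_s3_bucket_versioning" "example" {
  count  = local.env_config.versioning_enabled ? 1 : 0
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    status = "Enabled"
  }
}
```

A nice side effect: the `local.config[terraform.workspace]` lookup fails loudly at plan time if someone runs from a workspace that isn't in the map (including the built-in `default`), instead of silently creating misnamed resources.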
Pros:
- Your infrastructure looks the same everywhere. 🎯
- A single place to see every environment difference, side by side.
- Code is lean, clean, and blessed by the DevOps gods.
Cons:
- Slight learning curve for new teammates. (“What do you mean there’s only one folder?”)
Final Thoughts
I’ve tried all three. Broke things with the first. Survived the second. Now I’m thriving with the third.
Workspaces plus centralized config give you that sweet spot between flexibility and control. Your infra doesn’t surprise you anymore. And your environments start to feel less like siblings raised in different households.
Stick to one codebase, centralize your configs, and keep those environments honest.
Happy Terraforming, and may your state files stay locked 🔒