IaC
Terraform
Infrastructure provisioning with HashiCorp Configuration Language - providers, modules, state management, and best practices
Terraform is an infrastructure-as-code tool from HashiCorp that lets you define cloud and on-premises resources in declarative configuration files written in HashiCorp Configuration Language (HCL). It manages the lifecycle of resources through plan, apply, and destroy workflows.
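The declarative model is easiest to see in a minimal end-to-end configuration. This is a sketch, not part of the project described below; the bucket name is a placeholder:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Declare the desired end state; plan/apply converges real
# infrastructure to match this description.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # placeholder; bucket names must be globally unique
}
```

Running `terraform plan` previews the bucket as an addition, `terraform apply` creates it, and `terraform destroy` removes it.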
Core Concepts
| Concept | Description |
|---|---|
| Provider | Plugin that interacts with APIs (AWS, GCP, Azure, Kubernetes, etc.) |
| Resource | Infrastructure object managed by Terraform (EC2 instance, S3 bucket) |
| Data Source | Read-only reference to existing infrastructure |
| Module | Reusable group of resources |
| State | JSON file tracking managed resources and their current state |
| Plan | Preview of changes before applying |
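The Data Source row can be illustrated with a short sketch: looking up the latest Amazon Linux 2023 AMI instead of hardcoding an ID. The filter values and resource names are illustrative:

```hcl
# Read-only lookup: Terraform queries AWS but does not manage this AMI.
data "aws_ami" "al2023" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

# Reference a data source like any resource attribute.
resource "aws_instance" "web" {
  ami           = data.aws_ami.al2023.id
  instance_type = "t3.micro"
}
```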
Project Structure
infrastructure/
├── main.tf # Root module entry point
├── variables.tf # Input variable declarations
├── outputs.tf # Output value declarations
├── providers.tf # Provider configuration
├── terraform.tfvars # Variable values (gitignored for secrets)
├── backend.tf # Remote state configuration
├── modules/
│ ├── networking/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── outputs.tf
│ ├── compute/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── outputs.tf
│ └── database/
│ ├── main.tf
│ ├── variables.tf
│ └── outputs.tf
└── environments/
├── staging/
│ ├── main.tf
│ └── terraform.tfvars
└── production/
├── main.tf
    └── terraform.tfvars
Provider Configuration
# providers.tf
terraform {
required_version = ">= 1.7.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
provider "aws" {
region = var.aws_region
default_tags {
tags = {
Environment = var.environment
ManagedBy = "terraform"
Project = var.project_name
}
}
}
Resource Examples
VPC and Networking
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = { Name = "${var.project_name}-vpc" }
}
resource "aws_subnet" "public" {
count = length(var.availability_zones)
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
availability_zone = var.availability_zones[count.index]
map_public_ip_on_launch = true
tags = { Name = "${var.project_name}-public-${count.index}" }
}
resource "aws_subnet" "private" {
count = length(var.availability_zones)
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 100)
availability_zone = var.availability_zones[count.index]
tags = { Name = "${var.project_name}-private-${count.index}" }
}
ECS Fargate Service
resource "aws_ecs_cluster" "main" {
name = "${var.project_name}-cluster"
setting {
name = "containerInsights"
value = "enabled"
}
}
resource "aws_ecs_task_definition" "app" {
family = "${var.project_name}-app"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = 512
memory = 1024
execution_role_arn = aws_iam_role.ecs_execution.arn
task_role_arn = aws_iam_role.ecs_task.arn
container_definitions = jsonencode([{
name = "app"
image = "${var.ecr_repository_url}:${var.image_tag}"
essential = true
portMappings = [{
containerPort = 3000
protocol = "tcp"
}]
environment = [
{ name = "NODE_ENV", value = "production" },
{ name = "PORT", value = "3000" },
]
secrets = [
{ name = "DATABASE_URL", valueFrom = aws_ssm_parameter.db_url.arn },
]
logConfiguration = {
logDriver = "awslogs"
options = {
"awslogs-group" = aws_cloudwatch_log_group.app.name
"awslogs-region" = var.aws_region
"awslogs-stream-prefix" = "app"
}
}
}])
}
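The task definition above references `aws_cloudwatch_log_group.app`, `aws_iam_role.ecs_execution`, `aws_iam_role.ecs_task`, and `aws_ssm_parameter.db_url`, which are not shown here. A minimal sketch of the log group (the retention period is an assumption):

```hcl
resource "aws_cloudwatch_log_group" "app" {
  name              = "/ecs/${var.project_name}-app"
  retention_in_days = 30 # assumed retention; adjust per environment
}
```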
resource "aws_ecs_service" "app" {
name = "${var.project_name}-app"
cluster = aws_ecs_cluster.main.id
task_definition = aws_ecs_task_definition.app.arn
desired_count = var.app_count
launch_type = "FARGATE"
network_configuration {
subnets = aws_subnet.private[*].id
security_groups = [aws_security_group.app.id]
assign_public_ip = false
}
load_balancer {
target_group_arn = aws_lb_target_group.app.arn
container_name = "app"
container_port = 3000
}
}
State Management
Remote State (S3 Backend)
# backend.tf
terraform {
backend "s3" {
bucket = "myproject-terraform-state"
key = "production/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "terraform-lock"
encrypt = true
}
}
Modules
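Writing a Module
A module is just a directory of `.tf` files with its own variables and outputs. A hedged sketch of what `modules/database` might contain to satisfy the call below; the resource arguments and names are illustrative, not the actual module:

```hcl
# modules/database/variables.tf
variable "project_name"      { type = string }
variable "environment"       { type = string }
variable "vpc_id"            { type = string }
variable "subnet_ids"        { type = list(string) }
variable "instance_class"    { type = string }
variable "engine_version"    { type = string }
variable "allocated_storage" { type = number }

# modules/database/main.tf
resource "aws_db_subnet_group" "this" {
  name       = "${var.project_name}-${var.environment}"
  subnet_ids = var.subnet_ids
}

resource "aws_db_instance" "this" {
  identifier           = "${var.project_name}-${var.environment}"
  engine               = "postgres"
  engine_version       = var.engine_version
  instance_class       = var.instance_class
  allocated_storage    = var.allocated_storage
  db_subnet_group_name = aws_db_subnet_group.this.name
  skip_final_snapshot  = true # assumption; keep final snapshots in production
}

# modules/database/outputs.tf
output "endpoint" {
  value = aws_db_instance.this.endpoint
}
```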
Using a Module
module "database" {
source = "./modules/database"
project_name = var.project_name
environment = var.environment
vpc_id = aws_vpc.main.id
subnet_ids = aws_subnet.private[*].id
instance_class = "db.t3.medium"
engine_version = "16.2"
allocated_storage = 50
}
output "database_endpoint" {
value = module.database.endpoint
}
Common Commands
# Initialize (download providers and modules)
terraform init
# Format code
terraform fmt -recursive
# Validate configuration
terraform validate
# Plan changes (preview)
terraform plan -out=tfplan
# Apply changes
terraform apply tfplan
# Destroy all resources
terraform destroy
# Import existing resource
terraform import aws_s3_bucket.example my-bucket-name
# Show current state
terraform state list
terraform state show aws_ecs_service.app
# Force resource recreation (taint is deprecated since v0.15.2;
# prefer: terraform apply -replace=ADDRESS)
terraform taint aws_ecs_task_definition.app
Best Practices
Terraform Guidelines
- Remote state: Always use remote state with locking (S3 + DynamoDB, Terraform Cloud)
- Modules: Extract reusable patterns into modules; use versioned module sources
- Environments: Use workspaces or separate state files per environment
- Variables: Never hardcode secrets; use variables with `sensitive = true`
- Plan before apply: Always review `terraform plan` output before applying
- Small changes: Make incremental changes; avoid large refactors in a single apply
- Tagging: Tag all resources with project, environment, and `ManagedBy = terraform`
- CI/CD: Run `terraform plan` in CI on PRs, `terraform apply` on merge to main
- State locking: Enable state locking to prevent concurrent modifications
- Version pinning: Pin provider and module versions to avoid unexpected changes
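The sensitive-variables guideline can be sketched as follows (the variable name and length rule are illustrative):

```hcl
variable "db_password" {
  type      = string
  sensitive = true # redacted in plan/apply output

  validation {
    condition     = length(var.db_password) >= 16
    error_message = "db_password must be at least 16 characters."
  }
}
```

Note that Terraform still records sensitive values in the state file, so encrypted remote state remains essential.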