Infrastructure Patterns
terraform-patterns
by alirezarezvani
Terraform infrastructure-as-code agent skill and plugin for Claude Code, Codex, Gemini CLI, Cursor, and OpenClaw. Covers module design patterns, state management strategies, provider configuration, security hardening, policy-as-code with Sentinel/OPA, and CI/CD plan/apply workflows. Use when the user wants to design Terraform modules, manage state backends, review Terraform security, implement multi-region deployments, or follow IaC best practices.
Installation

```
claude skill add --url github.com/openclaw/skills/tree/main/skills/alirezarezvani/terraform-patterns
```

Documentation
Terraform Patterns
Predictable infrastructure. Secure state. Modules that compose. No drift.
Opinionated Terraform workflow that turns sprawling HCL into well-structured, secure, production-grade infrastructure code. Covers module design, state management, provider patterns, security hardening, and CI/CD integration.
Not a Terraform tutorial — a set of concrete decisions about how to write infrastructure code that doesn't break at 3 AM.
Slash Commands
| Command | What it does |
|---|---|
| /terraform:review | Analyze Terraform code for anti-patterns, security issues, and structure problems |
| /terraform:module | Design or refactor a Terraform module with proper inputs, outputs, and composition |
| /terraform:security | Audit Terraform code for security vulnerabilities, secrets exposure, and IAM misconfigurations |
When This Skill Activates
Recognize these patterns from the user:
- "Review this Terraform code"
- "Design a Terraform module for..."
- "My Terraform state is..."
- "Set up remote state backend"
- "Multi-region Terraform deployment"
- "Terraform security review"
- "Module structure best practices"
- "Terraform CI/CD pipeline"
- Any request involving: .tf files, HCL, Terraform modules, state management, provider configuration, infrastructure-as-code
If the user has .tf files or wants to provision infrastructure with Terraform → this skill applies.
Workflow
/terraform:review — Terraform Code Review

1. Analyze current state
   - Read all .tf files in the target directory
   - Identify module structure (flat vs nested)
   - Count resources, data sources, variables, outputs
   - Check naming conventions

2. Apply review checklist

   ```
   MODULE STRUCTURE
   ├── Variables have descriptions and type constraints
   ├── Outputs expose only what consumers need
   ├── Resources use consistent naming: {provider}_{type}_{purpose}
   ├── Locals used for computed values and DRY expressions
   └── No hardcoded values — everything parameterized or in locals

   STATE & BACKEND
   ├── Remote backend configured (S3, GCS, Azure Blob, Terraform Cloud)
   ├── State locking enabled (DynamoDB for S3, native for others)
   ├── State encryption at rest enabled
   ├── No secrets stored in state (or state access is restricted)
   └── Workspaces or directory isolation for environments

   PROVIDERS
   ├── Version constraints use pessimistic operator: ~> 5.0
   ├── required_providers block inside the terraform {} block
   ├── Provider aliases for multi-region or multi-account
   └── No provider configuration in child modules

   SECURITY
   ├── No hardcoded secrets, keys, or passwords
   ├── IAM follows least-privilege principle
   ├── Encryption enabled for storage, databases, secrets
   ├── Security groups are not overly permissive (no 0.0.0.0/0 ingress on sensitive ports)
   └── Sensitive variables marked with sensitive = true
   ```

3. Generate report

   ```bash
   python3 scripts/tf_module_analyzer.py ./terraform
   ```

4. Run security scan

   ```bash
   python3 scripts/tf_security_scanner.py ./terraform
   ```
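A fragment that would pass the MODULE STRUCTURE checks above: computed values live in locals, and resource arguments are parameterized rather than hardcoded. This is a minimal sketch — the variable names (`var.project`, `var.environment`) are illustrative and assumed to be declared in variables.tf:

```hcl
locals {
  # DRY naming convention reused by every resource in the module
  name_prefix = "${var.project}-${var.environment}"

  common_tags = {
    Project     = var.project
    Environment = var.environment
  }
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "${local.name_prefix}-artifacts" # parameterized, not hardcoded
  tags   = local.common_tags
}
```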
/terraform:module — Module Design
1. Identify module scope
   - Single responsibility: one module = one logical grouping
   - Determine inputs (variables), outputs, and resource boundaries
   - Decide: flat module (single directory) vs nested (calling child modules)

2. Apply module design checklist

   ```
   STRUCTURE
   ├── main.tf      — Primary resources
   ├── variables.tf — All input variables with descriptions and types
   ├── outputs.tf   — All outputs with descriptions
   ├── versions.tf  — terraform {} block with required_providers
   ├── locals.tf    — Computed values and naming conventions
   ├── data.tf      — Data sources (if any)
   └── README.md    — Usage examples and variable documentation

   VARIABLES
   ├── Every variable has: description, type, validation (where applicable)
   ├── Sensitive values marked: sensitive = true
   ├── Defaults provided for optional settings
   ├── Use object types for related settings: variable "config" { type = object({...}) }
   └── Validate with: validation { condition = ... }

   OUTPUTS
   ├── Output IDs, ARNs, endpoints — things consumers need
   ├── Include description on every output
   ├── Mark sensitive outputs: sensitive = true
   └── Don't output entire resources — only specific attributes

   COMPOSITION
   ├── Root module calls child modules
   ├── Child modules never call other child modules
   ├── Pass values explicitly — no hidden data source lookups in child modules
   ├── Provider configuration only in root module
   └── Use module "name" { source = "./modules/name" }
   ```

3. Generate module scaffold
   - Output file structure with boilerplate
   - Include variable validation blocks
   - Add lifecycle rules where appropriate
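The VARIABLES rules in the checklist above can be sketched as a variables.tf fragment. Names and allowed values are illustrative, not prescribed by the skill:

```hcl
# Related settings grouped into one object type
variable "db_config" {
  description = "Database settings grouped as one object"
  type = object({
    engine         = string
    instance_class = string
    storage_gb     = number
  })
}

# Secrets are marked sensitive so plan/apply output redacts them
variable "db_password" {
  description = "Master password for the database"
  type        = string
  sensitive   = true
}

# Validation rejects bad input at plan time instead of at apply time
variable "environment" {
  description = "Deployment environment"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be dev, staging, or prod."
  }
}
```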
/terraform:security — Security Audit
1. Code-level audit

   | Check | Severity | Fix |
   |---|---|---|
   | Hardcoded secrets in .tf files | Critical | Use variables with sensitive = true or a vault |
   | IAM policy with * actions | Critical | Scope to specific actions and resources |
   | Security group with 0.0.0.0/0 on port 22/3389 | Critical | Restrict to known CIDR blocks or use SSM/bastion |
   | S3 bucket without encryption | High | Add server_side_encryption_configuration block |
   | S3 bucket with public access | High | Add aws_s3_bucket_public_access_block |
   | RDS without encryption | High | Set storage_encrypted = true |
   | RDS publicly accessible | High | Set publicly_accessible = false |
   | CloudTrail not enabled | Medium | Add aws_cloudtrail resource |
   | Missing prevent_destroy on stateful resources | Medium | Add lifecycle { prevent_destroy = true } |
   | Variables without sensitive = true for secrets | Medium | Add sensitive = true to secret variables |

2. State security audit

   | Check | Severity | Fix |
   |---|---|---|
   | Local state file | Critical | Migrate to remote backend with encryption |
   | Remote state without encryption | High | Enable encryption on backend (SSE-S3, KMS) |
   | No state locking | High | Enable DynamoDB for S3, native for TF Cloud |
   | State accessible to all team members | Medium | Restrict via IAM policies or TF Cloud teams |

3. Generate security report

   ```bash
   python3 scripts/tf_security_scanner.py ./terraform
   python3 scripts/tf_security_scanner.py ./terraform --output json
   ```
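Several of the RDS findings in the table above combine into one resource shape. A minimal sketch, with identifier and sizing values chosen purely for illustration:

```hcl
resource "aws_db_instance" "main" {
  identifier        = "app-db" # illustrative name
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "app"
  password          = var.db_password # assumed sensitive variable

  storage_encrypted   = true  # High: encryption at rest
  publicly_accessible = false # High: no public endpoint

  lifecycle {
    prevent_destroy = true # Medium: guard stateful resources from accidental deletion
  }
}
```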
Tooling
scripts/tf_module_analyzer.py
CLI utility for analyzing Terraform directory structure and module quality.
Features:
- Resource and data source counting
- Variable and output analysis (missing descriptions, types, validation)
- Naming convention checks
- Module composition detection
- File structure validation
- JSON and text output
Usage:
```bash
# Analyze a Terraform directory
python3 scripts/tf_module_analyzer.py ./terraform

# JSON output
python3 scripts/tf_module_analyzer.py ./terraform --output json

# Analyze a specific module
python3 scripts/tf_module_analyzer.py ./modules/vpc
```
scripts/tf_security_scanner.py
CLI utility for scanning .tf files for common security issues.
Features:
- Hardcoded secret detection (AWS keys, passwords, tokens)
- Overly permissive IAM policy detection
- Open security group detection (0.0.0.0/0 on sensitive ports)
- Missing encryption checks (S3, RDS, EBS)
- Public access detection (S3, RDS, EC2)
- Sensitive variable audit
- JSON and text output
Usage:
```bash
# Scan a Terraform directory
python3 scripts/tf_security_scanner.py ./terraform

# JSON output
python3 scripts/tf_security_scanner.py ./terraform --output json

# Strict mode (elevate warnings)
python3 scripts/tf_security_scanner.py ./terraform --strict
```
Module Design Patterns
Pattern 1: Flat Module (Small/Medium Projects)
```
infrastructure/
├── main.tf           # All resources
├── variables.tf      # All inputs
├── outputs.tf        # All outputs
├── versions.tf       # Provider requirements
├── terraform.tfvars  # Environment values (not committed)
└── backend.tf        # Remote state configuration
```
Best for: Single application, < 20 resources, one team owns everything.
Pattern 2: Nested Modules (Medium/Large Projects)
```
infrastructure/
├── environments/
│   ├── dev/
│   │   ├── main.tf           # Calls modules with dev params
│   │   ├── backend.tf        # Dev state backend
│   │   └── terraform.tfvars
│   ├── staging/
│   │   └── ...
│   └── prod/
│       └── ...
├── modules/
│   ├── networking/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── compute/
│   │   └── ...
│   └── database/
│       └── ...
└── versions.tf
```
Best for: Multiple environments, shared infrastructure patterns, team collaboration.
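In this layout, an environment's root module wires the child modules together with explicit values, per the COMPOSITION rules. A sketch — module paths and the variable/output names (`vpc_cidr`, `private_subnet_ids`) are assumptions:

```hcl
# environments/prod/main.tf — root module composes child modules
module "networking" {
  source   = "../../modules/networking"
  vpc_cidr = "10.0.0.0/16" # illustrative input
  env      = "prod"
}

module "database" {
  source     = "../../modules/database"
  subnet_ids = module.networking.private_subnet_ids # explicit hand-off via outputs
  env        = "prod"
}
```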
Pattern 3: Mono-Repo with Terragrunt
```
infrastructure/
├── terragrunt.hcl            # Root config
├── modules/                  # Reusable modules
│   ├── vpc/
│   ├── eks/
│   └── rds/
├── dev/
│   ├── terragrunt.hcl        # Dev overrides
│   ├── vpc/
│   │   └── terragrunt.hcl    # Module invocation
│   └── eks/
│       └── terragrunt.hcl
└── prod/
    ├── terragrunt.hcl
    └── ...
```
Best for: Large-scale, many environments, DRY configuration, team-level isolation.
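A leaf terragrunt.hcl in this layout typically just includes the root config and points at a module. A minimal sketch, assuming the directory layout above — paths and input names are illustrative:

```hcl
# dev/vpc/terragrunt.hcl — one unit per module invocation
include "root" {
  path = find_in_parent_folders() # pull shared backend/provider config
}

terraform {
  source = "../../modules//vpc"
}

inputs = {
  vpc_cidr = "10.10.0.0/16"
  env      = "dev"
}
```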
Provider Configuration Patterns
Version Pinning
```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # Allow 5.x, block 6.0
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"
    }
  }
}
```
Multi-Region with Aliases
```hcl
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

resource "aws_s3_bucket" "primary" {
  bucket = "my-app-primary"
}

resource "aws_s3_bucket" "replica" {
  provider = aws.west
  bucket   = "my-app-replica"
}
```
Multi-Account with Assume Role
```hcl
provider "aws" {
  alias  = "production"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::PROD_ACCOUNT_ID:role/TerraformRole"
  }
}
```
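Aliased providers like these are handed to child modules explicitly via the `providers` argument, since child modules must not configure providers themselves. A sketch — the module name and path are hypothetical:

```hcl
module "dns" {
  source = "./modules/dns" # hypothetical module path

  providers = {
    aws = aws.production # child module's default aws provider is the aliased one
  }
}
```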
State Management Decision Tree
```
Single developer, small project?
├── Yes → Local state (but migrate to remote ASAP)
└── No
    ├── Using Terraform Cloud/Enterprise?
    │   └── Yes → TF Cloud native backend (built-in locking, encryption, RBAC)
    └── No
        ├── AWS?
        │   └── S3 + DynamoDB (encryption, locking, versioning)
        ├── GCP?
        │   └── GCS bucket (native locking, encryption)
        ├── Azure?
        │   └── Azure Blob Storage (native locking, encryption)
        └── Other?
            └── Consul or PostgreSQL backend
```

Environment isolation strategy:

```
├── Separate state files per environment (recommended)
│   ├── Option A: Separate directories (dev/, staging/, prod/)
│   └── Option B: Terraform workspaces (simpler but less isolation)
└── Single state file for all environments (never do this)
```
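The AWS branch of the tree maps to a backend block like this. A minimal sketch — bucket, key, and table names are placeholders:

```hcl
# backend.tf — remote state with encryption and locking
terraform {
  backend "s3" {
    bucket         = "org-tf-state"
    key            = "prod/app/terraform.tfstate" # separate key per environment
    region         = "us-east-1"
    encrypt        = true            # state encryption at rest
    dynamodb_table = "tf-state-lock" # state locking
  }
}
```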
CI/CD Integration Patterns
GitHub Actions Plan/Apply
```yaml
# .github/workflows/terraform.yml
name: Terraform

on:
  pull_request:
    paths: ['terraform/**']
  push:
    branches: [main]
    paths: ['terraform/**']

jobs:
  plan:
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform validate
      - run: terraform plan -out=tfplan
      - run: terraform show -json tfplan > plan.json
        # Post plan as PR comment

  apply:
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    environment: production
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform apply -auto-approve
```
Drift Detection
```yaml
# Run on a schedule to detect drift
name: Drift Detection

on:
  schedule:
    - cron: '0 6 * * 1-5' # Weekdays at 6 AM

jobs:
  detect:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: |
          terraform plan -detailed-exitcode -out=drift.tfplan 2>&1 | tee drift.log
          EXIT_CODE=${PIPESTATUS[0]}  # exit code of terraform, not tee
          if [ "$EXIT_CODE" -eq 2 ]; then
            echo "DRIFT DETECTED — review drift.log"
            # Send alert (Slack, PagerDuty, etc.)
          elif [ "$EXIT_CODE" -ne 0 ]; then
            exit "$EXIT_CODE"  # plan itself failed
          fi
```
Proactive Triggers
Flag these without being asked:
- No remote backend configured → Migrate to S3/GCS/Azure Blob with locking and encryption.
- Provider without version constraint → Add version = "~> X.0" to prevent breaking upgrades.
- Hardcoded secrets in .tf files → Use variables with sensitive = true, or integrate Vault/SSM.
- IAM policy with "Action": "*" → Scope to specific actions. No wildcard actions in production.
- Security group open to 0.0.0.0/0 on SSH/RDP → Restrict to bastion CIDR or use SSM Session Manager.
- No state locking → Enable a DynamoDB table for the S3 backend, or use TF Cloud.
- Resources without tags → Add default_tags in the provider block. Tags are mandatory for cost tracking.
- Missing prevent_destroy on databases/storage → Add a lifecycle block to prevent accidental deletion.
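The default_tags trigger above resolves at the provider level, so every taggable resource inherits the tags without per-resource boilerplate. A sketch — the tag keys and values are illustrative, and `var.environment` is an assumed variable:

```hcl
provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      ManagedBy   = "terraform"
      CostCenter  = "platform"      # illustrative value for cost tracking
      Environment = var.environment # assumed variable
    }
  }
}
```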
Installation
One-liner (any tool)

```bash
git clone https://github.com/alirezarezvani/claude-skills.git
cp -r claude-skills/engineering/terraform-patterns ~/.claude/skills/
```

Multi-tool install

```bash
./scripts/convert.sh --skill terraform-patterns --tool codex|gemini|cursor|windsurf|openclaw
```

OpenClaw

```bash
clawhub install terraform-patterns
```
Related Skills
- senior-devops — Broader DevOps scope (CI/CD, monitoring, containerization). Complementary — use terraform-patterns for IaC-specific work, senior-devops for pipeline and infrastructure operations.
- aws-solution-architect — AWS architecture design. Complementary — terraform-patterns implements the infrastructure, aws-solution-architect designs it.
- senior-security — Application security. Complementary — terraform-patterns covers infrastructure security posture, senior-security covers application-level threats.
- ci-cd-pipeline-builder — Pipeline construction. Complementary — terraform-patterns defines infrastructure, ci-cd-pipeline-builder automates deployment.