Infrastructure Patterns

terraform-patterns

by alirezarezvani

Terraform infrastructure-as-code agent skill and plugin for Claude Code, Codex, Gemini CLI, Cursor, OpenClaw. Covers module design patterns, state management strategies, provider configuration, security hardening, policy-as-code with Sentinel/OPA, and CI/CD plan/apply workflows. Use when: user wants to design Terraform modules, manage state backends, review Terraform security, implement multi-region deployments, or follow IaC best practices.

3.7k · DevOps · Unscanned · March 23, 2026

Install

claude skill add --url github.com/openclaw/skills/tree/main/skills/alirezarezvani/terraform-patterns

Documentation

Terraform Patterns

Predictable infrastructure. Secure state. Modules that compose. No drift.

Opinionated Terraform workflow that turns sprawling HCL into well-structured, secure, production-grade infrastructure code. Covers module design, state management, provider patterns, security hardening, and CI/CD integration.

Not a Terraform tutorial — a set of concrete decisions about how to write infrastructure code that doesn't break at 3 AM.


Slash Commands

Command              What it does
/terraform:review    Analyze Terraform code for anti-patterns, security issues, and structure problems
/terraform:module    Design or refactor a Terraform module with proper inputs, outputs, and composition
/terraform:security  Audit Terraform code for security vulnerabilities, secrets exposure, and IAM misconfigurations

When This Skill Activates

Recognize these patterns from the user:

  • "Review this Terraform code"
  • "Design a Terraform module for..."
  • "My Terraform state is..."
  • "Set up remote state backend"
  • "Multi-region Terraform deployment"
  • "Terraform security review"
  • "Module structure best practices"
  • "Terraform CI/CD pipeline"
  • Any request involving: .tf files, HCL, Terraform modules, state management, provider configuration, infrastructure-as-code

If the user has .tf files or wants to provision infrastructure with Terraform → this skill applies.


Workflow

/terraform:review — Terraform Code Review

  1. Analyze current state

    • Read all .tf files in the target directory
    • Identify module structure (flat vs nested)
    • Count resources, data sources, variables, outputs
    • Check naming conventions
  2. Apply review checklist

    code
    MODULE STRUCTURE
    ├── Variables have descriptions and type constraints
    ├── Outputs expose only what consumers need
    ├── Resources use consistent naming: {provider}_{type}_{purpose}
    ├── Locals used for computed values and DRY expressions
    └── No hardcoded values — everything parameterized or in locals
    
    STATE & BACKEND
    ├── Remote backend configured (S3, GCS, Azure Blob, Terraform Cloud)
    ├── State locking enabled (DynamoDB for S3, native for others)
    ├── State encryption at rest enabled
    ├── No secrets stored in state (or state access is restricted)
    └── Workspaces or directory isolation for environments
    
    PROVIDERS
    ├── Version constraints use pessimistic operator: ~> 5.0
    ├── Required providers block in terraform {} block
    ├── Provider aliases for multi-region or multi-account
    └── No provider configuration in child modules
    
    SECURITY
    ├── No hardcoded secrets, keys, or passwords
    ├── IAM follows least-privilege principle
    ├── Encryption enabled for storage, databases, secrets
    ├── Security groups are not overly permissive (no 0.0.0.0/0 ingress on sensitive ports)
    └── Sensitive variables marked with sensitive = true
    
  3. Generate report

    bash
    python3 scripts/tf_module_analyzer.py ./terraform
    
  4. Run security scan

    bash
    python3 scripts/tf_security_scanner.py ./terraform
    
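The MODULE STRUCTURE items above favor locals over hardcoded values. A minimal sketch of what "passes" the checklist (variable and resource names are illustrative, not from this skill's scripts):

hcl
variable "project" {
  description = "Project identifier used in resource names"
  type        = string
}

variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
}

locals {
  # Consistent naming computed once, reused everywhere
  name_prefix = "${var.project}-${var.environment}"

  common_tags = {
    Project     = var.project
    Environment = var.environment
  }
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "${local.name_prefix}-artifacts" # computed, not hardcoded
  tags   = local.common_tags
}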

/terraform:module — Module Design

  1. Identify module scope

    • Single responsibility: one module = one logical grouping
    • Determine inputs (variables), outputs, and resource boundaries
    • Decide: flat module (single directory) vs nested (calling child modules)
  2. Apply module design checklist

    code
    STRUCTURE
    ├── main.tf        — Primary resources
    ├── variables.tf   — All input variables with descriptions and types
    ├── outputs.tf     — All outputs with descriptions
    ├── versions.tf    — terraform {} block with required_providers
    ├── locals.tf      — Computed values and naming conventions
    ├── data.tf        — Data sources (if any)
    └── README.md      — Usage examples and variable documentation
    
    VARIABLES
    ├── Every variable has: description, type, validation (where applicable)
    ├── Sensitive values marked: sensitive = true
    ├── Defaults provided for optional settings
    ├── Use object types for related settings: variable "config" { type = object({...}) }
    └── Validate with: validation { condition = ... }
    
    OUTPUTS
    ├── Output IDs, ARNs, endpoints — things consumers need
    ├── Include description on every output
    ├── Mark sensitive outputs: sensitive = true
    └── Don't output entire resources — only specific attributes
    
    COMPOSITION
    ├── Root module calls child modules
    ├── Child modules never call other child modules
    ├── Pass values explicitly — no hidden data source lookups in child modules
    ├── Provider configuration only in root module
    └── Use module "name" { source = "./modules/name" }
    
  3. Generate module scaffold

    • Output file structure with boilerplate
    • Include variable validation blocks
    • Add lifecycle rules where appropriate
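The VARIABLES and OUTPUTS checklists above can be combined in a short sketch (names and the referenced aws_db_instance.main resource are illustrative assumptions):

hcl
variable "db_config" {
  description = "Database settings grouped as one object"
  type = object({
    engine         = string
    instance_class = string
    storage_gb     = number
  })

  validation {
    condition     = var.db_config.storage_gb >= 20
    error_message = "storage_gb must be at least 20."
  }
}

variable "db_password" {
  description = "Master password for the database"
  type        = string
  sensitive   = true
}

output "db_endpoint" {
  description = "Connection endpoint for the database"
  value       = aws_db_instance.main.endpoint # specific attribute, not the whole resource
}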

/terraform:security — Security Audit

  1. Code-level audit

    Check                                          | Severity | Fix
    Hardcoded secrets in .tf files                 | Critical | Use variables with sensitive = true or vault
    IAM policy with * actions                      | Critical | Scope to specific actions and resources
    Security group with 0.0.0.0/0 on port 22/3389  | Critical | Restrict to known CIDR blocks or use SSM/bastion
    S3 bucket without encryption                   | High     | Add server_side_encryption_configuration block
    S3 bucket with public access                   | High     | Add aws_s3_bucket_public_access_block
    RDS without encryption                         | High     | Set storage_encrypted = true
    RDS publicly accessible                        | High     | Set publicly_accessible = false
    CloudTrail not enabled                         | Medium   | Add aws_cloudtrail resource
    Missing prevent_destroy on stateful resources  | Medium   | Add lifecycle { prevent_destroy = true }
    Variables without sensitive = true for secrets | Medium   | Add sensitive = true to secret variables
  2. State security audit

    Check                                | Severity | Fix
    Local state file                     | Critical | Migrate to remote backend with encryption
    Remote state without encryption      | High     | Enable encryption on backend (SSE-S3, KMS)
    No state locking                     | High     | Enable DynamoDB for S3, native for TF Cloud
    State accessible to all team members | Medium   | Restrict via IAM policies or TF Cloud teams
  3. Generate security report

    bash
    python3 scripts/tf_security_scanner.py ./terraform
    python3 scripts/tf_security_scanner.py ./terraform --output json
    
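As one illustration of the code-level fixes, the two S3 findings map to resources like these (the bucket name is a placeholder; in AWS provider v4+ encryption is a standalone resource rather than an inline block):

hcl
resource "aws_s3_bucket" "logs" {
  bucket = "my-app-logs"
}

# Fix: S3 bucket without encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

# Fix: S3 bucket with public access
resource "aws_s3_bucket_public_access_block" "logs" {
  bucket                  = aws_s3_bucket.logs.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}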

Tooling

scripts/tf_module_analyzer.py

CLI utility for analyzing Terraform directory structure and module quality.

Features:

  • Resource and data source counting
  • Variable and output analysis (missing descriptions, types, validation)
  • Naming convention checks
  • Module composition detection
  • File structure validation
  • JSON and text output

Usage:

bash
# Analyze a Terraform directory
python3 scripts/tf_module_analyzer.py ./terraform

# JSON output
python3 scripts/tf_module_analyzer.py ./terraform --output json

# Analyze a specific module
python3 scripts/tf_module_analyzer.py ./modules/vpc

scripts/tf_security_scanner.py

CLI utility for scanning .tf files for common security issues.

Features:

  • Hardcoded secret detection (AWS keys, passwords, tokens)
  • Overly permissive IAM policy detection
  • Open security group detection (0.0.0.0/0 on sensitive ports)
  • Missing encryption checks (S3, RDS, EBS)
  • Public access detection (S3, RDS, EC2)
  • Sensitive variable audit
  • JSON and text output

Usage:

bash
# Scan a Terraform directory
python3 scripts/tf_security_scanner.py ./terraform

# JSON output
python3 scripts/tf_security_scanner.py ./terraform --output json

# Strict mode (elevate warnings)
python3 scripts/tf_security_scanner.py ./terraform --strict

Module Design Patterns

Pattern 1: Flat Module (Small/Medium Projects)

code
infrastructure/
├── main.tf          # All resources
├── variables.tf     # All inputs
├── outputs.tf       # All outputs
├── versions.tf      # Provider requirements
├── terraform.tfvars # Environment values (not committed)
└── backend.tf       # Remote state configuration

Best for: Single application, < 20 resources, one team owns everything.

Pattern 2: Nested Modules (Medium/Large Projects)

code
infrastructure/
├── environments/
│   ├── dev/
│   │   ├── main.tf          # Calls modules with dev params
│   │   ├── backend.tf       # Dev state backend
│   │   └── terraform.tfvars
│   ├── staging/
│   │   └── ...
│   └── prod/
│       └── ...
├── modules/
│   ├── networking/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── compute/
│   │   └── ...
│   └── database/
│       └── ...
└── versions.tf

Best for: Multiple environments, shared infrastructure patterns, team collaboration.
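In this layout, an environment root wires child modules together with explicit values, per the COMPOSITION checklist (module names and variables are illustrative):

hcl
# environments/dev/main.tf
module "networking" {
  source = "../../modules/networking"

  cidr_block  = var.vpc_cidr
  environment = "dev"
}

module "database" {
  source = "../../modules/database"

  # Explicit wiring: one module's outputs feed another's inputs,
  # no hidden data source lookups inside the child modules
  subnet_ids  = module.networking.private_subnet_ids
  environment = "dev"
}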

Pattern 3: Mono-Repo with Terragrunt

code
infrastructure/
├── terragrunt.hcl           # Root config
├── modules/                  # Reusable modules
│   ├── vpc/
│   ├── eks/
│   └── rds/
├── dev/
│   ├── terragrunt.hcl       # Dev overrides
│   ├── vpc/
│   │   └── terragrunt.hcl   # Module invocation
│   └── eks/
│       └── terragrunt.hcl
└── prod/
    ├── terragrunt.hcl
    └── ...

Best for: Large-scale, many environments, DRY configuration, team-level isolation.
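A leaf terragrunt.hcl in this layout typically includes the root config and invokes a module with environment-specific inputs. A sketch (paths and values are illustrative):

hcl
# dev/vpc/terragrunt.hcl
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "../../modules/vpc"
}

inputs = {
  cidr_block  = "10.10.0.0/16"
  environment = "dev"
}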


Provider Configuration Patterns

Version Pinning

hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"    # Allow 5.x, block 6.0
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"
    }
  }
}

Multi-Region with Aliases

hcl
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

resource "aws_s3_bucket" "primary" {
  bucket = "my-app-primary"
}

resource "aws_s3_bucket" "replica" {
  provider = aws.west
  bucket   = "my-app-replica"
}

Multi-Account with Assume Role

hcl
provider "aws" {
  alias  = "production"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::PROD_ACCOUNT_ID:role/TerraformRole"
  }
}

State Management Decision Tree

code
Single developer, small project?
├── Yes → Local state (but migrate to remote ASAP)
└── No
    ├── Using Terraform Cloud/Enterprise?
    │   └── Yes → TF Cloud native backend (built-in locking, encryption, RBAC)
    └── No
        ├── AWS?
        │   └── S3 + DynamoDB (encryption, locking, versioning)
        ├── GCP?
        │   └── GCS bucket (native locking, encryption)
        ├── Azure?
        │   └── Azure Blob Storage (native locking, encryption)
        └── Other?
            └── Consul or PostgreSQL backend

Environment isolation strategy:
├── Separate state files per environment (recommended)
│   ├── Option A: Separate directories (dev/, staging/, prod/)
│   └── Option B: Terraform workspaces (simpler but less isolation)
└── Single state file for all environments (never do this)
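For the AWS branch of the tree, a backend configuration with encryption, locking, and per-environment keys looks like this (bucket and table names are placeholders):

hcl
# environments/prod/backend.tf
terraform {
  backend "s3" {
    bucket         = "my-org-terraform-state"
    key            = "prod/networking/terraform.tfstate" # one key per environment/stack
    region         = "us-east-1"
    encrypt        = true              # encryption at rest
    dynamodb_table = "terraform-locks" # state locking
  }
}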

CI/CD Integration Patterns

GitHub Actions Plan/Apply

yaml
# .github/workflows/terraform.yml
name: Terraform
on:
  pull_request:
    paths: ['terraform/**']
  push:
    branches: [main]
    paths: ['terraform/**']

jobs:
  plan:
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform validate
      - run: terraform plan -out=tfplan
      - run: terraform show -json tfplan > plan.json
      # Post plan as PR comment

  apply:
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    environment: production
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform apply -auto-approve

Drift Detection

yaml
# Run on schedule to detect drift
name: Drift Detection
on:
  schedule:
    - cron: '0 6 * * 1-5'  # Weekdays at 6 AM

jobs:
  detect:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: |
          # GitHub Actions runs bash with -e and -o pipefail, and
          # -detailed-exitcode uses exit code 2 for "changes present",
          # so disable -e around the plan and read terraform's exit
          # code (not tee's) from PIPESTATUS.
          set +e
          terraform plan -detailed-exitcode -out=drift.tfplan 2>&1 | tee drift.log
          EXIT_CODE=${PIPESTATUS[0]}
          set -e
          if [ "$EXIT_CODE" -eq 2 ]; then
            echo "DRIFT DETECTED — review drift.log"
            # Send alert (Slack, PagerDuty, etc.)
          elif [ "$EXIT_CODE" -ne 0 ]; then
            exit "$EXIT_CODE"  # plan itself failed
          fi

Proactive Triggers

Flag these without being asked:

  • No remote backend configured → Migrate to S3/GCS/Azure Blob with locking and encryption.
  • Provider without version constraint → Add version = "~> X.0" to prevent breaking upgrades.
  • Hardcoded secrets in .tf files → Use variables with sensitive = true, or integrate Vault/SSM.
  • IAM policy with "Action": "*" → Scope to specific actions. No wildcard actions in production.
  • Security group open to 0.0.0.0/0 on SSH/RDP → Restrict to bastion CIDR or use SSM Session Manager.
  • No state locking → Enable DynamoDB table for S3 backend, or use TF Cloud.
  • Resources without tags → Add default_tags in provider block. Tags are mandatory for cost tracking.
  • Missing prevent_destroy on databases/storage → Add lifecycle block to prevent accidental deletion.
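Two of these fixes in HCL form, default_tags on the provider and prevent_destroy on a stateful resource (tag values are illustrative, and the database resource is abbreviated):

hcl
provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Environment = var.environment
      ManagedBy   = "terraform"
      CostCenter  = "platform"
    }
  }
}

resource "aws_db_instance" "main" {
  # ... engine, instance_class, etc.

  lifecycle {
    prevent_destroy = true # guard against accidental deletion
  }
}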

Installation

One-liner (any tool)

bash
git clone https://github.com/alirezarezvani/claude-skills.git
cp -r claude-skills/engineering/terraform-patterns ~/.claude/skills/

Multi-tool install

bash
./scripts/convert.sh --skill terraform-patterns --tool codex|gemini|cursor|windsurf|openclaw

OpenClaw

bash
clawhub install terraform-patterns

Related Skills

  • senior-devops — Broader DevOps scope (CI/CD, monitoring, containerization). Complementary — use terraform-patterns for IaC-specific work, senior-devops for pipeline and infrastructure operations.
  • aws-solution-architect — AWS architecture design. Complementary — terraform-patterns implements the infrastructure, aws-solution-architect designs it.
  • senior-security — Application security. Complementary — terraform-patterns covers infrastructure security posture, senior-security covers application-level threats.
  • ci-cd-pipeline-builder — Pipeline construction. Complementary — terraform-patterns defines infrastructure, ci-cd-pipeline-builder automates deployment.
