Why Context Becomes Critical for Platform Engineers Using Terraform Nested Modules
The disaster was staring me in the face. I was looking at a Terraform plan that would have brought down our entire production environment. I had asked an AI agent to help me add monitoring to our SQL Database module, and it had given me what looked like a perfectly reasonable configuration. But it couldn’t see the full picture.
After writing about workspaces and AI coding agents, I realized the same principles apply to infrastructure code, but with even higher stakes. When you’re managing Terraform deployments with nested modules, context isn’t just helpful. It’s essential for avoiding production disasters.
Platform engineers know this pain all too well. You’re not just dealing with application code; you’re managing the foundation that everything else runs on. A misconfigured module can bring down entire environments, and without proper context, even the most sophisticated AI agents can make dangerous assumptions.
The context problem
Consider this typical infrastructure setup: modules for networking with virtual networks, subnets, and network security groups; compute modules with virtual machines, scale sets, and application gateways; data modules with SQL databases, Redis cache, and storage accounts; environment configurations for dev, staging, and prod; and shared variables, outputs, and providers. Each module has dependencies, outputs, and specific configuration requirements. The relationships between modules are complex, and changes in one can cascade through the entire infrastructure.
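To make the shape of that setup concrete, here is a minimal sketch of a root configuration for one environment. All module paths, names, and variables are hypothetical, but the wiring pattern, where one module's outputs feed another's inputs, is the point:

```hcl
# environments/prod/main.tf -- hypothetical layout for illustration

module "networking" {
  source = "../../modules/networking"

  environment   = "prod"
  address_space = ["10.0.0.0/16"]
}

module "data" {
  source = "../../modules/data"

  environment = "prod"

  # The data module depends on networking outputs; this is the
  # cross-module relationship an agent must be able to see.
  subnet_id                 = module.networking.subnet_id
  network_security_group_id = module.networking.nsg_id
}

module "compute" {
  source = "../../modules/compute"

  environment = "prod"
  subnet_id   = module.networking.subnet_id
}
```

A change to the networking module's outputs ripples through every block that references them, which is exactly the cascade described above.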
Without proper workspace setup, here’s what happens when you ask an AI agent for help. You ask “Help me add monitoring to the SQL Database module” and the AI responds “Here’s how to add Azure Monitor monitoring to your SQL Database…” But the AI can’t see that the SQL Database module depends on the networking module for network security groups, the monitoring configuration needs to reference the virtual network ID from the networking module, there are environment-specific variables that affect the monitoring setup, and the module outputs need to be updated to expose monitoring endpoints. The result? A configuration that looks correct but breaks the entire deployment.
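As an illustration of that failure mode (resource names here are hypothetical), a context-free suggestion tends to look like this: syntactically plausible, but blind to every cross-module reference:

```hcl
# What an agent without workspace context might propose: a standalone
# diagnostic setting with the cross-module references simply missing.

resource "azurerm_monitor_diagnostic_setting" "sql" {
  name               = "sql-monitoring"
  target_resource_id = azurerm_mssql_database.main.id

  # Missing: log_analytics_workspace_id -- the agent cannot see the
  # shared workspace defined in another module, so it omits the
  # destination entirely (or hard-codes a guess).
  # Missing: any awareness of the networking module's NSG rules or
  # the environment-specific variables that shape this setup.
}
```

Terraform would reject or misapply this, but only at plan or apply time, after the damage to your confidence (or your environment) is done.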
The breakthrough moment
When I set up a proper workspace that includes the entire infrastructure codebase, the AI agent can understand module dependencies by seeing how modules relate to each other and suggesting changes that maintain those relationships. It can reference shared resources by understanding that the SQL Database module needs to reference network security groups from the networking module and suggesting the correct data sources. It can consider environment differences by seeing how the same module behaves differently across dev, staging, and prod environments. It can maintain consistency by suggesting patterns that work across all modules and environments.
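The dependency the agent now sees is ordinary Terraform output/input plumbing. A sketch, with hypothetical names on both sides of the boundary:

```hcl
# modules/networking/outputs.tf (hypothetical)
output "nsg_id" {
  description = "ID of the network security group guarding the data tier"
  value       = azurerm_network_security_group.data_tier.id
}

# modules/data/variables.tf (hypothetical)
variable "network_security_group_id" {
  description = "NSG ID passed in from the networking module"
  type        = string
}
```

With the whole codebase in the workspace, the agent can trace a value from the output declaration, through the root module's wiring, to the variable that consumes it.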
The transformation
Let me show you the difference with a concrete example. Without proper context, the AI suggests incomplete configuration that only adds basic monitoring tags. But with full context, the AI suggests complete and correct configuration that references the network security group from the networking module, uses environment-specific variables for backup retention, applies proper tagging that merges common tags with monitoring-specific tags, creates the monitoring diagnostic setting with proper log analytics workspace references, enables the right log categories for SQL security audit events and insights, configures metrics collection, and updates module outputs to expose the monitoring workspace ID.
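A sketch of what that context-aware configuration might look like. All names and variables are hypothetical, the log categories assume an Azure SQL Database target, and the exact diagnostic-setting block names vary between azurerm provider versions:

```hcl
# modules/data/monitoring.tf -- hypothetical sketch

locals {
  # Merge organization-wide tags with monitoring-specific ones.
  monitoring_tags = merge(var.common_tags, { monitoring = "enabled" })
}

resource "azurerm_monitor_diagnostic_setting" "sql" {
  name                       = "sqldb-diagnostics-${var.environment}"
  target_resource_id         = azurerm_mssql_database.main.id
  log_analytics_workspace_id = var.log_analytics_workspace_id

  enabled_log {
    category = "SQLSecurityAuditEvents"
  }

  enabled_log {
    category = "SQLInsights"
  }

  metric {
    category = "AllMetrics"
  }
}

# Expose the workspace so dependent modules can reference it.
output "monitoring_workspace_id" {
  description = "Log Analytics workspace receiving SQL diagnostics"
  value       = var.log_analytics_workspace_id
}
```

Every reference here crosses a module boundary: the workspace ID arrives from a shared module, the environment name shapes the resource name, and the output feeds whatever consumes the monitoring data downstream.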
The difference is profound. Without context, you get a configuration that looks correct but breaks the entire deployment. With context, you get a configuration that works seamlessly with your existing infrastructure and follows your organization’s patterns.
The strategic framework
As a platform engineer, you’re not just writing code; you’re designing systems that other teams depend on. The context requirements are even more critical here. A small change in a base module can break multiple dependent modules across every environment. Infrastructure changes often carry security implications that aren’t obvious without full context. Poorly configured resources can drive unexpected costs, especially in production. And many organizations have compliance requirements that shape how infrastructure is configured and monitored.
Here’s how I structure my Terraform workspaces for maximum AI agent effectiveness. Include everything: not just the module you’re working on, but the entire infrastructure codebase, meaning all modules and their dependencies, environment configurations, shared variables and outputs, and documentation and README files. Use descriptive module names that make clear what each module does and how it relates to others. Document dependencies with comments and READMEs that explain complex relationships. Version your modules with version constraints to prevent unexpected changes.
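For the versioning point, a pinned module source keeps upstream changes from landing unannounced. The registry path below is hypothetical; the `version` constraint syntax is standard Terraform:

```hcl
module "networking" {
  source  = "app.terraform.io/example-org/networking/azurerm"
  version = "~> 2.1.0" # allow 2.1.x patch releases only

  environment = var.environment
}
```

The pessimistic constraint (`~>`) lets you pick up bug fixes automatically while ensuring that a minor or major release of the module never changes your plan without a deliberate version bump.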
With proper context, AI agents become incredibly powerful for platform engineering. They can identify all the modules that will be affected by a change through dependency analysis. They can spot potential security issues across the entire infrastructure through security review. They can suggest changes that reduce costs while maintaining functionality through cost optimization. They can ensure changes meet organizational compliance requirements through compliance checking.
The biggest lesson? Infrastructure code is inherently complex, and context is not optional. It’s essential. Without it, you’re not just writing bad code. You’re potentially creating production incidents. If you’re working with complex Terraform deployments and AI agents, invest in proper workspace setup. The time you spend organizing your infrastructure code will pay dividends in the quality and safety of your deployments. The future of platform engineering isn’t just about better tools. It’s about tools that understand the full context of your infrastructure. Workspaces make that possible.