HashiCorp has announced the availability of experimental Model Context Protocol (MCP) servers for Terraform, Vault, and Vault Radar. These offerings aim to extend how organisations can integrate AI into infrastructure provisioning, security management, and risk analysis workflows. MCP is an open standard that enables Large Language Models (LLMs) to connect with trusted automation systems while maintaining a secure, controlled, and auditable environment. As HashiCorp explains, these servers are designed to provide a “critical new interface layer between trusted automation systems and emerging AI ecosystems.” They are currently experimental, recommended only for development and evaluation purposes, and not intended for production use.
The Terraform MCP Server provides LLMs with structured access to “query the Terraform Registry for provider, module, and policy information and request recommendations”. This allows AI assistants to base recommendations on accurate, validated configuration patterns, ensuring that suggested changes align with current best practices. In practical use, prompts given to an AI system can be transformed into Terraform commands for provisioning resources, adjusting infrastructure configurations, or executing plan and apply operations. The server is available both as an open-source project and via the AWS Marketplace, and it integrates with Amazon Bedrock AgentCore to support teams adopting agent-based architectures.
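For teams curious what that interface looks like in code, the sketch below uses the official MCP Python SDK to launch the open-source Terraform MCP server over stdio and discover the tools it exposes, which is the same step an LLM client performs before deciding which tool to invoke for a prompt. It is a minimal sketch, not a verified recipe: the Docker image name and the commented-out tool name and arguments are assumptions rather than documented values, and the real tool catalogue should be taken from the `list_tools()` output.

```python
# A minimal sketch, assuming the `mcp` Python SDK (pip install mcp) and Docker
# are installed, and that the Terraform MCP Server image is published as
# `hashicorp/terraform-mcp-server` (an assumption -- confirm against the repo).
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the Terraform MCP server as a local stdio subprocess.
server_params = StdioServerParameters(
    command="docker",
    args=["run", "-i", "--rm", "hashicorp/terraform-mcp-server"],
)


async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server advertises (Registry queries,
            # module and policy lookups, and so on).
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

            # Hypothetical tool call -- the actual tool name and argument
            # schema should be taken from the list printed above.
            # result = await session.call_tool(
            #     "search_modules", arguments={"query": "aws vpc"}
            # )


asyncio.run(main())
```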
The Vault MCP Server “enables operators to use natural language to perform basic queries and operations in Vault” without requiring direct interaction with the Vault API. It supports creating and deleting mounts, listing available mounts, reading stored secrets, writing secrets to key-value stores, and listing all secrets under a specified path. By enabling AI-assisted workflows to carry out these functions, teams can integrate secure secrets management into automated processes without exposing raw credentials to the LLM.
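A rough sketch of how a client might drive those operations through MCP is shown below. It reuses the stdio connection pattern from the Terraform example and assumes an already-initialised `ClientSession`; the tool names `list_mounts` and `write_secret` are placeholders chosen for illustration, not documented names from HashiCorp’s server, so the real names and argument schemas should again be discovered via `list_tools()`.

```python
# A minimal sketch, assuming a ClientSession already connected to the Vault
# MCP Server (see the Terraform example for the stdio connection pattern).
# Tool names and argument shapes below are hypothetical placeholders.
from mcp import ClientSession
from mcp.types import TextContent


async def show_mounts(session: ClientSession) -> None:
    # Ask the server to enumerate the available secret engine mounts.
    result = await session.call_tool("list_mounts", arguments={})
    for item in result.content:
        if isinstance(item, TextContent):
            print(item.text)


async def store_secret(session: ClientSession, path: str) -> None:
    # Write a key-value secret under the given path (hypothetical tool).
    await session.call_tool(
        "write_secret",
        arguments={"path": path, "data": {"api_key": "example-value"}},
    )
```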
The Vault Radar MCP Server “enhances security operations” by integrating with HashiCorp Vault Radar to facilitate natural language queries of complex risk datasets. For example, a security analyst could ask, “Which leaked secrets are of both critical severity and present in Vault?” and receive actionable insights without performing manual data correlation. This capability is aimed at helping security teams quickly identify high-priority risks and accelerate remediation workflows.
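To make that kind of query concrete, the following sketch shows how a client could express the correlation as a single MCP tool call, again assuming an initialised `ClientSession` and a hypothetical tool name and argument schema; Vault Radar’s actual tool catalogue should be confirmed by listing the server’s tools first.

```python
# A minimal sketch, assuming a ClientSession already connected to the
# Vault Radar MCP Server; the tool name and arguments are hypothetical.
from mcp import ClientSession
from mcp.types import TextContent


async def critical_leaks_in_vault(session: ClientSession) -> list[str]:
    # Hypothetical correlation query: leaked secrets that are both
    # critical severity and already present in Vault.
    result = await session.call_tool(
        "query_leaked_secrets",
        arguments={"severity": "critical", "present_in_vault": True},
    )
    return [c.text for c in result.content if isinstance(c, TextContent)]
```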
Security principles underpin all three MCP servers. Scoped APIs enforce least-privilege access, ensuring that AI systems can only perform approved actions. Raw secrets are never directly exposed to the model, and each operation is recorded for auditing purposes. HashiCorp cautions that outputs generated by these servers can vary depending on the model, the query, and the connected MCP server, and should always be reviewed for compliance with organisational security, cost, and policy requirements.
The Terraform, Vault, and Vault Radar MCP servers are currently available as open-source projects. Implementation guides, configuration examples, and full source code can be accessed from HashiCorp’s GitHub repositories, allowing teams to explore and evaluate these tools in non-production environments.