Amazon announced the preview of Amazon Bedrock AgentCore, a collection of enterprise-grade services that help developers deploy and operate AI agents at scale across frameworks and foundation models. The platform addresses infrastructure challenges developers face when building production AI agents by providing “purpose-built infrastructure to scale agents securely, powerful tools to enhance agent capabilities, and essential controls to ensure trustworthy operations.” AgentCore works with popular frameworks including CrewAI, LangGraph, LlamaIndex, and Strands Agents, while supporting models both within and outside the Amazon Bedrock ecosystem. Amazon claims this approach “eliminates the trade-off between open-source flexibility and enterprise-grade security.”
Source: Amazon Bedrock
The service differs from Amazon’s existing Bedrock Agents by targeting production-grade scalability rather than simplified agent creation. AgentCore removes infrastructure burdens developers typically encounter, automatically handling session management, identity controls, memory systems, and observability. This development reflects the evolution of foundation models from direct content generation tools to systems that power AI agents, which “reason, plan, act, learn, and adapt in pursuit of user-defined goals with limited human oversight.” The emergence of standardized protocols such as Model Context Protocol (MCP) and Agent2Agent (A2A) has enabled this new wave of agentic AI by simplifying how agents connect with external tools and systems, allowing developers to focus on application logic rather than infrastructure complexity.
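To illustrate the kind of interoperability MCP enables, the following minimal sketch uses the open-source MCP Python SDK's FastMCP helper to expose a function as a tool that any protocol-compliant agent can discover and call; the `get_order_status` tool is a hypothetical example, not part of AgentCore.

```python
# Minimal MCP server sketch using the open-source MCP Python SDK (pip install mcp).
# The "get_order_status" tool is a hypothetical example, not an AgentCore API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-tools")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Return the status of an order by its identifier."""
    # In a real deployment this would query an internal system of record.
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    # Serves the tool over MCP so an agent can discover and invoke it.
    mcp.run()
```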
AgentCore’s architecture consists of six core components: “Runtime, Gateway, Memory, Identity, Observability, and tools like Browser Tool and Code Interpreter.” Amazon designed these components to function independently or together, offering flexibility for development teams to select specific services rather than adopting the complete suite.
Source: Amazon Bedrock AgentCore
The AgentCore Runtime component provides a secure, serverless, purpose-built hosting environment for deploying and running AI agents or tools. Runtime supports both real-time interactions and long-running workloads of up to eight hours, enabling complex agent reasoning and asynchronous tasks that may involve multi-agent collaboration or extended problem-solving sessions. The service enforces session isolation: each user session runs in a dedicated microVM with isolated CPU, memory, and filesystem resources to prevent cross-session data contamination. Runtime integrates with corporate identity providers, including Okta, Microsoft Entra ID, and Amazon Cognito, and supports outbound authentication flows to securely access third-party services such as Slack, Zoom, and GitHub.
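A minimal sketch of a Runtime-hosted agent, following the pattern used in Amazon's sample repository and assuming the bedrock-agentcore Python SDK's `BedrockAgentCoreApp` wrapper; the echo response stands in for a real framework invocation.

```python
# Sketch of a Runtime-hosted agent, following the pattern in Amazon's AgentCore samples.
# Assumes the bedrock-agentcore Python SDK; the echo response stands in for a real
# CrewAI/LangGraph/Strands agent invocation.
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()

@app.entrypoint
def invoke(payload):
    """Handle one invocation; Runtime isolates each session in its own microVM."""
    prompt = payload.get("prompt", "")
    # Call your agent framework of choice here instead of echoing.
    return {"result": f"Agent received: {prompt}"}

if __name__ == "__main__":
    app.run()  # Exposes the agent over HTTP for the Runtime service to invoke
```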
AgentCore Gateway handles tool integration by converting APIs, Lambda functions, and existing services into Model Context Protocol (MCP)-compatible tools, accepting OpenAPI, Smithy, and Lambda as input types. Amazon positions Gateway as a fully managed service that provides both comprehensive ingress and egress authentication. Gateway includes one-click integration with enterprise tools such as Salesforce, Slack, Jira, Asana, and Zendesk. The component offers semantic tool selection, which lets agents search across available tools to find the most appropriate ones for a given context, so they can leverage thousands of tools while minimizing prompt size and reducing latency.
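Because Gateway exposes tools over MCP, an agent can connect to it with any MCP client. The sketch below uses the open-source MCP Python SDK's streamable HTTP client; the gateway URL, bearer token, and `create_ticket` tool name are placeholders, not real AgentCore values.

```python
# Sketch of an agent calling Gateway-hosted tools over MCP.
# The gateway URL, bearer token, and "create_ticket" tool are placeholders.
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

GATEWAY_URL = "https://<your-gateway-endpoint>/mcp"  # placeholder endpoint
ACCESS_TOKEN = "<oauth-access-token>"                # obtained via your identity provider

async def main():
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    async with streamablehttp_client(GATEWAY_URL, headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover the tools the Gateway exposes
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("create_ticket",  # hypothetical tool name
                                             {"summary": "Demo ticket"})
            print(result.content)

asyncio.run(main())
```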
The AgentCore Memory component enables context persistence through a dual-memory architecture that provides “both immediate and long-term knowledge.” Short-term memory stores conversation history to track immediate context, supporting multi-step task completion and context-aware decision making. Long-term memory stores “extracted insights – such as user preferences, semantic facts, and summaries – for knowledge retention across sessions.”
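The dual-memory split can be pictured as two stores with different lifetimes. The sketch below is purely conceptual and does not use the AgentCore Memory API; it only mirrors the described architecture.

```python
# Conceptual illustration of the short-term vs. long-term memory split.
# This is not the AgentCore Memory API; it only mirrors the described architecture.
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    short_term: list = field(default_factory=list)   # raw conversation turns for the current session
    long_term: dict = field(default_factory=dict)    # extracted insights kept across sessions

    def record_turn(self, role: str, text: str) -> None:
        """Keep immediate context for multi-step tasks within a session."""
        self.short_term.append({"role": role, "text": text})

    def remember_insight(self, key: str, value: str) -> None:
        """Persist a distilled fact (e.g., a user preference) beyond the session."""
        self.long_term[key] = value

memory = AgentMemory()
memory.record_turn("user", "Book me a window seat next time.")
memory.remember_insight("seat_preference", "window")
```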
The AgentCore Identity component addresses enterprise security through a centralized approach, providing “a centralized capability for managing agent identities, securing credentials, and enabling seamless integration with AWS and third-party services through Sigv4, standardized OAuth 2.0 flows, and API keys.” The identity system verifies each request independently, requiring explicit verification for every access attempt regardless of source. The platform includes a token vault that stores OAuth 2.0 tokens, OAuth client credentials, and API keys with encryption at rest and in transit.
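For the SigV4 path mentioned above, signing an outbound request to an AWS endpoint with botocore looks roughly like the sketch below; the endpoint URL and signing name are illustrative assumptions, not documented AgentCore values.

```python
# Sketch of SigV4-signed outbound access to an AWS endpoint using botocore.
# The endpoint URL and signing name are illustrative assumptions.
import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

credentials = boto3.Session().get_credentials()
request = AWSRequest(
    method="POST",
    url="https://bedrock-agentcore.us-east-1.amazonaws.com/invocations",  # illustrative URL
    data=b'{"prompt": "hello"}',
    headers={"Content-Type": "application/json"},
)
SigV4Auth(credentials, "bedrock-agentcore", "us-east-1").add_auth(request)  # signing name assumed
print(request.headers["Authorization"])  # signed header the caller would send
```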
AgentCore Observability provides production monitoring capabilities designed specifically for AI agent workflows. The service “helps developers trace, debug, and monitor agent performance in production environments” while offering “detailed visualizations of each step in the agent workflow, enabling developers to inspect an agent’s execution path, audit intermediate outputs, and debug performance bottlenecks and failures.” The platform emits data in a standardized OpenTelemetry (OTEL)-compatible format, so teams can integrate it with their existing monitoring and observability stacks.
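Because the telemetry is OTEL-compatible, standard OpenTelemetry instrumentation and collectors apply. The sketch below uses the OpenTelemetry Python SDK to emit spans for an agent step to the console exporter, which could equally point at any OTEL backend; the span names and attributes are illustrative.

```python
# Standard OpenTelemetry instrumentation; AgentCore's OTEL-compatible output can be
# collected by the same backends. Span names and attributes here are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent-demo")

with tracer.start_as_current_span("agent.plan") as span:
    span.set_attribute("agent.session_id", "session-123")  # illustrative attribute
    with tracer.start_as_current_span("tool.call"):
        pass  # a tool invocation would be traced here
```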
AgentCore Code Interpreter extends agent capabilities by enabling secure computational execution within containerized environments. The tool “allows AI agents to write, execute, and debug code securely in sandbox environments” while serving as “a bridge between natural language understanding and computational execution.” The service supports multiple programming languages, including Python, JavaScript, and TypeScript, and lets agents perform complex workflows and data analysis in isolated sandbox environments while accessing internal data sources without exposing sensitive data or compromising security.
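The typical flow is: the model drafts code, the agent submits it to an isolated interpreter session, and the output is folded back into the reasoning loop. The sketch below illustrates that loop conceptually; the local subprocess stands in for the Code Interpreter's managed sandbox and is not the service's API.

```python
# Conceptual agent loop around a sandboxed interpreter. The local subprocess here is
# an illustrative stand-in; in production this call would go to the Code Interpreter
# service's isolated session instead.
import subprocess
import sys

def run_in_sandbox(code: str) -> str:
    """Illustrative stand-in: execute model-written code out-of-process and capture stdout."""
    completed = subprocess.run(
        [sys.executable, "-c", code], capture_output=True, text=True, timeout=30
    )
    return completed.stdout

def analyze(question: str) -> str:
    # 1. The model drafts code for the task (hard-coded here for illustration).
    generated_code = "import statistics; print(statistics.mean([3, 5, 8, 13]))"
    # 2. The code runs outside the agent process (the service would use an isolated sandbox).
    output = run_in_sandbox(generated_code)
    # 3. The result is folded back into the agent's answer.
    return f"{question}: {output.strip()}"

print(analyze("Average of the samples"))
```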
AgentCore Browser Tool completes the platform by providing “a fast, secure, cloud-based browser runtime to enable AI agents to interact with websites at scale,” with enterprise-grade security, comprehensive observability features, and automatic scaling, all without infrastructure management overhead.
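From an agent framework's point of view, a cloud-hosted browser behaves like a remote browser the agent can drive. The sketch below assumes a Playwright connection to a remote browser endpoint over CDP; the websocket URL is a placeholder for whatever endpoint a Browser Tool session would hand back, and is not a documented value.

```python
# Sketch of driving a remote, cloud-hosted browser with Playwright over CDP.
# The websocket endpoint is a placeholder, not a documented Browser Tool value.
from playwright.sync_api import sync_playwright

REMOTE_BROWSER_WS = "wss://<browser-session-endpoint>"  # placeholder endpoint

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(REMOTE_BROWSER_WS)  # attach to the managed browser
    context = browser.contexts[0] if browser.contexts else browser.new_context()
    page = context.new_page()
    page.goto("https://example.com")
    print(page.title())  # the agent would parse the page and decide its next action
    browser.close()
```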
The AgentCore launch represents the latest move in intensifying competition among cloud providers to offer AI infrastructure platforms. Shelly Palmer, Professor of Advanced Media in Residence at Syracuse University’s Newhouse School and CEO of The Palmer Group, observed that
AWS, Microsoft, Google, and pretty much every foundational model builder are in a ‘I can build you a platform to build platforms’ race. This is great for everyone. The competition will keep the big players sharp, and we’ll all reap the benefits.
The Amazon Bedrock AgentCore preview is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Frankfurt). Development teams can access technical documentation through the official AgentCore documentation, and pricing details are available on the Amazon Bedrock AgentCore Pricing page. Amazon also provides implementation examples in the amazon-bedrock-agentcore-samples repository on GitHub for developers integrating the services with existing enterprise systems.