Docker launched a new feature that lets developers define, build, and run agents using Docker Compose, with the aim of streamlining the agent development process and reducing repetitive tasks. Additionally, Docker Offload, now in beta, provides a way to seamlessly offload building and running models to remote GPU compute.
Adding support for defining agents using Docker Compose is a further step in Docker’s strategy to position itself as a key tool provider for agent development, much like it did for container-based development. As Docker’s Mark Cavage and Tushar Jain note, this means simplifying repetitive and tedious tasks that agent development typically involves, such as iterating with different models, securely connecting to MCP tools, and packaging everything so teammates can easily run the same workflow.
The new feature allows developers to declare open models, agents, and MCP tools in a compose.yaml file, then build and run them using docker compose up. Docker Compose integrates with many current agentic frameworks, including LangGraph, Embabel, Vercel AI, Spring AI, CrewAI, Google ADK, and Agno.
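As a rough sketch of this pattern, a minimal compose.yaml could declare an agent service alongside a model; the service name, build context, and model reference below are illustrative, not taken from Docker's samples:

```yaml
services:
  agent:
    build: .      # the agent application, e.g. a LangGraph or CrewAI app
    models:
      - llm       # short syntax: wire the model's endpoint into this service
models:
  llm:
    model: ai/gemma3   # illustrative model reference from Docker Hub's ai/ namespace
```

Running docker compose up then starts both the model and the agent service together.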
To help developers get started with Docker Compose for agent development, Docker has created a GitHub repository with sample projects for all supported frameworks. For instance, one example shows how to build a collaborative multi-agent fact checker using Google ADK:
The Critic agent gathers evidence via live internet searches using DuckDuckGo through the Model Context Protocol (MCP), while the Reviser agent analyzes and refines the conclusion using internal reasoning alone. The system showcases how agents with distinct roles and tools can collaborate under orchestration.
The corresponding compose.yaml file defines two services, adk and mcp-gateway, and includes a models section listing the models used along with their arguments. Docker introduced the ability to package and run local models with Model Runner in Docker Desktop 4.40, but you can also use remote or cloud-based models by providing the appropriate credentials.
Docker Compose's modular architecture makes it easy to create compose overrides for multiple agent configurations. For example, you can define a variant that uses OpenAI instead of a local model, or one that targets Google Cloud Run. This lets you combine multiple Compose files to switch your agent setup easily:
docker compose -f compose.yaml -f compose.openai.yaml up --build
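As a sketch, such an override file could pass the remote provider's credentials and switch the backend via environment variables; the variable names below are hypothetical and would be read by the agent code, not by Compose itself:

```yaml
# compose.openai.yaml -- hypothetical override merged on top of compose.yaml
services:
  adk:
    environment:
      MODEL_PROVIDER: openai             # hypothetical switch read by the agent code
      OPENAI_API_KEY: ${OPENAI_API_KEY}  # credential passed in from the host environment
```

Because later files in the -f chain are merged over earlier ones, only the keys that differ need to appear in the override.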
Another new feature for agent development supported in the latest Docker Desktop is Docker Offload. This fully managed service can be used as a drop-in replacement for Docker Model Runner when local resources are not sufficient, allowing developers to run models and containers on a cloud GPU transparently using the same workflow as for local deployment.
Docker Offload frees you from infrastructure constraints by offloading compute-intensive workloads, like large language models and multi-agent orchestration, to high-performance cloud environments. No complex setup, no GPU shortages, no configuration headaches.
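Assuming Docker Desktop with the Offload beta enabled, switching to remote execution is intended to be a matter of starting a session and reusing the same commands; the subcommands below reflect Docker's beta CLI and may change:

```shell
docker offload start       # start a cloud session (beta)
docker compose up --build  # same workflow as local, now building and running remotely
docker offload stop        # end the session when done
```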
The service is currently in beta, and Docker is providing 300 minutes of free usage to help developers get started.