Amazon Web Services has announced enhancements to its CodeBuild service, allowing teams to use Amazon ECR as a remote Docker layer cache, significantly reducing image build times in CI/CD pipelines. By leveraging ECR repositories to persist and reuse build layers across runs, organisations can skip rebuilding unchanged parts of containers and accelerate delivery.
The blog outlines how Docker BuildKit support enables commands such as --cache-from and --cache-to pointing to ECR-based cache images, allowing CodeBuild-initiated builds to pull cached layers from previous runs and push updated cache layers back to ECR. As a result, rebuilds can skip large parts of the Dockerfile when base layers remain unchanged, improving developer feedback loops and reducing build resource consumption.
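As an illustration, a build command using BuildKit's registry cache exporter might look like the following sketch; the repository URI, cache tag, and the image-manifest option are assumptions for illustration rather than values from the announcement:

```bash
# Hypothetical ECR repository URI and cache tag
ECR_URI="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app"

# Pull cached layers from the previous run and export an updated cache back to ECR.
# mode=max also caches intermediate layers; image-manifest=true may be needed for
# ECR to accept the cache manifest. A docker-container builder instance
# (docker buildx create --use) may be required for the registry cache exporter.
docker buildx build \
  --cache-from type=registry,ref=$ECR_URI:buildcache \
  --cache-to type=registry,ref=$ECR_URI:buildcache,mode=max,image-manifest=true \
  -t $ECR_URI:latest \
  --push .
```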
AWS emphasises that this approach addresses one of the major bottlenecks in container-based CI/CD: rebuild times driven by repeatedly constructing heavy dependency layers. The remote caching model complements existing local caching options and provides a more reliable, global cache across frequent builds and multiple agents.
To implement the solution, users configure their build pipelines to push an image representing the cache (often using a separate tag) to ECR, then reference it in subsequent builds. Over time, as the cache accumulates, build durations drop significantly because the Docker engine bypasses unchanged layers. AWS notes that cache effectiveness depends on how changes propagate through the Dockerfile and how well the cache is structured; optimisations such as multi-stage builds and layer ordering enhance gains.
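The announcement does not prescribe a particular Dockerfile layout, but the layer-ordering point can be illustrated with a hypothetical multi-stage Dockerfile in which dependency installation sits above the source copy, so code changes alone do not invalidate the cached dependency layers:

```dockerfile
# Hypothetical Node.js service; the point is the layer ordering, not the stack.
FROM node:20-slim AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci                      # heavy dependency layer, reused from the cache when unchanged

FROM node:20-slim AS runtime
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .                        # source changes only invalidate layers from here down
CMD ["node", "server.js"]
```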
The setup is straightforward. Developers create an ECR repository to store cached image layers, grant CodeBuild permissions to push and pull from it, and enable BuildKit within their build environment. From there, CodeBuild can automatically pull previously built layers from ECR before building new ones and push updated cache layers once the build completes. This process drastically reduces redundant rebuilds, improving build efficiency and developer productivity.
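A minimal version of that setup, with hypothetical names and the IAM details reduced to a comment, might look like this:

```bash
# Create a repository to hold the image and its cache layers (hypothetical name)
aws ecr create-repository --repository-name my-app

# The CodeBuild service role also needs ECR permissions such as
# ecr:GetAuthorizationToken, ecr:BatchGetImage, ecr:GetDownloadUrlForLayer,
# ecr:BatchCheckLayerAvailability, ecr:InitiateLayerUpload,
# ecr:UploadLayerPart, ecr:CompleteLayerUpload and ecr:PutImage.

# Enable BuildKit for Docker builds run inside the CodeBuild environment
export DOCKER_BUILDKIT=1
```

Docker image builds in CodeBuild also typically require the project's privileged mode to be enabled so the Docker daemon can run inside the build container.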
A typical configuration involves defining environment variables and commands in the project’s buildspec.yml. During the pre-build phase, CodeBuild authenticates with ECR and pulls the cache image if it exists. The build phase runs Docker with BuildKit enabled, specifying the ECR cache as both the source and destination for layer reuse. In the post-build phase, updated cache layers are pushed back to ECR for use in future builds.
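A buildspec.yml along those lines might look like the following sketch; the account ID, region, repository name, and cache tag are placeholders rather than values from the post:

```yaml
version: 0.2

env:
  variables:
    # Hypothetical registry, repository and cache tag
    ECR_REGISTRY: "123456789012.dkr.ecr.us-east-1.amazonaws.com"
    IMAGE_REPO: "my-app"
    CACHE_TAG: "buildcache"

phases:
  pre_build:
    commands:
      # Authenticate the Docker CLI against ECR
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $ECR_REGISTRY
      # Pull the cache image if it exists; tolerate a miss on the first run
      - docker pull $ECR_REGISTRY/$IMAGE_REPO:$CACHE_TAG || true
  build:
    commands:
      # Build with BuildKit, reusing layers from the remote cache image
      - >
        DOCKER_BUILDKIT=1 docker build
        --cache-from $ECR_REGISTRY/$IMAGE_REPO:$CACHE_TAG
        --build-arg BUILDKIT_INLINE_CACHE=1
        -t $ECR_REGISTRY/$IMAGE_REPO:latest .
  post_build:
    commands:
      # Publish the application image and refresh the cache image for future builds
      - docker push $ECR_REGISTRY/$IMAGE_REPO:latest
      - docker tag $ECR_REGISTRY/$IMAGE_REPO:latest $ECR_REGISTRY/$IMAGE_REPO:$CACHE_TAG
      - docker push $ECR_REGISTRY/$IMAGE_REPO:$CACHE_TAG
```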
According to the blog post, early adopters report substantial time savings, with builds that previously took 10-15 minutes now completing in under five minutes when cache reuse is high. One user reported reducing their build times from six minutes to two as a result of the new feature. The improvement is particularly noticeable for projects that rely on large dependency layers or multi-stage Docker builds. AWS notes that optimal performance depends on consistent Dockerfile layer ordering and frequent cache refreshes to avoid stale dependencies.
This approach aligns AWS with a broader industry shift toward smarter caching and reproducible builds. Competing CI/CD providers such as GitHub Actions and GitLab CI also offer remote caching solutions (GitHub through its Actions Cache service and GitLab via registry-based caching), but AWS's integration with ECR and BuildKit provides a more native, cloud-optimized workflow.
By integrating Docker’s native caching capabilities directly into its managed build environment, AWS continues to streamline the developer experience, bringing faster, more reliable builds to cloud-native teams. For organizations running large containerized workloads, this enhancement represents a significant step toward more efficient, cost-effective CI/CD pipelines.
Overall, this update provides a practical lever for development teams seeking to improve pipeline speed and resource efficiency, especially in container-heavy organisations. By reusing cache layers stored in ECR, AWS CodeBuild users can shorten turnaround times, reduce compute cost, and improve build reliability.
AWS is not alone in this approach to accelerating Docker image builds; Google and Microsoft offer similar capabilities:
With Cloud Build, Google supports remote cache images for container builds and Buildpacks. For example, when using buildpacks, you can specify a --cache-image flag pointing to a repository like LOCATION-docker.pkg.dev/PROJECT_ID/REPO_NAME/CACHE_IMAGE_NAME:latest to reuse previous build results. In addition, Cloud Build's documentation recommends using --cache-from for Docker builds to leverage cached layers stored in a registry. These features mirror AWS's concept of using a remote registry as a cache, though Google emphasises Buildpacks and registry-based cache images rather than a dedicated "image-layer cache" mechanism built into the build service itself.
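For the Docker path, the documented --cache-from pattern can be sketched in a cloudbuild.yaml roughly as follows; the Artifact Registry path used here is a placeholder:

```yaml
# Hypothetical cloudbuild.yaml; the Artifact Registry path is a placeholder.
steps:
  # Pull the previously built image so its layers are available as a cache;
  # exit 0 keeps the first build from failing when no image exists yet.
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'docker pull us-docker.pkg.dev/my-project/my-repo/my-app:latest || exit 0']
  # Build with --cache-from pointing at the pulled image
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '-t'
      - 'us-docker.pkg.dev/my-project/my-repo/my-app:latest'
      - '--cache-from'
      - 'us-docker.pkg.dev/my-project/my-repo/my-app:latest'
      - '.'
images: ['us-docker.pkg.dev/my-project/my-repo/my-app:latest']
```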
Microsoft's Azure Pipelines supports caching of various build artifacts and dependencies, but when it comes to container layer caching, user community feedback suggests limitations. On self-hosted agents, you can implement Docker's --cache-from strategy and manage your own registry of cached layers, but on Microsoft-hosted agents, the experience can be brittle or less efficient. One user on Reddit observed:
“Azure Pipelines build pipeline with caching support for multi-stage Docker images… downloading the cache negated any benefits we got from the cache”
In short, while Azure supports caching tasks in pipelines, the end-to-end flow for Docker layer reuse is less mature compared with custom registry-based caching workflows.
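On a self-hosted agent, the registry-based workaround described above can be sketched as an azure-pipelines.yml step such as the following; the registry, repository, and tag names are hypothetical, and authentication against the registry is assumed to be handled earlier in the pipeline:

```yaml
# Hypothetical azure-pipelines.yml fragment for a self-hosted agent.
variables:
  imageRepo: 'myregistry.azurecr.io/my-app'   # placeholder registry and repository
  cacheTag: 'buildcache'

steps:
  - script: |
      # Reuse layers from the previously pushed cache image if it exists
      docker pull $(imageRepo):$(cacheTag) || true
      DOCKER_BUILDKIT=1 docker build \
        --cache-from $(imageRepo):$(cacheTag) \
        --build-arg BUILDKIT_INLINE_CACHE=1 \
        -t $(imageRepo):$(Build.BuildId) .
      # Publish the image and refresh the cache tag for the next run
      docker push $(imageRepo):$(Build.BuildId)
      docker tag $(imageRepo):$(Build.BuildId) $(imageRepo):$(cacheTag)
      docker push $(imageRepo):$(cacheTag)
    displayName: 'Build with registry-based Docker layer cache'
```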
