Stop Waiting on Docker Builds: Practical Ways I Cut Local Image Build Time by 70%
Concrete, low‑friction techniques to speed up Docker image builds on a laptop — with commands, tradeoffs, and tips for Indian developers facing limited bandwidth and modest machines.
Written by: Rohan Deshpande
Two years ago I would nurse a cup of coffee while waiting 10–15 minutes for the dev container to rebuild after a tiny change. That’s time, and patience, you don’t get back. After a few focused experiments I shaved build times down dramatically without changing our CI. If you’re on a modest laptop, behind a flaky company VPN, or on metered home broadband in India, these tactics will help you be faster and a lot less frustrated.
What I learned: most slow Docker builds are avoidable. The fix is not a single command — it’s a set of habits and a few tools that play well together.
Why builds get slow (short list)
- Large base images pulled repeatedly over slow networks.
- Inefficient Dockerfiles that invalidate cache frequently.
- Building heavy toolchains or compiling code inside the image every time.
- Not using BuildKit/parallel builds or remote cache layers.
Below are practical changes I made. Each section has an explicit command or option you can try today.
- Use BuildKit and buildx, the modern builder. Enable BuildKit (it's faster and better at parallelism). On Linux/macOS:

```sh
export DOCKER_BUILDKIT=1
docker buildx build --load -t myapp:dev .
```

(The env var switches plain `docker build` over to BuildKit; `buildx build` always uses it. `--load` puts the result into your local image store.)
BuildKit runs steps in parallel, gives better cache reuse, and supports advanced features like mounting caches during build. It’s the single biggest win for real-world speed.
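If you'd rather not export the env var in every shell, you can also create a dedicated buildx builder once and make it the default (a minimal sketch; the builder name `dev-builder` is just an example):

```sh
# Create a named builder using the BuildKit container driver and select it
docker buildx create --name dev-builder --use
# Start the builder and confirm it is ready
docker buildx inspect --bootstrap
```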
- Reduce cache pollution with .dockerignore and smart COPY ordering. A large COPY that includes your node_modules or .git can bust the cache all the time. Add a .dockerignore that mirrors .gitignore plus things like local logs, .env.local, and editor folders.

Then order Dockerfile steps so infrequently changing things come first. Example:

```dockerfile
FROM node:18 AS base
WORKDIR /app
COPY package.json yarn.lock ./   # stable
RUN yarn install
COPY . .                         # changing files
RUN yarn build
```

This way "yarn install" stays cached unless package.json or yarn.lock changes.
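A starting .dockerignore for a Node project might look like this (a sketch; trim it to what your repo actually contains):

```
# .dockerignore: keep the build context small and the cache stable
.git
node_modules
dist
*.log
.env.local
.vscode
.idea
```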
- Leverage mount=type=cache for package managers and compilers. With BuildKit you can use temporary caches during build to avoid re-downloading dependencies:

```dockerfile
# syntax=docker/dockerfile:1.4
FROM node:18
WORKDIR /app
COPY package.json yarn.lock ./
RUN --mount=type=cache,target=/root/.cache/yarn \
    yarn install
COPY . .
RUN yarn build
```
On my laptop this cut repeated install time from ~40s to ~8s.
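The same pattern works for other package managers. For a Python service, for example, you could cache pip downloads (a sketch, assuming a requirements.txt at the repo root):

```dockerfile
# syntax=docker/dockerfile:1.4
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt ./
# pip's default cache lives under /root/.cache/pip when running as root
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY . .
```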
- Use a registry cache for cross-machine builds. If you switch between CI and local dev, push intermediate cache layers to a registry:

```sh
docker buildx build \
  --cache-to=type=registry,ref=ghcr.io/myorg/myapp-cache:latest \
  --cache-from=type=registry,ref=ghcr.io/myorg/myapp-cache:latest \
  --push -t myapp:ci .
```
Now your CI and local machines can pull cached layers instead of rebuilding everything. In India, where bandwidth can be a constraint, hosting cache images on a nearer registry (GitHub Container Registry, GitLab, or a Harbor instance) noticeably cuts re-download time.
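If pushing cache over a metered connection is too expensive, buildx can also write the cache to a local directory that survives between builds (a sketch; `/tmp/.buildx-cache` is just an example path):

```sh
docker buildx build \
  --cache-to=type=local,dest=/tmp/.buildx-cache,mode=max \
  --cache-from=type=local,src=/tmp/.buildx-cache \
  --load -t myapp:dev .
```

`mode=max` also exports cache for intermediate stages, not just the final image, at the cost of more disk.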
- Cache compilation artifacts (ccache, pip wheelhouse). If you build native extensions, enable ccache or create a pip wheelhouse and mount it as cache. For example, for C/C++:

```dockerfile
RUN --mount=type=cache,target=/root/.ccache \
    make
```
This keeps object files between builds and saves minutes on large codebases.
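Note that ccache only helps if compilations actually route through it. A minimal sketch of wiring that up, assuming a Makefile build that respects CC/CXX:

```dockerfile
FROM gcc:13
RUN apt-get update && apt-get install -y --no-install-recommends ccache \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /src
COPY . .
# Send compiler invocations through ccache and keep its cache between builds
ENV CC="ccache gcc" CXX="ccache g++" CCACHE_DIR=/root/.ccache
RUN --mount=type=cache,target=/root/.ccache \
    make
```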
- Offload heavy steps to multi-stage builds or prebuilt images. If you have a heavy toolchain, build it once into a base image and reuse it. Say you compile a binary used across services: build it in a dedicated image and FROM that image in multiple services. That turns frequent small app edits into fast layer-only builds.
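A minimal sketch of the shape (the `myorg/builder-base` image name and Makefile target are hypothetical):

```dockerfile
# Heavy toolchain lives in a prebuilt image, built once and shared
FROM myorg/builder-base:1.0 AS build
WORKDIR /src
COPY . .
RUN make release   # assumes the base image ships the compilers

# Runtime stage copies only the artifact, so small edits rebuild in seconds
FROM debian:bookworm-slim
COPY --from=build /src/bin/app /usr/local/bin/app
ENTRYPOINT ["app"]
```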
- Use layer-specific cache hints (when needed). For troublesome steps you can control cache busting deliberately with build args or labels. This helps when you want reproducibility but still want cache benefits.
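One common pattern is a throwaway build arg (a sketch; `CACHE_DATE` is an arbitrary name). Docker busts the cache at the first instruction that uses the arg, so everything above it stays cached while everything from that point on is re-run when you pass a new value:

```dockerfile
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*
ARG CACHE_DATE=unset
# First use of the arg: a new CACHE_DATE value invalidates from here down
RUN echo "refreshed on ${CACHE_DATE}" \
    && curl -fsSL https://example.com/latest-data.json -o /srv/data.json
```

Invoke it with `docker build --build-arg CACHE_DATE=$(date +%F) .` when you want the fetch re-run.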
- Be pragmatic about disk and cleanup. Build caches grow. On laptops with limited SSD space, schedule docker system prune -af and remove dangling images occasionally. Tradeoff: you'll lose the cache and some builds will be slow until the cache rebuilds.
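A gentler middle ground is to cap the build cache instead of wiping everything (the 10GB limit is just an example):

```sh
# See what is actually eating space
docker system df
# Trim only the build cache, keeping the most recently used 10GB
docker builder prune --keep-storage 10GB
```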
Real constraints and tradeoffs
- Security: caching can accidentally preserve secrets if you COPY them into an image. Never bake secrets into layers; use build-time secret mounts or secrets management (see the sketch after this list).
- Consistency vs speed: aggressive caching can mask environment drift. A fully cached image might not fail where a clean build would. I periodically run a full CI rebuild without cache to catch issues.
- Resource usage: BuildKit and buildx can use a lot of CPU and memory on a small laptop. If your machine heats up or swaps, throttling builds may be necessary.
- Bandwidth cost: pushing caches to registries uses data. In India, choose registries with good regional performance or self-hosted caches inside your company.
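For the security point above, BuildKit's secret mounts let a step read a credential without it ever being written into a layer (a sketch; the `npmrc` id and paths are examples):

```dockerfile
# syntax=docker/dockerfile:1.4
FROM node:18
WORKDIR /app
COPY package.json yarn.lock ./
# Mounted only for this RUN step; never persisted in the image
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
    yarn install
```

Build it with `docker buildx build --secret id=npmrc,src=$HOME/.npmrc -t myapp:dev .`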
Quick checklist to try tonight
- Enable BuildKit: export DOCKER_BUILDKIT=1
- Add .dockerignore and reorder COPY in Dockerfile
- Try --mount=type=cache for your package manager
- Push/pull cache to a nearby registry with buildx
- Run docker system prune monthly
A final note: speed without predictability is frustrating. I aim for local fast builds that mirror CI behavior closely. That means keeping base images and toolchain versions pinned, and occasionally forcing a clean build in CI. The result: faster iteration on my laptop, fewer broken surprises later, and more time for the work that actually matters.
If you want, I can look at your Dockerfile and suggest the one or two edits that will save the most time.