Stop Waiting on Docker Builds: Practical Ways I Cut Local Image Build Time by 70%

Concrete, low‑friction techniques to speed up Docker image builds on a laptop — with commands, tradeoffs, and tips for Indian developers facing limited bandwidth and modest machines.

Written by: Rohan Deshpande

[Image: developer working on a laptop with a terminal open and a coffee cup beside it. Credit: Andrea Piacquadio / Pexels]

Two years ago I would nurse a cup of coffee while waiting 10–15 minutes for the dev container to rebuild after a tiny change. That’s time, and patience, you don’t get back. After a few focused experiments I shaved build times down dramatically without changing our CI. If you’re on a modest laptop, behind a flaky company VPN, or on metered home broadband in India, these tactics will help you be faster and a lot less frustrated.

What I learned: most slow Docker builds are avoidable. The fix is not a single command — it’s a set of habits and a few tools that play well together.

Why builds get slow (short list)

- Cache busting: a broad COPY early in the Dockerfile invalidates every later layer whenever any file changes.
- Dependencies (npm/yarn/pip packages) get re-downloaded on every rebuild.
- The legacy builder runs steps serially and reuses cache poorly.
- No shared cache between your laptop and CI, so each machine rebuilds from scratch.

Below are the practical changes I made to address these. Each section has an explicit command or option you can try today.

1. Use BuildKit and buildx — the modern builder

Enable BuildKit (it's faster and better at parallelism):

```shell
# Linux/macOS
export DOCKER_BUILDKIT=1
docker buildx build --load -t myapp:dev .
```

BuildKit runs steps in parallel, gives better cache reuse, and supports advanced features like mounting caches during build. It’s the single biggest win for real-world speed.
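To illustrate the parallelism: in this hypothetical multi-stage Dockerfile (stage names and paths are mine, not from a real project), the deps and assets stages don't depend on each other, so BuildKit builds them concurrently, where the legacy builder would run every step in order:

```dockerfile
# syntax=docker/dockerfile:1.4

# Stage 1: dependency install
FROM node:18 AS deps
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install

# Stage 2: independent of stage 1, so BuildKit runs both in parallel
FROM alpine:3.19 AS assets
WORKDIR /assets
COPY static/ .
RUN gzip -k -9 ./*.css ./*.js

# Final stage waits on both
FROM node:18
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=assets /assets ./public
COPY . .
RUN yarn build
```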

2. Reduce cache pollution with .dockerignore and smart COPY ordering

A large COPY that includes your node_modules or .git can bust the cache all the time. Add a .dockerignore that mirrors .gitignore, plus things like local logs, .env.local, and editor folders.
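A reasonable starting point for a Node project (trim to fit your repo):

```
# .dockerignore: keep the build context small and cache-friendly
.git
node_modules
dist
*.log
.env.local
.vscode/
.idea/
```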

Then order Dockerfile steps so infrequently changing things come first. Example:

```dockerfile
FROM node:18 AS base
WORKDIR /app
COPY package.json yarn.lock ./   # stable: changes rarely
RUN yarn install
COPY . .                         # changing files
RUN yarn build
```

This way "yarn install" stays cached unless package.json or yarn.lock changes.

3. Leverage mount=type=cache for package managers and compilers

With BuildKit you can use temporary caches during the build to avoid re-downloading dependencies:

```dockerfile
# syntax=docker/dockerfile:1.4
FROM node:18
WORKDIR /app
COPY package.json yarn.lock ./
RUN --mount=type=cache,target=/root/.cache/yarn \
    yarn install
COPY . .
RUN yarn build
```

On my laptop this cut repeated install time from ~40s to ~8s.
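The same cache mount works for other package managers. For a Python service, for example (pip's default cache directory is /root/.cache/pip when building as root):

```dockerfile
# syntax=docker/dockerfile:1.4
FROM python:3.11
WORKDIR /app
COPY requirements.txt .
# Cache downloaded wheels across builds without baking them into a layer
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY . .
```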

4. Use a registry cache for cross-machine builds

If you switch between CI and local dev, push intermediate cache layers to a registry:

```shell
docker buildx build \
  --cache-to=type=registry,ref=ghcr.io/myorg/myapp-cache:latest \
  --cache-from=type=registry,ref=ghcr.io/myorg/myapp-cache:latest \
  --push -t myapp:ci .
```

Now your CI and local machines can pull cached layers instead of rebuilding everything. In India, where bandwidth can be a constraint, hosting cache images on a nearer registry (GitHub Container Registry, GitLab, or a Harbor instance) noticeably cuts re-download time.
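When registry round-trips are too expensive on a metered connection, buildx can also export the cache to a local directory instead, for example on an external drive or a synced folder (the /tmp path below is just an example):

```shell
# Write the build cache to a local directory...
docker buildx build \
  --cache-to=type=local,dest=/tmp/buildcache \
  --load -t myapp:dev .

# ...and reuse it in later builds, even after a prune
docker buildx build \
  --cache-from=type=local,src=/tmp/buildcache \
  --load -t myapp:dev .
```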

5. Cache compilation artifacts (ccache, pip wheelhouse)

If you build native extensions, enable ccache or create a pip wheelhouse and mount it as a cache. For example, for C/C++:

```dockerfile
RUN --mount=type=cache,target=/root/.ccache \
    make
```

This keeps object files between builds and saves minutes on large codebases.
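A slightly fuller sketch for a C/C++ image — assuming a Debian-based toolchain, where ccache's compiler shims live in /usr/lib/ccache, and a Makefile that picks the compiler up from PATH:

```dockerfile
# syntax=docker/dockerfile:1.4
FROM gcc:13
RUN apt-get update && apt-get install -y --no-install-recommends ccache \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /src
COPY . .
# Put ccache's compiler shims first on PATH and persist its cache
ENV PATH="/usr/lib/ccache:$PATH" CCACHE_DIR=/root/.ccache
RUN --mount=type=cache,target=/root/.ccache \
    make -j"$(nproc)"
```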

6. Offload heavy steps to multi-stage builds or prebuilt images

If you have a heavy toolchain, build it once into a base image and reuse it. Say you compile a binary used across services: build it in a dedicated image and FROM that image in multiple services. That turns frequent small app edits into fast, layer-only builds.
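A minimal sketch of the pattern, with hypothetical image and file names:

```dockerfile
# --- toolchain/Dockerfile (built once, pushed as ghcr.io/myorg/build-base:1.0) ---
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake \
    && rm -rf /var/lib/apt/lists/*

# --- services/api/Dockerfile (each service reuses the prebuilt base) ---
FROM ghcr.io/myorg/build-base:1.0
WORKDIR /app
COPY . .
RUN make   # only these cheap, app-specific layers rebuild on everyday edits
```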

7. Use layer-specific cache hints (when needed)

For troublesome steps you can control cache busting deliberately with build args or labels. This helps when you want reproducibility but still want cache benefits.
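One common version of this trick (the arg name CACHE_DATE below is my own, for illustration): a build arg placed mid-Dockerfile only invalidates the layers after it, so you can force a refresh of OS packages without losing earlier cached layers.

```dockerfile
FROM debian:bookworm
# Everything above this ARG stays cached regardless of its value
ARG CACHE_DATE=unset
# Pass --build-arg CACHE_DATE=2024-05-01 to bust the cache
# from this point down only
RUN apt-get update && apt-get upgrade -y
```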

8. Be pragmatic about disk and cleanup

Build caches grow. On laptops with limited SSD space, schedule docker system prune -af and remove dangling images occasionally. Tradeoff: you'll lose the cache, and some builds will be slow until it rebuilds.
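If a full prune is too blunt, these subcommands reclaim space more selectively (flag names as in recent Docker releases; check docker builder prune --help on your version):

```shell
# See what's eating disk
docker system df

# Drop only build cache older than a week
docker builder prune --filter "until=168h"

# Or cap BuildKit's cache size instead of wiping it
docker buildx prune --keep-storage 10GB
```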

Real constraints and tradeoffs

None of this is free. Registry caches cost bandwidth and storage, and pushing cache layers over home broadband can itself be slow. BuildKit cache mounts live only on the machine that created them, so they don't help a fresh CI runner. Aggressive pruning trades disk space for slower first builds. Pick the techniques that match your actual bottleneck: bandwidth, disk, or CPU.

Quick checklist to try tonight

- Enable BuildKit and build with docker buildx build.
- Add a .dockerignore that at least excludes .git and node_modules.
- Reorder your Dockerfile so dependency installation happens before COPY . .
- Add --mount=type=cache for your package manager.
- If you share builds with CI, wire up --cache-to/--cache-from against a registry.
- Prune old build cache when disk gets tight.

A final note: speed without predictability is frustrating. I aim for local fast builds that mirror CI behavior closely. That means keeping base images and toolchain versions pinned, and occasionally forcing a clean build in CI. The result: faster iteration on my laptop, fewer broken surprises later, and more time for the work that actually matters.

If you want, I can look at your Dockerfile and suggest the one or two edits that will save the most time.