How I stopped reinstalling node_modules for every project (and the ugly tradeoffs I accepted)
I stopped wasting hours and mobile data rebuilding node_modules by sharing a single cached store via overlayfs — how I set it up, what broke, and why I still use it.
Written by: Arjun Malhotra
It was 9pm, my home Wi‑Fi dropped to a crawl, and I was staring at “npm install” at 0% for the third time that week. My laptop has a 256GB SSD (I paid ₹41,000 for it three years ago) and I split it between work, personal stuff, and a few side projects. Every repo had its own node_modules. Every checkout change meant another reinstall. Every reinstall ate data when I was tethering. I’d spend an hour waiting for packages while the rest of the team kept coding.
I had tried the usual fixes: pnpm (great), npm ci caches (helpful), and Docker volumes (messy and heavy). The pattern that worked for me in the end was almost embarrassingly low-level: share one cached node_modules store on disk and mount it into projects with overlayfs so each project sees its own view. It cut my reinstall time by 80% and saved mobile data. But it also introduced a few nastier bugs that taught me to be careful.
Why overlayfs, not pnpm?
pnpm solves many problems by design — content‑addressable store, hard links, smaller disk use. I love pnpm. But I work with a lot of existing projects and clients that use npm/yarn. Migrating them wasn’t an option. Docker cached layers help on CI, but on a laptop they were slow and I hated waiting for container churn. Overlayfs is available on every modern Linux and gives me a middle path: one physical store of packages, per-project writable overlays.
What I built (short)
- A central cache directory: /var/local/node_store (owned by me, ~8–10GB).
- Per-project mountpoints using overlayfs: project/node_modules is a mount that uses the central store as lowerdir and a tiny upperdir for project-specific files.
- A small systemd user service that mounts overlays on repo checkout and unmounts them on cleanup.
- A per-project postinstall hook that copies native-build artifacts into upperdir when necessary.
The setup (practical, runnable)
This is the essence — not a copy-paste script, but the commands I run manually in a new repo:
- Create the shared store (once): sudo mkdir -p /var/local/node_store && sudo chown $USER /var/local/node_store. (You need sudo because /var/local is root-owned.)
- In each project, create the overlay directories and the mountpoint: mkdir -p .overlay/{upper,work} node_modules. Then mount: sudo mount -t overlay overlay -o lowerdir=/var/local/node_store,upperdir=$(pwd)/.overlay/upper,workdir=$(pwd)/.overlay/work node_modules. (The mount fails if node_modules does not already exist.)
- Run npm install once; it populates the upperdir, and the shared store over time. Subsequent installs are fast because most files are already in the lowerdir.
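Wrapped up, the steps above look roughly like this. This is a sketch, not my actual script: the dry-run printing of the mount command is my addition so you can inspect it before running it under sudo, and the paths mirror the conventions above.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Shared store location (same as in the steps above).
STORE=/var/local/node_store

# Create the per-project overlay directories and the node_modules mountpoint.
prepare_project() {
  local proj=$1
  mkdir -p "$proj"/.overlay/upper "$proj"/.overlay/work "$proj"/node_modules
}

# Build (and print) the overlayfs mount command for a project directory,
# so it can be reviewed before running under sudo.
overlay_mount_cmd() {
  local proj=$1
  echo "mount -t overlay overlay -o lowerdir=$STORE,upperdir=$proj/.overlay/upper,workdir=$proj/.overlay/work $proj/node_modules"
}

# Usage:
#   prepare_project ~/code/myapp
#   sudo $(overlay_mount_cmd ~/code/myapp)
```

After the mount, the project sees a merged node_modules: reads hit the shared lowerdir, writes land in its private upperdir.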
I automated mounting via a systemd --user unit tied to the repo path so the mount happens when I cd into the project. It's messy but reliable for me.
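For reference, a minimal sketch of what such a templated --user unit could look like. The unit name and paths are hypothetical, and it assumes a NOPASSWD sudo rule for mount/umount, since user units can't mount on their own; my real setup has more error handling.

```ini
# ~/.config/systemd/user/node-overlay@.service (hypothetical name)
# Start it with the systemd-escaped project path as the instance:
#   systemctl --user start "node-overlay@$(systemd-escape -p ~/code/myapp).service"
[Unit]
Description=overlayfs node_modules for %f

[Service]
Type=oneshot
RemainAfterExit=yes
# %f expands to the unescaped instance name as an absolute path.
ExecStart=/usr/bin/sudo /usr/bin/mount -t overlay overlay -o lowerdir=/var/local/node_store,upperdir=%f/.overlay/upper,workdir=%f/.overlay/work %f/node_modules
ExecStop=/usr/bin/sudo /usr/bin/umount %f/node_modules
```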
Why it actually saved time and data
- No more redownloading tarballs for every project. The shared store keeps the files.
- Disk space improved: multiple projects no longer duplicate identical package files.
- CI parity: I still run fresh installs on CI, but desktop iteration is snappy. On a 5GB mobile tether recharge (₹399), I stopped burning it re‑installing lodash dozens of times.
What broke (and this is important)
This is the honest part. Overlayfs is not magic.
- Native modules. Packages that have compiled binaries (node-gyp) sometimes end up with ABI mismatches. I fixed this by copying built artifacts into the upperdir for that project and running npm rebuild on the upperdir. Ugly, and it meant I still did some local builds.
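The per-project fix is small enough to sketch. These helpers are illustrative, not my exact script; they assume the overlay is already mounted, so anything npm rebuild writes goes to the upperdir rather than the shared store.

```shell
#!/usr/bin/env bash
set -euo pipefail

# List compiled addons (.node files) under a project's node_modules; these
# are the files that can hit ABI mismatches when served from a shared lowerdir.
list_native_addons() {
  find "$1/node_modules" -type f -name '*.node' 2>/dev/null || true
}

# Recompile them in place; with the overlay mounted, the rebuilt binaries
# are written to the project's upperdir and shadow the shared copies.
rebuild_native() {
  (cd "$1" && npm rebuild)
}
```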
- File watchers. Tools like chokidar and jest in watch mode occasionally missed file changes because of how inotify behaves with mounts. The workaround: use polling (watchman helped) or run watchers from the project root, not inside node_modules.
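If your toolchain watches files through chokidar (webpack-dev-server and many others do), forcing polling is a one-liner: chokidar honors these environment variables. The values here are examples to tune, not defaults.

```shell
# Force chokidar-based watchers to poll instead of relying on inotify,
# which can misbehave across mount boundaries.
export CHOKIDAR_USEPOLLING=1
export CHOKIDAR_INTERVAL=700   # polling interval in ms: lower = snappier, more CPU
```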
- Version drift. When two projects depend on different major versions of the same package, deduping into a shared lowerdir can hide the mismatch. My rule: don’t dedupe across major versions — encode that in the mount logic. If a repo needs a different major version, it gets a fresh private lowerdir.
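The "no dedupe across majors" rule can be encoded as a tiny helper that maps a package spec to a per-major store path to use as the lowerdir. The store layout here (name/major under the store root) is my own convention, not anything npm defines.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Root of the shared store; the <name>/<major> layout below is a convention
# for keeping different major versions physically separate.
STORE_ROOT=/var/local/node_store

# Map a name@version spec to the per-major store path that should serve
# as lowerdir, so repos on different majors never share files.
store_path_for() {
  local spec=$1                  # e.g. react@18.2.0 or @types/node@20.1.0
  local name=${spec%@*}          # strip from the LAST @: keeps scoped names intact
  local version=${spec##*@}
  local major=${version%%.*}
  echo "$STORE_ROOT/$name/$major"
}
```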
- Cross-platform team pain. My teammates on macOS/Windows didn’t use this. That’s fine for me locally, but it meant I still had to be mindful of “works on my machine” surprises.
An honest failure: I tried to be clever and make a global symlink manager to redirect package.json requests into the shared store. It lasted two weeks and then broke our CI when a package-lock mismatch surfaced. I rolled it back. The overlayfs approach is simpler and easier to reason about.
Why I still prefer this in 2026
- Predictability for local iteration. I can switch projects without waiting. For me that beats the marginal gains of a perfect reproducible environment because the team already runs reproducible CI.
- Cheap and local. No extra monthly cost. No VPS. No new service to maintain.
- Works with legacy projects. I didn’t have to convert repositories to pnpm or expect clients to change tooling.
When not to use it
- If you rely heavily on native modules and can’t tolerate per-project rebuilds.
- If you need perfect parity with CI for every local test (you should still run CI).
- On shared filesystems like NFS — overlayfs doesn’t play well there.
If I did it again
I’d spend the time upfront to make the mount/unmount lifecycle bulletproof. My systemd user service handles most cases, but a few times a stray mount stuck after an abrupt laptop suspension and I had to chown things back. I’d also push for pnpm where I can — it solves many problems cleanly — but keep overlayfs for repos I can’t change.
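The cleanup I do after a bad suspend fits in a few lines; a sketch under my conventions (the /node_modules filter is mine, findmnt itself is standard util-linux).

```shell
#!/usr/bin/env bash
set -euo pipefail

# List overlay mounts whose target ends in /node_modules.
stray_overlays() {
  findmnt -rn -t overlay -o TARGET | grep '/node_modules$' || true
}

# Lazily detach each one; umount -l works even when a suspended process
# still holds the mount open.
cleanup_strays() {
  stray_overlays | while read -r target; do
    sudo umount -l "$target"
  done
}
```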
Takeaway (one honest thing I walked away with)
Fast local iteration is worth a little mess. I traded a few weird bugs and a tiny bit of maintenance for hours back every week and a lot less mobile data spent waiting on installs. If you’re on Linux, have limited SSD space, and hate reinstalling node_modules while tethering, try an overlayfs-backed shared store for a week. But do expect to fix native builds and watcher oddities. It’s not a silver bullet — it’s a practical, slightly ugly hack that actually got me to ship more.