How I stopped reinstalling node_modules for every project (and the ugly tradeoffs I accepted)

I stopped wasting hours and mobile data rebuilding node_modules by sharing a single cached store via overlayfs — how I set it up, what broke, and why I still use it.

Written by: Arjun Malhotra

Photo by Amine M. on Unsplash

It was 9pm, my home Wi‑Fi dropped to a crawl, and I was staring at “npm install” at 0% for the third time that week. My laptop has a 256GB SSD (I paid ₹41,000 for it three years ago) and I split it between work, personal stuff, and a few side projects. Every repo had its own node_modules. Every branch checkout meant another reinstall. Every reinstall ate data when I was tethering. I’d spend an hour waiting for packages while the rest of the team kept coding.

I had tried the usual fixes: pnpm (great), npm ci caches (helpful), and Docker volumes (messy and heavy). The pattern that worked for me in the end was almost embarrassingly low-level: share one cached node_modules store on disk and mount it into projects with overlayfs so each project sees its own view. It cut my reinstall time by 80% and saved mobile data. But it also introduced a few nastier bugs that taught me to be careful.

Why overlayfs, not pnpm?

pnpm solves many problems by design — content‑addressable store, hard links, smaller disk use. I love pnpm. But I work with a lot of existing projects and clients that use npm/yarn. Migrating them wasn’t an option. Docker cached layers help on CI, but on a laptop they were slow and I hated waiting for container churn. Overlayfs is available on every modern Linux and gives me a middle path: one physical store of packages, per-project writable overlays.

What I built (short)

One shared store of packages on disk, plus a thin per-project overlayfs mount: each repo’s node_modules is an overlay whose lowerdir is the shared store and whose upperdir holds only that project’s differences. One physical copy of the common packages, many per-project views.

The setup (practical, runnable)

This is the essence — not a copy-paste script, but the commands I run manually in a new repo:

  1. Create a shared store (once):

         sudo mkdir -p /var/local/node_store
         sudo chown "$USER" /var/local/node_store

  2. In each project, create the overlay directories and the mount point (overlayfs needs the target directory to exist), then mount:

         mkdir -p node_modules .overlay/{upper,work}
         sudo mount -t overlay overlay \
           -o lowerdir=/var/local/node_store,upperdir="$(pwd)/.overlay/upper",workdir="$(pwd)/.overlay/work" \
           node_modules

  3. Run npm install once. New files land in the upperdir (overlayfs never writes to the lowerdir itself; the shared store fills up over time as I promote common packages into it). Subsequent installs are fast because most files are already in the lowerdir.
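Stitched together, the steps above make a small per-project helper. This is a sketch rather than the exact script I use: the store path and script name are my conventions, and since the mount needs root, the script just prints the mount command when run unprivileged:

```shell
#!/bin/sh
# mount-node-overlay.sh (hypothetical name): the setup steps glued together for one project
set -eu

STORE=/var/local/node_store          # my shared-store location; pick your own
PROJ=${1:-$PWD}                      # project root, defaults to the current directory

# per-project overlay dirs; the mount target itself must exist too
mkdir -p "$PROJ/node_modules" "$PROJ/.overlay/upper" "$PROJ/.overlay/work"

OPTS="lowerdir=$STORE,upperdir=$PROJ/.overlay/upper,workdir=$PROJ/.overlay/work"

if [ "$(id -u)" -eq 0 ] && [ -d "$STORE" ]; then
  mount -t overlay overlay -o "$OPTS" "$PROJ/node_modules"
else
  # unprivileged (or no store yet): show what would run instead of failing
  echo "run: sudo mount -t overlay overlay -o $OPTS $PROJ/node_modules"
fi
```

After the mount, npm install in that project writes only into .overlay/upper; everything already in the store is read through for free.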

I automated mounting via a systemd --user unit tied to the repo path so the mount happens when I cd into the project. It’s messy but reliable for me.
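For reference, the automation is shaped like a systemd mount unit plus an automount unit keyed to the path. The files below are illustrative, not my exact ones: the unit file names must match the mount path as escaped by `systemd-escape -p --suffix=mount`, the paths are hypothetical, and depending on your kernel and configuration an unprivileged user unit may not be allowed to mount overlayfs at all, in which case a system unit is the honest fix:

```ini
# ~/.config/systemd/user/home-arjun-work-app-node_modules.mount
[Mount]
What=overlay
Where=/home/arjun/work/app/node_modules
Type=overlay
Options=lowerdir=/var/local/node_store,upperdir=/home/arjun/work/app/.overlay/upper,workdir=/home/arjun/work/app/.overlay/work

# ~/.config/systemd/user/home-arjun-work-app-node_modules.automount
[Automount]
Where=/home/arjun/work/app/node_modules

[Install]
WantedBy=default.target
```

With the automount enabled, the overlay comes up the first time anything touches node_modules, which is what makes it feel like it happens when I cd into the project.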

Why it actually saved time and data

Most of any given install was already sitting in the lowerdir, so npm only had to fetch and write the delta for each project. That is where the 80% cut in reinstall time came from, and it is why tethered installs stopped eating my mobile data: bytes I had downloaded once were never downloaded again.

What broke (and this is important)

This is the honest part. Overlayfs is not magic. In practice the recurring costs were native builds that needed fixing after being served from the shared store, file-watcher oddities across the overlay mount, and the occasional stale mount left behind by an abrupt laptop suspend.

An honest failure: I tried to be clever and build a global symlink manager that redirected package resolution into the shared store. It lasted two weeks and then broke our CI when a package-lock mismatch surfaced. I rolled it back. The overlayfs approach is simpler and easier to reason about.
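Native builds were the most common casualty, and compiled addons are easy to find before they bite. A rough sketch of the check, with a throwaway fixture standing in for a real project (the `sharp` paths here are fabricated for illustration):

```shell
# sketch: list native addons under a node_modules tree, since those are the
# files most likely to need a rebuild when served from a shared store
list_native_addons() {
  find "$1" \( -name '*.node' -o -name 'binding.gyp' \) 2>/dev/null
}

# demo against a fabricated fixture instead of a real project
demo=$(mktemp -d)
mkdir -p "$demo/node_modules/sharp/build/Release"
touch "$demo/node_modules/sharp/build/Release/sharp.node"
list_native_addons "$demo/node_modules"
```

Anything this prints is a candidate for breakage when the prebuilt binary came from a different project or toolchain.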

Why I still prefer this in 2026

It works with every unmodified npm and yarn repo I am handed, it needs nothing beyond a stock Linux kernel, and the hours and mobile data it keeps saving have outlasted the annoyance of the bugs it caused.

When not to use it

If you can migrate a project to pnpm, do that instead; it solves the same problem by design. Skip this too if you are not on Linux, or if a repo leans heavily on native modules you would rather not keep rebuilding.

If I did it again

I’d spend the time upfront to make the mount/unmount lifecycle bulletproof. My systemd user service handles most cases, but a few times a stray mount stuck after an abrupt laptop suspension and I had to chown things back. I’d also push for pnpm where I can — it solves many problems cleanly — but keep overlayfs for repos I can’t change.
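That bulletproofing starts with teardown. A sketch of the unmount helper I would write on day one (hypothetical script name; `umount -l` is the lazy-unmount fallback for mounts that survive a suspend):

```shell
# unmount-node-overlay.sh (hypothetical name): safe to run whether or not the overlay is up
unmount_overlay() {
  dir=${1:-node_modules}
  if mountpoint -q "$dir" 2>/dev/null; then
    # try a normal unmount first; fall back to a lazy unmount for wedged mounts
    sudo umount "$dir" 2>/dev/null || sudo umount -l "$dir"
  else
    echo "$dir: not mounted, nothing to do"
  fi
}

unmount_overlay "${1:-node_modules}"
```

Running it from the project root before a suspend, or wiring it into the systemd unit's ExecStop, avoids the stale-mount cleanup entirely.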

Takeaway (one honest thing I walked away with)

Fast local iteration is worth a little mess. I traded a few weird bugs and a tiny bit of maintenance for hours back every week and a lot less mobile data spent waiting on installs. If you’re on Linux, have limited SSD space, and hate reinstalling node_modules while tethering, try an overlayfs-backed shared store for a week. But do expect to fix native builds and watcher oddities. It’s not a silver bullet — it’s a practical, slightly ugly hack that actually got me to ship more.