Why I stopped running Postgres and Redis on my laptop for development

I moved dev databases off my laptop to a tiny remote host (₹300/mo), cut iteration time, and learned the hard way about offline failures and data hygiene.

Written by: Arjun Malhotra

Photo by NeONBRAND on Unsplash

I remember the afternoon I finally decided enough was enough. My 2018 MacBook Pro had spent the morning huffing through three local Postgres instances, a Redis server, and a dozen Docker containers. Every code change triggered a 30‑second wait while health checks spun up. CI passed, but on my machine I could make one change and walk to the tea stall before the test suite finished. My cycle time was a joke.

I was also the person who wanted “realistic data” locally. So I had a 2 GB subset of production, which made every migration and local query feel like it was running in slow motion. And in our Bengaluru office, with flaky internal VPNs and intermittent Wi‑Fi, I was wasting hours on simple tasks because my laptop kept swapping and thrashing its disk.

I stopped. Not all at once, but over three months I moved Postgres and Redis off my laptop and onto a tiny remote host. The result was faster iteration for everyday work and far fewer environment headaches. It also introduced a new class of failure — one that bit me enough to change the setup again. Here’s what I did, why, and the tradeoffs.

Why I moved databases off my laptop

How it’s wired (practical and blunt)

I use Tailscale for access (we’re a small team and didn’t want to fight corporate VPN policies). The VPS runs Postgres + pgbouncer + Redis. Nightly cron jobs pull a scrubbed production dump, load it into a separate schema, and rotate it. My local environment points to the dev host via DB_HOST and REDIS_HOST env vars when I’m in “remote mode”.
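The switch itself is tiny. Here’s a sketch of how I’d resolve connection strings from those env vars; the `dev` user, `devdb` name, and pgbouncer’s conventional 6432 port are illustrative assumptions, not my exact setup:

```python
import os

def dsn_from_env(env=None):
    """Build Postgres/Redis connection strings from DB_HOST / REDIS_HOST.

    Falls back to localhost ("local mode") when the vars are unset.
    User, db name, and the pgbouncer port (6432) are illustrative.
    """
    env = os.environ if env is None else env
    db_host = env.get("DB_HOST", "127.0.0.1")
    redis_host = env.get("REDIS_HOST", "127.0.0.1")
    return {
        "postgres": f"postgresql://dev@{db_host}:6432/devdb",
        "redis": f"redis://{redis_host}:6379/0",
    }
```

Setting `DB_HOST=devbox.tailnet` in a shell profile is then the whole of “remote mode”; unset it and everything points back at localhost.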

Pgbouncer is the secret sauce — it keeps connection count sane and makes the remote DB behave like a local one. I also keep a tiny, in‑repo SQLite fallback for unit tests so I can run a subset of tests completely offline (more on that below).
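For context, the load-bearing setting is transaction pooling. A minimal pgbouncer config sketch; the pool sizes and paths here are illustrative, not our exact values:

```ini
; pgbouncer.ini: transaction pooling keeps remote connection counts sane
[databases]
devdb = host=127.0.0.1 port=5432 dbname=devdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 100
```

With `pool_mode = transaction`, a hundred chatty local processes share a handful of real server connections, which is what makes the remote DB feel local.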

What actually changed for my day-to-day

My edit→test cycle dropped from 30–40 seconds to often under 5, because my laptop no longer hit the I/O wall. Docker builds were faster. Code that interacted with the DB felt reliable: no weird “localhost vs 127.0.0.1” surprises, no stray processes holding ports. New devs cloned, ran a single bootstrap script, and were pointing to the same dev DB within 10 minutes.

A practical benefit nobody warned me about: battery life improved. Less disk thrash = less power draw. That matters when you’re slogging through a late-night deploy on train Wi‑Fi.

The day it failed (and why it mattered)

Two weeks after we moved, Bengaluru had a partial fiber outage in my area. My home Wi‑Fi was down for five hours. I opened my editor, tried to run the test suite, and was stuck: the remote DB was unreachable. My fallback? I had to cobble tests together to run against the tiny in‑repo SQLite copy. That worked for unit tests, but a whole class of integration checks — migrations, complex JSONB queries, connection pooling behaviour — simply couldn’t run.

I’d ignored one obvious constraint: patchy connectivity in Indian metros and strict VPN policies at client sites. For three days afterwards I split my workflows: unit tests ran locally against the SQLite fallback, while anything touching Postgres-specific behaviour waited until connectivity returned.
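That split boils down to a reachability probe. A minimal sketch, assuming a hypothetical `devbox` host; the probe is injected so the selection logic can be tested offline:

```python
import socket

def can_reach(host, port, timeout=1.0):
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_database_url(probe=can_reach, host="devbox", port=6432):
    # Use the remote pgbouncer when reachable; otherwise fall back to
    # the in-repo SQLite file and skip integration-only tests.
    if probe(host, port):
        return f"postgresql://dev@{host}:{port}/devdb"
    return "sqlite:///./test-fallback.db"
```

Wiring this into the test bootstrap means an outage degrades you to the unit-test subset instead of stopping you cold.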

That outage changed my mentality. Moving services off your laptop helps speed — but not if you can’t reach them.

Security and hygiene I wish I’d done earlier

I learned the hard way that “cheap and accessible” often becomes lax. Initially, the VPS had an open SSH port and pgbouncer was misconfigured. We fixed it: firewall rules restricting access to Tailscale IPs, DB passwords rotated monthly, and a scrub script enforced on nightly snapshots (strip PII, anonymize emails and UPI IDs, remove large blobs). If you’re copying production data, assume compliance questions will come. Address them early.
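The scrub itself is unglamorous. A sketch of the per-row anonymization, with hypothetical column names (`email`, `upi_id`, `avatar_blob`):

```python
import hashlib

def scrub_row(row):
    """Anonymize PII in one user row before it lands in the dev schema."""
    out = dict(row)
    if out.get("email"):
        # Deterministic but unlinkable: replace with a hash-derived address
        # so foreign keys and uniqueness constraints still hold.
        digest = hashlib.sha256(out["email"].encode()).hexdigest()[:12]
        out["email"] = f"user_{digest}@example.com"
    if "upi_id" in out:
        out["upi_id"] = None  # drop payment handles entirely
    out.pop("avatar_blob", None)  # strip large binary blobs
    return out
```

Determinism matters here: the same production email always maps to the same fake address, so joins across tables in the snapshot stay consistent.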

Tradeoffs you should accept

- Your feedback loop now depends on the network: an outage stops integration work cold.
- Security and data hygiene become your job: scrubbing, access control, password rotation.
- You still need an offline fallback, and it will never cover everything the real database does.

If you want to try this quickly

Bootstrap with a ₹300–₹500/month VPS, install Postgres and pgbouncer, set up a Tailscale ACL, and automate a nightly scrubbed dump. Use environment flags to switch between LOCAL and REMOTE modes. Add a tiny SQLite fallback for unit tests. Test the “offline” case once a month by unplugging your Wi‑Fi and forcing yourself to work with the fallback.
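The nightly refresh can be one small script run from cron. A sketch, assuming hypothetical hosts and paths (`prod-replica`, `/opt/scrub.sql`); not a drop-in job:

```shell
#!/usr/bin/env bash
# refresh-dev-db.sh: run nightly, e.g. 30 2 * * * deploy /opt/refresh-dev-db.sh
# Hosts, paths, and scrub.sql are illustrative assumptions.
set -euo pipefail

pg_dump -h prod-replica -Fc proddb -f /var/dumps/nightly.dump   # dump a replica, not prod
pg_restore -h 127.0.0.1 -d devdb --clean --if-exists /var/dumps/nightly.dump
psql -h 127.0.0.1 -d devdb -f /opt/scrub.sql                    # strip PII before anyone reads it
```

Dumping from a read replica keeps the job off the production primary, and running the scrub in the same transaction of work means an unscrubbed snapshot never sits on the dev box overnight.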

What I walked away with

My iteration time improved in ways I could measure — fewer distractions, faster builds, less thermal throttling. But the bigger lesson was about tradeoffs: optimizing for speed revealed an availability blind spot I’d ignored. Cheap remote DBs are a net win for a small team in India, so long as you accept the extra work to make offline development and data hygiene first-class citizens.

I still prefer the remote setup for most work. But now I carry an honest fallback: a tiny local DB image, an SQLite test suite, and a mental checklist to expect outages. That combination feels realistic for Indian work environments — fast when the net is good, and survivable when it’s not.