SQLite in Production: Lessons from Two Years
Two years ago we moved a service handling around 800 requests per second from PostgreSQL to SQLite. People thought we were crazy. Here is what actually happened.
Why We Switched
Our service was a read-heavy metadata store. Roughly 95% reads, 5% writes. The Postgres instance was on a separate server, adding network latency to every query. We were paying for connection pooling, replication, and a managed database service we did not really need.
SQLite eliminated all of that. The database file sits on the same disk as the application. Reads are a function call, not a network round trip. For our access pattern, this cut p99 latency from 12ms to under 1ms.
What Worked
- WAL mode was essential. It allows concurrent readers while a single writer operates, which fits a 95%-read workload perfectly (see the first sketch after this list).
- Backups turned out to be simpler than expected. `sqlite3_backup_init()` gives you a consistent snapshot without stopping the service; we back up to object storage every hour (see the backup sketch after this list).
- Deployment became trivial. The database is just a file. New deployment means copying the application binary and optionally seeding a fresh database.
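For the record, WAL mode is a single pragma, set once per database file. Here is a minimal sketch against the C API; the filename is illustrative, and the `busy_timeout` pragma is a common companion setting rather than something the post describes:

```c
#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    if (sqlite3_open("metadata.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* WAL mode is persistent: once set, it sticks to the database file. */
    char *err = NULL;
    if (sqlite3_exec(db, "PRAGMA journal_mode=WAL;", NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "pragma failed: %s\n", err);
        sqlite3_free(err);
    }

    /* Make writers wait up to 5s instead of failing fast with SQLITE_BUSY. */
    sqlite3_exec(db, "PRAGMA busy_timeout=5000;", NULL, NULL, NULL);

    sqlite3_close(db);
    return 0;
}
```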
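The backup call works like this: `sqlite3_backup_init()` ties a destination handle to the live source, `sqlite3_backup_step()` copies pages, and `sqlite3_backup_finish()` releases the handle. A trimmed sketch; the destination path is illustrative and the upload to object storage is left out:

```c
#include <sqlite3.h>

/* Copy the live database into dest_path without stopping the service. */
int backup_db(sqlite3 *src, const char *dest_path) {
    sqlite3 *dest;
    if (sqlite3_open(dest_path, &dest) != SQLITE_OK) {
        sqlite3_close(dest);
        return 1;
    }

    sqlite3_backup *bak = sqlite3_backup_init(dest, "main", src, "main");
    if (bak) {
        /* -1 copies all remaining pages in one step; pass a smaller page
           count and loop if you want to yield to writers in between. */
        sqlite3_backup_step(bak, -1);
        sqlite3_backup_finish(bak);
    }
    int rc = sqlite3_errcode(dest);
    sqlite3_close(dest);
    return rc == SQLITE_OK ? 0 : 1;
}
```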
What Bit Us
- Write contention under load. SQLite serializes writes, so write latency would spike during bulk import jobs. We solved this by batching writes into transactions of 500 rows (see the batching sketch after this list).
- File locking on NFS does not work. We learned this the hard way when someone mounted a shared volume: SQLite depends on POSIX advisory locks, which many NFS implementations get wrong, so SQLite and network filesystems do not mix.
- Schema migrations need care. `ALTER TABLE` in SQLite is limited compared to Postgres. Adding a column is fine, and recent versions can rename (3.25+) and drop (3.35+) columns, but changing a column's type or constraints still requires recreating the table (see the migration sketch after this list).
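The batching fix from the first item above is just transaction boundaries: each standalone INSERT in SQLite is its own transaction with its own fsync, so grouping rows amortizes that cost. A sketch under assumed names (a hypothetical items table with a name column); only the 500-row batch size comes from our import job:

```c
#include <sqlite3.h>

/* Insert `count` rows in batches of 500, one transaction per batch.
   The table and column (items, name) are hypothetical. */
int bulk_insert(sqlite3 *db, const char **names, int count) {
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(db, "INSERT INTO items (name) VALUES (?);",
                           -1, &stmt, NULL) != SQLITE_OK)
        return 1;

    for (int i = 0; i < count; i++) {
        if (i % 500 == 0)
            sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);

        sqlite3_bind_text(stmt, 1, names[i], -1, SQLITE_STATIC);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);
        sqlite3_clear_bindings(stmt);

        /* Commit at the batch boundary or after the final row. */
        if (i % 500 == 499 || i == count - 1)
            sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);
    }

    sqlite3_finalize(stmt);
    return 0;
}
```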
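And when a migration does call for recreating the table, the safe shape (roughly what SQLite's own ALTER TABLE documentation recommends) is create-new, copy, drop-old, rename, inside one transaction. A sketch with a hypothetical items table shedding a legacy column:

```c
#include <sqlite3.h>

/* Recreate-table migration: drop the hypothetical `legacy` column from
   `items`. If other tables reference this one via foreign keys, consult
   the full procedure in the SQLite ALTER TABLE docs. */
int drop_legacy_column(sqlite3 *db) {
    const char *sql =
        "BEGIN;"
        "CREATE TABLE items_new (id INTEGER PRIMARY KEY, name TEXT NOT NULL);"
        "INSERT INTO items_new (id, name) SELECT id, name FROM items;"
        "DROP TABLE items;"
        "ALTER TABLE items_new RENAME TO items;"
        "COMMIT;";
    char *err = NULL;
    if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
        sqlite3_exec(db, "ROLLBACK;", NULL, NULL, NULL);
        sqlite3_free(err);
        return 1;
    }
    return 0;
}
```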
Would I Do It Again?
For the right workload, absolutely. SQLite is not a replacement for Postgres or MySQL in general. But for single-node, read-heavy services where you control the deployment, it is genuinely excellent. The operational simplicity alone is worth it.