A delightful PostgreSQL setup for early startups

Having talked to a number of friends in startups, I've noticed a common trend: painful Postgres developer setups.

In the past week, I asked several friends how their startup teams use databases during local development (all use Postgres so I'll focus on that here):

  • 4 said they develop on the production database
  • 2 said they develop on the staging database
  • 1 said they have a local database and some scripts to seed sample data, but it's very painful

In part, this makes sense. Most of us building products only have a working knowledge of databases & Postgres. And putting too much effort into a nice setup is not worth the time when you are early stage and trying to launch a product. Once the team starts to grow, though, a poor Postgres setup can be very painful and slow down development.

I thought I'd share a fairly simple setup that I've come to love. It doesn't work for every use case, but it covers most.

Postgres provider

Where you host Postgres isn't the most important part of this setup, but I'll plug Neon here. It's a serverless, cloud-hosted Postgres database with lots of features like branching and history. It's just very easy to use and has a free tier.

In the very early stages of a startup, I'd typically create a single prod database, then add a staging database once you're past the MVP.
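Whichever provider you pick, keep the pieces of your connection string handy (user, password, host, database name); the script later in this post needs each of them individually. As a rough sketch, using the placeholder values from later in this post and the default names Neon assigns:

# Anatomy of a Postgres connection string (placeholders, not real credentials):
#   postgresql://<user>:<password>@<host>/<database>
# With Neon's default user and database name, that looks roughly like:
#   postgresql://neondb_owner:[password]@[your-postgres-host]/neondb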

Local setup

Locally, I use Docker to run Postgres. This is my docker-compose.yml:

version: '3.9'
name: zap-local-postgres
services:
  db:
    image: postgres:16
    restart: always
    container_name: zap-dev-db
    environment:
      POSTGRES_USER: my_owner
      POSTGRES_PASSWORD: postgres
    ports:
      # 5433 on the host so it won't clash with a natively installed Postgres on 5432
      - '5433:5432'
    volumes:
      # Anything in ./db-init runs when the container initializes a fresh data directory
      - ./db-init:/docker-entrypoint-initdb.d/
  # A second, throwaway instance for migration tools that want a "shadow" database
  shadow_db:
    image: postgres:16
    restart: always
    container_name: zap-dev-shadow-db
    ports:
      - '5434:5432'
    environment:
      POSTGRES_USER: my_owner
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: shadow
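A quick aside on the shadow_db service, since the compose file doesn't explain itself: migration tools such as Prisma can use a throwaway "shadow" database to validate migrations, and running it as a second local container keeps that fully self-contained. If your tool wants one, point it at the second container; with the values from the compose file above, that's (the exact env var name depends on your tool):

SHADOW_DATABASE_URL='postgresql://my_owner:postgres@localhost:5434/shadow'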

POSTGRES_USER and POSTGRES_PASSWORD can be whatever you want and will be used to create the user and password for your local database.

The key piece is the volumes section. It mounts the db-init folder from your project into the container at docker-entrypoint-initdb.d. That's a special folder: the Postgres image executes any scripts it finds there whenever it initializes a fresh data directory (which, with the startup command later in this post, is every time you bring the database up).

Now, create that db-init folder in the root of your project. Then, inside it, create a shell script (I'll call it clone-cloud-db.sh, but the name is up to you):

#!/bin/bash
# Runs inside the container during init: clone the cloud database into the local one.
# --no-owner skips ALTER ... OWNER statements for the cloud role, which doesn't exist locally.
PGPASSWORD=[put your hosted postgres password here] pg_dump --no-privileges --no-owner -C -h [your-postgres-host] -U neondb_owner neondb | psql -U my_owner postgres
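Two details of the official Postgres image worth knowing (the file name here is just my example from above): scripts in docker-entrypoint-initdb.d run in alphabetical order, and .sh files are executed if they're marked executable (non-executable ones are sourced instead). Either way works here:

mkdir -p db-init
chmod +x db-init/clone-cloud-db.sh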

What does this script do? It copies your cloud database into your local one. This means you can make whatever development changes you want to your local database without fear of messing up your real databases (whether staging or prod).

To use the script:

  1. Put the password for your cloud database in the PGPASSWORD variable
  2. Put the host for your cloud database in place of [your-postgres-host]
  3. Put the database user and database name in place of neondb_owner and neondb (those are Neon's defaults)

And that's pretty much it! To start up your local database, first add this script to your package.json:

"scripts": {
    "local:db": "docker compose rm --force --stop --volumes && docker compose -f docker-compose.yml up --remove-orphans",
  }

When you run pnpm run local:db, you get a local database whose schema and data exactly match your cloud one.
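Before doing anything risky, you can sanity-check the clone from your host machine. This assumes the compose defaults above (user my_owner, password postgres, host port 5433) and relies on the dump's -C flag having created a local neondb database:

psql 'postgresql://my_owner:postgres@localhost:5433/neondb' -c '\dt'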

You can now make risky migration changes, fill the db with garbage data, etc., and when you need to reset, just re-run the command: the rm --volumes step wipes the data, and the init script re-clones from your cloud database.
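The last piece, and this one depends on your stack, is pointing your app's local environment at the Docker instance instead of the cloud one. Assuming your framework reads a DATABASE_URL env var, that's:

# .env.local (or wherever your framework reads env vars from)
DATABASE_URL='postgresql://my_owner:postgres@localhost:5433/neondb'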

I know this might seem obvious to some more seasoned developers, but I've seen so many startups struggling with their Postgres developer experience that I thought it's worth sharing. 😊