# Environments (dev / staging / prod)
Carabase runs against three Postgres databases on the same instance:
| Env | DB name | Purpose | Safe to reset? |
|---|---|---|---|
| dev | `carabase_dev` | Local-loop dev — fast iteration, noisy fixtures | ✅ yes |
| staging | `carabase_staging` | Full eval-seeded corpus for smoke testing | ✅ yes |
| prod | `carabase_prod` | Your real data on the always-on host | ❌ protected |
Each env has its own dotfile:
```
.env.dev        ← copy from .env.dev.example
.env.staging    ← copy from .env.staging.example
.env.production ← copy from .env.production.example
```

All three are gitignored; the `.example` templates are tracked.
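As a sketch, one of the tracked templates might carry the variables mentioned in this doc (`CARABASE_ENV`, `DATABASE_URL`, `HOST_MASTER_KEY`); the values below are placeholders, and the real template is the source of truth:

```sh
# .env.dev.example — copy to .env.dev and fill in real values
CARABASE_ENV=dev
DATABASE_URL=postgres://localhost:5432/carabase_dev
HOST_MASTER_KEY=replace-me
```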
## The resolver

`scripts/load-env.sh <env>` is the single source of truth for the env → file mapping. It's sourced (not exec'd) by every script that needs an env:

```sh
source "$REPO_ROOT/scripts/load-env.sh" "$ENV_NAME"
# → CARABASE_ENV, DATABASE_URL, HOST_MASTER_KEY, etc. now in scope
```

For pnpm scripts, the wrapper is `scripts/with-env.sh <env> -- <cmd>`:

```json
"dev:staging": "./scripts/with-env.sh staging -- tsx watch src/server.ts"
```

Dev falls back to plain `.env` if `.env.dev` doesn't exist yet, so you're not forced to split on day one.
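Assuming that wrapper, the per-env `package.json` scripts would pair each env with the same underlying command; this is an illustrative fragment (`src/db/migrate.ts` is a hypothetical entry point, not confirmed by this doc):

```json
{
  "scripts": {
    "dev:dev": "./scripts/with-env.sh dev -- tsx watch src/server.ts",
    "dev:staging": "./scripts/with-env.sh staging -- tsx watch src/server.ts",
    "db:migrate:dev": "./scripts/with-env.sh dev -- tsx src/db/migrate.ts",
    "db:migrate:staging": "./scripts/with-env.sh staging -- tsx src/db/migrate.ts"
  }
}
```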
## The reset guardrail

`src/db/reset.ts` classifies the `DATABASE_URL` by the db name suffix:

- `*_prod` / `*_production` → prod, refuses to run
- `*_staging` → staging, runs
- `*_dev` → dev, runs
- anything else → unknown, warns but runs (legacy `carabasedb`)
The classifier is pure and unit-tested. To override the prod refusal you must explicitly set `CARABASE_I_KNOW=1`, which logs the bypass alongside the masked URL.
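A pure suffix classifier can be sketched as below; this is an illustrative reconstruction, not the actual contents of `src/db/reset.ts` (the function name and URL parsing are assumptions):

```typescript
type EnvClass = "prod" | "staging" | "dev" | "unknown";

// Classify a DATABASE_URL purely by the database name's suffix.
// Hypothetical sketch of the guardrail in src/db/reset.ts.
function classifyDatabaseUrl(url: string): EnvClass {
  // db name = path segment after the last "/", minus any query string
  const dbName = url.split("/").pop()?.split("?")[0] ?? "";
  if (dbName.endsWith("_prod") || dbName.endsWith("_production")) return "prod";
  if (dbName.endsWith("_staging")) return "staging";
  if (dbName.endsWith("_dev")) return "dev";
  return "unknown"; // e.g. the legacy "carabasedb" name
}
```

Because it inspects only the string, it can be unit-tested without a live connection.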
There is intentionally no `pnpm db:reset:prod` script. If you truly need to wipe prod, you have to type:

```sh
CARABASE_I_KNOW=1 ./scripts/with-env.sh prod -- tsx src/db/reset.ts
```

…yourself. The friction is the point.
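The "masked URL" in the bypass log can be produced with a small helper; this is a hedged sketch (the function name and exact masking format are assumptions, not taken from the repo):

```typescript
// Hypothetical sketch: hide the password in postgres://user:pass@host/db
// before the connection string reaches a log line.
function maskDatabaseUrl(url: string): string {
  return url.replace(/:\/\/([^:@/]+):[^@]+@/, "://$1:***@");
}
```

URLs without credentials pass through unchanged.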
## Per-env pnpm scripts

| Script | What it does |
|---|---|
| `pnpm bootstrap` | First-run setup for a new install (default dev) |
| `pnpm bootstrap --env staging` | Same for staging |
| `pnpm setup:envs` | Create + migrate all three DBs (idempotent) |
| `pnpm dev:dev` | Run the host against the dev DB |
| `pnpm dev:staging` | Run the host against the staging DB |
| `pnpm start:prod` | Run the built host against the prod DB |
| `pnpm db:migrate:<env>` | Run migrations against a specific env |
| `pnpm db:seed:<env>` | Run the hand-written seed against a specific env |
| `pnpm db:seed:eval:<env>` | Run the realistic sam-rivera fixture against dev or staging |
| `pnpm db:reset:dev` | Truncate + reseed dev |
| `pnpm db:reset:staging` | Truncate + reseed staging |
| `pnpm backup:<env>` | Encrypted nightly backup |
## Why separate databases on the same instance?

Cheapest blast-radius isolation that still protects prod. A dev query physically cannot `SELECT` from prod rows — the connection string points at a different database. A `DROP DATABASE` against the wrong handle could still hurt, but that's what the backup pipeline is for.
## Different master keys per env

Each `.env.<env>` has its own `HOST_MASTER_KEY`. Compromising dev doesn't leak prod. The `pnpm bootstrap` flow generates fresh keys per env when you scaffold each one, so this happens by default.