# Threat model
This page describes Carabase’s threat model — a single-tenant, self-hosted deployment with no public-internet exposure. If you’re considering a configuration that doesn’t match those assumptions (running it on a VPS, exposing port 3000 publicly, sharing the install across multiple users), some of the protections below stop holding.
## The shape of the system

Carabase is a single Node.js process that:
- Accepts HTTP on `:3000` (default). Bound dual-stack (`HOST=::`), so it listens on every interface, including the Tailscale virtual interface
- Talks to a local Postgres + pgvector
- Talks to a local OpenClaw gateway daemon on `:18789` (listens on localhost only)
- Spawns background workers in-process via pg-boss + node-cron
No external orchestrator, no public-internet listener, no shared infrastructure with any other Carabase install.
## The trust boundary

The network boundary is the trust boundary. Anything that can reach `:3000` is trusted; the routes don't implement public-internet authentication on top. This is fine because the network is supposed to be a Tailscale tailnet you control: every device on it is one you signed into.
If you remove that assumption (DDNS, port-forward, VPS without a tailnet), you’ve broken the threat model. There is no rate limiting, no per-user authentication, and no audit logging on the routes themselves.
## What's defended

### Cross-workspace data leakage (RLS)

Every workspace-scoped table has a Postgres Row-Level Security policy keyed on `workspace_id`. Even if application code forgets a `WHERE workspace_id = ?` clause, the database refuses to serve cross-workspace rows. The application connects as `carabase_app` (not the migration superuser), so RLS is enforced for it. A regression test `SET ROLE`s into the app role and asserts that cross-workspace SELECTs return 0 rows; see Workspaces & RLS.
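The policy pattern described above can be sketched in SQL. This is an illustrative sketch, not Carabase's actual schema: the table name, policy name, and the `current_setting`-based workspace lookup are all assumptions.

```sql
-- Hypothetical example of the RLS pattern (table/policy names are invented).
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
ALTER TABLE documents FORCE ROW LEVEL SECURITY;

-- Rows are visible only when their workspace_id matches the per-connection
-- setting the application establishes for the current request.
CREATE POLICY workspace_isolation ON documents
  USING (workspace_id = current_setting('app.workspace_id')::uuid);

-- The app connects as a role without BYPASSRLS, so the policy always applies:
--   SET ROLE carabase_app;
--   SET app.workspace_id = '<some-other-workspace>';
--   SELECT count(*) FROM documents;  -- cross-workspace rows: 0
```

The regression test mentioned above follows the same shape: assume the app role, set a different workspace, and assert the SELECT comes back empty.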
### Credential theft from disk

OAuth tokens, OAuth client secrets, model-routing API keys, webhook signing secrets, and backup files are all encrypted at rest with AES-256-GCM. The key (`HOST_MASTER_KEY`) is read from the env file once at startup; without it, the host refuses to boot. Each environment (dev / staging / prod) has its own key, so compromising the dev box doesn't leak prod.
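A minimal sketch of the at-rest encryption pattern using Node's built-in `node:crypto`. The function names and the blob layout (IV, then auth tag, then ciphertext) are illustrative assumptions, not Carabase's actual code:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// In Carabase the 32-byte key would come from HOST_MASTER_KEY in the env file;
// a random key stands in here for the sketch.
const key = randomBytes(32);

function encrypt(plaintext: string): Buffer {
  const iv = randomBytes(12); // standard 96-bit GCM nonce, unique per message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store iv + auth tag + ciphertext together; all three are needed to decrypt.
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

function decrypt(blob: Buffer): string {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ct = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM is authenticated: tampering makes final() throw
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

The GCM auth tag is what makes this tamper-evident, not just confidential: flipping a bit in a stored credential blob causes decryption to throw rather than return garbage.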
### Webhook spoofing

Inbound webhooks (Slack, Telegram, WhatsApp, Matrix, generic) verify HMAC-SHA256 signatures with channel-specific signing secrets, using `crypto.timingSafeEqual()` to prevent timing attacks. Slack additionally enforces a 5-minute timestamp window to block replay attacks.
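The Slack-style check can be sketched as follows. Slack's published scheme signs the string `v0:<timestamp>:<body>`; the function and parameter names here are hypothetical, not Carabase's actual handler:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const FIVE_MINUTES = 5 * 60; // seconds

// Hypothetical verifier for a Slack-style signed webhook.
function verifySlackSignature(
  signingSecret: string,
  body: string,
  timestamp: number, // X-Slack-Request-Timestamp header, Unix seconds
  signature: string, // X-Slack-Signature header, "v0=<hex>"
  now = Math.floor(Date.now() / 1000),
): boolean {
  // Replay protection: reject anything outside the 5-minute window.
  if (Math.abs(now - timestamp) > FIVE_MINUTES) return false;

  const expected =
    "v0=" +
    createHmac("sha256", signingSecret)
      .update(`v0:${timestamp}:${body}`)
      .digest("hex");

  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so compare lengths first;
  // the constant-time compare is what defeats timing attacks.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Note that the timestamp is part of the signed string, so an attacker can't take an old signed request and just bump the timestamp header.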
### Compromised dev dependency / supply chain

- Every GitHub Action is pinned to a commit SHA (not a floating tag), so a compromised maintainer of `actions/checkout` can't ship malicious code into our pipeline overnight
- Dependabot opens weekly bumps with the new SHA + tag in a comment, so SHA pinning doesn't rot into "pinned-and-stale"
- `pnpm audit --audit-level=high` runs as a CI gate on every PR and on every release tag; high/critical advisories block the merge or publish
- GitHub default Code Scanning (CodeQL) runs on every PR and weekly
### Stolen device

The host depends on the physical security of the machine it runs on. If the laptop is stolen while powered on, the master key is in memory and credentials can be decrypted. Mitigation: run on an always-on Mac at home with FileVault enabled and a wake-from-sleep password required.
### Compromised gateway password

If `OPENCLAW_GATEWAY_PASSWORD` leaks, an attacker on the same tailnet could speak to the OpenClaw gateway directly and bypass Carabase. Rotation: generate a new value with `./scripts/gen-secret.sh 32`, paste it into both `.env.<env>` and `~/.openclaw/config.toml`, then restart both processes.
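This page doesn't show `gen-secret.sh` itself; assuming it wraps a CSPRNG, an equivalent one-liner for a 32-byte secret is:

```shell
# Hypothetical equivalent of ./scripts/gen-secret.sh 32:
# 32 random bytes from OpenSSL's CSPRNG, hex-encoded (64 characters).
openssl rand -hex 32
```

Whatever generator you use, the point is that the value is random and rotated in both places at once; a gateway password that differs between `.env.<env>` and `~/.openclaw/config.toml` will break the Carabase-to-gateway connection on restart.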
## What's NOT defended

- Public-internet exposure (see above; explicitly unsupported)
- Multi-user installs — there's no per-user authentication on routes; the assumption is one operator per host
- Local privilege escalation — anything running as your user on the host machine can read the master key from `.env.<env>`. Mitigation: don't run untrusted code as your user
- Lost master key — backups are encrypted with it; lose the key, lose the backups. Store it in a password manager
- Postgres permissions — the application role has standard CRUD privileges. SQL injection via Drizzle ORM is not a credible vector, but a future audit should still check
- MCP tool exhaust — the agent can call any tool the MCP server exposes. The tool definitions themselves have no permission model beyond "available / not available"; if you wire a tool to do something destructive, the agent can do that destructive thing