## Other runtimes (coming soon)

Carabase’s agent runtime is pluggable through the AgentRuntimeProvider interface in src/services/agent-runtime/provider.ts. Three providers exist in the v0.1 codebase, but only one ships:

| Provider | Module | Status in v0.1 |
| --- | --- | --- |
| OpenClaw | `openclaw-provider.ts` | ✅ Supported — see OpenClaw |
| Claude | `claude-provider.ts` | Stub present, not advertised |
| Codex | `codex-provider.ts` | Stub present, not advertised |
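To make the provider/registry split concrete, here is a minimal sketch of how a pluggable runtime registry can work. The names `AgentRuntimeProvider`, `registry`, and `openclaw` come from the docs above, but the interface members, function names, and registration flow shown here are assumptions for illustration, not the real v0.1 API:

```typescript
// Hypothetical sketch of a pluggable agent-runtime registry.
// The member names and signatures below are illustrative assumptions,
// not the actual contract in src/services/agent-runtime/provider.ts.

interface AgentRuntimeProvider {
  readonly name: string;
  runTask(instruction: string): Promise<string>;
}

const registry = new Map<string, AgentRuntimeProvider>();

function registerProvider(p: AgentRuntimeProvider): void {
  registry.set(p.name, p);
}

function resolveProvider(name: string): AgentRuntimeProvider {
  const p = registry.get(name);
  if (!p) throw new Error(`Unknown agent runtime provider: ${name}`);
  return p;
}

// Only OpenClaw is registered for the chat surface in v0.1;
// the Claude and Codex stubs would register through the same call.
registerProvider({
  name: "openclaw",
  runTask: async (instruction) => `openclaw handled: ${instruction}`,
});

resolveProvider("openclaw")
  .runTask("summarize the doc")
  .then(console.log);
```

The key property this buys is that adding a runtime means adding one module and one `registerProvider` call, with no changes to the task-dispatch code.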

Why does only OpenClaw ship? Two reasons:

  1. The chat surface is the most user-visible thing in the product. Shipping it with one runtime that we’ve end-to-end tested beats shipping with three runtimes where two have rough edges.
  2. OpenClaw is the only runtime whose memory + skill model we’ve validated against Carabase’s MCP server contract. Claude / Codex providers work for one-shot agentic flows (issued via POST /api/v1/agent-tasks) but haven’t been hardened for the chat-loop pattern.

The AI SDK provider modules are wired through src/services/agent-runtime/registry.ts and selectable via the provider field on each agent task. So this works in v0.1:

```sh
curl -X POST http://localhost:3000/api/v1/agent-tasks \
  -H "x-workspace-id: $WORKSPACE_ID" \
  -H "Content-Type: application/json" \
  -d '{ "instruction": "...", "originBlockId": "...", "provider": "claude" }'
```

…as long as utilityHigh model routing is configured for an Anthropic key. But this is a developer-only path — no UI surface, no chat history persistence in the chat-sessions table, no streaming over the existing /api/v1/chat/stream SSE endpoint.
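For contrast, the chat-loop path that the stubs lack streams responses over `/api/v1/chat/stream` as server-sent events. The endpoint name comes from the docs above, but Carabase's actual payload shape is an assumption; the sketch below just parses the generic `text/event-stream` frame format a chat client would receive:

```typescript
// Minimal SSE frame parser a chat client could run over the
// /api/v1/chat/stream response body. The event/data layout is the
// standard text/event-stream format; the "token" event name is a
// hypothetical example, not Carabase's documented schema.

interface SseEvent {
  event: string; // event name; "message" when the frame omits one
  data: string;  // concatenated data lines
}

function parseSseFrames(buffer: string): SseEvent[] {
  // Frames are separated by a blank line per the SSE spec.
  return buffer
    .split("\n\n")
    .filter((frame) => frame.trim().length > 0)
    .map((frame) => {
      let event = "message";
      const data: string[] = [];
      for (const line of frame.split("\n")) {
        if (line.startsWith("event:")) event = line.slice(6).trim();
        else if (line.startsWith("data:")) data.push(line.slice(5).trim());
      }
      return { event, data: data.join("\n") };
    });
}

// Example: two token frames as they might arrive mid-stream.
const frames = parseSseFrames(
  "event: token\ndata: Hel\n\nevent: token\ndata: lo\n\n"
);
console.log(frames.map((f) => f.data).join("")); // → "Hello"
```

Hardening the Claude and Codex stubs for this pattern (incremental frames, session persistence, reconnects) is the bulk of the remaining work.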

On the roadmap:

  • Claude as a fully supported chat runtime with the same MCP tool surface OpenClaw gets
  • Codex as a fully supported chat runtime
  • A picker in the Admin SPA’s AI Engine page that lets you choose a runtime per workspace
  • A “test connection” round-trip for each runtime, similar to the existing OpenClaw one

If you want to track this work or have a strong opinion on which runtime to prioritize, comment on the relevant issue in the GitHub repo.