ADR-019: clarify MCP is a temporary external-coder scaffold, not production wire

Internal services (SF↔memory, ACE↔memory, SF↔ACE) talk via typed direct
clients generated from the Go/TS APIs — HTTP/gRPC for memory, existing
JSON-RPC stdio for SF↔ACE. MCP is reserved for external LLM-driven coding
tools (Claude Code, Cursor) that don't share our build system; it is a
scaffold for the period when external coders help build the platform and
shrinks as the system becomes self-hosting.

Adds an explicit "MCP scope" table so the rule is stated once. Updates the
three-layer architecture diagram, Phase 2, and Phase 6 to remove the
inaccurate "all consumers over MCP" framing.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Mikael Hugo 2026-05-01 23:38:25 +02:00
parent 0976bbbb83
commit 2280893464


@@ -26,8 +26,14 @@ Two autonomous agent systems are being developed in parallel:
(`tenant_id`) exists; per-task execution isolation does not.
- **singularity-memory** — Separate Go service (migrating from Python per ADR-014).
Postgres + vchord vector store. Federated knowledge layer shared across SF, ACE,
Claude Code, Cursor, and other tools over MCP.
Postgres + vchord vector store. Federated knowledge layer.
- **Internal consumers** (SF, ACE, future first-party services) talk to it via
typed direct clients (HTTP/gRPC generated from the Go API). No MCP, no JSON-RPC
framing, no protocol cost.
- **External coding tools** (Claude Code, Cursor, third-party LLM clients) get
an MCP façade. This is a temporary scaffold so external coders can read/write
memory while they help build the system; it is not the production wire for
internal services and is expected to shrink once the system is self-hosting.
Both systems share the same end destination but are approaching it from different
directions. SF is production-reliable but architecturally constrained (single-repo,
@@ -82,8 +88,10 @@ A workspace is:
├─────────────────────────────────────────────────────────────────────┤
│ Knowledge layer │
│ │
│ singularity-memory: Go + Postgres + vchord + MCP server │
│ Serves all consumers (SF, ACE, Claude Code, Cursor) over MCP. │
│ singularity-memory: Go + Postgres + vchord │
│ Internal services (SF, ACE) use typed direct clients (HTTP/gRPC). │
│ External coding tools (Claude Code, Cursor) use an MCP façade — │
│ temporary scaffold while external coders help build the system. │
│ Tenant-scoped knowledge banks (to be designed — see below). │
│ │
│   Language: Go (ADR-014 migration, phases 0–3 only — NOT phase 4)  │
@@ -155,6 +163,32 @@ workspace VM primitive is stable.
---
## MCP scope
MCP is **not** the production wire for this system. The rule:
| Caller | Callee | Wire | Why |
|--------|--------|------|-----|
| ACE host → ACE tools | in-process Python imports | function call | type-safe, zero overhead |
| ACE host → singularity-memory | typed Python client (gen from Go API) | HTTP/gRPC | typed, fast, refactorable |
| SF → singularity-memory | typed TS client (gen from Go API) | HTTP/gRPC | same, in TS |
| SF → ACE worker | existing JSON-RPC stdio (`rpc-client`) | stdio JSON-RPC | already in production, language-agnostic |
| ACE worker VM → host | direct gRPC over tailnet | gRPC | typed, low-latency |
| Claude Code / Cursor → singularity-memory | MCP façade | MCP | external tool, no shared types |
| Claude Code → ACE | MCP façade (temporary) | MCP | external coder helping build, until self-hosting |
MCP exists only at the **boundary to external LLM-driven coding tools** that don't
share our type system. It is a scaffold for the period when external coders
(Claude Code, Cursor, third-party agents) help build the system. As the system
becomes self-hosting, the MCP surface shrinks to whatever third parties still
need to integrate against.
Internally, everything is plain agentic tooling — Python functions, generated typed
clients, direct calls. There is no JSON-RPC framing anywhere the caller and callee
share a build system.
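
The internal-wire rows above can be sketched in TypeScript. This is a hedged
illustration, not the actual generated client: every name here (`MemoryClient`,
`SearchRequest`, `Learning`, the `/v1/learnings/search` path) is an assumption —
the real client is generated from the Go API rather than hand-written.

```typescript
// Illustrative shape of an internal typed client for singularity-memory.
// All identifiers and the endpoint path are hypothetical.
interface SearchRequest {
  tenantId: string;
  query: string;
  limit: number;
}

interface Learning {
  id: string;
  content: string;
  score: number;
}

class MemoryClient {
  constructor(private readonly baseUrl: string) {}

  // Pure helper: builds the typed request, unit-testable without I/O.
  buildSearchRequest(tenantId: string, query: string, limit = 10): SearchRequest {
    return { tenantId, query, limit };
  }

  // One plain HTTP round trip. No MCP tool-call envelope, no JSON-RPC
  // framing: caller and callee share types, so the wire carries only payload.
  async searchLearnings(tenantId: string, query: string): Promise<Learning[]> {
    const res = await fetch(`${this.baseUrl}/v1/learnings/search`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(this.buildSearchRequest(tenantId, query)),
    });
    if (!res.ok) throw new Error(`memory search failed: HTTP ${res.status}`);
    return (await res.json()) as Learning[];
  }
}
```

The point of the sketch is what is absent: no protocol negotiation, no tool
schema, no envelope — the cost an MCP hop would add between two first-party
services that already share a type system.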
---
## Incremental convergence path
### Phase 1 — SF continues, ACE gets built (now)
@@ -163,9 +197,14 @@ workspace VM primitive is stable.
- Both systems mature on their own tracks.
### Phase 2 — Federated memory (near-term, ADR-012 Tier 1)
- Wire `memory-store.ts` remote-mode → singularity-memory HTTP endpoint.
- Wire `memory-store.ts` remote-mode → singularity-memory HTTP endpoint (typed
TS client generated from the Go API — not MCP).
- SF instances on different machines share learnings.
- ACE connects to the same singularity-memory endpoint (same MCP wire).
- ACE connects to the same singularity-memory endpoint via a typed Python client
(also generated, also not MCP). Internal services do not pay the MCP tax.
- The MCP façade on singularity-memory is reserved for external coding tools
(Claude Code, Cursor) that need to read/write memory while helping build the
system. Temporary scaffold; not a production wire.
- **Outcome:** shared knowledge layer operational before execution convergence.
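
The Phase 2 wiring can be sketched as a backend selector. This is an assumption
about shape only: the env variable name (`SINGULARITY_MEMORY_URL`) and the
`MemoryBackend` union are illustrative, not the actual `memory-store.ts` surface.

```typescript
// Hedged sketch of remote-mode selection for memory-store.ts.
// Identifiers are hypothetical.
type MemoryBackend =
  | { kind: "local" }                    // on-disk store; SF works standalone
  | { kind: "remote"; baseUrl: string }; // typed client against singularity-memory

function resolveBackend(env: Record<string, string | undefined>): MemoryBackend {
  const url = env.SINGULARITY_MEMORY_URL;
  // Remote mode engages only when an endpoint is configured. Nothing on this
  // path speaks MCP — the remote branch is the generated typed HTTP client.
  return url ? { kind: "remote", baseUrl: url } : { kind: "local" };
}
```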
### Phase 3 — Workspace VM opt-in for SF (medium-term)
@@ -189,10 +228,12 @@ workspace VM primitive is stable.
- **Outcome:** two orchestrators, one execution substrate.
### Phase 6 — Orchestration convergence (long-term)
- SF's state machine (milestone → slice → task) becomes an ACE PM persona.
- SF's state machine (milestone → slice → task) becomes an ACE workflow spec
(compiled DAG via ACE's `graph_compiler`), not a hand-coded state machine.
- ACE's HTDAG becomes the unified orchestration backbone.
- SF's CLI and headless mode remain as user-facing entry points (they don't go away —
they become ACE clients over MCP).
- SF's CLI and headless mode remain as user-facing entry points; they drive ACE
via the existing JSON-RPC stdio contract (already in `packages/rpc-client/`),
not via MCP. MCP at this layer would be redundant — both ends are first-party.
- **Outcome:** one system with SF's reliability and ACE's generality.
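
The SF → ACE wire in Phase 6 is the JSON-RPC 2.0 stdio contract already in
`packages/rpc-client/`. As a minimal sketch of what such a frame looks like —
the method name `ace.runTask` and params shape are hypothetical, the real
contract lives in that package:

```typescript
// Minimal JSON-RPC 2.0 request framing, as sent over stdio.
// Method names and params are illustrative only.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: unknown;
}

let nextId = 0;

function frame(method: string, params: unknown): JsonRpcRequest {
  return { jsonrpc: "2.0", id: ++nextId, method, params };
}
```

Both ends being first-party is exactly why this stays JSON-RPC: the frame is a
fixed, typed envelope, with none of the tool-discovery machinery MCP layers on top.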
---