docs: remove stale direct db and mcp guidance

Mikael Hugo 2026-05-07 03:33:14 +02:00
parent 9ab0b9fe63
commit 3e6827e7dc
18 changed files with 46 additions and 67 deletions

View file

@ -219,7 +219,7 @@ See [`docs/plans/README.md`](docs/plans/README.md), [`docs/adr/README.md`](docs/
## SF Schedule
-The SF schedule system (`/sf schedule`) stores time-bound reminders in `.sf/schedule.jsonl` as append-only JSONL. Items surface on their due date via pull queries at launch and auto-mode boundaries — there is no background daemon.
+The SF schedule system (`/sf schedule`) stores time-bound reminders in `.sf/schedule.jsonl` as versioned append-only JSONL. Items surface on their due date via pull queries at launch and auto-mode boundaries — there is no background daemon.
**When to use `sf schedule` vs backlog:**
- **`sf schedule`** — time-bound items that must surface at a future date: a 2-week adoption review after shipping a feature, a 1-month audit of an architectural decision, a 30-minute reminder to run a command. Use when the *timing* matters, not just the *priority*.
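The append-only JSONL store and pull-query model described in this hunk can be sketched as follows. This is a minimal illustration, not the real implementation: the item fields (`id`, `due`, `note`) are hypothetical, and only the one-object-per-line, query-on-demand mechanics come from the text.

```typescript
import { appendFileSync, readFileSync, mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical item shape; the real schema is not specified in the doc.
interface ScheduleItem {
  id: string;
  due: string; // ISO date on which the item should surface
  note: string;
}

const file = join(mkdtempSync(join(tmpdir(), "sf-sched-")), "schedule.jsonl");

// Append-only write: one JSON object per line, never rewritten in place.
function addItem(item: ScheduleItem): void {
  appendFileSync(file, JSON.stringify(item) + "\n");
}

// Pull query (run at launch / auto-mode boundaries): surface items due now.
// No daemon — nothing fires until somebody asks.
function dueItems(now: Date): ScheduleItem[] {
  return readFileSync(file, "utf8")
    .split("\n")
    .filter(Boolean)
    .map((line) => JSON.parse(line) as ScheduleItem)
    .filter((item) => new Date(item.due) <= now);
}

addItem({ id: "r1", due: "2020-01-01", note: "2-week adoption review" });
addItem({ id: "r2", due: "2999-01-01", note: "far-future audit" });
console.log(dueItems(new Date()).map((i) => i.id)); // surfaces only "r1"
```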

View file

@ -33,7 +33,7 @@ This ADR plans the extraction.
- *Rejected:* fragile FFI; defeats the architectural goal of separating engine from UI.
- **Keep `pi-tui` indefinitely; only build Charm TUI as an alternative for SSH access.**
- *Rejected:* leaves ~10k LOC of TS in sf core *forever* as a maintenance burden. The whole point is to delete it.
- **Don't build a new TUI; expose the daemon over MCP/HTTP and rely on third-party clients (Claude Code, Cursor) to render.**
+ **Don't build a new TUI; expose the daemon over an external API and rely on third-party clients (Claude Code, Cursor) to render.**
- *Rejected:* sf's user-facing surface is the TUI when working interactively. Outsourcing it removes a major UX touchpoint we own.
## Consequences

View file

@ -257,14 +257,13 @@ memory_sources (
**Purpose:** Command-line interface to memory system.
**Commands:**
-- `sf memory list [category]` — List all memories (optionally filtered)
-- `sf memory search <query>` — Find memories by content
-- `sf memory add <content> --category <cat>` — Manually add memory
-- `sf memory recall <context>` — Get context-relevant memories
-- `sf memory decay [--older-than-days N]` — Age memories
-- `sf memory stats` — Memory database statistics
-- `sf memory export` — Export all memories to JSON
-- `sf memory import <file>` — Import memories from JSON
+- `/sf memory list [category]` — List all memories (optionally filtered)
+- `/sf memory search <query>` — Find memories by content
+- `/sf memory note <content>` — Manually add memory
+- `/sf memory status` — Memory database statistics
+- `/sf memory decay [--older-than-days N]` — Age memories
+- `/sf memory export <path.json>` — Export all memories to JSON
+- `/sf memory import <path.json>` — Import memories from JSON
---
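Since `export` and `import` round-trip through JSON, the contract can be sketched like this. The record shape and the merge-by-id semantics are assumptions for illustration — the real export format is not documented in this hunk.

```typescript
import { writeFileSync, readFileSync, mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical memory record shape; only category/content/confidence
// appear in the surrounding docs, the rest is illustrative.
interface MemoryRecord {
  id: string;
  category: string;
  content: string;
  confidence: number;
}

const existing: MemoryRecord[] = [
  { id: "m1", category: "contract", content: "parser returns AST", confidence: 0.9 },
];
const exported: MemoryRecord[] = [
  { id: "m1", category: "contract", content: "parser returns AST", confidence: 0.9 },
  { id: "m2", category: "delivery", content: "shipped slice S01", confidence: 0.8 },
];

const path = join(mkdtempSync(join(tmpdir(), "sf-mem-")), "export.json");

// Export: serialise the full memory list to JSON.
writeFileSync(path, JSON.stringify(exported, null, 2));

// Import: merge by id so re-importing the same file is idempotent.
const imported = JSON.parse(readFileSync(path, "utf8")) as MemoryRecord[];
const byId = new Map(existing.map((m) => [m.id, m] as const));
for (const m of imported) byId.set(m.id, m);
console.log([...byId.keys()]); // ["m1", "m2"]
```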

View file

@ -37,13 +37,13 @@
## Contradictions Found
-- ADR-008 (SF tools over MCP) is marked "Accepted — impl in progress" but the user has clarified that SF is the only runtime in use; Claude Code is used as an external dev assistant, not as a provider inside SF. ADR-008's premise (provider parity for Claude Code CLI as a Pi provider) may not apply to the current usage model. Needs clarification.
+- ADR-008 (SF tools over MCP) was superseded after the user clarified that SF is the only runtime in use; Claude Code is used as an external dev assistant, not as a provider inside SF. Current guidance rejects SF-as-MCP-server exposure and keeps MCP strictly client-side for external tools.
- `docs/design-docs/` and `docs/dev/ADR-*.md` are split across two directories. The design-docs folder has 2 files; 18 ADRs live in dev/. This split is navigable with the index but worth consolidating eventually.
## What Remains Unresolved
-- ADR-008 relevance: does exposing workflow mutations over MCP make sense if SF is always the sole runtime?
+- ADR-008 relevance is resolved by `docs/dev/ADR-008-sf-tools-over-mcp-for-provider-parity.md`: SF must not expose workflow mutations over MCP.
- ADR-018 Phase 1 (repo profiler wired into dispatch) is not yet started
- Notification event model implementation (Phase 2 of the spec) is not yet started
- No ADR template or `just adr` recipe

View file

@ -301,7 +301,7 @@ rm .sf/routing-history.json
/sf doctor
```
-Doctor rebuilds `STATE.md` from plan and roadmap files on disk and fixes detected inconsistencies.
+Doctor derives current state from the DB-backed runtime model when available, regenerates projections such as `STATE.md`, and fixes detected inconsistencies. File-based plan and roadmap parsing is only a recovery path for unmigrated or damaged state.
## Getting Help

View file

@ -1,6 +1,7 @@
// SF — Exec (context-mode) tool registration.
//
-// Exposes the `sf_exec`, `sf_exec_search`, `sf_resume`, and `kill_agent` tools over MCP.
+// Registers the `sf_exec`, `sf_exec_search`, `sf_resume`, and `kill_agent`
+// tools as native SF agent tools.
// sf_exec is controlled by `context_mode.enabled`: it stays enabled when the
// key is unset (the default) and is disabled only when explicitly set to `false`.
import { existsSync, readFileSync, unlinkSync, writeFileSync } from "node:fs";

View file

@ -1,8 +1,8 @@
// SF — Memory tool registration
//
// Exposes the memory-layer tools (capture_thought, memory_query, sf_graph)
-// to the LLM over MCP. All three degrade gracefully when the SF database
-// is unavailable.
+// as native SF tools. All three degrade gracefully when the SF database is
+// unavailable.
import { Type } from "@sinclair/typebox";
import {
executeMemoryCapture,

View file

@ -308,5 +308,5 @@ high confidence, skip the capture entirely.
### Step 8 Surprises stay only in LEARNINGS.md
Surprises are milestone-local context and are NOT cross-session-reusable. Do
-not persist them via \`capture_thought\` or any other MCP tool.`;
+not persist them via \`capture_thought\` or any other native memory tool.`;
}

View file

@ -8,7 +8,7 @@
// Read consumers:
// (1) `getRelevantMemoriesRanked` walks edges of cosine top-N memories
// and applies a one-pass intra-pool score boost (damping 0.4).
-// (2) `sf_graph` MCP tool exposes BFS traversal for explicit queries.
+// (2) `sf_graph` exposes BFS traversal for explicit agent queries.
// All writes go through the single-writer gate in `sf-db.ts`.
import { _getAdapter, isDbAvailable } from "./sf-db.js";
export const VALID_RELATIONS = [

View file

@ -136,7 +136,7 @@ Cover: purpose, consumer, contract, implementation sketch, test strategy, eviden
When approved, persist to memory so the next session can find it:
```
-sf_save_memory(
+capture_thought(
category="design-decision",
content="design: <what> for <consumer> — approach: <key decision> — refused: <scope defence>",
confidence=0.9

View file

@ -34,13 +34,15 @@ Read each persistent context file and judge:
| `.sf/PM-STRATEGY.md` | Still aligned with current direction? |
| `MEMORY.md` (root) | Index lines still pointing at extant files? |
-For the memory store (the `memories` table in `.sf/sf.db`):
+For the memory store, use SF's DB-backed memory/query tools rather than direct `sqlite3` access:
```bash
-sqlite3 .sf/sf.db "SELECT category, content, confidence, hit_count FROM memories ORDER BY confidence DESC LIMIT 20"
-sqlite3 .sf/sf.db "SELECT COUNT(*) FROM memories WHERE confidence < 0.5"
+/sf memory status
+/sf memory search "pattern"
```
+Inside an agent session, prefer the registered `memory_query` tool for targeted lookups.
Look for:
- Low-confidence rows (`< 0.5`) that haven't been hit in N days — candidates for archival.
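The archival heuristic above (confidence below 0.5 and no hits in N days) can be sketched directly. The row shape and thresholds here are hypothetical stand-ins for whatever the memory tools actually return.

```typescript
// Hypothetical row shape mirroring the fields this review step looks at.
interface MemoryRow {
  id: string;
  confidence: number;
  hitCount: number;
  lastHitDaysAgo: number;
}

// Candidates for archival: low confidence AND stale (both conditions required).
function archivalCandidates(
  rows: MemoryRow[],
  minConfidence = 0.5,
  staleAfterDays = 30,
): MemoryRow[] {
  return rows.filter(
    (r) => r.confidence < minConfidence && r.lastHitDaysAgo > staleAfterDays,
  );
}

const rows: MemoryRow[] = [
  { id: "a", confidence: 0.3, hitCount: 0, lastHitDaysAgo: 60 }, // low + stale: archive
  { id: "b", confidence: 0.3, hitCount: 5, lastHitDaysAgo: 2 },  // recently hit: keep
  { id: "c", confidence: 0.9, hitCount: 1, lastHitDaysAgo: 90 }, // high confidence: keep
];
console.log(archivalCandidates(rows).map((r) => r.id)); // ["a"]
```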
@ -71,7 +73,7 @@ For non-trivial cleanups, present this plan to the user before executing — sf'
### Step 4 — Execute Repairs
-Apply edits with `Edit`. For the `memories` table, prefer `sf_save_memory` / `sf_delete_memory` (or whatever the current sf memory tools are) over direct `sqlite3` writes — those bypass tool-level invariants.
+Apply edits with `Edit`. For memory cleanup, use the current `/sf memory` commands or registered memory tools over direct `sqlite3` writes — direct DB mutation bypasses tool-level invariants.
**Scope rules:**

View file

@ -275,7 +275,7 @@ After a parallel or debate batch returns, the parent agent **must** synthesise.
Persist non-trivial syntheses to memory:
```
-sf_save_memory(
+capture_thought(
category="design-synthesis",
content="<one-line synthesis of the swarm result> — slice <id>",
confidence=<0.0-1.0>

View file

@ -145,7 +145,7 @@ Keep scope tight: fix the validated review issue, not every adjacent redesign th
After resolving significant review comments:
```
-sf_save_memory(
+capture_thought(
category="review-learning",
content="review caught: <what> in <component> — prevent by <design principle or test gap>",
confidence=0.9

View file

@ -181,7 +181,7 @@ Stop when:
Persist the delivery context so future units can trace what was built and why:
```
-sf_save_memory(
+capture_thought(
category="delivery",
content="delivered: <what changed> — slice: <id> — consumer: <caller> — insight: <what was learned>",
confidence=0.9

View file

@ -7,7 +7,7 @@ description: Researches codebase, project state, and external knowledge using lo
Research a topic using four complementary information sources, in priority order:
1. **Native LSP tool + local search** (`lsp`, `rg`, `find`, `ls`) — use FIRST for code exploration
2. **sift** (hybrid BM25+vector local search) — use when LSP/rg is not enough
-3. **SF project database** (sqlite3) — use for project state (milestones, requirements, decisions)
+3. **SF project state tools** — use DB-backed SF query tools for milestones, requirements, and decisions
4. **Web search** — use for external documentation and current information
This skill is the first step before planning — it produces the evidence base that drives good decisions. Without research, agents plan from assumptions; with this skill, they plan from evidence.
@ -21,7 +21,7 @@ Strictly prohibited while running this skill:
- File creation, modification, or deletion (no `Write`, `Edit`, `NotebookEdit`, `touch`, `rm`, `mv`, `cp`).
- Bash redirects or heredocs that write files: `>`, `>>`, `tee`, `cat <<EOF > file`, `python -c "open(...).write(...)"`. The shell back-door does not bypass the read-only contract.
-- DB writes: `sqlite3 .sf/sf.db "INSERT|UPDATE|DELETE|DROP|CREATE ..."`. Use `SELECT` only.
+- Direct DB access: do not run `sqlite3 .sf/sf.db` or load SQLite from ad-hoc scripts. Use SF query tools exposed by the runtime; the engine owns the WAL connection.
- Git write operations: `add`, `commit`, `push`, `merge`, `rebase`, `checkout -b`, `branch -d`.
- Package installs: `npm install`, `pip install`, `cargo add`.
- Spawning subagents that perform any of the above on the researcher's behalf — the prohibition applies to delegated actions as well.
@ -78,20 +78,12 @@ quotas. Use GitHub code search only for repositories that are not on disk,
dedupe repeated queries, and treat `403` rate-limit responses as a signal to
wait for reset or continue with local evidence.
-**SF project database queries:**
-```bash
-# Current milestone and slices
-sqlite3 .sf/sf.db "SELECT id, title, status FROM milestones WHERE status='active'"
-# All requirements
-sqlite3 .sf/sf.db "SELECT id, class, status, description FROM requirements"
-# Recent decisions
-sqlite3 .sf/sf.db "SELECT id, scope, decision FROM decisions ORDER BY seq DESC LIMIT 10"
-# Tasks by slice
-sqlite3 .sf/sf.db "SELECT id, title, status FROM tasks WHERE milestone_id='M001' AND slice_id='S01'"
-```
+**SF project state queries:**
+Use the runtime query tools instead of opening `.sf/sf.db` directly:
+- `sf_milestone_status` — read milestone, slice, and task status inside an agent session.
+- `sf headless query` — get the full DB-backed project snapshot when running from the shell.
+- `/sf inspect db` — inspect schema/version diagnostics when the user asks for database health.
**Web search — use the search-the-web tool directly for current information.**
</quick_start>
@ -102,7 +94,7 @@ sqlite3 .sf/sf.db "SELECT id, title, status FROM tasks WHERE milestone_id='M001'
Before searching, identify what you need to know:
- **Code exploration** (finding functions, types, references) → use native `lsp` first, then `rg`
-- **Project state** (milestones, slices, tasks, requirements) → query the SF DB
+- **Project state** (milestones, slices, tasks, requirements) → use SF DB-backed query tools
- **Current external information** → use web search
- **All of the above** → combine all four sources
@ -139,31 +131,16 @@ or explicit strategy control:
{"query": "how does the dispatch loop handle retries and timeouts", "agent": true, "agentMode": "graph"}
```
-## Step 4: Query the SF project database
+## Step 4: Query SF project state
-The SF database (`.sf/sf.db`) contains the canonical project state:
+The SF database contains the canonical project state, but agents should inspect it through SF's runtime tools so they do not contend with the single-writer WAL connection:
```bash
-# List active milestones with their slices
-sqlite3 .sf/sf.db "
-SELECT m.id, m.title, m.status, s.id, s.title, s.status
-FROM milestones m
-LEFT JOIN slices s ON s.milestone_id = m.id
-WHERE m.status IN ('active','planning')
-ORDER BY m.id, s.id
-"
+# Full state snapshot from shell
+sf headless query
-# Get requirements by status
-sqlite3 .sf/sf.db "SELECT id, class, status, description FROM requirements WHERE status='active'"
-# Recent decisions (most recent first)
-sqlite3 .sf/sf.db "SELECT id, scope, decision, choice FROM decisions ORDER BY seq DESC LIMIT 20"
-# Blocked or pending tasks
-sqlite3 .sf/sf.db "SELECT id, title, status FROM tasks WHERE status IN ('blocked','pending')"
-# Artifacts (plans, summaries) for a milestone
-sqlite3 .sf/sf.db "SELECT path, artifact_type FROM artifacts WHERE milestone_id='M001'"
+# In an agent session, call the DB-backed `sf_milestone_status` tool
+# with milestoneId=M001 when you need a focused milestone snapshot.
```
## Step 5: Web search for external information

View file

@ -133,7 +133,7 @@ LLM confidence is poorly calibrated in absolute terms — the relative signal ma
- For non-trivial slices, persist the contract to sf memory:
```
-sf_save_memory(
+capture_thought(
category="contract",
content="<symbol><what the test proved> — slice <sliceId>",
confidence=0.9

View file

@ -137,7 +137,7 @@ A check without a `Command run` block is a skip. "I re-ran the repro and it work
Persist the pattern to memory so future units don't re-hit it:
```
-sf_save_memory(
+capture_thought(
category="anti-pattern",
content="<symptom> in <component> — root cause: <one line> — fix: <approach> — test: <name>",
confidence=0.9

View file

@ -1,8 +1,8 @@
-// SF Exec Tool — executor for the sf_exec MCP tool.
+// SF Exec Tool — executor for the native sf_exec agent tool.
//
// Thin wrapper around exec-sandbox.ts that reads effective options from
// the project preferences (context_mode block) and formats the result
-// for MCP return.
+// for agent-tool return.
import { EXEC_DEFAULTS, runExecSandbox } from "../exec-sandbox.js";
import { isContextModeEnabled } from "../preferences-types.js";
export function buildExecOptions(baseDir, cfg, extras) {