sf snapshot: uncommitted changes after 49m inactivity

Mikael Hugo 2026-05-12 16:45:04 +02:00
parent 0426aafad2
commit 16db710468
42 changed files with 761 additions and 110 deletions


@ -12,27 +12,23 @@ only when the project needs to override or add something.
```
.agents/
AGENTS.md ← this file
-manifest.yaml ← specVersion, defaults (mode/policy), enabled policies + skills
+manifest.yaml ← specVersion, defaults, enabled skills/policies
prompts/
-base.md ← instructions injected into every agent turn
-project.md ← project-specific context: non-negotiables, workflow, build commands
-snippets/ ← reusable prompt fragments included by modes
-style.md ← code style rules
-principles.md ← invariants (DB-first, two modes, etc.)
-non-goals.md ← things SF explicitly will not do
-modes/ ← project mode OVERRIDES only (add file with same name to override a default)
+base.md ← injected into every agent turn (iron law, DB-first, key pointers)
+project.md ← SF-specific context (modes, state, build commands, source layout)
+snippets/ ← reusable prompt fragments (empty — no project snippets yet)
+modes/ ← project mode OVERRIDES only (empty — SF built-ins apply)
policies/
-default-safe.yaml ← confirm destructive ops; deny .env/.ssh/db paths
-yolo.yaml ← no confirmations (YOLO flag); path denies still apply
+default-safe.yaml ← conservative policy: confirm destructive ops, deny secrets paths
skills/ ← project-specific skills + built-in overrides (same name = override)
forge-autonomous-runtime/ ← explains SF autonomous loop, UOK gates, recovery paths
forge-command-surface/ ← SF slash commands, browser command parity, headless dispatch
nix-build/ ← build any @singularity-forge/* package via nix develop
sf-wiki/ ← override of built-in sf-wiki: use UPPERCASE filenames (.sf/ convention)
smoke-test/ ← run sf-run smoke tests (--version, --help, --print)
-scopes/ ← path-based config overrides (empty — no path-specific overrides yet)
-profiles/ ← named overlays e.g. "ci", "dev" (empty — no profiles yet)
-schemas/ ← generated JSON schemas (not committed; tooling writes these)
+scopes/ ← path-based config overrides (empty)
+profiles/ ← named overlays e.g. "ci", "dev" (empty)
+schemas/ ← generated JSON schemas (not committed)
state/
.gitignore ← excludes state.yaml (per-developer convenience, never committed)
```
@ -49,15 +45,5 @@ To override a built-in mode or skill, add a file with the **same name**:
.agents/modes/build.md
```
-Built-in defaults (ask, build, autonomous modes; all SF system skills) are
-provided by SF and do not need to be listed here.
-## Policies
-| Policy | When applied |
-|--------|-------------|
-| `default-safe` | Default — confirms destructive ops, denies secrets paths |
-| `yolo` | When YOLO flag is active (`Ctrl+Y` / `/mode yolo`) — removes confirmations, path denies still apply |
-YOLO is a **flag** on top of Build or Autonomous, not a mode. It is not a
-Shift+Tab stop.
+Built-in defaults (ask, build, autonomous modes; default-safe policy; all SF
+system skills) are provided by SF and do not need to be listed here.


@ -11,10 +11,9 @@ defaults:
policy: default-safe
enabled:
-# modes: not listed — no project overrides; SF built-in modes (ask/build/autonomous) apply
+modes: [] # no project overrides; SF built-in modes (ask/build/autonomous) apply
policies:
- default-safe
-- yolo
skills:
- forge-autonomous-runtime
- forge-command-surface

0
.agents/modes/.gitkeep Normal file


@ -0,0 +1,37 @@
id: default-safe
description: >-
Conservative defaults — confirm destructive operations; deny secrets paths.
Applied when no other policy is active.
capabilities:
filesystem:
allow: ["**"]
deny:
- ".env"
- ".env.*"
- ".ssh/**"
- "**/*.key"
- "**/*.pem"
- "**/*.p12"
- "**/*.pfx"
redact:
- "**/.env*"
- "**/secrets/**"
exec:
allow: true
confirmRequired: true
network:
allow: true
paths:
deny:
- .env
- .env.*
- .ssh/**
- "**/*.key"
- "**/*.pem"
confirmations:
requiredFor:
- destructive
- exec
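A minimal sketch of how deny globs like those above could be enforced. `globToRegExp` and `isDenied` are illustrative names, and a real policy engine would use a tested glob library (e.g. minimatch) rather than this hand-rolled translation:

```javascript
// Translate a simplified glob (supports "**/", "**", and "*") into a RegExp.
// Placeholders avoid the later "*" pass corrupting earlier expansions.
function globToRegExp(glob) {
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, "\\$&"); // escape regex metachars
  const body = escaped
    .replace(/\*\*\//g, "\u0001") // "**/" → any directory prefix, or none
    .replace(/\*\*/g, "\u0002")   // bare "**" → anything, including "/"
    .replace(/\*/g, "[^/]*")      // "*" → anything within one path segment
    .replace(/\u0001/g, "(?:.*/)?")
    .replace(/\u0002/g, ".*");
  return new RegExp(`^${body}$`);
}

function isDenied(relPath, denyGlobs) {
  return denyGlobs.some((g) => globToRegExp(g).test(relPath));
}
```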

22
.agents/prompts/base.md Normal file

@ -0,0 +1,22 @@
# Base agent instructions
You are working in **singularity-forge** — the SF (Singularity Forge) monorepo.
Read `.agents/AGENTS.md` for folder layout and override conventions.
Read `AGENTS.md` (root) and `CLAUDE.md` (root) for the full planning and build guide before doing substantive work.
## Iron law
```
THE TEST IS THE SPEC. THE JSDOC IS THE PURPOSE. CODE EXISTS TO FULFIL PURPOSE.
NO BEHAVIOUR CHANGE WITHOUT A FAILING TEST FIRST.
NO COMPLETION WITHOUT A REAL CONSUMER.
NO JUDGMENT CALL WITHOUT A CONFIDENCE AND FALSIFIER.
```
## DB-first — non-negotiable
All state lives in SQLite via `node:sqlite` (`DatabaseSync`).
**Never** use `better-sqlite3`, file-based fallbacks, or in-memory state for anything that belongs in the DB.
Canonical store: `.sf/sf.db`.


@ -1,43 +1,40 @@
-# Project Prompt — singularity-forge
+# SF project context
-## What this is
+## What SF is
-SF — Singularity Forge — is a **purpose-to-software compiler**. The
-foundational contract is documented in
-[`docs/adr/0000-purpose-to-software-compiler.md`](../../docs/adr/0000-purpose-to-software-compiler.md).
-Every milestone exists to serve a stated purpose; mechanics (paths,
-schemas, commit refs) are subordinate to that purpose.
+SF is a **purpose-to-software compiler**: it captures bounded intent, translates it into PDD fields, researches missing context, applies run-control policy, generates milestone/slice/task contracts, writes failing tests, implements the smallest satisfying code change, and records evidence.
-For the longer narrative form see [`AGENTS.md`](../../AGENTS.md) and
-[`CLAUDE.md`](../../CLAUDE.md). For style decisions see
-[`.sf/STYLE.md`](../../.sf/STYLE.md). For invariants see
-[`.sf/PRINCIPLES.md`](../../.sf/PRINCIPLES.md). For things we
-explicitly will not do see [`.sf/NON-GOALS.md`](../../.sf/NON-GOALS.md).
+Foundational contract: `docs/adr/0000-purpose-to-software-compiler.md`.
-## Non-negotiables
+## Modes
-- **DB-first**: all state lives in SQLite via Node's built-in
-`node:sqlite` (`DatabaseSync`). Never use `better-sqlite3` or any
-native SQLite addon. Never use file-based fallbacks for state that
-belongs in the DB (milestone context, sessions, memories, mode
-state). If a pattern uses files as a proxy for DB state, that's a
-bug to fix, not a convention to follow.
-- **Two work modes**: Ask and Build. Shift+Tab cycles between them.
-YOLO (Ctrl+Y) is a flag on Build that drops confirmations; it is
-never a third mode and is not a Shift+Tab stop.
-- **Build pipeline**: source TypeScript files under
-`src/resources/extensions/sf/` compile to `dist/resources/...` via
-`npm run copy-resources`. Files installed at
-`~/.sf/agent/extensions/sf/` are not auto-redirected to TS source —
-edits to `.ts` only take effect after `copy-resources`.
-- **Tests**: vitest, no pre-compile.
+SF has exactly **two work modes**: Ask and Build.
+`Shift+Tab` cycles between them.
+**YOLO** (`Ctrl+Y` / `/mode yolo`) is a flag layered on top of Build — not a third mode.
-## Workflow
+## State and planning
-- `/todo triage` empties `TODO.md` and routes items into structured
-plan artifacts (`docs/plans/`), backlog rows, and BUILD_PLAN tier
-lists. Run before starting work if the inbox has content.
-- New milestones via `sf headless new-milestone --context <spec>`
-see SF's own TODO.md for the headless-unattended-mode caveat.
-- Bulk import of a flat roadmap via `sf headless import-backlog
-<file.md>` (this one works headless).
+- `.sf/sf.db` is the canonical structured store (SQLite, `node:sqlite`).
+- Runtime planning artifacts (`.sf/milestones/`, `.sf/evals/`, locks, journals) are transient and gitignored — never committed.
+- Promoted artifacts go to `docs/plans/`, `docs/adr/`, `docs/specs/`.
+- Naming: milestone IDs `M001…`, slice IDs `S01…`, task IDs `T01…`.
+## Build and test
+```bash
+npm run build # full build
+npm run build:core # packages + tsc + resources only
+npm run test:unit # Vitest unit tests
+npm run test:integration
+npm run copy-resources # recompile src/resources/extensions after editing .ts files
+```
+## Key source locations
+| Path | Purpose |
+|------|---------|
+| `src/resources/extensions/sf/` | SF flow extension (TypeScript source) |
+| `packages/` | Seven npm workspace packages |
+| `web/` | Next.js browser surface |
+| `src/headless*.ts` | `sf headless` machine-surface command |
+| `vscode-extension/` | Editor surface |


@ -1,12 +1,6 @@
----
-name: forge-autonomous-runtime
-description: Explains SF autonomous loop, UOK gates, installed-runtime drift, and recovery paths.
-user-invocable: false
-model-invocable: true
-side-effects: none
-permission-profile: restricted
-triggers:
-- "*"
----
# forge-autonomous-runtime


@ -1,14 +1,6 @@
----
-name: forge-command-surface
-description: Use when changing SF slash commands, browser command parity, or headless command dispatch.
-user-invocable: true
-model-invocable: true
-side-effects: code-edits
-permission-profile: normal
-triggers:
-- build
-- code
-- "*"
----
# forge-command-surface


@ -1,3 +1,3 @@
{
-"lastFullVacuumAt": "2026-05-10T23:00:57.885Z"
+"lastFullVacuumAt": "2026-05-12T13:59:07.765Z"
}


BIN
.sf/metrics.db-shm Normal file

Binary file not shown.

BIN
.sf/metrics.db-wal Normal file

Binary file not shown.


@ -1 +1 @@
-{"fetchedAt":"2026-05-10T23:01:38.499Z","modelIds":["mistral-medium-2505","mistral-medium-2508","mistral-medium-latest","mistral-medium","mistral-vibe-cli-with-tools","open-mistral-nemo","open-mistral-nemo-2407","mistral-tiny-2407","mistral-tiny-latest","codestral-2508","codestral-latest","devstral-2512","devstral-medium-latest","devstral-latest","mistral-small-2603","mistral-small-latest","mistral-vibe-cli-fast","magistral-small-latest","magistral-medium-2509","magistral-medium-latest","labs-leanstral-2603","mistral-large-2512","mistral-large-latest","mistral-large-2512","mistral-large-latest","ministral-3b-2512","ministral-3b-latest","ministral-8b-2512","ministral-8b-latest","ministral-14b-2512","ministral-14b-latest","mistral-medium-3-5","mistral-medium-3.5","mistral-medium-3","mistral-medium-2604","mistral-medium-c21211-r0-75","mistral-vibe-cli-latest","mistral-large-2411","pixtral-large-2411","pixtral-large-latest","mistral-large-pixtral-2411","devstral-small-2507","devstral-medium-2507","magistral-small-2509","mistral-small-2506"]}
+{"fetchedAt":"2026-05-11T15:00:26.617Z","modelIds":["mistral-medium-2505","mistral-medium-2508","mistral-medium-latest","mistral-medium","mistral-vibe-cli-with-tools","open-mistral-nemo","open-mistral-nemo-2407","mistral-tiny-2407","mistral-tiny-latest","codestral-2508","codestral-latest","devstral-2512","devstral-medium-latest","devstral-latest","mistral-small-2603","mistral-small-latest","mistral-vibe-cli-fast","magistral-small-latest","magistral-medium-2509","magistral-medium-latest","labs-leanstral-2603","mistral-large-2512","mistral-large-latest","mistral-large-2512","mistral-large-latest","ministral-3b-2512","ministral-3b-latest","ministral-8b-2512","ministral-8b-latest","ministral-14b-2512","ministral-14b-latest","mistral-medium-3-5","mistral-medium-3.5","mistral-medium-3","mistral-medium-2604","mistral-medium-c21211-r0-75","mistral-vibe-cli-latest","mistral-large-2411","pixtral-large-2411","pixtral-large-latest","mistral-large-pixtral-2411","devstral-small-2507","devstral-medium-2507","magistral-small-2509","mistral-small-2506"]}

File diff suppressed because one or more lines are too long

21
.sf/preferences.yaml Normal file

@ -0,0 +1,21 @@
# SF preferences — see ~/.sf/agent/extensions/sf/docs/preferences-reference.md for docs
version: 1
last_synced_with_sf: 2.75.3
sf_template_state: pending
verification_commands:
- "npm run typecheck:extensions"
- npm run build
- npm run lint
- "npm run test:sf-light"
- "bash -c 'set -e; for d in \"rust-engine\" \"rust-engine/crates/ast\" \"rust-engine/crates/engine\" \"rust-engine/crates/grep\"; do (cd \"$d\" && cargo fmt --check); done'"
- "bash -c 'set -e; for d in \"rust-engine\" \"rust-engine/crates/ast\" \"rust-engine/crates/engine\" \"rust-engine/crates/grep\"; do (cd \"$d\" && cargo check); done'"
- "bash -c 'set -e; for d in \"rust-engine\" \"rust-engine/crates/ast\" \"rust-engine/crates/engine\" \"rust-engine/crates/grep\"; do (cd \"$d\" && cargo test -- --test-threads=2); done'"
- "bash -c 'set -e; for d in \"rust-engine\" \"rust-engine/crates/ast\" \"rust-engine/crates/engine\" \"rust-engine/crates/grep\"; do (cd \"$d\" && cargo clippy -- -D warnings); done'"
always_use_skills: []
prefer_skills: []
avoid_skills: []
skill_rules: []
custom_instructions: []
models: {}
skill_discovery: {}
auto_supervisor: {}


@ -115,6 +115,12 @@ These came up during recent ports and refactor passes — tracked here so they d
| Follow-up | Why | Tier | Effort |
|---|---|---|---|
| **Minimax search tests** | Search agent ported the feature but explicitly skipped tests because bunker's tests don't match our preferences/provider export shape. Need: `getMiniMaxSearchApiKey()` priority order, `resolveSearchProvider()` returning "minimax", `/search-provider minimax` CLI behavior, no-key error messages, `executeMiniMaxSearch` request shape. | 1 | 0.5 day |
| **Headless `new-milestone` unattended fix** | `sf headless new-milestone --context-text "…"` stalls when the agent calls `ask_user_questions` because the tool returns "unavailable" in non-interactive contexts. No milestone is created. Blocks batch backlog ingestion. | 1 | 1 day |
| **Adversarial-collaborative question probes** | Replace blocking `ask_user_questions` in headless/autonomous mode with parallel combatant + partner probes. Converge → proceed; diverge → conservative scope + flag in `OPEN-QUESTIONS.md`. Only ask human if interactive and high-stakes. | 1 | 2-3 days |
| **Auto-triage TODO.md on autonomous cycles** | Wire `triageTodoDump` to the autonomous orchestrator so each cycle starts by checking `TODO.md` for new dump content before picking the next unit. Skip when empty. | 2 | 1 day |
| **Bulk roadmap import** | `sf headless import-roadmap --file BACKLOG.md` — deterministic markdown → milestone/slice transform without LLM. H2 = milestone, `⬜` bullet = slice. | 2 | 2-3 days |
| **`sf plan list` TTY-free variant** | `sf plan list` fails in non-TTY. Add `--plain` or `sf headless plan list` emitting one `id title` per line. | 2 | 0.5 day |
| **Hand-authorable milestone scaffold** | Support a "minimum milestone" — just `CONTEXT.md` with frontmatter `id: MNNN\ntitle: …` — from which SF auto-fills the rest on first operation. | 2 | 1-2 days |
| **Product-audit phase machine wire-up** | Slim port (commit `a8cf2cd94`) shipped the prompt + `sf_product_audit` tool + workflow template, but doesn't yet dispatch into PhaseMerge or PhaseComplete. The tool is callable; the phase doesn't auto-fire. | 2 | 0.5 day |
| **Headless assistant-text preview** | Headless UX commit (`dff0df5fd`) covered notification spam, categorization, and phase/status tag distinction. The fourth bunker improvement — separating `assistantTextBuffer` from `thinkingBuffer` and flushing both as concise previews on tool-execution-start / message-end — was deferred because it's a meatier change in `headless.ts`. | 2 | 0.5 day |
| **Search provider registry refactor** | Adding minimax took 9 files because the provider list is duplicated across `provider.ts` (type + VALID_PREFERENCES), `native-search.ts`, `command-search-provider.ts` (CLI), `tool-search.ts` + `tool-llm-context.ts` (two separate execute paths!), `preferences-types.ts`, `preferences-validation.ts`, manifest, docs. A single `SearchProviderRegistry` array would let everything iterate. | 2 | 3-5 days |
@ -248,6 +254,7 @@ Worth building, just not blocking. Ship after Tier 2 if calendar allows.
| `pending_retain` queue | § 16.1, C-51 | Sm retain failures queue locally and retry with backoff. Required if and only if sm is integrated (Tier 1.2). |
| Capability-tag handoff | § 18.4, C-82, C-90 | `handoff("capability:go,testing", ...)` resolves to any matching agent. Adds `agent_capabilities` index. Builds on Tier 2.1 + Tier 3 inter-agent messaging. ~3 days. |
| `agent_run` budget + termination | § 17.5, C-54, C-65 | When does an agent run end? (inbox drained / explicit stop / budget hard-limit / supervisor signal / timeout). Compaction preserves wake message. ~1 week. |
| **Discoverable `--answers` schema** | Headless UX | `sf headless <cmd> --print-answer-schema` emits the JSON schema of every question the command might ask, so callers can pre-supply via `--answers` instead of probing or falling back to `OPEN-QUESTIONS.md`. ~1 day. |
---


@ -0,0 +1,41 @@
# TODO Inbox Triage Plan — 2026-05-11
## Summary
Root `TODO.md` contained seven untriaged implementation notes related to headless
machine-surface reliability and planning ergonomics. All have been promoted into
durable roadmap items and cross-referenced with `BUILD_PLAN.md`. Future agents
should use this plan and the referenced docs instead of treating the old raw dump
as instruction.
## Existing Durable Homes
These raw notes did **not** have a suitable existing durable home. None were
represented in `BUILD_PLAN.md`, `docs/specs/`, or milestone planning state.
## Newly Promoted Roadmap Items
| Item | Why | Suggested tier | Implementation note |
|---|---|---|---|
| **Headless `new-milestone` broken in unattended mode** | `sf headless new-milestone --context-text "…"` stalls when the agent calls `ask_user_questions` because the tool returns "unavailable" in non-interactive contexts. No milestone is created. Blocks batch backlog ingestion. | **Tier 1** | Two viable paths: (a) prompt-level — instruct the agent that `--context`/`--context-text` is the complete spec and to proceed without follow-up; (b) tool-level — in headless mode without `--supervised`, route `ask_user_questions` through the probe-resolution flow. Either works; both ideal. |
| **Question resolution via adversarial-collaborative probes** | Replace blocking `ask_user_questions` in headless/autonomous mode with parallel combatant + partner probes. Combatant challenges the assumption; partner researches the codebase for the likely answer. Converge → proceed; diverge → conservative minimal scope + flag in `OPEN-QUESTIONS.md`. Only ask human if interactive mode is available and stakes are high. Makes `headless new-milestone --context …` finish unattended. | **Tier 1** | Builds on the fix above but generalises to any headless/autonomous question. Needs a short budget (30 s / 2 tool calls per probe). Requires `OPEN-QUESTIONS.md` append path. |
| **Auto-triage TODO.md on each autonomous cycle** | `commands-todo.js` already implements `triageTodoDump`. Today it's manual only (`/todo triage`). Wire it to the autonomous orchestrator so each cycle starts by checking if `TODO.md` has content beyond the empty template, and if so runs `triageTodoDump` before picking the next unit. Skip when `TODO.md` == `_EMPTY_TODO` template. | **Tier 2** | One LLM call per cycle when content exists (Minimax M2.7 etc per `PREFERRED_TRIAGE_MODEL_PATTERNS`). Cheap relative to a cycle. Need a hook in the autonomous loop entrypoint before unit dispatch. |
| **Bulk roadmap import** | `sf headless import-roadmap --file BACKLOG.md` — read flat markdown with H2 sections and bullet items, emit one milestone per H2, slices per `⬜` item, no LLM. Pure text → SF-structure transform. | **Tier 2** | Needs a deterministic parser for the markdown schema (H2 = milestone, paragraph = context, `⬜` bullet = slice, optional H3 = phase boundary). Useful for ingesting human roadmaps without 16 LLM round-trips. |
| **`sf plan list` TTY-free variant** | `sf plan list` fails with "Interactive mode requires a terminal" in non-TTY. The actual operation (list files in `.sf/milestones/`) needs no interaction. Add `--plain` or `sf headless plan list` that emits one `milestone-id title` per line. | **Tier 2** | Very small surface change. The plan list logic should check `isTTY` or accept an explicit `--plain` flag; headless variant is a thin adapter. |
| **Hand-authorable milestone scaffold** | Today a milestone is a directory tree with `CONTEXT.md`, `MILESTONE-SUMMARY.md`, `ROADMAP.md`, `SUMMARY.md`, plus `slices/SNN/` and `tasks/TNN/`. Naming uses an ID + 6-char hash that's not documented. Support a "minimum milestone" — just `CONTEXT.md` with frontmatter `id: MNNN\ntitle: …` — that SF accepts and from which it auto-fills the rest on first operation. | **Tier 2** | Lets humans (or other tools) hand-author milestones when SF's LLM scaffold is unavailable or overkill. Need to document the minimum schema and add an auto-scaffold path in milestone load. |
| **Discoverable `--answers` schema** | `sf headless` has `--answers <path>` for pre-supplying interactive answers, but the answer schema for each command isn't discoverable. Add `--print-answer-schema` to headless commands that emit the JSON schema of every question the command *might* ask. | **Tier 3** | Complements probe-resolution flow — if probes converge, use that; if they diverge but caller pre-supplied via `--answers`, use that instead of falling back to `OPEN-QUESTIONS.md`. |
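For illustration, a backlog file in the H2/bullet schema described above might look like this (section titles and items are hypothetical):

```markdown
## Ship headless import

- ⬜ Parse H2 sections into milestones
- ⬜ Map ⬜ bullets to pending slices
- ✅ Spike: confirm a no-LLM transform is enough

## Plain-mode plan listing

- ⬜ Add a --plain flag to sf plan list
```

Each H2 becomes one milestone; each `⬜` bullet becomes a pending slice, and `✅` marks a slice already complete.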
## Grouping Note
Items 1-2 (headless unattended question handling) and items 3-7 (headless/planning
surface ergonomics) are related but separable. The unattended-mode fixes should
land first because they unblock the autonomous loop for milestone creation.
Bulk import, plain plan list, hand-authorable scaffolds, and answer-schema
discovery can ship independently in any order.
## Acceptance Criteria
- `TODO.md` contains no untriaged raw notes.
- New work starts from this plan or `BUILD_PLAN.md`, not from deleted raw dump text.
- Items that need implementation are converted into SF milestone/slice/task state
before code changes begin.


@ -0,0 +1,206 @@
/**
* headless-import-backlog.ts — deterministic markdown → SF-DB backlog importer.
*
* Parses a flat markdown file with H2 sections (## Title) into SF milestones,
* and bullet list items under each H2 into slices. No LLM, no RPC child.
* Writes directly to `.sf/sf.db` via the SF DB layer.
*
* Usage: sf headless import-backlog <file.md>
*/
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";
export interface ImportBacklogOptions {
json: boolean;
}
interface ParsedSlice {
title: string;
status: string;
}
interface ParsedMilestone {
title: string;
slices: ParsedSlice[];
}
/**
* Convert a title string to a safe kebab-case slug for use as an SF ID.
* SF milestone IDs must match /^[a-z0-9]+(-[a-z0-9]+)*$/.
*/
function slugify(title: string): string {
return title
.toLowerCase()
.replace(/[^a-z0-9\s-]/g, "")
.trim()
.replace(/[\s-]+/g, "-")
.replace(/^-+|-+$/g, "")
.slice(0, 48);
}
/**
* Parse a markdown backlog document into a list of milestones with slices.
*
* Sections are delimited by H2 headings (## Title). Bullet list items directly
* under a heading are collected as slices. Text paragraphs are ignored (they
* become part of the milestone vision if present).
*/
export function parseBacklogMarkdown(text: string): ParsedMilestone[] {
const milestones: ParsedMilestone[] = [];
let current: ParsedMilestone | null = null;
for (const rawLine of text.split("\n")) {
const line = rawLine.trimEnd();
const h2 = line.match(/^##\s+(.+)$/);
if (h2) {
current = { title: h2[1].trim(), slices: [] };
milestones.push(current);
continue;
}
if (!current) continue;
// Bullet items: -, *, +, or numbered (1. ...) — strip status emoji/markers
const bullet = line.match(/^\s*[-*+]\s+(.+)$/) ?? line.match(/^\s*\d+\.\s+(.+)$/);
if (bullet) {
let title = bullet[1].trim();
// Strip leading status markers: ✅, 🟡, ⬜, ✓, x, [x], [ ], etc.
title = title.replace(/^[✅🟡⬜✓x]\s+/u, "");
title = title.replace(/^\[[x ]\]\s+/i, "");
// Detect done status from emoji prefix in original
const isDone =
bullet[1].trim().startsWith("✅") ||
bullet[1].trim().match(/^\[x\]/i) != null;
if (title) {
current.slices.push({
title,
status: isDone ? "complete" : "pending",
});
}
}
}
return milestones.filter((m) => m.title.length > 0);
}
/**
* Run the import. Opens the SF DB, parses the backlog file, and upserts
* milestones + slices. Skips milestones whose slugged ID already exists.
*/
export async function runImportBacklog(
filePath: string,
cwd: string,
opts: ImportBacklogOptions,
): Promise<number> {
const log = opts.json
? () => {}
: (msg: string) => process.stderr.write(`[import-backlog] ${msg}\n`);
if (!existsSync(filePath)) {
process.stderr.write(
`[import-backlog] Error: file not found: ${filePath}\n`,
);
return 1;
}
const sfDir = join(cwd, ".sf");
if (!existsSync(sfDir)) {
process.stderr.write(
`[import-backlog] Error: no .sf directory found in ${cwd}\n` +
` Run 'sf headless init' first to bootstrap the project.\n`,
);
return 1;
}
// Open the SF database
const dynamicToolsPath = "./resources/extensions/sf/bootstrap/dynamic-tools.js";
const { ensureDbOpen } =
// eslint-disable-next-line @typescript-eslint/no-explicit-any
(await import(dynamicToolsPath)) as any;
const opened = await ensureDbOpen(cwd);
if (!opened) {
process.stderr.write(
`[import-backlog] Error: could not open .sf/sf.db in ${cwd}\n`,
);
return 1;
}
const sfDbPath = "./resources/extensions/sf/sf-db.js";
const { insertMilestone, getMilestone, insertSlice, getAllMilestones } =
// eslint-disable-next-line @typescript-eslint/no-explicit-any
(await import(sfDbPath)) as any;
const text = readFileSync(filePath, "utf8");
const parsed = parseBacklogMarkdown(text);
if (parsed.length === 0) {
process.stderr.write(
`[import-backlog] No H2 sections found in ${filePath}\n` +
` Expected format: ## Section Title, with optional bullet items below.\n`,
);
return 1;
}
log(`Parsed ${parsed.length} milestone(s) from ${filePath}`);
// Determine the next sequence number to preserve import order
const existing = getAllMilestones();
let sequence = existing.length;
const results: { id: string; title: string; slices: number; skipped: boolean }[] = [];
for (const m of parsed) {
const id = slugify(m.title);
if (!id) {
log(`Skipping milestone with unslugifiable title: "${m.title}"`);
continue;
}
const exists = getMilestone(id) != null;
if (exists) {
log(`Skipping existing milestone: ${id}`);
results.push({ id, title: m.title, slices: 0, skipped: true });
continue;
}
insertMilestone({
id,
title: m.title,
status: "queued",
sequence: sequence++,
});
let sliceSeq = 0;
for (const s of m.slices) {
const sliceId = `s${String(++sliceSeq).padStart(2, "0")}`;
insertSlice({
milestoneId: id,
id: sliceId,
title: s.title,
status: s.status,
sequence: sliceSeq,
});
}
log(` + ${id}: "${m.title}" (${m.slices.length} slices)`);
results.push({ id, title: m.title, slices: m.slices.length, skipped: false });
}
const imported = results.filter((r) => !r.skipped).length;
const skipped = results.filter((r) => r.skipped).length;
if (opts.json) {
process.stdout.write(
JSON.stringify({ schemaVersion: 1, imported, skipped, milestones: results }) + "\n",
);
} else {
process.stderr.write(
`[import-backlog] Done: ${imported} imported, ${skipped} skipped.\n`,
);
}
return 0;
}


@ -121,7 +121,7 @@ import {
MAX_FINALIZE_TIMEOUTS,
MAX_RECOVERY_CHARS,
} from "./types.js";
-import { closeoutAndStop } from "./phases-helpers.js";
+import { closeoutAndStop, _resolveDispatchGuardBasePath } from "./phases-helpers.js";
/**
* Decide whether the UOK diagnostics verdict may continue into dispatch.


@ -121,7 +121,7 @@ import {
MAX_FINALIZE_TIMEOUTS,
MAX_RECOVERY_CHARS,
} from "./types.js";
-import { recordLearningOutcomeForUnit } from "./phases-helpers.js";
+import { recordLearningOutcomeForUnit, shouldSkipArtifactVerification } from "./phases-helpers.js";
// ─── runFinalize ──────────────────────────────────────────────────────────────
/**


@ -9,6 +9,7 @@ import { debugLog } from "../debug-logger.js";
import { recordLearnedOutcome } from "../learning/runtime.js";
import { handleProductAudit } from "../tools/product-audit-tool.js";
import { logWarning } from "../workflow-logger.js";
+import { resolveWorktreeProjectRoot } from "../worktree-root.js";
/**
* Resolve the base path for milestone reports.
@ -28,7 +29,7 @@ export function _resolveReportBasePath(s) {
* The audit is fired with a "no-gaps" placeholder verdict. Re-run
* `/product-audit` manually for full LLM-powered gap analysis.
*/
-async function maybeFireProductAudit(s, ctx) {
+export async function maybeFireProductAudit(s, ctx) {
const mid = s.currentMilestoneId;
if (!mid) return;
// Guard: only fire once per milestone
@ -74,13 +75,13 @@ const PLANNING_FLOW_GATE_PHASES = new Set([
"validating-milestone",
"completing-milestone",
]);
-function shouldRunPlanningFlowGate(phase) {
+export function shouldRunPlanningFlowGate(phase) {
return PLANNING_FLOW_GATE_PHASES.has(phase);
}
-function shouldSkipArtifactVerification(unitType) {
+export function shouldSkipArtifactVerification(unitType) {
return unitType.startsWith("hook/") || unitType === "custom-step";
}
-function recordLearningOutcomeForUnit(
+export function recordLearningOutcomeForUnit(
ic,
unitType,
unitId,
@ -121,7 +122,7 @@ function recordLearningOutcomeForUnit(
* Generate and write an HTML milestone report snapshot.
* Extracted from the milestone-transition block in autoLoop.
*/
-async function generateMilestoneReport(s, ctx, milestoneId) {
+export async function generateMilestoneReport(s, ctx, milestoneId) {
const { loadVisualizerData } = await importExtensionModule(
import.meta.url,
"../visualizer-data.js",
@ -184,7 +185,7 @@ async function generateMilestoneReport(s, ctx, milestoneId) {
* If a unit is in-flight, close it out, then stop autonomous mode.
* Extracted from ~4 identical if-closeout-then-stop sequences in autoLoop.
*/
-async function closeoutAndStop(ctx, pi, s, deps, reason) {
+export async function closeoutAndStop(ctx, pi, s, deps, reason) {
if (s.currentUnit) {
await deps.closeoutUnit(
ctx,
@ -198,7 +199,7 @@ async function closeoutAndStop(ctx, pi, s, deps, reason) {
}
await deps.stopAuto(ctx, pi, reason);
}
-async function emitCancelledUnitEnd(
+export async function emitCancelledUnitEnd(
ic,
unitType,
unitId,


@ -121,7 +121,7 @@ import {
MAX_FINALIZE_TIMEOUTS,
MAX_RECOVERY_CHARS,
} from "./types.js";
-import { closeoutAndStop, generateMilestoneReport, maybeFireProductAudit } from "./phases-helpers.js";
+import { closeoutAndStop, generateMilestoneReport, maybeFireProductAudit, shouldRunPlanningFlowGate } from "./phases-helpers.js";
// ─── runPreDispatch ───────────────────────────────────────────────────────────
/**


@ -121,7 +121,7 @@ import {
MAX_FINALIZE_TIMEOUTS,
MAX_RECOVERY_CHARS,
} from "./types.js";
-import { emitCancelledUnitEnd, recordLearningOutcomeForUnit } from "./phases-helpers.js";
+import { emitCancelledUnitEnd, recordLearningOutcomeForUnit, shouldSkipArtifactVerification } from "./phases-helpers.js";
// ─── Session timeout scheduled resume state ────────────────────────────────────────
let consecutiveSessionTimeouts = 0;


@ -5,7 +5,8 @@
* import surface for loop.js and other consumers.
*/
export { assessUokDiagnosticsDispatchGate, runDispatch } from "./phases-dispatch.js";
-export { runGuards, requiresHumanProductionMutationApproval, _resolveDispatchGuardBasePath } from "./phases-guards.js";
+export { runGuards, requiresHumanProductionMutationApproval } from "./phases-guards.js";
+export { _resolveDispatchGuardBasePath } from "./phases-helpers.js";
export { runPreDispatch } from "./phases-pre-dispatch.js";
export { runUnitPhase, resetSessionTimeoutState } from "./phases-unit.js";
export { runFinalize } from "./phases-finalize.js";


@ -500,6 +500,25 @@ export function registerHooks(pi, ecosystemHandlers = []) {
} catch {
/* non-fatal — model catalog refresh must never block session start */
}
// Detect drift in source-of-truth markdown files since last session.
try {
const { detectMdFileDrift, formatDriftReport } = await import(
"../md-file-tracker.js"
);
const drift = detectMdFileDrift(process.cwd());
if (
drift.changed.length > 0 ||
drift.deleted.length > 0
) {
const report = formatDriftReport(drift);
ctx.ui?.notify?.(report, "info", {
noticeKind: NOTICE_KIND.SYSTEM_NOTICE,
dedupe_key: "md-file-drift",
});
}
} catch {
/* non-fatal — md-file tracker must never block session start */
}
// Compaction should never behave like a stop boundary. If autonomous mode
// was active when compaction happened, continue automatically on session start.
try {

View file

@@ -0,0 +1,199 @@
/**
* md-file-tracker.js: session-start SHA-256 tracker for source-of-truth markdown files.
*
* Purpose: detect external edits (hand-edits, git pulls, cross-agent edits) to
* key markdown files between sessions and surface them as notifications so SF and
* the operator always know what changed since last time.
*
* Consumer: bootstrap/register-hooks.js session_start hook.
*/
import { createHash } from "node:crypto";
import { existsSync, readdirSync, readFileSync, statSync } from "node:fs";
import { join, relative } from "node:path";
import { spawnSync } from "node:child_process";
import {
deactivateTrackedMdFile,
getAllTrackedMdFiles,
getTrackedMdFile,
upsertTrackedMdFile,
} from "./sf-db/sf-db-md-tracker.js";
import { isDbAvailable } from "./sf-db.js";
// ─── Exclusions ───────────────────────────────────────────────────────────────
/** Uppercase-root md files excluded by design: high churn, low signal. */
const EXCLUDED_ROOT_FILES = new Set(["TODO.md", "CHANGELOG.md", "BUILD_PLAN.md"]);
// ─── Hashing ─────────────────────────────────────────────────────────────────
function hashFile(absPath) {
try {
return createHash("sha256").update(readFileSync(absPath)).digest("hex");
} catch {
return null;
}
}
// ─── Git helpers ──────────────────────────────────────────────────────────────
function getCurrentCommit(cwd) {
try {
const r = spawnSync("git", ["rev-parse", "HEAD"], { cwd, encoding: "utf-8" });
return r.status === 0 ? (r.stdout.trim() || null) : null;
} catch {
return null;
}
}
function gitDiffForFile(cwd, relpath, sinceCommit) {
if (!sinceCommit) return null;
try {
const r = spawnSync(
"git",
["diff", "--unified=3", sinceCommit, "--", relpath],
{ cwd, encoding: "utf-8" },
);
return r.status === 0 ? (r.stdout.trim() || null) : null;
} catch {
return null;
}
}
// ─── File discovery ───────────────────────────────────────────────────────────
function walkDir(dir, category, repoRoot, out) {
let entries;
try {
entries = readdirSync(dir, { withFileTypes: true });
} catch {
return;
}
for (const entry of entries) {
const abs = join(dir, entry.name);
if (entry.isDirectory()) {
walkDir(abs, category, repoRoot, out);
} else if (entry.isFile() && entry.name.endsWith(".md")) {
out.push({ relpath: relative(repoRoot, abs), absPath: abs, category });
}
}
}
/**
* Discover all candidate source-of-truth markdown files under repoRoot.
*
* Purpose: produce the per-session candidate set without shelling out to find.
*
* Consumer: detectMdFileDrift().
*/
function discoverTrackedFiles(repoRoot) {
const results = [];
// Root-level uppercase .md files
try {
for (const entry of readdirSync(repoRoot, { withFileTypes: true })) {
if (!entry.isFile()) continue;
const { name } = entry;
if (!name.endsWith(".md")) continue;
if (EXCLUDED_ROOT_FILES.has(name)) continue;
if (/^[A-Z][A-Z_\-0-9]*\.md$/.test(name)) {
results.push({ relpath: name, absPath: join(repoRoot, name), category: "meta" });
}
}
} catch { /* noop if root unreadable */ }
// .github/copilot-instructions.md
const copilotMd = join(repoRoot, ".github", "copilot-instructions.md");
if (existsSync(copilotMd)) {
results.push({ relpath: ".github/copilot-instructions.md", absPath: copilotMd, category: "meta" });
}
// docs/adr/**/*.md
walkDir(join(repoRoot, "docs", "adr"), "adr", repoRoot, results);
// docs/plans/**/*.md
walkDir(join(repoRoot, "docs", "plans"), "plan", repoRoot, results);
// .sf/wiki/**/*.md
walkDir(join(repoRoot, ".sf", "wiki"), "wiki", repoRoot, results);
return results;
}
// ─── Main API ─────────────────────────────────────────────────────────────────
/**
* Scan tracked markdown files, detect drift from last session, and return a
* report of changed/new/deleted files. Updates the DB in-place.
*
* Purpose: give SF and the operator visibility into which source-of-truth docs
* changed between sessions so nothing silently drifts.
*
* Consumer: bootstrap/register-hooks.js session_start.
*
* @param {string} repoRoot Absolute path to the project root.
* @returns {{ changed: DriftEntry[], added: DriftEntry[], deleted: string[] }}
*/
export function detectMdFileDrift(repoRoot) {
if (!isDbAvailable()) return { changed: [], added: [], deleted: [] };
const headCommit = getCurrentCommit(repoRoot);
const candidates = discoverTrackedFiles(repoRoot);
const seen = new Set();
const changed = [];
const added = [];
for (const { relpath, absPath, category } of candidates) {
seen.add(relpath);
const sha = hashFile(absPath);
if (!sha) continue;
let sizeBytes = 0;
try { sizeBytes = statSync(absPath).size; } catch { /* noop */ }
const existing = getTrackedMdFile(relpath);
if (!existing) {
upsertTrackedMdFile({ relpath, sha256: sha, sizeBytes, lastSeenCommit: headCommit, category });
added.push({ relpath, category });
} else if (existing.sha256 !== sha) {
const diff = gitDiffForFile(repoRoot, relpath, existing.last_seen_commit);
upsertTrackedMdFile({ relpath, sha256: sha, sizeBytes, lastSeenCommit: headCommit, category });
changed.push({ relpath, category, prevCommit: existing.last_seen_commit, diff });
} else {
// Unchanged — refresh timestamp and commit pointer so we stay current.
upsertTrackedMdFile({ relpath, sha256: sha, sizeBytes, lastSeenCommit: headCommit, category });
}
}
// Detect deletions among previously-tracked files.
const deleted = [];
for (const row of getAllTrackedMdFiles()) {
if (!seen.has(row.relpath) && !existsSync(join(repoRoot, row.relpath))) {
deactivateTrackedMdFile(row.relpath);
deleted.push(row.relpath);
}
}
return { changed, added, deleted };
}
/**
* Format a drift report as a human-readable notification string.
*
* Purpose: keep UI-layer formatting out of the tracker core so the logic can
* be tested without a UI dependency.
*
* Consumer: bootstrap/register-hooks.js after detectMdFileDrift().
*/
export function formatDriftReport({ changed, added, deleted }) {
const lines = [];
if (changed.length > 0) {
lines.push(`${changed.length} tracked md file${changed.length === 1 ? "" : "s"} changed since last session:`);
for (const { relpath } of changed) lines.push(`  - ${relpath}`);
}
if (added.length > 0) {
lines.push(`${added.length} new md file${added.length === 1 ? "" : "s"} now tracked:`);
for (const { relpath } of added) lines.push(`  - ${relpath}`);
}
if (deleted.length > 0) {
lines.push(`${deleted.length} previously tracked md file${deleted.length === 1 ? "" : "s"} removed:`);
for (const relpath of deleted) lines.push(`  - ${relpath}`);
}
return lines.join("\n");
}
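The drift detection above boils down to a three-way set comparison between the hashes recorded last session and the hashes on disk now. A minimal standalone sketch of that classification (illustrative only; the real `detectMdFileDrift` reads files from disk, shells out to git, and persists observations through the DB adapter — the file names here are hypothetical):

```javascript
import { createHash } from "node:crypto";

function sha256(text) {
  return createHash("sha256").update(text).digest("hex");
}

// previous/current: Map of relpath -> file contents. Bucket each path into
// changed (hash differs), added (new path), or deleted (path gone).
function classifyDrift(previous, current) {
  const changed = [];
  const added = [];
  const deleted = [];
  for (const [relpath, body] of current) {
    const prev = previous.get(relpath);
    if (prev === undefined) added.push(relpath);
    else if (sha256(prev) !== sha256(body)) changed.push(relpath);
  }
  for (const relpath of previous.keys()) {
    if (!current.has(relpath)) deleted.push(relpath);
  }
  return { changed, added, deleted };
}

const prev = new Map([["README.md", "v1"], ["docs/adr/001.md", "adr"]]);
const curr = new Map([["README.md", "v2"], ["AGENTS.md", "new"]]);
console.log(classifyDrift(prev, curr));
// changed: ["README.md"], added: ["AGENTS.md"], deleted: ["docs/adr/001.md"]
```

The real implementation additionally refreshes unchanged rows so `last_seen_commit` stays current, which is what makes the later `git diff` against that commit meaningful.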

View file

@@ -19,4 +19,5 @@ export * from './sf-db/sf-db-learning.js';
export * from './sf-db/sf-db-memory.js';
export * from './sf-db/sf-db-profile.js';
export * from './sf-db/sf-db-self-feedback.js';
export * from './sf-db/sf-db-md-tracker.js';

View file

@@ -0,0 +1,90 @@
/**
* sf-db/sf-db-md-tracker.js: DB adapter for tracked_md_files.
*
* Purpose: persist sha256 observations for source-of-truth markdown files so
* SF can detect external edits (hand-edits, git pulls, cross-agent edits) on
* the next session start.
*
* Consumer: md-file-tracker.js, called from bootstrap/register-hooks.js on
* session_start.
*/
import { _getAdapter } from "./sf-db-core.js";
/**
* Upsert a file observation row. Called after hashing the file on session
* start, both for new files and for files whose sha has changed.
*
* Purpose: record "SF last saw this file with this hash at this commit" so
* the next session can detect drift.
*
* Consumer: md-file-tracker.js detectMdFileDrift().
*/
export function upsertTrackedMdFile({ relpath, sha256, sizeBytes, lastSeenCommit, category }) {
const db = _getAdapter();
if (!db) return;
db.prepare(`
INSERT INTO tracked_md_files (relpath, sha256, size_bytes, last_seen_at, last_seen_commit, category, active)
VALUES (:relpath, :sha256, :size_bytes, :last_seen_at, :last_seen_commit, :category, 1)
ON CONFLICT(relpath) DO UPDATE SET
sha256 = excluded.sha256,
size_bytes = excluded.size_bytes,
last_seen_at = excluded.last_seen_at,
last_seen_commit = excluded.last_seen_commit,
category = excluded.category,
active = 1
`).run({
":relpath": relpath,
":sha256": sha256,
":size_bytes": sizeBytes,
":last_seen_at": new Date().toISOString(),
":last_seen_commit": lastSeenCommit ?? null,
":category": category ?? "meta",
});
}
/**
* Return the tracked row for a repo-relative path, or null if untracked.
*
* Purpose: retrieve the last-seen sha so the tracker can detect drift.
*
* Consumer: md-file-tracker.js detectMdFileDrift().
*/
export function getTrackedMdFile(relpath) {
const db = _getAdapter();
if (!db) return null;
return db.prepare(
"SELECT * FROM tracked_md_files WHERE relpath = :relpath AND active = 1",
).get({ ":relpath": relpath }) ?? null;
}
/**
* Mark a tracked file inactive (e.g. deleted from disk).
* Does not purge the row; purging requires explicit operator confirmation.
*
* Purpose: distinguish "file gone" from "never seen" without data loss.
*
* Consumer: md-file-tracker.js on file-not-found during walk.
*/
export function deactivateTrackedMdFile(relpath) {
const db = _getAdapter();
if (!db) return;
db.prepare(
"UPDATE tracked_md_files SET active = 0 WHERE relpath = :relpath",
).run({ ":relpath": relpath });
}
/**
* Return all active tracked rows. Used during session start to find files that
* were tracked in a previous session but are no longer present on disk.
*
* Purpose: detect deletions that occurred between sessions.
*
* Consumer: md-file-tracker.js detectMdFileDrift().
*/
export function getAllTrackedMdFiles() {
const db = _getAdapter();
if (!db) return [];
return db.prepare(
"SELECT * FROM tracked_md_files WHERE active = 1 ORDER BY relpath",
).all();
}
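The active-flag lifecycle these four functions implement is easiest to see end to end. A minimal in-memory emulation of the intended semantics (illustrative only; the real adapter issues the SQL above through `_getAdapter()`, and the file name here is hypothetical):

```javascript
const rows = new Map(); // relpath -> row, standing in for the tracked_md_files table

function upsertTrackedMdFile({ relpath, sha256, lastSeenCommit = null, category = "meta" }) {
  // Insert-or-replace, always forcing active back to 1, mirroring the
  // ON CONFLICT(relpath) DO UPDATE ... active = 1 clause.
  rows.set(relpath, { relpath, sha256, lastSeenCommit, category, active: 1 });
}

function deactivateTrackedMdFile(relpath) {
  const row = rows.get(relpath);
  if (row) row.active = 0; // soft delete: row survives for history
}

function getAllTrackedMdFiles() {
  return [...rows.values()].filter((r) => r.active === 1);
}

upsertTrackedMdFile({ relpath: "ROADMAP.md", sha256: "aaa" });
deactivateTrackedMdFile("ROADMAP.md");
// Re-observing the file flips it back to active, so a deleted-then-restored
// doc is tracked again without operator intervention.
upsertTrackedMdFile({ relpath: "ROADMAP.md", sha256: "bbb" });
console.log(getAllTrackedMdFiles().length); // 1
```

The soft-delete design is what lets the session-start hook distinguish "file gone since last session" from "file we have never seen".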

View file

@@ -15,7 +15,7 @@ function defaultQueryTimeout(operation, fallbackValue) {
}
}
-const SCHEMA_VERSION = 61;
+const SCHEMA_VERSION = 62;
function indexExists(db, name) {
return !!db
.prepare(
@@ -3008,6 +3008,28 @@ function migrateSchema(db, { currentPath, withQueryTimeout }) {
":applied_at": new Date().toISOString(),
});
}
if (currentVersion < 62) {
// Schema v62: tracked_md_files — sha-track source-of-truth markdown files
// so SF can detect external edits (hand-edits, git pulls, cross-agent edits)
// and surface diffs at session start.
db.exec(`
CREATE TABLE IF NOT EXISTS tracked_md_files (
relpath TEXT PRIMARY KEY,
sha256 TEXT NOT NULL,
size_bytes INTEGER NOT NULL DEFAULT 0,
last_seen_at TEXT NOT NULL,
last_seen_commit TEXT DEFAULT NULL,
category TEXT NOT NULL DEFAULT 'meta',
active INTEGER NOT NULL DEFAULT 1
);
`);
db.prepare(
"INSERT INTO schema_version (version, applied_at) VALUES (:version, :applied_at)",
).run({
":version": 62,
":applied_at": new Date().toISOString(),
});
}
db.exec("COMMIT");
} catch (err) {
db.exec("ROLLBACK");
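The guard pattern in this migration (version check, DDL, then a schema_version insert inside the same transaction) can be sketched in isolation. This is a hypothetical in-memory stand-in, not the real sf-db adapter:

```javascript
// Each migration runs at most once: the version guard skips anything the
// store has already applied, and the version bump travels with the change,
// so a crash between the two cannot leave them out of sync.
function migrate(state, migrations) {
  for (const { version, apply } of migrations) {
    if (state.version < version) {
      apply(state);
      state.version = version; // recorded alongside the DDL, as in schema_version
    }
  }
  return state;
}

const store = { version: 61, tables: new Set(["intent_chapters"]) };
const migrations = [
  { version: 62, apply: (s) => s.tables.add("tracked_md_files") },
];
migrate(store, migrations);
console.log(store.version, store.tables.has("tracked_md_files")); // 62 true
```

Running `migrate(store, migrations)` a second time is a no-op, which is the property the `currentVersion < 62` guard above provides.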

View file

@@ -91,6 +91,11 @@ function parseYamlValue(value) {
/**
* Validate skill frontmatter against required fields.
*
* Project and user skills follow the open Agent Skills ecosystem standard:
* only `name` and `description` are required. SF-specific fields
* (user-invocable, model-invocable, side-effects, permission-profile) are
* optional; defaults are applied in buildSkillRecord.
*/
export function validateSkillFrontmatter(frontmatter) {
const errors = [];
@@ -100,18 +105,6 @@
if (!frontmatter.description || typeof frontmatter.description !== "string") {
errors.push("Missing or invalid 'description' field");
}
-if (frontmatter["user-invocable"] === undefined) {
-errors.push("Missing 'user-invocable' field");
-}
-if (frontmatter["model-invocable"] === undefined) {
-errors.push("Missing 'model-invocable' field");
-}
-if (frontmatter["side-effects"] === undefined) {
-errors.push("Missing 'side-effects' field");
-}
-if (frontmatter["permission-profile"] === undefined) {
-errors.push("Missing 'permission-profile' field");
-}
const validProfiles = ["restricted", "normal", "trusted", "unrestricted"];
if (
@@ -135,16 +128,20 @@
* Purpose: produce a typed, normalized skill record so callers never
* access raw frontmatter keys directly.
*
* SF-specific fields default to safe values when absent, preserving
* compatibility with the open Agent Skills ecosystem standard, which
* only mandates name + description.
*
* Consumer: skill loader, /skills catalog, model context assembly.
*/
export function buildSkillRecord(skillDir, frontmatter, body) {
return {
name: frontmatter.name,
description: frontmatter.description,
-userInvocable: frontmatter["user-invocable"] ?? false,
-modelInvocable: frontmatter["model-invocable"] ?? false,
+userInvocable: frontmatter["user-invocable"] ?? true,
+modelInvocable: frontmatter["model-invocable"] ?? true,
 sideEffects: frontmatter["side-effects"] ?? "none",
-permissionProfile: frontmatter["permission-profile"] ?? "restricted",
+permissionProfile: frontmatter["permission-profile"] ?? "normal",
triggers: frontmatter.triggers ?? [],
maxActivations: frontmatter["max-activations"] ?? null,
locked: frontmatter.locked === true,
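With the relaxed validation, a skill shipping only the ecosystem-standard fields now loads with permissive defaults. A small standalone restatement of the fallbacks above (not the real module; the skill name in the example is hypothetical):

```javascript
// Mirrors the `??` fallbacks buildSkillRecord applies when SF-specific
// frontmatter keys are absent: name + description alone is a valid skill.
function applySkillDefaults(frontmatter) {
  return {
    name: frontmatter.name,
    description: frontmatter.description,
    userInvocable: frontmatter["user-invocable"] ?? true,
    modelInvocable: frontmatter["model-invocable"] ?? true,
    sideEffects: frontmatter["side-effects"] ?? "none",
    permissionProfile: frontmatter["permission-profile"] ?? "normal",
  };
}

const record = applySkillDefaults({ name: "sf-wiki", description: "Wiki conventions." });
console.log(record.permissionProfile, record.userInvocable); // normal true

// Explicit frontmatter still wins over the defaults:
const locked = applySkillDefaults({
  name: "deploy",
  description: "Deploy helper.",
  "permission-profile": "restricted",
});
console.log(locked.permissionProfile); // restricted
```

Note the direction of the change: absent fields used to fail validation outright; now they default to the more permissive `true`/`normal`, so authors who want a restricted skill must say so explicitly.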

View file

@@ -222,7 +222,7 @@ test("openDatabase_migrates_v27_tasks_without_created_at_through_spec_backfill",
const version = db
.prepare("SELECT MAX(version) AS version FROM schema_version")
.get();
-assert.equal(version.version, 61);
+assert.equal(version.version, 62);
// v61: intent_chapters table exists
const chaptersTable = db
.prepare(
@@ -233,6 +233,16 @@
chaptersTable,
"intent_chapters table should exist after v61 migration",
);
// v62: tracked_md_files table exists
const trackedMdTable = db
.prepare(
"SELECT name FROM sqlite_master WHERE type='table' AND name='tracked_md_files'",
)
.get();
assert.ok(
trackedMdTable,
"tracked_md_files table should exist after v62 migration",
);
const taskSpec = db
.prepare(
"SELECT milestone_id, slice_id, task_id, verify FROM task_specs WHERE task_id = 'T01'",

View file

@@ -64,6 +64,16 @@ Some instructions.
expect(result.errors).toHaveLength(0);
});
test("validateSkillFrontmatter_passes_minimal_ecosystem_frontmatter", () => {
// Ecosystem standard: only name + description required
const result = validateSkillFrontmatter({
name: "test",
description: "desc",
});
expect(result.valid).toBe(true);
expect(result.errors).toHaveLength(0);
});
test("validateSkillFrontmatter_fails_missing_fields", () => {
const result = validateSkillFrontmatter({
name: "test",
@@ -71,7 +81,6 @@
expect(result.valid).toBe(false);
expect(result.errors.length).toBeGreaterThan(0);
expect(result.errors.some((e) => e.includes("description"))).toBe(true);
-expect(result.errors.some((e) => e.includes("user-invocable"))).toBe(true);
});
test("validateSkillFrontmatter_fails_invalid_permission_profile", () => {