fix(sf): correct stale .sf milestone paths in prompts + ADR-impl absolute links

prompts/parallel-research-slices.md step 3 told the dispatcher to verify
research at `.sf/{{mid}}/`, but slice research files actually live at
`.sf/milestones/{{mid}}/slices/<sliceId>/<sliceId>-RESEARCH.md`. Step 3
verification could only ever fail.
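The corrected step-3 check can be sketched as a small shell loop. This is an
illustrative sketch, not tooling from the repo: `M01`, `S01`, and `S02` are
placeholder ids standing in for the real `{{mid}}` and slice list.

```shell
# Sketch: verify each slice's RESEARCH file at the corrected path.
# "M01", "S01", "S02" are placeholder ids, not real milestone/slice names.
mid="M01"
result=""
for slice in S01 S02; do
  f=".sf/milestones/$mid/slices/$slice/$slice-RESEARCH.md"
  if [ -f "$f" ]; then
    result="$result ok:$slice"
  else
    result="$result retry:$slice"  # step 4: re-run this slice individually
  fi
done
echo "$result"
```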

prompts/validate-milestone.md sent the three milestone-validation reviewer
agents to the wrong paths:
- parentTrace pointed at `.sf/{{milestoneId}}/S0X-SUMMARY.md` (slice
  summaries actually live at `.sf/milestones/{{milestoneId}}/slices/S0X/`)
- Reviewer A read `.sf/{{milestoneId}}/REQUIREMENTS.md` (the file is at
  project-level `.sf/REQUIREMENTS.md`)
- Reviewer A scanned `.sf/{{milestoneId}}/` for slice SUMMARYs (wrong dir)
- Reviewer C read `.sf/{{milestoneId}}/CONTEXT.md` (actual file is
  `.sf/milestones/{{milestoneId}}/{{milestoneId}}-CONTEXT.md`)

Reviewers would either return false MISSING / FAIL verdicts or have to
re-discover the layout.
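For reference, the corrected `.sf` layout implied by the paths above can be
materialized with a few commands; the milestone and slice ids below are
illustrative, not from the repo.

```shell
# Sketch of the corrected .sf layout ("M01"/"S01" are placeholder ids).
# Paths mirror the corrected ones in this commit.
mkdir -p .sf/milestones/M01/slices/S01
touch .sf/REQUIREMENTS.md                            # project-level requirements
touch .sf/milestones/M01/M01-CONTEXT.md              # milestone context
touch .sf/milestones/M01/slices/S01/S01-RESEARCH.md  # slice research
touch .sf/milestones/M01/slices/S01/S01-SUMMARY.md   # slice summary
```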

docs/dev/ADR-{008,009}-IMPLEMENTATION-PLAN.md "Related ADR" links pointed
at absolute paths on a contributor's machine
(`/Users/jeremymcspadden/Github/sf-2/...`). Replaced them with
sibling-file relative paths.
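A guard against this kind of regression can be sketched with grep. The
`demo_docs/` directory and its files below are fabricated for illustration;
only the `/Users/` path prefix comes from this commit.

```shell
# Sketch: detect machine-absolute links left in markdown docs.
# demo_docs/ and its contents are hypothetical example data.
mkdir -p demo_docs
printf '%s\n' '[ADR](/Users/someone/Github/sf-2/docs/ADR.md)' > demo_docs/stale.md
printf '%s\n' '[ADR](./ADR.md)' > demo_docs/clean.md
stale=$(grep -rl '/Users/' demo_docs || true)
echo "stale links in: $stale"
```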

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mikael Hugo 2026-05-02 18:06:16 +02:00
parent 21113e18a9
commit ba4bab1034
4 changed files with 7 additions and 7 deletions

docs/dev/ADR-008-IMPLEMENTATION-PLAN.md

@@ -1,6 +1,6 @@
 # ADR-008 Implementation Plan
-**Related ADR:** [ADR-008-sf-tools-over-mcp-for-provider-parity.md](/Users/jeremymcspadden/Github/sf-2/docs/ADR-008-sf-tools-over-mcp-for-provider-parity.md)
+**Related ADR:** [ADR-008-sf-tools-over-mcp-for-provider-parity.md](./ADR-008-sf-tools-over-mcp-for-provider-parity.md)
 **Status:** Draft
 **Date:** 2026-04-09

docs/dev/ADR-009-IMPLEMENTATION-PLAN.md

@@ -1,6 +1,6 @@
 # ADR-009 Implementation Plan
-**Related ADR:** [ADR-009-orchestration-kernel-refactor.md](/Users/jeremymcspadden/Github/sf-2/docs/dev/ADR-009-orchestration-kernel-refactor.md)
+**Related ADR:** [ADR-009-orchestration-kernel-refactor.md](./ADR-009-orchestration-kernel-refactor.md)
 **Status:** Draft
 **Date:** 2026-04-14
 **Target Window:** 8-10 waves (incremental, no big-bang rewrite)

prompts/parallel-research-slices.md

@@ -14,7 +14,7 @@ Dispatch ALL slices simultaneously using the `subagent` tool in **parallel mode*
 1. Call `subagent` exactly once with the JSON payload below
 2. Wait for ALL subagents to complete
-3. Verify each slice's RESEARCH file was written (check the `.sf/{{mid}}/` directory)
+3. Verify each slice's RESEARCH file was written (check the `.sf/milestones/{{mid}}/slices/<sliceId>/` directory for `<sliceId>-RESEARCH.md`)
 4. If any subagent failed to write its RESEARCH file, re-run it individually
 5. Report which slices completed research and which (if any) failed

prompts/validate-milestone.md

@@ -36,8 +36,8 @@ artifacts, not just trust the SUMMARY prose.
 ```
 subagent({
 parentTrace: "Slice claims to audit:\n" +
-"- S01: <one-line claim> (.sf/{{milestoneId}}/S01-SUMMARY.md)\n" +
-"- S02: <one-line claim> (.sf/{{milestoneId}}/S02-SUMMARY.md)\n" +
+"- S01: <one-line claim> (.sf/milestones/{{milestoneId}}/slices/S01/S01-SUMMARY.md)\n" +
+"- S02: <one-line claim> (.sf/milestones/{{milestoneId}}/slices/S02/S02-SUMMARY.md)\n" +
 "- ...",
 tasks: [
 { agent: "reviewer", task: "<Reviewer A prompt below>" },
@@ -49,7 +49,7 @@ subagent({
 **Reviewer A — Requirements Coverage**
 Agent: `reviewer`
-Prompt: "Review milestone {{milestoneId}} requirements coverage. Working directory: {{workingDirectory}}. Read `.sf/{{milestoneId}}/REQUIREMENTS.md` (or equivalent requirements file). For each requirement, check the slice SUMMARY files in `.sf/{{milestoneId}}/` to determine if it is: COVERED (clearly demonstrated), PARTIAL (mentioned but not fully demonstrated), or MISSING (no evidence). Output a markdown table with columns: Requirement | Status | Evidence. End with a one-line verdict: PASS if all covered, NEEDS-ATTENTION if partials exist, FAIL if any missing."
+Prompt: "Review milestone {{milestoneId}} requirements coverage. Working directory: {{workingDirectory}}. Read `.sf/REQUIREMENTS.md` (project-level). For each requirement, check the slice SUMMARY files in `.sf/milestones/{{milestoneId}}/slices/<sliceId>/<sliceId>-SUMMARY.md` to determine if it is: COVERED (clearly demonstrated), PARTIAL (mentioned but not fully demonstrated), or MISSING (no evidence). Output a markdown table with columns: Requirement | Status | Evidence. End with a one-line verdict: PASS if all covered, NEEDS-ATTENTION if partials exist, FAIL if any missing."
 **Reviewer B — Cross-Slice Integration**
 Agent: `reviewer`
@@ -57,7 +57,7 @@ Prompt: "Review milestone {{milestoneId}} cross-slice integration. Working direc
 **Reviewer C — Assessment & Acceptance Criteria**
 Agent: `reviewer`
-Prompt: "Review milestone {{milestoneId}} assessment evidence and acceptance criteria. Working directory: {{workingDirectory}}. Read `.sf/{{milestoneId}}/CONTEXT.md` for acceptance criteria. Check for ASSESSMENT files in each slice directory. Verify each acceptance criterion maps to either a passing assessment result or clear SUMMARY evidence. Then review the inlined milestone verification classes from planning. For each non-empty planned class, output a markdown table: Class | Planned Check | Evidence | Verdict. Use the exact class names `Contract`, `Integration`, `Operational`, and `UAT` whenever those classes are present. If no verification classes were planned, say that explicitly. Output two sections: `Acceptance Criteria` with a checklist `[ ] Criterion | Evidence`, and `Verification Classes` with the table. End with a one-line verdict: PASS if all criteria and verification classes are covered, NEEDS-ATTENTION if gaps exist."
+Prompt: "Review milestone {{milestoneId}} assessment evidence and acceptance criteria. Working directory: {{workingDirectory}}. Read `.sf/milestones/{{milestoneId}}/{{milestoneId}}-CONTEXT.md` for acceptance criteria. Check for ASSESSMENT files in each slice directory. Verify each acceptance criterion maps to either a passing assessment result or clear SUMMARY evidence. Then review the inlined milestone verification classes from planning. For each non-empty planned class, output a markdown table: Class | Planned Check | Evidence | Verdict. Use the exact class names `Contract`, `Integration`, `Operational`, and `UAT` whenever those classes are present. If no verification classes were planned, say that explicitly. Output two sections: `Acceptance Criteria` with a checklist `[ ] Criterion | Evidence`, and `Verification Classes` with the table. End with a one-line verdict: PASS if all criteria and verification classes are covered, NEEDS-ATTENTION if gaps exist."
 ### Step 2 — Synthesize Findings