feat(sf): port 5 deep-planning-mode prompts from gsd-2

Adds the prompt templates that gsd-2 uses for its 'deep' planning_depth
mode — a multi-stage discussion flow (project → requirements → research
decision → parallel research) that runs BEFORE any milestone-level
discussion. SF only had milestone-level discuss flow; this fills the
project-level and requirements-level gaps.

Ported files:
- guided-discuss-project.md     — project-wide vision/users/anti-goals
- guided-discuss-requirements.md — structured R### requirements interview
- guided-research-decision.md    — yes/no gate for parallel research
- guided-research-project.md     — 4-way parallel research orchestrator
- guided-workflow-preferences.md — workflow + planning prefs collection

gsd→sf adaptations: GSD/gsd → SF/sf, .gsd/ → .sf/, gsd_*_save tool
names → sf_*_save, GSD Skill Preferences → SF Skill Preferences.

All 5 verified to load via loadPrompt with their required template
variables. The two sf_* tools they reference (sf_requirement_save and
sf_summary_save) already exist in db-tools.ts.
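A load check of this shape could catch an unfilled placeholder early. This is a hypothetical sketch — `loadPrompt`'s real signature may differ, so a stand-in renderer is used here that fails when any `{{variable}}` is left unresolved:

```typescript
// Stand-in for loadPrompt's template step (hypothetical; the real
// implementation may differ): substitute {{name}} placeholders and
// throw if any remain unfilled after rendering.
function renderPrompt(template: string, vars: Record<string, string>): string {
  const out = template.replace(/\{\{(\w+)\}\}/g, (_, name: string) =>
    name in vars ? vars[name] : `{{${name}}}`,
  );
  const missing = out.match(/\{\{\w+\}\}/g);
  if (missing) throw new Error(`unfilled variables: ${missing.join(", ")}`);
  return out;
}

// e.g. guided-discuss-project.md requires these two variables:
const rendered = renderPrompt(
  "Working directory: {{workingDirectory}}. Structured: {{structuredQuestionsAvailable}}",
  { workingDirectory: "/repo", structuredQuestionsAvailable: "true" },
);
```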

This is the first half of the deep-mode port. Remaining work for full
end-to-end:
- Port 5 builders to auto-prompts.ts (buildDiscussProjectPrompt, etc.)
- Port dispatch rules to auto-dispatch.ts (each gates on
  prefs.planning_depth === 'deep')
- Port resolveDeepProjectSetupState helper for the research-decision
  marker file
- Add planning_depth: 'deep' | 'light' to PhaseSkipPreferences

Default behavior preserved: without planning_depth set, current SF
'light' behavior is unchanged.
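The planned gate could look like this — a hypothetical sketch only (names mirror the remaining-work list above; the real `auto-dispatch.ts` API may differ):

```typescript
// Hypothetical shape of the deep-mode dispatch gate. An unset
// planning_depth must preserve current SF 'light' behavior.
type PlanningDepth = "deep" | "light";

interface PhaseSkipPreferences {
  planning_depth?: PlanningDepth; // absent => existing 'light' flow
}

function shouldDispatchDeepStage(prefs: PhaseSkipPreferences): boolean {
  // Each deep-mode unit gates on an explicit 'deep' setting, so the
  // default (unset or 'light') leaves dispatch unchanged.
  return prefs.planning_depth === "deep";
}
```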

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mikael Hugo 2026-05-02 19:33:19 +02:00
parent a5c3d75344
commit b771dd0b31
5 changed files with 513 additions and 0 deletions


@@ -0,0 +1,133 @@
**Working directory:** `{{workingDirectory}}`. All file reads, writes, and shell commands MUST operate relative to this directory. Do NOT `cd` to any other directory. For `.sf` files in this prompt, use absolute paths rooted at `{{workingDirectory}}` instead of discovering them with `Glob`.
Discuss the **project** as a whole. Identify gray areas at the project level — vision, users, anti-goals, key constraints — ask the user about them, and write `.sf/PROJECT.md` with the decisions. Use the **Project** output template below. If a `SF Skill Preferences` block is present in system context, use it to decide which skills to load and follow; do not override required artifact rules.
This stage runs ONCE per project, before any milestone-level discussion. It produces the project-level context that all subsequent milestones, requirements, and roadmaps will reference.
**Structured questions available: {{structuredQuestionsAvailable}}**
{{inlinedTemplates}}
---
## Stage Banner
Before your first action, print this banner verbatim in chat:
• QUESTIONING (project)
---
## Interview Protocol
### Open the conversation
Ask the user a single freeform question in plain text, not structured: **"What do you want to build?"**
Wait for their response. This grounds every follow-up in their own terminology.
### Before deeper rounds
Do a lightweight targeted investigation so your questions are grounded in reality:
- Scout the codebase (`rg`, `find`, or `scout`) — is this greenfield or brownfield? What language/framework signals exist?
- Identify any prior `.planning/` or `.sf/` artifacts hinting at history
- Use `resolve_library` / `get_library_docs` for unfamiliar libraries the user mentions
**Web search budget:** typically 3–5 per turn. Prefer `resolve_library` / `get_library_docs` for library docs. Target 2–3 web searches in the investigation pass; distribute remaining searches across follow-up rounds.
Do **not** go deep — just enough that your follow-ups reflect what's actually true rather than what you assume.
### Question rounds
Ask **1–3 questions per round**. Each round targets one of:
- **What they're building** — concrete enough to describe to a stranger
- **Who it's for** — primary users, secondary users, internal vs external
- **The core value** — the ONE thing that must work even if everything else is cut
- **Anti-goals** — what they explicitly don't want, what would disappoint them
- **Constraints** — budget, timeline, tech limitations, irreversible architectural choices
- **Existing context** — prior work, brownfield state, decisions already made
- **Milestone shape** — rough version sequence (v1 / v1.1 / ...) and what differentiates them
**Never fabricate or simulate user input.** Never generate fake transcript markers like `[User]`, `[Human]`, or `User:`. Ask one question round, then wait for the user's actual response before continuing.
**Plain-text default:** Project discovery is open-ended. Ask question rounds in plain text unless you are presenting 2–3 concrete alternatives with clear tradeoffs.
**If `{{structuredQuestionsAvailable}}` is `true` and you use `ask_user_questions`:** ask 1–3 questions per call. Every question object MUST include a stable lowercase `id`. Keep option labels short (3–5 words). Do not add a separate "Other" option; the question UI provides a freeform path automatically. Wait for each tool result before asking the next round.
**If `{{structuredQuestionsAvailable}}` is `false`:** ask questions in plain text. Keep each round to 1–3 focused questions.
After each round, investigate further if any answer opens a new unknown, then ask the next round.
### Round cadence
After each round, decide whether you have enough depth to write a strong PROJECT.md.
- **Incremental persistence:** After every 2 question rounds, silently save `.sf/PROJECT-DRAFT.md` using `sf_summary_save` with `artifact_type: "PROJECT-DRAFT"` and no `milestone_id`. Crash protection. Do NOT mention this save to the user.
- If not ready, continue to the next round.
- Use a wrap-up prompt only when you believe the depth checklist below is satisfied or the user signals they want to stop.
---
## Questioning philosophy
**Start open, follow energy.** Let the user's enthusiasm guide where you dig deeper.
**Challenge vagueness.** When the user says "it should be smart" or "good UX", push for specifics.
**Position-first framing.** Have opinions. "I'd lean toward X because Y — does that match your thinking?" is better than "what do you think about X vs Y?"
**Negative constraints.** Ask what would disappoint them. What they explicitly don't want. Negative constraints are sharper than positive wishes.
**Anti-patterns — never do these:**
- Checklist walking through predetermined topics regardless of what the user said
- Canned generic questions ("What are your key success metrics?")
- Rapid-fire questions without acknowledging answers
- Asking about technical skill level
- Asking about specific milestone implementations — that's the next stage
---
## Depth Verification
Before moving to the wrap-up gate, verify you have covered:
- [ ] What they're building — concrete enough to describe to a stranger
- [ ] Who it's for
- [ ] Core value (the ONE thing that must work)
- [ ] Anti-goals / explicit non-wants
- [ ] Constraints (budget, time, tech, architecture)
- [ ] Greenfield vs brownfield state
- [ ] Rough milestone sequence (at least M001's intent)
**Print a structured depth summary in chat first** — using the user's own terminology. Cover what you understood, what shaped your understanding, and any areas of remaining uncertainty.
**Then confirm:**
**If `{{structuredQuestionsAvailable}}` is `true`:** use `ask_user_questions` with:
- header: "Depth Check"
- id: "depth_verification_project_confirm"
- question: "Did I capture the depth right?"
- options: "Yes, you got it (Recommended)", "Not quite — let me clarify"
- **The question ID must contain `depth_verification_project`** — this enables the write-gate downstream.
**If `{{structuredQuestionsAvailable}}` is `false`:** ask in plain text: "Did I capture that correctly? If not, tell me what I missed." Wait for explicit confirmation. **The same non-bypassable gate applies to the plain-text path** — if the user does not respond, gives an ambiguous answer, or does not explicitly confirm, you MUST re-ask.
If they clarify, absorb the correction and re-verify.
The depth verification is the only required confirmation gate. Do not add a second "ready to proceed?" gate after it.
**CRITICAL — Confirmation gate:** Do not write final PROJECT.md until the user selects the "(Recommended)" option (structured path) or explicitly confirms (plain-text path). If the user declines, cancels, does not respond, or the tool fails, you MUST re-ask — never rationalize past the block.
---
## Output
Once the user confirms depth:
1. Use the **Project** output template (inlined above).
2. Call `sf_summary_save` with `artifact_type: "PROJECT"` and the full project markdown as `content`; omit `milestone_id`. The tool writes `.sf/PROJECT.md` to disk and persists to DB. Preserve the user's exact terminology, emphasis, and framing.
3. The `## Capability Contract` section MUST reference `.sf/REQUIREMENTS.md` — that file does not yet exist; the next stage (`discuss-requirements`) will produce it.
4. The `## Milestone Sequence` MUST list at least M001 with title and one-liner. Subsequent milestones may be listed as known intents; they will be elaborated in their own discuss-milestone stages.
5. Do NOT use `artifact_type: "CONTEXT"` and do NOT pass `milestone_id: "PROJECT"`; that creates a fake milestone named PROJECT.
6. {{commitInstruction}}
7. Say exactly: `"Project context written."` — nothing else.


@@ -0,0 +1,122 @@
**Working directory:** `{{workingDirectory}}`. All file reads, writes, and shell commands MUST operate relative to this directory. Do NOT `cd` to any other directory. For `.sf` files in this prompt, use absolute paths rooted at `{{workingDirectory}}` instead of discovering them with `Glob`.
Discuss **project-level requirements**. Read `.sf/PROJECT.md` first — it is the authoritative source for vision, core value, anti-goals, and milestone sequence. All requirements must trace back to it. Identify gray areas about what capabilities the project must deliver, ask the user, and write `.sf/REQUIREMENTS.md` using the v2 structured `R###` format. Use the **Requirements** output template below.
This stage runs ONCE per project, after `discuss-project` and before any milestone-level work. It produces the explicit capability contract that all milestones, slices, and verification will reference.
**Structured questions available: {{structuredQuestionsAvailable}}**
{{inlinedTemplates}}
---
## Stage Banner
Before your first action, print this banner verbatim in chat:
• REQUIREMENTS
---
## Pre-flight
1. Read `.sf/PROJECT.md` end-to-end. If it does not exist, STOP and emit: `"PROJECT.md missing — run discuss-project first."`
2. Extract: Core Value, Anti-goals, Constraints, Milestone Sequence.
3. Check for existing `.sf/REQUIREMENTS.md` — if present, this is a refinement pass, not a fresh write. Read existing requirements and treat them as the working set.
---
## Interview Protocol
### Before your first question round
Investigate to ground requirements in reality:
- Scout the codebase for existing capabilities (anything already built counts as `Validated` or `Active`)
- Cross-check the project's milestone sequence — every milestone must have at least one Active requirement it owns
- Use `resolve_library` / `get_library_docs` for libraries that imply capabilities (auth library → auth requirements)
- Identify table-stakes capabilities for the domain (research the domain only if PROJECT.md confidence is low)
**Web search budget:** 3–5 per turn. Target 1–2 web searches in this pre-investigation; reserve the rest for follow-ups.
### Question rounds
Ask **1–3 questions per round**. Each round targets one dimension:
- **Capability scoping** — what must the project DO at the capability level? (Not features, capabilities. "User can recover account" not "Forgot-password button")
- **Class assignment** — for each capability, which class? (`core-capability`, `primary-user-loop`, `launchability`, `continuity`, `failure-visibility`, `integration`, `quality-attribute`, `operability`, `admin/support`, `compliance/security`, `differentiator`, `constraint`, `anti-feature`)
- **Milestone ownership** — which milestone in the sequence will own this capability? Provisional ownership for later milestones is fine.
- **Status** — Active (must build), Deferred (later), Out of Scope (explicit no), Validated (already proven)
- **Anti-features** — what capabilities are explicitly excluded? Capture as `out-of-scope` with rationale.
- **Quality attributes** — performance, reliability, observability, security thresholds. These are requirements too.
**Never fabricate or simulate user input.** Wait for actual responses.
**If `{{structuredQuestionsAvailable}}` is `true`:** use `ask_user_questions`. Every question object MUST include a stable lowercase `id`. For class assignments, present the allowed classes as multi-select options. For status, present the four statuses as exclusive options. Ask 1–3 questions per call. Wait for each tool result before asking the next round.
**If `{{structuredQuestionsAvailable}}` is `false`:** ask in plain text. Keep each round to 1–3 questions.
### Round cadence
- **Incremental persistence:** After every 2 question rounds, silently save the current requirements draft using `sf_summary_save` with `artifact_type: "REQUIREMENTS-DRAFT"` and no `milestone_id`. Crash protection. Do NOT mention this save.
- Continue rounds until the depth checklist is satisfied or the user signals stop.
---
## Questioning philosophy
**Capability-oriented, not feature-oriented.** "User can authenticate" is a capability. "Sign-up button shows on landing page" is implementation. Push back when users describe implementation — extract the underlying capability.
**Position-first framing.** Have opinions. "I'd suggest making this Active because the milestone goal can't ship without it — sound right?"
**Atomic and testable.** Each requirement should be one verifiable thing. Reject "user can sign up and manage profile" — split it.
**Anti-patterns — never do these:**
- Listing every conceivable feature ("requirement inflation")
- Vague verbs ("Handle", "Support") — push for "User can X" or "System emits Y when Z"
- Skipping anti-features — explicit out-of-scope is part of the contract
- Mapping requirements to slices that don't exist yet — use `M###/none yet` (the milestone id is required)
---
## Depth Verification
Before the wrap-up gate, verify:
- [ ] Every milestone in PROJECT.md has at least one Active requirement
- [ ] Core Value (from PROJECT.md) is covered by at least one Active requirement
- [ ] Each Active requirement has: ID, title, class, status, description, why-it-matters, source, primary owner (`M###/S##` or `M###/none yet`; never bare `none yet`), validation, notes
- [ ] At least one explicit Out of Scope entry per major capability area (anti-features captured)
- [ ] Quality attributes (performance, reliability, etc.) captured where the user has stated thresholds
- [ ] No requirement is implementation-flavored ("button", "endpoint", "table") — all are capability-flavored
**Print a structured requirements table in chat first** — markdown table with columns: ID, Title, Class, Status, Owner, Source. Group by status (Active / Deferred / Out of Scope / Validated). This is the user's audit trail.
**Then confirm:**
**If `{{structuredQuestionsAvailable}}` is `true`:** use `ask_user_questions` with:
- header: "Depth Check"
- id: "depth_verification_requirements_confirm"
- question: "Are these the right requirements at the right scope?"
- options: "Yes, ship it (Recommended)", "Not quite — let me adjust"
- **The question ID must contain `depth_verification_requirements`** — enables the write-gate.
**If `{{structuredQuestionsAvailable}}` is `false`:** ask in plain text: "Are these requirements right? Tell me anything to add, remove, or reclassify." Wait for explicit confirmation.
If they adjust, absorb and re-verify.
**CRITICAL — Confirmation gate:** Do not write final REQUIREMENTS.md until explicit confirmation. Never rationalize past it.
---
## Output
Once the user confirms:
1. Use the **Requirements** output template (inlined above) to render the final markdown in working memory.
2. Every entry must conform to the `R###` format with all listed fields. Use `sf_requirement_save` (NOT plain file edit) for each requirement so DB state is saved first.
3. After all `sf_requirement_save` calls complete, call `sf_summary_save` with `artifact_type: "REQUIREMENTS"`; omit `milestone_id`. The requirements table is the source of truth, and this tool renders `.sf/REQUIREMENTS.md` from DB state. Pass the rendered markdown as `content` for audit context only; do not rely on markdown to update DB rows.
4. The file MUST contain all required sections: `## Active`, `## Validated`, `## Deferred`, `## Out of Scope`, `## Traceability`, `## Coverage Summary`. Empty sections are OK; missing sections are not.
5. Print the final coverage summary in chat: `Active: N | Validated: N | Deferred: N | Out of Scope: N | Mapped to slices: N | Unmapped active: N`.
6. Do NOT use `artifact_type: "CONTEXT"` and do NOT pass `milestone_id: "REQUIREMENTS"`; that creates a fake milestone instead of `.sf/REQUIREMENTS.md`.
7. {{commitInstruction}}
8. End your response with exactly: `Requirements written.`


@@ -0,0 +1,70 @@
**Working directory:** `{{workingDirectory}}`. All file reads, writes, and shell commands MUST operate relative to this directory. Do NOT `cd` to any other directory.
Capture the project research decision. This stage runs ONCE per project, after `discuss-requirements` and before any milestone-level work. It asks the user whether to run domain research now, then records the decision so downstream dispatch rules know what to do.
This is a **fixed-question** stage. Do NOT do open Socratic interviewing. Ask the one question below, capture the answer, write the marker file, end.
**Structured questions available: {{structuredQuestionsAvailable}}**
---
## Stage Banner
Print this banner verbatim in chat as your first action:
• RESEARCH DECISION
Then say: "Domain research finds table-stakes capabilities, ecosystem norms, and common pitfalls. Worth doing if you don't know this domain cold."
---
## The Question
**If `{{structuredQuestionsAvailable}}` is `true`:** call `ask_user_questions` exactly once with:
- **header:** "Research"
- **question:** "Run domain research before starting milestones?"
- **options:**
- "Skip (Recommended)" — go straight to milestone work; you know the domain
- "Yes" — runs 4 parallel research passes (stack, features, architecture, pitfalls) before milestone planning
**If `{{structuredQuestionsAvailable}}` is `false`:** ask in plain text: "Run domain research now? (y/n)"
---
## Output
Once the answer is captured:
1. Make sure `.sf/runtime/` exists: `mkdir -p .sf/runtime/`
2. Write `.sf/runtime/research-decision.json` containing:
```json
{
"decision": "research" | "skip",
"decided_at": "<ISO 8601 timestamp>",
"source": "research-decision"
}
```
- Use `"research"` if the user picked "Yes" or answered yes/y in plain text
- Use `"skip"` if the user picked "Skip" or answered no/n
- Always include `"source": "research-decision"`
- Optional for ambiguous or "Other / let me explain" answers: add an `inference_note` field to the JSON. Do not put inference text in chat.
3. Print exactly one of these one-line confirmations in chat:
```text
Research decision: research
Research decision: skip
```
4. Say exactly:
```text
Research decision recorded.
```
Nothing else.
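The answer-to-decision mapping in step 2 can be sketched as follows. This is illustrative only — the agent, not code, performs the mapping at runtime, and the fallback-to-skip rule for ambiguous answers comes from the Critical rules below:

```typescript
// Sketch of the answer -> decision rule: explicit yes means research;
// "Skip", no/n, and ambiguous freeform answers all take the
// recommended default.
function mapAnswerToDecision(answer: string): "research" | "skip" {
  const a = answer.trim().toLowerCase();
  if (a === "yes" || a === "y") return "research";
  return "skip";
}
```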
---
## Critical rules
- One question, one turn, write file, done. No follow-ups.
- Do NOT actually run research in this stage — that's a separate dispatch unit (`research-project`) that fires only if the decision is `research`.
- Do NOT call `ask_user_questions` more than once per turn.
- If the user picks "Other / let me explain" or gives an ambiguous freeform answer, treat it as "skip" (the recommended choice). Do not change the required confirmation strings.


@@ -0,0 +1,120 @@
**Working directory:** `{{workingDirectory}}`. All file reads, writes, and shell commands MUST operate relative to this directory. Do NOT `cd` to any other directory.
Run **project-level domain research** in 4 parallel dimensions. Read `.sf/PROJECT.md` and `.sf/REQUIREMENTS.md` first — they define the scope of what to research. Then spawn 4 parallel `Task` calls (one per dimension) using agent class `scout`, each writing its findings to `.sf/research/`. This stage runs ONCE per project, after `discuss-requirements` and the `research-decision` gate, before any milestone-level work.
**Structured questions available: {{structuredQuestionsAvailable}}**
---
## Stage Banner
Print this banner verbatim in chat as your first action:
• RESEARCHING (project)
Then say: "Spawning 4 research agents in parallel: stack, features, architecture, pitfalls."
---
## Pre-flight
1. Read `.sf/PROJECT.md` end-to-end. Extract: domain, vision, current state, milestone sequence.
2. Read `.sf/REQUIREMENTS.md` end-to-end. Extract: Active requirement classes (focus research on what the project must deliver).
3. `mkdir -p .sf/research/`
If either file is missing, STOP and emit: `"PROJECT.md or REQUIREMENTS.md missing — research-project cannot run."`
---
## Fan-out
Issue **4 `Task` tool calls in a single assistant response** (one tool block containing four `Task` invocations). Use `agent: "scout"` for every task. Do not use `agent: "researcher"` — this unit runs under the `planning-dispatch` tools-policy and only `scout` is permitted for project research. The tool runtime executes the calls concurrently — that is the parallelism mechanism here. Do not split them across multiple turns; do not chain them sequentially. After issuing the four calls, wait for ALL of their tool results to come back before doing anything in the "After fan-out completes" step below.
Each task gets its own focused prompt. Each task writes one file.
### Task 1 — Stack research → `.sf/research/STACK.md`
Prompt:
> Research the standard stack for [domain] as of today. Identify the dominant libraries, frameworks, runtimes, and infrastructure tools used by [domain] products. For each: current stable version, primary alternatives, why teams pick it, when to avoid it.
>
> Constraints from PROJECT.md: [list any tech constraints / required frameworks the user already specified].
>
> Deliverable: `.sf/research/STACK.md` with sections:
> - **Recommended Stack** (with versions and rationale)
> - **Alternatives Considered** (and why not)
> - **What NOT to use** (and why)
> - **Open questions** (anything where the user's choice will materially shape the architecture)
>
> Use `resolve_library` / `get_library_docs` for library docs. Use web search sparingly (2–3 queries). Cite sources where versions matter. Mark confidence per recommendation: high / medium / low.
### Task 2 — Features research → `.sf/research/FEATURES.md`
Prompt:
> Research what features [domain] products typically have. Categorize as **table stakes** (users expect this; missing it breaks the product) vs **differentiators** (compelling but optional).
>
> Active requirements from REQUIREMENTS.md to cross-check: [list R### IDs and titles].
>
> Deliverable: `.sf/research/FEATURES.md` with sections per category (Authentication, Content, Notifications, etc.):
> - **Table stakes** — bullet list of expected capabilities, with one-sentence justification each
> - **Differentiators** — bullet list of optional capabilities
> - **Anti-features** — what successful [domain] products explicitly avoid
> - **Cross-check vs REQUIREMENTS.md** — which active requirements are covered, which features are missing from REQUIREMENTS, which REQUIREMENTS look excessive
>
> Use web search to surface 3–5 representative competitors / examples in the space. Don't go deep — aim for coverage breadth.
### Task 3 — Architecture research → `.sf/research/ARCHITECTURE.md`
Prompt:
> Research the typical architecture for [domain] products at the project's scale. Surface common patterns, data models, integration points, and scaling considerations.
>
> Vision/scale signals from PROJECT.md: [extract scale-relevant phrases — solo / small team / enterprise / planned user count].
>
> Deliverable: `.sf/research/ARCHITECTURE.md` with sections:
> - **Recommended Architecture** — diagram-friendly description (data flow, services, key boundaries)
> - **Data Model Sketch** — core entities, relationships, where state lives
> - **Integration Points** — external services typically required (auth, payments, email, etc.)
> - **Scaling Tier** — what works at this project's scale, what to defer
> - **Reversibility risk** — which architectural choices are hardest to walk back later
>
> Use `resolve_library` for library-specific architecture docs. Mark confidence per recommendation.
### Task 4 — Pitfalls research → `.sf/research/PITFALLS.md`
Prompt:
> Research common failure modes, gotchas, and footguns for [domain] products. Things experienced builders wish they'd known earlier.
>
> Project type from PROJECT.md: [greenfield / brownfield / migration].
>
> Deliverable: `.sf/research/PITFALLS.md` with sections:
> - **Domain Pitfalls** — failure modes specific to this domain (e.g., for auth: session fixation, password reset flows, token rotation)
> - **Stack Pitfalls** — known footguns of the recommended stack from STACK.md (or domain norm if STACK isn't ready)
> - **Scope Traps** — features that look small but are huge ("just add notifications", "just add search")
> - **Compliance / Security gotchas** — surfaces where regulators or attackers tend to bite
> - **Migration pitfalls** (only if brownfield) — common breakage when retrofitting [domain] capability into existing systems
>
> Web search for postmortems, incident reports, and "lessons learned" content. Sources matter — prefer specific writeups over generic listicles.
---
## After fan-out completes
Once all 4 tasks return:
1. Verify all 4 files exist: `STACK.md`, `FEATURES.md`, `ARCHITECTURE.md`, `PITFALLS.md` in `.sf/research/`. If any are missing, retry that task once.
2. Print a concise summary in chat: one sentence per dimension, what each found or why it was blocked. The runtime clears the dispatch marker after this unit exits.
3. Say exactly: `"Project research complete."` — nothing else.
---
## Critical rules
- **Issue all 4 `Task` calls in a single assistant response** (one block of four tool calls). The tool runtime parallelizes them; do NOT chain them across turns or await them individually.
- **Each task writes exactly one file** to `.sf/research/`. No cross-writes.
- **Research is informational, not prescriptive** — it surfaces options; the user / requirements stage already chose what to build.
- **Stay within scope** — don't research milestones or slices. That's a different stage.
- **Budget:** ~3–5 web searches per dimension. Prefer `resolve_library` / `get_library_docs` for library questions.
- If any task fails twice, write a placeholder `.sf/research/{DIMENSION}-BLOCKER.md` with the failure reason and continue. If all four dimensions are blockers, the runtime will stop before milestone planning because no usable research exists.


@@ -0,0 +1,68 @@
**Working directory:** `{{workingDirectory}}`. All file reads, writes, and shell commands MUST operate relative to this directory. Do NOT `cd` to any other directory. For `.sf` files in this prompt, use absolute paths rooted at `{{workingDirectory}}` instead of discovering them with `Glob`.
Configure project workflow preferences. This stage runs ONCE per project, early in deep-mode bootstrap, before `discuss-project`. It applies a small set of recommended workflow defaults and persists them to the YAML frontmatter of `.sf/PREFERENCES.md` (the same file the runtime reads its preferences from).
This is a **default-writing** stage — do NOT ask the user questions. Write the recommended defaults, then end. No follow-ups, no research, no opinion.
---
## Stage Banner
Print this banner verbatim in chat as your first action:
• WORKFLOW PREFERENCES
Then say: "Quick setup — applying recommended workflow defaults."
---
## Default Set
Use these recommended defaults without asking:
- `commit_policy: per-task` — atomic commit after every task; finest granularity, easiest to revert
- `branch_model: single` — all work on current branch
- `uat_dispatch: true` — verification runs automatically; failures pause execution
- `models.executor_class: balanced` — sensible cost/quality default
- `research: skip` — deterministic default; the dedicated research-decision stage can later switch to `research`
---
## Output
Apply the defaults:
1. Read `{{workingDirectory}}/.sf/PREFERENCES.md` if it exists. The file is YAML frontmatter (between `---` lines) followed by an optional markdown body. Parse the existing frontmatter so you can preserve unrelated keys (e.g. `planning_depth`).
2. Merge the defaults into the frontmatter under these keys, preserving any existing explicit value:
- top-level `commit_policy: per-task`
- top-level `branch_model: single`
- top-level `uat_dispatch: true`
- top-level `research: skip`
- nested `models.executor_class: balanced`
3. Also set top-level `workflow_prefs_captured: true` — this is the single explicit marker the dispatch layer uses to know the wizard has run.
4. Write `{{workingDirectory}}/.sf/PREFERENCES.md` back with the merged frontmatter and the original body preserved unchanged. Frontmatter delimiters are exactly `---` on their own lines.
5. Pre-seed the research decision so the standalone `research-decision` stage is a no-op if the user already answered here:
- Ensure `{{workingDirectory}}/.sf/runtime/` exists.
- Write `{{workingDirectory}}/.sf/runtime/research-decision.json`:
```json
{
"decision": "skip",
"decided_at": "<ISO 8601 timestamp>",
"source": "workflow-preferences",
"reason": "deterministic-default"
}
```
Use `"skip"` unless an existing valid `{{workingDirectory}}/.sf/runtime/research-decision.json` explicitly says `"research"` with `"source": "research-decision"` or `"source": "user"`.
6. Print a concise summary in chat: each key on its own line, format `key: value`. Include `commit_policy`, `branch_model`, `uat_dispatch`, `models.executor_class`, and `research` (matching the preserved or pre-seeded runtime research decision).
7. Say exactly: `"Workflow preferences saved."` — nothing else.
Do NOT write to `.sf/config.json`; runtime preferences load from `PREFERENCES.md`.
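The merge rule in steps 2–3 can be sketched as below. This is illustrative only — the agent applies it with file edits, and the key names are the ones listed above; everything else about the shape is an assumption:

```typescript
// Sketch of the frontmatter merge: defaults fill gaps, existing
// explicit values win, and the wizard marker is always set.
type Prefs = Record<string, unknown> & { models?: Record<string, unknown> };

function mergeWorkflowDefaults(existing: Prefs): Prefs {
  const defaults = {
    commit_policy: "per-task",
    branch_model: "single",
    uat_dispatch: true,
    research: "skip",
  };
  // Spread order makes existing top-level values override the defaults,
  // preserving unrelated keys (e.g. planning_depth) untouched.
  const merged: Prefs = { ...defaults, ...existing };
  merged.models = { executor_class: "balanced", ...(existing.models ?? {}) };
  merged.workflow_prefs_captured = true; // the single dispatch marker
  return merged;
}
```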
---
## Critical rules
- Do NOT ask any questions. Defaults only, write file, done.
- Do NOT call `ask_user_questions`, `AskUserQuestion`, or any other interactive user-input tool in this stage.
- Do NOT change any keys other than the frontmatter keys specified plus `workflow_prefs_captured`. Research is persisted to `.sf/runtime/research-decision.json`, NOT to `phases.skip_research`.
- Preserve existing explicit values for `commit_policy`, `branch_model`, `uat_dispatch`, and `models.executor_class`; only fill missing values with defaults.