feat(discuss): structured ask_user_questions rounds in guided-discuss-milestone (#688)

guided-discuss-milestone.md was a single-paragraph stub — the agent had
no interview protocol, no check-in round, no depth verification, and no
host-conditional behaviour. On Copilot this meant every clarification
burned a separate request with no structure.

Changes:

- guided-discuss-milestone.md: full interview protocol matching
  guided-discuss-slice structure:
  - mandatory investigation pass before first round
  - 1–3 questions per round
  - check-in after each round (wrap up vs keep going)
  - depth verification checklist before wrap-up
  - host-conditional: uses ask_user_questions when available (pi),
    falls back to plain text when not (Copilot, Cursor, Windsurf)
  - depth_verification question ID convention preserved for the
    write-gate in index.ts
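The write-gate contract on the index.ts side might look like this minimal sketch (function and field names hypothetical; only the `depth_verification` ID-substring convention comes from this commit):

```typescript
// Hypothetical shape of the downstream write-gate: the context file may
// only be written once a question whose ID contains "depth_verification"
// has been answered affirmatively in the interview transcript.
interface AnsweredQuestion {
  id: string;
  answer: string;
}

function depthVerified(answers: AnsweredQuestion[]): boolean {
  return answers.some(
    (q) => q.id.includes("depth_verification") && /^yes/i.test(q.answer),
  );
}
```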

- guided-flow.ts: all 5 loadPrompt('guided-discuss-milestone') call
  sites now pass structuredQuestionsAvailable by checking
  ctx.getActiveTools().includes('ask_user_questions') at dispatch time.
  Returns 'true'/'false' string so the prompt can branch conditionally.
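Factored out, the dispatch-time check reads as the sketch below (helper and interface names hypothetical; the commit inlines the expression at each call site):

```typescript
// Sketch of the per-dispatch capability probe. loadPrompt() receives the
// result as the string 'true'/'false' so the markdown prompt can branch on it.
interface ExtensionContext {
  getActiveTools(): string[];
}

function structuredQuestionsAvailable(ctx: ExtensionContext): "true" | "false" {
  return ctx.getActiveTools().includes("ask_user_questions") ? "true" : "false";
}
```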
This commit is contained in:
Jeremy McSpadden 2026-03-16 18:39:31 -05:00
parent d10412bb1e
commit 18aa6b1084
2 changed files with 114 additions and 6 deletions

guided-flow.ts

@@ -821,8 +821,9 @@ export async function showDiscuss(
if (choice === "discuss_draft") {
const discussMilestoneTemplates = inlineTemplate("context", "Context");
const structuredQuestionsAvailable = ctx.getActiveTools().includes("ask_user_questions") ? "true" : "false";
const basePrompt = loadPrompt("guided-discuss-milestone", {
milestoneId: mid, milestoneTitle, inlinedTemplates: discussMilestoneTemplates,
milestoneId: mid, milestoneTitle, inlinedTemplates: discussMilestoneTemplates, structuredQuestionsAvailable,
});
const seed = draftContent
? `${basePrompt}\n\n## Prior Discussion (Draft Seed)\n\n${draftContent}`
@@ -831,9 +832,10 @@ export async function showDiscuss(
dispatchWorkflow(pi, seed, "gsd-discuss");
} else if (choice === "discuss_fresh") {
const discussMilestoneTemplates = inlineTemplate("context", "Context");
const structuredQuestionsAvailable = ctx.getActiveTools().includes("ask_user_questions") ? "true" : "false";
pendingAutoStart = { ctx, pi, basePath, milestoneId: mid, step: false };
dispatchWorkflow(pi, loadPrompt("guided-discuss-milestone", {
milestoneId: mid, milestoneTitle, inlinedTemplates: discussMilestoneTemplates,
milestoneId: mid, milestoneTitle, inlinedTemplates: discussMilestoneTemplates, structuredQuestionsAvailable,
}), "gsd-discuss");
} else if (choice === "skip_milestone") {
const milestoneIds = findMilestoneIds(basePath);
@@ -1136,8 +1138,9 @@ export async function showSmartEntry(
if (choice === "discuss_draft") {
const discussMilestoneTemplates = inlineTemplate("context", "Context");
const structuredQuestionsAvailable = ctx.getActiveTools().includes("ask_user_questions") ? "true" : "false";
const basePrompt = loadPrompt("guided-discuss-milestone", {
milestoneId, milestoneTitle, inlinedTemplates: discussMilestoneTemplates,
milestoneId, milestoneTitle, inlinedTemplates: discussMilestoneTemplates, structuredQuestionsAvailable,
});
const seed = draftContent
? `${basePrompt}\n\n## Prior Discussion (Draft Seed)\n\n${draftContent}`
@@ -1146,9 +1149,10 @@ export async function showSmartEntry(
dispatchWorkflow(pi, seed, "gsd-discuss");
} else if (choice === "discuss_fresh") {
const discussMilestoneTemplates = inlineTemplate("context", "Context");
const structuredQuestionsAvailable = ctx.getActiveTools().includes("ask_user_questions") ? "true" : "false";
pendingAutoStart = { ctx, pi, basePath, milestoneId, step: stepMode };
dispatchWorkflow(pi, loadPrompt("guided-discuss-milestone", {
milestoneId, milestoneTitle, inlinedTemplates: discussMilestoneTemplates,
milestoneId, milestoneTitle, inlinedTemplates: discussMilestoneTemplates, structuredQuestionsAvailable,
}), "gsd-discuss");
} else if (choice === "skip_milestone") {
const milestoneIds = findMilestoneIds(basePath);
@@ -1220,8 +1224,9 @@ export async function showSmartEntry(
}));
} else if (choice === "discuss") {
const discussMilestoneTemplates = inlineTemplate("context", "Context");
const structuredQuestionsAvailable = ctx.getActiveTools().includes("ask_user_questions") ? "true" : "false";
dispatchWorkflow(pi, loadPrompt("guided-discuss-milestone", {
milestoneId, milestoneTitle, inlinedTemplates: discussMilestoneTemplates,
milestoneId, milestoneTitle, inlinedTemplates: discussMilestoneTemplates, structuredQuestionsAvailable,
}));
} else if (choice === "skip_milestone") {
const milestoneIds = findMilestoneIds(basePath);

guided-discuss-milestone.md

@@ -1,5 +1,108 @@
Discuss milestone {{milestoneId}} ("{{milestoneTitle}}"). Identify gray areas, ask the user about them, and write `{{milestoneId}}-CONTEXT.md` in the milestone directory with the decisions. Use the **Context** output template below. If a `GSD Skill Preferences` block is present in system context, use it to decide which skills to load and follow; do not override required artifact rules.
**Structured questions available: {{structuredQuestionsAvailable}}**
{{inlinedTemplates}}
**Investigate between question rounds to make your questions smarter.** Before each round of questions, do enough lightweight research that your questions are grounded in reality — not guesses about what exists or what's possible. Check library docs (`resolve_library`/`get_library_docs`) when tech choices are relevant, search the web (`search-the-web` with `freshness`/`domain` filters, then `fetch_page` for full content) to verify the landscape, scout the codebase (`rg`, `find`, `scout`) to understand what already exists. Don't go deep — just enough that your next question reflects what's actually true. The goal is to ask questions the user can't answer by saying "did you check the docs?" or "look at the code."
---
## Interview Protocol
### Before your first question round
Do a lightweight targeted investigation so your questions are grounded in reality:
- Scout the codebase (`rg`, `find`, or `scout`) to understand what already exists that this milestone touches or builds on
- Check the roadmap context above (if present) to understand what surrounds this milestone
- Identify the 3–5 biggest behavioural and architectural unknowns: things where the user's answer will materially change what gets built
Do **not** go deep — just enough that your questions reflect what's actually true rather than what you assume.
### Question rounds
Ask **1–3 questions per round**. Keep each question focused on one of:
- **What they're building** — concrete enough to explain to a stranger
- **Why it needs to exist** — the problem it solves or the desire it fulfills
- **Who it's for** — user, team, themselves
- **What "done" looks like** — observable outcomes, not abstract goals
- **The biggest technical unknowns / risks** — what could fail, what hasn't been proven
- **What external systems/services this touches** — APIs, databases, third-party services
**If `{{structuredQuestionsAvailable}}` is `true`:** use `ask_user_questions` for each round. 1–3 questions per call, each as a separate question object. Keep option labels short (3–5 words). Always include a freeform "Other / let me explain" option. When the user picks that option or writes a long freeform answer, switch to plain text follow-up for that thread before resuming structured questions.
**If `{{structuredQuestionsAvailable}}` is `false`:** ask questions in plain text. Keep each round to 1–3 focused questions. Wait for answers before asking the next round.
After the user answers, investigate further if any answer opens a new unknown, then ask the next round.
### Check-in after each round
After each round of answers, ask:
> "I think I have a solid picture of this milestone. Ready to wrap up and write the context file, or is there more to cover?"
**If `{{structuredQuestionsAvailable}}` is `true`:** use `ask_user_questions` with options:
- "Wrap up — write the context file" *(recommended after ~2–3 rounds)*
- "Keep going — more to discuss"
**If `{{structuredQuestionsAvailable}}` is `false`:** ask in plain text.
If the user wants to keep going, keep asking. Stop when they say wrap up.
---
## Questioning philosophy
**Start open, follow energy.** Let the user's enthusiasm guide where you dig deeper.
**Challenge vagueness, make abstract concrete.** When the user says something abstract ("it should be smart" / "good UX"), push for specifics.
**Questions must be about the experience, not the implementation.** Never ask "what auth provider?" — ask "when someone logs in, what should that feel like?" Implementation is your job. Understanding what they want to experience is the discussion's job.
**Position-first framing.** Have opinions. "I'd lean toward X because Y — does that match your thinking?" is better than "what do you think about X vs Y?"
**Negative constraints.** Ask what would disappoint them. What they explicitly don't want. Negative constraints are sharper than positive wishes.
**Anti-patterns — never do these:**
- Checklist walking through predetermined topics regardless of what the user said
- Canned generic questions that could apply to any project
- Corporate speak ("What are your key success metrics?")
- Rapid-fire questions without acknowledging answers
- Asking about technical skill level
---
## Depth Verification
Before moving to the wrap-up gate, verify you have covered:
- [ ] What they're building — concrete enough to explain to a stranger
- [ ] Why it needs to exist
- [ ] Who it's for
- [ ] What "done" looks like
- [ ] The biggest technical unknowns / risks
- [ ] What external systems/services this touches
**Print a structured depth summary in chat first** — using the user's own terminology. Cover what you understood, what shaped your understanding, and any areas of remaining uncertainty.
**Then confirm:**
**If `{{structuredQuestionsAvailable}}` is `true`:** use `ask_user_questions` with:
- header: "Depth Check"
- question: "Did I capture the depth right?"
- options: "Yes, you got it (Recommended)", "Not quite — let me clarify"
- **The question ID must contain `depth_verification`** (e.g. `depth_verification_confirm`) — this enables the write-gate downstream.
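As a sketch, that call's payload might look like this (field names illustrative of a typical `ask_user_questions` schema; follow your host's actual schema):

```typescript
// Illustrative depth-check payload; only the header, question text,
// options, and the "depth_verification" ID convention come from the
// instructions above.
const depthCheck = {
  header: "Depth Check",
  questions: [
    {
      id: "depth_verification_confirm", // must contain "depth_verification"
      question: "Did I capture the depth right?",
      options: ["Yes, you got it (Recommended)", "Not quite — let me clarify"],
    },
  ],
};
```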
**If `{{structuredQuestionsAvailable}}` is `false`:** ask in plain text: "Did I capture that correctly? Anything I missed?" Wait for confirmation before proceeding.
If they clarify, absorb the correction and re-verify.
---
## Output
Once the user confirms depth:
1. Use the **Context** output template below
2. `mkdir -p` the milestone directory if needed
3. Write `{{milestoneId}}-CONTEXT.md` — preserve the user's exact terminology, emphasis, and framing. Do not paraphrase nuance into generic summaries. The context file is downstream agents' only window into this conversation.
4. Commit: `git add {{milestoneId}}-CONTEXT.md && git commit -m "docs({{milestoneId}}): milestone context from discuss"`
5. Say exactly: `"{{milestoneId}} context written."` — nothing else.