fix: add web search budget awareness to discuss and queue prompts (#1702)

The discuss prompts (discuss.md, guided-discuss-milestone.md,
guided-discuss-slice.md) and queue.md had no web search budget guidance.
The mandatory investigation pass, question rounds, focused research, and
requirements all compete for the same per-turn web_search quota.

Research prompts (research-milestone.md, research-slice.md) already had
budget awareness. This commit adds consistent guidance to all four
discussion/queue prompts:

- Explicit per-turn budget note (typically 3-5 searches)
- Prefer resolve_library/get_library_docs over web_search for library docs
- Prefer search_and_read for one-shot topic research
- Target 2-3 searches in investigation, save budget for later phases
- Distribute searches across turns rather than clustering
- Clarify that multiple text spans per result are normal formatting
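The per-turn budget policy summarized above can be sketched as a small tracker. This is purely illustrative — the prompts describe the policy in prose only, and every name below (`SearchBudget`, `record_search`, etc.) is hypothetical:

```python
# Illustrative sketch of the per-turn web search budget described above.
# All class and method names are hypothetical, not part of the commit.

class SearchBudget:
    """Tracks web searches within a single agent turn."""

    def __init__(self, per_turn_limit=5):
        self.per_turn_limit = per_turn_limit
        self.used_this_turn = 0

    def start_turn(self):
        # The quota resets each turn, which is why the guidance says to
        # distribute searches across turns rather than clustering them.
        self.used_this_turn = 0

    def can_search(self):
        return self.used_this_turn < self.per_turn_limit

    def record_search(self):
        if not self.can_search():
            raise RuntimeError("per-turn web search budget exhausted")
        self.used_this_turn += 1

    def remaining(self):
        return self.per_turn_limit - self.used_this_turn


budget = SearchBudget(per_turn_limit=5)
for _ in range(3):  # investigation pass targets 2-3 searches
    budget.record_search()
print(budget.remaining())  # quota left for later phases this turn
```

Note that `resolve_library` / `get_library_docs` calls would not pass through such a tracker at all, which is the reason the prompts prefer them for library documentation.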
deseltrus 2026-03-21 15:46:14 +01:00 committed by GitHub
parent c1c7f8b6b0
commit 47d7d7563c
4 changed files with 15 additions and 2 deletions
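The "do NOT repeat the same or similar queries" rule added in the hunks below can be sketched as a normalized-query check before each search. The helper below is a hypothetical illustration, not code from this commit:

```python
# Hypothetical sketch: skip a web search when an equivalent query
# (same terms, different order/punctuation) was already issued.
import re

def normalize(query):
    """Lowercase, strip punctuation, and sort unique terms so that
    reordered or re-punctuated variants of a query compare equal."""
    terms = re.findall(r"[a-z0-9]+", query.lower())
    return " ".join(sorted(set(terms)))

def should_search(query, already_searched):
    """Return True only if no equivalent query was issued before."""
    return normalize(query) not in {normalize(q) for q in already_searched}

history = ["React server components caching"]
print(should_search("caching react server-components", history))  # False: same terms
print(should_search("Next.js ISR revalidation", history))         # True: new topic
```

A stricter variant might also treat high-overlap (rather than identical) term sets as duplicates; the prompts leave "similar" to the agent's judgment.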

@@ -36,9 +36,16 @@ Before asking your first question, do a mandatory investigation pass. This is no
2. **Check library docs:** `resolve_library` / `get_library_docs` for any tech the user mentioned. Get current facts about capabilities, constraints, API shapes, version-specific behavior.
3. **Web search:** `search-the-web` if the domain is unfamiliar, if you need current best practices, or if the user referenced external services/APIs you need facts about. Use `fetch_page` for full content when snippets aren't enough.
**Web search budget:** You have a limited number of web searches per turn (typically 3-5). The discuss phase spans many turns (investigation, question rounds, focused research, requirements), so budget carefully:
- Prefer `resolve_library` / `get_library_docs` over `web_search` for library documentation — they don't consume the web search budget.
- Prefer `search_and_read` for one-shot topic research — it combines search + page fetch in a single call.
- Target 2-3 web searches in the investigation pass. Save remaining budget for the focused research pass before roadmap creation.
- Do NOT repeat the same or similar queries. If a search didn't find what you need, rephrase once or move on.
- When a search returns many results, each result contains multiple text spans — this is normal formatting, not separate searches.
This happens ONCE, before the first round. The goal: your first questions should reflect what's actually true, not what you assume.
For subsequent rounds, continue investigating between rounds — check docs, search, or scout as needed to make each round's questions smarter. But the first-round investigation is mandatory and explicit.
For subsequent rounds, continue investigating between rounds — check docs, search, or scout as needed to make each round's questions smarter. But the first-round investigation is mandatory and explicit. Distribute searches across turns rather than clustering them in one turn.
## Questioning Philosophy

@@ -13,8 +13,11 @@ Discuss milestone {{milestoneId}} ("{{milestoneTitle}}"). Identify gray areas, a
Do a lightweight targeted investigation so your questions are grounded in reality:
- Scout the codebase (`rg`, `find`, or `scout`) to understand what already exists that this milestone touches or builds on
- Check the roadmap context above (if present) to understand what surrounds this milestone
- Use `resolve_library` / `get_library_docs` for unfamiliar libraries — prefer this over `web_search` for library documentation
- Identify the 3-5 biggest behavioural and architectural unknowns: things where the user's answer will materially change what gets built
**Web search budget:** You have a limited number of web searches per turn (typically 3-5). Prefer `resolve_library` / `get_library_docs` for library documentation and `search_and_read` for one-shot topic research — they are more budget-efficient. Target 2-3 web searches in the investigation pass. Distribute remaining searches across subsequent question rounds rather than clustering them.
Do **not** go deep — just enough that your questions reflect what's actually true rather than what you assume.
### Question rounds

@@ -13,8 +13,11 @@ Your goal is **not** to center the discussion on tech stack trivia, naming conve
Do a lightweight targeted investigation so your questions are grounded in reality:
- Scout the codebase (`rg`, `find`, or `scout` for broad unfamiliar areas) to understand what already exists that this slice touches or builds on
- Check the roadmap context above to understand what surrounds this slice — what comes before, what depends on it
- Use `resolve_library` / `get_library_docs` for unfamiliar libraries — prefer this over `web_search` for library documentation
- Identify the 3-5 biggest behavioural unknowns: things where the user's answer will materially change what gets built
**Web search budget:** You have a limited number of web searches per turn (typically 3-5). Prefer `resolve_library` / `get_library_docs` for library documentation and `search_and_read` for one-shot topic research — they are more budget-efficient. Target 2-3 web searches in the investigation pass. Distribute remaining searches across subsequent question rounds rather than clustering them.
Do **not** go deep — just enough that your questions reflect what's actually true rather than what you assume.
### Question rounds

@@ -24,7 +24,7 @@ After they describe it, your job is to understand the new work deeply enough to
**Investigate between question rounds to make your questions smarter.** Before each round of questions, do enough lightweight research that your questions are grounded in reality — not guesses about what exists or what's possible.
- Check library docs (`resolve_library` / `get_library_docs`) when the user mentions tech you need current facts about — capabilities, constraints, API shapes, version-specific behavior
- Do web searches (`search-the-web`) to verify the landscape — what solutions exist, what's changed recently, what's the current best practice. Use `freshness` for recency-sensitive queries, `domain` to target specific sites. Use `fetch_page` to read the full content of promising URLs when snippets aren't enough.
- Do web searches (`search-the-web`) to verify the landscape — what solutions exist, what's changed recently, what's the current best practice. Use `freshness` for recency-sensitive queries, `domain` to target specific sites. Use `fetch_page` to read the full content of promising URLs when snippets aren't enough. **Budget:** You have a limited number of web searches per turn (typically 3-5). Prefer `resolve_library` / `get_library_docs` for library documentation and `search_and_read` for one-shot topic research. Do NOT repeat the same or similar queries. Distribute searches across turns rather than clustering them.
- Scout the codebase (`ls`, `find`, `rg`, or `scout` for broad unfamiliar areas) to understand what already exists, what patterns are established, what constraints current code imposes
Don't go deep — just enough that your next question reflects what's actually true rather than what you assume.