The pinned "Latest Output" zone was cleared only at agent_end, but in
flows with form elicitation (e.g. discuss-phase) there is a gap between
message_end and agent_end while the agent waits for user input. During
this gap the same content was visible in both the chat history and the
pinned zone. Clear the pinned zone at message_end, once the assistant
message is finalized in the chat container.
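A minimal sketch of the timing fix described above; the handler and state names (PinnedZone, onAgentEvent) are illustrative assumptions, not the actual implementation:

```typescript
// Hypothetical sketch: clear the pinned "Latest Output" zone as soon as the
// assistant message is finalized (message_end), not only at agent_end, so
// the same content never shows in both the chat history and the pinned zone.
type AgentEvent = "message_start" | "message_end" | "agent_end";

interface PinnedZone {
  content: string | null;
}

function onAgentEvent(zone: PinnedZone, event: AgentEvent): PinnedZone {
  if (event === "message_end" || event === "agent_end") {
    return { content: null }; // message now lives in the chat container
  }
  return zone;
}
```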
Cover the composable helpers extracted from deriveStateFromDb:
reconcileDiskToDb, buildCompletenessSet, buildRegistryAndFindActive,
handleNoActiveMilestone, resolveSliceDependencies, reconcileSliceTasks,
detectBlockers, checkReplanTrigger, checkInterruptedWork, and queue
order sorting.
Splits the monolithic deriveStateFromDb function into distinct,
composable helper functions (reconcileDiskToDb, resolveSliceDependencies,
detectBlockers, etc.) inside state.ts.
Resolves technical debt identified during the code quality audit by
drastically reducing cyclomatic complexity while preserving the exact
type signature and logical behavior.
Also removes duplicate disk->DB reconciliation that could overwrite
milestone statuses.
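An illustrative sketch of the decomposition pattern: deriveStateFromDb becomes a thin orchestrator over small, independently testable helpers. The helper names mirror the commit, but the signatures and data shapes here are assumptions:

```typescript
// Assumed shapes for illustration only.
interface DerivedState {
  milestones: string[];
  active: string | null;
  blockers: string[];
}

// Each former inline block becomes a pure helper with a narrow contract.
const buildRegistryAndFindActive = (milestones: string[]): string | null =>
  milestones[0] ?? null;

const detectBlockers = (milestones: string[]): string[] =>
  milestones.filter((m) => m.startsWith("blocked:"));

// The original type signature and behavior are preserved; only the body
// is recomposed from helpers, cutting cyclomatic complexity per function.
function deriveStateFromDb(milestones: string[]): DerivedState {
  return {
    milestones,
    active: buildRegistryAndFindActive(milestones),
    blockers: detectBlockers(milestones),
  };
}
```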
Restores the pinned assistant output zone that shows the latest narration
during tool execution. Adds markdown rendering, animated spinner, height
capping to prevent render flashing, and session rebuild support.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Implements file-level advisory locking via proper-lockfile to ensure
atomic read-modify-write sequences in:
- compactMilestoneEvents (event-log.jsonl)
- custom-workflow-engine reconcile (GRAPH.yaml)
Fixes silent data loss when concurrent auto-mode or dashboard
sessions overlap in these operations.
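The locking pattern can be sketched with a stdlib-only stand-in for proper-lockfile (which exposes a lock/release pair around the same read-modify-write shape); mkdirSync is atomic, so the directory acts as the advisory lock:

```typescript
import { mkdirSync, rmdirSync } from "node:fs";

// Simplified stand-in for the proper-lockfile pattern: acquire an advisory
// lock, run the read-modify-write critical section, always release. In the
// actual change this guards event-log.jsonl and GRAPH.yaml.
function withFileLock<T>(lockDir: string, criticalSection: () => T): T {
  mkdirSync(lockDir); // atomic: throws EEXIST if another process holds it
  try {
    return criticalSection();
  } finally {
    rmdirSync(lockDir); // release even if the critical section throws
  }
}
```

A second writer hitting the held lock fails fast instead of silently clobbering the first writer's changes.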
Exposes secure_env_collect as an MCP tool so any MCP client (Claude Code,
Cursor, etc.) can collect secrets through form-based input. Values are
written directly to .env/Vercel/Convex and never appear in LLM context —
only key names and applied/skipped status are returned.
- New env-writer.ts with writeEnvKey, detectDestination,
  checkExistingEnvKeys, applySecrets
- Uses server.server.elicitInput() to present form fields to the MCP client
- Pre-checks existing keys to skip already-set env vars
- Auto-detects destination from project files (vercel.json, convex/ dir)
- 27 tests covering utilities and tool integration
Closes #3975
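A hedged sketch of a writeEnvKey-style upsert (the real env-writer.ts signature and behavior may differ); the point is that the secret value touches only the file, while callers see just the key name and an applied/updated status:

```typescript
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// Illustrative helper: upsert KEY=value into a .env file. The value is
// written to disk and never returned, matching the "never appears in LLM
// context" property described above.
function writeEnvKey(
  envPath: string,
  key: string,
  value: string
): "added" | "updated" {
  const lines = existsSync(envPath)
    ? readFileSync(envPath, "utf8").split("\n")
    : [];
  const idx = lines.findIndex((l) => l.startsWith(`${key}=`));
  if (idx >= 0) {
    lines[idx] = `${key}=${value}`;
    writeFileSync(envPath, lines.join("\n"));
    return "updated";
  }
  lines.push(`${key}=${value}`);
  writeFileSync(envPath, lines.join("\n"));
  return "added";
}
```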
33 markdown files organized for GitBook import with SUMMARY.md navigation.
Covers installation, core concepts, auto mode, configuration, all providers,
cost management, skills, parallel orchestration, remote questions, teams,
headless CI, and full command reference. User-facing only — no internal/dev content.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This fixes an issue where MCP workflow tools (e.g. gsd_plan_slice)
would return error details in their JSON response without setting the
isError: true flag at the top level of the tool response payload. MCP
clients (like Claude Code) therefore interpreted failed validations
(such as empty tasks arrays) as successes and got stuck in infinite
validation failure loops.
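The fix reduces to always setting the top-level flag on failure; a sketch of the MCP tool result shape (the validation message and helper name are illustrative):

```typescript
// MCP tool results carry content plus an optional top-level isError flag.
interface ToolResult {
  content: { type: "text"; text: string }[];
  isError?: boolean;
}

// Before the fix, the error detail lived only inside the JSON text, so
// clients saw a "successful" call. The top-level flag is what clients check.
function validationFailure(message: string): ToolResult {
  return {
    content: [{ type: "text", text: JSON.stringify({ error: message }) }],
    isError: true,
  };
}
```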
The depth verification gate and multi-milestone readiness gate
unconditionally referenced ask_user_questions. On transports where
structuredQuestionsAvailable is false, this would trap the flow in
re-ask loops.
Add {{structuredQuestionsAvailable}} conditionals to:
- Depth verification (true: ask_user_questions, false: plain text)
- Multi-milestone Phase 3 readiness gate
- Per-milestone technical assumption verification
The discuss.md prompt (used for /gsd new project creation) only asked
"What's the vision?" once and relied on LLM judgment for follow-ups.
This led to the agent asking a single question then jumping straight
to planning without gathering enough context.
Add explicit Question Rounds section with:
- 1-3 questions per round structure
- Conditional ask_user_questions vs plain text support
- Incremental persistence (CONTEXT-DRAFT save every 2 rounds)
- Depth-matching rule (1-2 rounds for simple, 4+ for large visions)
- Round cadence that drives toward the depth enforcement checklist
Thread structuredQuestionsAvailable through buildDiscussPrompt() and
prepareAndBuildDiscussPrompt() so the template variable resolves
correctly at runtime.
Closes #3976
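The threading described above can be sketched as a boolean template variable selecting between the two question modes; the prompt text and everything except the structuredQuestionsAvailable name are assumptions:

```typescript
// Illustrative stand-in for resolving a {{structuredQuestionsAvailable}}
// conditional at prompt-build time, so transports without structured
// question support fall back to plain text instead of re-ask loops.
interface DiscussPromptVars {
  structuredQuestionsAvailable: boolean;
}

function buildDiscussPrompt(vars: DiscussPromptVars): string {
  const askMode = vars.structuredQuestionsAvailable
    ? "Use ask_user_questions (1-3 questions per round)."
    : "Ask 1-3 questions per round as plain text and wait for replies.";
  return `Question Rounds:\n${askMode}\nSave CONTEXT-DRAFT every 2 rounds.`;
}
```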
Add comprehensive installation walkthroughs for macOS, Windows, Linux (Ubuntu/Debian,
Fedora/RHEL, Arch, nvm), and Docker. Each OS section follows a consistent numbered
step-by-step flow covering dependency install, verification, GSD install, provider
setup, first launch, and verification. Includes download links, platform-specific
tips, and a quick troubleshooting table.
Add Option C covering how Claude Pro/Max subscribers can use GSD's
workflow tools directly inside Claude Code via the MCP server — including
automatic setup, manual config, and verification steps.
resolvePreferredModelConfig now skips routing-ceiling synthesis when
isAutoMode=false, preventing interactive dispatches from silently switching
to the tier_models.heavy model. The auto-start banner now reflects
effective routing state by checking flat-rate provider suppression and
using the actual ceiling from tier_models.heavy when configured.
Dynamic routing silently downgraded models in interactive commands (guided-flow),
overriding the user's /model selection. Now routing only applies in auto-mode where
cost optimization is expected. Model downgrade notifications always fire regardless
of verbose setting, and auto-mode shows routing status upfront on start.
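The guard can be sketched as follows; apart from isAutoMode and tier_models.heavy, the names and config shape are assumptions, not the actual resolvePreferredModelConfig code:

```typescript
// Illustrative sketch: routing ceilings apply only in auto-mode, so an
// interactive /model selection is never silently downgraded.
interface RoutingConfig {
  tierModels: { heavy: string }; // routing ceiling for heavy work
  userSelectedModel: string;     // the user's explicit /model choice
}

function resolveModel(cfg: RoutingConfig, isAutoMode: boolean): string {
  if (!isAutoMode) {
    return cfg.userSelectedModel; // interactive dispatch: honor /model
  }
  return cfg.tierModels.heavy; // auto-mode: cost-optimizing ceiling applies
}
```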