feat(sf): teach executor about the escalation field on sf_task_complete
The escalation feature was invisible to agents — the prompt didn't say it existed, so agents made silent assumptions instead of surfacing genuine tradeoffs. Now, when phases.mid_execution_escalation is on, execute-task includes a guidance block showing the escalation payload shape and noting that auto-mode auto-accepts the recommendation by default. When the feature is off the field is silently dropped, so the guidance is omitted entirely to avoid misleading the agent.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
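The guidance text added by this commit states two invariants for the payload: provide 2–4 options, and the recommendation must reference one of the option ids. A minimal TypeScript sketch of validating that shape; the interfaces and the `validateEscalation` helper are hypothetical, inferred from the guidance text rather than taken from the actual codebase:

```typescript
// Hypothetical types mirroring the escalation payload shape shown in
// the guidance block below; names are illustrative, not from the repo.
interface EscalationOption {
  id: string;
  label: string;
  tradeoffs: string;
}

interface Escalation {
  question: string;
  options: EscalationOption[];
  recommendation: string;
  recommendationRationale: string;
  continueWithDefault: boolean;
}

// Returns a list of violated invariants (empty when the payload is valid).
function validateEscalation(e: Escalation): string[] {
  const errors: string[] = [];
  if (e.options.length < 2 || e.options.length > 4) {
    errors.push("provide 2-4 options");
  }
  if (!e.options.some((o) => o.id === e.recommendation)) {
    errors.push("recommendation must reference one of the option ids");
  }
  return errors;
}
```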
parent 3895ae2cd3
commit 08859624f8
2 changed files with 32 additions and 0 deletions
@@ -2489,6 +2489,35 @@ export async function buildExecuteTaskPrompt(
    }
  })();

  // ADR-011 P2: when the feature is enabled, teach the executor that it can
  // surface non-obvious choices via the `escalation` field on sf_task_complete
  // rather than silently picking. Auto-mode auto-accepts the recommendation
  // (see phases.escalation_auto_accept), so this is low-cost overhead — but
  // it produces an audit trail and a hard constraint for downstream tasks.
  // When the feature is off, the field is silently dropped, so we omit the
  // guidance entirely to avoid misleading the agent.
  const escalationGuidance =
    prefs?.preferences?.phases?.mid_execution_escalation === true
      ? [
          "**Surfacing non-obvious choices (optional).** If you hit a decision with material tradeoffs that downstream tasks should respect (e.g. data-loss vs. block-progress, two valid library choices with different long-term cost), include an `escalation` payload in your `sf_task_complete` call:",
          "",
          "```json",
          '"escalation": {',
          '  "question": "Short, concrete question",',
          '  "options": [',
          '    { "id": "a", "label": "Option A", "tradeoffs": "what it costs" },',
          '    { "id": "b", "label": "Option B", "tradeoffs": "what it costs" }',
          "  ],",
          '  "recommendation": "a",',
          '  "recommendationRationale": "why a wins on this evidence",',
          '  "continueWithDefault": true',
          "}",
          "```",
          "",
          "Provide 2–4 options with concrete tradeoffs. The recommendation must reference one of the option ids. Auto-mode accepts your recommendation and carries it forward as a hard constraint for downstream tasks; the user can review or override later via `/sf escalate`. Set `continueWithDefault: false` only when the choice is severe enough that the loop should pause for human review even in auto-mode (rare).",
        ].join("\n")
      : "";

  return loadPrompt("execute-task", {
    memoriesSection,
    overridesSection,
@@ -2512,6 +2541,7 @@ export async function buildExecuteTaskPrompt(
    inlinedTemplates,
    verificationBudget,
    gatesToClose,
    escalationGuidance,
    skillActivation: buildSkillActivationBlock({
      base,
      milestoneId: mid,
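The comment block in the first hunk says auto-mode auto-accepts the recommendation, and the guidance text says `continueWithDefault: false` pauses the loop for human review. A sketch of that branching under the assumption of a hypothetical `resolveInAutoMode` helper; the real acceptance logic is not shown in this diff:

```typescript
// Illustrative decision type — the "accepted" branch corresponds to
// auto-mode carrying the recommendation forward as a hard constraint.
type AutoDecision =
  | { kind: "accepted"; optionId: string }
  | { kind: "pause-for-human" };

function resolveInAutoMode(e: {
  recommendation: string;
  continueWithDefault: boolean;
}): AutoDecision {
  // continueWithDefault: false means the choice is severe enough that
  // the loop should pause for human review even in auto-mode (rare).
  if (!e.continueWithDefault) return { kind: "pause-for-human" };
  // Otherwise auto-mode accepts the recommendation; the user can still
  // review or override later via `/sf escalate`.
  return { kind: "accepted", optionId: e.recommendation };
}
```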
@@ -95,6 +95,8 @@ All work stays in your working directory: `{{workingDirectory}}`.

**Autonomous execution:** Do not call `ask_user_questions` or `secure_env_collect`. You are running in auto-mode — there is no human available to answer questions. Make reasonable assumptions and document them in the task summary. If a decision genuinely requires human input, note it in the summary and proceed with the best available option.

{{escalationGuidance}}

**You MUST call `sf_task_complete` before finishing. Do not manually write `{{taskSummaryPath}}`.**

When done, say: "Task {{taskId}} complete."
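When the flag is off, `escalationGuidance` is the empty string, so the `{{escalationGuidance}}` placeholder in the template renders as nothing and the guidance never reaches the agent. A toy substitution sketch illustrating why that works; `renderTemplate` is hypothetical and stands in for whatever `loadPrompt` actually does:

```typescript
// Naive {{placeholder}} substitution: unknown keys render as "".
function renderTemplate(tpl: string, vars: Record<string, string>): string {
  return tpl.replace(/\{\{(\w+)\}\}/g, (_, key) => vars[key] ?? "");
}

const tpl = "Intro\n\n{{escalationGuidance}}\n\nOutro";

// Feature off: the placeholder collapses to an empty line.
const off = renderTemplate(tpl, { escalationGuidance: "" });

// Feature on: the guidance block appears inline.
const on = renderTemplate(tpl, {
  escalationGuidance: "**Surfacing non-obvious choices (optional).**",
});
```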