fix(sf): harden exit and worktree cleanup

parent ddee5c8711 · commit 1412eac60a
16 changed files with 1179 additions and 21 deletions
@ -1,7 +1,8 @@
 # ADR-001: Branchless Worktree Architecture
-**Status:** Accepted
+**Status:** Accepted — partial drift
 **Date:** 2026-03-15
+**Revised:** 2026-05-02 — partial drift documented; code migration incomplete
 **Deciders:** Lex Christopherson
 **Advisors:** Claude Opus 4.6, Gemini 2.5 Pro, GPT-5.4 (Codex)

@ -147,12 +148,14 @@ Planning artifacts (milestones/, PROJECT.md, DECISIONS.md, REQUIREMENTS.md, QUEU
 |------|--------------|----------------|
 | `auto-worktree.ts` | ~246 | `mergeSliceToMilestone()`, `shouldUseWorktreeIsolation()`, `getMergeToMainMode()`, slice merge guards |
 | `git-service.ts` | ~250 | `mergeSliceToMain()`, conflict resolution, runtime stripping post-merge, `ensureSliceBranch()`, `switchToMain()` |
-| `git-self-heal.ts` | ~86 | `abortAndReset()`, `withMergeHeal()` (merge-specific recovery) |
-| `auto.ts` | ~150 | Merge dispatch guards, `fix-merge` dispatch path, branch-mode routing |
-| `worktree.ts` | ~40 | `getSliceBranchName()`, `ensureSliceBranch()`, `mergeSliceToMain()` delegates |
+| `git-self-heal.ts` | ~86 | `abortAndReset()`, `withMergeHeal()` (merge-specific recovery) **(still present — see Drift section)** |
+| `auto.ts` | ~150 | Merge dispatch guards, `fix-merge` dispatch path, branch-mode routing **(partially still present — see Drift section)** |
+| `worktree.ts` | ~40 | `getSliceBranchName()`, `ensureSliceBranch()`, `mergeSliceToMain()` delegates **(getSliceBranchName still present — see Drift section)** |
 | **Test files** | ~11 files | `auto-worktree-merge.test.ts`, `auto-worktree-milestone-merge.test.ts`, merge-related test cases |
 | **Total** | **~770+ lines** | |

+*Verified 2026-05-02: mergeSliceToMilestone and the original mergeSliceToMain (git-service.ts) are deleted. Several other items listed above are still present — see Drift section below. Current authoritative milestone merge path: `src/resources/extensions/sf/auto-worktree.ts:1616` (`mergeMilestoneToMain`). A separate newer function `mergeSliceToMain` at `src/resources/extensions/sf/slice-cadence.ts:92` was added post-ADR for the slice-cadence collapse feature (#4765) and is unrelated to the deleted branch-era function.*
+
 ### What `mergeMilestoneToMain()` Becomes

 The function simplifies dramatically:
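The hunk ends before showing the simplified body. As a rough illustration only — the helper names, git flags, and commit message below are assumptions, not the repository's code — a squash-only milestone→main merge reduces to three git steps:

```typescript
import { execFileSync } from "node:child_process";

// Illustrative sketch, not the actual mergeMilestoneToMain() implementation:
// a squash-only milestone→main merge is just switch, squash-merge, commit.
export function buildSquashMergeSteps(milestoneBranch: string): string[][] {
  return [
    ["switch", "main"],
    ["merge", "--squash", milestoneBranch],
    ["commit", "-m", `merge milestone ${milestoneBranch} (squash)`],
  ];
}

// Hypothetical runner: execute the steps inside the repository.
export function runSquashMerge(repoPath: string, milestoneBranch: string): void {
  for (const args of buildSquashMergeSteps(milestoneBranch)) {
    execFileSync("git", args, { cwd: repoPath, stdio: "inherit" });
  }
}
```

Compare this three-step shape with the branch-era path the table above deletes: no slice branches, no conflict categorization, no merge-heal wrapper.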
@ -174,6 +177,8 @@ The force-add of `SF_DURABLE_PATHS` is no longer needed — planning artifacts a

 The `_runtimeFilesCleanedUp` one-time migration logic can also be removed.

+*Verified 2026-05-02: `smartStage()` at `src/resources/extensions/sf/git-service.ts:576` is still present in its original form, including `_runtimeFilesCleanedUp` migration logic and `SF_MILESTONE_LOCK` parallel-scope logic. The simplification described here has not been performed — see Drift section.*
+
 ### What Happens to `handleAgentEnd()`

 After any unit completes:
@ -188,6 +193,8 @@ The "Path A fix" (lines 937-953) becomes the only path. No branch mismatch possi

 The `fix-merge` dispatch unit type is eliminated. Within a worktree, there are no merges that can conflict. The only merge is milestone→main (squash); if that merge conflicts (a rare parallel-milestone edge case), it is handled as a one-time resolution at milestone completion — not as a dispatch loop.

+*Verified 2026-05-02: The `fix-merge` prompt template has been deleted (no `fix-merge.md` exists in `src/resources/extensions/sf/prompts/`). However, `MergeConflictError` (re-exported from `git-service.ts:201`) is still present, along with a JSDoc comment at `git-service.ts:199` that still mentions dispatching a "fix-merge session". No active dispatch-loop code was found — this appears to be residual documentation in the class definition, not active dispatch logic. Treated as "still present (partial)" in the Drift section.*
+
 ### Backwards Compatibility

 The `shouldUseWorktreeIsolation()` three-tier preference resolution is replaced by a single behavior: worktree isolation is always used. The `git.isolation: "branch"` preference is deprecated.
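As a sketch of that behavior change (the tier order — project preference, then user preference, then a branch-mode default — is an assumption; only the `git.isolation: "branch"` key appears in the ADR):

```typescript
type IsolationPref = "worktree" | "branch" | undefined;

// Sketch of the deprecated three-tier resolution: project pref wins, then
// user pref, then a branch-mode default. The tier sources are assumptions.
export function shouldUseWorktreeIsolationLegacy(
  projectPref: IsolationPref,
  userPref: IsolationPref,
): boolean {
  const resolved = projectPref ?? userPref ?? "branch";
  return resolved === "worktree";
}

// Replacement per the ADR: worktree isolation is always used.
export function shouldUseWorktreeIsolationNew(): boolean {
  return true;
}
```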
@ -277,3 +284,28 @@ Response: Accepted in spirit. Commits with conventional tags (`feat(M001/S01):`,
 - Update/delete 11 test files
 - Update README suggested gitignore
 - Migration path for existing projects with slice branches
+
+## Drift From Original Decision
+
+*Audited 2026-05-02. Items the ADR claims were deleted that are still present in the codebase:*
+
+| Item | Status | File:Line | Cleanup Pending |
+|------|--------|-----------|-----------------|
+| `git-self-heal.ts` (whole file) | **Still present** | `src/resources/extensions/sf/git-self-heal.ts:1–142` | File is 142 lines; exports `abortAndReset()` and `formatGitError()`. The ADR claimed ~86 lines deleted. Delete the entire file and migrate any callers of `abortAndReset()` to in-place reset logic. |
+| `smartStage()` | **Still present** | `src/resources/extensions/sf/git-service.ts:576` | Still has `_runtimeFilesCleanedUp` migration logic, `SF_MILESTONE_LOCK` parallel-scope exclusions, and 49+ lines of runtime exclusion. Simplify as described in the Consequences section. |
+| `shouldUseWorktreeIsolation()` | **Still present** | `src/resources/extensions/sf/auto.ts:357` | The ADR requires single-mode worktree-always behavior; this function still exists and defaults to `false` (worktree off unless explicitly opted in). The branch-mode fallback persists. Remove after deprecating `git.isolation: "branch"`. |
+| `getSliceBranchName()` | **Still present** | `src/resources/extensions/sf/worktree.ts:261` | Still used by `workspace-index.ts:156` to record historical branch names. Evaluate whether this is still needed or can be removed. |
+| `MergeConflictError` + fix-merge JSDoc | **Partially present** | `src/resources/extensions/sf/git-service.ts:199–225` | The `MergeConflictError` class and a JSDoc comment referencing "dispatch a fix-merge session" remain. The `fix-merge.md` prompt template is deleted; no active dispatch loop was found. Remove the JSDoc reference; `MergeConflictError` itself is still needed (used by `slice-cadence.ts`). |
+
+### Items Confirmed Deleted
+
+- `mergeSliceToMilestone()` — not found anywhere in the codebase.
+- Original `mergeSliceToMain()` (branch-era, from `git-service.ts`) — deleted. A *new* `mergeSliceToMain()` exists at `src/resources/extensions/sf/slice-cadence.ts:92` but was added post-ADR for the slice-cadence collapse feature (#4765) and is architecturally consistent with the branchless model.
+- `fix-merge.md` prompt template — deleted (no file at `src/resources/extensions/sf/prompts/fix-merge.md`).
+- Conflict categorization (~80 lines) — not found.
+- `withMergeHeal()` — not found.
+- `ensureSliceBranch()` / `switchToMain()` / `getMergeToMainMode()` — not found.
+
+### Current Authoritative Merge Path
+
+The current milestone→main merge implementation is `mergeMilestoneToMain()` at `src/resources/extensions/sf/auto-worktree.ts:1616`. It performs a squash merge after auto-committing dirty worktree state, reconciling the worktree DB, and running a pre-flight rebase. It does not use slice branches, `withMergeHeal`, or conflict categorization.
@ -2,6 +2,7 @@

 **Date**: 2026-04-29
 **Status**: proposed (deferred — capture for staged execution)
+**Revised**: 2026-05-02 — Phase 4 cancelled, see [ADR-019](./ADR-019-workspace-vm-convergence.md)

 ## Context

@ -12,7 +13,7 @@ Two trajectories converge:
 - **Knowledge federates** — anti-patterns, learnings, contracts should be reachable across sf instances and across other agent products on the tailnet (Hermes, OpenClaw, Claude Code, Cursor).
 - **Persistent agents centralise** — long-lived cross-project agents (code-reviewer with cross-project memory, memory-curator, security-auditor, build-watch) are too heavy and too cross-cutting to live per-project.

-These two needs collapse into one service: the **Singularity Knowledge + Agent Platform** — a single Go server hosting the federated memory store *and* the central persistent-agent runtime.
+These two needs collapse into one service: the **Singularity Knowledge + Agent Platform** — a single Go server hosting the federated memory store *and* the central persistent-agent runtime. *(Note: the persistent-agent runtime portion — Phase 4 — has since been cancelled by [ADR-019](./ADR-019-workspace-vm-convergence.md). This ADR's active scope is the knowledge layer only, Phases 0–3.)*

 This ADR fixes the stack.

@ -23,7 +24,7 @@ The implementation arm of this ADR lives in [`singularity-memory/MIGRATION.md`](
 - **Language: Go.**
 - **Storage backbone: Postgres + vchord** (existing) — accessed from Go via `pgx`. No data migration; same schema, same vchord index.
 - **Identity / auth / sync layer: `charmbracelet/charm`-server patterns** — SSH-key identity, JWT issuance, encrypted KV for user-level prefs and config. Adopted as ported library code; not run as a sidecar.
-- **Agent runtime: `charmbracelet/fantasy`** — multi-provider LLM access (Anthropic, OpenAI, Google, Bedrock, OpenRouter, etc. via `catwalk`). Used for embeddings/summarisation today; for full central persistent agents tomorrow.
+- **Agent runtime: `charmbracelet/fantasy`** — multi-provider LLM access (Anthropic, OpenAI, Google, Bedrock, OpenRouter, etc. via `catwalk`). Used for embeddings/summarisation today. *(The original plan to grow this into a full central persistent-agent runtime — Phase 4 — is cancelled by [ADR-019](./ADR-019-workspace-vm-convergence.md). `fantasy` is retained for embeddings/summarisation within the knowledge layer only.)*
 - **HTTP API: Go `net/http` + chi or echo router**, serving the *exact* current OpenAPI contract.
 - **MCP server: same wire protocol** as today's Python implementation. Clients (sf, Hermes, OpenClaw, Claude Code, Cursor) keep working unchanged.
 - **CLI scaffolding: `charmbracelet/fang`.**
@ -47,7 +48,7 @@ The implementation arm of this ADR lives in [`singularity-memory/MIGRATION.md`](

 ### Agent runtime

-- **Direct SDK calls (`anthropic-sdk-go`, `openai-go`, `go-genai`).** Simplest for today's narrow LLM use (embeddings + summarisation). But future central persistent agents need agent-loop semantics (multi-turn, tool calls); building those on raw SDKs reinvents fantasy's abstractions. Rejected — foundation bet.
+- **Direct SDK calls (`anthropic-sdk-go`, `openai-go`, `go-genai`).** Simplest for today's narrow LLM use (embeddings + summarisation). But future central persistent agents need agent-loop semantics (multi-turn, tool calls); building those on raw SDKs reinvents fantasy's abstractions. Rejected — foundation bet. *(Phase 4 is now cancelled by [ADR-019](./ADR-019-workspace-vm-convergence.md), so the persistent-agent motivation no longer applies; however `fantasy` is still chosen for its clean multi-provider API for embeddings/summarisation.)*
 - **Build our own agent runtime in Go.** Pure NIH. Rejected.
 - **`charmbracelet/fantasy`.** ← chosen. 730 stars, actively developed, clean API, multi-provider via `catwalk`.

@ -55,7 +56,7 @@ The implementation arm of this ADR lives in [`singularity-memory/MIGRATION.md`](

 **Positive**

-- **Foundation is right** for central persistent agents (sf SPEC §17). Adding new agents means defining their tools and system prompt, not rebuilding the runtime.
+- **Foundation is right** for the knowledge layer. *(The original "foundation for central persistent agents" rationale is superseded — Phase 4 is cancelled by [ADR-019](./ADR-019-workspace-vm-convergence.md). Persistent agents now live as Firecracker VM snapshots managed by ACE.)*
 - **Single static Go binary** is operationally simpler than Python uv/venv + Alembic + worker on each deployment host.
 - **Charm ecosystem alignment** with sf-worker (ADR-013), flight recorder (ADR-015), Charm TUI client (ADR-017). One language for the new-services tier.
 - **Wire contract preserved** — clients are zero-touch.
@ -75,7 +76,7 @@ The implementation arm of this ADR lives in [`singularity-memory/MIGRATION.md`](
 - *Risk:* `fantasy` API churn during the migration.
   - *Mitigation:* pin a version; one planned upgrade midway through the migration.
 - *Risk:* central agents prove unworkable as a model and we've over-built the foundation.
-  - *Mitigation:* the foundation cost is incremental (fantasy ≈ raw SDK + a thin abstraction). Worst case we use fantasy for embeddings only and never grow it. No wasted bet.
+  - *Mitigation:* the foundation cost is incremental (fantasy ≈ raw SDK + a thin abstraction). Worst case we use fantasy for embeddings only and never grow it. No wasted bet. *(Moot — Phase 4 is cancelled by [ADR-019](./ADR-019-workspace-vm-convergence.md); fantasy stays scoped to the knowledge layer.)*

 ## Out of Scope

@ -92,10 +93,26 @@ The implementation arm of this ADR lives in [`singularity-memory/MIGRATION.md`](
 | 1 | Greenfield Go scaffold parallel to Python; first endpoint (`GET /v1/banks`) | 2–3 weeks |
 | 2 | Endpoint parity (recall is the critical gate) | 4–8 weeks |
 | 3 | Worker + admin UI (`pony` + `ultraviolet` on `wish`) | 2–3 weeks |
-| 4 | Central persistent-agent host (depends on sf SPEC §17 scoping) | variable |
+| ~~4~~ | ~~Central persistent-agent host~~ | ~~variable~~ |
 | 5 | Python deprecation | 1 week |

-Total: ~12 weeks for Phases 0–3 + Phase 5; Phase 4 lands when sf-side agent layer is scoped.
+Total: ~12 weeks for Phases 0–3 + Phase 5. Phase 4 is cancelled — see the section below.
+
+## Phase 4 — Cancelled (See [ADR-019](./ADR-019-workspace-vm-convergence.md))
+
+Phase 4 was originally planned as a "central persistent-agent runtime" built on `charmbracelet/fantasy` inside singularity-memory's Go server. [ADR-019](./ADR-019-workspace-vm-convergence.md) (Workspace VM Convergence, 2026-05-01) supersedes this plan entirely.
+
+**What replaced it:** Persistent agents now live as **Firecracker VM snapshots managed by ACE**'s orchestration layer. A "persistent agent" is a named VM snapshot: restore it, and the agent wakes with its full memory and context intact. singularity-memory's scope is now strictly the knowledge layer (Phases 0–3). See ADR-019 § "ADR-014 Phase 4 is reassigned" for the authoritative statement.
+
+### Historical: Original Phase 4 Plan
+
+> *The content below is the original Phase 4 design, preserved as a historical record. It is **not** the current plan.*
+
+The original Phase 4 called for singularity-memory's Go server to host a central persistent-agent runtime using `charmbracelet/fantasy`. Long-lived cross-project agents (code-reviewer, memory-curator, security-auditor, build-watch) would run there, with their state managed by the same Postgres store. This depended on sf SPEC §17 scoping being completed ("status NEW" at ADR-014's writing date).
+
+The rationale for building this in singularity-memory was ecosystem alignment with `fantasy` + `charm-server` + `wish` and avoiding per-project agent redundancy. The timeline was listed as "variable" because SPEC §17 had not been fully scoped.
+
+ADR-019 made this moot by choosing a cleaner isolation model (hypervisor-level VM snapshots) that is language-agnostic inside the VM, multi-tenant by construction, and owned by ACE rather than a shared Go server.
+
 ## References

@ -2,6 +2,7 @@

 **Status:** Proposed
 **Date:** 2026-05-01
+**Revised:** 2026-05-02 — wire-format scope superseded by ADR-020
 **Deciders:** Mikael Hugo
 **Context repos:** `singularity-forge` (SF), `ace-coder` (ACE)

@ -175,7 +176,9 @@ workspace VM primitive is stable.

 ## MCP scope

-Internal services use typed direct clients (gRPC for first-party). MCP is reserved
+> **Superseded by ADR-020:** This section's proposal to use MCP for internal service wires is replaced. ADR-020 mandates **gRPC** for first-party services (SF, ACE, memory). MCP is reserved for **external coding tools** (Claude Code, Cursor) only. The original analysis below is preserved as historical context.
+
+[Originally proposed: MCP for internal services — superseded by ADR-020 in favor of gRPC.] Internal services use typed direct clients (gRPC for first-party). MCP is reserved
 for external coding tools (Claude Code, Cursor) that don't share our build system.
 See [ADR-020](./ADR-020-internal-wire-architecture.md) for the full wire-format table and rationale.

@ -190,7 +193,8 @@ See [ADR-020](./ADR-020-internal-wire-architecture.md) for the full wire-format

 ### Phase 2 — Federated memory for ACE (near-term, ADR-012 Tier 1)
 - ACE connects to singularity-memory via a typed Python client (generated from
-  the Go API — not MCP). Internal services do not pay the MCP tax.
+  the Go API — not MCP). Internal services do not pay the MCP tax. [Wire format
+  confirmed by ADR-020: gRPC for first-party services.]
 - **SF stays local.** SF is single-machine, single-user, local-first by design.
   `memory-store.ts` continues to work on `.sf/memory/`; no remote mode wired in
   SF core. When SF runs inside an ACE-managed workspace, the workspace surfaces
@ -39,5 +39,6 @@ export const BUILTIN_SLASH_COMMANDS: ReadonlyArray<BuiltinSlashCommand> = [
   { name: "edit-mode", description: "Toggle edit mode (standard/hashline)" },
   { name: "terminal", description: "Run a shell command directly (e.g. /terminal ping -c3 1.1.1.1)" },
   { name: "stop", description: "Stop the currently running response" },
+  { name: "exit", description: "Quit pi" },
   { name: "quit", description: "Quit pi" },
 ];
@ -23,6 +23,7 @@ function createHost(options: HostOptions = {}) {
   let editorText = "";
   let settingsOpened = 0;
   let aborts = 0;
+  let shutdowns = 0;
   const statuses: string[] = [];
   let pendingDisplayUpdates = 0;
   let renderRequests = 0;
@ -67,6 +68,9 @@ function createHost(options: HostOptions = {}) {
         settingsOpened += 1;
       },
       showStatus: host.showStatus,
+      shutdown: async () => {
+        shutdowns += 1;
+      },
     }),
     handleBashCommand: async () => {},
     showWarning(message: string) {
@ -113,6 +117,7 @@ function createHost(options: HostOptions = {}) {
     getEditorText: () => editorText,
     getSettingsOpened: () => settingsOpened,
     getAborts: () => aborts,
+    getShutdowns: () => shutdowns,
     statuses,
     getPendingDisplayUpdates: () => pendingDisplayUpdates,
     getRenderRequests: () => renderRequests,
@ -147,6 +152,17 @@ test("input-controller: built-in slash commands stay in TUI dispatch", async ()
   );
 });

+test("input-controller: /exit is a built-in shutdown alias", async () => {
+  const { host, prompted, errors, getEditorText, getShutdowns } = createHost();
+
+  await host.defaultEditor.onSubmit("/exit");
+
+  assert.equal(getShutdowns(), 1);
+  assert.deepEqual(prompted, []);
+  assert.deepEqual(errors, []);
+  assert.equal(getEditorText(), "");
+});
+
 test("input-controller: /stop aborts the current response", async () => {
   const { host, prompted, errors, statuses, getAborts, getEditorText } =
     createHost();
@ -219,6 +219,15 @@ export async function dispatchSlashCommand(
     ctx.showSessionSelector();
     return true;
   }
+  if (text === "/exit") {
+    const extensionExit = ctx.session.extensionRunner?.getCommand("exit");
+    if (extensionExit && ctx.session.extensionRunner) {
+      await extensionExit.handler("", ctx.session.extensionRunner.createCommandContext());
+    } else {
+      await ctx.shutdown();
+    }
+    return true;
+  }
   if (text === "/quit") {
     await ctx.shutdown();
     return true;
@ -7,6 +7,7 @@
  */

 import { execFileSync } from "node:child_process";
+import { randomUUID } from "node:crypto";
 import {
   cpSync,
   existsSync,
@ -57,6 +58,7 @@ import {
   isDbAvailable,
   reconcileWorktreeDb,
 } from "./sf-db.js";
+import { emitJournalEvent } from "./journal.js";
 import { logError, logWarning } from "./workflow-logger.js";
 import { detectWorktreeName, nudgeGitBranchCache } from "./worktree.js";
 import {
@ -66,6 +68,7 @@ import {
   resolveGitDir,
   worktreePath,
 } from "./worktree-manager.js";
+import { isInsideWorktree } from "./repo-identity.js";

 const sfHome = process.env.SF_HOME || join(homedir(), ".sf");
 const PROJECT_PREFERENCES_FILE = "PREFERENCES.md";
@ -1204,6 +1207,26 @@ export function createAutoWorktree(
   basePath: string,
   milestoneId: string,
 ): string {
+  // Guard: refuse to create a worktree from inside an existing worktree.
+  // Nested worktrees corrupt state on merge-back and are never intentional.
+  if (isInsideWorktree(basePath)) {
+    emitJournalEvent(basePath, {
+      ts: new Date().toISOString(),
+      flowId: randomUUID(),
+      seq: 0,
+      eventType: "worktree-create-failed",
+      data: {
+        milestoneId,
+        reason: "nested-worktree-rejected",
+        basePath,
+      },
+    });
+    throw new SFError(
+      SF_GIT_ERROR,
+      `cannot create a nested worktree from inside an existing worktree: ${basePath}`,
+    );
+  }
+
   const branch = autoWorktreeBranch(milestoneId);

   // Check if the milestone branch already exists — it survives auto-mode
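`isInsideWorktree()` is imported from `repo-identity.ts`, which this diff does not show. A plausible sketch of the check — an assumption, mirroring the `.git`-file test that `isStructurallyHealthy()` uses later in this commit:

```typescript
import { existsSync, lstatSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Pure check: a linked worktree's .git file is a one-line "gitdir:" pointer.
export function isGitdirPointer(content: string): boolean {
  return content.trim().startsWith("gitdir:");
}

// Hypothetical shape of isInsideWorktree(): a normal repository has a .git
// *directory*, while a linked worktree has a .git *file* pointing elsewhere.
export function looksLikeLinkedWorktree(dir: string): boolean {
  const gitPath = join(dir, ".git");
  if (!existsSync(gitPath)) return false;
  try {
    if (!lstatSync(gitPath).isFile()) return false;
    return isGitdirPointer(readFileSync(gitPath, "utf-8"));
  } catch {
    return false;
  }
}
```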
@ -63,7 +63,9 @@ export type JournalEventType =
   | "milestone-resquash"
   // dispatch telemetry — measure agent/subagent invocation frequency and shape
   | "subagent-invoked"
-  | "subagent-completed";
+  | "subagent-completed"
+  // #6 — divergence cap enforcement
+  | "worktree-divergence-warning";

 /** A single structured event in the journal. */
 export interface JournalEntry {
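The new `worktree-divergence-warning` event type implies a threshold check somewhere upstream. A hedged sketch of how such a check could be wired — the cap value, the helper function, and the event's `data` shape are illustrative; only the event type name and the `emitJournalEvent`/`getCommitsBehindMain` identifiers appear in this commit:

```typescript
// Illustrative helper: decide whether a worktree has diverged past a cap.
// The helper name and the cap value are assumptions, not repository code.
export function exceedsDivergenceCap(
  commitsBehind: number,
  cap: number,
): boolean {
  return commitsBehind >= cap;
}

// Possible wiring (glue is hypothetical):
//
//   const { commitsBehind } = getCommitsBehindMain(wtPath, "main");
//   if (exceedsDivergenceCap(commitsBehind, 50)) {
//     emitJournalEvent(basePath, {
//       ts: new Date().toISOString(),
//       flowId: randomUUID(),
//       seq: 0,
//       eventType: "worktree-divergence-warning",
//       data: { commitsBehind },
//     });
//   }
```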
@ -1484,6 +1484,25 @@ export function nativeUnpushedCount(basePath: string, branch: string): number {
   }
 }

+/**
+ * Count commits that are ahead of and behind a reference (e.g. main branch).
+ * Returns { commitsAhead, commitsBehind } from the perspective of `worktreePath`.
+ *
+ * commitsAhead = commits in HEAD that are not in mainRef (`mainRef..HEAD`)
+ * commitsBehind = commits in mainRef not yet in HEAD (`HEAD..mainRef`)
+ *
+ * Fallback: `git rev-list --count <ref>..HEAD` and the inverse.
+ */
+export function getCommitsBehindMain(
+  worktreePath: string,
+  mainRef: string,
+): { commitsAhead: number; commitsBehind: number } {
+  // nativeCommitCountBetween is already available and backed by native or CLI
+  const commitsAhead = nativeCommitCountBetween(worktreePath, mainRef, "HEAD");
+  const commitsBehind = nativeCommitCountBetween(worktreePath, "HEAD", mainRef);
+  return { commitsAhead, commitsBehind };
+}
+
 // ─── Re-exports for type consumers ──────────────────────────────────────

 export type {
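The `git rev-list --count` fallback named in the docstring can be exercised directly from a shell (the branch name `main` here is illustrative):

```shell
# commits in HEAD that main does not have (commitsAhead)
git rev-list --count main..HEAD
# commits in main that HEAD does not have (commitsBehind)
git rev-list --count HEAD..main
```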
185 src/resources/extensions/sf/orphan-worktree-sweep.ts Normal file

@ -0,0 +1,185 @@
/**
 * SF Orphan Worktree Sweep
 *
 * Detects orphaned worktree directories left behind by crashed or killed units
 * and either leaves them intact (resumable) or removes them (broken).
 *
 * Called at session_start to ensure stale worktrees from prior crashes don't
 * accumulate indefinitely.
 *
 * Triage logic:
 *   Active    — auto.lock present and PID alive → leave alone
 *   Resumable — no active lock, but .git file is valid → leave intact, journal
 *   Broken    — .git missing or unreadable → prune, journal
 */

import { randomUUID } from "node:crypto";
import { existsSync, lstatSync, readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { emitJournalEvent } from "./journal.js";
import { removeWorktree, worktreesDir, worktreePath } from "./worktree-manager.js";

// ─── Types ───────────────────────────────────────────────────────────────────

export interface SweepResult {
  /** Milestone IDs whose worktrees were left intact for operator resume. */
  resumed: string[];
  /** Milestone IDs whose worktrees were removed (broken structure). */
  pruned: string[];
  /** Per-worktree errors that did not stop the sweep. */
  errors: { id: string; reason: string }[];
}

// ─── Internal Helpers ─────────────────────────────────────────────────────────

/**
 * Read the auto.lock file for a worktree and return the PID, or null if absent.
 * The lock lives at <worktreeDir>/.sf/auto.lock (the worktree has its own .sf/).
 */
function readWorktreeLockPid(worktreeDir: string): number | null {
  const lockPath = join(worktreeDir, ".sf", "auto.lock");
  if (!existsSync(lockPath)) return null;
  try {
    const raw = readFileSync(lockPath, "utf-8");
    const data = JSON.parse(raw) as { pid?: unknown };
    return typeof data.pid === "number" ? data.pid : null;
  } catch {
    return null;
  }
}

/**
 * Returns true if the given PID is alive in this OS process table.
 * Uses signal 0 — POSIX / Node standard liveness probe.
 */
function isPidAlive(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}

/**
 * Returns true if the worktree at `wtPath` has a structurally valid git pointer:
 * a `.git` *file* (not a directory) whose content starts with "gitdir:".
 *
 * A real git worktree always has a .git file. A .git directory means it's a
 * standalone repo nested inside (not a registered worktree). Either way the
 * worktree is not structurally healthy as an SF worktree.
 */
function isStructurallyHealthy(wtPath: string): boolean {
  const gitPath = join(wtPath, ".git");
  if (!existsSync(gitPath)) return false;
  try {
    const stat = lstatSync(gitPath);
    if (!stat.isFile()) return false;
    const content = readFileSync(gitPath, "utf-8").trim();
    return content.startsWith("gitdir:");
  } catch {
    return false;
  }
}

// ─── Public API ───────────────────────────────────────────────────────────────

/**
 * Sweep `.sf/worktrees/` for orphaned worktrees and triage each one.
 *
 * - Active (PID alive) → skipped entirely.
 * - Resumable (healthy) → left intact; operator-visible journal event emitted.
 * - Broken (no .git) → removed via removeWorktree(); journal event emitted.
 *
 * Never throws. Each per-worktree error is accumulated in `errors[]`.
 * Idempotent: running twice with no orphans returns all-empty arrays both times.
 *
 * @param basePath The project root (not the worktree path).
 */
export function sweepOrphanWorktrees(basePath: string): SweepResult {
  const result: SweepResult = { resumed: [], pruned: [], errors: [] };

  const wtDir = worktreesDir(basePath);
  if (!existsSync(wtDir)) return result;

  let entries: string[];
  try {
    entries = readdirSync(wtDir, { withFileTypes: true })
      .filter((d) => d.isDirectory())
      .map((d) => d.name);
  } catch (err) {
    result.errors.push({
      id: "<worktrees-dir>",
      reason: `readdirSync failed: ${err instanceof Error ? err.message : String(err)}`,
    });
    return result;
  }

  for (const id of entries) {
    try {
      const wtPath = worktreePath(basePath, id);

      // ── Active check ──────────────────────────────────────────────────
      const pid = readWorktreeLockPid(wtPath);
      if (pid !== null && isPidAlive(pid)) {
        // In-flight unit owns this worktree — leave it completely alone.
        continue;
      }

      // ── Structural health ─────────────────────────────────────────────
      if (isStructurallyHealthy(wtPath)) {
        // Orphan but intact — operator may want to resume or inspect.
        result.resumed.push(id);
        try {
          emitJournalEvent(basePath, {
            ts: new Date().toISOString(),
            flowId: randomUUID(),
            seq: 0,
            eventType: "worktree-orphaned",
            data: {
              milestoneId: id,
              reason: "resumable",
              worktreeDirExists: true,
              detectedAt: new Date().toISOString(),
            },
          });
        } catch {
          // telemetry failure must not abort sweep
        }
      } else {
        // Broken worktree — prune it.
        const detail = !existsSync(join(wtPath, ".git"))
          ? "missing .git file"
          : "invalid .git content";

        removeWorktree(basePath, id, { deleteBranch: false, force: true });
        result.pruned.push(id);

        try {
          emitJournalEvent(basePath, {
            ts: new Date().toISOString(),
            flowId: randomUUID(),
            seq: 0,
            eventType: "worktree-orphaned",
            data: {
              milestoneId: id,
              reason: "broken-pruned",
              worktreeDirExists: false,
              detail,
              detectedAt: new Date().toISOString(),
            },
          });
        } catch {
          // telemetry failure must not abort sweep
        }
      }
    } catch (err) {
      result.errors.push({
        id,
        reason: err instanceof Error ? err.message : String(err),
      });
    }
  }

  return result;
}
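The triage table in the file's header comment reduces to a small decision function. As a sketch (the `Triage` type and function are illustrative, not part of the commit):

```typescript
// Illustrative reduction of the sweep's triage table (not repository code):
//   Active    — lock PID alive            → leave alone
//   Resumable — no live lock, healthy git → leave intact
//   Broken    — unhealthy git structure   → prune
export type Triage = "active" | "resumable" | "broken";

export function triageWorktree(
  pidAlive: boolean,
  structurallyHealthy: boolean,
): Triage {
  if (pidAlive) return "active";
  return structurallyHealthy ? "resumable" : "broken";
}
```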
@@ -0,0 +1,150 @@
/**
 * auto-worktree-nested-guard.test.ts
 *
 * Verifies that createAutoWorktree() refuses to create a nested worktree
 * when basePath is itself a git worktree (.git file, not directory).
 */

import assert from "node:assert/strict";
import {
  mkdirSync,
  mkdtempSync,
  readdirSync,
  readFileSync,
  rmSync,
  writeFileSync,
} from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { afterEach, beforeEach, describe, test } from "vitest";

import { SFError } from "../errors.ts";
import type { JournalEntry } from "../journal.ts";
import { createAutoWorktree } from "../auto-worktree.ts";

// ─── Helpers ──────────────────────────────────────────────────────────────────

/** Write a fake .git file so the directory looks like a git worktree. */
function makeWorktreeDir(dir: string): void {
  writeFileSync(
    join(dir, ".git"),
    "gitdir: /some/repo/.git/worktrees/fake-wt\n",
    "utf-8",
  );
}

/** Read all journal entries from a temp .sf/journal directory. */
function readJournalEntries(basePath: string): JournalEntry[] {
  const journalDir = join(basePath, ".sf", "journal");
  try {
    const files = readdirSync(journalDir)
      .filter((f) => f.endsWith(".jsonl"))
      .sort();
    const entries: JournalEntry[] = [];
    for (const file of files) {
      const raw = readFileSync(join(journalDir, file), "utf-8");
      for (const line of raw.split("\n")) {
        if (!line.trim()) continue;
        entries.push(JSON.parse(line) as JournalEntry);
      }
    }
    return entries;
  } catch {
    return [];
  }
}

// ─── Tests ────────────────────────────────────────────────────────────────────

describe("createAutoWorktree nested-worktree guard", () => {
  let tmp: string;

  beforeEach(() => {
    tmp = mkdtempSync(join(tmpdir(), "sf-nested-wt-guard-"));
  });

  afterEach(() => {
    rmSync(tmp, { recursive: true, force: true });
  });

  test("throws SFError when basePath is a git worktree", () => {
    makeWorktreeDir(tmp);

    assert.throws(
      () => createAutoWorktree(tmp, "M001"),
      (err: unknown) => {
        assert.ok(err instanceof SFError, "should be an SFError");
        return true;
      },
    );
  });

  test("error message mentions 'nested worktree' and the offending basePath", () => {
    makeWorktreeDir(tmp);

    assert.throws(
      () => createAutoWorktree(tmp, "M001"),
      (err: unknown) => {
        assert.ok(err instanceof Error);
        assert.ok(
          err.message.includes("nested worktree"),
          `expected 'nested worktree' in message, got: ${err.message}`,
        );
        assert.ok(
          err.message.includes(tmp),
          `expected basePath in message, got: ${err.message}`,
        );
        return true;
      },
    );
  });

  test("emits worktree-create-failed journal event with nested-worktree-rejected reason", () => {
    // Ensure .sf/journal directory is writable for the journal emit.
    mkdirSync(join(tmp, ".sf", "journal"), { recursive: true });
    makeWorktreeDir(tmp);

    assert.throws(() => createAutoWorktree(tmp, "M001"));

    const entries = readJournalEntries(tmp);
    const failed = entries.find(
      (e) => e.eventType === "worktree-create-failed",
    );
    assert.ok(failed, "worktree-create-failed event should be emitted");
    assert.equal(
      failed!.data?.reason,
      "nested-worktree-rejected",
      "reason must be nested-worktree-rejected",
    );
    assert.equal(
      failed!.data?.milestoneId,
      "M001",
      "milestoneId must be recorded",
    );
    assert.equal(
      failed!.data?.basePath,
      tmp,
      "basePath must be recorded in event data",
    );
  });

  test("succeeds (does not throw) when basePath is a regular repo directory (control case)", () => {
    // Simulate a real repo: .git is a *directory*, not a file.
    mkdirSync(join(tmp, ".git"), { recursive: true });

    // createAutoWorktree will fail with a git error (no real git binary
    // operations succeed on this fake repo), but it must NOT throw the
    // nested-worktree guard error — i.e. it must get past the guard.
    assert.throws(
      () => createAutoWorktree(tmp, "M001"),
      (err: unknown) => {
        assert.ok(err instanceof Error);
        assert.ok(
          !err.message.includes("nested worktree"),
          `guard must NOT fire for a regular repo; got: ${err.message}`,
        );
        return true;
      },
    );
  });
});
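The structural distinction this test suite exercises — `.git` as a file versus a directory — can be sketched with a small standalone helper (the name is hypothetical; the real guard lives inside `createAutoWorktree`). It assumes git's convention that a linked worktree stores a `.git` *file* containing a `gitdir:` pointer, while a repository root has a `.git` *directory*:

```typescript
import { statSync } from "node:fs";
import { join } from "node:path";

// Hypothetical sketch of the structural check the guard performs.
// Assumption: linked worktrees have a `.git` file ("gitdir: …"); normal
// repository roots have a `.git` directory.
function looksLikeLinkedWorktree(dir: string): boolean {
  try {
    return statSync(join(dir, ".git")).isFile();
  } catch {
    return false; // no .git entry at all — neither repo nor worktree
  }
}
```

Checking `isFile()` rather than parsing the pointer keeps the guard cheap; validating the `gitdir:` content can happen later if needed.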
277 src/resources/extensions/sf/tests/orphan-worktree-sweep.test.ts Normal file
@@ -0,0 +1,277 @@
/**
 * Tests for orphan-worktree-sweep.ts
 *
 * Uses a tmpdir for a fake project root. We don't need real git plumbing because
 * the sweep only reads the filesystem (auto.lock for PID, .git file for structure).
 * removeWorktree is mocked where needed to avoid git side-effects.
 */

import assert from "node:assert/strict";
import {
  existsSync,
  mkdirSync,
  mkdtempSync,
  readdirSync,
  readFileSync,
  rmSync,
  writeFileSync,
} from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { afterEach, beforeEach, describe, test, vi } from "vitest";

// ─── Module under test ───────────────────────────────────────────────────────
// We import after vi.mock declarations so the mocks are in place.

// Mock removeWorktree so tests don't need a live git repo.
vi.mock("../worktree-manager.js", async (importOriginal) => {
  const real = await importOriginal<typeof import("../worktree-manager.js")>();
  return {
    ...real,
    removeWorktree: vi.fn(),
  };
});

import { sweepOrphanWorktrees } from "../orphan-worktree-sweep.ts";
import { removeWorktree } from "../worktree-manager.js";
import type { JournalEntry } from "../journal.js";

// ─── Helpers ─────────────────────────────────────────────────────────────────

/** Create a minimal fake project tree: <root>/.sf/worktrees/ */
function makeProject(): string {
  const root = mkdtempSync(join(tmpdir(), "orphan-sweep-test-"));
  mkdirSync(join(root, ".sf", "worktrees"), { recursive: true });
  // Minimal .sf/ so journal writes don't fail
  mkdirSync(join(root, ".sf", "journal"), { recursive: true });
  return root;
}

/**
 * Create a worktree directory with a valid .git pointer file.
 * The `gitdir:` target doesn't need to exist for the structural check.
 */
function makeHealthyWorktree(root: string, id: string): string {
  const wtPath = join(root, ".sf", "worktrees", id);
  mkdirSync(join(wtPath, ".sf"), { recursive: true });
  writeFileSync(join(wtPath, ".git"), "gitdir: ../../.git/worktrees/" + id + "\n");
  return wtPath;
}

/**
 * Create a worktree directory that is BROKEN — no .git file at all.
 */
function makeBrokenWorktree(root: string, id: string): string {
  const wtPath = join(root, ".sf", "worktrees", id);
  mkdirSync(join(wtPath, ".sf"), { recursive: true });
  // Intentionally no .git file
  return wtPath;
}

/**
 * Write a fake auto.lock into a worktree with a given PID.
 */
function writeLock(wtPath: string, pid: number): void {
  writeFileSync(
    join(wtPath, ".sf", "auto.lock"),
    JSON.stringify({
      pid,
      startedAt: new Date().toISOString(),
      unitType: "execute-task",
      unitId: "T01",
      unitStartedAt: new Date().toISOString(),
    }),
  );
}

/** Read all journal entries from a temp project's .sf/journal directory. */
function readJournal(root: string): JournalEntry[] {
  const journalDir = join(root, ".sf", "journal");
  try {
    const files = readdirSync(journalDir)
      .filter((f) => f.endsWith(".jsonl"))
      .sort();
    const entries: JournalEntry[] = [];
    for (const file of files) {
      const raw = readFileSync(join(journalDir, file), "utf-8");
      for (const line of raw.split("\n")) {
        if (!line.trim()) continue;
        try {
          entries.push(JSON.parse(line) as JournalEntry);
        } catch {
          // skip
        }
      }
    }
    return entries;
  } catch {
    return [];
  }
}

// ─── Setup ────────────────────────────────────────────────────────────────────

let projectRoot: string;

beforeEach(() => {
  vi.clearAllMocks();
  projectRoot = makeProject();
});

afterEach(() => {
  rmSync(projectRoot, { recursive: true, force: true });
});

// ─── Test cases ───────────────────────────────────────────────────────────────

describe("sweepOrphanWorktrees", () => {
  // 1. Empty worktrees directory
  test("empty .sf/worktrees/ returns all-empty arrays, no errors", () => {
    const result = sweepOrphanWorktrees(projectRoot);
    assert.deepStrictEqual(result.resumed, []);
    assert.deepStrictEqual(result.pruned, []);
    assert.deepStrictEqual(result.errors, []);
  });

  // 2. Active worktree (PID is our own process — guaranteed alive)
  test("active worktree (live PID in auto.lock) is left alone", () => {
    const wtPath = makeHealthyWorktree(projectRoot, "M001");
    writeLock(wtPath, process.pid); // Our own PID is always alive

    const result = sweepOrphanWorktrees(projectRoot);
    assert.deepStrictEqual(result.resumed, [], "active worktree must not appear in resumed");
    assert.deepStrictEqual(result.pruned, [], "active worktree must not appear in pruned");
    assert.deepStrictEqual(result.errors, []);
    assert.ok(existsSync(wtPath), "worktree directory must still exist");
  });

  // 3. Resumable orphan — healthy worktree, no live lock
  test("resumable orphan is left intact and journaled with reason 'resumable'", () => {
    makeHealthyWorktree(projectRoot, "M002");
    // No auto.lock written → no active PID

    const result = sweepOrphanWorktrees(projectRoot);
    assert.deepStrictEqual(result.resumed, ["M002"]);
    assert.deepStrictEqual(result.pruned, []);
    assert.deepStrictEqual(result.errors, []);

    // worktree directory must still exist
    assert.ok(existsSync(join(projectRoot, ".sf", "worktrees", "M002")));

    // Journal must have a worktree-orphaned event with reason 'resumable'
    const events = readJournal(projectRoot).filter(
      (e) => e.eventType === "worktree-orphaned",
    );
    assert.equal(events.length, 1, "one orphaned event expected");
    assert.equal(events[0].data?.milestoneId, "M002");
    assert.equal(events[0].data?.reason, "resumable");
  });

  // 4. Broken orphan — missing .git file
  test("broken orphan (missing .git) is pruned and journaled with reason 'broken-pruned'", () => {
    makeBrokenWorktree(projectRoot, "M003");

    const result = sweepOrphanWorktrees(projectRoot);
    assert.deepStrictEqual(result.pruned, ["M003"]);
    assert.deepStrictEqual(result.resumed, []);
    assert.deepStrictEqual(result.errors, []);

    // removeWorktree must have been called
    assert.ok(
      (removeWorktree as ReturnType<typeof vi.fn>).mock.calls.some(
        (args) => args[1] === "M003",
      ),
      "removeWorktree called for M003",
    );

    // Journal event
    const events = readJournal(projectRoot).filter(
      (e) => e.eventType === "worktree-orphaned",
    );
    assert.equal(events.length, 1, "one orphaned event expected");
    assert.equal(events[0].data?.milestoneId, "M003");
    assert.equal(events[0].data?.reason, "broken-pruned");
  });

  // 5. Mixed: one active, one resumable, one broken
  test("mixed scenario: active/resumable/broken triaged correctly", () => {
    // Active — use current process PID
    const activeWt = makeHealthyWorktree(projectRoot, "M010");
    writeLock(activeWt, process.pid);

    // Resumable
    makeHealthyWorktree(projectRoot, "M011");

    // Broken
    makeBrokenWorktree(projectRoot, "M012");

    const result = sweepOrphanWorktrees(projectRoot);

    assert.deepStrictEqual(result.resumed, ["M011"]);
    assert.deepStrictEqual(result.pruned, ["M012"]);
    assert.deepStrictEqual(result.errors, []);

    // Active worktree still exists
    assert.ok(existsSync(activeWt), "active worktree directory must survive");

    // removeWorktree called only for broken
    const calls = (removeWorktree as ReturnType<typeof vi.fn>).mock.calls;
    assert.ok(
      calls.some((args) => args[1] === "M012"),
      "removeWorktree called for broken",
    );
    assert.ok(
      !calls.some((args) => args[1] === "M010"),
      "removeWorktree NOT called for active",
    );
    assert.ok(
      !calls.some((args) => args[1] === "M011"),
      "removeWorktree NOT called for resumable",
    );

    // Two journal events: one resumable, one broken-pruned
    const events = readJournal(projectRoot).filter(
      (e) => e.eventType === "worktree-orphaned",
    );
    assert.equal(events.length, 2);
    const reasons = events.map((e) => e.data?.reason as string).sort();
    assert.deepStrictEqual(reasons, ["broken-pruned", "resumable"]);
  });

  // 6. Error path — removeWorktree throws; error captured, sweep continues
  test("removeWorktree throw is captured in errors[] and sweep continues", () => {
    makeBrokenWorktree(projectRoot, "M020");
    makeHealthyWorktree(projectRoot, "M021"); // resumable — processed after error

    (removeWorktree as ReturnType<typeof vi.fn>).mockImplementationOnce(() => {
      throw new Error("simulated git failure");
    });

    const result = sweepOrphanWorktrees(projectRoot);

    // M020 threw, M021 is resumable — both processed
    assert.equal(result.errors.length, 1, "error accumulated for M020");
    assert.equal(result.errors[0].id, "M020");
    assert.ok(result.errors[0].reason.includes("simulated git failure"));

    // M021 still processed as resumable
    assert.deepStrictEqual(result.resumed, ["M021"]);
    assert.deepStrictEqual(result.pruned, []);
  });

  // 7. No .sf/worktrees/ directory — returns empty result immediately
  test("no .sf/worktrees/ directory returns empty result", () => {
    rmSync(join(projectRoot, ".sf", "worktrees"), { recursive: true, force: true });

    const result = sweepOrphanWorktrees(projectRoot);
    assert.deepStrictEqual(result.resumed, []);
    assert.deepStrictEqual(result.pruned, []);
    assert.deepStrictEqual(result.errors, []);
  });

  // 8. Idempotency — running twice with no orphans returns empty arrays
  test("idempotent: running twice with no orphans returns empty arrays both times", () => {
    const r1 = sweepOrphanWorktrees(projectRoot);
    const r2 = sweepOrphanWorktrees(projectRoot);

    for (const result of [r1, r2]) {
      assert.deepStrictEqual(result.resumed, []);
      assert.deepStrictEqual(result.pruned, []);
      assert.deepStrictEqual(result.errors, []);
    }
  });
});
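The active/orphan triage above leans on PID liveness read from `auto.lock` — hence the use of `process.pid` as a guaranteed-alive PID. A minimal sketch of such a probe, assuming Node's `process.kill(pid, 0)` semantics (signal 0 checks existence without delivering anything); the helper name is illustrative, not the sweep's actual implementation:

```typescript
// Sketch of a PID-liveness probe using kill(pid, 0), which sends no signal
// and only reports whether the process exists and is signalable.
function isPidAlive(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch (err) {
    // EPERM: the process exists but belongs to another user — still alive.
    return (err as NodeJS.ErrnoException).code === "EPERM";
  }
}
```

Treating EPERM as "alive" errs on the safe side: the sweep would rather skip a live worktree owned by another user than prune it.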
318 src/resources/extensions/sf/tests/worktree-divergence.test.ts Normal file
@ -0,0 +1,318 @@
|
|||
/**
|
||||
* worktree-divergence.test.ts — #6 divergence cap enforcement
|
||||
*
|
||||
* Tests for:
|
||||
* - getCommitsBehindMain returns correct counts
|
||||
* - mergeWorktreeToMain emits divergence-warning when commits-behind > threshold
|
||||
* - mergeWorktreeToMain proceeds with merge even when over threshold (warn-and-proceed)
|
||||
* - mergeWorktreeToMain({ autoRebase: true }) rebases and merges when no conflicts
|
||||
* - mergeWorktreeToMain({ autoRebase: true }) throws merge conflict and leaves worktree
|
||||
* in conflict state when rebase conflicts
|
||||
* - Below-threshold case: no warning emitted
|
||||
*/
|
||||
|
||||
import assert from "node:assert/strict";
|
||||
import { execFileSync } from "node:child_process";
|
||||
import {
|
||||
existsSync,
|
||||
mkdirSync,
|
||||
mkdtempSync,
|
||||
readFileSync,
|
||||
rmSync,
|
||||
writeFileSync,
|
||||
} from "node:fs";
|
||||
import { tmpdir } from "node:os";
|
||||
import { join } from "node:path";
|
||||
import { afterEach, beforeEach, describe, test } from "vitest";
|
||||
|
||||
import { queryJournal } from "../journal.js";
|
||||
import { getCommitsBehindMain } from "../native-git-bridge.js";
|
||||
import {
|
||||
mergeWorktreeToMain,
|
||||
WORKTREE_DIVERGENCE_CAP,
|
||||
worktreePath,
|
||||
} from "../worktree-manager.js";
|
||||
|
||||
// ─── Helpers ──────────────────────────────────────────────────────────────────
|
||||
|
||||
function git(args: string[], cwd: string): string {
|
||||
return execFileSync("git", args, {
|
||||
cwd,
|
||||
stdio: ["ignore", "pipe", "pipe"],
|
||||
encoding: "utf-8",
|
||||
env: { ...process.env, GIT_TERMINAL_PROMPT: "0" },
|
||||
}).trim();
|
||||
}
|
||||
|
||||
/** Create a minimal git repo on `main` with one initial commit. */
|
||||
function makeBaseRepo(): string {
|
||||
const base = mkdtempSync(join(tmpdir(), "sf-div-test-"));
|
||||
git(["init", "-b", "main"], base);
|
||||
git(["config", "user.name", "Test User"], base);
|
||||
git(["config", "user.email", "test@test.com"], base);
|
||||
// Create .sf dir so journal can write
|
||||
mkdirSync(join(base, ".sf"), { recursive: true });
|
||||
writeFileSync(join(base, "README.md"), "initial\n");
|
||||
git(["add", "."], base);
|
||||
git(["commit", "-m", "chore: init"], base);
|
||||
return base;
|
||||
}
|
||||
|
||||
/**
|
||||
* Add a git worktree at `.sf/worktrees/<name>` on branch `worktree/<name>`.
|
||||
* Returns the worktree path.
|
||||
*/
|
||||
function addWorktree(base: string, name: string): string {
|
||||
const wtDir = join(base, ".sf", "worktrees", name);
|
||||
mkdirSync(join(base, ".sf", "worktrees"), { recursive: true });
|
||||
git(["worktree", "add", "-b", `worktree/${name}`, wtDir], base);
|
||||
return wtDir;
|
||||
}
|
||||
|
||||
/** Commit a file change in a directory. */
|
||||
function commitFile(
|
||||
dir: string,
|
||||
filename: string,
|
||||
content: string,
|
||||
message: string,
|
||||
): void {
|
||||
writeFileSync(join(dir, filename), content);
|
||||
git(["add", filename], dir);
|
||||
git(["commit", "-m", message], dir);
|
||||
}
|
||||
|
||||
// ─── getCommitsBehindMain ─────────────────────────────────────────────────────
|
||||
|
||||
describe("getCommitsBehindMain", () => {
|
||||
let base: string;
|
||||
let wtPath: string;
|
||||
|
||||
beforeEach(() => {
|
||||
base = makeBaseRepo();
|
||||
wtPath = addWorktree(base, "M001");
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
rmSync(base, { recursive: true, force: true });
|
||||
});
|
||||
|
||||
test("returns zero behind when worktree is up-to-date with main", () => {
|
||||
const { commitsAhead, commitsBehind } = getCommitsBehindMain(wtPath, "main");
|
||||
assert.equal(commitsBehind, 0);
|
||||
assert.equal(commitsAhead, 0);
|
||||
});
|
||||
|
||||
test("returns correct commitsBehind when main has advanced", () => {
|
||||
// Advance main by 3 commits (worktree does NOT pick these up)
|
||||
commitFile(base, "a.txt", "a\n", "chore: A");
|
||||
commitFile(base, "b.txt", "b\n", "chore: B");
|
||||
commitFile(base, "c.txt", "c\n", "chore: C");
|
||||
|
||||
const { commitsBehind, commitsAhead } = getCommitsBehindMain(wtPath, "main");
|
||||
assert.equal(commitsBehind, 3);
|
||||
assert.equal(commitsAhead, 0);
|
||||
});
|
||||
|
||||
test("returns correct commitsAhead when worktree has advanced", () => {
|
||||
// Advance worktree by 2 commits
|
||||
commitFile(wtPath, "x.txt", "x\n", "feat: X");
|
||||
commitFile(wtPath, "y.txt", "y\n", "feat: Y");
|
||||
|
||||
const { commitsAhead, commitsBehind } = getCommitsBehindMain(wtPath, "main");
|
||||
assert.equal(commitsAhead, 2);
|
||||
assert.equal(commitsBehind, 0);
|
||||
});
|
||||
|
||||
test("returns both counts when both have diverged", () => {
|
||||
// Advance main by 2
|
||||
commitFile(base, "m1.txt", "m1\n", "chore: main1");
|
||||
commitFile(base, "m2.txt", "m2\n", "chore: main2");
|
||||
// Advance worktree by 1
|
||||
commitFile(wtPath, "w1.txt", "w1\n", "feat: worktree1");
|
||||
|
||||
const { commitsAhead, commitsBehind } = getCommitsBehindMain(wtPath, "main");
|
||||
assert.equal(commitsBehind, 2);
|
||||
assert.equal(commitsAhead, 1);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── mergeWorktreeToMain — divergence warning ─────────────────────────────────
|
||||
|
||||
describe("mergeWorktreeToMain divergence warning", () => {
|
||||
let base: string;
|
||||
|
||||
beforeEach(() => {
|
||||
base = makeBaseRepo();
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
rmSync(base, { recursive: true, force: true });
|
||||
});
|
||||
|
||||
test("emits worktree-divergence-warning when commitsBehind > threshold", () => {
|
||||
const name = "M002";
|
||||
const wtDir = addWorktree(base, name);
|
||||
|
||||
// Advance main by WORKTREE_DIVERGENCE_CAP + 1 commits (worktree stays behind)
|
||||
for (let i = 0; i <= WORKTREE_DIVERGENCE_CAP; i++) {
|
||||
commitFile(base, `main-${i}.txt`, `${i}\n`, `chore: main advance ${i}`);
|
||||
}
|
||||
|
||||
// Add one commit to worktree so there is something to merge
|
||||
commitFile(wtDir, "feature.txt", "feature\n", "feat: add feature");
|
||||
|
||||
// Perform merge (warn-and-proceed — may fail on conflict since main diverged)
|
||||
try {
|
||||
mergeWorktreeToMain(base, name, "feat: merge M002");
|
||||
} catch {
|
||||
// A merge conflict is fine for this test; we only care about the warning event
|
||||
}
|
||||
|
||||
const entries = queryJournal(base);
|
||||
const warning = entries.find(
|
||||
(e) => e.eventType === "worktree-divergence-warning",
|
||||
);
|
||||
assert.ok(warning, "worktree-divergence-warning event must be emitted");
|
||||
assert.equal(warning.data?.worktreeId, name);
|
||||
assert.ok(
|
||||
typeof warning.data?.commitsBehind === "number" &&
|
||||
(warning.data.commitsBehind as number) > WORKTREE_DIVERGENCE_CAP,
|
||||
"commitsBehind should exceed the threshold",
|
||||
);
|
||||
assert.equal(warning.data?.threshold, WORKTREE_DIVERGENCE_CAP);
|
||||
});
|
||||
|
||||
test("does NOT emit worktree-divergence-warning when below threshold", () => {
|
||||
const name = "M003";
|
||||
const wtDir = addWorktree(base, name);
|
||||
|
||||
// Advance main by only 2 commits (well below cap)
|
||||
commitFile(base, "m1.txt", "m1\n", "chore: m1");
|
||||
commitFile(base, "m2.txt", "m2\n", "chore: m2");
|
||||
|
||||
// Add a commit to the worktree that doesn't conflict
|
||||
commitFile(wtDir, "feature2.txt", "feature2\n", "feat: feature2");
|
||||
|
||||
try {
|
||||
mergeWorktreeToMain(base, name, "feat: merge M003");
|
||||
} catch {
|
||||
// ignore merge issues
|
||||
}
|
||||
|
||||
const entries = queryJournal(base);
|
||||
const warning = entries.find(
|
||||
(e) => e.eventType === "worktree-divergence-warning",
|
||||
);
|
||||
assert.equal(
|
||||
warning,
|
||||
undefined,
|
||||
"no warning event should be emitted below threshold",
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
// ─── mergeWorktreeToMain — warn-and-proceed ───────────────────────────────────
|
||||
|
||||
describe("mergeWorktreeToMain warn-and-proceed", () => {
|
||||
let base: string;
|
||||
|
||||
beforeEach(() => {
|
||||
base = makeBaseRepo();
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
rmSync(base, { recursive: true, force: true });
|
||||
});
|
||||
|
||||
test("proceeds with merge (no conflict) even when over threshold", () => {
|
||||
const name = "M004";
|
||||
const wtDir = addWorktree(base, name);
|
||||
|
||||
// Advance main on a different file (no conflict)
|
||||
for (let i = 0; i <= WORKTREE_DIVERGENCE_CAP; i++) {
|
||||
commitFile(base, `main-nc-${i}.txt`, `${i}\n`, `chore: no-conflict ${i}`);
|
||||
}
|
||||
|
||||
// Worktree adds a new file only it touches
|
||||
commitFile(wtDir, "worktree-only.txt", "wt\n", "feat: worktree-only");
|
||||
|
||||
// Should succeed despite being over threshold
|
||||
const result = mergeWorktreeToMain(base, name, "feat: merge M004");
|
||||
assert.equal(result, "feat: merge M004");
|
||||
|
||||
// Verify the warning was emitted
|
||||
const entries = queryJournal(base);
|
||||
const warning = entries.find(
|
||||
(e) => e.eventType === "worktree-divergence-warning",
|
||||
);
|
||||
assert.ok(warning, "warning must be emitted");
|
||||
});
|
||||
});
|
||||
|
||||
// ─── mergeWorktreeToMain — autoRebase ─────────────────────────────────────────
|
||||
|
||||
describe("mergeWorktreeToMain autoRebase", () => {
|
||||
let base: string;
|
||||
|
||||
beforeEach(() => {
|
||||
base = makeBaseRepo();
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
rmSync(base, { recursive: true, force: true });
|
||||
});
|
||||
|
||||
test("autoRebase:true rebases and merges successfully when no conflicts", () => {
|
||||
const name = "M005";
|
||||
const wtDir = addWorktree(base, name);
|
||||
|
||||
// Advance main by 3 commits on unique files
|
||||
commitFile(base, "main-r1.txt", "r1\n", "chore: rebase-r1");
|
||||
commitFile(base, "main-r2.txt", "r2\n", "chore: rebase-r2");
|
||||
commitFile(base, "main-r3.txt", "r3\n", "chore: rebase-r3");
|
||||
|
||||
// Worktree adds a unique file
|
||||
commitFile(wtDir, "wt-feature.txt", "feature\n", "feat: wt-feature");
|
||||
|
||||
const result = mergeWorktreeToMain(base, name, "feat: merge M005 rebased", {
|
||||
autoRebase: true,
|
||||
});
|
||||
assert.equal(result, "feat: merge M005 rebased");
|
||||
|
||||
// Confirm the merge landed on main
|
||||
const log = git(["log", "--oneline", "-1"], base);
|
||||
assert.ok(log.includes("feat: merge M005 rebased"), "commit should be on main");
|
||||
|
||||
// The rebase file should exist in main after the squash-merge
|
||||
assert.ok(
|
||||
existsSync(join(base, "wt-feature.txt")),
|
||||
"worktree file should exist in main after merge",
|
||||
);
|
||||
});
|
||||
|
||||
test("autoRebase:true throws SF_MERGE_CONFLICT when rebase has conflicts", () => {
|
||||
const name = "M006";
|
||||
const wtDir = addWorktree(base, name);
|
||||
|
||||
// Both main and worktree modify the same file at the same line
|
||||
commitFile(base, "conflict.txt", "main version\n", "chore: main conflict");
|
||||
commitFile(wtDir, "conflict.txt", "worktree version\n", "feat: wt conflict");
|
||||
|
||||
let threw = false;
|
||||
let errorMessage = "";
|
||||
try {
|
||||
mergeWorktreeToMain(base, name, "feat: merge M006 rebase-conflict", {
|
||||
autoRebase: true,
|
||||
});
|
||||
} catch (err: unknown) {
|
||||
threw = true;
|
||||
errorMessage = err instanceof Error ? err.message : String(err);
|
||||
}
|
||||
|
||||
assert.ok(threw, "should throw when rebase has conflicts");
|
||||
assert.ok(
|
||||
errorMessage.includes("conflict") || errorMessage.includes("Conflict"),
|
||||
`error message should mention conflict, got: ${errorMessage}`,
|
||||
);
|
||||
});
|
||||
});
|
||||
|
|
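For reference, the ahead/behind pair these tests assert can be computed with plain git; a hypothetical sketch, assuming `getCommitsBehindMain` wraps something like `git rev-list --left-right --count` (the real implementation lives in native-git-bridge.ts):

```typescript
import { execFileSync } from "node:child_process";

// Sketch of an ahead/behind computation against a main branch.
// `git rev-list --left-right --count main...HEAD` prints "<behind>\t<ahead>":
// left side = commits only on main (behind), right side = commits only on HEAD.
function aheadBehind(
  cwd: string,
  mainBranch: string,
): { commitsAhead: number; commitsBehind: number } {
  const out = execFileSync(
    "git",
    ["rev-list", "--left-right", "--count", `${mainBranch}...HEAD`],
    { cwd, encoding: "utf-8" },
  ).trim();
  const [behind, ahead] = out.split(/\s+/).map(Number);
  return { commitsAhead: ahead, commitsBehind: behind };
}
```

The three-dot `main...HEAD` range is what makes one invocation yield both counts; two separate `rev-list --count` calls would work but cost an extra subprocess.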
@@ -36,6 +36,7 @@ import {
} from "./errors.js";
import { SF_RUNTIME_PATTERNS } from "./gitignore.js";
import {
  getCommitsBehindMain,
  nativeBranchDelete,
  nativeBranchExists,
  nativeBranchForceReset,
@@ -47,13 +48,20 @@ import {
  nativeGetCurrentBranch,
  nativeLogOneline,
  nativeMergeSquash,
  nativeRebaseAbort,
  nativeWorktreeAdd,
  nativeWorktreeList,
  nativeWorktreePrune,
  nativeWorktreeRemove,
} from "./native-git-bridge.js";
import { logWarning } from "./workflow-logger.js";
import {
  emitCanonicalRootRedirect,
  emitWorktreeDivergenceWarning,
} from "./worktree-telemetry.js";

/** Commits-behind threshold above which a divergence warning is emitted. */
export const WORKTREE_DIVERGENCE_CAP = 50;

// ─── Types ─────────────────────────────────────────────────────────────────
@@ -792,7 +800,7 @@ function derivePatternsFromRuntime() {
      paths.push(pattern);
    } else if (!pattern.includes("*") && !pattern.includes("/")) {
      exact.push(pattern);
    } else if (pattern.includes("*") && !pattern.includes("**")) {
      const prefix = pattern.slice(0, pattern.indexOf("*"));
      if (prefix && !prefixes.includes(prefix)) {
        prefixes.push(prefix);
@@ -966,11 +974,21 @@ export function getWorktreeLog(basePath: string, name: string): string {
 * Merge the worktree branch into main using squash merge.
 * Must be called from the main working tree (not the worktree itself).
 * Returns the merge commit message.
 *
 * Divergence cap (#6): if the worktree is more than WORKTREE_DIVERGENCE_CAP
 * commits behind main, a `worktree-divergence-warning` journal event is emitted
 * before the merge attempt (warn-and-proceed is the default).
 *
 * When `autoRebase` is true the worktree branch is rebased onto mainBranch
 * before the squash-merge. If the rebase produces conflicts the rebase is
 * aborted and a SF_MERGE_CONFLICT error is thrown so the existing
 * worktree-merge-failed journal flow handles it.
 */
export function mergeWorktreeToMain(
  basePath: string,
  name: string,
  commitMessage: string,
  opts: { autoRebase?: boolean } = {},
): string {
  const branch = worktreeBranchName(name);
  const mainBranch = nativeDetectMainBranch(basePath);
@@ -983,6 +1001,55 @@ export function mergeWorktreeToMain(
    );
  }

  // ─── Divergence cap check (#6) ─────────────────────────────────────────
  const wtPath = worktreePath(basePath, name);
  try {
    const { commitsAhead, commitsBehind } = getCommitsBehindMain(
      wtPath,
      mainBranch,
    );
    if (commitsBehind > WORKTREE_DIVERGENCE_CAP) {
      emitWorktreeDivergenceWarning(basePath, name, {
        commitsAhead,
        commitsBehind,
        threshold: WORKTREE_DIVERGENCE_CAP,
        autoRebase: opts.autoRebase ?? false,
      });
      logWarning(
        "worktree",
        `worktree ${name} is ${commitsBehind} commits behind ${mainBranch} ` +
          `(threshold: ${WORKTREE_DIVERGENCE_CAP}). Proceeding with merge.`,
      );
    }
  } catch {
    // Divergence check is best-effort; never block the merge on telemetry.
  }

  // ─── Optional auto-rebase ──────────────────────────────────────────────
  if (opts.autoRebase) {
    try {
      execFileSync("git", ["rebase", mainBranch], {
        cwd: wtPath,
        stdio: ["ignore", "pipe", "pipe"],
        encoding: "utf-8",
        env: { ...process.env, GIT_TERMINAL_PROMPT: "0" },
      });
    } catch {
      // Rebase hit conflicts — abort and propagate as a merge conflict so
      // the caller's worktree-merge-failed flow handles it.
      try {
        nativeRebaseAbort(wtPath);
      } catch {
        // best-effort abort
      }
      throw new SFError(
        SF_MERGE_CONFLICT,
        `Auto-rebase of ${branch} onto ${mainBranch} produced conflicts. ` +
          `Rebase aborted; worktree preserved for manual resolution.`,
      );
    }
  }

  const result = nativeMergeSquash(basePath, branch);
  if (!result.success) {
    throw new SFError(
@@ -120,9 +120,15 @@ export function emitWorktreeOrphaned(
   milestoneId: string,
   meta: {
     flowId?: string;
-    reason: "in-progress-unmerged" | "complete-unmerged" | "stale-branch";
+    reason:
+      | "in-progress-unmerged"
+      | "complete-unmerged"
+      | "stale-branch"
+      | "resumable"
+      | "broken-pruned";
     commitsAhead?: number;
     worktreeDirExists?: boolean;
+    detail?: string;
   },
 ): void {
   emitJournalEvent(
@@ -133,6 +139,7 @@ export function emitWorktreeOrphaned(
       reason: meta.reason,
       commitsAhead: meta.commitsAhead,
       worktreeDirExists: meta.worktreeDirExists ?? false,
+      detail: meta.detail,
       detectedAt: now(),
     }),
   );
@@ -252,6 +259,33 @@ export function emitMilestoneResquash(
   );
 }
 
+// #6 — divergence cap enforcement
+
+export function emitWorktreeDivergenceWarning(
+  projectRoot: string,
+  worktreeId: string,
+  meta: {
+    commitsAhead: number;
+    commitsBehind: number;
+    threshold: number;
+    autoRebase: boolean;
+    flowId?: string;
+  },
+): void {
+  emitJournalEvent(
+    projectRoot,
+    baseEntry("worktree-divergence-warning", {
+      worktreeId,
+      commitsAhead: meta.commitsAhead,
+      commitsBehind: meta.commitsBehind,
+      threshold: meta.threshold,
+      autoRebase: meta.autoRebase,
+      detectedAt: now(),
+      flowId: meta.flowId,
+    }),
+  );
+}
+
 // ─── Aggregator ──────────────────────────────────────────────────────────
 
 export interface WorktreeTelemetrySummary {
@@ -41,6 +41,8 @@ const EXPECTED_BUILTIN_OUTCOMES = new Map<string, "rpc" | "surface" | "reject">(
     ["thinking", "surface"],
     ["edit-mode", "reject"],
     ["terminal", "reject"],
     ["stop", "reject"],
+    ["exit", "reject"],
+    ["quit", "reject"],
   ],
 );
@@ -58,6 +60,8 @@ const DEFERRED_BROWSER_REJECTS = [
   "reload",
   "edit-mode",
   "terminal",
   "stop",
+  "exit",
+  "quit",
 ] as const;
 
@@ -193,17 +197,17 @@ test("registered SF command roots stay on the prompt/extension path", async () =
   const registeredRoots = await collectRegisteredSfCommandRoots();
   assert.deepEqual(
     registeredRoots,
-    ["exit", "sf", "kill", "worktree", "wt"],
+    ["exit", "kill", "sf", "worktree", "wt"],
     "browser parity contract only expects the current SF command roots",
   );
 
   // Non-sf roots are extension commands that pass through to the bridge.
   // Derived dynamically so adding a new registration fails this assertion loudly.
-  const nonSfRoots = registeredRoots.filter((r) => r !== "sf");
+  const nonSfRoots = registeredRoots.filter((r) => r !== "sf" && r !== "exit");
   assert.equal(
     nonSfRoots.length,
-    4,
-    "expected exactly 4 non-sf passthrough roots; update this count when adding registrations",
+    3,
+    "expected exactly 3 non-sf passthrough roots; update this count when adding registrations",
   );
   for (const root of nonSfRoots) {
     assertPromptPassthrough(`/${root}`);
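The divergence-cap guard in the first hunk above is deliberately side-effect-free apart from telemetry: it never blocks the merge, it only decides whether to emit a warning. A minimal standalone sketch of that decision, with an assumed cap of 50 and a hypothetical `divergenceWarning` helper (the shipped code checks inline and emits a journal event instead of returning a payload):

```typescript
// Sketch of the divergence-cap decision (#6). WORKTREE_DIVERGENCE_CAP = 50
// and the divergenceWarning helper are illustrative assumptions, not the
// shipped implementation.
const WORKTREE_DIVERGENCE_CAP = 50;

interface Divergence {
  commitsAhead: number;
  commitsBehind: number;
}

type DivergenceWarning = Divergence & { threshold: number; autoRebase: boolean };

// Pure decision: warn only when the worktree trails main past the cap.
// Mirrors the diff's behavior of never blocking the merge on this check.
function divergenceWarning(
  d: Divergence,
  autoRebase: boolean,
): DivergenceWarning | null {
  if (d.commitsBehind <= WORKTREE_DIVERGENCE_CAP) {
    return null;
  }
  return { ...d, threshold: WORKTREE_DIVERGENCE_CAP, autoRebase };
}

console.log(divergenceWarning({ commitsAhead: 2, commitsBehind: 10 }, false)); // null
console.log(divergenceWarning({ commitsAhead: 2, commitsBehind: 75 }, true));
```

Keeping the threshold comparison pure like this makes it trivially testable, while the surrounding try/catch in the real merge path ensures a telemetry failure can never abort the merge itself.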