test(sf): fix zombie-cleanup test pollution from sibling-stop changes

Adding the new "cancelled" worker state in 1fdaae5c7 didn't itself break
the test, but the existing afterEach hooks (placed inside each test body)
weren't reliably resetting the orchestrator singleton between runs.
The M002 worker left over from test #2 was leaking into test #3,
breaking the "all cached workers in error state" assertion.

Add a top-level beforeEach that always resets the orchestrator before
each test so the shared module-level state can't leak across the file.
afterEach blocks remain for tmpdir cleanup.
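
The failure mode can be sketched as follows (a minimal illustration: the
`state` shape and the M002 entry are simplified stand-ins, not the real
parallel-orchestrator.ts internals):

```typescript
// Hypothetical sketch of module-level singleton leakage between tests.
// `state` and `resetOrchestrator` mirror the names in
// parallel-orchestrator.ts; the shapes here are illustrative only.
type WorkerState = "running" | "error" | "stopped" | "cancelled";

// Module-level singleton: shared by every test in the file.
let state: { workers: Map<string, WorkerState> } = { workers: new Map() };

function resetOrchestrator(): void {
  state = { workers: new Map() };
}

// Test #2 leaves a worker behind...
state.workers.set("M002", "stopped");

// ...so without a reset, test #3 would see the leftover M002 entry
// alongside its own workers.
const leakedBefore = state.workers.size;

// A top-level beforeEach(resetOrchestrator) guarantees a clean slate:
resetOrchestrator();
const leakedAfter = state.workers.size;

console.log(leakedBefore, leakedAfter); // 1 0
```
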

All 4 tests now pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mikael Hugo 2026-05-02 10:48:45 +02:00
parent 1fdaae5c77
commit 75a4f35ea5


@@ -13,7 +13,7 @@ import { randomUUID } from "node:crypto";
import { existsSync, mkdirSync, rmSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { test, afterEach } from 'vitest';
import { test, afterEach, beforeEach } from 'vitest';
import {
getOrchestratorState,
@@ -73,14 +73,16 @@ function writeSessionStatusFile(
// killable by this process, but 2147483647 is unlikely to exist.
const _DEAD_PID = 2147483647;
// Reset module-level orchestrator state before every test so the shared
// `state` singleton in parallel-orchestrator.ts can't leak across tests.
beforeEach(() => {
resetOrchestrator();
});
// ─── refreshWorkerStatuses: deactivates when all workers dead ──────────
test("#2736: refreshWorkerStatuses deactivates orchestrator when all workers are error/stopped", (t) => {
const base = makeTmpBase();
afterEach(() => {
resetOrchestrator();
cleanup(base);
});
// Seed persisted state with two workers using current PID (alive) so
// restoreState() accepts them, then immediately mark them as error via