feat: promote schedule and self-feedback state to db

Mikael Hugo 2026-05-07 05:34:42 +02:00
parent cd5926a17a
commit 5c32d91124
17 changed files with 544 additions and 191 deletions


@@ -219,7 +219,7 @@ See [`docs/plans/README.md`](docs/plans/README.md), [`docs/adr/README.md`](docs/
## SF Schedule
The SF schedule system (`/sf schedule`) stores time-bound reminders in `.sf/schedule.jsonl` as versioned append-only JSONL. Items surface on their due date via pull queries at launch and auto-mode boundaries — there is no background daemon.
The SF schedule system (`/sf schedule`) stores project time-bound reminders in the repo SQLite DB (`.sf/sf.db`, `schedule_entries`) and global reminders in `~/.sf/sf.db`. Legacy `.sf/schedule.jsonl` rows are import-only compatibility input when a project has no schedule rows yet. Items surface on their due date via pull queries at launch and auto-mode boundaries — there is no background daemon.
**When to use `sf schedule` vs backlog:**
- **`sf schedule`** — time-bound items that must surface at a future date: a 2-week adoption review after shipping a feature, a 1-month audit of an architectural decision, a 30-minute reminder to run a command. Use when the *timing* matters, not just the *priority*.


@@ -69,7 +69,7 @@ All directory variables are optional and have sensible defaults:
- `SF_WORKSPACE_BASE` (default: `SF_STATE_DIR/workspace`) — User workspaces
- `SF_HISTORY_BASE` (default: `SF_STATE_DIR/history`) — Session history
- `SF_NOTIFICATIONS_BASE` (default: `SF_STATE_DIR/notifications`) — Notifications
- `SF_SCHEDULE_FILE` (default: `SF_STATE_DIR/schedule.jsonl`) — Versioned schedule queue
- `SF_SCHEDULE_FILE` (legacy import only; default: `SF_STATE_DIR/schedule.jsonl`) — pre-DB schedule queue compatibility input
- `SF_RECOVERY_BASE` (default: `SF_STATE_DIR/recovery`) — Recovery artifacts
- `SF_FORENSICS_BASE` (default: `SF_STATE_DIR/forensics`) — Diagnostics
- `SF_SETTINGS_BASE` (default: `SF_STATE_DIR/settings`) — User settings


@@ -27,7 +27,7 @@ Option 3 (pull-based) is what we adopted.
The SF schedule system is **pull-based**:
- Schedule entries are stored as versioned append-only JSONL in `.sf/schedule.jsonl` (project) or `~/.sf/schedule.jsonl` (global). Rows without `schemaVersion` are treated as legacy version 1 by the current reader.
- Schedule entries are stored in SQLite (`schedule_entries`). Legacy `.sf/schedule.jsonl` rows are import-only compatibility input, and rows without `schemaVersion` are treated as legacy version 1 by the current reader.
- There is no background daemon or timer process.
- Entries are queried ("pulled") at defined integration points:
1. **Launch** — `loader.ts` calls `findDue()` and prints a banner if items are overdue
@@ -43,7 +43,7 @@ The SF schedule system is **pull-based**:
- **Portable** — works identically on Linux, macOS, and Windows without platform-specific code
- **Simple** — no process management, no signal handlers, no daemon lifecycle
- **Auditable** — the JSONL file is a complete, append-only audit trail of all schedule operations
- **Auditable** — the DB ledger preserves append-style schedule operations
- **Resilient** — no fire-and-forget timer that might miss if the process is restarted
- **Stateless** — fits SF's session model: fresh context per unit, no in-memory state
@@ -59,7 +59,7 @@ These limitations are accepted trade-offs for the portability and simplicity ben
## Implementation Notes
- `schedule-store.js` — versioned append-only JSONL store with `findDue()` and `findUpcoming()` queries
- `schedule-store.js` — DB-primary store with `findDue()` and `findUpcoming()` queries plus legacy JSONL import
- `loader.ts` — calls `findDue()` on both scopes at startup; prints banner if any items are due
- `headless-query.ts` — populates `schedule: { due, upcoming }` in `QuerySnapshot`
- `sf schedule` CLI — add, list, done, cancel, snooze, run subcommands
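The pull model above can be sketched as a pure filter over the latest entries. This is an illustrative, self-contained toy, not the real DB-backed `schedule-store.js`; the `status` and `due_at` fields follow the documented schema, and the exact `findDue()` signature is an assumption.

```javascript
// Illustrative sketch of the launch-time pull: given the latest entry per
// ID, an item is "due" when it is still pending and its due_at is at or
// before now. No daemon fires it; the caller pulls at integration points.
function findDue(entries, now = new Date()) {
  return entries.filter(
    (e) => e.status === "pending" && new Date(e.due_at) <= now,
  );
}

function findUpcoming(entries, now = new Date()) {
  return entries.filter(
    (e) => e.status === "pending" && new Date(e.due_at) > now,
  );
}

// Example: one overdue reminder, one future reminder, one already done.
const entries = [
  { id: "a", status: "pending", due_at: "2020-01-01T00:00:00.000Z" },
  { id: "b", status: "pending", due_at: "2099-01-01T00:00:00.000Z" },
  { id: "c", status: "done", due_at: "2020-01-01T00:00:00.000Z" },
];
console.log(findDue(entries).map((e) => e.id)); // only "a" is due
console.log(findUpcoming(entries).map((e) => e.id)); // only "b" is upcoming
```

Because the filter runs on read, a restart between due time and launch loses nothing: the entry is still pending in storage and surfaces on the next pull.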


@@ -8,7 +8,7 @@
## Overview
The SF schedule system provides time-based reminders and deferred work items that surface at a future date. Entries are stored as versioned append-only JSONL and queried on demand (pull-based), not fired by a daemon or cron job. This makes the system portable, auditable, and free of background processes.
The SF schedule system provides time-based reminders and deferred work items that surface at a future date. Entries are stored in SQLite (`schedule_entries`) and queried on demand (pull-based), not fired by a daemon or cron job. This makes the system portable, auditable, and free of background processes.
Use `sf schedule` when something needs to happen at a specific future time but cannot (or should not) happen immediately:
@@ -34,28 +34,29 @@ This means: if an item is scheduled for 3 AM and you open SF at 9 AM, you will s
Schedule entries use [ULID](https://github.com/ulid/spec) (Universally Unique Lexicographically Sortable Identifier) instead of UUID. ULIDs are:
- 28 characters, Crockford Base32 encoded
- Lexicographically sortable by creation time (useful for JSONL ordering)
- Lexicographically sortable by creation time (useful for schedule ordering)
- Unique enough to avoid collisions across concurrent appends
- Monotonic within a single millisecond via a sub-millisecond counter
The `generateULID()` function in `schedule-ulid.js` is used for all new entries.
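A minimal sketch of this ULID layout (the repo's e2e test embeds essentially the same generator): a fixed `01` prefix, a 10-character Crockford Base32 timestamp, and a 16-character random suffix, for 28 characters total. The sub-millisecond monotonic counter is omitted here, and `randomBits80()` is a simplified stand-in for the cryptographic randomness the real implementation uses.

```javascript
// Sketch of the 28-char ULID layout: "01" + 10-char base32 timestamp
// + 16-char base32 randomness. Omits the monotonic counter used by the
// real generateULID() in schedule-ulid.js.
const CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

// Encode a BigInt as fixed-width Crockford Base32 (5 bits per character).
function encodeBase32(value, length) {
  let result = "";
  for (let i = 0; i < length; i++) {
    result = CROCKFORD[Number(value & 0x1fn)] + result;
    value >>= 5n;
  }
  return result;
}

// 80 random bits built from Math.random (simplified; not crypto-grade).
function randomBits80() {
  let value = 0n;
  for (let i = 0; i < 5; i++) {
    value = (value << 16n) | BigInt(Math.floor(Math.random() * 0x10000));
  }
  return value;
}

function generateULID() {
  return (
    "01" +
    encodeBase32(BigInt(Date.now()), 10) + // sortable timestamp part
    encodeBase32(randomBits80(), 16) // collision-avoidance part
  );
}

console.log(generateULID().length); // → 28
```

Fixed-width Base32 is what makes the IDs lexicographically sortable: a later `Date.now()` always encodes to a string that compares greater.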
### Versioned Append-Only JSONL
### DB-Primary Ledger
Each write appends a schema-versioned JSON line to `schedule.jsonl`. The latest entry per ID wins on read (via `created_at` comparison). This means status transitions (`pending` → `done`, `cancelled`, `snoozed`) are implemented as new entries, not mutations. The file is never rewritten — only appended to.
Each write appends a row to `schedule_entries`. The latest row per ID wins on read. This means status transitions (`pending` → `done`, `cancelled`, `snoozed`) are implemented as ledger entries, not in-place mutations.
Rows without `schemaVersion` are treated as legacy version 1. Unsupported future schema versions are ignored by the current reader. Corrupt lines are skipped with a warning, never fatal.
Legacy `schedule.jsonl` files are import-only compatibility inputs. Rows without `schemaVersion` are treated as legacy version 1. Unsupported future schema versions are ignored by the current reader. Corrupt lines are skipped with a warning, never fatal.
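The read-side rules above can be sketched as a tolerant fold over ledger lines. This is an illustrative model, assuming ISO-8601 UTC `created_at` strings (so lexicographic comparison matches chronological order); the real reader lives in `schedule-store.js` and `sf-db.js`.

```javascript
// Illustrative model of the read semantics: corrupt lines are skipped
// with a warning, missing schemaVersion means legacy version 1, future
// schema versions are ignored, and the latest entry per ID wins.
const SUPPORTED_SCHEMA_VERSION = 1;

function parseLedgerLines(lines) {
  const byId = new Map();
  for (const line of lines) {
    let row;
    try {
      row = JSON.parse(line);
    } catch {
      console.warn("schedule: skipping corrupt line"); // never fatal
      continue;
    }
    const version = row.schemaVersion ?? 1; // legacy rows default to v1
    if (version > SUPPORTED_SCHEMA_VERSION) continue; // ignore future versions
    const prev = byId.get(row.id);
    // ISO-8601 UTC strings compare lexicographically in time order.
    if (!prev || row.created_at >= prev.created_at) {
      byId.set(row.id, row); // latest entry per ID wins
    }
  }
  return Array.from(byId.values());
}

const lines = [
  '{"id":"x","status":"pending","created_at":"2026-01-01T00:00:00Z"}',
  "not json at all",
  '{"id":"x","status":"done","created_at":"2026-02-01T00:00:00Z"}',
];
console.log(parseLedgerLines(lines)); // one entry: id "x", status "done"
```

The same reduction holds for the DB: the `getScheduleEntries` query selects the max-`seq` row per ID instead of folding in application code.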
---
## Storage Format
### File Locations
### Storage Locations
| Scope | Path |
|-------|------|
| `project` | `<basePath>/.sf/schedule.jsonl` |
| `global` | `~/.sf/schedule.jsonl` |
| `project` | `<basePath>/.sf/sf.db` |
| `global` | `~/.sf/sf.db` with `scope = 'global'` |
| legacy import | `<basePath>/.sf/schedule.jsonl` or `~/.sf/schedule.jsonl` |
### Schema
@@ -74,7 +75,7 @@ Rows without `schemaVersion` are treated as legacy version 1. Unsupported future
}
```
### JSONL Line Example
### Legacy JSONL Line Example
```
{"schemaVersion":1,"id":"01ARZ3NDEKTSV4RRFFQ69G5FAV","kind":"reminder","status":"pending","due_at":"2026-06-15T09:00:00.000Z","created_at":"2026-05-15T09:00:00.000Z","payload":{"message":"Review adoption metrics"},"created_by":"user","auto_dispatch":false}


@@ -2,8 +2,8 @@
* SF Command /sf schedule
*
* Schedule management: add, list, done, cancel, snooze, run.
* Entries stored as versioned append-only JSONL in .sf/schedule.jsonl (project)
* or ~/.sf/schedule.jsonl (global).
* Entries are stored in SQLite (`schedule_entries`). Legacy schedule JSONL is
* imported on first read when the DB has no schedule rows.
*/
import {


@@ -30,6 +30,16 @@ const CATEGORY_PRIORITY = {
environment: 4,
preference: 5,
};
function safeJsonArray(raw) {
try {
const parsed = JSON.parse(raw);
return Array.isArray(parsed)
? parsed.filter((t) => typeof t === "string")
: [];
} catch {
return [];
}
}
// ─── Row Mapping ────────────────────────────────────────────────────────────
function rowToMemory(row) {
return {
@@ -44,6 +54,7 @@ function rowToMemory(row) {
updated_at: row["updated_at"],
superseded_by: row["superseded_by"] ?? null,
hit_count: row["hit_count"],
tags: safeJsonArray(row["tags"]),
};
}
// ─── Query Functions ────────────────────────────────────────────────────────
@@ -240,6 +251,7 @@ export function createMemory(fields) {
sourceUnitId: fields.source_unit_id ?? null,
createdAt: now,
updatedAt: now,
tags: fields.tags,
});
// Derive the real ID from the assigned seq (SELECT is still fine via adapter)
const row = adapter


@@ -1,23 +1,21 @@
/**
* Schedule Store — versioned append-only JSONL persistence for scheduled entries.
* Schedule Store — DB-primary persistence for scheduled entries.
*
* Purpose: provide durable, queryable storage for schedule entries with
* status-grouping semantics (latest entry per ID wins) and time-based queries.
*
* Consumer: schedule CLI commands (S02), auto-dispatch reminders, and UI overlays.
*/
import {
appendFileSync,
closeSync,
existsSync,
mkdirSync,
openSync,
readFileSync,
} from "node:fs";
import { existsSync, mkdirSync, readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";
import { withFileLockSync } from "../file-lock.js";
import { sfRuntimeRoot } from "../paths.js";
import { sfRoot } from "../paths.js";
import {
countScheduleEntries,
getScheduleEntries,
insertScheduleEntry,
openDatabase,
} from "../sf-db.js";
// ─── Constants ──────────────────────────────────────────────────────────────
@@ -81,7 +79,7 @@ function _resolvePath(basePath, scope) {
if (scope === "global") {
return join(_sfHome, FILENAME);
}
return join(sfRuntimeRoot(basePath), FILENAME);
return join(sfRoot(basePath), FILENAME);
}
/**
@@ -90,24 +88,10 @@ function _resolvePath(basePath, scope) {
* @param {import("./schedule-types.js").ScheduleEntry} entry
*/
function _appendEntry(basePath, scope, entry) {
const filePath = _resolvePath(basePath, scope);
const dir = filePath.slice(0, filePath.lastIndexOf("/"));
mkdirSync(dir, { recursive: true });
// Ensure file exists so proper-lockfile can acquire a lock against it.
if (!existsSync(filePath)) {
closeSync(openSync(filePath, "a"));
}
withFileLockSync(filePath, () => {
appendFileSync(
filePath,
JSON.stringify({
schemaVersion: SCHEDULE_SCHEMA_VERSION,
...entry,
}) + "\n",
"utf-8",
);
ensureScheduleDb(basePath, scope);
insertScheduleEntry(scope, {
schemaVersion: SCHEDULE_SCHEMA_VERSION,
...entry,
});
}
@@ -120,16 +104,26 @@ function _appendEntry(basePath, scope, entry) {
* @returns {import("./schedule-types.js").ScheduleEntry[]}
*/
function _readEntries(basePath, scope) {
ensureScheduleDb(basePath, scope);
if (countScheduleEntries(scope) > 0) {
return getScheduleEntries(scope);
}
importLegacyScheduleFile(basePath, scope);
return getScheduleEntries(scope);
}
function importLegacyScheduleFile(basePath, scope) {
const filePath = _resolvePath(basePath, scope);
if (!existsSync(filePath)) {
return [];
return;
}
let raw;
try {
raw = readFileSync(filePath, "utf-8");
} catch {
return [];
return;
}
/** @type {Map<string, import("./schedule-types.js").ScheduleEntry>} */
@@ -158,7 +152,20 @@
);
}
return Array.from(byId.values());
for (const entry of byId.values()) {
insertScheduleEntry(scope, entry, filePath);
}
}
function scheduleDbDir(basePath, scope) {
if (scope === "global") return _sfHome;
return sfRoot(basePath);
}
function ensureScheduleDb(basePath, scope) {
const dir = scheduleDbDir(basePath, scope);
mkdirSync(dir, { recursive: true });
openDatabase(join(dir, "sf.db"));
}
function normalizeScheduleEntry(entry) {


@@ -11,8 +11,8 @@
/**
* @typedef {("project"|"global")} ScheduleScope
* project entries stored in `<basePath>/.sf/schedule.jsonl`
* global entries stored in `~/.sf/schedule.jsonl`
* project entries stored in `<basePath>/.sf/sf.db` (`schedule_entries`)
* global entries stored in `~/.sf/sf.db` (`schedule_entries`)
*/
/**


@@ -15,8 +15,10 @@
* 4. Apply fix, test, and mark self-report resolved
*/
import { createHash } from "node:crypto";
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { addBacklogItem, isDbAvailable, listBacklogItems } from "./sf-db.js";
/**
* Recognizable fix patterns in self-reports.
@@ -98,6 +100,60 @@ function inferSeverity(report) {
return "medium";
}
function severityRank(severity) {
switch (severity) {
case "critical":
return 4;
case "high":
return 3;
case "medium":
return 2;
case "low":
return 1;
default:
return 0;
}
}
function stableBacklogId(clusterKey) {
const digest = createHash("sha256").update(clusterKey).digest("hex");
return `self-feedback.${digest.slice(0, 12)}`;
}
function summarizeReport(report) {
return (
report?.summary ||
report?.title ||
report?.issue ||
report?.message ||
report?.description ||
"self-feedback issue"
)
.replace(/\s+/g, " ")
.trim();
}
function clusterTitle(cluster) {
const first = cluster.reports[0] ?? {};
const summary = summarizeReport(first);
return `Self-feedback: ${summary.slice(0, 140)}`;
}
function clusterNote(cluster) {
const ids = cluster.reports.map((report) => report.id).filter(Boolean);
const severities = Array.from(
new Set(cluster.reports.map((report) => inferSeverity(report))),
).join(", ");
return [
`triaged self-feedback cluster ${cluster.key}`,
`reports=${cluster.reports.length}`,
severities ? `severity=${severities}` : "",
ids.length > 0 ? `ids=${ids.slice(0, 8).join(",")}` : "",
]
.filter(Boolean)
.join("; ");
}
/**
* Attempt to fix: Add explicit rubric to validation-reviewer prompt.
*
@@ -353,6 +409,52 @@ export function generateTriageSummary(reports) {
};
}
/**
* Promote unresolved self-feedback clusters into durable DB backlog items.
*
* Purpose: close the self-feedback loop by giving autonomous dispatch a
* queryable work item for repeated warnings/blockers instead of leaving them as
* markdown-only observations.
*
* Consumer: triage-self-feedback after parsing reports and startup/doctor
* maintenance that wants deterministic backlog promotion.
*/
export function promoteSelfReportsToBacklog(
_basePath,
reports = [],
options = {},
) {
if (!isDbAvailable()) {
return { promoted: [], updated: [], skipped: ["db-unavailable"] };
}
const minSeverity = options.minSeverity ?? "medium";
const minRank = severityRank(minSeverity);
const openReports = reports.filter((report) => !report.resolvedAt);
const eligible = openReports.filter(
(report) => severityRank(inferSeverity(report)) >= minRank,
);
const clusters = dedupReports(eligible);
const existingIds = new Set(listBacklogItems().map((item) => item.id));
const promoted = [];
const updated = [];
for (const cluster of clusters) {
const id = stableBacklogId(cluster.key);
addBacklogItem({
id,
title: clusterTitle(cluster),
status: "pending",
note: clusterNote(cluster),
source: "self-feedback-triage",
triageRunId: options.triageRunId ?? null,
});
if (existingIds.has(id)) updated.push(id);
else promoted.push(id);
}
return { promoted, updated, skipped: [] };
}
export default {
FIX_PATTERNS,
classifyReportFixes,
@@ -360,4 +462,5 @@ export default {
dedupReports,
categorizeBySeverity,
generateTriageSummary,
promoteSelfReportsToBacklog,
};


@@ -78,7 +78,7 @@ function openRawDb(path) {
loadProvider();
return new DatabaseSync(path);
}
const SCHEMA_VERSION = 36;
const SCHEMA_VERSION = 38;
function indexExists(db, name) {
return !!db
.prepare(
@@ -159,6 +159,32 @@ function ensureBacklogTables(db) {
"CREATE INDEX IF NOT EXISTS idx_backlog_items_status_sequence ON backlog_items(status, sequence, id)",
);
}
function ensureScheduleTables(db) {
db.exec(`
CREATE TABLE IF NOT EXISTS schedule_entries (
seq INTEGER PRIMARY KEY AUTOINCREMENT,
scope TEXT NOT NULL DEFAULT 'project',
id TEXT NOT NULL,
schema_version INTEGER NOT NULL DEFAULT 1,
kind TEXT NOT NULL DEFAULT 'reminder',
status TEXT NOT NULL DEFAULT 'pending',
due_at TEXT NOT NULL DEFAULT '',
created_at TEXT NOT NULL DEFAULT '',
snoozed_at TEXT DEFAULT NULL,
payload_json TEXT NOT NULL DEFAULT '{}',
created_by TEXT NOT NULL DEFAULT 'user',
auto_dispatch INTEGER NOT NULL DEFAULT 0,
full_json TEXT NOT NULL DEFAULT '{}',
imported_from TEXT DEFAULT NULL
)
`);
db.exec(
"CREATE INDEX IF NOT EXISTS idx_schedule_entries_scope_id_created ON schedule_entries(scope, id, created_at DESC, seq DESC)",
);
db.exec(
"CREATE INDEX IF NOT EXISTS idx_schedule_entries_scope_due ON schedule_entries(scope, status, due_at)",
);
}
function ensureSolverEvalTables(db) {
db.exec(`
CREATE TABLE IF NOT EXISTS solver_eval_runs (
@@ -493,7 +519,8 @@ function initSchema(db, fileBacked) {
created_at TEXT NOT NULL,
updated_at TEXT NOT NULL,
superseded_by TEXT DEFAULT NULL,
hit_count INTEGER NOT NULL DEFAULT 0
hit_count INTEGER NOT NULL DEFAULT 0,
tags TEXT NOT NULL DEFAULT '[]'
)
`);
db.exec(`
@@ -884,6 +911,7 @@ function initSchema(db, fileBacked) {
);
ensureRepoProfileTables(db);
ensureBacklogTables(db);
ensureScheduleTables(db);
ensureSolverEvalTables(db);
ensureHeadlessRunTables(db);
ensureUokMessageTables(db);
@@ -1994,6 +2022,30 @@ function migrateSchema(db) {
":applied_at": new Date().toISOString(),
});
}
if (currentVersion < 37) {
ensureScheduleTables(db);
db.prepare(
"INSERT INTO schema_version (version, applied_at) VALUES (:version, :applied_at)",
).run({
":version": 37,
":applied_at": new Date().toISOString(),
});
}
if (currentVersion < 38) {
try {
db.exec(
"ALTER TABLE memories ADD COLUMN tags TEXT NOT NULL DEFAULT '[]'",
);
} catch {
// Column may already exist on fresh DBs
}
db.prepare(
"INSERT INTO schema_version (version, applied_at) VALUES (:version, :applied_at)",
).run({
":version": 38,
":applied_at": new Date().toISOString(),
});
}
db.exec("COMMIT");
} catch (err) {
db.exec("ROLLBACK");
@@ -5177,6 +5229,111 @@ export function getUokMessageBusMetrics() {
};
}
}
function normalizeScheduleScope(scope) {
return scope === "global" ? "global" : "project";
}
function scheduleEntryFromRow(row) {
if (!row) return null;
const full = parseJsonObject(row.full_json, {});
return {
...full,
schemaVersion: row.schema_version ?? full.schemaVersion ?? 1,
id: row.id,
kind: row.kind,
status: row.status,
due_at: row.due_at,
created_at: row.created_at,
snoozed_at: row.snoozed_at ?? full.snoozed_at,
payload: parseJsonObject(row.payload_json, full.payload ?? {}),
created_by: row.created_by,
auto_dispatch: !!row.auto_dispatch,
};
}
/**
* Append a schedule entry to the DB-backed schedule ledger.
*
* Purpose: keep time-bound reminders in structured SQLite state so status,
* due-date, and scope queries are schema-owned instead of JSONL-owned.
*
* Consumer: schedule-store.js for /sf schedule and launch/auto due-item checks.
*/
export function insertScheduleEntry(scope, entry, importedFrom = null) {
if (!currentDb) return;
const normalizedScope = normalizeScheduleScope(scope);
const schemaVersion = entry.schemaVersion ?? 1;
const full = { schemaVersion, ...entry };
currentDb
.prepare(
`INSERT INTO schedule_entries (
scope, id, schema_version, kind, status, due_at, created_at,
snoozed_at, payload_json, created_by, auto_dispatch, full_json,
imported_from
) VALUES (
:scope, :id, :schema_version, :kind, :status, :due_at, :created_at,
:snoozed_at, :payload_json, :created_by, :auto_dispatch, :full_json,
:imported_from
)`,
)
.run({
":scope": normalizedScope,
":id": entry.id,
":schema_version": schemaVersion,
":kind": entry.kind ?? "reminder",
":status": entry.status ?? "pending",
":due_at": entry.due_at ?? "",
":created_at": entry.created_at ?? "",
":snoozed_at": entry.snoozed_at ?? null,
":payload_json": JSON.stringify(entry.payload ?? {}),
":created_by": entry.created_by ?? "user",
":auto_dispatch": entry.auto_dispatch ? 1 : 0,
":full_json": JSON.stringify(full),
":imported_from": importedFrom,
});
}
/**
* Return latest schedule entries per id for a scope.
*
* Purpose: preserve append-ledger semantics while serving queries from SQLite.
*
* Consumer: schedule-store.js readEntries/findDue/findUpcoming.
*/
export function getScheduleEntries(scope) {
if (!currentDb) return [];
const normalizedScope = normalizeScheduleScope(scope);
try {
const rows = currentDb
.prepare(
`SELECT s.*
FROM schedule_entries s
JOIN (
SELECT id, MAX(seq) AS max_seq
FROM schedule_entries
WHERE scope = :scope
GROUP BY id
) latest ON latest.id = s.id AND latest.max_seq = s.seq
WHERE s.scope = :scope
ORDER BY s.due_at ASC, s.created_at ASC, s.seq ASC`,
)
.all({ ":scope": normalizedScope });
return rows.map(scheduleEntryFromRow).filter(Boolean);
} catch {
return [];
}
}
export function countScheduleEntries(scope) {
if (!currentDb) return 0;
const normalizedScope = normalizeScheduleScope(scope);
try {
const row = currentDb
.prepare(
"SELECT COUNT(*) AS cnt FROM schedule_entries WHERE scope = :scope",
)
.get({ ":scope": normalizedScope });
return row?.cnt ?? 0;
} catch {
return 0;
}
}
function asStringOrNull(value) {
return typeof value === "string" && value.length > 0 ? value : null;
}
@@ -5832,8 +5989,8 @@ export function bulkInsertLegacyHierarchy(payload) {
export function insertMemoryRow(args) {
if (!currentDb) throw new SFError(SF_STALE_STATE, "sf-db: No database open");
currentDb
.prepare(`INSERT INTO memories (id, category, content, confidence, source_unit_type, source_unit_id, created_at, updated_at)
VALUES (:id, :category, :content, :confidence, :source_unit_type, :source_unit_id, :created_at, :updated_at)`)
.prepare(`INSERT INTO memories (id, category, content, confidence, source_unit_type, source_unit_id, created_at, updated_at, tags)
VALUES (:id, :category, :content, :confidence, :source_unit_type, :source_unit_id, :created_at, :updated_at, :tags)`)
.run({
":id": args.id,
":category": args.category,
@@ -5843,6 +6000,7 @@ export function insertMemoryRow(args) {
":source_unit_id": args.sourceUnitId,
":created_at": args.createdAt,
":updated_at": args.updatedAt,
":tags": JSON.stringify(args.tags ?? []),
});
}
export function rewriteMemoryId(placeholderId, realId) {


@@ -27,6 +27,7 @@ import { emitJournalEvent, queryJournal } from "../journal.js";
import { appendJudgment, readJudgmentLog } from "../judgment-log.js";
import { ModelLearner } from "../model-learner.js";
import { createScheduleStore } from "../schedule/schedule-store.js";
import { closeDatabase } from "../sf-db.js";
import { buildAuditEnvelope, emitUokAuditEvent } from "../uok/audit.js";
import {
parseParityEvents,
@@ -43,6 +44,7 @@ import {
const tmpDirs = [];
afterEach(() => {
closeDatabase();
setLogBasePath(null);
_resetLogs();
while (tmpDirs.length > 0) {
@@ -80,13 +82,11 @@ function makeScheduleEntry(overrides = {}) {
}
describe("SF JSONL schema versioning", () => {
test("schedule_store_writes_schema_version_and_reads_legacy_rows", () => {
test("schedule_store_imports_legacy_jsonl_rows_as_version_1", () => {
const project = makeProject();
const store = createScheduleStore(project);
store.appendEntry("project", makeScheduleEntry());
const path = store._filePathForScope("project");
assert.equal(readJsonl(path)[0].schemaVersion, 1);
mkdirSync(path.slice(0, path.lastIndexOf("/")), { recursive: true });
writeFileSync(
path,


@@ -17,6 +17,9 @@ import { tmpdir } from "node:os";
import { dirname, join } from "node:path";
import { describe, expect, test, vi } from "vitest";
const NODE_VERSION = parseInt(process.version.slice(1).split(".")[0], 10);
const HAS_SQLITE = NODE_VERSION >= 24;
// ─── Helpers ───────────────────────────────────────────────────────────────
function makeTempDir(prefix) {
@@ -32,10 +35,10 @@ function cleanup(dir) {
// ─── json-persistence: fsync after rename (HIGH) ───────────────────────────
describe("saveJsonFile fsync", () => {
test("writes file that exists and is readable after save", () => {
test("writes file that exists and is readable after save", async () => {
const dir = makeTempDir("sf-json-test-");
const filePath = join(dir, "state.json");
const { saveJsonFile } = require("../json-persistence.js");
const { saveJsonFile } = await import("../json-persistence.js");
saveJsonFile(filePath, { foo: "bar" });
expect(existsSync(filePath)).toBe(true);
const raw = readFileSync(filePath, "utf-8");
@@ -44,12 +47,12 @@ describe("saveJsonFile fsync", () => {
cleanup(dir);
});
test("cleans up orphaned .tmp.* files before writing", () => {
test("cleans up orphaned .tmp.* files before writing", async () => {
const dir = makeTempDir("sf-json-test-");
const filePath = join(dir, "state.json");
// Create orphaned tmp file
writeFileSync(`${filePath}.tmp.deadbeef`, "orphan", "utf-8");
const { saveJsonFile } = require("../json-persistence.js");
const { saveJsonFile } = await import("../json-persistence.js");
saveJsonFile(filePath, { foo: "bar" });
expect(existsSync(`${filePath}.tmp.deadbeef`)).toBe(false);
cleanup(dir);
@@ -57,10 +60,10 @@ });
});
describe("writeJsonFileAtomic fsync", () => {
test("writes file atomically with correct content", () => {
test("writes file atomically with correct content", async () => {
const dir = makeTempDir("sf-json-test-");
const filePath = join(dir, "state.json");
const { writeJsonFileAtomic } = require("../json-persistence.js");
const { writeJsonFileAtomic } = await import("../json-persistence.js");
writeJsonFileAtomic(filePath, { baz: 42 });
expect(existsSync(filePath)).toBe(true);
const raw = readFileSync(filePath, "utf-8");
@@ -73,10 +76,10 @@ describe("writeJsonFileAtomic fsync", () => {
// ─── atomic-write: sleepSync guard (HIGH) ──────────────────────────────────
describe("sleepSync", () => {
test("sleepSync warns when called from main thread", () => {
test("sleepSync warns when called from main thread", async () => {
const warnSpy = vi.spyOn(console, "warn").mockImplementation(() => {});
// Import the module fresh to trigger the guard evaluation
const { atomicWriteSync } = require("../atomic-write.js");
const { atomicWriteSync } = await import("../atomic-write.js");
// atomicWriteSync calls sleepSync internally on rename retry;
// we trigger it by forcing a transient error scenario.
expect(() => atomicWriteSync).not.toThrow();
@ -85,8 +88,8 @@ describe("sleepSync", () => {
warnSpy.mockRestore();
});
test("sleepSync function exists and is callable", () => {
const { atomicWriteSync } = require("../atomic-write.js");
test("sleepSync function exists and is callable", async () => {
const { atomicWriteSync } = await import("../atomic-write.js");
expect(typeof atomicWriteSync).toBe("function");
});
});
@@ -94,44 +97,71 @@ describe("sleepSync", () => {
// ─── memory-extractor: apiKey resolved per invocation (MEDIUM) ─────────────
describe("buildMemoryLLMCall apiKey resolution", () => {
test("apiKey is resolved inside async body, not in closure", async () => {
const { buildMemoryLLMCall } = await import("../memory-extractor.js");
// buildMemoryLLMCall returns null when no models available in empty ctx
const ctx = {
modelRegistry: {
getAvailable: () => [],
},
};
const result = buildMemoryLLMCall(ctx);
expect(result).toBeNull();
});
test(
HAS_SQLITE
? "apiKey is resolved inside async body, not in closure"
: "apiKey is resolved inside async body, not in closure [SKIPPED: Node < 24]",
HAS_SQLITE
? async () => {
const { buildMemoryLLMCall } = await import("../memory-extractor.js");
// buildMemoryLLMCall returns null when no models available in empty ctx
const ctx = {
modelRegistry: {
getAvailable: () => [],
},
};
const result = buildMemoryLLMCall(ctx);
expect(result).toBeNull();
}
: () => {
// Skip: requires node:sqlite (Node 24+)
},
);
});
// ─── cache: invalidateAllCaches error isolation (MEDIUM) ───────────────────
describe("invalidateAllCaches", () => {
test("does not throw when individual cache clear fails", () => {
const { invalidateAllCaches } = require("../cache.js");
expect(() => invalidateAllCaches()).not.toThrow();
});
test(
HAS_SQLITE
? "does not throw when individual cache clear fails"
: "does not throw when individual cache clear fails [SKIPPED: Node < 24]",
HAS_SQLITE
? async () => {
const { invalidateAllCaches } = await import("../cache.js");
expect(() => invalidateAllCaches()).not.toThrow();
}
: () => {
// Skip: requires node:sqlite (Node 24+)
},
);
});
// ─── memory-store: rewriteMemoryId returns null on failure (MEDIUM) ────────
describe("createMemory", () => {
test("returns null when DB is unavailable", () => {
const { createMemory } = require("../memory-store.js");
// With no DB available, createMemory returns null
const result = createMemory({ category: "test", content: "hello" });
expect(result).toBeNull();
});
test(
HAS_SQLITE
? "returns null when DB is unavailable"
: "returns null when DB is unavailable [SKIPPED: Node < 24]",
HAS_SQLITE
? async () => {
const { createMemory } = await import("../memory-store.js");
// With no DB available, createMemory returns null
const result = createMemory({ category: "test", content: "hello" });
expect(result).toBeNull();
}
: () => {
// Skip: requires node:sqlite (Node 24+)
},
);
});
// ─── atomic-write: rename retry accumulates errors (MEDIUM) ────────────────
describe("atomicWriteSync error accumulation", () => {
test("throws error with attempt details on failure", () => {
const { atomicWriteSync } = require("../atomic-write.js");
test("throws error with attempt details on failure", async () => {
const { atomicWriteSync } = await import("../atomic-write.js");
const dir = makeTempDir("sf-atomic-test-");
const filePath = join(dir, "readonly", "file.txt");
// readonly parent directory causes write to fail
@@ -150,8 +180,8 @@ describe("atomicWriteSync error accumulation", () => {
// ─── context-injector: truncation documented (LOW) ─────────────────────────
describe("injectContext truncation", () => {
test("injectContext exists and is a function", () => {
const { injectContext } = require("../context-injector.js");
test("injectContext exists and is a function", async () => {
const { injectContext } = await import("../context-injector.js");
expect(typeof injectContext).toBe("function");
});
});
@@ -159,8 +189,8 @@ describe("injectContext truncation", () => {
// ─── definition-io: error includes path (LOW) ──────────────────────────────
describe("readFrozenDefinition error wrapping", () => {
test("throws error containing the defPath on missing file", () => {
const { readFrozenDefinition } = require("../definition-io.js");
test("throws error containing the defPath on missing file", async () => {
const { readFrozenDefinition } = await import("../definition-io.js");
const fakeDir = makeTempDir("sf-def-test-");
try {
readFrozenDefinition(fakeDir);
@@ -176,11 +206,10 @@ describe("readFrozenDefinition error wrapping", () => {
// ─── memory-sleeper: seenKeys bounded (LOW) ────────────────────────────────
describe("memory-sleeper seenKeys", () => {
test("resetMemorySleeper clears seenKeys", () => {
const {
resetMemorySleeper,
observeMemorySleeperToolResult,
} = require("../memory-sleeper.js");
test("resetMemorySleeper clears seenKeys", async () => {
const { resetMemorySleeper, observeMemorySleeperToolResult } = await import(
"../memory-sleeper.js"
);
resetMemorySleeper();
// After reset, the same event should be processed again
const result = observeMemorySleeperToolResult({


@@ -8,13 +8,13 @@
* Consumer: CI test runner (vitest).
*/
import assert from "node:assert/strict";
import { execFileSync } from "node:child_process";
import { mkdirSync, readFileSync, rmSync } from "node:fs";
import { mkdirSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { afterEach, beforeEach, describe, it } from "vitest";
import { createScheduleStore } from "../schedule/schedule-store.js";
import { generateULID } from "../schedule/schedule-ulid.js";
import { closeDatabase } from "../sf-db.js";
describe("schedule-e2e round-trip", () => {
/** @type {string} */
@@ -32,6 +32,7 @@ describe("schedule-e2e round-trip", () => {
});
afterEach(() => {
closeDatabase();
try {
rmSync(testDir, { recursive: true });
} catch {
@@ -201,79 +202,14 @@ describe("schedule-e2e round-trip", () => {
);
});
it("2 concurrent appends produce exactly 2 well-formed lines", () => {
// Pre-create the runtime directory so child processes don't race on mkdir.
const runtimeDir = join(testDir, ".sf", "runtime");
mkdirSync(runtimeDir, { recursive: true });
const scheduleFile = join(runtimeDir, "schedule.jsonl");
it("2 appends produce 2 DB-backed entries with unique IDs", () => {
const first = makeEntry({ due_at: "2020-01-01T00:00:00.000Z" });
const second = makeEntry({ due_at: "2020-01-01T00:00:00.000Z" });
store.appendEntry("project", first);
store.appendEntry("project", second);
// Inline child script: generates a ULID and appends one JSON line to the
// schedule file via OS-level O_APPEND. Uses CommonJS (no imports needed).
const childScript = [
"const fs = require('fs');",
"const path = require('path');",
"const crypto = require('crypto');",
"",
"const scheduleFile = process.env.SF_SCHEDULE_FILE;",
"const PREFIX = '01';",
"const CROCKFORD = '0123456789ABCDEFGHJKMNPQRSTVWXYZ';",
"",
"function encodeBase32(value, length) {",
" let result = '';",
" for (let i = 0; i < length; i++) {",
" result = CROCKFORD[Number(value & 0x1fn)] + result;",
" value = value >> 5n;",
" }",
" return result;",
"}",
"",
"function generateULID() {",
" const ts = Date.now();",
" const rand = BigInt('0x' + crypto.randomUUID().replace(/-/g, ''));",
" return PREFIX + encodeBase32(BigInt(ts), 10) + encodeBase32(rand & ((1n << 80n) - 1n), 16);",
"}",
"",
"const entry = {",
" id: generateULID(),",
" kind: 'reminder',",
" status: 'pending',",
" due_at: '2020-01-01T00:00:00.000Z',",
" created_at: new Date().toISOString(),",
" payload: { message: 'concurrent-test' },",
" created_by: 'user',",
"}",
"",
"// OS-level O_APPEND ensures each write is atomic.",
"fs.appendFileSync(scheduleFile, JSON.stringify(entry) + '\\n', 'utf-8');",
].join("\n");
// Spawn two OS-level child processes concurrently, each appending one line.
const childOpts = {
env: { ...process.env, SF_SCHEDULE_FILE: scheduleFile },
};
execFileSync(process.execPath, ["-e", childScript], childOpts);
execFileSync(process.execPath, ["-e", childScript], childOpts);
const raw = readFileSync(scheduleFile, "utf-8");
const lines = raw.split("\n").filter((l) => l.trim() !== "");
// Assert exactly 2 lines were written.
assert.equal(
lines.length,
2,
`Expected 2 lines, got ${lines.length}: ${raw}`,
);
// Both lines must be well-formed JSON.
const entries = lines.map((line, i) => {
try {
return JSON.parse(line);
} catch {
throw new Error(`Line ${i + 1} is not valid JSON: ${line}`);
}
});
// Both IDs must be unique.
const entries = store.readEntries("project");
assert.equal(entries.length, 2);
const ids = entries.map((e) => e.id);
assert.notEqual(ids[0], ids[1], "Expected two unique IDs");
});
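
The deleted child script above embedded a self-contained copy of the ULID scheme: a fixed `"01"` prefix, 10 Crockford base32 characters of millisecond timestamp, and 16 characters (80 bits) of randomness. For reference, that scheme can be sketched standalone — this mirrors the removed inline code, not necessarily the current `schedule-ulid.js` export:

```javascript
import { randomUUID } from "node:crypto";

const PREFIX = "01";
const CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

// Encode `value` into `length` Crockford base32 characters, 5 bits per char,
// most significant character first.
function encodeBase32(value, length) {
  let result = "";
  for (let i = 0; i < length; i++) {
    result = CROCKFORD[Number(value & 0x1fn)] + result;
    value = value >> 5n;
  }
  return result;
}

// 2-char prefix + 10 chars of timestamp + 16 chars of randomness = 28 chars.
function generateULID() {
  const ts = Date.now();
  // randomUUID() yields 128 hex-encoded random-ish bits; keep the low 80.
  const rand = BigInt("0x" + randomUUID().replace(/-/g, ""));
  return (
    PREFIX +
    encodeBase32(BigInt(ts), 10) +
    encodeBase32(rand & ((1n << 80n) - 1n), 16)
  );
}
```

Because the timestamp occupies the high-order characters, IDs generated later sort lexicographically after earlier ones, which is what makes ULIDs usable as sortable primary keys.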

View file

@@ -8,7 +8,7 @@
* Consumer: CI test runner (vitest).
*/
import assert from "node:assert/strict";
import { mkdirSync, readFileSync, rmSync, writeFileSync } from "node:fs";
import { existsSync, mkdirSync, rmSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { afterEach, beforeEach, describe, it } from "vitest";
@ -18,6 +18,7 @@ import {
} from "../schedule/schedule-store.js";
import { isValidKind } from "../schedule/schedule-types.js";
import { generateULID } from "../schedule/schedule-ulid.js";
import { closeDatabase } from "../sf-db.js";
describe("schedule-types", () => {
describe("isValidKind", () => {
@@ -96,6 +97,7 @@ describe("schedule-store", () => {
});
afterEach(() => {
closeDatabase();
try {
rmSync(testDir, { recursive: true });
} catch {
@@ -126,11 +128,11 @@
assert.equal(entries[0].id, entry.id);
});
it("creates the file and directory if missing", () => {
it("writes DB-first without creating a legacy JSONL file", () => {
const entry = makeEntry();
store.appendEntry("project", entry);
const filePath = store._filePathForScope("project");
assert.ok(readFileSync(filePath, "utf-8").includes(entry.id));
assert.equal(existsSync(filePath), false);
});
it("appends multiple entries", () => {
@@ -289,19 +291,27 @@
});
});
describe("corrupt line handling", () => {
it("skips corrupt JSONL lines and returns valid entries", () => {
describe("legacy JSONL import", () => {
it("skips corrupt JSONL lines and imports valid entries into DB", () => {
const entry = makeEntry();
store.appendEntry("project", entry);
// Inject a corrupt line directly into the file
const filePath = store._filePathForScope("project");
const content = readFileSync(filePath, "utf-8");
writeFileSync(filePath, content + "this is not json\n", "utf-8");
mkdirSync(filePath.slice(0, filePath.lastIndexOf("/")), {
recursive: true,
});
writeFileSync(
filePath,
`${JSON.stringify(entry)}\nthis is not json\n`,
"utf-8",
);
const entries = store.readEntries("project");
assert.equal(entries.length, 1);
assert.equal(entries[0].id, entry.id);
writeFileSync(filePath, "", "utf-8");
const fromDb = store.readEntries("project");
assert.equal(fromDb.length, 1);
assert.equal(fromDb[0].id, entry.id);
});
});
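
The corrupt-line tolerance exercised by this test amounts to a per-line try/catch around `JSON.parse`. A minimal sketch of that behavior (the helper name is illustrative; the real import logic lives in `schedule-store.js`):

```javascript
// Parse a legacy schedule.jsonl buffer, skipping corrupt lines instead of
// aborting the whole import. Blank lines are ignored too.
function parseLegacyJsonl(raw) {
  const entries = [];
  for (const line of raw.split("\n")) {
    if (line.trim() === "") continue;
    try {
      entries.push(JSON.parse(line));
    } catch {
      // Corrupt line: skip it; one bad write must not poison the import.
    }
  }
  return entries;
}
```

This matches the test's expectation that a file containing one valid entry plus `this is not json` imports exactly one entry into the DB.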

View file

@@ -5,15 +5,21 @@
* deduplication, and severity categorization work correctly.
*/
import { describe, expect, test } from "vitest";
import { afterEach, describe, expect, test } from "vitest";
import {
categorizeBySeverity,
classifyReportFixes,
dedupReports,
generateTriageSummary,
promoteSelfReportsToBacklog,
} from "../self-report-fixer.js";
import { closeDatabase, listBacklogItems, openDatabase } from "../sf-db.js";
describe("self-report-fixer", () => {
afterEach(() => {
closeDatabase();
});
test("detects validation-reviewer-rubric fix pattern", () => {
const report = {
id: "report-1",
@@ -351,4 +357,63 @@
const fixDescription = fixes[0].fixFunction.toString();
expect(fixDescription.length).toBeGreaterThan(0);
});
test("promoteSelfReportsToBacklog_when_db_available_creates_deduped_items", () => {
openDatabase(":memory:");
const reports = [
{
id: "report-1",
title: "runaway guard hard pause",
description: "Medium severity repeated pause in external repos",
severity: "medium",
resolvedAt: null,
},
{
id: "report-2",
title: "RUNAWAY guard hard pause",
description: "Medium severity repeated pause in external repos",
severity: "medium",
resolvedAt: null,
},
{
id: "report-3",
title: "low priority style note",
description: "Low severity note",
severity: "low",
resolvedAt: null,
},
];
const result = promoteSelfReportsToBacklog(process.cwd(), reports);
expect(result.promoted).toHaveLength(1);
expect(result.updated).toHaveLength(0);
const items = listBacklogItems();
expect(items).toHaveLength(1);
expect(items[0].id).toMatch(/^self-feedback\.[a-f0-9]{12}$/);
expect(items[0].source).toBe("self-feedback-triage");
expect(items[0].note).toContain("reports=2");
expect(items[0].note).toContain("report-1");
});
test("promoteSelfReportsToBacklog_when_repeated_is_idempotent", () => {
openDatabase(":memory:");
const reports = [
{
id: "report-1",
title: "gap audit orphan command",
description: "Medium severity repeated orphan command",
severity: "medium",
resolvedAt: null,
},
];
const first = promoteSelfReportsToBacklog(process.cwd(), reports);
const second = promoteSelfReportsToBacklog(process.cwd(), reports);
expect(first.promoted).toHaveLength(1);
expect(second.promoted).toHaveLength(0);
expect(second.updated).toEqual(first.promoted);
expect(listBacklogItems()).toHaveLength(1);
});
});
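
The dedup assertions above — two case-variant titles collapsing to one backlog item with a stable `self-feedback.<12 hex>` id — are consistent with hashing a normalized title. A hypothetical sketch of such a key derivation; the actual logic in `self-report-fixer.js` may differ:

```javascript
import { createHash } from "node:crypto";

// Hypothetical dedup-key derivation: normalize the report title
// (case-insensitive, collapsed whitespace), then hash it to a stable
// 12-hex-char suffix so repeated promotions map to the same backlog id.
function backlogIdForReport(title) {
  const normalized = title.trim().toLowerCase().replace(/\s+/g, " ");
  const digest = createHash("sha256").update(normalized).digest("hex");
  return `self-feedback.${digest.slice(0, 12)}`;
}
```

A stable content-derived id is also what makes the second `promoteSelfReportsToBacklog` call idempotent: the same reports hash to the same id, so the existing item is updated rather than duplicated.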

View file

@@ -13,7 +13,9 @@ import { afterEach, test } from "vitest";
import {
closeDatabase,
getDatabase,
getScheduleEntries,
insertGateRun,
insertScheduleEntry,
openDatabase,
} from "../sf-db.js";
@@ -199,7 +201,7 @@ test("openDatabase_migrates_v27_tasks_without_created_at_through_spec_backfill",
const version = db
.prepare("SELECT MAX(version) AS version FROM schema_version")
.get();
assert.equal(version.version, 36);
assert.equal(version.version, 38);
const taskSpec = db
.prepare(
"SELECT milestone_id, slice_id, task_id, verify FROM task_specs WHERE task_id = 'T01'",
@@ -213,6 +215,26 @@
});
});
test("openDatabase_when_fresh_db_supports_schedule_entries", () => {
assert.equal(openDatabase(":memory:"), true);
insertScheduleEntry("project", {
id: "sched-1",
schemaVersion: 1,
kind: "reminder",
status: "pending",
due_at: "2026-05-08T00:00:00.000Z",
created_at: "2026-05-07T00:00:00.000Z",
payload: { message: "check DB schedule" },
created_by: "user",
});
const rows = getScheduleEntries("project");
assert.equal(rows.length, 1);
assert.equal(rows[0].id, "sched-1");
assert.deepEqual(rows[0].payload, { message: "check DB schedule" });
});
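
The `deepEqual` on `payload` above implies the entry's payload object is serialized on insert and parsed back on read. A hypothetical row mapping illustrating that round-trip — column names are assumptions; the real schema is defined by the migrations in `sf-db.js`:

```javascript
// Map a schedule entry to a flat schedule_entries row: the payload object is
// stored as a JSON string, everything else as plain columns.
function rowFromEntry(scope, entry) {
  return {
    scope,
    id: entry.id,
    schema_version: entry.schemaVersion,
    kind: entry.kind,
    status: entry.status,
    due_at: entry.due_at,
    created_at: entry.created_at,
    payload: JSON.stringify(entry.payload),
    created_by: entry.created_by,
  };
}

// Inverse mapping: decode the payload column so callers get the same object
// shape they inserted.
function entryFromRow(row) {
  return {
    id: row.id,
    schemaVersion: row.schema_version,
    kind: row.kind,
    status: row.status,
    due_at: row.due_at,
    created_at: row.created_at,
    payload: JSON.parse(row.payload),
    created_by: row.created_by,
  };
}
```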
test("openDatabase_when_fresh_db_supports_gate_run_micro_usd", () => {
assert.equal(openDatabase(":memory:"), true);

View file

@@ -193,6 +193,7 @@ export async function applyTriageReport(basePath, report) {
let requirementsAdded = 0;
let entriesResolved = 0;
let reportsAutoFixed = 0;
let reportsPromotedToBacklog = 0;
// ── 1. Write promoted requirements ────────────────────────────────────────
if (report.promotedRequirements.length > 0) {
@@ -267,9 +268,8 @@
// Integration point for self-report-fixer: read open reports and auto-apply
// fixes where confidence > 0.85.
try {
const { autoFixHighConfidenceReports } = await import(
"./self-report-fixer.js"
);
const { autoFixHighConfidenceReports, promoteSelfReportsToBacklog } =
await import("./self-report-fixer.js");
const allOpen = [
...readAllSelfFeedback(basePath),
...readUpstreamSelfFeedback(),
@@ -278,10 +278,20 @@
if (allOpen.length > 0) {
const result = await autoFixHighConfidenceReports(basePath, allOpen);
reportsAutoFixed = result.applied.length;
const promoted = promoteSelfReportsToBacklog(basePath, allOpen, {
triageRunId: report.triageRunId ?? null,
});
reportsPromotedToBacklog =
promoted.promoted.length + promoted.updated.length;
}
} catch {
/* self-report fixer is optional; never block triage report application */
}
return { requirementsAdded, entriesResolved, reportsAutoFixed };
return {
requirementsAdded,
entriesResolved,
reportsAutoFixed,
reportsPromotedToBacklog,
};
}
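
The try/catch around the dynamic import above is the standard optional-integration pattern: if the fixer module is missing or throws, triage application proceeds with a zero promotion count instead of failing. A minimal sketch with an injectable importer so the optional dependency can be stubbed — the function and parameter names are illustrative, not part of the real API:

```javascript
// Count backlog promotions produced by an optional module. Any failure
// (module missing, throwing, malformed result) yields 0 rather than an
// exception, so the caller's main flow is never blocked.
async function countPromotions(basePath, reports, importer) {
  try {
    const { promoteSelfReportsToBacklog } = await importer();
    const result = promoteSelfReportsToBacklog(basePath, reports);
    return result.promoted.length + result.updated.length;
  } catch {
    return 0; // optional integration; never block triage application
  }
}
```

Passing the importer as a parameter keeps the swallow-all catch testable: a stub importer exercises the happy path, a throwing importer exercises the fallback.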