- Import querySmMemories from sm-client.js
- Merge cross-project memories into getRelevantMemoriesRanked
- Conservatively discount cross-project confidence: scale by 0.9, cap at 0.8
- Gracefully degrade: fail-open if SM unavailable
- Preserve cosine ranking with relation boost for merged pool
- Tests: 3821 passing, no regressions
Implements Tier 1.2 Phase 3: Cross-project memory recall via Singularity Memory.
Enables dispatch to leverage patterns from other projects while maintaining
local autonomy via fail-open semantics.
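A minimal sketch of the merge step (the function name `mergeCrossProjectMemories` and the injected `querySmMemories` parameter are illustrative; the real logic lives in `getRelevantMemoriesRanked`):

```js
// Sketch: merge cross-project memories into the local pool, conservatively
// discounting confidence (scale by 0.9, cap at 0.8) and failing open if the
// Singularity Memory service is unreachable.
async function mergeCrossProjectMemories(localMemories, querySmMemories, query) {
  let remote = [];
  try {
    remote = await querySmMemories(query);
  } catch (err) {
    // Fail-open: SM unavailable, proceed with local memories only.
    return localMemories;
  }
  const scaled = remote.map((m) => ({
    ...m,
    crossProject: true,
    confidence: Math.min(0.8, m.confidence * 0.9),
  }));
  // The merged pool is then re-ranked by cosine similarity with relation boost.
  return [...localMemories, ...scaled];
}
```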
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Schema version bumped to 36
- Add migrateCostUsdToMicroUsd() helper for safe migration
- Convert cost_usd REAL to cost_micro_usd INTEGER in gate_runs
- Migration: multiply USD values by 1,000,000 to avoid float drift
- Update insertGateRun() to support cost_micro_usd field
- Old cost_usd column retained for backward compatibility
Benefits:
- Eliminates floating-point drift on accumulated costs
- Easier reasoning about cost totals
- Integer arithmetic is faster and more predictable
- Idempotent migration (safe to re-run)
Migration runs automatically on first database open for schema < 36.
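The conversion and its idempotency can be sketched as follows (`usdToMicroUsd` and `migrateRow` are illustrative names; the real helper is `migrateCostUsdToMicroUsd`):

```js
// Sketch: idempotent USD -> micro-USD conversion. Multiplying by 1,000,000
// and rounding to an integer removes float drift from accumulated costs.
function usdToMicroUsd(costUsd) {
  if (costUsd == null) return null;
  return Math.round(costUsd * 1_000_000);
}

// Hypothetical per-row migration step: a row whose integer column is already
// populated is left untouched, so re-running the migration is a no-op.
function migrateRow(row) {
  if (row.cost_micro_usd != null) return row; // already migrated
  return { ...row, cost_micro_usd: usdToMicroUsd(row.cost_usd) };
}
```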
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Updated plan-milestone, plan-slice, plan-task to record planning evidence
- Updated complete-milestone, complete-slice, complete-task to record completion evidence
- All evidence includes relevant spec fields (goals, narratives, decisions, etc.)
- Evidence recorded atomically within transactions
- Enables audit trail queries to reconstruct planning and completion decisions
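The atomic recording pattern can be sketched like this (`db.run` and the table/column names are illustrative stand-ins for the project's data layer; the BEGIN/COMMIT/ROLLBACK shape is the point):

```js
// Sketch: record completion evidence atomically with the status update, so
// the audit trail can never show a completed task without its evidence row.
function completeTaskWithEvidence(db, taskId, evidence) {
  db.run('BEGIN');
  try {
    db.run('UPDATE tasks SET status = ? WHERE id = ?', ['complete', taskId]);
    db.run(
      'INSERT INTO task_evidence (task_id, phase, payload, recorded_at) VALUES (?, ?, ?, ?)',
      [taskId, 'completion', JSON.stringify(evidence), Date.now()]
    );
    db.run('COMMIT');
  } catch (err) {
    db.run('ROLLBACK'); // neither the status change nor the evidence lands
    throw err;
  }
}
```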
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Implements data layer functions for managing and querying spec/evidence data.
New export functions:
- insertMilestoneEvidence(): Append evidence for milestone
- insertSliceEvidence(): Append evidence for slice
- insertTaskEvidence(): Append evidence for task
- getMilestoneAuditTrail(): Query full audit trail (spec + evidence + runtime)
- getSliceAuditTrail(): Query slice audit trail with joined spec/evidence
- getTaskAuditTrail(): Query task audit trail with joined spec/evidence
- getMilestoneSpec(): Get spec only (immutable intent)
- getSliceSpec(): Get slice spec only
- getTaskSpec(): Get task spec only
Key properties:
- Evidence functions timestamp each row at insertion time (recording time)
- Audit trail queries JOIN runtime, spec, and evidence tables
- All queries support data archaeology (reconstruct decision history)
- Spec-only queries useful for validation and re-planning
- All functions include JSDoc with purpose and consumer
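The JOIN shape behind the audit-trail queries can be sketched as (table and column names follow the commit's naming; the actual schema may differ in detail):

```js
// Sketch: an audit-trail query joins mutable runtime state with the
// immutable spec and the append-only evidence chain, ordered chronologically
// so decision history can be reconstructed (data archaeology).
const TASK_AUDIT_SQL = `
  SELECT t.id, t.status,
         s.verification_criteria,
         e.phase, e.payload, e.recorded_at
  FROM tasks t
  JOIN task_specs s ON s.task_id = t.id
  LEFT JOIN task_evidence e ON e.task_id = t.id
  ORDER BY e.recorded_at ASC
`;
```

A spec-only query (`getTaskSpec`) would simply drop the runtime and evidence joins, which is what makes it cheap for validation and re-planning.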
This completes Phase 3 of Tier 1.3 implementation. Phase 4 (tool updates) and
Phase 5 (integration tests) follow in next PRs.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Implements the 3-table normalization model for milestone, slice, and task entities:
- 9 new tables: {milestone,slice,task}_{specs,evidence} + runtime tables
- milestone_specs: immutable record of intent (vision, goals, risks, proof strategy)
- slice_specs: immutable slice-level intent
- task_specs: immutable task verification criteria
- {entity}_evidence: append-only audit trail with timestamps and phase metadata
- Indices on evidence tables for efficient chronological queries
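One entity's slice of the 3-table model might look like the following DDL (using `task_*` as the example; all column names are illustrative, not the actual schema):

```js
// Sketch: spec (write-once), evidence (append-only), with an index for
// efficient chronological queries. The runtime table is the existing one.
const TASK_SCHEMA_SQL = `
  CREATE TABLE IF NOT EXISTS task_specs (
    id INTEGER PRIMARY KEY,
    task_id TEXT NOT NULL,          -- multiple spec versions may share this
    verification_criteria TEXT,
    created_at INTEGER NOT NULL     -- write-once: rows are never updated
  );
  CREATE TABLE IF NOT EXISTS task_evidence (
    id INTEGER PRIMARY KEY,
    task_id TEXT NOT NULL,
    phase TEXT NOT NULL,            -- phase metadata, e.g. 'planning'
    payload TEXT,
    recorded_at INTEGER NOT NULL    -- timestamp for forensic queries
  );
  CREATE INDEX IF NOT EXISTS idx_task_evidence_time
    ON task_evidence (task_id, recorded_at);
`;
```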
Key improvements:
- Spec immutability: Write-once specs preserve original intent
- Audit trail: Evidence chain enables data archaeology and decision history
- Query efficiency: Each table contains only relevant columns
- Re-planning clarity: Multiple spec versions can exist for the same entity ID
- Forensic capability: Timestamp + phase metadata on evidence rows
Migration:
- Schema version bumped to 32
- Migration runs on first open of existing databases
- No data loss; existing milestone/slice/task rows preserved
- Spec and evidence tables are created; backfilling them from existing columns is deferred (future work)
This is Phase 1 of Tier 1.3 implementation (schema definition + basic setup).
Phases 2-5 (migration, data layer updates, tool updates, tests) follow in next PRs.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Hook sync-scheduler into createMemory() so all new memories are queued for
async sync to Singularity Memory:
Changes to memory-store.js:
- Import queueMemorySync from sync-scheduler.js
- After successful memory creation with real ID, queue to scheduler
- Fire-and-forget: sync doesn't block memory creation
- Best-effort: catch scheduler errors, don't fail memory on sync issues
- Pass memory fields: category (type), content, projectId, confidence
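The hook can be sketched as a wrapper (the wrapper form and field mapping are illustrative; the real change lives inline in `createMemory`, and `queueMemorySync`'s signature is defined in sync-scheduler.js):

```js
// Sketch: after memory creation succeeds with a real ID, queue the memory
// for async SM sync. Fire-and-forget: queueing never blocks or fails the
// creation path; scheduler errors are swallowed as best-effort.
function hookSyncIntoCreate(createMemory, queueMemorySync) {
  return async function createMemoryWithSync(fields) {
    const memory = await createMemory(fields); // creation must succeed first
    try {
      queueMemorySync({
        id: memory.id,
        category: memory.type, // local "type" maps to SM "category"
        content: memory.content,
        projectId: memory.projectId,
        confidence: memory.confidence,
      });
    } catch (err) {
      // Best-effort: the memory already exists locally; sync can retry later.
    }
    return memory;
  };
}
```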
This completes Tier 1.2 Phase 3a: Memory integration foundation.
Memories created locally are now automatically queued for SM sync:
- Batched in groups of 50 or every 5s
- Retried with exponential backoff on failure
- Gracefully degrades if SM unavailable
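The retry schedule can be sketched as standard exponential backoff (the base delay and cap below are illustrative, not the scheduler's actual constants):

```js
// Sketch: delay before retry attempt N for a failed sync batch, doubling
// each time and capped so a long SM outage doesn't grow delays unboundedly.
function backoffDelayMs(attempt, baseMs = 1000, maxMs = 60000) {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}
```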
Next: add session-end flush to unit-runtime.js (Phase 3b)
Fixes: TIER_1_2_PHASE_3A
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Create vault-resolver.js: URI parser, auth chain (env → file → AppRole), in-memory caching
- Add resolveConfigValueAsync() to pi-coding-agent for lazy vault URI resolution
- Integrate vault credential resolution into auth-storage credential loading path
- Add doctor check (checkVaultHealth) for vault setup validation at startup
- Document vault setup, auth methods, examples, troubleshooting in preferences-reference.md
- Add comprehensive test suite (18 tests) for vault URI parsing, auth, caching, fallback
Auth Chain:
1. VAULT_TOKEN env var (simplest for local dev)
2. ~/.vault-token file (recommended for local dev)
3. VAULT_ROLE_ID + VAULT_SECRET_ID env vars (AppRole for CI/CD)
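The chain's precedence can be sketched like this (the injected `env`, `readTokenFile`, and `appRoleLogin` parameters are illustrative stand-ins for process.env, reading ~/.vault-token, and the AppRole login call in vault-resolver.js):

```js
// Sketch: try each auth source in order; the first that yields a token wins.
async function resolveVaultToken({ env, readTokenFile, appRoleLogin }) {
  if (env.VAULT_TOKEN) return env.VAULT_TOKEN;               // 1. env var
  const fileToken = await readTokenFile().catch(() => null); // 2. token file
  if (fileToken) return fileToken;
  if (env.VAULT_ROLE_ID && env.VAULT_SECRET_ID) {            // 3. AppRole
    return appRoleLogin(env.VAULT_ROLE_ID, env.VAULT_SECRET_ID);
  }
  return null; // no auth available: caller falls back (fail-open)
}
```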
Fail-open behavior: if vault is unavailable, the resolver falls back to plaintext URIs so operation can continue.
URI Format: vault://secret/path/to/secret#fieldname
Example: ANTHROPIC_API_KEY=vault://secret/anthropic/prod#api_key
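Parsing for this format can be sketched as (`parseVaultUri` and `isVaultUri` are the names from the test list; this body is an illustrative reimplementation, not the vault-resolver.js source):

```js
// Sketch: parse vault://secret/path/to/secret#fieldname into its parts.
function isVaultUri(value) {
  return typeof value === 'string' && value.startsWith('vault://');
}

function parseVaultUri(uri) {
  if (!isVaultUri(uri)) return null;
  const rest = uri.slice('vault://'.length);
  const hash = rest.indexOf('#');
  if (hash < 0) return null; // the #fieldname fragment is required
  return { path: rest.slice(0, hash), field: rest.slice(hash + 1) };
}
```

Non-vault values fall through `isVaultUri` untouched, which is what lets plaintext credentials keep working alongside vault URIs.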
Tests: parseVaultUri, isVaultUri, resolveSecret, caching, edge cases all passing (18/18).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>