Helps users understand that 'anthropic-api' makes direct API calls (requires
API key / extra usage) while 'claude-code' routes through the local CLI
(uses subscription).
Two bugs prevented subscription users from routing through Claude Code CLI:
1. The retry handler regex only matched "third-party" errors, but the actual
error is "You're out of extra usage" — the fallback never triggered
2. auto-model-selection actively rerouted bare model IDs back to anthropic
even after startup migration set claude-code as the session provider
Verify claude-code fallback only fires for anthropic provider and
does not reroute non-anthropic providers on similar error text.
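A minimal sketch of the widened matcher, assuming a helper named shouldFallbackToClaudeCode (the name and pattern list are illustrative, not the actual implementation):

```typescript
// Illustrative sketch of the widened fallback matcher; the original
// pattern only recognized "third-party" wording, so the newer
// "out of extra usage" message never triggered the CLI fallback.
const FALLBACK_ERROR_PATTERNS: RegExp[] = [
  /third[- ]party/i,      // original API-block wording
  /out of extra usage/i,  // newer subscription-quota wording
];

function shouldFallbackToClaudeCode(provider: string, errorText: string): boolean {
  // Only the anthropic provider may drift to claude-code; other providers
  // emitting similar error text are left untouched.
  if (provider !== "anthropic") return false;
  return FALLBACK_ERROR_PATTERNS.some((re) => re.test(errorText));
}
```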
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Prevent _tryClaudeCodeFallback from firing for non-Anthropic providers
that may produce similar error text, avoiding unintended provider drift.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Anthropic now blocks third-party apps from using Pro/Max subscription
quotas via direct API calls. This change makes the claude-code provider
(which delegates to the local claude CLI binary) the default path for
Anthropic subscription users — TOS-compliant because requests flow
through Anthropic's own infrastructure.
Changes:
- Enhanced readiness check to verify CLI auth status (not just binary)
- Startup migration: auto-switch anthropic → claude-code when CLI ready
- Error recovery: auto-switch on third-party 400 block error
- Onboarding: removed Anthropic from OAuth, added Claude CLI option
- Added claude-code to flat-rate providers (no dynamic routing benefit)
Closes #3772
PR #3744 and #3765 introduced contentCursorRow which diverges from the
actual terminal cursor position after IME repositioning. computeLineDiff
emits ANSI escape movements relative to where the cursor physically
is — that must be hardwareCursorRow, not a phantom position.
Remove contentCursorRow entirely and revert computeLineDiff baseline to
hardwareCursorRow. The ghost-line test was asserting wrong movement
direction (UP from phantom position vs DOWN from actual cursor).
Closes #3764
PR #3744 fixed autocomplete ghost lines by introducing a local
contentCursorRow initialized from this.cursorRow, but this.cursorRow
tracks the content end (last line), not where the cursor actually
ended up after rendering. This caused computeLineDiff to compute
wrong movement deltas, making content clear and jump on every keystroke.
Fix: add an instance field contentCursorRow that stores finalCursorRow
after content rendering but before positionHardwareCursor moves the
cursor for IME. This correctly separates three cursor concepts:
- cursorRow: logical content end (viewport calculation)
- contentCursorRow: post-render cursor position (movement baseline)
- hardwareCursorRow: actual terminal cursor (may differ due to IME)
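The separation can be sketched as follows (class and method names are illustrative stand-ins, not the real renderer API):

```typescript
// Sketch of the three cursor fields and when each is written.
class CursorState {
  cursorRow = 0;          // logical content end, used for viewport math
  contentCursorRow = 0;   // where the cursor landed after content rendering
  hardwareCursorRow = 0;  // where the terminal cursor physically is

  finishRender(finalCursorRow: number): void {
    // Captured before any IME repositioning, so it remains a valid
    // baseline for computeLineDiff movement deltas.
    this.contentCursorRow = finalCursorRow;
    this.hardwareCursorRow = finalCursorRow;
  }

  positionHardwareCursor(imeRow: number): void {
    // IME may move the physical cursor; contentCursorRow is untouched.
    this.hardwareCursorRow = imeRow;
  }
}
```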
Closes #3764
Use the rendered content row as the shrink diff baseline instead of
reusing the IME hardware cursor row. Add a focused TUI regression test
that reproduces the ghost-line cleanup path when autocomplete shrinks.

Closes #3721
The provider manager let users navigate with arrow keys but pressing
Enter did nothing. Users had no way to set up authentication from within
the /provider command.
Adds selectConfirm (Enter) handler that routes to showLoginDialog for
the selected provider, with a hint in the status bar.
Closes #3579
Closes #3567
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Backdrop was painting empty lines with dark gray background (48;5;233),
making the entire screen go black. Now uses dim + gray foreground only.
Message truncation now measures actual prefix width with visibleWidth()
instead of hardcoded 20-char estimate, and uses truncateToWidth() for
proper Unicode handling.
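A simplified sketch of measured truncation; this visibleWidth stand-in only strips ANSI SGR sequences and counts code units, while the real helper also handles wide Unicode cells:

```typescript
// Illustrative width measurement: strip SGR color/style sequences so
// styled prefixes are measured by their visible characters, not bytes.
function visibleWidth(s: string): number {
  return s.replace(/\x1b\[[0-9;]*m/g, "").length;
}

// Illustrative truncation for plain strings (the real truncateToWidth
// also accounts for double-width Unicode cells).
function truncateToWidth(s: string, width: number): string {
  return visibleWidth(s) <= width ? s : s.slice(0, Math.max(0, width - 1)) + "…";
}

function truncateMessage(prefix: string, message: string, total: number): string {
  // Measure the actual prefix width instead of a hardcoded 20-char estimate.
  const avail = total - visibleWidth(prefix);
  return prefix + truncateToWidth(message, avail);
}
```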
Use dark gray background + dim foreground for visible backdrop effect
instead of barely-perceptible SGR dim. Size overlay box to content
instead of padding to fill the entire viewport.
- Overlay layout: verify backdrop dims base lines, no dim without flag,
overlay composites on top of dimmed background
- Notification store: verify markAllRead and clearNotifications do not
delete a foreign process's lock file
The notification overlay was rendering too small with few entries, allowing
underlying content to bleed through. Added viewport padding to fill the
overlay box and a new `backdrop` option to OverlayOptions that dims the
background behind modal overlays.
newSession() only rebuilt the tool registry when cwd changed. When cwd
stayed the same (e.g., discuss → plan-slice in the same worktree), any
tool narrowing from setActiveTools() persisted — stripping gsd_plan_slice
and other DB tools from auto-mode subagent sessions.
Add an else-branch that calls _refreshToolRegistry with
includeAllExtensionTools:true on every session switch, regardless of cwd.
Also call resetExtensionLoaderCache() in DefaultResourceLoader.reload()
so hot-updated extension code on disk is re-compiled instead of served
from the stale jiti module cache.
Closes #3616
The schema overload detector counted ALL isError tool results toward the
consecutive-failure cap, including bash commands that returned non-zero exit
codes (e.g. rg/grep exit 1 = 'no matches'). Three consecutive exploratory
searches with no matches would trigger the cap and abort the session.
Root cause: the allToolsFailed check used toolResults.every(r => r.isError)
which conflates preparation-phase errors (schema validation, tool-not-found,
tool-blocked) with execution-phase errors (the tool ran successfully but
returned a non-zero exit code).
Fix: track preparationErrorCount alongside tool results. Only preparation
errors (schema/validation failures) increment the consecutive failure
counter. Tool execution errors — like bash exit code 1 — are valid usage
and do not count toward the cap.
Also fixes pre-existing StopReason type mismatches in agent-loop tests
(end_turn → stop, tool_use → toolUse).
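The counting rule can be sketched like this (the types and the reset-to-zero behavior are illustrative assumptions, not the actual detector code):

```typescript
// Illustrative types: the real code distinguishes schema/validation and
// tool-not-found failures (preparation) from tools that ran but errored
// (execution, e.g. bash exit code 1 from a no-match rg search).
type ToolResult = { isError: boolean; phase: "preparation" | "execution" };

function updateConsecutiveFailures(prev: number, results: ToolResult[]): number {
  const prepErrors = results.filter(
    (r) => r.isError && r.phase === "preparation",
  ).length;
  // Advance the cap only when every tool call failed before executing;
  // any execution (even one with a non-zero exit code) resets the streak.
  const allFailedToPrepare = results.length > 0 && prepErrors === results.length;
  return allFailedToPrepare ? prev + 1 : 0;
}
```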
Chat component cap: after 100 rendered components, the oldest are removed
from the container (session transcript persists on disk via
SessionManager). Prevents unbounded memory growth in long sessions
where thousands of tool calls accumulate DOM-like component trees.
Orphan process prevention: On shutdown, listDescendants(process.pid)
finds ALL child processes (including those spawned by the Bash tool
that bg-shell doesn't track) and kills them with SIGTERM + 500ms
grace + SIGKILL. Prevents orphaned dev servers, build processes, etc.
from persisting after session exit.
Container.render() now returns a stable array reference when output is
unchanged — TUI.doRender() skips ALL post-processing (isImageLine scans,
applyLineResets, differential diffs) when the reference matches.
Loader decouples spinner frame rotation from Text content updates.
Previously every 80ms tick called setText() which invalidated Text's
wrapTextWithAnsi/visibleWidth caches. Now the frame is prepended in
render() while Text caches the message separately.
Text.setText() returns early when text is unchanged, avoiding cache
invalidation on redundant updates.
ToolExecutionComponent.dispose() clears heavy references (image maps,
diff previews, result data) so GC can reclaim memory when components
are removed from the chat history.
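The stable-reference idea above can be sketched as follows (a simplified stand-in, not the actual Container API):

```typescript
// Illustrative memoized render: when output is unchanged, return the
// previous array reference so the caller can skip all post-processing
// (image-line scans, line resets, differential diffs) with a single
// reference-equality check.
class Container {
  private lastLines: string[] = [];

  render(lines: string[]): string[] {
    const unchanged =
      lines.length === this.lastLines.length &&
      lines.every((line, i) => line === this.lastLines[i]);
    if (unchanged) return this.lastLines; // stable reference on no-op frames
    this.lastLines = lines;
    return lines;
  }
}
```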
- Use `git reset --hard <sha>` for rollback instead of `git branch -f`
which fails on checked-out branches and worktrees
- Clear pendingProviderRegistrations after preflush to prevent duplicate
registration when bindCore() runs
- Process Ollama stream content on terminal `done:true` chunks to avoid
truncating trailing assistant text
Replace the OpenAI-compat shim with a native Ollama /api/chat streaming
provider that exposes all commonly-used Ollama options and surfaces
inference performance metrics.
Key changes:
- Native NDJSON streaming from /api/chat (no more OpenAI shim)
- Known models send num_ctx from capability table; unknown models defer
to Ollama's default to avoid OOM on constrained hosts
- Exposes: temperature, top_p, top_k, repeat_penalty, seed, num_gpu,
keep_alive, num_predict via per-model providerOptions
- Extracts <think>...</think> blocks for reasoning models (deepseek-r1, qwq)
- Surfaces InferenceMetrics (tokens/sec, durations) on AssistantMessage
- Adds remove and show actions to ollama_manage LLM tool
- Adds "ollama-chat" to KnownApi, providerOptions to Model<TApi>
- NDJSON parser uses strict mode for chat (fails on malformed frames)
- Mixed content+tool_call chunks handled independently
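The strict-vs-lenient NDJSON behavior can be sketched as follows (an illustrative parser, not the provider's actual code):

```typescript
// Illustrative NDJSON frame parser: each non-blank line must be one
// complete JSON object. In strict mode (used for /api/chat) a malformed
// frame throws instead of being silently dropped.
function parseNdjson(buffer: string, strict = true): object[] {
  const frames: object[] = [];
  for (const line of buffer.split("\n")) {
    if (line.trim() === "") continue; // blank separators are fine
    try {
      frames.push(JSON.parse(line));
    } catch {
      if (strict) throw new Error(`malformed NDJSON frame: ${line}`);
      // lenient mode: drop the bad frame and keep streaming
    }
  }
  return frames;
}
```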
Closes #3544
Extension-provided models (e.g. claude-code/*) were unavailable during
findInitialModel() because pendingProviderRegistrations had not been
flushed yet, causing the fallback chain to select Google Gemini even
when the user explicitly configured claude-code as their default.
Three compounding issues fixed:
(A) Flush pendingProviderRegistrations in createAgentSession() before
findInitialModel() runs, so extension models are in the registry
when initial model selection happens.
(B) Re-apply the validated model to the session after
validateConfiguredModel() in both print and interactive CLI paths.
Previously, validation updated settingsManager but never called
session.setModel(), leaving the session on the wrong model.
(C) Update defaultModelPerProvider.anthropic from "claude-opus-4-6[1m]"
to "claude-opus-4-6" — the [1m] variant was removed from the model
registry when the base model was upgraded to 1M context, causing the
Anthropic fallback to silently fail and skip to Google.
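Fix (A)'s ordering can be sketched as follows (all names are illustrative stand-ins for the real session setup):

```typescript
// Illustrative ordering fix: flush pending extension provider
// registrations into the registry before the initial model is chosen,
// so models like claude-code/* are resolvable at selection time.
function createAgentSession(
  registry: Set<string>,
  pendingProviderRegistrations: string[],
  findInitialModel: (registry: Set<string>) => string,
): string {
  // splice(0) empties the pending list while flushing, which also
  // prevents duplicate registration by any later flush.
  for (const model of pendingProviderRegistrations.splice(0)) {
    registry.add(model);
  }
  return findInitialModel(registry);
}
```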
Closes #3534