Snapshots 84 files spanning provider capabilities, model routing, the
headless runtime, sf auto subsystems, GitBook docs, and test coverage,
so headless auto can resume M004 (Production Readiness) S03
(Verification Gate Validation) on a clean tree.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds direct xiaomi token-plan API access alongside the existing
OpenRouter-routed xiaomi entries. ADDITIVE only — OpenRouter cleanup is
a separate follow-up.
Three new region providers:
- xiaomi-token-plan-ams (Amsterdam, default for plain `xiaomi`)
- xiaomi-token-plan-sgp (Singapore)
- xiaomi-token-plan-cn (China)
All use the Anthropic Messages API. Env-var resolution falls back in
order: XIAOMI_API_KEY → XIAOMI_TOKEN_PLAN_API_KEY → MIMO_API_KEY.
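A minimal sketch of that fallback, assuming plain process.env access
(the real resolution sits with the other providers in env-api-keys.ts):

```ts
// Illustrative only: first non-empty env var wins.
function resolveXiaomiApiKey(env: NodeJS.ProcessEnv = process.env): string | undefined {
  return env.XIAOMI_API_KEY || env.XIAOMI_TOKEN_PLAN_API_KEY || env.MIMO_API_KEY;
}
```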
Three xiaomi MiMo models registered under each direct provider:
- mimo-v2-flash (256k ctx, 64k output, text-only, reasoning)
- mimo-v2-omni (256k ctx, 128k output, text+image, reasoning)
- mimo-v2-pro (1M ctx, 128k output, text-only, reasoning)
The same three model literals are registered under all 4 provider keys,
with a different baseUrl per region.
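Roughly the cross product being registered, with placeholder base URLs
(the real entries and registration shape live in models.custom.ts):

```ts
type Entry = { provider: string; id: string; baseUrl: string; api: "anthropic-messages" };
const registry: Entry[] = [];

// Placeholder URLs; plain `xiaomi` shares the Amsterdam endpoint.
const regions: Record<string, string> = {
  "xiaomi": "https://ams.example/v1",
  "xiaomi-token-plan-ams": "https://ams.example/v1",
  "xiaomi-token-plan-sgp": "https://sgp.example/v1",
  "xiaomi-token-plan-cn": "https://cn.example/v1",
};

for (const [provider, baseUrl] of Object.entries(regions)) {
  for (const id of ["mimo-v2-flash", "mimo-v2-omni", "mimo-v2-pro"]) {
    registry.push({ provider, id, baseUrl, api: "anthropic-messages" });
  }
}
// 4 provider keys × 3 model literals = 12 registry entries.
```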
Test count assertion bumped 22 → 26 providers.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
LongCat (Meituan) ships an Anthropic-compatible endpoint at
https://api.longcat.chat/anthropic that authenticates via
`Authorization: Bearer $KEY` instead of Anthropic's native `x-api-key`
header. Without this change, pi sends x-api-key and LongCat replies
with 401 invalid_api_key / missing_api_key.
Same topology as the existing alibaba-coding-plan / minimax /
minimax-cn entries (#3783).
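A sketch of the auth routing, assuming the @anthropic-ai/sdk client
(`authToken` sends the Bearer header, `apiKey` sends `x-api-key`);
usesAnthropicBearerAuth mirrors the helper named in the bullets below:

```ts
import Anthropic from "@anthropic-ai/sdk";

// Providers whose Anthropic-compatible endpoints expect Bearer auth.
const BEARER_PROVIDERS = new Set(["alibaba-coding-plan", "minimax", "minimax-cn", "longcat"]);
const usesAnthropicBearerAuth = (provider: string) => BEARER_PROVIDERS.has(provider);

function createClient(provider: string, baseURL: string, key: string): Anthropic {
  return usesAnthropicBearerAuth(provider)
    ? new Anthropic({ baseURL, authToken: key }) // Authorization: Bearer $KEY
    : new Anthropic({ baseURL, apiKey: key });   // x-api-key: $KEY
}
```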
- Add "longcat" to usesAnthropicBearerAuth() so createClient routes
the key through authToken.
- Add "longcat": "LONGCAT_API_KEY" to env-api-keys.ts envMap so
getEnvApiKey() can resolve it when options.apiKey is absent.
- Add "longcat" to KnownProvider so the === literal check type-checks.
- Extend anthropic-auth.test.ts to assert usesAnthropicBearerAuth
returns true for longcat.
* feat(pi-ai): add Alibaba DashScope as standalone provider
Adds `alibaba-dashscope` for users with a regular DashScope API key,
separate from the existing `alibaba-coding-plan` free-tier provider.
- types.ts: register `alibaba-dashscope` as KnownProvider
- env-api-keys.ts: map to DASHSCOPE_API_KEY
- models.custom.ts: add qwen3-max, qwen3.5-plus, qwen3.5-flash,
qwen3-coder-plus with international endpoint and real pricing
- model-resolver.ts: default model qwen3.5-plus
- key-manager.ts: add alibaba-coding-plan and alibaba-dashscope
to PROVIDER_REGISTRY so /gsd keys add works for both
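A hedged sketch of that wiring; object shapes are illustrative, and the
international base URL is an assumption (it is not stated in this commit):

```ts
// env-api-keys.ts: provider → env var (existing entries elided).
const envMap: Record<string, string> = {
  "alibaba-dashscope": "DASHSCOPE_API_KEY",
};

// model-resolver.ts: default model for the new provider.
const defaultModels: Record<string, string> = {
  "alibaba-dashscope": "qwen3.5-plus",
};

// models.custom.ts: assumed international endpoint for the models above.
const DASHSCOPE_INTL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1";
```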
Co-Authored-By: Claude Code <noreply@anthropic.com>
* feat(pi-ai): add qwen3.6-plus to alibaba-dashscope provider
qwen3.6-plus is available on the DashScope international endpoint.
Pricing: $0.5/M input, $3/M output (base tier, 0-256K tokens).
Supports thinking mode (reasoning: true).
Source: https://www.alibabacloud.com/help/en/model-studio/model-pricing
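As an entry shape this might look like the following; field names are
illustrative, the numbers come from the pricing above:

```ts
const qwen36Plus = {
  id: "qwen3.6-plus",
  provider: "alibaba-dashscope",
  reasoning: true,                   // thinking mode supported
  cost: { input: 0.5, output: 3.0 }, // USD per 1M tokens, base tier (0-256K input)
};
```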
Co-Authored-By: Claude Code <noreply@anthropic.com>
* test(pi-ai): add tests for alibaba-dashscope provider and key-manager regression
- packages/pi-ai/src/models.test.ts: add a describe block covering all 5
alibaba-dashscope models (presence, base URL, API, provider field,
context window, paid pricing, per-model reasoning/cost assertions,
independence from alibaba-coding-plan, failure path for unknown model)
- src/resources/extensions/gsd/tests/key-manager.test.ts: add regression
tests for #3891 — alibaba-coding-plan was missing from PROVIDER_REGISTRY,
causing /gsd keys add alibaba-coding-plan to fail silently; also covers
alibaba-dashscope registration, env var separation, and getAllKeyStatuses
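A minimal sketch of the #3891 regression, assuming PROVIDER_REGISTRY is
a record keyed by provider id with an `envVar` field (the import path
and shape are illustrative):

```ts
import { describe, expect, it } from "vitest";
import { PROVIDER_REGISTRY } from "../key-manager.js"; // illustrative path

describe("key-manager regression #3891", () => {
  it("registers alibaba-coding-plan so /gsd keys add no longer fails silently", () => {
    expect(PROVIDER_REGISTRY).toHaveProperty("alibaba-coding-plan");
  });

  it("keeps alibaba-dashscope on its own env var", () => {
    expect(PROVIDER_REGISTRY["alibaba-dashscope"]?.envVar).toBe("DASHSCOPE_API_KEY");
  });
});
```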
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Code <noreply@anthropic.com>
Adds a self-contained extension at src/resources/extensions/ollama/ that
auto-detects a running Ollama instance, discovers locally pulled models,
and registers them as a first-class provider with zero configuration.
Features:
- Auto-discovery of local models via /api/tags on session_start
- Capability detection (vision, reasoning, context window) for 40+ model families
- /ollama slash command with status, list, pull, remove, ps subcommands
- ollama_manage LLM-callable tool for agent-driven model operations
- Onboarding flow with auto-detect (no API key required)
- Non-blocking async probe — doesn't delay TUI paint
- Respects OLLAMA_HOST env var for non-default endpoints
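The auto-discovery probe above could look roughly like this (response
shape per Ollama's /api/tags; the timeout is an illustrative choice),
kicked off without awaiting so the TUI paints immediately:

```ts
// Returns locally pulled model names, or [] when no Ollama is reachable.
async function probeOllama(): Promise<string[]> {
  const host = process.env.OLLAMA_HOST ?? "http://127.0.0.1:11434";
  try {
    const res = await fetch(`${host}/api/tags`, { signal: AbortSignal.timeout(1500) });
    if (!res.ok) return [];
    const body = (await res.json()) as { models?: { name: string }[] };
    return (body.models ?? []).map((m) => m.name);
  } catch {
    return []; // no local Ollama; the provider simply stays unregistered
  }
}
```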
Core changes (minimal):
- Add "ollama" to KnownProvider in pi-ai types
- Add "ollama" key resolution in env-api-keys.ts
- Add "ollama" default model in model-resolver.ts
- Add "Ollama (Local)" to onboarding wizard with probe flow
* feat: add anthropic-vertex provider for Claude models on Google Vertex AI
Add a new anthropic-vertex provider that enables using Claude models
(Opus 4.6, Sonnet 4.6, Haiku 4.5) through Google Vertex AI using the
@anthropic-ai/vertex-sdk package. Follows the same pattern as the
existing google/google-vertex provider split.
Detection uses ANTHROPIC_VERTEX_PROJECT_ID (same env var as Claude Code)
with CLOUD_ML_REGION for region selection, falling back to us-central1.
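A sketch of that detection with the @anthropic-ai/vertex-sdk client
(the wiring here is illustrative; the SDK can also pick up these env
vars itself):

```ts
import { AnthropicVertex } from "@anthropic-ai/vertex-sdk";

const projectId = process.env.ANTHROPIC_VERTEX_PROJECT_ID; // same env var Claude Code uses
const region = process.env.CLOUD_ML_REGION ?? "us-central1";

// Provider stays inactive when no project id is configured.
const client = projectId ? new AnthropicVertex({ projectId, region }) : undefined;
```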
Extracts shared Anthropic utilities into anthropic-shared.ts (message
conversion, tool conversion, param building, stream processing) to
avoid duplication between anthropic.ts and anthropic-vertex.ts.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: add full Claude model set for anthropic-vertex provider
Add 200K context window variants for Opus 4.6 and Sonnet 4.6, plus
older models (Sonnet 4.5, Sonnet 4, Opus 4.5, Opus 4.1, Opus 4, Haiku 4.5).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add @anthropic-ai/vertex-sdk to root dependencies
Required for the published package to resolve the vertex SDK at runtime.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore: remove unnecessary comments to match codebase style
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove duplicate stream functions after rebase
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Nathan Roe <nathan.roe@carvana.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add Ollama Cloud (ollama.com) as a built-in provider with both model
hosting and web search/fetch capabilities.
Model provider:
- 13 curated models via OpenAI-compatible API (Llama 3.1, Qwen 3,
DeepSeek R1, Gemma 3, Mistral, Phi-4, GPT-OSS)
- Auth via OLLAMA_API_KEY environment variable
- Registered in onboarding, env hydration, and model resolver
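A hedged sketch of the model-provider client; the base URL is an
assumption (only OLLAMA_API_KEY auth and OpenAI compatibility are
stated above):

```ts
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://ollama.com/v1", // assumed OpenAI-compatible endpoint
  apiKey: process.env.OLLAMA_API_KEY,
});
```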
Web tool provider:
- Search via POST ollama.com/api/web_search
- Page fetch via POST ollama.com/api/web_fetch (fallback after Jina)
- Added as third search provider option alongside Tavily and Brave
- /search-provider command updated with ollama option
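The search call could look roughly like this; the endpoint and auth come
from the bullets above, while the request/response shapes are assumptions:

```ts
async function ollamaWebSearch(query: string): Promise<unknown> {
  const res = await fetch("https://ollama.com/api/web_search", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OLLAMA_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query }), // request shape is an assumption
  });
  if (!res.ok) throw new Error(`web_search failed: ${res.status}`);
  return res.json();
}
```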
Closes #430
Adds a "Custom (OpenAI-compatible)" provider option to the API key
flow in the onboarding wizard. When selected, it prompts for base URL,
API key, and model ID, then writes the config to models.json.
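What gets written might look like this; the models.json schema and key
name are assumptions, only the three prompted fields come from the flow
above:

```ts
import { writeFileSync } from "node:fs";

const entry = {
  baseUrl: "https://my-gateway.example/v1", // prompted base URL (hypothetical)
  apiKey: "sk-example",                     // prompted API key (hypothetical)
  model: "my-model-id",                     // prompted model ID (hypothetical)
};
writeFileSync("models.json", JSON.stringify({ "custom-openai": entry }, null, 2));
```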
Vendor all 4 Pi packages (tui, ai, agent-core, coding-agent) from
pi-mono v0.57.1 as @gsd/* workspace packages under packages/. This
replaces the compiled npm dependency (@mariozechner/pi-coding-agent)
and the patch-package workflow, giving direct source access for
modifications.
- Copy Pi source from pi-mono v0.57.1 into packages/
- Create workspace package.json + tsconfig.json for each package
- Rename ~240 imports from @mariozechner/pi-* to @gsd/pi-* (example below)
- Apply existing patches as source edits (setModel persist, VT input)
- Remove @mariozechner/pi-coding-agent dep and patch-package
- Update build pipeline to build packages in dependency order
- Add pi-upstream git remote for future selective syncing
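The rename is mechanical; for example (the imported name is hypothetical):

```ts
// Before: compiled npm dependency
import { createAgent } from "@mariozechner/pi-agent-core";
// After: vendored workspace package
import { createAgent } from "@gsd/pi-agent-core";
```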
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>