singularity-forge/packages/pi-ai
Jeremy McSpadden 6d77724378 perf: lazy-load LLM provider SDKs to reduce startup time
All major LLM provider SDKs were loaded eagerly at startup, penalizing
users regardless of which provider they actually used. This change defers
SDK loading until the first API call for:

- @anthropic-ai/sdk (anthropic.ts)
- openai (openai-responses.ts, openai-completions.ts, azure-openai-responses.ts)
- @google/genai (google-vertex.ts)

The Bedrock provider already used this pattern. Now all 5 remaining
providers use the same async lazy-loader pattern:
- Static import changed to `import type` (erased at compile time)
- Module-level `let _SdkClass` cache variable
- `async function getSdkClass()` loader with singleton caching
- `createClient()` made async, uses `await getSdkClass()`
- Call sites updated with `await createClient()`
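The steps above can be sketched roughly as follows. All names here are illustrative, not the actual pi-ai source: `loadHeavySdk` is a local stand-in for the dynamic `import()` of a real provider SDK (e.g. `@anthropic-ai/sdk`), with a load counter so the singleton caching is observable:

```typescript
// Counts how many times the "SDK" is loaded; a correct lazy loader hits this once.
let loadCount = 0;

// Stand-in for `await import("@anthropic-ai/sdk")` — a real loader would do a
// dynamic import of the provider package here.
async function loadHeavySdk() {
  loadCount++;
  return class HeavySdk {
    apiKey: string;
    constructor(apiKey: string) {
      this.apiKey = apiKey;
    }
  };
}

type HeavySdkClass = Awaited<ReturnType<typeof loadHeavySdk>>;

// Module-level cache variable: the class is resolved at most once per process.
let _HeavySdk: HeavySdkClass | undefined;

// Async lazy loader with singleton caching.
async function getHeavySdk(): Promise<HeavySdkClass> {
  if (!_HeavySdk) {
    _HeavySdk = await loadHeavySdk();
  }
  return _HeavySdk;
}

// createClient becomes async; call sites change to `await createClient(...)`.
export async function createClient(apiKey: string) {
  const HeavySdk = await getHeavySdk();
  return new HeavySdk(apiKey);
}
```

Because the loader result is cached at module level, repeated `createClient()` calls pay the import cost only on the first call; subsequent calls reuse the cached class.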

For google-vertex.ts, the ThinkingLevel enum usage was replaced with
equivalent string literals, eliminating the runtime import entirely.
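The enum replacement works because a TypeScript enum compiles to a runtime object, so importing it keeps the module (and anything it pulls in) on the startup path, while a string-literal union type-checks the same values and is fully erased. A minimal sketch, with illustrative member names rather than the actual `@google/genai` API:

```typescript
// Before (forces a runtime import of the SDK just for the enum object):
//   import { ThinkingLevel } from "@google/genai";
//   const level = ThinkingLevel.HIGH;

// After: a local string-literal union — purely a type, erased at compile time,
// so no runtime import is needed. Member names here are hypothetical.
type ThinkingLevel = "LOW" | "MEDIUM" | "HIGH";

const level: ThinkingLevel = "HIGH";
```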

All packages build cleanly. The startup improvement is proportional to
how many provider SDKs are installed; on a typical install this eliminates
eager loading of 30-40MB of SDK code.
2026-03-16 18:33:24 -05:00
src perf: lazy-load LLM provider SDKs to reduce startup time 2026-03-16 18:33:24 -05:00
bedrock-provider.d.ts feat: vendor Pi source into workspace monorepo 2026-03-12 21:55:17 -06:00
bedrock-provider.js feat: vendor Pi source into workspace monorepo 2026-03-12 21:55:17 -06:00
package.json fix: add missing type declarations for typecheck 2026-03-16 12:29:45 -04:00
pnpm-lock.yaml fix: type errors in claude-import.ts and marketplace-discovery.ts 2026-03-16 14:46:31 -04:00
tsconfig.json feat: vendor Pi source into workspace monorepo 2026-03-12 21:55:17 -06:00