All major LLM provider SDKs were loaded eagerly at startup, penalizing users regardless of which provider they actually use. This change defers SDK loading until the first API call for:

- @anthropic-ai/sdk (anthropic.ts)
- openai (openai-responses.ts, openai-completions.ts, azure-openai-responses.ts)
- @google/genai (google-vertex.ts)

The Bedrock provider already used this pattern. Now all 5 remaining providers use the same async lazy-loader pattern:

- Static import changed to `import type` (erased at compile time)
- Module-level `let _SdkClass` cache variable
- `async function getSdkClass()` loader with singleton caching
- `createClient()` made async, uses `await getSdkClass()`
- Call sites updated with `await createClient()`

For google-vertex.ts, ThinkingLevel enum usage was replaced with equivalent string literals to eliminate the runtime import entirely.

All packages build cleanly. The startup improvement is proportional to how many providers were installed; on typical installs this eliminates eager loading of 30-40MB of SDK code.
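The lazy-loader steps above can be sketched as follows. This is a hypothetical, self-contained illustration: `FakeSdk`, `getSdk`, `createClient`, and `importCount` are stand-in names, with the real dynamic import (e.g. of `@anthropic-ai/sdk`) shown only in comments so the sketch runs without any provider package installed.

```typescript
// Stand-in for the real SDK class, so this sketch needs no external package.
class FakeSdk {
  constructor(public readonly opts: { apiKey: string }) {}
}

// In the real code the static import becomes a type-only import,
// which is erased at compile time and loads nothing at startup:
//   import type Anthropic from "@anthropic-ai/sdk";

// Module-level cache: the dynamic import runs at most once per process.
let _Sdk: typeof FakeSdk | undefined;
let importCount = 0; // instrumentation for this sketch only

// Loader with singleton caching.
async function getSdk(): Promise<typeof FakeSdk> {
  if (!_Sdk) {
    importCount++;
    // Real code would do something like:
    //   ({ default: _Sdk } = await import("@anthropic-ai/sdk"));
    _Sdk = FakeSdk;
  }
  return _Sdk;
}

// createClient becomes async and awaits the loader.
async function createClient(apiKey: string): Promise<FakeSdk> {
  const Sdk = await getSdk();
  return new Sdk({ apiKey });
}

// Call sites now await client creation; the SDK is loaded on first use.
(async () => {
  await createClient("key-1");
  await createClient("key-2");
  console.log(importCount); // the (simulated) import ran only once
})();
```

The singleton cache matters because several call sites may create clients concurrently; checking the module-level variable before importing keeps repeated calls from paying the load cost again.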