
feat(core): coerce interpolated env vars to native types in config.yaml #1203

Merged
christso merged 3 commits into main from feat/1202-interpolate-primitive-coercion on May 1, 2026
Conversation

@christso christso commented May 1, 2026

Closes #1202

## Changes

Single file change in `packages/core/src/evaluation/interpolation.ts`:

- Added `WHOLE_VAR_PATTERN` regex to detect whole-string single-var substitutions
- Added `coercePrimitive()` to map "true"/"false" to booleans and numeric strings to numbers
- When the entire value is `${{ VAR }}`, resolve and coerce; otherwise fall through to the existing string-replacement logic

## Before / After

Before: `auto_push: ${{ AGENTV_AUTO_PUSH }}` with `AGENTV_AUTO_PUSH=true` resolved to the string `"true"`, which failed the boolean type check and silently dropped the field.

After: the same config resolves to boolean `true` and is accepted.

## Tests

Added 9 new tests in `interpolation.test.ts` (coercion suite). All 21 tests pass.

When an entire config value is a single ${{ VAR }} reference, resolve and
coerce the result to its native type: 'true'/'false' -> boolean, numeric
strings -> number. Partial/inline substitutions remain strings.

This allows boolean fields like results.export.auto_push to be driven
by environment variables:

  auto_push: ${{ AGENTV_AUTO_PUSH }}   # AGENTV_AUTO_PUSH=true works

Closes #1202
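The whole-value coercion described above can be sketched as follows. `WHOLE_VAR_PATTERN` and `coercePrimitive` are named in the PR, but the exact regex and edge-case handling in `interpolation.ts` may differ; treat this as an illustrative sketch, not the shipped code.

```typescript
// Sketch of the coercion rule: a value that is exactly one ${{ VAR }}
// reference is resolved and coerced; inline substitutions stay strings.
const WHOLE_VAR_PATTERN = /^\$\{\{\s*([A-Z0-9_]+)\s*\}\}$/;

function coercePrimitive(raw: string): string | number | boolean {
  if (raw === "true") return true;
  if (raw === "false") return false;
  // Numeric strings become numbers; anything else stays a string.
  if (raw.trim() !== "" && !Number.isNaN(Number(raw))) return Number(raw);
  return raw;
}

function interpolate(
  value: string,
  env: Record<string, string>,
): string | number | boolean {
  const match = WHOLE_VAR_PATTERN.exec(value);
  if (match) {
    // Whole-string single-var substitution: resolve, then coerce.
    return coercePrimitive(env[match[1]] ?? "");
  }
  // Partial/inline substitutions remain strings.
  return value.replace(
    /\$\{\{\s*([A-Z0-9_]+)\s*\}\}/g,
    (_whole, name) => env[name] ?? "",
  );
}
```

With this shape, `auto_push: ${{ AGENTV_AUTO_PUSH }}` yields boolean `true` when `AGENTV_AUTO_PUSH=true`, while `path: dist-${{ BUILD_ID }}` stays a plain string.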
cloudflare-workers-and-pages Bot commented May 1, 2026

Deploying agentv with Cloudflare Pages

Latest commit: bbb073a
Status: ✅  Deploy successful!
Preview URL: https://48d3ae70.agentv.pages.dev
Branch Preview URL: https://feat-1202-interpolate-primit.agentv.pages.dev


christso marked this pull request as ready for review May 1, 2026 03:15
christso merged commit 994b0bc into main May 1, 2026
4 checks passed
christso deleted the feat/1202-interpolate-primitive-coercion branch May 1, 2026 03:15
christso added a commit that referenced this pull request May 4, 2026
#1206)

* docs(plans): scope pi-ai migration spike (#1205)

Capture the actual call graph before any provider port: graders consume
provider.asLanguageModel() (Vercel LanguageModel) directly, not provider.invoke(),
so the migration needs either a Vercel LanguageModelV2 shim over pi-ai (Path A)
or a richer Provider API that drops asLanguageModel (Path B). Document the
trade-offs so the spike implementation path is decided before code lands.

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* chore: fix pre-existing import order in targets-validator

Pre-push lint was failing on a Biome organizeImports rule for
targets-validator.ts (introduced in #1203). Reorder the imports so
the lint passes — unblocks pushing from this branch.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs(plans): commit to Path B for pi-ai migration

Drop asLanguageModel() from the Provider interface; enrich Provider.invoke()
with optional `tools` + `maxSteps` and `steps` in the response so it covers
the hardest consumer (llm-grader built-in agent mode). Tools use JSON Schema
on the wire (provider-library-neutral). Document consumer migration order
(simplest first), provider port order, and open questions (Anthropic thinking
budget mapping, retry placement, token-usage shape).

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* refactor(core): port OpenAI provider + rubric-generator to pi-ai (step 1)

First consumer + first provider on Path B of #1205:

- OpenAIProvider.invoke() now calls @mariozechner/pi-ai's complete() instead
  of Vercel AI SDK's generateText. asLanguageModel() still returns the Vercel
  model so llm-grader, composite, and agentv-provider keep working until
  later steps migrate them.
- rubric-generator.ts switches from provider.asLanguageModel() + generateText()
  to provider.invoke(). It is the simplest consumer (single-shot, no tools)
  and validates the new shape end-to-end.
- pi-ai loaded via dynamic import + `any` casts, mirroring the pattern in
  pi-coding-agent.ts:250 — pi-ai's published d.ts files do not statically
  resolve named exports under NodeNext or Bundler module resolution.
- @mariozechner/pi-ai added as a regular dependency (was transitive via
  pi-coding-agent peer dep).
- chatPromptToPiContext only handles system + user roles; assistant /
  tool / function paths throw with a pointer to #1205. YAGNI for step 1 —
  later consumers (llm-grader multi-turn, tools) will add what they need.
- targets.test.ts: openai test now mocks pi-ai's complete/getModel and
  asserts those are called instead of ai-sdk's generateText.

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(core): handle assistant/tool roles + safer baseUrl/cost in pi-ai adapter

Address three review findings on the pi-ai adapter (#1205 step 1):

1. chatPromptToPiContext now passes assistant messages through and folds
   tool/function roles into prefixed assistant text, mirroring the Vercel
   path's toModelMessages. Previously turn 2+ of any multi-turn eval against
   an openai target threw on the prior turn's assistant message.

2. resolvePiModel falls back to https://api.openai.com/v1 for the openai
   provider when getModel misses and no baseUrl is configured, and throws
   a clear error otherwise. Empty baseUrl was forwarded into pi-ai's OpenAI
   client and failed opaquely.

3. mapPiResponse omits costUsd when pi-ai reports 0 (typically the fallback
   model descriptor with no pricing) instead of surfacing 0 as "free".
   Matches the Vercel path, which never sets costUsd.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
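The role folding in finding (1) can be sketched roughly as below. The message shapes are simplified stand-ins, not pi-ai's real `Message` types, and the exact prefix format used by `chatPromptToPiContext` is an assumption.

```typescript
// Simplified message shapes; the real adapter uses pi-ai's Message types.
type ChatMessage = {
  role: "system" | "user" | "assistant" | "tool" | "function";
  content: string;
};
type PiMessage = { role: "system" | "user" | "assistant"; content: string };

function chatPromptToPiContext(prompt: ChatMessage[]): PiMessage[] {
  return prompt.map((msg): PiMessage => {
    if (msg.role === "tool" || msg.role === "function") {
      // Fold tool/function output into prefixed assistant text so that
      // turn 2+ of a multi-turn transcript converts instead of throwing.
      return { role: "assistant", content: `[${msg.role} result] ${msg.content}` };
    }
    // system / user / assistant pass through unchanged.
    return { role: msg.role, content: msg.content };
  });
}
```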

* refactor(core): treat pi-ai as a normal dep — drop dynamic-import dance

Make pi-ai a first-class static dependency, like ai-sdk:
- Add @sinclair/typebox as a direct dep so pi-ai's transitive types resolve.
- Add packages/core/src/evaluation/providers/pi-ai-shim.d.ts that augments
  '@mariozechner/pi-ai' with the subset we use. Pi-ai's published d.ts has
  cross-module re-exports that don't surface at the package root under
  NodeNext (and Bundler) — only direct primary declarations leak through.
  Re-declaring just what we call gives us static imports + real types.
- ai-sdk.ts: replace `let piAiSdk: any | null` + lazy `loadPiAi()` + `as any`
  casts with plain top-level imports of `complete`, `getModel`,
  `registerBuiltInApiProviders`, and the Model/Message/AssistantMessage
  types. registerBuiltInApiProviders() runs once at module load.

The previous dynamic-import + any-cast pattern was inherited from
pi-coding-agent.ts where pi-ai is an optional peer dep. Now that pi-ai is
a real dep, that workaround was earning nothing and costing readability —
this PR drops it across the new code path. (pi-coding-agent.ts itself
keeps the lazy-load because the pi-coding-agent peer dep can be uninstalled.)

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* refactor(core): pre-resolve pi-ai Model in OpenAIProvider constructor

Lean into pi-ai's design rather than papering over it. Pi-ai treats Model
as plain data and apiKey as a per-call StreamOptions field — model and
credentials are orthogonal. Reflect that in the adapter:

- Add `private readonly piModel: PiModel` field; resolved once in the
  constructor via resolvePiModel().
- invoke() passes the prebuilt model + apiKey to invokePiAi(); no per-call
  registry lookup or field merge.
- InvokePiAiOptions shrinks from 7 fields to 5 — model is data, the call
  needs the model + auth + the request.

The previous shape rebuilt the model on every invoke from raw config
strings, conflating "what model" with "construction details" at the call
site. The new shape is both more efficient (resolve once) and more
faithful to pi-ai's API: a Model object you carry around, an apiKey you
pass when you actually call.

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(cli): declare @mariozechner/pi-ai as a runtime dep

The CLI bundles @agentv/core (noExternal), and core now imports pi-ai
directly. tsup keeps pi-ai external in the bundle (correct — it has
dynamic requires), so the published CLI needs pi-ai resolvable at
runtime. apps/cli/package.json wasn't listing it, which surfaced as
"Cannot find module '@mariozechner/pi-ai'" in CI's Validate Evals job.

Reproduces locally with `bun apps/cli/dist/cli.js validate ...`; passes
after adding the dep.

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* feat(core): teach Provider.invoke about tools + multi-step

Extend the Provider interface so invoke() can replace asLanguageModel() across
every grader call site. The new fields are additive — single-shot consumers
keep their current shape.

types.ts:
- Add ProviderTool: { name, description, parameters: JsonObject (JSON Schema),
  execute(input): unknown }
- ProviderRequest: optional tools, maxSteps
- ProviderResponse: optional steps: { count, toolCallCount }

ai-sdk.ts (invokePiAi):
- Run the agent loop when tools are provided: model turn → execute tool calls
  → next model turn, until the model stops requesting tools or maxSteps hits.
- Aggregate token usage and cost across all turns; surface step + tool counts
  on the response.
- Tool parameters flow as JSON Schema — pi-ai's openai-completions converter
  passes them through to the wire format unchanged.

pi-ai-shim.d.ts:
- Declare Tool, Context.tools so the loop typechecks.
- Declare ToolCall.thoughtSignature (set by some providers, optional).

No consumer changes yet; next commit migrates llm-grader / composite /
agentv-provider / rubric-generator off asLanguageModel onto invoke().

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
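The agent loop this commit adds to `invokePiAi` can be sketched as below. `callModel`, the tool shape, and the history representation are hypothetical stand-ins for pi-ai's `complete()` and `Tool`; the real adapter also aggregates token usage and cost per turn, which this sketch omits.

```typescript
type ToolCall = { name: string; input: unknown };
type Turn = { text: string; toolCalls: ToolCall[] };
type SimpleTool = { name: string; execute: (input: unknown) => unknown };

// Model turn -> execute tool calls -> next model turn, until the model
// stops requesting tools or maxSteps is hit.
async function agentLoop(
  callModel: (history: string[]) => Promise<Turn>,
  tools: SimpleTool[],
  maxSteps: number,
): Promise<{ text: string; stepCount: number; toolCallCount: number }> {
  const history: string[] = [];
  let toolCallCount = 0;
  let last: Turn = { text: "", toolCalls: [] };
  for (let step = 0; step < maxSteps; step++) {
    last = await callModel(history);
    history.push(last.text);
    if (last.toolCalls.length === 0) {
      // No tool requests: the loop terminates early.
      return { text: last.text, stepCount: step + 1, toolCallCount };
    }
    for (const call of last.toolCalls) {
      const tool = tools.find((t) => t.name === call.name);
      const result = tool ? tool.execute(call.input) : `unknown tool: ${call.name}`;
      // Feed the tool result back for the next model turn.
      history.push(JSON.stringify(result));
      toolCallCount++;
    }
  }
  // maxSteps reached: surface whatever the last turn produced.
  return { text: last.text, stepCount: maxSteps, toolCallCount };
}
```

The `stepCount` / `toolCallCount` pair corresponds to the `steps: { count, toolCallCount }` field added to `ProviderResponse`.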

* refactor(core): migrate all grader consumers off asLanguageModel

Every grader call site now goes through Provider.invoke(). The Vercel
LanguageModel branches are gone; provider.invoke() is the single API.

composite.ts:
- Drop the asLanguageModel + generateText branch; rely on provider.invoke()
  (which used to be the fallback path).

llm-grader.ts:
- LLM-judge mode (generateStructuredResponse): single invoke() call. Image
  inputs flow as ProviderRequest.images instead of ai-sdk image parts.
- Built-in agent mode (evaluateBuiltIn): replace generateText({tools, stopWhen})
  with provider.invoke({tools, maxSteps}); read step + tool counts off
  ProviderResponse.steps.
- Filesystem tools (createFilesystemTools) now return ProviderTool[] with
  JSON Schema parameters — no zod, no ai-sdk tool() helper.
- Drop ai-sdk imports (generateText, stepCountIs, tool); drop toAiSdkImageParts.

agentv-provider.ts:
- Was: throws on invoke(), exposes Vercel asLanguageModel().
- Now: parses provider:model into pi-ai (providerName, apiId), resolves the
  PiModel in the constructor, and routes invoke() through invokePiAi(). API
  keys come from pi-ai's env-var fallback (OPENAI_API_KEY, ANTHROPIC_API_KEY,
  GOOGLE_GENERATIVE_AI_API_KEY, ...).

ai-sdk.ts:
- Export resolvePiModel, invokePiAi, ProviderDefaults so other providers can
  be ported without copying the adapter.
- InvokePiAiOptions.apiKey is now optional (agentv provider relies on env
  fallback).
- invokePiAi handles the agent loop: tool calls → execute → next model turn,
  bounded by maxSteps. Aggregates token usage and cost across turns.

types.ts:
- ProviderRequest.images: optional ContentImage[] for multimodal grader inputs.

Tests:
- agentv-provider.test.ts: rewritten — mocks pi-ai, asserts the new
  provider:model → (providerName, modelId) routing and that invoke() calls
  pi-ai's complete().
- llm-grader-multimodal.test.ts: rewritten — verifies images flow through
  ProviderRequest.images instead of ai-sdk message parts.

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* refactor(core): drop ai-sdk entirely; all providers on pi-ai

Complete the #1205 migration. ai-sdk.ts no longer imports from @ai-sdk/* or
'ai'; all five direct-API providers (OpenAI, Azure, OpenRouter, Anthropic,
Gemini) route through the same invokePiAi() adapter.

Provider classes (ai-sdk.ts):
- All five resolve a pi-ai PiModel in their constructor and delegate invoke()
  to invokePiAi.
- Vercel `this.model` field, createOpenAI()/createAzure()/etc., and
  asLanguageModel() are gone.
- AnthropicProvider passes thinkingBudget through pi-ai's Anthropic-specific
  options as { thinkingEnabled, thinkingBudgetTokens } — no lossy bucket
  mapping for older models. Newer models (Opus/Sonnet 4.6) ignore it in
  favour of adaptive thinking, same as before.
- AzureProvider routes through pi-ai's azure-openai-responses for both
  apiFormat values. Behavior change: the legacy Vercel path used
  /chat/completions for apiFormat='chat' (default); pi-ai uses /responses
  for everything. Functionally equivalent for grader use cases. Users who
  hit a deployment that only exposes /chat/completions can route through
  `provider: openai` with a deployment-scoped baseURL instead.

Provider interface (types.ts):
- Drop asLanguageModel?(); the Vercel LanguageModel reference is gone.

invokePiAi:
- Now accepts providerOptions: Record<string, unknown> for provider-specific
  knobs (Anthropic thinking, Azure URL config). Pi-ai's
  ProviderStreamOptions = StreamOptions & Record<string, unknown> forwards
  these to the underlying provider impl.

Tests:
- targets.test.ts: dropped @ai-sdk/* / ai / @openrouter/ai-sdk-provider
  module mocks. createProvider tests now assert pi-ai routing
  (providerName + apiId + baseUrl + provider-specific options).

Dependencies removed:
- packages/core: @ai-sdk/anthropic, @ai-sdk/azure, @ai-sdk/google,
  @ai-sdk/openai, ai
- apps/cli: @ai-sdk/openai
- root: @openrouter/ai-sdk-provider

Verification:
- Build / typecheck / lint / 1741 unit tests all green.
- Live eval: examples/features/rubric/evals/dataset.eval.yaml run with
  target=openai routed via OpenRouter. All 3 grader-score baselines pass:
    ✓ code-quality-multi-eval / rubrics: 0.5 ∈ [0.3, 1]
    ✓ code-explanation-simple / rubrics: 1.0 ∈ [0.8, 1]
    ✓ technical-writing-detailed / rubrics: 1.0 ∈ [0.8, 1]

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* chore(core): rename ai-sdk.ts → llm-providers.ts; add pi-ai-shim sync check

Two cleanups closing out the #1205 migration:

1. Rename providers/ai-sdk.ts → providers/llm-providers.ts. The file is no
   longer the Vercel AI SDK adapter; it owns the five direct-API LLM provider
   classes (OpenAI, OpenRouter, Anthropic, Gemini, Azure) and delegates to
   pi-ai. Keeping the old name was misleading. `llm-providers.ts` also
   distinguishes from the agent providers (claude.ts, codex.ts, etc.) in the
   same directory. Updated callers in agentv-provider.ts and providers/index.ts.

2. Add scripts/check-pi-ai-shim.ts + a pre-push prek hook + bun script alias.
   The shim re-declares pi-ai's public surface so our static imports resolve
   under NodeNext (pi-ai's cross-module re-exports don't bubble up through
   `export * from`). If pi-ai ships a breaking change — renamed field,
   removed function — TypeScript stays happy against the shim while the
   runtime drifts. The check parses both d.ts files (regex + brace counting),
   confirms every interface name + field name in our shim exists upstream,
   and likewise for exported function names. Field types are not compared —
   too much surface for too little value; type-level breakage would surface
   in llm-providers.ts compilation, and runtime presence is exercised by
   the unit-test suite.

   Wired into .pre-commit-config.yaml as `check-pi-ai-shim` (pre-push) and
   exposed as `bun run check:pi-ai-shim` for manual runs.

   Verified the failure path by injecting a fake field into the shim — the
   script exits non-zero with a clear "interface X declares field Y not in
   upstream" message.

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(core): root-cause pi-ai type resolution; delete shim

The pi-ai-shim.d.ts wasn't working around a pi-ai bug — it was working
around a stale `declare module '@mariozechner/pi-ai'` block in our own
src/types/pi-sdk.d.ts that declared just `getModel(...): unknown`.

That stub was added when pi-ai was an optional peer-dep accessed via
dynamic import in pi-coding-agent.ts. When pi-ai became a direct dep with
its own published types, the stub started colliding: TypeScript merged
our `declare module` block with the real one and shadowed/dropped most
of pi-ai's exports (complete, Model, AssistantMessage, ...) — but only
when the full src/ tree was compiled, which is why it didn't reproduce
in a minimal project.

Confirmed the diagnosis by removing the stub block and watching pi-ai's
imports resolve cleanly with no other changes. The pi-ai-shim.d.ts and
the @sinclair/typebox direct dep we added were both unnecessary
workarounds for this self-inflicted issue.

Changes:
- src/types/pi-sdk.d.ts: drop the `declare module '@mariozechner/pi-ai'`
  block entirely. Keep the pi-coding-agent block (still a real optional
  peer-dep stub). Header comment now warns against re-adding a pi-ai
  block.
- src/evaluation/providers/pi-ai-shim.d.ts: deleted.
- src/evaluation/providers/llm-providers.ts: import pi-ai's real types.
  Add boundary casts where pi-ai's typed registry meets our runtime
  strings (PiKnownProvider for getModel's provider arg, `as never` for
  modelId, `as unknown as PiTool[]` for our JSON-Schema tools fed into
  pi-ai's TypeBox-typed parameters slot — pi-ai's openai-completions
  converter passes parameters through as JSON Schema unchanged).
- packages/core/package.json: drop @sinclair/typebox direct dep.
- scripts/check-pi-ai-shim.ts: deleted (no shim to validate).
- .pre-commit-config.yaml: drop the check-pi-ai-shim hook.
- package.json: drop the check:pi-ai-shim script.

Verified: typecheck / lint / 1741 unit tests / live UAT through
OpenRouter all green with no shim and pi-ai's real types in use.

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* chore(core): freshen comments + per-provider fallback metadata + cast docs

Three small follow-ups on the pi-ai migration:

1. llm-grader.ts: comments at line 208/474/478 still referenced "AI SDK
   generateText" / "Vercel AI SDK generateText()". Updated to describe the
   actual code path: provider.invoke() with filesystem tools, agent loop
   driven by pi-ai through the agentv provider.

2. llm-providers.ts: `resolvePiModel`'s synthesized fallback Model used a
   single hardcoded `contextWindow: 128000 / maxTokens: 16384` for every
   unknown (provider, modelId). These fields are metadata only — pi-ai
   uses them for cost telemetry, not to cap the API call (the real
   request size comes from StreamOptions.maxTokens, which we omit unless
   the caller set request.maxOutputTokens). Replaced with per-provider
   defaults via `defaultModelMetadata()`:
     - openai / azure-openai-responses: 400K / 128K (gpt-5 family)
     - anthropic: 200K / 32K (claude 4.x)
     - google: 1M / 64K (gemini 2.5)
     - openrouter: 200K / 32K
     - default: 128K / 16K
   Bump these if a custom gateway routes to bigger windows.

3. llm-providers.ts: tightened the two boundary casts with one-line "why
   safe" explanations citing the upstream proof:
     - `as unknown as PiTool[]` — pi-ai/dist/providers/openai-completions.js
       convertTools() forwards `parameters` unchanged ("TypeBox already
       generates JSON Schema").
     - `piGetModel(... as PiKnownProvider, ... as never)` —
       pi-ai/dist/models.js getModel() is a plain Map lookup that accepts
       any string and returns undefined on miss; the casts satisfy the
       generic constraint without changing runtime behavior. Also fixed
       the comment's "throws otherwise" → returns undefined, and made the
       cast `PiModel | undefined` to match.

Verified: typecheck / lint / 1741 unit tests / live UAT through OpenRouter
all green.

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* chore(core): simplify resolvePiModel fallback to universal 128K/16K

The per-provider defaultModelMetadata table was over-engineered. On the
complete()/streamOpenAICompletions code path we use, pi-ai only sets
max_tokens when the caller passes StreamOptions.maxTokens — model.maxTokens
is not consulted. Pi-ai's *simple* options builder
(simple-options.js:buildBaseOptions) does fall back to
Math.min(model.maxTokens, 32000) for the completeSimple/streamSimple path,
but we don't currently call that path.

Replace the switch statement with a universal { contextWindow: 128000,
maxTokens: 16384 } matching pi-coding-agent's ModelRegistry choice for
custom models — same numbers across both shims keeps behavior consistent
when callers eventually mix the two SDKs.

Comment now honestly describes pi-ai's actual maxTokens consumption: not
"metadata only", but "metadata on our path; would be a fallback ceiling
on the *Simple path we don't use".

Refs #1205

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
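The simplified fallback reads roughly as below. The field names follow pi-ai's `Model` shape loosely and are illustrative; only the two numbers are taken from the commit.

```typescript
// Universal fallback metadata, matching pi-coding-agent's ModelRegistry
// choice for custom models. On the complete() path these numbers are
// cost-telemetry metadata, not a request cap; they would only act as a
// fallback ceiling on the *Simple path, which this code does not use.
const FALLBACK_MODEL_METADATA = { contextWindow: 128_000, maxTokens: 16_384 };

function fallbackModel(provider: string, modelId: string) {
  return { provider, id: modelId, ...FALLBACK_MODEL_METADATA };
}
```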

* docs: add MiMo targets to targets.yaml; document max_output_tokens for custom providers

* docs: add MiMo direct API target with Bitwarden key; update targets.yaml template

* docs: remove bws references from targets.yaml templates

* chore(deps): bump @mariozechner/pi-ai ^0.62.0 → ^0.72.1

Pinned in both packages/core/package.json and apps/cli/package.json (the
two places that consume pi-ai's runtime). 10 minor versions of upstream
fixes and additions; no breaking changes for our adapter — index.d.ts
shape is unchanged on the named exports we use (complete, getModel,
registerBuiltInApiProviders) and the Model / Tool / Message / AssistantMessage
types still match our cast assumptions in llm-providers.ts.

Verified:
- typecheck / lint / 1741 unit tests all green
- live UAT: generateRubrics through OpenAIProvider routed at OpenRouter
  returns 6 valid rubrics

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs: remove spike plan doc (content moved to #1205)

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>