# Lowdeep

Fluent, resilient, type-safe AI SDK for OpenAI-compatible chat models.
Lowdeep helps you move from free-form model output to validated TypeScript objects. It uses Zod for runtime validation and retries automatically when responses do not match your schema.
- Why Lowdeep
- Installation
- Requirements
- Quick Start
- Detailed Examples
- How the Builder Works
- API Reference
- Self-Healing JSON Flow
- Provider Behavior
- Error Handling
- Development
- Security Notes
- License
## Why Lowdeep

- Fluent builder API: `lowdeep().key(...).model(...).chat(...)`
- Type-gated usage: `chat()` is only available after required setup
- Zod output validation with inferred TypeScript return types
- Optional Zod input validation before making provider requests
- Retry-on-validation-error loop with feedback sent back to the model
- Built-in conversation memory, plus custom history injection
## Installation

```bash
bun add lowdeep zod
# or
npm install lowdeep zod
```

## Requirements

- Node.js or Bun
- TypeScript >= 5
- API key for one of the supported providers
## Quick Start

```ts
import lowdeep from "lowdeep";

const ai = lowdeep()
  .key(process.env.OPENAI_API_KEY!)
  .model("gpt-4o-mini")
  .system("Be practical and concise.")
  .temperature(0.4);

const answer = await ai.chat("Explain what an API is in one paragraph.");
console.log(answer);
```

## Detailed Examples

Use this when you only need plain text and do not need schema validation.
```ts
import lowdeep from "lowdeep";

const ai = lowdeep()
  .key(process.env.GROQ_API_KEY!)
  .model("llama-3.3-70b-versatile")
  .retry(2);

const tips = await ai.chat("Give me 5 ways to learn TypeScript faster.");
console.log(tips);
```

Use an output schema when you need deterministic object shapes.
```ts
import { z } from "zod";
import lowdeep from "lowdeep";

const PlanSchema = z.object({
  topic: z.string(),
  difficulty: z.enum(["beginner", "intermediate", "advanced"]),
  steps: z.array(
    z.object({
      title: z.string(),
      estimateMinutes: z.number().int().min(1),
    }),
  ),
});

const ai = lowdeep()
  .key(process.env.OPENAI_API_KEY!)
  .model("gpt-4o-mini")
  .schema(PlanSchema)
  .retry(3);

const plan = await ai.chat("Create a React hooks study plan for beginners.");

// typed values from z.infer<typeof PlanSchema>
console.log(plan.topic);
console.log(plan.steps[0]?.title);
```

Use this when your input is also structured and must be validated before the request.
```ts
import { z } from "zod";
import lowdeep from "lowdeep";

const InputSchema = z.object({
  productName: z.string().min(2),
  audience: z.enum(["developer", "manager", "founder"]),
  tone: z.enum(["serious", "friendly"]),
});

const OutputSchema = z.object({
  headline: z.string(),
  bullets: z.array(z.string()).length(3),
  cta: z.string(),
});

const ai = lowdeep()
  .key(process.env.OPENAI_API_KEY!)
  .model("gpt-4o-mini")
  .schema(OutputSchema, InputSchema)
  .system("Return concise marketing copy.");

const ad = await ai.chat({
  productName: "Lowdeep",
  audience: "developer",
  tone: "friendly",
});

console.log(ad.headline);
console.log(ad.bullets);
console.log(ad.cta);
```

If the input does not match `InputSchema`, `chat(...)` throws immediately and no request is sent.
Lowdeep keeps conversation history in memory for the builder instance.
```ts
import lowdeep from "lowdeep";

const ai = lowdeep()
  .key(process.env.OPENAI_API_KEY!)
  .model("gpt-4o-mini");

await ai.chat("Remember this code: PX-19.");
const reply = await ai.chat("What code did I ask you to remember?");
console.log(reply);
```

Use `.use(...)` to preload context from your application state.
```ts
import lowdeep from "lowdeep";
import type { ChatCompletionMessageParam } from "openai/resources";

const history: ChatCompletionMessageParam[] = [
  { role: "system", content: "You are a strict project assistant." },
  { role: "user", content: "Project codename is Atlas." },
];

const ai = lowdeep()
  .use(history)
  .key(process.env.OPENAI_API_KEY!)
  .model("gpt-4o-mini");

console.log(await ai.chat("What is the project codename?"));
```

## How the Builder Works

Typical order:
1. `lowdeep().key(...).model(...)`
2. Optional config: `.system()`, `.temperature()`, `.retry()`, `.schema()`, `.use()`
3. `.chat(...)`

Type behavior:

- You cannot call `chat()` until `key` and `model` are configured.
- Without an output schema, the return type is plain text.
- With an output schema, the return type is inferred from Zod.
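The type gating can be modeled with interfaces that only expose `chat()` on the fully configured state. The sketch below is hypothetical: names like `builder`, `NeedsModel`, and `Ready` are illustrative, not Lowdeep's real internal types.

```typescript
// Hypothetical, simplified builder states -- not Lowdeep's actual internals.
interface NeedsModel {
  model(id: string): Ready;
}

interface Ready {
  system(text: string): Ready;
  temperature(value: number): Ready;
  chat(prompt: string): Promise<string>;
}

interface Unconfigured {
  key(apiKey: string): NeedsModel;
}

function builder(): Unconfigured {
  const state = { key: "", model: "", system: "Be a helpful assistant", temperature: 0.7 };
  const ready: Ready = {
    system(text) { state.system = text; return ready; },
    temperature(value) { state.temperature = value; return ready; },
    // A real implementation would call the provider here; this just echoes.
    async chat(prompt) { return `[${state.model}] ${prompt}`; },
  };
  return {
    key(apiKey) {
      state.key = apiKey;
      return { model(id) { state.model = id; return ready; } };
    },
  };
}
```

With this shape, calling `.chat()` before `.key(...)` and `.model(...)` is a compile-time error rather than a runtime failure.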
## API Reference

### `lowdeep()`

Creates a new builder instance.

### `.key(...)`

Sets the API key and infers the provider from the key prefix:

- `gsk_` -> `groq`
- `sk_` -> `openai`
- any other prefix -> `deepinfra`
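The prefix rules above can be sketched as follows. This is a guess at the logic based on the documented rules, not Lowdeep's actual source:

```typescript
type Provider = "groq" | "openai" | "deepinfra";

// Infers the provider from the API key prefix, mirroring the documented rules.
function inferProvider(apiKey: string): Provider {
  if (apiKey.startsWith("gsk_")) return "groq";
  if (apiKey.startsWith("sk_")) return "openai";
  return "deepinfra";
}
```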
### `.model(...)`

Sets the model id passed to the provider.

### `.system(...)`

Sets the system instruction. Default: `"Be a helpful assistant"`.

### `.temperature(...)`

Sets the temperature, from 0 to 2. Throws for values outside this range. Default: `0.7`.
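The range check might look like this. A sketch only; Lowdeep's actual error type and message may differ:

```typescript
// Validates the sampling temperature, throwing for values outside the documented 0..2 range.
function validateTemperature(value: number): number {
  if (value < 0 || value > 2) {
    throw new RangeError(`temperature must be between 0 and 2, got ${value}`);
  }
  return value;
}
```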
### `.retry(...)`

Sets the maximum retry attempts for the self-healing loop. Default: `3`.

### `.schema(outputSchema, inputSchema?)`

- `outputSchema`: validates the model response and returns a typed object
- `inputSchema`: validates the `chat(data)` payload before the provider request

### `.use(...)`

Replaces the current internal history with your own message array.

### `.chat(...)`

- If `inputSchema` exists, the input is validated first.
- If `outputSchema` is absent, returns the model text.
- If `outputSchema` exists, returns validated typed data.
A runtime `.provider("groq" | "openai" | "deepinfra")` method exists for compatibility, but key-based provider inference is the intended approach.
## Self-Healing JSON Flow

When an output schema is configured, Lowdeep:

1. Injects JSON schema guidance into the system message.
2. Requests strict JSON output.
3. Cleans the model output (including fenced JSON and reasoning tags).
4. Parses and validates with Zod.
5. On failure, appends the validation errors and retries.

If all retries fail, Lowdeep prints a warning and returns `undefined`.
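The cleaning step can be approximated like this. This is a sketch of the idea (stripping code fences and reasoning tags before parsing); the real implementation may handle more cases:

```typescript
const FENCE = "`".repeat(3); // triple backtick, built dynamically to keep this snippet readable

// Strips markdown code fences and <think> reasoning tags so the payload can be JSON.parsed.
function cleanModelOutput(raw: string): string {
  // Drop <think>...</think> reasoning blocks that some models emit.
  let text = raw.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
  if (text.startsWith(FENCE)) {
    // Unwrap an opening fence (optionally tagged "json") and the closing fence.
    text = text
      .replace(new RegExp(`^${FENCE}(?:json)?\\s*`), "")
      .replace(new RegExp(`\\s*${FENCE}$`), "");
  }
  return text.trim();
}
```

After cleaning, the text is parsed with `JSON.parse` and validated against the Zod schema; a validation failure triggers the retry with the error details appended to the conversation.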
## Provider Behavior

Supported providers:
- OpenAI
- Groq
- DeepInfra
All requests are sent through the OpenAI-compatible chat completions endpoint.
## Error Handling

Common failures:

- Provider rejects the key or model
- Temperature out of range (`< 0` or `> 2`)
- Input schema validation failure
- Output schema still invalid after all retries

Recommended pattern:
```ts
try {
  const result = await ai.chat("Return JSON with title and score");
  console.log(result);
} catch (error) {
  console.error("Lowdeep request failed:", error);
}
```

## Development

Build the package:

```bash
bun run build
```

Build output is generated in `dist/`.
## Security Notes

- Do not hardcode API keys in committed files.
- Prefer `process.env.*` for secrets.
- Rotate keys immediately if exposed.

## License

MIT