IMPORTANT: DO NOT WRITE COMMENTS INTO THE BODY OF ANY FUNCTIONS.
ReqLLM is a composable Elixir library for AI interactions built on Req, providing a unified interface to AI providers through a plugin-based architecture. The library uses OpenAI Chat Completions as the baseline API standard, with providers implementing translation layers for non-compatible APIs.
- `mix test` - Run all tests using cached fixtures
- `mix test test/req_llm_test.exs` - Run specific test file
- `mix test --only describe:"model/1 top-level API"` - Run specific describe block
- `LIVE=true mix test` - Run against real APIs and (re)generate fixtures
- `mix compile` - Compile the project
- `mix quality` or `mix q` - Run quality checks (format, compile --warnings-as-errors, dialyzer, credo)
ReqLLM uses structured key/value tags for precise test filtering:
Tag Dimensions:
- `category` - Test type (`:core`, `:streaming`, `:tools`, `:embedding`)
- `provider` - LLM provider (`:anthropic`, `:openai`, `:google`, `:groq`, `:openrouter`, `:xai`)
Examples:
- `mix test --only "category:core"` - Run all core tests
- `mix test --only "provider:anthropic"` - Run Anthropic tests only
- `mix test --only "category:core" --only "provider:openrouter"` - Run OpenRouter core tests
- `LIVE=true mix test --only "category:core" --only "provider:anthropic"` - Regenerate Anthropic core fixtures
- `mix format` - Format Elixir code
- `mix format --check-formatted` - Check if code is properly formatted
- `mix dialyzer` - Run Dialyzer type analysis
- `mix credo --strict` - Run Credo linting (includes custom rule to enforce no comments in function bodies)
- `lib/req_llm.ex` - Main API facade with `generate_text/3`, `stream_text/3`, `generate_object/4`
- `lib/req_llm/` - Core modules (Model, Provider, Error structures)
- `lib/req_llm/providers/` - Provider-specific implementations (Anthropic, OpenAI, etc.)
- `test/` - Three-tier testing architecture (see `test/AGENTS.md` for detailed testing guide)
- `test/req_llm/` - Core package tests (NO API calls, unit tests with mocks)
- `test/provider/` - Mocked provider-specific tests (NO API calls, tests provider nuances)
- `test/coverage/` - Live API coverage tests (fixture-based, high-level API only)
- `test/support/` - Shared helpers (live fixtures, HTTP mocks, test macros)
- `ReqLLM.Context` - Conversation history as a collection of messages
- `ReqLLM.Message` - Single conversation message with multi-modal content support
- `ReqLLM.Message.ContentPart` - Individual content piece (text, image, tool call, etc.)
- `ReqLLM.Tool` - Function calling definition with schema and callback
- `ReqLLM.StreamChunk` - Unified streaming response format across providers
- `ReqLLM.Model` - AI model configuration with provider and parameters
- `ReqLLM.Response` - High-level LLM response with context and metadata
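These structures compose at the call site roughly as follows. This is a sketch only: `generate_text/3` and the `"provider:model"` spec string come from this guide, but the exact `Context`/`Message` field names are assumptions, so check the module docs before relying on them.

```elixir
context = %ReqLLM.Context{
  messages: [
    %ReqLLM.Message{role: :system, content: "You are terse."},
    %ReqLLM.Message{role: :user, content: "Hello!"}
  ]
}

{:ok, response} = ReqLLM.generate_text("openai:gpt-4o", context)
```

The same facade also accepts a bare prompt string in place of a full context, as the test example at the end of this guide shows.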
- Each provider implements the `ReqLLM.Provider` behaviour with callbacks:
  - `prepare_request/4` - Configure operation-specific requests (non-streaming only)
  - `attach/3` - Set up authentication and Req pipeline steps (non-streaming only)
  - `encode_body/1` - Transform context to provider JSON (non-streaming only)
  - `decode_response/1` - Parse API responses (non-streaming only)
  - `attach_stream/4` - Build complete Finch streaming request (streaming only, optional)
  - `decode_sse_event/2` - Decode provider SSE events to `StreamChunk` structs (streaming only, optional)
  - `extract_usage/2` - Extract usage/cost data (optional)
  - `translate_options/3` - Provider-specific parameter translation (optional)
- Providers use the `ReqLLM.Provider.DSL` macro for registration and metadata loading
- Non-streaming: Core API uses the provider's `attach/3` to compose Req requests with provider-specific steps
- Streaming: Uses Finch with the provider's `attach_stream/4` to build streaming requests and `decode_sse_event/2` to parse SSE events
- Options Translation: Providers can implement `translate_options/3` to handle model-specific parameter requirements (e.g., OpenAI o1 models require `max_completion_tokens` instead of `max_tokens`)
- Provider callbacks handle encoding/decoding of requests and responses
- Built-in defaults provide OpenAI-style wire format handling
- Providers can override `encode_body/1` and `decode_response/1` for custom formats
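A skeletal provider override might look like the following sketch. Only the callback names and arities are taken from the list above; the module name, the (likely required) `ReqLLM.Provider.DSL` arguments, and the pass-through bodies are illustrative assumptions — use an existing module under `lib/req_llm/providers/` as the real template.

```elixir
defmodule ReqLLM.Providers.MyProvider do
  @moduledoc false

  use ReqLLM.Provider.DSL

  @behaviour ReqLLM.Provider

  @impl true
  def encode_body(request) do
    request
  end

  @impl true
  def decode_response(response) do
    response
  end
end
```

Identity pass-throughs are shown because the built-in defaults already speak the OpenAI wire format; a real provider would transform the request body and response here.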
- Follow standard Elixir conventions and use `mix format` for consistent formatting
- Use `@moduledoc` and `@doc` for comprehensive documentation
- Prefer pattern matching over conditionals where possible
- Use `{:ok, result}`/`{:error, reason}` tuple returns for fallible operations
- No inline comments in function bodies - code should be self-explanatory through clear naming and structure
- Minimize imports; prefer explicit module calls (e.g., `ReqLLM.Model.from/1`)
- Group deps in `mix.exs`: runtime deps first, then dev/test deps with `only: [:dev, :test]`
- Use TypedStruct for structured data with `@type` definitions
- Validate options with NimbleOptions schemas in public APIs
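The TypedStruct and NimbleOptions conventions combine roughly as below. The module and option names are hypothetical; the library calls (`typedstruct`, `NimbleOptions.new!/1`, `NimbleOptions.validate/2`) are the libraries' documented APIs.

```elixir
defmodule MyApp.GenerationOpts do
  use TypedStruct

  typedstruct do
    field :temperature, float(), default: 1.0
    field :max_tokens, pos_integer()
  end
end

defmodule MyApp.API do
  @schema NimbleOptions.new!(
            temperature: [type: :float, default: 1.0],
            max_tokens: [type: :pos_integer]
          )

  @spec validate_opts(keyword()) ::
          {:ok, keyword()} | {:error, NimbleOptions.ValidationError.t()}
  def validate_opts(opts) do
    NimbleOptions.validate(opts, @schema)
  end
end
```

`NimbleOptions.validate/2` returns the `{:ok, result}`/`{:error, reason}` tuple shape mandated above, so it slots directly into `with` pipelines in public API functions.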
- Use Splode for structured error handling with specific error types
- Return `{:ok, result}` or `{:error, %ReqLLM.Error{}}` tuples
- Use Splode error types: `ReqLLM.Error.API`, `ReqLLM.Error.Parse`, `ReqLLM.Error.Auth`
- Include helpful error messages and context in error structs
- Tests are grouped by capability, not by individual function call
- All suites use `ReqLLM.Test.LiveFixture.use_fixture/3` to abstract live vs cached responses
- Cached JSON fixtures live next to the test in `fixtures/<provider>/<test_name>.json` and are automatically written when the `LIVE=true` env var is set
- Most suites run `async: true`; suites that write fixtures run synchronously (`async: false`)
```elixir
defmodule CoreTest do
  use ReqLLM.Test.LiveFixture, provider: :openai
  use ExUnit.Case, async: true

  describe "generate_text/3" do
    test "basic happy-path" do
      {:ok, text} =
        use_fixture(:provider, "core-basic", fn ->
          ReqLLM.generate_text("openai:gpt-4o", "Hello!")
        end)

      assert text =~ "Hello"
    end
  end
end
```