A Swift DSL for the Open Responses API -- a multi-provider, interoperable LLM interface specification. Build type-safe LLM requests with result builders, streaming, tool calling, and conversation continuity via previous_response_id.
SwiftOpenResponsesDSL provides an embedded domain-specific language for interacting with any LLM provider that implements the Open Responses API specification. Instead of manually constructing JSON payloads, you use Swift result builders to compose requests declaratively.
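For a sense of what the builders replace, a single-turn request on the wire is a JSON payload along these lines (a rough sketch; the exact field names are defined by the Open Responses specification, not this package):

```json
{
  "model": "gpt-4o",
  "input": "Hello, world!",
  "stream": false
}
```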
Key features:
- Result builders -- `@InputBuilder` for input items, `@ResponseConfigBuilder` for configuration, `@SessionBuilder` for mixed items and tools
- Streaming -- async sequence of semantic `StreamEvent` values (deltas, completions, errors)
- Tool calling -- define function tools with `FunctionToolParam`, orchestrate multi-turn tool loops with `ToolSession`
- Conversation continuity -- chain responses with `previous_response_id` instead of re-sending the full message history
- Actor-based client -- thread-safe `LLMClient` actor for concurrent usage
Add SwiftOpenResponsesDSL to your Package.swift:
```swift
dependencies: [
    .package(url: "https://github.com/RichNasz/SwiftOpenResponsesDSL.git", from: "0.1.0")
]
```

Then add it as a dependency to your target:

```swift
.target(
    name: "YourTarget",
    dependencies: ["SwiftOpenResponsesDSL"]
)
```

Create a client and send a request:

```swift
import SwiftOpenResponsesDSL

let client = try LLMClient(
    baseURL: "https://api.openai.com/v1/responses",
    apiKey: "your-api-key"
)

let request = try ResponseRequest(model: "gpt-4o", text: "Hello, world!")
let response = try await client.send(request)
print(response.firstOutputText ?? "No response")
```

Use the text initializer for simple prompts, or the `@InputBuilder` for structured conversations:
```swift
// Simple text input
let request = try ResponseRequest(model: "gpt-4o", text: "Explain Swift concurrency")
let response = try await client.send(request)
print(response.firstOutputText ?? "")
```

```swift
// Structured input with a result builder
let request = try ResponseRequest(model: "gpt-4o") {
    System("You are a helpful assistant.")
    User("What is the Open Responses API?")
}
let response = try await client.send(request)
print(response.firstOutputText ?? "")
```

Enable streaming by setting `stream: true` and using `client.stream()`:
```swift
let request = try ResponseRequest(model: "gpt-4o", stream: true, text: "Write a haiku about Swift")

for try await event in client.stream(request) {
    switch event {
    case .contentPartDelta(let delta, _, _):
        print(delta, terminator: "")
    case .responseCompleted:
        print() // newline after the final delta
    default:
        break
    }
}
```

Use `PreviousResponseId` to chain responses without re-sending the full conversation history:
```swift
let first = try ResponseRequest(model: "gpt-4o", text: "My name is Alice.")
let firstResponse = try await client.send(first)

let followUp = try ResponseRequest(model: "gpt-4o", config: {
    try PreviousResponseId(firstResponse.id)
}, text: "What is my name?")
let followUpResponse = try await client.send(followUp)
print(followUpResponse.firstOutputText ?? "")
```

Define function tools with `FunctionToolParam` and use `ToolSession` to orchestrate the tool-calling loop:
```swift
let weatherTool = FunctionToolParam(
    name: "get_weather",
    description: "Get current weather for a city",
    parameters: .object(
        properties: [
            "city": .string(description: "City name")
        ],
        required: ["city"]
    )
)

let session = ToolSession(
    client: client,
    tools: [weatherTool],
    handlers: [
        "get_weather": { arguments in
            // In a real handler, decode `arguments` (a JSON string) and call your weather source.
            return "{\"temperature\": \"72F\", \"condition\": \"sunny\"}"
        }
    ]
)

let result = try await session.run(
    model: "gpt-4o",
    input: [User("What's the weather in San Francisco?")]
)
print(result.response.firstOutputText ?? "")
```

Use SwiftLLMToolMacros to define tools with zero boilerplate. The `@LLMTool` macro synthesizes the JSON schema and argument decoding automatically:
```swift
import SwiftOpenResponsesDSL
import SwiftLLMToolMacros

/// Get the current weather for a location.
@LLMTool
struct GetCurrentWeather {
    @LLMToolArguments
    struct Arguments {
        @LLMToolGuide(description: "City and state, e.g. Alpharetta, GA")
        var location: String

        @LLMToolGuide(description: "Temperature unit", .anyOf(["celsius", "fahrenheit"]))
        var unit: String = "celsius"
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        let temp = arguments.unit == "celsius" ? "22°C" : "72°F"
        return ToolOutput(content: "{\"temperature\": \"\(temp)\"}")
    }
}

let agent = try Agent(client: client, model: "gpt-4o") {
    System("You are a weather assistant.")
    AgentTool(GetCurrentWeather())
}

let reply = try await agent.run("What's the weather in Paris?")
print(reply)
```

Requirements:

- Swift 6.2+
- macOS 13.0+ / iOS 16.0+
- Depends on SwiftLLMToolMacros 0.1.1+ for JSON Schema types and `@LLMTool` macro support
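In a consuming package, the platform floor above corresponds to a `platforms` stanza along these lines (a sketch of a standard Package.swift declaration, not something this package requires verbatim):

```swift
// In your Package.swift
platforms: [
    .macOS(.v13),
    .iOS(.v16)
]
```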
This project includes Agent Skills for AI coding assistants. Skills are optional — the package works the same without them. Skills are only useful if you use an agent that implements the agentskills.io specification (Claude Code, Cursor, Gemini CLI, etc.).
| Skill | Role | Path |
|---|---|---|
| `using-swift-open-responses-dsl` | Reference: API surface, config params, tool calling, streaming, reasoning, errors | `skills/using-swift-open-responses-dsl/SKILL.md` |
| `design-responses-app` | Process: step-by-step workflow for designing an app from requirements | `skills/design-responses-app/SKILL.md` |
The macro skills from SwiftLLMToolMacros are also relevant when defining tools:
| Skill | Role |
|---|---|
| `using-swift-llm-tool-macros` | Reference: macro API, type mapping, constraints, pitfalls |
| `design-llm-tool` | Process: step-by-step workflow for designing a tool from a description |
Adding SwiftOpenResponsesDSL as an SPM dependency does not make the skills available to your agent — SPM downloads sources into .build/checkouts/, which agents don't scan. You need to copy the skill folders into your project so your agent can discover them.
Step 1: Resolve packages (if not already done):
```shell
swift package resolve
```

Step 2: Copy all four skills into your project's `skills/` directory:
```shell
mkdir -p skills

# DSL skills (from this package)
cp -r .build/checkouts/SwiftOpenResponsesDSL/skills/using-swift-open-responses-dsl \
      skills/using-swift-open-responses-dsl
cp -r .build/checkouts/SwiftOpenResponsesDSL/skills/design-responses-app \
      skills/design-responses-app

# Macro skills (from the SwiftLLMToolMacros dependency)
cp -r .build/checkouts/SwiftLLMToolMacros/skills/using-swift-llm-tool-macros \
      skills/using-swift-llm-tool-macros
cp -r .build/checkouts/SwiftLLMToolMacros/skills/design-llm-tool \
      skills/design-llm-tool
```

This installs all four complementary skills:
| Skill | Package | Role |
|---|---|---|
| `using-swift-open-responses-dsl` | SwiftOpenResponsesDSL | Reference: DSL API surface |
| `design-responses-app` | SwiftOpenResponsesDSL | Process: designing an app |
| `using-swift-llm-tool-macros` | SwiftLLMToolMacros | Reference: macro API surface |
| `design-llm-tool` | SwiftLLMToolMacros | Process: designing a tool struct |
Install all four for the best experience, or pick only the ones relevant to your workflow.
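After copying, a quick filesystem check confirms the skill folders landed where agents expect them (a convenience sketch, not part of the package; the four folder names come from the table above):

```shell
# Count how many of the four expected skill folders contain a SKILL.md.
# Run from your project root after the copy step above.
found=0
for s in using-swift-open-responses-dsl design-responses-app \
         using-swift-llm-tool-macros design-llm-tool; do
  if [ -f "skills/$s/SKILL.md" ]; then
    echo "ok: $s"
    found=$((found + 1))
  else
    echo "missing: $s"
  fi
done
echo "$found of 4 skills installed"
```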
Claude Code automatically discovers skills from a skills/ directory at your project root. After copying the skill folders (see above), Claude Code will load them when they match your task context.
You can also invoke a skill directly during a conversation:
```
/skill using-swift-open-responses-dsl
/skill design-responses-app
```
To verify skills are available, ask Claude Code: "What skills do you have for SwiftOpenResponsesDSL?"
Tip: If you are working in a monorepo or the skills are installed in a non-standard location, you can reference them from your project's CLAUDE.md:
```markdown
## Skills

See `path/to/skills/` for Agent Skills that provide SwiftOpenResponsesDSL and SwiftLLMToolMacros API knowledge.
```

If you use an AI coding agent, consider writing WHAT and HOW specs before generating code. See docs/SpecDrivenDevelopment.md for the workflow guide and Examples/Specs/ for sample specs.
SwiftOpenResponsesDSL is available under the Apache License 2.0. See LICENSE for details.