diff --git a/.agents/skills/agents/SKILL.md b/.agents/skills/agents/SKILL.md
index 352e7b0..61c2b4f 100644
--- a/.agents/skills/agents/SKILL.md
+++ b/.agents/skills/agents/SKILL.md
@@ -3,7 +3,11 @@ name: agents
description: Build voice AI agents with ElevenLabs. Use when creating voice assistants, customer service bots, interactive voice characters, or any real-time voice conversation experience.
license: MIT
compatibility: Requires internet access and an ElevenLabs API key (ELEVENLABS_API_KEY).
-metadata: {"openclaw": {"requires": {"env": ["ELEVENLABS_API_KEY"]}, "primaryEnv": "ELEVENLABS_API_KEY"}}
+metadata:
+ {
+ 'openclaw':
+ { 'requires': { 'env': ['ELEVENLABS_API_KEY'] }, 'primaryEnv': 'ELEVENLABS_API_KEY' }
+ }
---
# ElevenLabs Agents Platform
@@ -58,22 +62,22 @@ agent = client.conversational_ai.agents.create(
### JavaScript
```javascript
-import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
+import { ElevenLabsClient } from '@elevenlabs/elevenlabs-js';
const client = new ElevenLabsClient();
const agent = await client.conversationalAi.agents.create({
- name: "My Assistant",
+ name: 'My Assistant',
conversationConfig: {
agent: {
- firstMessage: "Hello! How can I help?",
- language: "en",
+ firstMessage: 'Hello! How can I help?',
+ language: 'en',
prompt: {
- prompt: "You are a helpful assistant.",
- llm: "gemini-2.0-flash",
+ prompt: 'You are a helpful assistant.',
+ llm: 'gemini-2.0-flash',
temperature: 0.7
}
},
- tts: { voiceId: "JBFqnCBsd6RMkjVDRZzb" }
+ tts: { voiceId: 'JBFqnCBsd6RMkjVDRZzb' }
}
});
```
@@ -89,25 +93,28 @@ curl -X POST "https://api.elevenlabs.io/v1/convai/agents/create" \
## Starting Conversations
**Server-side (Python):** Get signed URL for client connection:
+
```python
signed_url = client.conversational_ai.conversations.get_signed_url(agent_id="your-agent-id")
```
**Client-side (JavaScript):**
+
```javascript
-import { Conversation } from "@elevenlabs/client";
+import { Conversation } from '@elevenlabs/client';
const conversation = await Conversation.startSession({
- agentId: "your-agent-id",
- onMessage: (msg) => console.log("Agent:", msg.message),
- onUserTranscript: (t) => console.log("User:", t.message),
+ agentId: 'your-agent-id',
+ onMessage: (msg) => console.log('Agent:', msg.message),
+ onUserTranscript: (t) => console.log('User:', t.message),
onError: (e) => console.error(e)
});
```
**React Hook:**
+
```typescript
-import { useConversation } from "@elevenlabs/react";
+import { useConversation } from '@elevenlabs/react';
const conversation = useConversation({ onMessage: (msg) => console.log(msg) });
// Get signed URL from backend, then:
@@ -116,13 +123,13 @@ await conversation.startSession({ signedUrl: token });
## Configuration
-| Provider | Models |
-|----------|--------|
-| OpenAI | `gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo` |
-| Anthropic | `claude-sonnet-4-5`, `claude-sonnet-4`, `claude-haiku-4-5`, `claude-3-7-sonnet`, `claude-3-5-sonnet`, `claude-3-haiku` |
-| Google | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`, `gemini-2.0-flash`, `gemini-2.0-flash-lite` |
-| ElevenLabs | `glm-45-air-fp8`, `qwen3-30b-a3b`, `gpt-oss-120b` |
-| Custom | `custom-llm` (bring your own endpoint) |
+| Provider | Models |
+| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
+| OpenAI | `gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo` |
+| Anthropic | `claude-sonnet-4-5`, `claude-sonnet-4`, `claude-haiku-4-5`, `claude-3-7-sonnet`, `claude-3-5-sonnet`, `claude-3-haiku` |
+| Google | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`, `gemini-2.0-flash`, `gemini-2.0-flash-lite` |
+| ElevenLabs | `glm-45-air-fp8`, `qwen3-30b-a3b`, `gpt-oss-120b` |
+| Custom | `custom-llm` (bring your own endpoint) |
**Popular voices:** `JBFqnCBsd6RMkjVDRZzb` (George), `EXAVITQu4vr4xnSDxMaL` (Sarah), `onwK4e9ZLuTAKqWW03F9` (Daniel), `XB0fDUnXU5powFXDhCwa` (Charlotte)
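For orientation, a model from the provider table and a voice from the list above plug into `conversation_config` like this (a minimal sketch mirroring the creation examples; the values are illustrative, not recommendations):

```python
# Minimal sketch: pair an LLM from the provider table with a listed voice
conversation_config = {
    "agent": {
        "first_message": "Hello! How can I help?",
        "language": "en",
        "prompt": {"prompt": "You are a helpful assistant.", "llm": "gemini-2.0-flash"},
    },
    "tts": {"voice_id": "JBFqnCBsd6RMkjVDRZzb"},  # George
}
```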
@@ -155,12 +162,13 @@ Extend agents with webhook, client, or built-in system tools. Tools are defined
```
**Client tools** run in browser:
+
```javascript
clientTools: {
show_product: async ({ productId }) => {
- document.getElementById("product").src = `/products/${productId}`;
+ document.getElementById('product').src = `/products/${productId}`;
return { success: true };
  }
}
```
@@ -170,7 +178,11 @@ See [Client Tools Reference](references/client-tools.md) for complete documentat
```html
-<elevenlabs-convai agent-id="your-agent-id"></elevenlabs-convai>
-<script src="https://unpkg.com/@elevenlabs/convai-widget-embed" async type="text/javascript"></script>
+<elevenlabs-convai agent-id="your-agent-id"></elevenlabs-convai>
+<script
+  src="https://unpkg.com/@elevenlabs/convai-widget-embed"
+  async
+  type="text/javascript"
+></script>
```
Customize with attributes: `avatar-image-url`, `action-text`, `start-call-text`, `end-call-text`.
@@ -196,9 +208,9 @@ print(f"Call initiated: {response.conversation_id}")
```javascript
const response = await client.conversationalAi.twilio.outboundCall({
- agentId: "your-agent-id",
- agentPhoneNumberId: "your-phone-number-id",
- toNumber: "+1234567890",
+ agentId: 'your-agent-id',
+ agentPhoneNumberId: 'your-phone-number-id',
+ toNumber: '+1234567890'
});
```
diff --git a/.agents/skills/agents/references/agent-configuration.md b/.agents/skills/agents/references/agent-configuration.md
index 03d6b98..6405e03 100644
--- a/.agents/skills/agents/references/agent-configuration.md
+++ b/.agents/skills/agents/references/agent-configuration.md
@@ -50,14 +50,14 @@ conversation_config={
}
```
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
-| `first_message` | string | `""` | What the agent says when conversation starts |
-| `language` | string | `"en"` | ISO 639-1 language code (en, es, fr, etc.) |
-| `disable_first_message_interruptions` | bool | `false` | Prevent user from interrupting the first message |
-| `hinglish_mode` | bool | `false` | When enabled and language is Hindi, agent responds in Hinglish |
-| `dynamic_variables` | object | - | Config with `dynamic_variable_placeholders` containing key-value pairs |
-| `prompt` | object | - | LLM configuration (see prompt section below) |
+| Field | Type | Default | Description |
+| ------------------------------------- | ------ | ------- | ---------------------------------------------------------------------- |
+| `first_message` | string | `""` | What the agent says when conversation starts |
+| `language` | string | `"en"` | ISO 639-1 language code (en, es, fr, etc.) |
+| `disable_first_message_interruptions` | bool | `false` | Prevent user from interrupting the first message |
+| `hinglish_mode` | bool | `false` | When enabled and language is Hindi, agent responds in Hinglish |
+| `dynamic_variables` | object | - | Config with `dynamic_variable_placeholders` containing key-value pairs |
+| `prompt` | object | - | LLM configuration (see prompt section below) |
### tts (Text-to-Speech)
@@ -75,28 +75,28 @@ conversation_config={
}
```
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
-| `voice_id` | string | `"cjVigY5qzO86Huf0OWal"` | Voice to use |
-| `model_id` | string | - | TTS model (see below) |
-| `stability` | float | `0.5` | 0-1, lower = more expressive |
-| `similarity_boost` | float | `0.8` | 0-1, higher = closer to original voice |
-| `speed` | float | `1.0` | 0.7-1.2, speech speed multiplier |
-| `optimize_streaming_latency` | int | - | 0-4, higher = faster but lower quality |
-| `expressive_mode` | bool | `true` | Enable expressive voice generation |
-| `agent_output_audio_format` | string | - | Output audio codec format |
-| `pronunciation_dictionary_locators` | array | - | Pronunciation overrides |
+| Field | Type | Default | Description |
+| ----------------------------------- | ------ | ------------------------ | -------------------------------------- |
+| `voice_id` | string | `"cjVigY5qzO86Huf0OWal"` | Voice to use |
+| `model_id` | string | - | TTS model (see below) |
+| `stability` | float | `0.5` | 0-1, lower = more expressive |
+| `similarity_boost` | float | `0.8` | 0-1, higher = closer to original voice |
+| `speed` | float | `1.0` | 0.7-1.2, speech speed multiplier |
+| `optimize_streaming_latency` | int | - | 0-4, higher = faster but lower quality |
+| `expressive_mode` | bool | `true` | Enable expressive voice generation |
+| `agent_output_audio_format` | string | - | Output audio codec format |
+| `pronunciation_dictionary_locators` | array | - | Pronunciation overrides |
**Available TTS models for agents:**
-| Model ID | Languages | Latency |
-|----------|-----------|---------|
-| `eleven_flash_v2_5` | 32 | ~75ms (recommended) |
-| `eleven_flash_v2` | English | ~75ms |
-| `eleven_turbo_v2_5` | 32 | ~250-300ms |
-| `eleven_turbo_v2` | English | ~250-300ms |
-| `eleven_multilingual_v2` | 29 | Standard |
-| `eleven_v3_conversational` | 70+ | Standard |
+| Model ID | Languages | Latency |
+| -------------------------- | --------- | ------------------- |
+| `eleven_flash_v2_5` | 32 | ~75ms (recommended) |
+| `eleven_flash_v2` | English | ~75ms |
+| `eleven_turbo_v2_5` | 32 | ~250-300ms |
+| `eleven_turbo_v2` | English | ~250-300ms |
+| `eleven_multilingual_v2` | 29 | Standard |
+| `eleven_v3_conversational` | 70+ | Standard |
### asr (Automatic Speech Recognition)
@@ -110,12 +110,12 @@ conversation_config={
}
```
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
-| `quality` | string | `"high"` | Transcription quality level |
-| `provider` | string | `"elevenlabs"` | ASR provider (`elevenlabs` or `scribe_realtime`) |
-| `keywords` | array | - | Words to boost recognition accuracy |
-| `user_input_audio_format` | string | - | Input audio format (e.g., `pcm_16000`, `ulaw_8000`) |
+| Field | Type | Default | Description |
+| ------------------------- | ------ | -------------- | --------------------------------------------------- |
+| `quality` | string | `"high"` | Transcription quality level |
+| `provider` | string | `"elevenlabs"` | ASR provider (`elevenlabs` or `scribe_realtime`) |
+| `keywords` | array | - | Words to boost recognition accuracy |
+| `user_input_audio_format` | string | - | Input audio format (e.g., `pcm_16000`, `ulaw_8000`) |
### turn (Turn-Taking)
@@ -129,23 +129,23 @@ conversation_config={
}
```
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
-| `turn_timeout` | number | `7` | Seconds to wait before re-engaging the user |
-| `turn_eagerness` | string | `"normal"` | How quickly agent responds: `patient`, `normal`, or `eager` |
-| `silence_end_call_timeout` | number | `-1` | Seconds of silence before ending call (-1 = disabled) |
-| `initial_wait_time` | number | - | Seconds to wait for user to start speaking |
-| `spelling_patience` | string | `"auto"` | Entity detection patience: `auto` or `off` |
-| `speculative_turn` | bool | `false` | Enable speculative turn detection |
-| `soft_timeout_config` | object | - | Configures a message if user is silent (see below) |
+| Field | Type | Default | Description |
+| -------------------------- | ------ | ---------- | ----------------------------------------------------------- |
+| `turn_timeout` | number | `7` | Seconds to wait before re-engaging the user |
+| `turn_eagerness` | string | `"normal"` | How quickly agent responds: `patient`, `normal`, or `eager` |
+| `silence_end_call_timeout` | number | `-1` | Seconds of silence before ending call (-1 = disabled) |
+| `initial_wait_time` | number | - | Seconds to wait for user to start speaking |
+| `spelling_patience` | string | `"auto"` | Entity detection patience: `auto` or `off` |
+| `speculative_turn` | bool | `false` | Enable speculative turn detection |
+| `soft_timeout_config` | object | - | Configures a message if user is silent (see below) |
**soft_timeout_config:**
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
-| `timeout_seconds` | number | `-1` | Seconds before soft timeout (-1 = disabled) |
-| `message` | string | `"Hhmmmm...yeah."` | What agent says on timeout |
-| `use_llm_generated_message` | bool | `false` | Let LLM generate the timeout message |
+| Field | Type | Default | Description |
+| --------------------------- | ------ | ------------------ | ------------------------------------------- |
+| `timeout_seconds` | number | `-1` | Seconds before soft timeout (-1 = disabled) |
+| `message` | string | `"Hhmmmm...yeah."` | What agent says on timeout |
+| `use_llm_generated_message` | bool | `false` | Let LLM generate the timeout message |
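Putting the two tables together, a turn configuration that enables a soft-timeout prompt might look like this (a sketch; the five-second timeout and the message text are illustrative values, not recommendations):

```python
# Sketch: turn-taking config with a soft-timeout filler message
turn_config = {
    "turn_timeout": 7,
    "turn_eagerness": "normal",
    "soft_timeout_config": {
        "timeout_seconds": 5,  # -1 (the default) disables the soft timeout
        "message": "Are you still there?",
        "use_llm_generated_message": False,
    },
}
```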
## prompt (nested in conversation_config.agent)
@@ -167,35 +167,35 @@ conversation_config={
}
```
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
-| `prompt` | string | `""` | System prompt defining agent behavior |
-| `llm` | string | - | Model ID (see LLM providers below) |
-| `temperature` | float | `0` | 0-1, higher = more creative |
-| `max_tokens` | int | `-1` | Max tokens for LLM response (-1 = unlimited) |
-| `reasoning_effort` | string | - | Reasoning depth: `none`, `minimal`, `low`, `medium`, `high` (model-dependent) |
-| `thinking_budget` | int | - | Max thinking tokens for reasoning models |
-| `tools` | array | - | Webhook and client tool definitions |
-| `built_in_tools` | object | - | System tools (end_call, transfer, etc.) |
-| `tool_ids` | array | - | References to pre-configured tools |
-| `knowledge_base` | array | - | Documents for RAG |
-| `custom_llm` | object | - | Custom LLM endpoint config |
-| `timezone` | string | - | IANA timezone (e.g., `America/New_York`) |
-| `backup_llm_config` | object | - | Fallback LLM configuration |
-| `cascade_timeout_seconds` | number | `8` | Seconds before cascading to backup LLM (2-15) |
-| `mcp_server_ids` | array | - | MCP server IDs to connect |
-| `native_mcp_server_ids` | array | - | Native MCP server IDs |
-| `ignore_default_personality` | bool | - | Skip default personality instructions |
+| Field | Type | Default | Description |
+| ---------------------------- | ------ | ------- | ----------------------------------------------------------------------------- |
+| `prompt` | string | `""` | System prompt defining agent behavior |
+| `llm` | string | - | Model ID (see LLM providers below) |
+| `temperature` | float | `0` | 0-1, higher = more creative |
+| `max_tokens` | int | `-1` | Max tokens for LLM response (-1 = unlimited) |
+| `reasoning_effort` | string | - | Reasoning depth: `none`, `minimal`, `low`, `medium`, `high` (model-dependent) |
+| `thinking_budget` | int | - | Max thinking tokens for reasoning models |
+| `tools` | array | - | Webhook and client tool definitions |
+| `built_in_tools` | object | - | System tools (end_call, transfer, etc.) |
+| `tool_ids` | array | - | References to pre-configured tools |
+| `knowledge_base` | array | - | Documents for RAG |
+| `custom_llm` | object | - | Custom LLM endpoint config |
+| `timezone` | string | - | IANA timezone (e.g., `America/New_York`) |
+| `backup_llm_config` | object | - | Fallback LLM configuration |
+| `cascade_timeout_seconds` | number | `8` | Seconds before cascading to backup LLM (2-15) |
+| `mcp_server_ids` | array | - | MCP server IDs to connect |
+| `native_mcp_server_ids` | array | - | Native MCP server IDs |
+| `ignore_default_personality` | bool | - | Skip default personality instructions |
### LLM Providers
-| Provider | Model IDs |
-|----------|-----------|
-| OpenAI | `gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo` |
-| Anthropic | `claude-sonnet-4-5`, `claude-sonnet-4`, `claude-haiku-4-5`, `claude-3-7-sonnet`, `claude-3-5-sonnet`, `claude-3-haiku` |
-| Google | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`, `gemini-2.0-flash`, `gemini-2.0-flash-lite` |
-| ElevenLabs | `glm-45-air-fp8`, `qwen3-30b-a3b`, `gpt-oss-120b` (hosted, ultra-low latency) |
-| Custom | `custom-llm` (requires custom_llm config) |
+| Provider | Model IDs |
+| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
+| OpenAI | `gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo` |
+| Anthropic | `claude-sonnet-4-5`, `claude-sonnet-4`, `claude-haiku-4-5`, `claude-3-7-sonnet`, `claude-3-5-sonnet`, `claude-3-haiku` |
+| Google | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`, `gemini-2.0-flash`, `gemini-2.0-flash-lite` |
+| ElevenLabs | `glm-45-air-fp8`, `qwen3-30b-a3b`, `gpt-oss-120b` (hosted, ultra-low latency) |
+| Custom | `custom-llm` (requires custom_llm config) |
### Custom LLM
@@ -237,35 +237,35 @@ platform_settings={
### auth
-| Field | Type | Description |
-|-------|------|-------------|
-| `enable_auth` | bool | Require signed URLs/tokens for connections |
-| `allowlist` | array | Allowed origins for CORS |
-| `shareable_token` | string | Public conversation token |
+| Field | Type | Description |
+| ----------------- | ------ | ------------------------------------------ |
+| `enable_auth` | bool | Require signed URLs/tokens for connections |
+| `allowlist` | array | Allowed origins for CORS |
+| `shareable_token` | string | Public conversation token |
### call_limits
-| Field | Type | Description |
-|-------|------|-------------|
-| `agent_concurrency_limit` | int | Max simultaneous conversations (default: -1, unlimited) |
-| `daily_limit` | int | Max conversations per day (default: 100000) |
-| `bursting_enabled` | bool | Allow exceeding limits at 2x cost (default: true) |
+| Field | Type | Description |
+| ------------------------- | ---- | ------------------------------------------------------- |
+| `agent_concurrency_limit` | int | Max simultaneous conversations (default: -1, unlimited) |
+| `daily_limit` | int | Max conversations per day (default: 100000) |
+| `bursting_enabled` | bool | Allow exceeding limits at 2x cost (default: true) |
### conversation (inside conversation_config)
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
-| `max_duration_seconds` | int | `600` | Max conversation duration |
-| `text_only` | bool | `false` | Text-only mode (avoids audio pricing) |
-| `monitoring_enabled` | bool | `false` | Enable real-time WebSocket monitoring |
+| Field | Type | Default | Description |
+| ---------------------- | ---- | ------- | ------------------------------------- |
+| `max_duration_seconds` | int | `600` | Max conversation duration |
+| `text_only` | bool | `false` | Text-only mode (avoids audio pricing) |
+| `monitoring_enabled` | bool | `false` | Enable real-time WebSocket monitoring |
## Additional Top-Level Fields
-| Field | Type | Description |
-|-------|------|-------------|
-| `tags` | array | Classification labels for filtering (e.g., `["production"]`, `["test"]`) |
-| `coaching_settings` | object | Configuration for agent coaching and evaluation |
-| `workflow` | object | Conversation flow definition and tool interaction sequences |
+| Field | Type | Description |
+| ------------------- | ------ | ------------------------------------------------------------------------ |
+| `tags` | array | Classification labels for filtering (e.g., `["production"]`, `["test"]`) |
+| `coaching_settings` | object | Configuration for agent coaching and evaluation |
+| `workflow` | object | Conversation flow definition and tool interaction sequences |
## Knowledge Base / RAG
@@ -356,7 +356,7 @@ agent = client.conversational_ai.agents.get(agent_id="your-agent-id")
```
```javascript
-const agent = await client.conversationalAi.agents.get("your-agent-id");
+const agent = await client.conversationalAi.agents.get('your-agent-id');
```
```bash
@@ -368,6 +368,7 @@ curl -X GET "https://api.elevenlabs.io/v1/convai/agents/your-agent-id" -H "xi-ap
Only include fields you want to change. All other settings remain unchanged.
**Python:**
+
```python
# Update name
client.conversational_ai.agents.update(agent_id="id", name="New Name")
@@ -394,17 +395,19 @@ client.conversational_ai.agents.update(agent_id="id", platform_settings={
```
**JavaScript:**
+
```javascript
-await client.conversationalAi.agents.update("id", { name: "New Name" });
-await client.conversationalAi.agents.update("id", {
- conversationConfig: { tts: { voiceId: "EXAVITQu4vr4xnSDxMaL" } }
+await client.conversationalAi.agents.update('id', { name: 'New Name' });
+await client.conversationalAi.agents.update('id', {
+ conversationConfig: { tts: { voiceId: 'EXAVITQu4vr4xnSDxMaL' } }
});
-await client.conversationalAi.agents.update("id", {
- conversationConfig: { agent: { prompt: { prompt: "New instructions.", llm: "claude-sonnet-4" } } }
+await client.conversationalAi.agents.update('id', {
+ conversationConfig: { agent: { prompt: { prompt: 'New instructions.', llm: 'claude-sonnet-4' } } }
});
```
**cURL:**
+
```bash
curl -X PATCH "https://api.elevenlabs.io/v1/convai/agents/your-agent-id" \
-H "xi-api-key: $ELEVENLABS_API_KEY" -H "Content-Type: application/json" \
@@ -413,17 +416,17 @@ curl -X PATCH "https://api.elevenlabs.io/v1/convai/agents/your-agent-id" \
#### Updatable Fields
-| Section | Fields |
-|---------|--------|
-| Root | `name`, `tags` |
-| `conversation_config.agent` | `first_message`, `language`, `disable_first_message_interruptions`, `dynamic_variables` |
+| Section | Fields |
+| ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
+| Root | `name`, `tags` |
+| `conversation_config.agent` | `first_message`, `language`, `disable_first_message_interruptions`, `dynamic_variables` |
| `conversation_config.agent.prompt` | `prompt`, `llm`, `temperature`, `max_tokens`, `reasoning_effort`, `tools`, `built_in_tools`, `knowledge_base`, `custom_llm`, `timezone` |
-| `conversation_config.tts` | `voice_id`, `model_id`, `stability`, `similarity_boost`, `speed`, `optimize_streaming_latency`, `expressive_mode` |
-| `conversation_config.asr` | `quality`, `provider`, `keywords`, `user_input_audio_format` |
-| `conversation_config.turn` | `turn_timeout`, `turn_eagerness`, `silence_end_call_timeout`, `soft_timeout_config` |
-| `conversation_config.conversation` | `max_duration_seconds`, `text_only`, `monitoring_enabled` |
-| `platform_settings.auth` | `enable_auth`, `allowlist` |
-| `platform_settings.call_limits` | `agent_concurrency_limit`, `daily_limit`, `bursting_enabled` |
+| `conversation_config.tts` | `voice_id`, `model_id`, `stability`, `similarity_boost`, `speed`, `optimize_streaming_latency`, `expressive_mode` |
+| `conversation_config.asr` | `quality`, `provider`, `keywords`, `user_input_audio_format` |
+| `conversation_config.turn` | `turn_timeout`, `turn_eagerness`, `silence_end_call_timeout`, `soft_timeout_config` |
+| `conversation_config.conversation` | `max_duration_seconds`, `text_only`, `monitoring_enabled` |
+| `platform_settings.auth` | `enable_auth`, `allowlist` |
+| `platform_settings.call_limits` | `agent_concurrency_limit`, `daily_limit`, `bursting_enabled` |
### SDK: Delete Agent
@@ -432,7 +435,7 @@ client.conversational_ai.agents.delete(agent_id="your-agent-id")
```
```javascript
-await client.conversationalAi.agents.delete("your-agent-id");
+await client.conversationalAi.agents.delete('your-agent-id');
```
```bash
diff --git a/.agents/skills/agents/references/client-tools.md b/.agents/skills/agents/references/client-tools.md
index 35b19b5..9b85d97 100644
--- a/.agents/skills/agents/references/client-tools.md
+++ b/.agents/skills/agents/references/client-tools.md
@@ -4,11 +4,11 @@ Extend your agent with custom capabilities. Tools let the agent take actions bey
## Tool Types
-| Type | Execution | Use Case |
-|------|-----------|----------|
-| **Webhook** | Server-side via HTTP | Database queries, API calls, secure operations |
-| **Client** | Browser-side JavaScript | UI updates, local storage, navigation |
-| **System** | Built-in ElevenLabs | End call, transfer, standard actions |
+| Type | Execution | Use Case |
+| ----------- | ----------------------- | ---------------------------------------------- |
+| **Webhook** | Server-side via HTTP | Database queries, API calls, secure operations |
+| **Client** | Browser-side JavaScript | UI updates, local storage, navigation |
+| **System** | Built-in ElevenLabs | End call, transfer, standard actions |
## Where Tools Live
@@ -145,29 +145,29 @@ Or for structured data:
### Webhook Tool Options
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
-| `response_timeout_secs` | int | `20` | Timeout in seconds (5-120) |
-| `disable_interruptions` | bool | `false` | Prevent user interruptions during tool execution |
-| `execution_mode` | string | `"immediate"` | `immediate`, `post_tool_speech`, or `async` |
-| `tool_call_sound` | string | - | Sound during execution: `typing`, `elevator1`-`elevator4` |
-| `force_pre_tool_speech` | bool | `false` | Force agent to speak before executing tool |
-| `tool_error_handling_mode` | string | `"auto"` | `auto`, `summarized`, `passthrough`, or `hide` |
+| Field | Type | Default | Description |
+| -------------------------- | ------ | ------------- | --------------------------------------------------------- |
+| `response_timeout_secs` | int | `20` | Timeout in seconds (5-120) |
+| `disable_interruptions` | bool | `false` | Prevent user interruptions during tool execution |
+| `execution_mode` | string | `"immediate"` | `immediate`, `post_tool_speech`, or `async` |
+| `tool_call_sound` | string | - | Sound during execution: `typing`, `elevator1`-`elevator4` |
+| `force_pre_tool_speech` | bool | `false` | Force agent to speak before executing tool |
+| `tool_error_handling_mode` | string | `"auto"` | `auto`, `summarized`, `passthrough`, or `hide` |
**Note:** The default `api_schema.method` is `GET`. Always set `"method": "POST"` explicitly for webhook tools that send request bodies.
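As a sketch of that note in context, a webhook tool that sends a request body should carry the explicit method. The tool name, URL, and surrounding fields below are placeholders; the point is the `api_schema.method` setting:

```python
# Sketch of a webhook tool definition; name and URL are placeholders.
# "method" defaults to GET, so set POST explicitly when sending a body.
webhook_tool = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "response_timeout_secs": 20,
    "api_schema": {
        "url": "https://example.com/webhook/get_weather",
        "method": "POST",
    },
}
```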
### Server Implementation (Node.js)
```javascript
-app.post("/webhook/get_weather", async (req, res) => {
+app.post('/webhook/get_weather', async (req, res) => {
const { parameters, conversation_id } = req.body;
- const { city, units = "fahrenheit" } = parameters;
+ const { city, units = 'fahrenheit' } = parameters;
// Fetch weather from your data source
const weather = await weatherService.get(city, units);
res.json({
- result: `It's ${weather.temp}°${units === "celsius" ? "C" : "F"} and ${weather.condition} in ${city}.`,
+ result: `It's ${weather.temp}°${units === 'celsius' ? 'C' : 'F'} and ${weather.condition} in ${city}.`
});
});
```
@@ -198,17 +198,17 @@ Execute JavaScript in the user's browser. Useful for UI updates, navigation, or
Client tools are registered when starting a conversation:
```javascript
-import { Conversation } from "@elevenlabs/client";
+import { Conversation } from '@elevenlabs/client';
const conversation = await Conversation.startSession({
- agentId: "your-agent-id",
+ agentId: 'your-agent-id',
clientTools: {
show_product: async ({ productId }) => {
// Update UI to show product
- const modal = document.getElementById("product-modal");
+ const modal = document.getElementById('product-modal');
modal.innerHTML = await fetchProductCard(productId);
modal.showModal();
- return { success: true, message: "Showing product" };
+ return { success: true, message: 'Showing product' };
},
navigate_to: async ({ page }) => {
@@ -221,8 +221,8 @@ const conversation = await Conversation.startSession({
// Store in localStorage
localStorage.setItem(key, value);
return { saved: true };
- },
- },
+ }
+ }
});
```
@@ -282,8 +282,8 @@ When users want to go somewhere, use navigate_to.""",
### Client Tool Options
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
+| Field | Type | Default | Description |
+| ------------------ | ---- | ------- | ------------------------------------------ |
| `expects_response` | bool | `false` | Whether the tool returns data to the agent |
### Client Tool Return Values
@@ -293,11 +293,11 @@ Return data that the agent can use in conversation:
```javascript
clientTools: {
check_cart: async () => {
- const cart = JSON.parse(localStorage.getItem("cart") || "[]");
+ const cart = JSON.parse(localStorage.getItem('cart') || '[]');
return {
itemCount: cart.length,
total: cart.reduce((sum, item) => sum + item.price, 0),
- items: cart.map((item) => item.name),
+ items: cart.map((item) => item.name)
};
-  };
+  }
}
@@ -404,7 +404,7 @@ Return helpful error messages:
```javascript
// Server webhook
-app.post("/webhook/lookup_order", async (req, res) => {
+app.post('/webhook/lookup_order', async (req, res) => {
const { order_id } = req.body.parameters;
const order = await db.orders.find(order_id);
@@ -413,8 +413,8 @@ app.post("/webhook/lookup_order", async (req, res) => {
return res.json({
result: {
error: true,
- message: `Order ${order_id} not found. Please verify the order ID.`,
- },
+ message: `Order ${order_id} not found. Please verify the order ID.`
+ }
});
}
diff --git a/.agents/skills/agents/references/installation.md b/.agents/skills/agents/references/installation.md
index aa62004..e94860d 100644
--- a/.agents/skills/agents/references/installation.md
+++ b/.agents/skills/agents/references/installation.md
@@ -48,14 +48,14 @@ npm install @elevenlabs/elevenlabs-js
> **Important:** Always use `@elevenlabs/elevenlabs-js`. The old `elevenlabs` npm package (v1.x) is deprecated and should not be used.
```javascript
-import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
+import { ElevenLabsClient } from '@elevenlabs/elevenlabs-js';
// Option 1: Environment variable (recommended)
// Set ELEVENLABS_API_KEY in your environment
const client = new ElevenLabsClient();
// Option 2: Pass directly
-const client = new ElevenLabsClient({ apiKey: "your-api-key" });
+const client = new ElevenLabsClient({ apiKey: 'your-api-key' });
```
### Migrating from deprecated packages
@@ -75,10 +75,11 @@ npm install @elevenlabs/react # React hooks
```
**Import changes:**
+
```javascript
-import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
-import { Conversation } from "@elevenlabs/client";
-import { useConversation } from "@elevenlabs/react";
+import { ElevenLabsClient } from '@elevenlabs/elevenlabs-js';
+import { Conversation } from '@elevenlabs/client';
+import { useConversation } from '@elevenlabs/react';
```
## Python
@@ -126,6 +127,6 @@ Or use the `setup-api-key` skill for guided setup.
## Environment Variables
-| Variable | Description |
-|----------|-------------|
+| Variable | Description |
+| -------------------- | ---------------------------------- |
| `ELEVENLABS_API_KEY` | Your ElevenLabs API key (required) |
diff --git a/.agents/skills/agents/references/outbound-calls.md b/.agents/skills/agents/references/outbound-calls.md
index 48a9e4c..1a50c06 100644
--- a/.agents/skills/agents/references/outbound-calls.md
+++ b/.agents/skills/agents/references/outbound-calls.md
@@ -14,12 +14,12 @@ See the [main agents skill](../SKILL.md#outbound-calls) for basic Python, JavaSc
## Request Parameters
-| Parameter | Type | Required | Description |
-|-----------|------|----------|-------------|
-| `agent_id` | string | Yes | The ID of your ElevenLabs agent |
-| `agent_phone_number_id` | string | Yes | The ID of the Twilio phone number linked to your agent |
-| `to_number` | string | Yes | The destination phone number (E.164 format) |
-| `conversation_initiation_client_data` | object | No | Override conversation settings for this call |
+| Parameter | Type | Required | Description |
+| ------------------------------------- | ------ | -------- | ------------------------------------------------------ |
+| `agent_id` | string | Yes | The ID of your ElevenLabs agent |
+| `agent_phone_number_id` | string | Yes | The ID of the Twilio phone number linked to your agent |
+| `to_number` | string | Yes | The destination phone number (E.164 format) |
+| `conversation_initiation_client_data` | object | No | Override conversation settings for this call |
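The required/optional split above lends itself to a small request builder. A minimal sketch (the helper name, placeholder IDs, and the E.164 regex check are illustrative assumptions, not the API's own validation):

```javascript
// Assemble the request body for the outbound-call endpoint described above.
// buildOutboundCallRequest is a hypothetical helper; IDs are placeholders.
function buildOutboundCallRequest({
  agentId,
  agentPhoneNumberId,
  toNumber,
  conversationInitiationClientData,
}) {
  // Rough E.164 shape check: "+", then 2-15 digits, no leading zero.
  if (!/^\+[1-9]\d{1,14}$/.test(toNumber)) {
    throw new Error('toNumber must be E.164, e.g. +12025550123');
  }
  const body = {
    agent_id: agentId,
    agent_phone_number_id: agentPhoneNumberId,
    to_number: toNumber,
  };
  // The override object is optional, so only include it when provided.
  if (conversationInitiationClientData) {
    body.conversation_initiation_client_data = conversationInitiationClientData;
  }
  return body;
}

const request = buildOutboundCallRequest({
  agentId: 'agent_123',
  agentPhoneNumberId: 'phone_456',
  toNumber: '+12025550123',
});
```

Validating the number shape client-side gives a clearer error than a failed API round-trip.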
## Response
@@ -32,12 +32,12 @@ See the [main agents skill](../SKILL.md#outbound-calls) for basic Python, JavaSc
}
```
-| Field | Type | Description |
-|-------|------|-------------|
-| `success` | boolean | Whether the call was initiated successfully |
-| `message` | string | Status message |
-| `conversation_id` | string | ElevenLabs conversation ID for tracking |
-| `callSid` | string | Twilio Call SID for reference |
+| Field | Type | Description |
+| ----------------- | ------- | ------------------------------------------- |
+| `success` | boolean | Whether the call was initiated successfully |
+| `message` | string | Status message |
+| `conversation_id` | string | ElevenLabs conversation ID for tracking |
+| `callSid` | string | Twilio Call SID for reference |
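A caller typically branches on `success` and logs the two IDs for cross-referencing ElevenLabs and Twilio. A minimal sketch of consuming the response fields above (the helper and its message format are illustrative):

```javascript
// Turn the response object documented above into a one-line status string.
// summarizeCallResponse is a hypothetical helper for illustration.
function summarizeCallResponse(res) {
  if (!res.success) {
    return `Call failed: ${res.message}`;
  }
  // conversation_id tracks the ElevenLabs side; callSid tracks the Twilio side.
  return `Call started: conversation ${res.conversation_id}, Twilio SID ${res.callSid}`;
}

const summary = summarizeCallResponse({
  success: true,
  message: 'Call initiated',
  conversation_id: 'conv_abc',
  callSid: 'CA123',
});
```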
## Customizing the Call
@@ -72,24 +72,24 @@ response = client.conversational_ai.twilio.outbound_call(
```javascript
const response = await client.conversationalAi.twilio.outboundCall({
- agentId: "your-agent-id",
- agentPhoneNumberId: "your-phone-number-id",
- toNumber: "+1234567890",
+ agentId: 'your-agent-id',
+ agentPhoneNumberId: 'your-phone-number-id',
+ toNumber: '+1234567890',
conversationInitiationClientData: {
conversationConfigOverride: {
agent: {
- firstMessage: "Hello! This is a reminder about your appointment tomorrow.",
- language: "en",
+ firstMessage: 'Hello! This is a reminder about your appointment tomorrow.',
+ language: 'en'
},
tts: {
- voiceId: "JBFqnCBsd6RMkjVDRZzb",
- },
+ voiceId: 'JBFqnCBsd6RMkjVDRZzb'
+ }
},
dynamicVariables: {
- customer_name: "John",
- appointment_time: "2:00 PM",
- },
- },
+ customer_name: 'John',
+ appointment_time: '2:00 PM'
+ }
+ }
});
```
@@ -97,20 +97,20 @@ const response = await client.conversationalAi.twilio.outboundCall({
### Agent Settings
-| Option | Type | Description |
-|--------|------|-------------|
-| `first_message` | string | Custom greeting for this call |
-| `language` | string | Language code (e.g., "en", "es", "fr") |
-| `prompt` | object | Override agent prompt and LLM settings |
+| Option | Type | Description |
+| --------------- | ------ | -------------------------------------- |
+| `first_message` | string | Custom greeting for this call |
+| `language` | string | Language code (e.g., "en", "es", "fr") |
+| `prompt` | object | Override agent prompt and LLM settings |
### TTS Settings
-| Option | Type | Description |
-|--------|------|-------------|
-| `voice_id` | string | Voice ID to use for this call |
-| `stability` | number | Voice stability (0.0-1.0) |
+| Option | Type | Description |
+| ------------------ | ------ | -------------------------------- |
+| `voice_id` | string | Voice ID to use for this call |
+| `stability` | number | Voice stability (0.0-1.0) |
| `similarity_boost` | number | Voice similarity boost (0.0-1.0) |
-| `speed` | number | Speech speed multiplier |
+| `speed` | number | Speech speed multiplier |
### Dynamic Variables
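Dynamic variables like `customer_name` and `appointment_time` above are substituted into `{{placeholder}}` slots in the agent's prompt and first message. A minimal sketch of that substitution (the `renderTemplate` helper is hypothetical; only the `{{variable}}` syntax mirrors the platform's convention):

```javascript
// Fill {{name}} placeholders from a dynamic-variables object.
// Unknown placeholders are left untouched rather than replaced with "undefined".
function renderTemplate(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? String(vars[key]) : match
  );
}

const greeting = renderTemplate(
  'Hello {{customer_name}}! Your appointment is at {{appointment_time}}.',
  { customer_name: 'John', appointment_time: '2:00 PM' }
);
```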
diff --git a/.agents/skills/agents/references/widget-embedding.md b/.agents/skills/agents/references/widget-embedding.md
index 2d491aa..f8d3ddb 100644
--- a/.agents/skills/agents/references/widget-embedding.md
+++ b/.agents/skills/agents/references/widget-embedding.md
@@ -6,7 +6,11 @@ Add a voice AI agent to any website with the ElevenLabs conversation widget.
```html
-<elevenlabs-convai agent-id="your-agent-id"></elevenlabs-convai>
-<script src="https://unpkg.com/@elevenlabs/convai-widget-embed" async type="text/javascript"></script>
+<elevenlabs-convai agent-id="your-agent-id"></elevenlabs-convai>
+<script
+  src="https://unpkg.com/@elevenlabs/convai-widget-embed"
+  async
+  type="text/javascript"
+></script>
```
This creates a floating button that users can click to start a voice conversation.
@@ -17,39 +21,39 @@ This creates a floating button that users can click to start a voice conversatio
### Required
-| Attribute | Description |
-|-----------|-------------|
-| `agent-id` | Your ElevenLabs agent ID |
+| Attribute | Description |
+| ------------ | ------------------------------------------------ |
+| `agent-id` | Your ElevenLabs agent ID |
| `signed-url` | Alternative to `agent-id` when using signed URLs |
### Appearance
-| Attribute | Description | Default |
-|-----------|-------------|---------|
-| `avatar-image-url` | URL for agent avatar image | ElevenLabs logo |
-| `avatar-orb-color-1` | Primary orb gradient color | `#2792dc` |
-| `avatar-orb-color-2` | Secondary orb gradient color | `#9ce6e6` |
+| Attribute | Description | Default |
+| -------------------- | ---------------------------- | --------------- |
+| `avatar-image-url` | URL for agent avatar image | ElevenLabs logo |
+| `avatar-orb-color-1` | Primary orb gradient color | `#2792dc` |
+| `avatar-orb-color-2` | Secondary orb gradient color | `#9ce6e6` |
### Text Labels
-| Attribute | Description | Default |
-|-----------|-------------|---------|
-| `action-text` | Tooltip when hovering | "Talk to AI" |
-| `start-call-text` | Button to start call | "Start call" |
-| `end-call-text` | Button to end call | "End call" |
-| `expand-text` | Expand chat button | "Open" |
-| `collapse-text` | Collapse chat button | "Close" |
-| `listening-text` | Listening state label | "Listening..." |
-| `speaking-text` | Speaking state label | "Assistant speaking" |
+| Attribute | Description | Default |
+| ----------------- | --------------------- | -------------------- |
+| `action-text` | Tooltip when hovering | "Talk to AI" |
+| `start-call-text` | Button to start call | "Start call" |
+| `end-call-text` | Button to end call | "End call" |
+| `expand-text` | Expand chat button | "Open" |
+| `collapse-text` | Collapse chat button | "Close" |
+| `listening-text` | Listening state label | "Listening..." |
+| `speaking-text` | Speaking state label | "Assistant speaking" |
### Behavior
-| Attribute | Description | Default |
-|-----------|-------------|---------|
-| `variant` | Widget style: `compact` or `expanded` | `compact` |
-| `server-location` | Server region (`us`, `eu-residency`, `in-residency`, `global`) | `us` |
-| `dismissible` | Allow the user to minimize the widget | `false` |
-| `disable-banner` | Hide "Powered by ElevenLabs" | `false` |
+| Attribute | Description | Default |
+| ----------------- | -------------------------------------------------------------- | --------- |
+| `variant` | Widget style: `compact` or `expanded` | `compact` |
+| `server-location` | Server region (`us`, `eu-residency`, `in-residency`, `global`) | `us` |
+| `dismissible` | Allow the user to minimize the widget | `false` |
+| `disable-banner` | Hide "Powered by ElevenLabs" | `false` |
## Examples
@@ -86,10 +90,7 @@ This creates a floating button that users can click to start a voice conversatio
### Expanded Variant
```html
-<elevenlabs-convai
-  agent-id="your-agent-id"
-  variant="expanded"
-></elevenlabs-convai>
+<elevenlabs-convai agent-id="your-agent-id" variant="expanded"></elevenlabs-convai>
```
### Full Customization
@@ -150,7 +151,7 @@ Access the widget element to control it programmatically:
```
@@ -180,9 +181,7 @@ Hide the default widget and use your own button:
}
-<elevenlabs-convai
-  agent-id="your-agent-id"
-></elevenlabs-convai>
+<elevenlabs-convai agent-id="your-agent-id"></elevenlabs-convai>
```
@@ -197,11 +196,11 @@ For agents with authentication enabled, pass a signed URL:
@@ -281,8 +280,8 @@ You can have multiple widgets for different agents:
function App() {
useEffect(() => {
// Load widget script
- const script = document.createElement("script");
- script.src = "https://unpkg.com/@elevenlabs/convai-widget-embed";
+ const script = document.createElement('script');
+ script.src = 'https://unpkg.com/@elevenlabs/convai-widget-embed';
script.async = true;
document.body.appendChild(script);
@@ -307,11 +306,11 @@ function App() {
-