# Dev #1051 (Merged)
## docs/features/analytics/index.mdx (68 additions, 3 deletions)
At the top right of the Analytics dashboard, you can filter all data by time period:
- **Last 90 days** - Quarterly trends
- **All time** - Complete historical data

Your selected time period is **automatically saved** and persists across browser sessions.

### Group Filtering

If you have [user groups](/features/rbac/groups) configured, the Analytics dashboard allows filtering by group:

- Use the **group dropdown** next to the time period selector
- Select a specific group to view analytics **only for users in that group**
- Choose "All Users" to see instance-wide analytics

This is useful for:
- **Department-level reporting** - Track usage for specific teams
- **Cost allocation** - Attribute token consumption to business units
- **Pilot programs** - Monitor adoption within test groups

All metrics on the page update automatically when you change the time period or group filter.

### Summary Statistics

A detailed breakdown of how each model is being used:
- **Sortable columns** - Click column headers to sort by name or message count
- **Model icons** - Visual identification with profile images
- **Token tracking** - See which models consume the most resources
- **Clickable rows** - Click any model to open the [Model Details Modal](#model-details-modal)

**Use cases:**
- Identify your most popular models
- Calculate cost per model (by multiplying tokens by provider rates)
- Decide which models to keep or remove
- Plan infrastructure upgrades based on usage
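
The cost calculation mentioned above can be sketched in a few lines. The model names and per-million-token rates below are hypothetical placeholders; substitute your provider's actual pricing.

```python
# Hypothetical per-1M-token rates; substitute your provider's real pricing
RATES_PER_MILLION = {"gpt-4o": 2.50, "claude-sonnet": 3.00}

def model_cost(model: str, total_tokens: int) -> float:
    """Estimate spend by multiplying token usage by the provider rate."""
    return total_tokens / 1_000_000 * RATES_PER_MILLION[model]

print(round(model_cost("gpt-4o", 4_200_000), 2))  # prints 10.5
```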

### Model Details Modal

Clicking on any model row opens a detailed modal with two tabs:

#### Overview Tab

The Overview tab provides:

- **Feedback Activity Chart** - Visual history of user feedback (thumbs up/down) over time
- Toggle between **30 days**, **1 year**, or **All time** views
- Weekly aggregation for longer time ranges
- **Tags** - Most common chat tags associated with this model (top 10)

This helps you understand:
- How users perceive model quality over time
- Which topics/use cases the model is handling
- Trends in user satisfaction
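
The weekly aggregation used for longer time ranges can be sketched as follows. This is an illustrative sketch with hypothetical sample data, not the actual chart implementation (which runs server-side and may differ):

```python
from collections import defaultdict
from datetime import date

# Hypothetical daily feedback counts: (day, thumbs_up, thumbs_down)
daily = [
    (date(2024, 1, 1), 5, 1),
    (date(2024, 1, 3), 2, 0),
    (date(2024, 1, 8), 4, 2),
]

# Bucket daily counts by ISO week for longer time ranges
weekly = defaultdict(lambda: [0, 0])
for day, up, down in daily:
    iso = day.isocalendar()
    weekly[(iso.year, iso.week)][0] += up
    weekly[(iso.year, iso.week)][1] += down

# weekly now holds one [up, down] pair per ISO week
```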

#### Chats Tab

:::info Admin Chat Access Required
The Chats tab is only visible when **Admin Chat Access** is enabled in your instance settings.
:::

The Chats tab shows conversations that used this model:

- **User info** - Who started each chat
- **Preview** - First message of each conversation
- **Timestamp** - When the chat was last updated
- **Click to open** - Navigate directly to the shared chat view

This is useful for:
- Understanding how users interact with specific models
- Auditing model usage for quality assurance
- Finding example conversations for training or documentation

### User Activity Table

Track user engagement and token consumption per user:

For advanced users and integrations, Analytics provides REST API endpoints:

**Dashboard Endpoints:**
```
GET /api/v1/analytics/summary
GET /api/v1/analytics/models
GET /api/v1/analytics/daily
GET /api/v1/analytics/tokens
```

**Model Detail Endpoints:**
```
GET /api/v1/analytics/models/{model_id}/chats # Get chats using this model
GET /api/v1/analytics/models/{model_id}/overview # Get feedback history and tags
```

**Common Query Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `start_date` | int | Unix timestamp (epoch seconds) - start of range |
| `end_date` | int | Unix timestamp (epoch seconds) - end of range |
| `group_id` | string | Filter to a specific user group (optional) |

:::tip API Access
All Analytics endpoints require admin authentication. Include your admin bearer token:
```bash
curl -H "Authorization: Bearer YOUR_ADMIN_TOKEN" \
https://your-instance.com/api/v1/analytics/summary
"https://your-instance.com/api/v1/analytics/summary?group_id=abc123"
```
:::

## docs/features/chat-features/message-queue.mdx (111 additions)
---
sidebar_position: 10
title: "Message Queue"
---

# Message Queue

The **Message Queue** feature allows you to continue composing and sending messages while the AI is still generating a response. Instead of blocking your input until the current response completes, your messages are queued and automatically sent in sequence.

---

## How It Works

When you send a message while the AI is generating a response:

1. **Your message is queued** - It appears in a compact queue area just above the input box
2. **You can continue working** - Add more messages, edit queued messages, or delete them
3. **Automatic processing** - Once the current response finishes, all queued messages are combined and sent together

This creates a seamless workflow where you can capture thoughts as they come without waiting for the AI to finish.

---

## Queue Management

Each queued message shows three action buttons:

| Action | Icon | Description |
|--------|------|-------------|
| **Send Now** | ↑ | Interrupts the current generation and sends this message immediately |
| **Edit** | ✏️ | Removes the message from the queue and puts it back in the input field |
| **Delete** | 🗑️ | Removes the message from the queue without sending |

### Combining Messages

When the AI finishes generating, all queued messages are **combined into a single prompt** (separated by blank lines) and sent together. This means:

- Multiple quick thoughts become one coherent message
- Context flows naturally between your queued inputs
- The AI receives everything at once for better understanding
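
The combining behavior described above amounts to joining the queued texts with blank lines. A minimal sketch with hypothetical queued messages (the actual client implementation may differ):

```python
queued = [
    "Can you also cover edge cases?",
    "Assume Python 3.12.",
    "Keep the answer short.",
]

# Combine queued messages into a single prompt, separated by blank lines
combined = "\n\n".join(queued)
print(combined.count("\n\n"))  # prints 2
```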

---

## Persistence

The message queue is preserved when you navigate between chats within the same browser session:

- **Leaving a chat** - Queue is saved to session storage
- **Returning to the chat** - Queue is restored and processed
- **Closing the browser** - Queue is cleared (session storage only)

This means you can start a thought, switch to another chat to check something, and return to find your queued messages waiting.

---

## Settings

You can disable the Message Queue feature if you prefer the traditional behavior:

1. Go to **Settings** → **Interface**
2. Find **Enable Message Queue** under the Chat section
3. Toggle it off

When disabled, sending a message while the AI is generating will:
- **Interrupt** the current generation
- **Send** your new message immediately

:::tip Default Behavior
Message Queue is **enabled by default**. The toggle allows you to choose between:
- **Queue mode** (default) - Messages queue up until generation completes
- **Interrupt mode** - New messages stop current generation immediately
:::

---

## Use Cases

### 1. Stream of Consciousness

You're reading an AI response and have follow-up questions. Instead of waiting:
- Queue your first follow-up
- Queue a clarification
- Queue another thought

All are sent together when the AI finishes.

### 2. Adding Context

The AI is working on a complex response, but you remember additional context:
- Queue the extra information
- The AI receives it as part of the next message

### 3. Multitasking

You're in a long conversation and need to capture quick notes:
- Queue messages as reminders
- Edit or delete before they're sent
- Use "Send Now" for urgent interruptions

---

## Summary

| Feature | Behavior |
|---------|----------|
| **Enabled** | Messages queue during generation, sent together when complete |
| **Disabled** | New messages interrupt current generation immediately |
| **Persistence** | Queue survives navigation within session |
| **Actions** | Send Now, Edit, Delete for each queued item |

The Message Queue helps you maintain your flow of thought without the friction of waiting for AI responses to complete.
## docs/features/index.mdx (2 additions)
import { TopBanners } from "@site/src/components/TopBanners";

- 🔔 **Chat Completion Notifications**: Stay updated with instant in-UI notifications when a chat finishes in a non-active tab, ensuring you never miss a completed response.

- 📝 **Message Queue**: Continue composing messages while the AI is generating a response. Your messages are queued and automatically sent together when the current response completes. Edit, delete, or send queued messages immediately. [Learn about Message Queue](/features/chat-features/message-queue).

- 🌐 **Notification Webhook Integration**: Receive timely updates for long-running chats or external integration needs with configurable webhook notifications, even when your tab is closed. [Learn more about Webhooks](/features/interface/webhooks).

- 📚 **Channels (Beta)**: Explore real-time collaboration between users and AIs with Discord/Slack-style chat rooms, build bots for channels, and unlock asynchronous communication for proactive multi-agent workflows. [See Channels](/features/channels).
## docs/features/plugin/development/events.mdx (129 additions, 7 deletions)
Think of Events like push notifications and modal dialogs that your plugin can trigger.

## 🏁 Availability

### Native Python Tools & Functions

Events are **fully available** for native Python Tools and Functions defined directly in Open WebUI using the `__event_emitter__` and `__event_call__` helpers.

### External Tools (OpenAPI & MCP)

External tools can emit events via a **dedicated REST endpoint**. Open WebUI passes the following headers to all external tool requests:

| Header | Description |
|--------|-------------|
| `X-Open-WebUI-Chat-Id` | The chat ID where the tool was invoked |
| `X-Open-WebUI-Message-Id` | The message ID associated with the tool call |

Your external tool can use these headers to emit events back to the UI via:

```
POST /api/v1/chats/{chat_id}/messages/{message_id}/event
```

See [External Tool Events](#-external-tool-events) below for details.

---

Yes—emit `"chat:message:delta"` events in a loop, then finish with `"chat:message"`.

---

## 🌐 External Tool Events

External tools (OpenAPI and MCP servers) can emit events to the Open WebUI UI via a REST endpoint. This enables features like status updates, notifications, and streaming content from tools running on external servers.

### Headers Provided by Open WebUI

When Open WebUI calls your external tool, it includes these headers:

| Header | Description |
|--------|-------------|
| `X-Open-WebUI-Chat-Id` | The chat ID where the tool was invoked |
| `X-Open-WebUI-Message-Id` | The message ID associated with the tool call |

### Event Endpoint

**Endpoint:** `POST /api/v1/chats/{chat_id}/messages/{message_id}/event`

**Authentication:** Requires a valid Open WebUI API key or session token.

**Request Body:**

```json
{
"type": "status",
"data": {
"description": "Processing your request...",
"done": false
}
}
```

### Supported Event Types

External tools can emit the same event types as native tools:
- `status` – Show progress/status updates
- `notification` – Display toast notifications
- `chat:message:delta` / `message` – Append content to the message
- `chat:message` / `replace` – Replace message content
- `files` / `chat:message:files` – Attach files
- `source` / `citation` – Add citations

:::note
Interactive events (`input`, `confirmation`, `execute`) require `__event_call__` and are **not supported** for external tools as they need bidirectional WebSocket communication.
:::

### Example: Python External Tool

```python
import httpx

def my_tool_handler(request):
# Extract headers from incoming request
chat_id = request.headers.get("X-Open-WebUI-Chat-Id")
message_id = request.headers.get("X-Open-WebUI-Message-Id")
api_key = "your-open-webui-api-key"

# Emit a status event
httpx.post(
f"http://your-open-webui-host/api/v1/chats/{chat_id}/messages/{message_id}/event",
headers={"Authorization": f"Bearer {api_key}"},
json={
"type": "status",
"data": {"description": "Working on it...", "done": False}
}
)

# ... do work ...

# Emit completion status
httpx.post(
f"http://your-open-webui-host/api/v1/chats/{chat_id}/messages/{message_id}/event",
headers={"Authorization": f"Bearer {api_key}"},
json={
"type": "status",
"data": {"description": "Complete!", "done": True}
}
)

return {"result": "success"}
```

### Example: JavaScript/Node.js External Tool

```javascript
async function myToolHandler(req) {
const chatId = req.headers['x-open-webui-chat-id'];
const messageId = req.headers['x-open-webui-message-id'];
const apiKey = 'your-open-webui-api-key';

// Emit a notification
await fetch(
`http://your-open-webui-host/api/v1/chats/${chatId}/messages/${messageId}/event`,
{
method: 'POST',
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
type: 'notification',
data: { type: 'info', content: 'Tool is processing...' }
})
}
);

return { result: 'success' };
}
```

---

## 📝 Conclusion

**Events** give you real-time, interactive superpowers inside Open WebUI. They let your code update content, trigger notifications, request user input, stream results, handle code, and much more—seamlessly plugging your backend intelligence into the chat UI.