Merged

Dev #1049

342 changes: 342 additions & 0 deletions docs/features/analytics/index.mdx
@@ -0,0 +1,342 @@
---
sidebar_position: 1050
title: "Analytics"
---

# Analytics

The **Analytics** feature in Open WebUI provides administrators with comprehensive insights into usage patterns, token consumption, and model performance across their instance. This powerful tool helps you understand how your users are interacting with AI models and make data-driven decisions about resource allocation and model selection.

:::info Admin-Only Feature
Analytics is only accessible to users with the **admin** role. Access it via **Admin Panel > Analytics**.
:::

## Overview

The Analytics dashboard gives you a bird's-eye view of your Open WebUI instance's activity, including:

- **Message volume** across different models and time periods
- **Token usage** tracking for cost estimation and resource planning
- **User activity** patterns to understand engagement
- **Time-series data** showing trends over hours, days, or months

All analytics data is derived from the message history stored in your instance's database. When the Analytics feature is enabled, Open WebUI automatically tracks and indexes messages to provide fast, queryable insights.

---

## Accessing Analytics

1. Log in with an **admin** account
2. Navigate to **Admin Panel** (click your profile icon → Admin Panel)
3. Click on the **Analytics** tab in the admin navigation

---

## Dashboard Features

### Time Period Selection

At the top right of the Analytics dashboard, you can filter all data by time period:

- **Last 24 hours** - Hourly granularity for real-time monitoring
- **Last 7 days** - Daily overview of the past week
- **Last 30 days** - Monthly snapshot
- **Last 90 days** - Quarterly trends
- **All time** - Complete historical data

All metrics on the page update automatically when you change the time period.

### Summary Statistics

The dashboard header displays key metrics for the selected time period:

- **Total Messages** - Number of assistant responses generated
- **Total Tokens** - Sum of all input and output tokens processed
- **Total Chats** - Number of unique conversations
- **Total Users** - Number of users who sent messages

:::note Message Counting
Analytics counts **assistant responses** rather than user messages. This provides a more accurate measure of AI model usage and token consumption.
:::

### Message Timeline Chart

The interactive timeline chart visualizes message volume over time, broken down by model. Key features:

- **Hourly or Daily granularity** - Automatically adjusts based on selected time period
- **Multi-model visualization** - Shows up to 8 models with distinct colors
- **Hover tooltips** - Display exact counts and percentages for each model at any point in time
- **Trend identification** - Quickly spot usage patterns, peak hours, and model adoption

This chart helps you:
- Identify busy periods for capacity planning
- Track model adoption after deployment
- Detect unusual activity spikes
- Monitor the impact of changes or announcements

### Model Usage Table

A detailed breakdown of how each model is being used:

| Column | Description |
|--------|-------------|
| **#** | Rank by message count |
| **Model** | Model name with icon |
| **Messages** | Total assistant responses generated |
| **Tokens** | Total tokens (input + output) consumed |
| **%** | Percentage share of total messages |

**Features:**
- **Sortable columns** - Click column headers to sort by name or message count
- **Model icons** - Visual identification with profile images
- **Token tracking** - See which models consume the most resources

**Use cases:**
- Identify your most popular models
- Calculate cost per model (by multiplying tokens by provider rates)
- Decide which models to keep or remove
- Plan infrastructure upgrades based on usage

### User Activity Table

Track user engagement and token consumption per user:

| Column | Description |
|--------|-------------|
| **#** | Rank by activity |
| **User** | Username with profile picture |
| **Messages** | Total messages sent by this user |
| **Tokens** | Total tokens consumed by this user |

**Features:**
- **Sortable columns** - Organize by name or activity level
- **User identification** - Profile pictures and display names
- **Token attribution** - See resource consumption per user

**Use cases:**
- Monitor power users and their token consumption
- Identify inactive or low-usage accounts
- Plan user quotas or rate limits
- Calculate per-user costs for billing purposes

---

## Token Usage Tracking

### What Are Tokens?

Tokens are the units that language models use to process text. Both the **input** (your prompt) and **output** (the model's response) consume tokens. Most AI providers charge based on token usage, making token tracking essential for cost management.

### How Token Tracking Works

Open WebUI automatically captures token usage from model responses and stores it with each message. The Analytics feature aggregates this data to show:

- **Input tokens** - Tokens in user prompts and context
- **Output tokens** - Tokens in model responses
- **Total tokens** - Sum of input and output

Token data is normalized across different model providers (OpenAI, Ollama, llama.cpp, etc.) to provide consistent metrics regardless of which backend you're using.

### Token Usage Metrics

The **Token Usage** section (accessible via the Tokens endpoint or dashboard) provides:

- **Per-model token breakdown** - Input, output, and total tokens for each model
- **Total token consumption** - Instance-wide token usage
- **Message count correlation** - Tokens per message for efficiency analysis

:::tip Cost Estimation
To estimate costs, multiply the token counts by your provider's pricing:
```
Cost = (input_tokens ÷ 1000 × input_price_per_1K) + (output_tokens ÷ 1000 × output_price_per_1K)
```

Example for GPT-4:
- Input: 1,000,000 tokens × $0.03/1K = $30
- Output: 500,000 tokens × $0.06/1K = $30
- **Total: $60**
:::
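The arithmetic above can be wrapped in a small helper. This is an illustrative sketch, not part of Open WebUI; the function name and the per-1K pricing convention are assumptions, so substitute your provider's actual rates:

```python
# Illustrative cost estimator for token usage reported by Analytics.
# Prices here are placeholder per-1K-token rates, not official figures.

def estimate_cost(input_tokens, output_tokens,
                  input_price_per_1k, output_price_per_1k):
    """Return the estimated cost in dollars for a given token count."""
    return (input_tokens / 1000) * input_price_per_1k + \
           (output_tokens / 1000) * output_price_per_1k

# Reproduces the example above: 1M input + 500K output tokens
total = estimate_cost(1_000_000, 500_000, 0.03, 0.06)
print(f"${total:.2f}")  # → $60.00
```

You can pull the token counts from the Model Usage table or the `/api/v1/analytics/tokens` endpoint and run them through a helper like this per model.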

---

## Use Cases

### 1. Resource Planning

**Scenario:** You're running Open WebUI for a team and need to plan infrastructure capacity.

**How Analytics helps:**
- View the **Message Timeline** to identify peak usage hours
- Check **Model Usage** to see which models need more resources
- Monitor **Token Usage** to estimate future costs
- Track **User Activity** to plan for team growth

### 2. Model Evaluation

**Scenario:** You've deployed several models and want to know which ones your users prefer.

**How Analytics helps:**
- Compare **message counts** across models to see adoption rates
- Check **token efficiency** (tokens per message) to identify verbose models
- Monitor **trends** in the timeline chart after introducing new models
- Combine with the [Evaluation feature](../evaluation/index.mdx) for quality insights

### 3. Cost Management

**Scenario:** You're using paid API providers and need to control costs.

**How Analytics helps:**
- Track **total token consumption** by model and user
- Identify **high-usage users** for quota discussions
- Compare **token costs** across different model providers
- Set up regular reviews using time-period filters

### 4. User Engagement

**Scenario:** You want to understand how your team is using AI tools.

**How Analytics helps:**
- Monitor **active users** vs. registered accounts
- Identify **power users** who might need support or training
- Track **adoption trends** over time
- Correlate usage with team initiatives or training sessions

### 5. Compliance & Auditing

**Scenario:** Your organization requires usage reporting for compliance.

**How Analytics helps:**
- Generate **activity reports** for specific time periods
- Track **user attribution** for all AI interactions
- Monitor **model usage** for approved vs. unapproved models
- Export data via API for external reporting tools

---

## Technical Details

### Data Storage

Analytics data is stored in the `chat_message` table, which contains:

- **Message content** - User and assistant messages
- **Metadata** - Model ID, user ID, timestamps
- **Token usage** - Input, output, and total tokens
- **Relationships** - Links to parent messages and chats

When you enable Analytics (via migration), Open WebUI:
1. Creates the `chat_message` table with optimized indexes
2. **Backfills existing messages** from your chat history
3. **Dual-writes** new messages to both the chat JSON and the message table

This dual-write approach ensures:
- **Backward compatibility** - Existing features continue working
- **Fast queries** - Analytics doesn't impact chat performance
- **Data consistency** - All messages are captured

### Database Indexes

The following indexes optimize analytics queries:

- `chat_id` - Fast lookup of all messages in a chat
- `user_id` - Quick user activity reports
- `model_id` - Efficient model usage queries
- `created_at` - Time-range filtering
- Composite indexes for common query patterns
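To illustrate the kind of query these indexes serve, here is a self-contained sketch using an in-memory SQLite table. The column set is a simplified assumption based on the description above, not the actual Open WebUI schema:

```python
import sqlite3

# Minimal mock of a chat_message-style table; the real schema has
# more columns (message content, parent links, etc.).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE chat_message (
        id INTEGER PRIMARY KEY,
        model_id TEXT,
        user_id TEXT,
        total_tokens INTEGER,
        created_at INTEGER
    )
""")
# Index mirroring the created_at time-range index described above
conn.execute("CREATE INDEX idx_created_at ON chat_message (created_at)")

conn.executemany(
    "INSERT INTO chat_message (model_id, user_id, total_tokens, created_at)"
    " VALUES (?, ?, ?, ?)",
    [
        ("gpt-4",  "alice", 120, 1704067200),
        ("gpt-4",  "bob",   300, 1704153600),
        ("llama3", "alice",  80, 1704240000),
    ],
)

# Per-model message/token totals within a time range: the shape of
# query the model_id and created_at indexes are there to accelerate.
results = list(conn.execute("""
    SELECT model_id, COUNT(*), SUM(total_tokens)
    FROM chat_message
    WHERE created_at BETWEEN ? AND ?
    GROUP BY model_id
    ORDER BY COUNT(*) DESC
""", (1704067200, 1704240000)))
for model, messages, tokens in results:
    print(model, messages, tokens)
```

Without the `created_at` index, a time-range filter like this would require a full table scan, which is why it matters once the message table grows large.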

### API Endpoints

For advanced users and integrations, Analytics provides REST API endpoints:

```
GET /api/v1/analytics/summary
GET /api/v1/analytics/models
GET /api/v1/analytics/users
GET /api/v1/analytics/messages
GET /api/v1/analytics/daily
GET /api/v1/analytics/tokens
```

All endpoints support `start_date` and `end_date` parameters (Unix timestamps) for time-range filtering.

:::tip API Access
All Analytics endpoints require admin authentication. Include your admin bearer token:
```bash
curl -H "Authorization: Bearer YOUR_ADMIN_TOKEN" \
https://your-instance.com/api/v1/analytics/summary
```
:::
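Since the API expects Unix timestamps, a small helper can convert calendar dates and build the request URL. This is an illustrative sketch: the `analytics_url` helper is hypothetical, though the endpoint paths and parameter names come from the list above.

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

def analytics_url(base, endpoint, start, end):
    """Build a time-filtered Analytics URL from two datetimes.

    The API expects start_date/end_date as Unix timestamps.
    """
    params = urlencode({
        "start_date": int(start.timestamp()),
        "end_date": int(end.timestamp()),
    })
    return f"{base}/api/v1/analytics/{endpoint}?{params}"

# January 2024, UTC
start = datetime(2024, 1, 1, tzinfo=timezone.utc)
end = datetime(2024, 2, 1, tzinfo=timezone.utc)
url = analytics_url("https://your-instance.com", "summary", start, end)
print(url)
# → https://your-instance.com/api/v1/analytics/summary?start_date=1704067200&end_date=1706745600
```

Pass the resulting URL to `curl` or any HTTP client along with your admin bearer token, as shown in the tip above.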

---

## Privacy & Data Considerations

### What Gets Tracked?

Analytics tracks:
- ✅ Message timestamps and counts
- ✅ Token usage per message
- ✅ Model IDs and user IDs
- ✅ Chat IDs and message relationships

Analytics **does not**:
- ❌ Display message content in the dashboard (only metadata is shown)
- ❌ Share or export your data externally
- ❌ Duplicate message content outside your database

### Data Retention

Analytics data follows your instance's chat retention policy. When you delete:
- **A chat** - All associated messages are removed from analytics
- **A user** - All their messages are disassociated
- **Message history** - Analytics data is also cleared

---

## Frequently Asked Questions

### Why are message counts different from what I expected?

Analytics counts **assistant responses**, not user messages. If a chat has 10 user messages and 10 assistant responses, the count is 10. This provides a more accurate measure of AI usage and token consumption.

### How accurate is token tracking?

Token accuracy depends on your model provider:
- **OpenAI/Anthropic** - Exact counts from API responses
- **Ollama** - Accurate for models with token reporting
- **llama.cpp** - Reports tokens when available
- **Custom providers** - Depends on implementation

Missing token data appears as 0 in analytics.

### Can I export analytics data?

Yes, via the API endpoints. Use tools like `curl`, Python scripts, or BI tools to fetch and export data:

```bash
curl -H "Authorization: Bearer TOKEN" \
"https://instance.com/api/v1/analytics/summary?start_date=1704067200&end_date=1706745600" \
> analytics_export.json
```
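A short script can then reshape the exported JSON into CSV for spreadsheets or BI tools. The response structure shown here is an assumption for illustration; inspect your instance's actual output first.

```python
import csv
import io
import json

# Assumed per-model summary shape; check your real API response,
# which may differ in field names and nesting.
sample = json.loads("""
{"models": [
  {"model_id": "gpt-4",  "messages": 120, "tokens": 45000},
  {"model_id": "llama3", "messages": 80,  "tokens": 21000}
]}
""")

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["model_id", "messages", "tokens"])
writer.writeheader()
writer.writerows(sample["models"])
print(out.getvalue())
```

Replace `sample` with the parsed contents of `analytics_export.json` and write `out.getvalue()` to a file to get an importable CSV.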

---

## Summary

Open WebUI's Analytics feature transforms your instance into a data-driven platform by providing:

- 📊 **Real-time insights** into model and user activity
- 💰 **Token tracking** for cost management and optimization
- 📈 **Trend analysis** to understand usage patterns over time
- 👥 **User engagement** metrics for community building
- 🔒 **Privacy-focused** design keeping all data on your instance

Whether you're managing a personal instance or a large organizational deployment, Analytics gives you the visibility needed to optimize performance, control costs, and better serve your users.

---

## Related Features

- [**Evaluation**](../evaluation/index.mdx) - Measure model quality through user feedback
- [**RBAC**](../rbac/index.mdx) - Control access to models and features per user
- [**Data Controls**](../data-controls/index.mdx) - Manage chat history and exports
47 changes: 47 additions & 0 deletions docs/features/chat-features/follow-up-prompts.md
@@ -0,0 +1,47 @@
---
sidebar_position: 9
title: "Follow-Up Prompts"
---

# Follow-Up Prompts

Open WebUI can automatically generate follow-up question suggestions after each model response. These suggestions appear as clickable chips below the response, helping you explore topics further without typing new prompts.

## Settings

Configure follow-up prompt behavior in **Settings > Interface** under the **Chat** section:

### Follow-Up Auto-Generation

**Default: On**

Automatically generates follow-up question suggestions after each response. These suggestions are generated by the [task model](/getting-started/env-configuration#task_model) based on the conversation context.

- **On**: Follow-up prompts are generated after each model response
- **Off**: No follow-up suggestions are generated

### Keep Follow-Up Prompts in Chat

**Default: Off**

By default, follow-up prompts only appear for the most recent message and disappear when you continue the conversation.

- **On**: Follow-up prompts are preserved and remain visible for all messages in the chat history
- **Off**: Only the last message shows follow-up prompts

:::tip Perfect for Knowledge Exploration
Enable this setting when exploring a knowledge base. You can see all the suggested follow-ups from previous responses, making it easy to revisit and explore alternative paths through the information.
:::

### Insert Follow-Up Prompt to Input

**Default: Off**

Controls what happens when you click a follow-up prompt.

- **On**: Clicking a follow-up inserts the text into the input field, allowing you to edit it before sending
- **Off**: Clicking a follow-up immediately sends it as your next message

## Regenerating Follow-Ups

If you want to regenerate follow-up suggestions for a specific response, you can use the [Regenerate Followups](https://openwebui.com/f/silentoplayz/regenerate_followups) action button from the community.
2 changes: 2 additions & 0 deletions docs/features/chat-features/index.mdx
@@ -24,3 +24,5 @@ Open WebUI provides a comprehensive set of chat features designed to enhance you
- **[🕒 Temporal Awareness](./temporal-awareness.mdx)**: How models understand time and date, including native tools for precise time calculations.

- **[🧠 Reasoning & Thinking Models](./reasoning-models.mdx)**: Specialized support for models that generate internal chains of thought using thinking tags.

- **[💬 Follow-Up Prompts](./follow-up-prompts.md)**: Automatic generation of suggested follow-up questions after model responses.