Merged

Dev #1052

10 changes: 9 additions & 1 deletion docs/contributing.mdx
@@ -11,6 +11,12 @@ import { TopBanners } from "@site/src/components/TopBanners";

Your interest in contributing to Open WebUI is greatly appreciated. This document is here to guide you through the process, ensuring your contributions enhance the project effectively. Let's make Open WebUI even better, together!

## 📜 Code of Conduct

All contributors and community participants are expected to follow our **[Code of Conduct](https://github.com/open-webui/open-webui/blob/main/CODE_OF_CONDUCT.md)**. We operate under a **zero-tolerance policy** — disrespectful, demanding, or hostile behavior will result in immediate action without prior warning.

Our community is built on the work of volunteers who dedicate their free time to this project. Please treat every interaction with professionalism and respect.

## 💡 Contributing

Looking to contribute? Great! Here's how you can help:
@@ -79,7 +85,7 @@ Let's make Open WebUI usable for *everyone*.

### 🤔 Questions & Feedback

Got questions or feedback? Join our [Discord community](https://discord.gg/5rJgQTnV4s) or open an issue. We're here to help!
Got questions or feedback? Join our [Discord community](https://discord.gg/5rJgQTnV4s), visit our [Reddit](https://www.reddit.com/r/OpenWebUI/), or open an issue. We're here to help!

### 🚨 Reporting Issues

@@ -91,6 +97,8 @@ Noticed something off? Have an idea? Check our [Issues tab](https://github.com/o

- **Detail is Key:** To ensure your issue is understood and can be addressed effectively, include comprehensive details: a clear description, steps to reproduce, expected outcomes, and actual results. Insufficient detail may hinder our ability to resolve your issue.

- **Respectful Communication:** All interactions — including issue reports and discussions — fall under our **[Code of Conduct](https://github.com/open-webui/open-webui/blob/main/CODE_OF_CONDUCT.md)**. Hostile, entitled, or demanding behavior toward contributors will not be tolerated.

:::

### 🧭 Scope of Support
81 changes: 75 additions & 6 deletions docs/faq.mdx
@@ -7,6 +7,26 @@ import { TopBanners } from "@site/src/components/TopBanners";

<TopBanners />

### Q: How can I get support or ask for help?

**A:** Open WebUI is a community-driven, open-source project. Support is provided by **volunteers** who contribute their time and expertise for free — there is no dedicated support team, and no one is obligated to provide personalized or on-demand assistance.

**To get the best help:**
1. **Search first.** Check these docs, [Discord](https://discord.gg/5rJgQTnV4s), [Reddit](https://www.reddit.com/r/OpenWebUI/), [GitHub Discussions](https://github.com/open-webui/open-webui/discussions), and [Issues](https://github.com/open-webui/open-webui/issues) — your question may already be answered.
2. **Try the Discord bot.** In our [Discord server](https://discord.gg/5rJgQTnV4s)'s **#questions** channel, an experimental bot has access to all issues, all discussions, and the entire documentation. Ping the bot with your question in the same message, wait a few seconds, and it will answer. As our documentation improves, so does the bot.
3. **Provide details.** When asking for help, always include: your Open WebUI version, deployment method (Docker/pip), model provider and model name, relevant settings (screenshots of the Admin Panel section are ideal), and steps to reproduce the issue.
4. **Be respectful.** Contributors are volunteers. Demanding, entitled, or hostile behavior is not tolerated and will result in immediate enforcement under our **[Code of Conduct](https://github.com/open-webui/open-webui/blob/main/CODE_OF_CONDUCT.md)**.

**Where to ask:**
- 🤖 **Quick Answers**: [Discord #questions channel](https://discord.gg/5rJgQTnV4s) — try the bot first, it can answer most Open WebUI questions
- 🐛 **Bugs**: [GitHub Issues](https://github.com/open-webui/open-webui/issues) — you **must** use the issue template and provide all requested information (Open WebUI version, browser, deployment method, expected vs. actual behavior, and logs). Most importantly, your report **must include clear steps to reproduce the issue along with all relevant settings** to replicate the situation. If we cannot reproduce it, we will not investigate it. Reports that skip the template or omit key details will be closed without investigation or converted to discussions. Our contributors are volunteers — incomplete reports waste their limited time.
- 💬 **Questions & Help**: [Discord](https://discord.gg/5rJgQTnV4s) (most active community), [Reddit](https://www.reddit.com/r/OpenWebUI/), or [GitHub Discussions](https://github.com/open-webui/open-webui/discussions)
- 💡 **Feature Requests**: [GitHub Discussions](https://github.com/open-webui/open-webui/discussions/new/choose)

:::important
All participants in the Open WebUI community are expected to follow our **[Code of Conduct](https://github.com/open-webui/open-webui/blob/main/CODE_OF_CONDUCT.md)**, which operates under a **zero-tolerance policy**. Unacceptable behavior — including hostility, entitlement, or persistent negativity toward contributors — will result in immediate action without prior warning.
:::

### Q: How do I customize the logo and branding?

**A:** You can customize the theme, logo, and branding with our **[Enterprise License](https://docs.openwebui.com/enterprise)**, which unlocks exclusive enterprise features.
@@ -225,15 +245,64 @@ For more optimization tips, see the **[Performance Tips Guide](troubleshooting/p

### Q: Is MCP (Model Context Protocol) supported in Open WebUI?

**A:** Yes, Open WebUI now includes **native support for MCP Streamable HTTP**, enabling direct, first-class integration with MCP tools that communicate over the standard HTTP transport. For any **other MCP transports or non-HTTP implementations**, you should use our official proxy adapter, **MCPO**, available at 👉 [https://github.com/open-webui/mcpo](https://github.com/open-webui/mcpo). MCPO provides a unified OpenAPI-compatible layer that bridges alternative MCP transports into Open WebUI safely and consistently. This architecture ensures maximum compatibility, strict security boundaries, and predictable tool behavior across different environments while keeping Open WebUI backend-agnostic and maintainable.
**A:** Yes, Open WebUI includes **native support for MCP Streamable HTTP**, enabling direct, first-class integration with MCP tools that communicate over the standard HTTP transport. For any **other MCP transports or non-HTTP implementations**, you should use our official proxy adapter, **MCPO**, available at 👉 [https://github.com/open-webui/mcpo](https://github.com/open-webui/mcpo). MCPO provides a unified OpenAPI-compatible layer that bridges alternative MCP transports into Open WebUI safely and consistently. This architecture ensures maximum compatibility, strict security boundaries, and predictable tool behavior across different environments while keeping Open WebUI backend-agnostic and maintainable.

### Q: Why doesn't Open WebUI natively support [Provider X]'s proprietary API?

**A:** Open WebUI is highly modular with a plugin system including tools, functions, and most notably **[pipes](/features/plugin/functions/pipe)**. These modular pipes allow you to add support for virtually any provider you want—you can build your own or choose from the many [community-built](https://openwebui.com/) and usually well-maintained ones already available.

That said, Open WebUI's core is built around **universal protocols**, not specific providers. Our stance is to support standard, widely-adopted APIs like the **OpenAI Chat Completions protocol**.

This protocol-centric design ensures that Open WebUI remains backend-agnostic and compatible with dozens of providers simultaneously. We avoid implementing proprietary, provider-specific APIs in the core to prevent unsustainable architectural bloat and to maintain a truly open ecosystem.
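To make the protocol-centric point concrete, here is a minimal sketch of a standard Chat Completions request body. The same shape works unchanged against any compliant backend; only the base URL and model name change. The model name below is a placeholder, not a recommendation.

```python
import json

# A minimal, provider-agnostic Chat Completions request body.
# Only the endpoint URL and model name vary between backends;
# the payload shape itself stays identical.
payload = {
    "model": "llama-3.1-8b-instruct",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Why are universal protocols useful?"},
    ],
    "stream": False,
}

body = json.dumps(payload)
print(body[:40])
```

Because every supported backend accepts this one shape, Open WebUI needs no per-provider request logic in its core.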

:::note Experimental: Open Responses
As new standards emerge that gain broad adoption, we may add experimental support. Connections can now optionally be configured to use **[Open Responses](https://www.openresponses.org/)**—an open specification for multi-provider interoperability with consistent streaming events and tool use patterns.
:::

We understand this request comes up frequently, especially for major providers. Here's why we've made this deliberate architectural decision:

#### 1. The Cascading Demand Problem

Supporting one proprietary API sets a precedent. Once that precedent exists, every other major provider becomes a reasonable request. What starts as "just one provider" quickly becomes many integrations, each with their own quirks, authentication schemes, and breaking changes.

#### 2. Maintenance is the Real Burden

Adding integration code is the easy part. **Maintaining it forever** is where the real cost lies:
- Each provider updates their API independently—when a provider changes something, we must update and test immediately
- Changes in one integration can break compatibility with others
- Every integration requires ongoing testing across multiple scenarios
- Bug reports flood in for each provider whenever they make changes

Contributors are **volunteers with full-time jobs**. Asking them to maintain 10+ provider integrations indefinitely is not sustainable.

### Q: Why doesn't Open WebUI support [Specific Provider]'s latest API (e.g. OpenAI Responses API)?
#### 3. Technical Complexity

**A:** Open WebUI is built around **universal protocols**, not specific providers. Our core philosophy is to support standard, widely-adopted APIs like the **OpenAI Chat Completions protocol**.
Each provider has different approaches to:
- Reasoning/thinking content format and structure
- Tool calling schemas and response formats
- Authentication and request signing
- Error handling and rate limiting

This protocol-centric design ensures that Open WebUI remains backend-agnostic and compatible with dozens of providers (like OpenRouter, LiteLLM, vLLM, and Groq) simultaneously. We avoid implementing proprietary, provider-specific APIs (such as OpenAI's stateful Responses API or Anthropic's Messages API) to prevent unsustainable architectural bloat and to maintain a truly open ecosystem.
This requires provider-specific logic throughout both the backend and frontend, significantly increasing the codebase complexity and tech debt.

If you need functionality exclusive to a proprietary API (like OpenAI's hidden reasoning traces), we recommend using a proxy like **LiteLLM** or **OpenRouter**, which translate those proprietary features into the standard Chat Completions protocol that Open WebUI supports.
#### 4. Scale and Stability Requirements

Open WebUI is used by major organizations worldwide. At this scale, stability is paramount—extensive testing and backwards compatibility guarantees become exponentially harder with each added provider.

#### 5. Pipes are the Modular Solution

The pipes architecture exists precisely to solve this problem. Install a community pipe with one click and you get full provider API support. This is exactly the modularity that allows:
- Community members to maintain provider-specific integrations
- Users to choose only what they need
- The core project to remain stable and maintainable
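As a rough illustration of how a pipe isolates provider logic from the core, here is a minimal pipe skeleton: a class exposing a `pipe` method that receives the request body. Treat the exact signature as an assumption for illustration; see the [pipes documentation](/features/plugin/functions/pipe) for the authoritative interface.

```python
class Pipe:
    """Minimal pipe sketch: adapts a provider-specific API behind the
    standard interface Open WebUI expects. The signature here is an
    assumption for illustration, not the authoritative API."""

    def pipe(self, body: dict) -> str:
        # A real pipe would call the proprietary provider API here and
        # translate its response into plain completion text.
        last_user_message = body["messages"][-1]["content"]
        return f"(provider reply to: {last_user_message})"
```

Because the translation layer lives in the pipe, a provider's breaking change affects one community-maintained plugin rather than the core codebase.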

:::tip The Recommended Path
For providers that don't follow widely adopted API standards, use:
- **[Open WebUI community](https://openwebui.com/)**: Community-maintained provider integrations (one-click install)
- **Middleware proxies**: Tools like LiteLLM or OpenRouter can translate proprietary APIs to widely adopted API formats

These solutions exist specifically to bridge the gap, and they're maintained by teams dedicated to that purpose.
:::

### Q: Why is the frontend integrated into the same Docker image? Isn't this unscalable or problematic?

@@ -280,4 +349,4 @@ We review every report in good faith and handle all submissions discreetly. Prot

### Need Further Assistance?

If you have any further questions or concerns, please reach out to our [GitHub Issues page](https://github.com/open-webui/open-webui/issues) or our [Discord channel](https://discord.gg/5rJgQTnV4s) for more help and information.
If you have any further questions or concerns, please reach out on our [Discord server](https://discord.gg/5rJgQTnV4s), [Reddit community](https://www.reddit.com/r/OpenWebUI/), [GitHub Issues](https://github.com/open-webui/open-webui/issues), or [GitHub Discussions](https://github.com/open-webui/open-webui/discussions) for more help and information. Please remember that all community interactions are governed by our **[Code of Conduct](https://github.com/open-webui/open-webui/blob/main/CODE_OF_CONDUCT.md)**.
2 changes: 1 addition & 1 deletion docs/features/chat-features/reasoning-models.mdx
@@ -347,7 +347,7 @@ Open WebUI serializes reasoning as text wrapped in tags (e.g., `<think>...</thin

**Why Open WebUI Doesn't Support This Natively:**

There is no standard way for storing reasoning content as part of the API payload across different providers. If Open WebUI implemented support for one provider's format (Anthropic), it would likely break existing deployments for many other inference providers. Given the wide variety of backends Open WebUI supports, we follow the OpenAI Completions API as the common standard.
There is no standard way for storing reasoning content as part of the API payload across different providers. If Open WebUI implemented support for one provider's format, it would likely break existing deployments for many other inference providers. Given the wide variety of backends Open WebUI supports, we follow the OpenAI Completions API as the common standard. For more details on this architectural decision, see our **[FAQ on protocol support](/faq#q-why-doesnt-open-webui-natively-support-provider-xs-proprietary-api)**.
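Because reasoning is serialized as text wrapped in tags, clients can recover it with plain string processing. A hedged sketch, assuming the `<think>` tag convention described above:

```python
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    """Split a completion into (reasoning, answer) using <think> tags.
    Assumes the serialized-text convention described above."""
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if not match:
        return "", completion.strip()
    reasoning = match.group(1).strip()
    answer = completion[match.end():].strip()
    return reasoning, answer

print(split_reasoning("<think>2+2=4</think>The answer is 4."))
```

Since the tags travel inside ordinary message content, this works identically across every backend that follows the Completions standard.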

**Workarounds:**

4 changes: 2 additions & 2 deletions docs/features/plugin/tools/openapi-servers/index.mdx
@@ -29,10 +29,10 @@ By leveraging OpenAPI, we eliminate the need for a proprietary or unfamiliar com

While OpenAPI tools are powerful for extending Open WebUI with external services, they have some constraints compared to native Python tools:

- **No real-time UI events**: OpenAPI tools cannot emit status updates, notifications, or request user input via the [event system](/features/plugin/development/events). They communicate via standard HTTP request/response only.
- **One-way events only**: OpenAPI tools can emit status updates, notifications, and other [one-way events](/features/plugin/development/events#-external-tool-events) via the REST endpoint. Open WebUI passes `X-Open-WebUI-Chat-Id` and `X-Open-WebUI-Message-Id` headers to enable this. However, interactive events (user input prompts, confirmations) are only available for native Python tools.
- **No streaming output**: Tool responses are returned as complete results, not streamed token-by-token.

If you need real-time UI feedback (progress updates, confirmations, streaming), consider implementing your tool as a [native Python Tool](/features/plugin/tools/development) instead.
If you need interactive UI feedback (confirmations, user input prompts), consider implementing your tool as a [native Python Tool](/features/plugin/tools/development) instead.
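For illustration, a tool server can read the two identifying headers mentioned above and attach them to a one-way event it posts back to Open WebUI. The payload shape below is a hypothetical sketch, not the documented wire format; consult the [events documentation](/features/plugin/development/events#-external-tool-events) for the actual contract.

```python
def build_event(headers: dict, description: str) -> dict:
    """Assemble a one-way status event from the identifying headers
    Open WebUI sends with each tool call. The payload shape here is
    a hypothetical sketch, not the documented wire format."""
    return {
        "chat_id": headers["X-Open-WebUI-Chat-Id"],
        "message_id": headers["X-Open-WebUI-Message-Id"],
        "type": "status",
        "data": {"description": description},
    }

event = build_event(
    {"X-Open-WebUI-Chat-Id": "chat-123", "X-Open-WebUI-Message-Id": "msg-456"},
    "Fetching results...",
)
```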

## 🚀 Quickstart

4 changes: 2 additions & 2 deletions docs/features/plugin/tools/openapi-servers/mcp.mdx
@@ -45,8 +45,8 @@ So even though adding mcpo might at first seem like "just one more layer"—in r

✨ With mcpo, your local-only AI tools become cloud-ready, UI-friendly, and instantly interoperable—without changing a single line of tool server code.

:::note
**Limitation**: Since mcpo exposes MCP tools as OpenAPI endpoints, they inherit OpenAPI's limitations—specifically, they cannot use Open WebUI's [event system](/features/plugin/development/events) for real-time UI updates (status messages, notifications, user input prompts). These features are only available for native Python tools.
:::tip
**Events Supported**: While mcpo exposes MCP tools as OpenAPI endpoints, they can still emit events (status updates, notifications) via Open WebUI's [event REST endpoint](/features/plugin/development/events#-external-tool-events). Open WebUI automatically passes the required `X-Open-WebUI-Chat-Id` and `X-Open-WebUI-Message-Id` headers to your tool. However, interactive events requiring user input are only available for native Python tools.
:::

### ✅ Quickstart: Running the Proxy Locally
3 changes: 3 additions & 0 deletions docs/getting-started/quick-start/index.mdx
@@ -169,6 +169,9 @@ You are now ready to start using Open WebUI!
## Using Open WebUI with Ollama
If you're using Open WebUI with Ollama, be sure to check out our [Starting with Ollama Guide](/getting-started/quick-start/starting-with-ollama) to learn how to manage your Ollama instances with Open WebUI.

## Experimental: Open Responses
Open WebUI has experimental support for the [Open Responses](https://www.openresponses.org/) specification. See the [Starting with Open Responses Guide](/getting-started/quick-start/starting-with-open-responses) to learn how to enable it.

## Help Us Improve Open WebUI 🧪

**Want to help make Open WebUI better?** Consider running the [development branch](/#using-the-dev-branch-)! Testing dev builds is one of the most valuable contributions you can make—no code required. Just use it and report any issues you find on [GitHub](https://github.com/open-webui/open-webui/issues).
6 changes: 5 additions & 1 deletion docs/getting-started/quick-start/starting-with-llama-cpp.mdx
@@ -82,12 +82,16 @@ Start the model server using the llama-server binary. Navigate to your llama.cpp
- --ctx-size: Token context length (can increase if RAM allows)
- --n-gpu-layers: Layers offloaded to GPU for faster performance

Once the server runs, it will expose a local OpenAI-compatible API on:
Once the server runs, it will expose a local **OpenAI-compatible API** (Chat Completions) on:

```
http://127.0.0.1:10000
```
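Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch using only the Python standard library, shown here building the request without actually sending it; it assumes llama-server's usual `/v1/chat/completions` route:

```python
import json
import urllib.request

# Build (but don't send) a Chat Completions request against the local
# llama-server instance started above. The /v1/chat/completions route
# is the usual OpenAI-compatible endpoint exposed by llama-server.
payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
}
req = urllib.request.Request(
    "http://127.0.0.1:10000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# To send for real: urllib.request.urlopen(req)
print(req.full_url)
```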

:::tip
Open WebUI also supports the experimental [Open Responses](/getting-started/quick-start/starting-with-open-responses) specification for providers that implement it.
:::

---

## Step 4: Connect Llama.cpp to Open WebUI