This repository was archived by the owner on Mar 8, 2026. It is now read-only.

⚠️ This repository has been archived. BrainDrive is building a new personal AI system on top of the Personal AI Architecture — an MIT-licensed, open architecture with zero lock-in. See the architecture repo for the foundation, and visit braindrive.ai for updates.


⚠️ Archived Project
This project is no longer actively maintained by the original authors.
The repository remains available for reference and community use.

BrainDriveAI's Custom Pipelines for OpenWebUI

Welcome to our collection of custom OpenWebUI pipelines! These pipelines enhance the capabilities of your OpenWebUI instance by integrating advanced logic, external services, and modular workflows.


🚀 Pipelines Overview

OpenWebUI pipelines enable flexible, modular workflows for handling complex tasks. These pipelines support multiple LLM providers (OpenAI, Ollama) and storage backends (PostgreSQL, Neo4j), delivering robust memory management, transcript-based chat, and more.

Available Pipelines

1. Chat with YouTube Pipeline

OpenAI Version
  • Chat with YouTube (OpenAI)
    • Description:
      Searches YouTube videos, retrieves transcripts, generates summaries, and enables Q&A over video transcripts. Uses OpenAI's GPT for processing.
    • Features:
      • Video transcript retrieval and summarization.
      • Video content search and Q&A.
      • Integrates with OpenAI for natural language understanding.
Ollama Version
  • Chat with YouTube (Ollama)
    • Description:
      Similar to the OpenAI version, but uses Ollama's local LLMs for transcript processing.
    • Features:
      • Local transcript processing with Ollama.
      • No external API calls, ensuring privacy and cost efficiency.
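The transcript-based flow described above can be sketched in a few lines. This is an illustrative outline, not code from this repository: `build_qa_prompt` and `fetch_transcript` are hypothetical helper names, and the retrieval step assumes the third-party `youtube-transcript-api` package (whose API differs slightly between versions).

```python
# Illustrative sketch of the transcript Q&A flow these pipelines implement.
# Helper names are hypothetical, not taken from this repository.

def build_qa_prompt(segments: list[dict], question: str, max_chars: int = 4000) -> str:
    """Join transcript segments into a context block and append the question."""
    text = " ".join(seg["text"] for seg in segments)[:max_chars]
    return (
        "Answer the question using only this video transcript.\n\n"
        f"Transcript:\n{text}\n\nQuestion: {question}"
    )

def fetch_transcript(video_id: str) -> list[dict]:
    # Network call; requires `pip install youtube-transcript-api`.
    # (Newer versions of the package expose an instance-based API instead.)
    from youtube_transcript_api import YouTubeTranscriptApi
    return YouTubeTranscriptApi.get_transcript(video_id)

# Example with a stub transcript (no network needed):
segments = [{"text": "Hello and welcome."}, {"text": "Today we cover pgvector."}]
prompt = build_qa_prompt(segments, "What is covered?")
```

The assembled prompt is then sent to the configured model (GPT via the OpenAI API, or a local model via Ollama, depending on the pipeline variant).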

2. Memory Pipelines

OpenAI + PostgreSQL (Supabase)
  • Memory Pipeline (OpenAI + PostgreSQL)
    • Description:
      A long-term memory pipeline that uses OpenAI for embeddings and Supabase PostgreSQL (with pgvector) for memory storage. Ideal for scalable cloud setups.
    • Features:
      • Stores and retrieves vectorized memories.
      • Embedding support via OpenAI models.
      • Memory storage in Supabase PostgreSQL.
OpenAI + Neo4j (Local/Docker)
  • Memory Pipeline (OpenAI + Neo4j)
    • Description:
      A local-first memory solution using OpenAI for embeddings and Neo4j for graph-based memory storage. Runs entirely on your device via Docker.
    • Features:
      • Local vectorized memory storage using Neo4j.
      • OpenAI-based embeddings for message processing.
      • Full data persistence on local devices.
Ollama + Neo4j (Local/Docker)
  • Memory Pipeline (Ollama + Neo4j)
    • Description:
Similar to the OpenAI + Neo4j pipeline, but uses Ollama's local LLMs for embeddings. A fully local solution with no external dependencies.
    • Features:
      • Local embeddings using Ollama.
      • Neo4j for graph-based memory storage.
      • Privacy-first and cost-effective.
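All three memory pipelines share the same underlying pattern: embed each message as a vector, store it, and recall the most similar memories for a new query. The sketch below stubs both the embedding model and the store in-process purely for illustration; the real pipelines delegate embedding to OpenAI or Ollama and storage to pgvector or Neo4j, and none of these names come from this repository.

```python
# Minimal in-process sketch of the vector-memory pattern these pipelines use.
# A real pipeline would call an OpenAI/Ollama embedding model and persist
# vectors in pgvector or Neo4j instead.
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is all-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    def __init__(self, embed):
        self.embed = embed          # callable: str -> list[float]
        self.items = []             # list of (text, vector) pairs

    def add(self, text: str) -> None:
        self.items.append((text, self.embed(text)))

    def search(self, query: str, k: int = 3) -> list[str]:
        """Return the k stored texts most similar to the query."""
        qv = self.embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(it[1], qv), reverse=True)
        return [text for text, _ in ranked[:k]]

# Toy embedding (character frequencies) so the example runs offline.
def toy_embed(text: str) -> list[float]:
    return [float(text.lower().count(c)) for c in "abcdefghijklmnopqrstuvwxyz"]

store = MemoryStore(toy_embed)
store.add("user likes neo4j graphs")
store.add("weather was sunny yesterday")
```

Swapping `toy_embed` for an API-backed embedding function and `MemoryStore` for a pgvector or Neo4j client is what distinguishes the three pipeline variants listed above.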

📦 Installation and Setup

Installing Pipelines

  1. Copy the GitHub URL of the pipeline file you want to install.

  2. Go to Admin Panel -> Settings -> Pipelines in your OpenWebUI instance.

  3. Paste the GitHub URL in the "Install from GitHub URL" field.

  4. Click the Install / Download icon to complete the installation.

Setting Up the Dockerized Neo4j Memory Pipelines

For local Neo4j-based memory pipelines, use the provided docker-compose.yml to set up Neo4j and OpenWebUI with pre-installed pipelines.

Steps:

  1. Copy the docker-compose.yml file to your system.
  2. Run the following command in the directory containing the file:
    docker-compose up -d
  3. Access Neo4j at http://localhost:7474 (username: neo4j, password: my_password123).
  4. Your OpenWebUI instance will have the pre-installed memory pipeline ready to use.
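The repository's actual docker-compose.yml is not reproduced here; as a rough, illustrative sketch of the stack the steps above describe (image names, ports, and volume names below are assumptions, not taken from this repo), a minimal compose file might look like:

```yaml
version: "3.8"
services:
  neo4j:
    image: neo4j:5
    environment:
      - NEO4J_AUTH=neo4j/my_password123   # matches the credentials in step 3
    ports:
      - "7474:7474"   # browser UI
      - "7687:7687"   # Bolt protocol
    volumes:
      - neo4j_data:/data
  pipelines:
    image: ghcr.io/open-webui/pipelines:main
    ports:
      - "9099:9099"
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
volumes:
  neo4j_data:
```

Prefer the file shipped with this repository when available, since it pre-installs the memory pipeline; the sketch above only shows the general shape of the setup.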

Troubleshooting

If you encounter issues like:

FieldValidatorDecoratorInfo.__init__() got an unexpected keyword argument 'json_schema_input_type'

Upgrade pydantic to version 2.7.4 inside the Docker container:

pip install --upgrade pydantic==2.7.4
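That error means the pydantic version inside the container predates the `json_schema_input_type` keyword that some pipeline code passes to `field_validator`. Before reinstalling, you can check the installed version programmatically; the helper below is a small sketch (function names are mine, not from this repository or from pydantic).

```python
# Check whether the installed pydantic meets the pinned version from the
# troubleshooting step above. Helper names are illustrative.
from importlib.metadata import PackageNotFoundError, version

def needs_upgrade(installed: str, required: tuple = (2, 7, 4)) -> bool:
    """True if `installed` (e.g. "2.6.1") is older than `required`."""
    parts = tuple(int(p) for p in installed.split(".")[:3])
    return parts < required

def check_pydantic() -> None:
    try:
        v = version("pydantic")
    except PackageNotFoundError:
        print("pydantic is not installed")
        return
    if needs_upgrade(v):
        print(f"pydantic {v} is too old; run: pip install --upgrade pydantic==2.7.4")
    else:
        print(f"pydantic {v} is OK")
```

Run the upgrade command from inside the container (for example via `docker exec -it <container> sh`), since upgrading pydantic on the host has no effect on the containerized OpenWebUI instance.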

🌐 License

This project is licensed under the MIT License.
