> ⚠️ **Archived Project**
>
> This repository has been archived and is no longer actively maintained by the original authors; it remains available for reference and community use. BrainDrive is building a new personal AI system on top of the Personal AI Architecture, an MIT-licensed, open architecture with zero lock-in. See the architecture repo for the foundation, and visit braindrive.ai for updates.
Welcome to our collection of custom OpenWebUI pipelines! These pipelines enhance the capabilities of your OpenWebUI instance by integrating advanced logic, external services, and modular workflows.
OpenWebUI pipelines enable flexible workflows, empowering users to handle complex tasks efficiently. With support for various providers (e.g., OpenAI, Ollama, PostgreSQL, Neo4j), these pipelines deliver robust memory management, transcript-based chat, and more.
## Available Pipelines

- **Chat with YouTube (OpenAI)**
  - **Description:** Searches YouTube videos, retrieves transcripts, generates summaries, and enables Q&A over video transcripts. Uses OpenAI's GPT models for processing.
  - **Features:**
    - Video transcript retrieval and summarization.
    - Video content search and Q&A.
    - Integrates with OpenAI for natural language understanding.
- **Chat with YouTube (Ollama)**
  - **Description:** Similar to the OpenAI version, but uses Ollama's local LLMs for transcript processing.
  - **Features:**
    - Local transcript processing with Ollama.
    - No external API calls, ensuring privacy and cost efficiency.
- **Memory Pipeline (OpenAI + PostgreSQL)**
  - **Description:** A long-term memory pipeline that uses OpenAI for embeddings and Supabase PostgreSQL (with pgvector) for memory storage. Ideal for scalable cloud setups.
  - **Features:**
    - Stores and retrieves vectorized memories.
    - Embedding support via OpenAI models.
    - Memory storage in Supabase PostgreSQL.
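Retrieval in a vector-memory pipeline like this comes down to ranking stored embeddings against the query embedding, typically by cosine similarity (pgvector exposes this as its cosine-distance operator `<=>`). The pure-Python version below is only an illustration of the idea, not code from the pipeline:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity in [-1, 1]; higher means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def top_k(query: list[float], memories: dict[str, list[float]], k: int = 3) -> list[str]:
    """Return the keys of the k stored memories most similar to the query."""
    ranked = sorted(memories, key=lambda m: cosine_similarity(query, memories[m]), reverse=True)
    return ranked[:k]
```

In the real pipeline this ranking happens inside PostgreSQL, so only the top matches ever leave the database.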
- **Memory Pipeline (OpenAI + Neo4j)**
  - **Description:** A local-first memory solution using OpenAI for embeddings and Neo4j for graph-based memory storage. Runs entirely on your device via Docker.
  - **Features:**
    - Local vectorized memory storage using Neo4j.
    - OpenAI-based embeddings for message processing.
    - Full data persistence on local devices.
- **Memory Pipeline (Ollama + Neo4j)**
  - **Description:** Similar to the OpenAI + Neo4j pipeline, but uses Ollama's local LLMs for embedding. A fully local solution with no external dependencies.
  - **Features:**
    - Local embeddings using Ollama.
    - Neo4j for graph-based memory storage.
    - Privacy-first and cost-effective.
## Installation

1. Copy the GitHub URL of the pipeline you want to install.
2. In your OpenWebUI instance, go to **Admin Panel -> Settings -> Pipelines**.
3. Paste the GitHub URL into the **Install from GitHub URL** field.
4. Click the **Install / Download** icon to complete the installation.
## Docker Setup (Local Neo4j Pipelines)

For the local Neo4j-based memory pipelines, use the provided `docker-compose.yml` to set up Neo4j and OpenWebUI with the pipelines pre-installed.

1. Copy the `docker-compose.yml` file to your system.
2. Run the following command in the directory containing the file:

   ```
   docker-compose up -d
   ```

3. Access Neo4j at http://localhost:7474 (username: `neo4j`, password: `my_password123`).
4. Your OpenWebUI instance will have the memory pipeline pre-installed and ready to use.
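If you no longer have the original `docker-compose.yml`, a file along these lines reproduces the setup described above. The image tags, port mappings, and the OpenWebUI service layout are assumptions for illustration; only the Neo4j credentials come from this README:

```yaml
version: "3.8"

services:
  neo4j:
    image: neo4j:5
    environment:
      # Credentials must match what the memory pipeline is configured to use.
      NEO4J_AUTH: neo4j/my_password123
    ports:
      - "7474:7474"   # HTTP browser UI
      - "7687:7687"   # Bolt protocol
    volumes:
      - neo4j_data:/data

  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    depends_on:
      - neo4j

volumes:
  neo4j_data:
```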
## Troubleshooting

If you encounter an error like:

```
FieldValidatorDecoratorInfo.__init__() got an unexpected keyword argument 'json_schema_input_type'
```

upgrade pydantic to version 2.7.4 inside the Docker container:

```
pip install --upgrade pydantic==2.7.4
```

## Resources

- OpenWebUI Pipelines Documentation
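To confirm which pydantic version a container (or any Python environment) actually has before or after the upgrade, a quick check along these lines works. The helper and its simple numeric comparison are our own sketch (not a full PEP 440 parser, and not part of the pipelines):

```python
from importlib.metadata import PackageNotFoundError, version


def meets_minimum(installed: str, required: str) -> bool:
    """Compare plain dotted version strings numerically, e.g. 2.10.0 >= 2.7.4."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)


def pydantic_ok(required: str = "2.7.4") -> bool:
    """True if pydantic is installed and at least the required version."""
    try:
        return meets_minimum(version("pydantic"), required)
    except PackageNotFoundError:
        return False
```

A numeric comparison matters here: a naive string comparison would rank "2.10.0" below "2.7.4".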
- Supabase PostgreSQL with pgvector
- Neo4j Graph Database
- OpenAI API Documentation
- Ollama Documentation
- Mem0 Documentation
## License

This project is licensed under the MIT License.