A fully offline, secure, and open-source chatbot built with:
- 🔧 Ollama (local LLM engine)
- 🧱 Lit (web components for modern UI)
- 🧠 Langchain + Pyodide (LLM orchestration in WebAssembly)
- 🧘 And others
👉 🔗 Live Demo: https://freedomson.github.io/midinho/
Private Ollama is a client-side chatbot application that leverages modern web and AI technology to provide:
- ✅ Full local execution of LLM inference
- ✅ Zero data leakage – no internet access required post-load
- ✅ Cross-platform browser support
- ✅ Extendability with WebAssembly-based Python (Pyodide)
| Component | Technology | Role |
|---|---|---|
| 🧠 LLM Engine | Ollama | Serves LLM models locally (e.g., Mistral, LLaMA) |
| ⚛️ Frontend UI | Lit + Pico CSS | Creates modern, reactive web components |
| 🔗 LLM Pipeline | Langchain + Pyodide | Enables Python-based logic and toolchains in-browser |
| 🌐 Runtime | Pyodide + WebAssembly | Runs Python Langchain in the browser, no server needed |
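Under the hood, the pipeline in the table above boils down to HTTP calls against Ollama's local REST API. A minimal sketch of building the request body for Ollama's `/api/generate` endpoint (the model name is illustrative; any model pulled into Ollama works):

```python
import json

# Ollama's default local endpoint for completions
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(prompt: str, model: str = "llama3") -> bytes:
    """Build the JSON body for a non-streaming Ollama /api/generate call."""
    payload = {
        "model": model,    # any locally pulled model, e.g. "mistral"
        "prompt": prompt,
        "stream": False,   # single JSON response instead of chunked output
    }
    return json.dumps(payload).encode("utf-8")

# In the browser, Pyodide-side code would POST this body to OLLAMA_URL;
# from plain CPython, urllib.request.urlopen(OLLAMA_URL, data=body) works too.
body = build_generate_request("Why is the sky blue?")
```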
- Once loaded in the browser, no internet is required to:
- Run the chatbot
- Query the model
- Process prompts or tools
- No telemetry, logging, or external API calls
- Everything runs:
- In your browser (UI + logic)
- On your machine (model served locally via Ollama)
Ideal for:
- Secure research
- Education in remote/offline environments
- Local-only enterprise chatbots
- LLM experimentation sandbox
```
+---------------------------+
|        Browser UI         |
|   (Lit Web Components)    |
+------------+--------------+
             |
             v
+---------------------------+
|  Pyodide (Python WASM)    |
| + Langchain Orchestration |
+------------+--------------+
             |
             v
+---------------------------+
|   Ollama Local Engine     |
| (Runs on localhost:11434) |
+---------------------------+
```
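The middle layer of the diagram can be sketched as a `chat()` entry point that the Lit UI would invoke through Pyodide. The transport is injected, so the same logic can run against a live Ollama server or a stub; all names here are illustrative, not the project's actual code:

```python
import json
from typing import Callable

def make_chat(transport: Callable[[bytes], bytes], model: str = "llama3"):
    """Return a chat(prompt) function bound to a transport.

    In the browser, `transport` would POST to localhost:11434/api/generate
    (e.g. via Pyodide's pyfetch); in tests it can be a local stub.
    """
    def chat(prompt: str) -> str:
        body = json.dumps(
            {"model": model, "prompt": prompt, "stream": False}
        ).encode("utf-8")
        raw = transport(body)                # send request, get raw JSON bytes
        return json.loads(raw)["response"]   # Ollama replies {"response": "..."}
    return chat

# Stub transport standing in for the real HTTP call:
echo = make_chat(lambda body: json.dumps({"response": "pong"}).encode("utf-8"))
print(echo("ping"))  # → pong
```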
- Start Ollama locally on your machine

  Download and install Ollama locally, then:

  ```shell
  # Run llama3
  ollama run llama3

  # Test status
  curl http://localhost:11434
  # → Ollama is running
  ```
- 🧪 Open Live Example UI
👉 https://freedomson.github.io/midinho/
- No backend required
- Loads Pyodide + Langchain in-browser
- Connects to local Ollama for LLM completions
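Before opening the demo, you can verify Ollama is reachable. A small Python equivalent of the `curl` check above (Ollama's root endpoint answers with the plain-text banner "Ollama is running"):

```python
import urllib.request
import urllib.error

def ollama_is_running(host: str = "localhost", port: int = 11434) -> bool:
    """Return True if an Ollama server answers on host:port, else False."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}", timeout=2) as resp:
            return resp.status == 200   # root endpoint: "Ollama is running"
    except (urllib.error.URLError, OSError):
        return False                    # connection refused / timed out

if ollama_is_running():
    print("Ollama is up on localhost:11434")
```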
- Start Ollama locally on your machine

  Download and install Ollama locally, then:

  ```shell
  # Run llama3
  ollama run llama3

  # Test status
  curl http://localhost:11434
  # → Ollama is running
  ```
- Serve the live LLM chatbot with Python

  ```shell
  cd static
  python server.py
  ```

- 🧪 Open Live Example UI
  👉 http://localhost:8000/
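Any static file server works for this step. If the repo's `server.py` is unavailable, a minimal stand-in (an assumption, not the project's actual script) could look like:

```python
# Minimal stand-in for server.py: serve the static/ directory over HTTP.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server(directory: str = ".", port: int = 8000) -> HTTPServer:
    """Create (but don't start) an HTTP server rooted at `directory`."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    return HTTPServer(("localhost", port), handler)

if __name__ == "__main__":
    server = make_server(port=8000)
    print("Serving on http://localhost:8000/")
    server.serve_forever()
```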
💬 Request features or report issues:
👉 GitHub Issues
Private Ollama is open source and respects your digital freedom. Use it. Hack it. Share it.

