
# 🤖 AI Chatbot Backend (Python + Ollama + TTS)

This project uses a fully open-source AI stack to provide:

- 💬 Streaming AI chat replies
- 🎤 Voice output (text-to-speech)
- 🧠 Local LLM via Ollama
- 🌐 FastAPI backend

## 📦 Requirements

### Ollama (Local LLM Server)

Install Ollama from https://ollama.com.

After installing, pull a model:

- `ollama pull llama3`

Start the Ollama server:

- `ollama serve`

### Python Dependencies

Python 3.11.x is recommended (required for Coqui TTS compatibility).

Install the required packages:

- `pip install fastapi uvicorn requests TTS`

## 🚀 Running the Backend

Start the FastAPI server:

- `python -m uvicorn main:app --reload`

The backend will run on `http://127.0.0.1:8000` by default.

## 🔁 Chat Streaming Endpoint

- `POST /chat-stream`
- Streams the AI response from Ollama in real time.
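Ollama's streaming API (`POST /api/generate` with `"stream": true`) emits newline-delimited JSON chunks, each carrying an incremental `response` fragment and a `done` flag. The sketch below shows how such a stream can be turned back into text; the helper name `extract_tokens` and the sample chunks are illustrative, not taken from this project — the real `/chat-stream` handler would forward these fragments through a FastAPI `StreamingResponse`.

```python
import json

def extract_tokens(ndjson_lines):
    """Yield incremental text from Ollama-style streaming NDJSON chunks.

    Each chunk looks like {"response": "<fragment>", "done": false};
    the final chunk has "done": true.
    """
    for line in ndjson_lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        fragment = chunk.get("response", "")
        if fragment:
            yield fragment
        if chunk.get("done"):
            break

# Simulated stream; the real backend would read these lines from
# Ollama's HTTP response body instead.
sample = [
    '{"response": "Hel", "done": false}',
    '{"response": "lo!", "done": true}',
]
print("".join(extract_tokens(sample)))  # -> Hello!
```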

## 🔊 Voice (Text-to-Speech)

This project uses Coqui TTS (open-source) for a natural AI voice.

Example model:

- `tts_models/en/ljspeech/glow-tts`

Voice is generated after the full AI message has been received.
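A minimal sketch of generating speech with the Coqui TTS Python API and the example model above. The input text and output path are placeholders; the model weights are downloaded automatically on first use, so this is not something to run inside a request handler without caching the loaded model.

```python
# Requires `pip install TTS`; downloads the model on first run.
from TTS.api import TTS

# Load the example model named above (cache this object between requests).
tts = TTS(model_name="tts_models/en/ljspeech/glow-tts")

# Synthesize the completed AI message to a WAV file (path is arbitrary).
tts.tts_to_file(
    text="Hello! How can I help you today?",
    file_path="reply.wav",
)
```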

## 🌍 Web Support (CORS Enabled)

FastAPI is configured with CORS so that web clients served from other origins can call the API.
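The exact CORS settings aren't shown in this README; a typical configuration using FastAPI's built-in `CORSMiddleware` looks like this (the permissive `"*"` origins are a development-only assumption):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow browser clients from any origin to call the API.
# Restrict allow_origins to known hosts for production deployments.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)
```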

## 🧠 Stack Overview

| Layer | Technology |
| --- | --- |
| LLM | Ollama (e.g. llama3) |
| Backend API | FastAPI + Uvicorn |
| Voice | Coqui TTS |

## ✅ Features

- Fully offline capable (local AI)
- Streaming chat responses
- Voice replies
- Multi-platform (Android, iOS, Web, macOS, Windows)
- Open-source stack

## ⚠ Notes

- Ollama must be running before starting the backend.