Companion repository for the Builder Journal — a hands-on guide to building production-ready GenAI applications with Ragbits.
This repository contains working code for each section of the Builder Journal. Each section builds on the previous one, progressively adding capabilities to a chat application:
| Section | What You Build | Key Concepts |
|---|---|---|
| 1. LLM Proxy | Streaming chat API with web UI | ChatInterface, LiteLLM, RagbitsAPI |
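The LLM Proxy section builds a chat class whose response streams to the client token by token. The real implementation uses Ragbits' `ChatInterface` and LiteLLM; the sketch below shows only the underlying async-generator streaming pattern with a stubbed token source, so every name here is illustrative rather than the actual Ragbits API.

```python
import asyncio
from typing import AsyncIterator


async def fake_llm_stream(prompt: str) -> AsyncIterator[str]:
    """Stand-in for a streaming LLM call (a real one would go through LiteLLM)."""
    for token in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0)  # yield control, as a real network call would
        yield token


class SimpleStreamingChat:
    """Illustrative stand-in for a Ragbits-style chat class:
    chat() is an async generator that relays tokens as they arrive."""

    async def chat(self, message: str) -> AsyncIterator[str]:
        async for token in fake_llm_stream(message):
            yield token


async def main() -> None:
    # The web UI consumes chunks incrementally; here we just collect them.
    chat = SimpleStreamingChat()
    chunks = [chunk async for chunk in chat.chat("Hi")]
    print("".join(chunks))


if __name__ == "__main__":
    asyncio.run(main())
```

The point of the pattern is that nothing buffers the full response: each token is forwarded to the caller as soon as the model produces it.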
Prerequisites:

- Python 3.10+
- An API key from any supported LLM provider (OpenAI, Anthropic, Azure, etc.)
Install dependencies using uv:
```shell
uv sync
```

Set your LLM provider API key:

```shell
# OpenAI
export OPENAI_API_KEY="your-api-key"

# Or Anthropic
export ANTHROPIC_API_KEY="your-api-key"

# Or any other provider supported by LiteLLM
```

Start the chat server:

```shell
ragbits api run ragbits_example.main:SimpleStreamingChat
```

Open http://127.0.0.1:8000 in your browser to access the chat interface.
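The environment variables above work because LiteLLM resolves credentials by provider: the provider named in the model string determines which key is read. The helper below mimics that lookup for a few providers purely to illustrate the convention; the mapping and function are illustrative, not LiteLLM internals.

```python
import os

# Illustrative provider-prefix -> environment-variable mapping
# (mirrors the convention LiteLLM documents; not its actual internals).
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "azure": "AZURE_API_KEY",
}


def resolve_api_key(model: str) -> str:
    """Return the API key for a model string like 'anthropic/claude-3-5-sonnet'.

    Model strings without a prefix are treated as OpenAI models here,
    matching the common default.
    """
    provider = model.split("/", 1)[0] if "/" in model else "openai"
    env_var = PROVIDER_ENV_VARS.get(provider)
    if env_var is None:
        raise ValueError(f"Unsupported provider: {provider!r}")
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set")
    return key
```

This is why switching providers requires no code change: export a different key, point the chat class at a different model string, and the right credential is picked up.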
```shell
# Custom host and port
ragbits api run ragbits_example.main:SimpleStreamingChat --host 0.0.0.0 --port 9000

# Auto-reload for development
ragbits api run ragbits_example.main:SimpleStreamingChat --reload

# Debug mode
ragbits api run ragbits_example.main:SimpleStreamingChat --debug
```

Project structure:

```
ragbits-example/
├── src/
│   └── ragbits_example/
│       ├── __init__.py
│       └── main.py        # Chat application implementation
├── pyproject.toml         # Project configuration and dependencies
└── README.md
```
Suggested workflow for each section:

- Read the section in the Builder Journal
- Reference the corresponding code in this repository
- Run and experiment with the application
- Modify and extend based on your needs
The documentation explains the concepts while this repository provides the working implementation.
- Builder Journal — Step-by-step tutorial
- Ragbits Documentation — Full documentation
- Ragbits GitHub — Main repository
- LiteLLM Providers — Supported LLM providers