Turn raw documents into a living, interlinked personal wiki — compiled and maintained by Claude AI.
Compilo = Compile + Wiki. You feed it raw sources. Claude does the rest.
Drop in a PDF, spreadsheet, URL, image, or paste text → Claude reads it, writes a structured wiki article with a summary, tags, and [[backlinks]] to related articles — and keeps the whole knowledge base coherent over time.
Inspired by Andrej Karpathy's LLM Knowledge Base idea (April 2026). This is a full working implementation as a web app, not a script.
Built entirely on a phone using Claude Code. No laptop. No desktop.
Most personal knowledge tools make you do the work. You highlight, tag, link, summarise. You build the graph. You maintain it.
Compilo flips this. You throw in sources. Claude compiles them — the same way a compiler turns source code into something structured and useful. The wiki grows, stays linked, and stays coherent. You just read and ask questions.
No RAG. No vector database. No embeddings. Just Claude writing and maintaining plain Markdown — exactly as Karpathy described.
| Feature | Description |
|---|---|
| Compile | Any source → structured wiki article with summary, tags, [[backlinks]] |
| Wiki viewer | Read articles with rendered backlinks, navigate by clicking links, delete stale articles |
| Grounded Q&A | Ask questions in plain English, get answers with source citations, full history with download |
| Knowledge Graph | Obsidian-like force graph: labelled nodes, tag colours, hover highlights, double-click zoom |
| Health check | Claude audits for contradictions, missing concepts, unlinked pairs, coverage gaps |
| Search | Full-text search across all articles |
| Export | Download as Obsidian vault (.zip) or single Markdown file |
| Multiple KBs | Separate knowledge bases for different projects or topics |
| Schema editor | Give Claude per-KB instructions: topic, style, citation format |
| Format | How it's processed |
|---|---|
| PDF | pdf-parse text extraction |
| DOCX | mammoth HTML→text |
| TXT / MD | Read directly |
| XLSX / XLS | SheetJS — sheets → tab-separated text |
| CSV | SheetJS — rows → tab-separated text |
| PPTX | officeparser — slide text extraction |
| JPG / PNG / GIF / WEBP | Sent to Claude Haiku vision API — described in text |
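The table above amounts to a dispatch on file extension. A minimal sketch of that mapping (illustrative only — the actual logic lives in `backend/services/fileParser.js`, and `parserFor` is a hypothetical helper name):

```javascript
// Map a filename to the parser used for it, per the format table above.
const PARSERS = {
  pdf: "pdf-parse",
  docx: "mammoth",
  txt: "direct",
  md: "direct",
  xlsx: "sheetjs",
  xls: "sheetjs",
  csv: "sheetjs",
  pptx: "officeparser",
  jpg: "haiku-vision",
  png: "haiku-vision",
  gif: "haiku-vision",
  webp: "haiku-vision",
};

function parserFor(filename) {
  const ext = filename.split(".").pop().toLowerCase();
  return PARSERS[ext] || null; // unsupported types are rejected upstream
}

console.log(parserFor("paper.PDF"));  // "pdf-parse"
console.log(parserFor("photo.webp")); // "haiku-vision"
```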
| Source | How it's handled |
|---|---|
| Any webpage | cheerio HTML extraction |
| YouTube videos | youtube-transcript + video metadata |
| GitHub repos | GitHub API metadata + raw README from raw.githubusercontent.com |
| Google Docs | Export URL (?format=txt) — public docs only |
| Google Sheets | Export URL (?format=csv) — public sheets only |
| Substack posts | Optimised selectors for .body.markup / article |
| Twitter / X threads | Nitter proxy with 3-instance fallback |
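URL ingestion similarly routes on hostname. A hedged sketch of how `backend/routes/ingest.js` might classify an incoming URL (`classifyUrl` is a hypothetical name; the real routing may differ):

```javascript
// Classify an ingested URL by platform, mirroring the source table above.
function classifyUrl(raw) {
  const url = new URL(raw);
  const host = url.hostname.replace(/^www\./, "");
  if (host === "youtube.com" || host === "youtu.be") return "youtube";
  if (host === "github.com") return "github";
  if (host === "docs.google.com") {
    if (url.pathname.startsWith("/document/")) return "gdoc";
    if (url.pathname.startsWith("/spreadsheets/")) return "gsheet";
  }
  if (host === "twitter.com" || host === "x.com") return "twitter";
  if (host.endsWith(".substack.com")) return "substack";
  return "webpage"; // fallback: generic cheerio extraction
}

console.log(classifyUrl("https://www.youtube.com/watch?v=abc")); // "youtube"
console.log(classifyUrl("https://example.com/post"));            // "webpage"
```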
Type or paste any raw text directly — no file needed. Title is auto-detected from a # Heading in the content.
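Title auto-detection can be as simple as grabbing the first level-one heading — a sketch of the idea (hypothetical helper; the actual implementation is in `backend/routes/ingestText.js`):

```javascript
// Detect a title from the first "# Heading" line in pasted text.
function detectTitle(text) {
  const match = text.match(/^#\s+(.+)$/m);
  return match ? match[1].trim() : null; // null → caller picks a fallback title
}

console.log(detectTitle("# My Notes\n\nBody text")); // "My Notes"
console.log(detectTitle("no heading here"));         // null
```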
Ask questions in plain English. Claude answers using only your wiki — no outside knowledge. Every answer:
- Shows source citations (clickable links to articles)
- Is saved to per-KB history that persists across sessions
- Can be downloaded as a `.md` file
| What | Cost |
|---|---|
| Claude Code | Free tier available |
| Replit or Railway | Free tier available |
No local setup required. No laptop required. Built and deployed entirely from a phone.
- Go to railway.app → New Project → Deploy from GitHub repo
- Select your fork of this repo
- Variables tab → add `ANTHROPIC_API_KEY=sk-ant-...`
- Railway builds and gives you a live HTTPS URL in ~2 minutes
The railway.toml in this repo handles everything automatically.
- Go to replit.com → Create Repl → Import from GitHub
- Paste your fork URL
- Secrets (lock icon) → add `ANTHROPIC_API_KEY=sk-ant-...`
- Click Run
The .replit config in this repo handles the build and start automatically.
```bash
git clone https://github.com/sdesaurabh/compilo
cd compilo
npm install
cp .env.example .env   # add your ANTHROPIC_API_KEY
npm run dev            # backend :3001 + frontend :5173
```

Open http://localhost:5173.
```
You add a source
 ├── File upload (PDF, DOCX, XLSX, PPTX, image, …)
 ├── URL (webpage, YouTube, GitHub, Google Docs, Substack, Twitter/X)
 └── Paste text directly
        ↓
Claude reads it and writes a wiki article (.md)
 · YAML frontmatter: title, summary, tags, date
 · [[backlinks]] to related articles already in your wiki
 · Clean Markdown body
        ↓
Article saved locally (filesystem + SQLite metadata)
        ↓
Read · Search · Ask questions · Run health check · Export
```
Every article is a plain .md file. No lock-in. Works in Obsidian, VS Code, or any text editor. Export your entire wiki as an Obsidian vault any time.
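For illustration, a compiled article might look like this (hypothetical content — the frontmatter fields shown are the ones listed above: title, summary, tags, date):

```markdown
---
title: Attention Is All You Need
summary: Introduces the Transformer, replacing recurrence with self-attention.
tags: [deep-learning, nlp]
date: 2025-01-15
---

The Transformer replaces recurrence with self-attention, enabling
parallel training over entire sequences. It builds on ideas covered
in [[Sequence Models]] and is the foundation of [[BERT]].
```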
| Variable | Required | Default | Description |
|---|---|---|---|
| `ANTHROPIC_API_KEY` | Yes | — | Get one at console.anthropic.com |
| `PORT` | No | `3001` | Port the server listens on |
| `DATA_DIR` | No | `./data` | Where wiki files and the database are stored |
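Put together, a minimal `.env` for local development looks like this (the key value is a placeholder; only `ANTHROPIC_API_KEY` is required):

```shell
# .env — only ANTHROPIC_API_KEY is required
ANTHROPIC_API_KEY=sk-ant-...
PORT=3001
DATA_DIR=./data
```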
| Layer | Tech |
|---|---|
| Backend | Node.js, Express |
| AI | Anthropic Claude API (claude-opus-4-6 for compilation, claude-sonnet-4-6 for Q&A / health, claude-haiku-4-5 for image vision) |
| Storage | SQLite (metadata) + local filesystem (.md files) |
| Frontend | React 19, Vite 5, Tailwind CSS |
| Graph | force-graph (2D canvas) |
| File parsing | pdf-parse, mammoth, SheetJS (xlsx), officeparser, cheerio |
| Export | archiver (zip) |
```
compilo/
  backend/
    server.js             ← Express entry point
    routes/
      upload.js           ← File upload (PDF, DOCX, XLSX, PPTX, images, …)
      compile.js          ← POST /api/compile/:fileId
      ingest.js           ← URL ingestion (web, YouTube, GitHub, GDocs, Substack, Twitter)
      ingestText.js       ← POST /api/ingest-text (paste text)
      wiki.js             ← GET/DELETE /api/wiki/:slug
      qa.js               ← Q&A + history (GET/DELETE /api/qa/history)
      search.js           ← Full-text search
      graph.js            ← Force-graph node/link data
      export.js           ← Obsidian zip + single .md
      healthcheck.js      ← Claude wiki audit
      kb.js               ← Knowledge base CRUD
      files.js            ← Source file list + delete (cascades to article)
      schema.js           ← Per-KB schema editor
    services/
      claude.js           ← All Claude API calls (compile, Q&A, health, image vision)
      db.js               ← SQLite queries (source_files, wiki_articles, qa_history)
      wikiManager.js      ← Markdown I/O, backlink parsing, article delete
      fileParser.js       ← Text extraction for all supported file types
      youtubeExtractor.js ← YouTube transcript + metadata
  frontend/
    src/
      App.jsx             ← Router + responsive sidebar nav
      api.js              ← KB-aware fetch wrapper (injects X-KB-ID header)
      pages/
        Home.jsx          ← Three-tab ingestion: Upload / URL / Paste Text
        Wiki.jsx          ← Article grid with search + per-card delete
        Article.jsx       ← Article viewer with metadata sidebar + delete button
        QA.jsx            ← Q&A with history panel, re-ask, download
        Graph.jsx         ← Obsidian-like force graph
        History.jsx       ← Source file history with badges
        KBManager.jsx     ← Knowledge base management
        Settings.jsx      ← Schema editor
        HealthCheck.jsx   ← Wiki audit UI
      components/
        Uploader.jsx      ← Drag-and-drop file upload with type pills
        URLIngestor.jsx   ← URL input with platform pills + link-following UI
        TextIngestor.jsx  ← Paste/type text ingestion
        WikiCard.jsx      ← Article card with hover delete button
        MarkdownViewer.jsx← [[wikilink]] renderer
        KBSelector.jsx    ← KB switcher dropdown
        Sidebar.jsx       ← Desktop collapsible nav
        MobileDrawer.jsx  ← Mobile slide-out nav
        SourceBadge.jsx   ← File-type badge component
  docs/
    ARCHITECTURE.md       ← Technical deep-dive
    ROADMAP.md            ← What shipped and what's coming
```
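The backlink parsing that feeds both `wikiManager.js` and the graph endpoint can be sketched like this (hypothetical helper names and slug rule — the real implementation may differ):

```javascript
// Extract [[wikilink]] targets from an article body.
function extractLinks(markdown) {
  const links = [];
  for (const m of markdown.matchAll(/\[\[([^\]]+)\]\]/g)) {
    links.push(m[1].trim());
  }
  return links;
}

// Build the { nodes, links } shape a 2D force-graph expects.
// articles: [{ slug, title, body }]
function buildGraph(articles) {
  const nodes = articles.map((a) => ({ id: a.slug, name: a.title }));
  const slugs = new Set(articles.map((a) => a.slug));
  const links = [];
  for (const a of articles) {
    for (const target of extractLinks(a.body)) {
      const slug = target.toLowerCase().replace(/\s+/g, "-");
      // Only link to articles that actually exist in this KB.
      if (slugs.has(slug)) links.push({ source: a.slug, target: slug });
    }
  }
  return { nodes, links };
}
```

Because everything is plain Markdown, this parsing is a regex pass over files rather than a database join — which is also why the export to an Obsidian vault is lossless.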
See docs/ROADMAP.md for the full current state.
Top priorities still ahead:
- Semantic search via vector embeddings
- MCP server (query your wiki inside Claude Code)
- Marp slide generation from articles
- CLI (`compilo add paper.pdf`)
- Local model support (Ollama)
Contributions are welcome. See CONTRIBUTING.md.
Good first issues are labelled `good first issue`.
MIT — see LICENSE.