# A Modern Literate Programming Tool
weft is a language-agnostic literate programming tool derived from
nuweb 1.64 by Preston Briggs. You write a single `.weft` file that
combines documentation with code in any programming language. weft
tangles it into executable source files and, when asked, weaves it
into typeset documentation (LaTeX or Markdown).
## Features

- Any programming language — C, Python, JavaScript, Go, Rust, Haskell, or anything else. One tool, no language-specific plugins.
- Single source of truth — one `.weft` file produces both runnable code and typeset documentation. No synchronization problems.
- Named fragments — define code in the order that makes sense for human understanding, not the order the compiler demands. Fragments can be defined, extended, and composed freely.
- Fragment parameters — fragments accept arguments (`@1`..`@9`), enabling reusable code templates without preprocessor tricks.
- Automatic cross-references — the woven documentation includes indices of file definitions, fragment names, and user-specified identifiers, with clickable hyperlinks.
- Sections — `@s`/`@S` partition large documents into independent namespaces, keeping fragment names and identifiers from colliding across different parts of a project.
- Global fragments — `@d+` makes a fragment accessible across all sections; `@+` and `@-` export and import identifiers between sections.
- Multiple output files — a single `.weft` file can produce any number of output files via `@o` directives.
- Include files — `@i` splits large webs into manageable pieces, with up to 10 levels of nesting and configurable search paths (`-I`).
- Per-file flags — `#line` directives (`-d`), comment styles (`-cc`, `-c+`, `-cp`), tab control (`-t`), and indentation suppression (`-i`) on each `@o` directive.
- Quoted fragments — `@q` defines fragments whose contents are not expanded, useful for embedding literal `@` sequences.
- Version stamping — `-V string` sets a version that `@v` inserts into output files.
- Hyperref support — the `-r` and `-h` options enable clickable cross-references in the PDF.
- Listings integration — `-l` uses the LaTeX `listings` package for pretty-printed scraps.
- Portable C — builds on any POSIX system with a C compiler.
- Tangle by default — `weft myfile.weft` tangles only. Weaving is requested explicitly with `-w`.
- `@W` weave format declaration — `@W md` or `@W tex` in the source file declares the weave format. `weft -w` uses this; `-w md` or `-w tex` on the command line overrides it.
- Markdown weave output — `weft -w md` produces a `.md` file that renders on GitHub, GitLab, and any Markdown viewer, without requiring a TeX installation.
- Knuth-style section markers — every scrap in the tangled output is wrapped with opening and closing comments that reference the source file and line number, making it easy to trace errors back to the literate source.
- Language auto-detection — `@l` tags and file extensions automatically select the right comment style and `#line` directives for over 250 built-in languages.
- Custom language definitions — the `@L` command lets you define comment styles for languages not in the built-in table, or override defaults.
- Unified `.weft` extension — one extension for all literate source files, regardless of weave format.
- Modular literate source — weft's own source is organized by concept across 14 `.weft` files, not one monolithic file.
- Clean C11 build — compiles with zero warnings under `-std=c11 -Wall -Wextra -pedantic`.
- `@l` language tag — `@l python` before `@o` or `@d` sets the language explicitly. Used for section markers in the tangle and `class="language-X"` in the Markdown weave.
- `-s` per-file flag — suppresses section markers and `#line` directives while preserving the language tag for Markdown highlighting.
- Syntax highlighting in Markdown — the Markdown weave emits `<code class="language-X">` automatically, enabling highlight.js, Prism, and GitHub syntax colouring.
- Reverse map (`-R`) — given a tangled output file and optionally a line number, emits JSON mapping back to the `.weft` source. No `.weft` files are needed — it operates on the section markers already in the tangled output, enabling CI/CD error translation and AI agent workflows.
- `--help` for humans and AI — comprehensive help output covering all options, usage examples, a directive quick reference, and an AI-specific section with rules and a step-by-step workflow.
- 61 automated tests.
## Building

```
git clone <repo-url> weft
cd weft
make bootstrap   # src/*.c → build/*.o → ./weft
```

The repository includes pre-generated `.c` files in `src/` so you can
build weft with nothing more than a C compiler — no prior installation
needed. Object files go to `build/`; the binary lands in the project root.
## Development workflow

```
vim literate/*.weft   # 1. edit source of truth
make weft             # 2. tangle → src/*.c → build/*.o → ./weft
make check            # 3. run all 57 tests
```

```
make doc          # → weft.pdf (developer reference with code)
make user-guide   # → weft-user-guide.pdf
```

```
make dist         # bootstrap (if needed) → tangle → compile → test → docs
```

Produces `weft`, `src/*.c` (bootstrap), `weft.pdf`, and
`weft-user-guide.pdf`. No intermediate files are left behind.

```
make clean        # remove build/ (object files)
make veryclean    # remove build/ + weft binary + PDFs + LaTeX intermediates
```

Rule of thumb: never edit `src/*.c` or `build/`. All changes go
through `.weft` files.
## Usage

```
./weft --help   # comprehensive reference (options, examples, AI workflow)
```

```
# Tangle only (default behavior)
./weft myprogram.weft

# Weave using the format declared by @W in the source
./weft -w myprogram.weft

# Weave to Markdown (override, ignores @W)
./weft -w md myprogram.weft

# Weave to LaTeX (override, ignores @W)
./weft -w tex myprogram.weft

# Tangle + weave
./weft -w md myprogram.weft   # tangles AND weaves

# Suppress tangle (weave only)
./weft -o -w md myprogram.weft
```

## Example

A complete LaTeX-style literate file (`example.weft`):

```
\documentclass{article}
\begin{document}

Here is a complete C program:

@o hello.c
@{#include <stdio.h>
int main() {
    @<Print greeting@>
    return 0;
}
@}

The greeting is straightforward:

@d Print greeting
@{printf("Hello, literate world!\n");
@}

\end{document}
```

Running `weft example.weft` produces `hello.c`:

```c
#include <stdio.h>
int main() {
    printf("Hello, literate world!\n");
    return 0;
}
```

weft also accepts Markdown-based literate files. The same example in Markdown style:

```
# Hello

Here is a complete C program:

@l c
@o hello.c
@{#include <stdio.h>
int main() {
    @<Print greeting@>
    return 0;
}
@}

The greeting is straightforward:

@d Print greeting
@{printf("Hello, literate world!\n");
@}
```

The `@l c` tag automatically enables `/* */` comments and
`#line` directives in the tangled output.
## Section markers

When a language is detected (from `@l` tags, `@L` definitions,
or file extensions), weft wraps each scrap in the tangled output
with opening and closing markers:

```js
// {3: routes.weft:42}
app.get("/api/users", async (req, res) => {
  // {7: database.weft:18}
  const users = await db.query("SELECT * FROM users");
  // {:7}
  res.json(users);
});
// {:3}
```

For C/C++, `#line` directives are also generated so the compiler
reports errors against the `.weft` source directly.
The marker format adapts to each language's comment syntax:

| Style | Languages (examples) | Opening | Closing |
|---|---|---|---|
| `/* */` | C, CSS | `/* {1: f.weft:5} */` | `/* {:1} */` |
| `//` | JS, Go, Rust, Java | `// {1: f.weft:5}` | `// {:1}` |
| `#` | Python, Shell, Ruby | `# {1: f.weft:5}` | `# {:1}` |
| `--` | Lua, SQL, Haskell | `-- {1: f.weft:5}` | `-- {:1}` |
| `<!-- -->` | HTML, XML | `<!-- {1: f.weft:5} -->` | `<!-- {:1} -->` |
For languages not in the built-in table, define them in the preamble:

```
@L nim #        % Nim uses # comments
@L glsl // +d   % GLSL uses // and supports #line
@L c //         % override built-in: use // for C
```

Syntax: `@L name style [+d]`, where *style* is one of `//`, `#`, `--`,
`/*`, or `<!--`, and `+d` enables `#line` directives.
## Command-line flags

| Flag | Effect |
|---|---|
| `-t` | Suppress weave output (even if `-w` is given) |
| `-o` | Suppress tangled output files |
| `-c` | Overwrite output files without comparing |
| `-v` | Verbose progress reporting |
| `-n` | Sequential scrap numbering (instead of page numbers) |
| `-s` | Suppress scrap lists after each scrap |
| `-d` | List dangling identifier references |
| `-x` | Include cross-references in output file comments |
| `-w [fmt]` | Weave: `-w` alone uses `@W`; `-w md` or `-w tex` overrides |
| `-m` | Emit JSON map of document structure to stdout (no tangle/weave) |
| `-e name` | Extract fragment and dependencies as Markdown to stdout |
| `-R file[:line]` | Reverse map: emit JSON tracing tangled output back to `.weft` source |
| `-r` | Enable hyperlinks (hyperref) in LaTeX output |
| `-h opts` | Provide hyperref package options |
| `-l` | Use `listings` package for scrap formatting |
| `-p path` | Prepend path to output file names |
| `-I path` | Add directory to include search path |
| `-V str` | Set version string for `@v` substitution |
| `--help` | Comprehensive help: options, examples, directives, AI workflow |
## Per-file flags

Flags on `@o` directives control per-file behavior:

| Flag | Effect |
|---|---|
| `-d` | Generate `#line` directives |
| `-i` | Suppress fragment indentation |
| `-t` | Suppress tab expansion |
| `-cc` | C comments (`/* */`) |
| `-c+` | C++ comments (`//`) |
| `-cp` | Shell/Perl comments (`#`) |
| `-s` | Suppress section markers and `#line` directives |
Explicit flags override auto-detection.
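As a sketch of how these flags are written (using only the flags listed above; file names and scrap contents are placeholders), each `@o` line carries its own settings:

```
@o server.c -cc -d
@{int main(void) { return 0; }
@}

@o deploy.sh -cp -t
@{echo "deploying"
@}
```

Here `server.c` gets `/* */` section markers plus `#line` directives, while `deploy.sh` gets `#` comments and no tab expansion.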
## Source organization

weft is itself a literate program. The source is organized by concept:

```
weft.weft                 <- master file (includes all others)
literate/                 <- source of truth (.weft files)
├── introduction.weft     <- user-facing documentation (Chapter 1)
├── architecture.weft     <- architecture and limits
├── main.weft             <- main(), command-line parsing
├── parser.weft           <- pass 1: scanning and parsing
├── names.weft            <- name table, language table, @L
├── scraps.weft           <- scrap storage and tangle output
├── output-files.weft     <- output file generation
├── latex-output.weft     <- LaTeX weave pass
├── markdown-output.weft  <- Markdown weave pass
├── source-io.weft        <- source file I/O
├── indexes.weft          <- index generation
├── search-labels.weft    <- label cross-references
├── arena.weft            <- arena memory allocator
└── man-page.weft         <- UNIX man page
src/                      <- generated C (bootstrap for fresh clones)
├── main.c, pass1.c, ...
└── global.h
build/                    <- object files (created by make, gitignored)
skill/                    <- Claude Code skill for literate programming
├── SKILL.md              <- main skill reference
└── references/           <- detailed guides
bib/                      <- bibliography for make doc
test/
└── 00/                   <- 57 automated tests
```
## Claude Code skill

The `skill/` directory is a self-contained reference for literate
programming with weft. It is useful both as documentation for humans
and as a skill for AI assistants. It covers:
- Complete weft syntax — all commands, flags, scrap modes, and CLI options
- Architecture guide — concept-oriented organization, assembly pattern
- Build pipeline — justfile templates for Go, Rust, Python, Flutter, etc.
- Debugging workflow — tracing errors from tangled output back to `.weft` source
- AI navigation — using `-m` (JSON map), `-e` (fragment extraction), and `-R` (reverse map)
- Naming conventions — chunk naming, narrative rules, anti-patterns
- Worked example — a single concept spanning Go + Flutter + SQL + Protobuf
Read `skill/SKILL.md` for an overview, then dive into `skill/references/`
for detailed guides:

```
skill/
├── SKILL.md                      <- overview and quick start
└── references/
    ├── weft-syntax.md            <- complete syntax reference
    ├── concept-architecture.md   <- organizing code by concept
    ├── build-pipeline.md         <- build automation templates
    ├── debugging-tangled-code.md <- tracing errors to .weft source
    ├── literate-philosophy.md    <- LP philosophy and principles
    ├── naming-conventions.md     <- chunk naming and narrative rules
    └── worked-example.md         <- cross-system example
```
Install it so Claude Code can create, navigate, and maintain literate projects autonomously:

```
cp -r skill/ ~/.claude/skills/literate-programming/
```

Example prompts:
> Create a literate project for a REST API in Go with auth and payments

> I have a bug at server.js:47, help me trace it back to the .weft source
> (uses: `weft -R server.js:47`)

> Add a notifications concept to my literate project

> Show me the dependency graph of my literate project

> I have an existing Express app in src/. Reorganize it into .weft concept files so I can benefit from literate programming
## License

BSD 3-Clause. See the license headers in `weft.weft` for details.

Original nuweb code copyright (c) 1996, Preston Briggs. weft extensions copyright (c) 2024-2026, Cuauhtemoc Pacheco.