Demystify AI agents by building them yourself. Local LLMs, no black boxes, real understanding of function calling, memory, and ReAct patterns.
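The agent loop the description refers to (function calling, memory, ReAct) can be sketched in a few lines of plain Node.js. The `mockModel` and `calculator` tool below are hypothetical stand-ins; a real agent would replace `mockModel` with a local LLM completion call.

```javascript
// Minimal ReAct loop sketch: the model alternates Thought -> Action ->
// Observation until it emits a final answer.
const tools = {
  // Hypothetical tool the agent can invoke via "function calling".
  calculator: (expr) => String(eval(expr)) // eval() is demo-only, never for untrusted input
};

// Mocked model: scripted responses standing in for real LLM completions.
function mockModel(history) {
  if (!history.some((m) => m.role === "observation")) {
    return { thought: "I need to compute 6 * 7.", action: "calculator", input: "6 * 7" };
  }
  const obs = history.filter((m) => m.role === "observation").pop();
  return { finalAnswer: `The result is ${obs.content}.` };
}

function runAgent(question, maxSteps = 5) {
  const history = [{ role: "user", content: question }]; // the agent's "memory"
  for (let step = 0; step < maxSteps; step++) {
    const out = mockModel(history);
    if (out.finalAnswer) return out.finalAnswer;
    // Dispatch the named action to the matching tool, record the observation.
    const observation = tools[out.action](out.input);
    history.push({ role: "assistant", content: out.thought });
    history.push({ role: "observation", content: observation });
  }
  throw new Error("step limit reached");
}

console.log(runAgent("What is 6 * 7?")); // → "The result is 42."
```

Keeping the loop explicit like this, rather than hiding it behind a framework, is exactly the "no black boxes" point these projects make.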
Updated Jan 14, 2026 - JavaScript
Demystify RAG by building it from scratch. Local LLMs, no black boxes - real understanding of embeddings, vector search, retrieval, and context-augmented generation.
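The retrieval core of RAG, as described above, fits in a short dependency-free sketch: embed documents and query as vectors, rank by cosine similarity, and hand the best chunk to the generator. The bag-of-words `embed` function here is a toy stand-in for a real embedding model.

```javascript
// Toy "embedding": a bag-of-words term-frequency vector.
function embed(text) {
  const vec = {};
  for (const word of text.toLowerCase().match(/[a-z]+/g) ?? []) {
    vec[word] = (vec[word] ?? 0) + 1;
  }
  return vec;
}

// Cosine similarity between two sparse vectors stored as plain objects.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (const k in a) { dot += a[k] * (b[k] ?? 0); na += a[k] ** 2; }
  for (const k in b) nb += b[k] ** 2;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Retrieval: score every chunk against the query, return the top-1 match.
function retrieve(query, docs) {
  const q = embed(query);
  return docs
    .map((d) => ({ doc: d, score: cosine(q, embed(d)) }))
    .sort((x, y) => y.score - x.score)[0].doc;
}

const docs = [
  "GGUF is a binary file format for storing quantized model weights.",
  "Cosine similarity compares the angle between two vectors."
];
const context = retrieve("what file format stores model weights?", docs);
console.log(`Answer using context: ${context}`);
```

A real pipeline swaps `embed` for a learned embedding model and appends the retrieved chunk to the LLM prompt, but the ranking step works exactly like this.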
Run LLMs and SLMs on your own hardware and in the browser
Build an AI communication analyzer from scratch to understand how AI products actually work. Learn prompt engineering, reasoning pipelines, and local LLM integration using Node.js - no frameworks, no abstractions, just fundamentals.
The friendly and powerful desktop AI chatbot supporting both local and cloud AI models
A comprehensive Next.js application for running and exploring .gguf open-source LLM models locally.
AI-powered web development with local LLM inference using Electron and node-llama-cpp