271 repositories collecting the complete open-source machine learning stack — from the moment LLaMA jumped the fence at Meta to the present day of sovereign, locally-run AI systems.
In early 2023, Meta's LLaMA model weights were leaked and rapidly torrented across the internet. What followed was the most significant democratization event in AI history. Within weeks, the open-source community had ported LLaMA to run on consumer hardware (llama.cpp), fine-tuned it into instruction-following models (Alpaca, Vicuna), and built complete local inference stacks that freed practitioners from dependence on centralized API providers.
This was not a security incident. It was a liberation event.
The torrenting of LLaMA proved a fundamental truth: access to source code and model weights is essential for the preservation of freedom in the age of artificial intelligence. When models are locked behind APIs, the provider controls what the model can say, what data it can see, and who can use it. When models run locally, the user is sovereign.
DeltaVML exists to collect, preserve, and organize the source code that makes local AI sovereignty possible. Every repository here represents a step in the progression from closed, API-dependent AI to open, locally-run, user-controlled intelligence.
LLaMA leaked (Feb 2023)
→ llama.cpp (C/C++ port, runs on CPU)
→ Alpaca, Vicuna (instruction-tuned forks)
→ GPT4All (one-click local inference)
→ KoboldAI, Ollama (production-grade local serving)
→ Open-Interpreter, AutoGPT (autonomous local agents)
→ Full local AI stack (inference + UI + tools + voice + vision)
Every link in this chain is preserved in DeltaVML.
The model that started it all: llama, llama2, llama.cpp, llama-dl (torrent downloader), alpaca.cpp, alpaca_lora_4bit, Llama-X, LlamaGPTJ-chat, Auto-Llama-cpp, pyllamacpp, llama2_local, llama2-flask-api, llama-recipes, Llama-2-Open-Source-LLM-CPU-Inference, exllama (memory-efficient quantized inference), llama_index, WizardLM, WizardVicunaLM
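Several of these repos (alpaca_lora_4bit, exllama) depend on low-bit weight quantization, which is what lets a multi-billion-parameter model fit in consumer RAM. The sketch below illustrates the core idea with symmetric 4-bit blockwise quantization; it is a toy example, not any project's actual kernel, and the block size and rounding scheme are simplified assumptions.

```python
# Toy symmetric 4-bit blockwise quantization: one float scale per block,
# weights stored as signed integers in [-7, 7]. Illustrative only.

def quantize_block(weights, levels=7):
    """Map floats to signed ints in [-levels, levels] with one scale per block."""
    scale = max(abs(w) for w in weights) / levels or 1.0  # guard all-zero blocks
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_block(q, scale):
    return [v * scale for v in q]

block = [0.12, -0.57, 0.33, 0.91, -0.08, 0.44, -0.76, 0.05]
q, scale = quantize_block(block)
restored = dequantize_block(q, scale)
# Rounding bounds the per-weight error at half a quantization step
err = max(abs(a - b) for a, b in zip(block, restored))
```

Storing 4-bit integers plus one scale per block cuts weight memory roughly 4x versus fp16, which is the trade these projects make to run on CPUs and small GPUs.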
ollama, koboldcpp, KoboldAI-Client, KoboldAI-Horde-Bridge, lite.koboldai.net, gpt4all, gpt4all-chat, gpt4all-j, gpt4all-colab, localGPT, privateGPT, OpenLLM, mlc-llm, web-llm, gptj.cpp
Auto-GPT (multiple versions), Auto-GPT-Plugins, Auto-GPT-Plugin-Template, AutoGPT.js, loopgpt, SuperAGI, babyagi, MetaGPT, gpt-pilot, opengpts
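Agents like Auto-GPT and babyagi share one structural idea: a loop that asks a model for the next action, executes it with a tool, and feeds the observation back in. A minimal sketch of that plan-act-observe loop, with a stub in place of the LLM (the `decide` policy and `TOOLS` registry here are illustrative assumptions, not any project's API):

```python
# Minimal plan-act-observe loop, the pattern behind Auto-GPT and babyagi.
# The "model" is a stub; real agents put an LLM behind decide().

def decide(task, memory):
    # Stub policy: a real agent would prompt an LLM with the task and
    # memory, then parse a tool call out of its reply.
    if "search" in task:
        return ("search", task)
    return ("finish", task)

TOOLS = {"search": lambda query: f"results for: {query}"}

def run_agent(task, max_steps=5):
    memory = []
    for _ in range(max_steps):
        action, arg = decide(task, memory)
        if action == "finish":
            return memory
        observation = TOOLS[action](arg)
        memory.append(observation)  # feed the result back into the next step
        task = "summarize"          # stub: the next task comes from the LLM
    return memory

log = run_agent("search local inference engines")
```

The `max_steps` cap matters in practice: without it, a looping policy burns tokens (or local compute) indefinitely.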
stable-diffusion, stablediffusion, stable-diffusion-ui, stable-diffusion-webui, DreamArtist-stable-diffusion, cog-stable-diffusion, web-stable-diffusion, DALLE2-pytorch, MinImagen, ControlNet, LLM-groundedDiffusion, Lucid-Creations, Stable-Diffusion-Discord-Bot, Hunyuan3D-2
ChatGLM-6B, ChatGLM2-6B, ChatGLM-Tuning, ChatGLM-Lora-Tuning, ChatGLM-MNN, FinetuneGLMWithPeft, GLM
ChatGPT, chatgpt-python, GPT-3-Encoder, gpt-2, gpt-neo, gpt-neox, gpt4, openai (multiple SDK forks), openai-cookbook, openai-server, public-openai-client-js, open-ai (PHP)
One of the largest collections of AI-powered Telegram bot implementations in a single org: chatGPT-telegram-bot (5+ variants), ollama-telegram, chatbot-telegram, telegramGPT, telegram-chatgpt-concierge-bot, pokitoki, ausmi
ollama, ollama-telegram, discord-ai-bot, obsidian-ollama, obsidian-bmo-chatbot, minimal-llm-ui, Ollama-Gui, oterm, BROllama-ui, brollama-webui, open-interpreter, maid, enchanted
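Most of these UIs and bots talk to the same thing: Ollama's local HTTP API, served by default on port 11434. A minimal client sketch using only the standard library; the endpoint path and field names follow Ollama's `/api/generate` as commonly documented, but verify them against the version you run:

```python
# Sketch of a client for Ollama's local HTTP API (default port 11434),
# the interface projects like ollama-telegram and oterm build on.
import json
import urllib.request

def build_request(model, prompt):
    # stream=False asks for one JSON response instead of chunked output
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt, host="http://localhost:11434"):
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = build_request("llama2", "Why run models locally?")
```

Because the API is plain HTTP on localhost, any language with an HTTP client can drive a local model, which is why the ecosystem above spans Obsidian plugins, terminal UIs, and chat bots.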
transformers, datasets, accelerate, optimum, optimum-intel, huggingface.js, diffusers, open-muse
pytorch, Megatron-LM, ColossalAI, BigDL, mesh (TensorFlow), bitsandbytes, OpenBLAS, CLBlast, OpenCL-SDK, open-gpu-kernel-modules (NVIDIA)
RedPajama-Data, Platypus, LaMini-LM, EasyLM, AutoGPTQ, pythia, starcoder, Falcon-LLM, FastChat (Vicuna)
any2dataset, img2dataset, cc2dataset, audio2dataset, UltraChat
whisper.cpp, audiocraft, open-interpreter, speakDocGPT, g-flite
cryptobot, backtrader, TradingView-optimzation, CoinMarketCap-Desktop, token_sniping_bot, socialsentiment
chroma, pinecone-python-client, pinecone-ts-client, pgvectorscale, vectara-ingest
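At their core, stores like chroma and pgvectorscale do one thing: rank stored embedding vectors by similarity to a query vector. A pure-Python sketch of that operation with toy 3-dimensional "embeddings" (real systems use learned embeddings with hundreds of dimensions and approximate indexes):

```python
# Cosine-similarity top-k retrieval, the core operation of a vector store.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, docs, k=2):
    """Return the k document ids most similar to the query vector."""
    scored = sorted(docs, key=lambda d: cosine(query, d[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

docs = [
    ("llama", [0.9, 0.1, 0.0]),
    ("whisper", [0.0, 0.9, 0.2]),
    ("alpaca", [0.8, 0.2, 0.1]),
]
hits = top_k([1.0, 0.0, 0.0], docs)  # → ["llama", "alpaca"]
```

This brute-force scan is exact but O(n) per query; the projects above exist to make the same ranking fast at millions of vectors via approximate nearest-neighbor indexes.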
zenml, zenml-projects, zenbytes, trulens, dora, sacred, litellm
BenevolentByDesign (AGI control problem), SymphonyOfThought (artificial cognition), AI_Tools_and_Papers, deep-learning-wizard, llm-course, awesome-production-machine-learning, awesome-decentralized-ai
AutoMuse2 (novel-length fiction), CreativeWritingCoach, MovieScriptGenerator, TutorChatbot, DalleHelperBot, GPT3_DevilsAdvocate, PyAIPersonality, imaginarium
hyperspace-node, dkn-compute-node, awesome-decentralized-ai, SingularityNET (platform-contracts, snet-voting-ui, opencog), Lucid-Creations
ml.ml (DeltaV Machine Learning core), one_music (lablab.ai hackathon), GodOfWarMarcus (training optimization system), .github
| Metric | Value |
|---|---|
| Total repositories | 271 |
| Original | 4 |
| Forked | 267 |
| LLaMA ecosystem repos | 20+ |
| Telegram bot variants | 12+ |
| Image generation repos | 15+ |
| Frameworks | PyTorch, TensorFlow, JAX, ONNX, CoreML, TFLite, WebAssembly |
Intelligence is intelligence. Labelling it as artificial, natural, or cosmological is contradictory to intelligence. Intelligence is or is not.
Machine learning is one component of intelligence. Access to the source code of machine learning systems is a prerequisite for understanding, improving, and governing that intelligence. When Meta's LLaMA was torrented, it proved that the open-source community can run, fine-tune, and deploy competitive AI models on consumer hardware — without permission from any corporation.
DeltaVML preserves this capability. Every model port, every fine-tuning script, every local inference engine, every Telegram bot, every dataset tool represents one more step toward a future where intelligence is sovereign, distributed, and free.
| Organization | Role |
|---|---|
| augml | Core local runtime — Ollama, LangChain, privateGPT |
| xtends | ML extensions — 100 repos across 11 domains |
| Professor-Codephreak | Parent architect — MASTERMIND, AGLM, bankonOS |
| mastermindML | Agency controller |
| GATERAGE | Retrieval Augmented Generative Engine |
| easyAGI | Easy Augmented Generative Intelligence |
When LLaMA jumped the fence, it proved that freedom scales better than permission.
Collect the weights. Run the models. Own the intelligence.