Anti-hallucination research mode for Claude Code. Toggle on/off to enforce citation requirements and source grounding.
Updated Apr 16, 2026
Score any document. Prove every claim.
Hierarchical RAG architecture scaling to 693K chunks on consumer hardware (4GB VRAM). Features 3-address routing, hybrid vector+graph fusion, and SetFit classification.
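The "hybrid vector+graph fusion" named above can be sketched as a weighted blend of per-chunk scores. This is an illustrative sketch only; the function name, weight, and score dictionaries are assumptions, not the project's actual API.

```python
def fuse_scores(vector_scores, graph_scores, alpha=0.7):
    """Blend per-chunk vector-similarity scores with graph-proximity
    scores and return chunk ids ranked by the fused score.

    alpha is an illustrative mixing weight, not the project's setting.
    """
    fused = {}
    for chunk_id in set(vector_scores) | set(graph_scores):
        v = vector_scores.get(chunk_id, 0.0)  # missing score counts as 0
        g = graph_scores.get(chunk_id, 0.0)
        fused[chunk_id] = alpha * v + (1 - alpha) * g
    return sorted(fused, key=fused.get, reverse=True)
```

A chunk retrieved by both signals outranks one retrieved by either alone, which is the usual motivation for fusion over a single retriever.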
Persistent memory MCP server for AI agents — Rust, 19 tools, knowledge graph, Hebbian learning, episodic memory, contradiction detection, prospective triggers, Bayesian calibration, zero-config Docker setup.
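The "Hebbian learning" mentioned above usually means strengthening an association when two memories co-activate. A minimal sketch, assuming a classic Hebbian rule with passive decay; the function signature and constants are illustrative, not the server's real implementation.

```python
def hebbian_update(weight, pre, post, lr=0.1, decay=0.01):
    """One Hebbian step: co-activation (pre * post) strengthens the
    association weight; otherwise passive decay slowly weakens it."""
    return weight + lr * pre * post - decay * weight
```

Repeated co-activation drives a weight upward; unused associations fade toward zero, which is how such a store forgets stale links.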
Reliable research infrastructure for AI agents. Evidence-backed web search with citations, confidence scores, and Clarity anti-hallucination. MCP server, REST API, Python SDK.
A minimal graph engine for grounded AI — records, associates, and retrieves, but never invents. Written in Rust.
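The "records, associates, and retrieves, but never invents" contract above can be sketched as follows. The real project is in Rust; this Python sketch and its names are illustrative assumptions, not the engine's API.

```python
class GroundedGraph:
    """Minimal record/associate/retrieve store: returns only what was
    stored verbatim, never a synthesized answer."""

    def __init__(self):
        self.facts = {}   # fact id -> recorded text
        self.edges = {}   # fact id -> set of associated fact ids

    def record(self, fact_id, text):
        self.facts[fact_id] = text

    def associate(self, a, b):
        # Undirected association between two facts.
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def retrieve(self, fact_id):
        # Unknown ids yield None rather than a fabricated fact.
        return self.facts.get(fact_id)
```

The point of the design is the last line: absence is reported as absence, which is the "never invents" guarantee in miniature.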
65 plugins that turn Claude Code into an autonomous development team. 24 agents, 34 skills, 5 hooks. Includes 12-plugin anti-hallucination suite. One-line install.
Native rules, hooks, and guards that prevent Claude Code and Codex from hallucinating code, duplicating files, or shipping unverified changes.
A completion-claim gate for Claude Code. Refuses to let the agent say done without evidence.
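A completion-claim gate of the kind described above can be sketched as a predicate over the claim and its supporting evidence. The check below is a hypothetical simplification; the tool's actual evidence criteria are not specified in the blurb.

```python
def gate_completion(claim, evidence):
    """Allow a completion claim only when evidence accompanies it
    (e.g. test output). The keyword check is an illustrative stand-in
    for the tool's real detection of completion claims."""
    if "done" in claim.lower() and not evidence:
        return False  # refuse an unverified "done"
    return True
```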
An overfitted SD prompt engine with severe "aesthetic snobbery" that forcibly transforms mundane ideas into professional-grade, physically detailed rendering instructions.
A strict, deterministic LLM protocol for loading, reading and activating the DCQN.MATRIX Axiomatics from the OSF DOI (10.17605/OSF.IO/QWA6S), including anti-simulation safeguards and full formal reconstruction into DCQN_LOGIK_SESSION_V1.
From one prompt to a finished product — fully autonomous. Auto-plans, remembers across sessions (persistent context), builds end-to-end, audits security & performance, deploys. 18 agents + 39 workflows for ChatGPT, Copilot, Codex, Cursor, and other AI coding tools. Built by NexVar.
Context governance kernel for LLM agents. Predicts entropy, blocks hallucinations. Pairs with opencode-dcp. llm, llm-agents, opencode, opencode-plugin, context-governance, anti-hallucination, claude-code, cursor, typescript.
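"Predicts entropy, blocks hallucinations" suggests gating on the entropy of the model's next-token distribution, since a flat distribution signals low-confidence generation. A minimal sketch under that assumption; the threshold and function names are illustrative, not the plugin's interface.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def should_block(probs, threshold=3.0):
    # A sharply peaked distribution has low entropy (confident);
    # a near-uniform one has high entropy and gets blocked.
    return shannon_entropy(probs) > threshold
```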
🔍 AI-driven research assistant for gathering business-article source material | Interactive questioning · anti-hallucination verification · Word export | Designed for journalists, analysts, and content creators
Security-focused SQL agent MCP with fine-grained access control and an admin panel. Supports SQL Server, MySQL, PostgreSQL, Oracle, SQLite, and Firebird.
The Anti-Hallucination data layer for B2B Sourcing. Deep-verified global supply chain entities designed for RAG and LLM instruction tuning.
A production-style RAG system focused on grounded generation rather than open-ended LLM output. Design priorities: retrieval quality, validation, and measurable confidence, not just document chat.
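One common grounding check for such a system is verifying that every generated sentence cites a source that was actually retrieved. A hedged sketch, assuming sentences arrive paired with a cited document id; the data shapes are illustrative, not this project's internals.

```python
def validate_grounding(answer_sentences, retrieved_ids):
    """Return the sentences whose cited source id is not in the set of
    retrieved document ids, i.e. the ungrounded claims to flag.

    answer_sentences: list of (sentence_text, cited_doc_id) pairs.
    retrieved_ids: set of document ids the retriever returned.
    """
    ungrounded = []
    for sentence, cited in answer_sentences:
        if cited not in retrieved_ids:
            ungrounded.append(sentence)
    return ungrounded
```

An empty result means every claim traces back to retrieved evidence; anything returned is a candidate hallucination for the validation layer to reject or re-generate.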
Evaluation patterns, release gates, and anti-hallucination techniques for developer-focused AI workflows.
A portable cognitive discipline system for AI coding agents. Cross-platform anti-hallucination framework with 7-layer verification, confidence labels, and research-backed techniques. Supports Claude Code, Cursor, Codex, OpenCode, Hermes, Gemini CLI, and Pi.
RAG-powered LLM assistant for HR policy Q&A with ChromaDB, guardrails, citation tracking, and evaluation framework. FastAPI + Streamlit.