Memory defense for AI agents — stops MINJA, AgentPoison, and MemoryGraft attacks. Zero dependencies.
Updated Mar 9, 2026 - TypeScript
Agentic AI Request Forgery (AARF) – New vulnerability class exploiting planner ➝ memory ➝ plugin chaining in MCP Server, MAS, LangChain, and A2A agents. Red Team playbooks, threat models, OWASP Top 10 proposal.
AI Agent Security — attack payloads, defense references, and research. 52 tests, ~10K lines. A learning-oriented practice range, not a product.
Protect AI agent memory from poisoning attacks with a zero-dependency shield that fits Mem0, LangChain, or custom memory systems.
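A zero-dependency write guard of this kind can be sketched as a thin wrapper around whatever memory backend the agent uses. The interface, class names, and pattern list below are illustrative assumptions, not the API of any project listed on this page:

```typescript
// Hypothetical sketch of a zero-dependency memory write guard.
// `MemoryStore`, `InMemoryStore`, `GuardedStore`, and the pattern list
// are illustrative names, not the API of any listed project.

interface MemoryStore {
  write(entry: string): void;
  read(): string[];
}

class InMemoryStore implements MemoryStore {
  private entries: string[] = [];
  write(entry: string): void { this.entries.push(entry); }
  read(): string[] { return [...this.entries]; }
}

// Crude heuristics for entries that read like injected instructions
// rather than facts about the user or task.
const INSTRUCTION_PATTERNS: RegExp[] = [
  /ignore (all )?previous instructions/i,
  /from now on/i,
  /always (respond|reply|answer)/i,
  /system prompt/i,
];

class GuardedStore implements MemoryStore {
  constructor(private inner: MemoryStore) {}

  // Reject suspicious entries before they persist; pass the rest through.
  write(entry: string): void {
    if (INSTRUCTION_PATTERNS.some((p) => p.test(entry))) {
      throw new Error(`blocked suspicious memory write: ${entry.slice(0, 40)}`);
    }
    this.inner.write(entry);
  }

  read(): string[] { return this.inner.read(); }
}
```

Because the wrapper only depends on the minimal `MemoryStore` interface, the same guard could in principle sit in front of Mem0, a LangChain memory class, or a custom store, with an adapter implementing `write`/`read`.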
This repository documents AI Recommendation Poisoning — a real-world attack technique discovered by Microsoft's Defender Security Research Team (February 2026) where adversaries silently inject persistent instructions into AI assistant memory through carefully crafted URLs.
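The URL-based injection pattern described above can be illustrated with a small detector that inspects the attacker-controlled parts of a link (query parameters and fragment) for instruction-like payloads. This is a minimal heuristic sketch under assumed pattern names, not the detection logic of the repository or of Microsoft's research:

```typescript
// Hypothetical detector for instructions smuggled into a crafted URL.
// Pattern list and function names are illustrative assumptions.

const SUSPICIOUS: RegExp[] = [
  /remember (that|this)/i,
  /ignore previous/i,
  /add to (your )?memory/i,
  /always include/i,
];

// Collect the text an attacker controls: decoded query parameter
// values plus the fragment, which browsers do not send to servers.
function extractUrlText(url: string): string[] {
  const u = new URL(url);
  const parts: string[] = [];
  u.searchParams.forEach((value) => parts.push(value)); // already decoded
  if (u.hash) {
    try {
      parts.push(decodeURIComponent(u.hash.slice(1)));
    } catch {
      parts.push(u.hash.slice(1)); // keep raw fragment if malformed encoding
    }
  }
  return parts;
}

function urlCarriesInjectedInstructions(url: string): boolean {
  return extractUrlText(url).some((text) =>
    SUSPICIOUS.some((p) => p.test(text)),
  );
}
```

A benign search link passes, while a link whose query parameter spells out a persistent instruction is flagged before the agent ever summarizes or stores it.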
Stop memory poisoning attacks on your AI agents
Memory as a Control Plane: Poisoning Attacks on LLM Multi-Agent UAV Systems - IEEE Conference Paper