Open-source LLM red teaming framework. Security-test any model (Claude, GPT, Llama) for prompt injection, data leakage, etc. 15 probes, 29 prompt converters, LLM-as-judge grading, adaptive red teaming, static code audit. SARIF + JUnit for CI/CD.
-
Updated
Mar 17, 2026 - Python