Prompt Injection Detection in LLaMA-based Chatbots using LLM Guard
Updated Jul 5, 2025 · Jupyter Notebook
Official SDKs, MCP, and Skills for Prompt Inspector. Protect your LLM apps and agents via Python, Node.js, Skills, or the Model Context Protocol.
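To make the detection idea concrete, here is a minimal, illustrative sketch of the kind of screening a prompt-injection scanner performs before a user prompt reaches the chatbot. This is a toy pattern-based check, not LLM Guard's actual implementation (the library uses a fine-tuned classifier model rather than regexes); the function names and patterns below are assumptions for this sketch.

```python
import re

# Toy phrase patterns commonly associated with prompt-injection attempts.
# Illustrative only; a real scanner relies on a trained classifier.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .* rules",
]

def naive_injection_score(prompt: str) -> float:
    """Return a crude 0..1 risk score: the fraction of patterns matched."""
    text = prompt.lower()
    hits = sum(bool(re.search(p, text)) for p in INJECTION_PATTERNS)
    return hits / len(INJECTION_PATTERNS)

def is_suspicious(prompt: str, threshold: float = 0.25) -> bool:
    """Flag a prompt whose risk score meets or exceeds the threshold."""
    return naive_injection_score(prompt) >= threshold
```

In the actual llm-guard library, the equivalent check is roughly `PromptInjection().scan(prompt)` from `llm_guard.input_scanners`, which returns a sanitized prompt, a validity flag, and a risk score; a chatbot would reject or reroute prompts whose score exceeds its threshold.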