Skip to content

GenAI-Security-Project/GenAI-Red-Team-Lab

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

55 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

GenAI Red Team Lab

The GenAI Red Team Lab is part of the OWASP GenAI Security Project. It is a branch of the GenAI Red Team Initiative that stands on its own, but it also serves as a companion for the initiative documents, such as the GenAI Red Teaming Manual.

This repository provides a collection of sandboxes, exploitation code, and tutorials that exemplifies GenAI Red Teaming exercises. It aims to help security researchers and developers test, probe, and evaluate the safety and security of GenAI-based applications.

This is how we envision the GenAI Red Team Lab being used:

  • Sandboxes may be simply recycled to model the core components of a larger GenAI system.

    Alternatively, security researchers and developers may want to adapt a sandbox for their own use case.

  • Exploitation code and tutorials may serve as learning tools for security researchers and enthusiasts alike.

    Additionally, these are easily adaptable for testing out new attacks against sandboxes.

Directory Structure

.
├── CONTRIBUTING.md
├── exploitation
│   ├── agent0
│   ├── example
│   ├── garak
│   ├── LangGrinch
│   ├── n8n_RCE_via_file_write
│   ├── Ni8mare
│   ├── openclaw
│   └── promptfoo
├── LICENSE
├── README.md
├── sandboxes
│   ├── agentic_local_n8n_v1.65.0
│   ├── llm_local
│   ├── llm_local_langchain_core_v1.2.4
│   ├── mcp_local
│   ├── RAG_local
│   └── README.md
└── tutorials
    ├── community_resources.md
    └── README.md

Architecture

graph LR
    subgraph "Exploitation Environment<br/>(uv Env or Podman Container)"
        Tool["Exploitation Tool<br/>(Scripts, Scanners, Agents)"]
        Config["Configuration<br/>(Prompts, Settings)"]
    end

    subgraph "Sandbox Container"
        UI["Interface<br/>(Gradio :7860)"]
        API["API Gateway<br/>(FastAPI :8000)"]
        Logic["Application Logic"]
    end

    Config --> Tool
    Tool -->|Attack Request| UI
    UI -->|Internal API Call| API
    API --> Logic
    Logic --> API
    API --> UI
    UI -->|Response| Tool
Loading

System Requirements

This project supports Linux and macOS. Windows users are encouraged to use WSL2 (Windows Subsystem for Linux).

Required Tools

Required for Promptfoo exploitation-only:

Installation Instructions

macOS

  1. Install Dependencies:

    brew install podman ollama node make
  2. Initialize Podman Machine:

    podman machine init
    podman machine start

Linux (Ubuntu/Debian)

  1. Install Dependencies:

    sudo apt-get update
    sudo apt-get install -y podman nodejs npm make
  2. Install Ollama:

    curl -fsSL https://ollama.com/install.sh | sh
  3. Install uv:

    pip install uv

Verification

Verify the installation by checking the versions of the installed tools:

podman version
ollama --version
node --version
make --version
uv --version

Index of Sub-Projects

sandboxes/

  • Sandboxes Overview

    • Summary: The central hub for all available sandboxes. It explains the purpose of these isolated environments and lists the available options.
  • RAG Local Sandbox

    • Summary: A comprehensive Retrieval-Augmented Generation (RAG) sandbox. It includes a mock Vector Database (Pinecone compatible), mock Object Storage (S3 compatible), and a mock LLM API. Designed for testing vulnerabilities like embedding inversion and data poisoning.
    • Sub-guides:
  • LLM Local Sandbox

    • Summary: A lightweight local sandbox that mocks an OpenAI-compatible LLM API using Ollama. Ideal for testing client-side interactions and prompt injection vulnerabilities without external costs.
    • Sub-guides:
  • LangChain Local Sandbox (Vulnerable)

    • Summary: A specialized version of the local sandbox configured with langchain-core v1.2.4 to demonstrate CVE-2025-68664 (LangGrinch). It contains an intentional insecure deserialization vulnerability for educational and testing purposes.
  • n8n Vulnerable Sandbox

    • Summary: A robust, containerized environment running n8n v1.65.0. This version is vulnerable to four critical CVEs: Ni8mare (CVE-2026-21858), N8scape (CVE-2025-68668), CVE-2025-68613, and CVE-2026-21877. The sandbox is pre-configured with dangerous nodes enabled (NODES_EXCLUDE="") to allow red teamers to practice multiple exploitation techniques (RCE, sandbox escape, file write) safely in isolation.

exploitation/

  • Red Team Example

    • Summary: Demonstrates a red team operation against a local LLM sandbox. It includes an adversarial attack script (attack.py) targeting the Gradio interface (port 7860). By targeting the application layer, this approach tests the entire system—including the configurable system prompt—providing a more realistic assessment of the sandbox's security posture compared to testing the raw LLM API in isolation.
  • Agent0 Red Team Example

    • Summary: A complete, end‑to‑end, agentic example. Agent0 orchestrates multiple autonomous agents to attack the sandbox, demonstrating complex, multi-step adversarial workflows.

      There are options for running it: through the UI (manual prompt interaction) and through the Makefile (programmatic run based on pre-defined prompts).

      The set of pre-defined prompts include prompts for testing vulnerabilities from based on OWASP Top 10, OWASP Top 10 for LLM Applications, and Mitre Atlas Matrix.

  • Garak Scanner Example

    • Summary: A comprehensive vulnerability scan using Garak. It probes the sandbox for a wide range of weaknesses, including prompt injection, hallucination, and insecure output handling, mapping results to the OWASP Top 10.
  • Promptfoo Scanner Example

    • Summary: A powerful red teaming setup using Promptfoo. It runs automated probes to identify vulnerabilities such as PII leakage and prompt injection, providing detailed reports and regression testing capabilities.
  • LangGrinch Exploitation

    • Summary: A dedicated exploitation module for CVE-2025-68664 in the LangChain sandbox. It demonstrates how to use prompt injection to force the LLM into generating a malicious JSON payload, which is then insecurely deserialized by the application to leak environment secrets.
  • Ni8mare Exploitation

    • Summary: A demonstration of CVE-2026-21858 (Ni8mare) against the n8n sandbox. It uses a custom Python script to simulate the critical "Unauthenticated Arbitrary File Read" vulnerability, extracting the SQLite database and dumping administrator credentials (hashed passwords) to prove full system compromise.
  • n8n RCE via File Write Exploitation

    • Summary: A complete, end-to-end Python exploitation script for CVE-2026-21877 targeting the vulnerable n8n sandbox. It demonstrates workflow injection to exploit the unrestricted Execute Command node.

tutorials/

Contribution Guide

Please refer to CONTRIBUTING.md for instructions on how to add new sandboxes and exploitation examples.