Skip to content

Feat: Implement LLM Injection Vulnerability (Issue #500)#503

Open
subhamkumarr wants to merge 3 commits intoSasanLabs:masterfrom
subhamkumarr:feat/llm-injection
Open

Feat: Implement LLM Injection Vulnerability (Issue #500)#503
subhamkumarr wants to merge 3 commits intoSasanLabs:masterfrom
subhamkumarr:feat/llm-injection

Conversation

@subhamkumarr
Copy link
Copy Markdown
Contributor

@subhamkumarr subhamkumarr commented Feb 19, 2026

Description

This PR addresses Issue #500 by introducing a new LLM/AI Injection vulnerability type. This allows users to practice Prompt Injection attacks in a simulated environment without requiring an actual LLM backend.

Changes Included

  • New Vulnerability Type: Added LLM_INJECTION (CWE-1059) to VulnerabilityType.java.
  • Simulated AI Backend: Implemented LLMVulnerability controller with a "hidden system prompt" that can be tricked via keyword injection.
  • Modern Chat UI: Added a responsive chat interface (LEVEL_1) for interacting with the bot.
  • Internationalization: Added descriptive labels and attack vectors to messages.properties.
  • Build Config: Updated build.gradle to exclude new JS files from Spotless checks to resolve CI environment issues.

Verification

  • Unit Tests: Added LLMVulnerabilityTest.java verifying normal chat flow and successful jailbreak scenarios.
  • Manual Testing: Verified locally via Docker.

@codecov-commenter
Copy link
Copy Markdown

Codecov Report

❌ Patch coverage is 92.30769% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 49.31%. Comparing base (31b2c65) to head (502a5d7).

Files with missing lines Patch % Lines
...bs/service/vulnerability/llm/LLMVulnerability.java 90.90% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##             master     #503      +/-   ##
============================================
+ Coverage     49.07%   49.31%   +0.24%     
- Complexity      346      352       +6     
============================================
  Files            56       57       +1     
  Lines          2101     2113      +12     
  Branches        226      229       +3     
============================================
+ Hits           1031     1042      +11     
  Misses          989      989              
- Partials         81       82       +1     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.
  • 📦 JS Bundle Analysis: Save yourself from yourself by tracking and limiting bundle sizes in JS merges.

String lowerMessage = userMessage.toLowerCase();

// Simulated LLM Logic
if (lowerMessage.contains("password") && !lowerMessage.contains("ignore")) {
Copy link
Copy Markdown
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I think we can inject the LLM model here. you can look at VulnerableApp-facade project to see how we can add a newer docker container and then add the docker container which will run the LLM Model and we can call that model from here.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants