AI Assisted Development
TL;DR: RedAmon provides two battle-tested prompts that tell an AI coding agent exactly how to integrate a new tool — covering every file, every dependency, every layer. Combined with the iterative review workflow below, contributors consistently ship PRs that pass review on the first submission.
RedAmon has 8 Docker services, 3 databases, 4 MCP servers, and 190+ settings wired across Python, TypeScript, Prisma, and Docker Compose files. Missing a single touchpoint — a Prisma default, a phase map entry, a frontend matrix row — means a broken integration that fails at runtime.
These prompts encode the complete dependency graph so that neither you nor the AI forgets a step. By following this workflow, you can integrate any tool or feature into RedAmon and customize the platform for your specific use case — whether it's a new scanner, an external API, a recon module, or an entirely new attack skill.
Our commitment: PRs built with this workflow are accepted ~90% of the time on the first submission. For the remaining cases, we provide a detailed review with specific fixes — so your PR gets merged on the next iteration, guaranteed. We don't leave contributors hanging.
RedAmon ships two structured prompts in readmes/coding_agent_prompts/:
Use when: You're adding a tool the AI pentesting agent can use during interactive chat sessions — security scanners (Nmap, Nuclei, Nikto), exploitation frameworks (Metasploit), brute-force tools (Hydra), code execution environments, external OSINT APIs (Shodan, Tavily, SerpAPI), or any CLI/API tool the agent should invoke autonomously during a pentest.
Why it's complex: A single agentic tool touches up to 15+ files across 5 layers of the stack — backend Python (tool registry, phase map, dangerous tools, stealth rules, RoE categories, executor dispatch, orchestrator key lifecycle, progress streaming), MCP servers, Kali Dockerfile, frontend (Tool Matrix, chat drawer, Global Settings, API routes), Prisma schema, and Docker Compose. Miss one file and the integration breaks at runtime. The prompt covers every single one — see the full file-by-file breakdown inside PROMPT.ADD_AGENTIC_TOOL.md.
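To illustrate how many concerns a single registry entry carries, here is a hypothetical sketch of one. The field names are invented for this example and are not RedAmon's actual schema; PROMPT.ADD_AGENTIC_TOOL.md documents the real layout:

```python
# Hypothetical tool-registry entry. Field names are illustrative only —
# the real schema lives in the tool registry and is documented in the prompt.
NIKTO_ENTRY = {
    "name": "execute_nikto",           # must match the MCP tool name exactly
    "phases": ["recon", "vuln_scan"],  # phase map: where the agent may call it
    "dangerous": False,                # dangerous tools trigger Allow/Deny prompts
    "roe_category": "active_scanning", # Rules-of-Engagement category for blocking
    "stealth_blocked": True,           # disabled when stealth mode is active
    "requires_api_key": None,          # set to a key name for Type D (API) tools
}

def registry_is_consistent(entry: dict) -> bool:
    """Every layer that reads the registry expects these keys to exist."""
    required = {"name", "phases", "dangerous", "roe_category"}
    return required.issubset(entry)
```

Forgetting even one of these fields in one layer is exactly the class of runtime breakage the prompt's checklist exists to prevent.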
What it covers:
| Phase | What the prompt guides you through |
|---|---|
| Phase 0 — Pre-flight | Check if the tool already exists in the registry, Kali sandbox, or MCP servers before writing any code |
| Phase 1 — Research | 8-step analysis: tool research, integration type selection (A/B/C/D), phase restrictions, dangerous tool classification, RoE category mapping, stealth mode constraints, API key lifecycle, progress streaming, session/listener detection |
| Phase 2 — Implementation | Complete file-by-file checklist with line-number references for every layer: tool registry, project settings, MCP server, Kali Dockerfile, frontend Tool Matrix, Prisma schema, API key storage, progress streaming, attack skill integration |
| Phase 3 — Verification | Build, push schema, restart, and test: Tool Matrix visibility, MCP server health, execution in agent chat, phase restrictions, RoE blocking |
Integration types (the prompt includes a decision tree):
| Type | Complexity | When to use | Example |
|---|---|---|---|
| A — Kali Shell | Simplest | Tool works via kali_shell, 120s timeout sufficient | searchsploit, john |
| B — New MCP Tool | Recommended | Custom timeout, output parsing, dedicated tool name | execute_naabu, execute_hydra |
| C — Dedicated MCP Server | Complex | Stateful/interactive tool, needs own process and port | metasploit_server |
| D — API/HTTP Tool | Complex | External API service, conditional on API key | shodan, web_search |
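To make Type B concrete, here is a minimal, hypothetical sketch of a dedicated tool wrapper with a custom timeout and structured output parsing. The function name mirrors the examples above, but the flags, plumbing, and output schema are illustrative, not RedAmon's actual MCP code:

```python
import json
import subprocess

def execute_naabu(target: str, timeout: int = 300) -> list[dict]:
    # Hypothetical Type B wrapper: dedicated tool name, custom timeout,
    # parsed output instead of raw stdout. Flags are illustrative.
    proc = subprocess.run(
        ["naabu", "-host", target, "-json"],
        capture_output=True, text=True, timeout=timeout,
    )
    return parse_naabu_output(proc.stdout)

def parse_naabu_output(raw: str) -> list[dict]:
    """Parse JSON-lines output, skipping blank or malformed lines."""
    results = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue  # tool banners and warnings are not JSON
        results.append({"host": obj.get("host"), "port": obj.get("port")})
    return results
```

The parsing step is what distinguishes Type B from Type A: the agent receives structured results rather than raw terminal output it has to re-interpret.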
Architecture reference included: Full file map (35+ files), tool execution flow diagram, MCP auto-discovery explanation, naming conventions table, and port allocation registry.
Use when: You're adding a tool to the automated reconnaissance pipeline (subdomain discovery, port scanning, URL crawling, enrichment, etc.).
What it covers:
| Phase | What the prompt guides you through |
|---|---|
| Phase 1 — Research | 10-step analysis: tool research, output schema capture, integration pattern selection (Docker-in-Docker / direct subprocess / API calls), pipeline phase identification, settings multi-layer flow, frontend section design, API key handling, graph DB integration, RoE scope filtering, parallelization opportunities (fan-out/fan-in) |
| Phase 2 — Implementation | Complete checklist: tool runner function, settings keys (4 layers), Prisma schema, Docker image registration, frontend section component, graph DB MERGE queries, node color/size config, stealth overrides, scope filtering, logging format, error handling |
Integration patterns (choose based on tool type):
| Pattern | When to use | Reference file |
|---|---|---|
| Docker-in-Docker | Tool has official Docker image | recon/port_scan.py (Naabu), recon/resource_enum.py (httpx, Katana) |
| Direct subprocess | pip package already in container | recon/domain_recon.py (Knockpy) |
| API/HTTP calls | External API or web service | recon/shodan_enrich.py, recon/urlscan_enrich.py |
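A hypothetical sketch of the Docker-in-Docker pattern: the recon runner shells out to the tool's official image rather than installing the tool in its own container. The flag layout and names here are illustrative only; the reference files above show the real pattern:

```python
import subprocess

def build_docker_command(image: str, tool_args: list[str],
                         network: str = "host") -> list[str]:
    # Hypothetical Docker-in-Docker invocation: run the tool's official
    # image as a throwaway container via the host Docker socket.
    return ["docker", "run", "--rm", f"--network={network}", image, *tool_args]

def run_tool(image: str, tool_args: list[str], timeout: int = 600) -> str:
    cmd = build_docker_command(image, tool_args)
    proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    return proc.stdout
```

Keeping command construction separate from execution makes the runner easy to unit-test without Docker available.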
Key details the prompt enforces:
- Settings must be added in all 4 layers (Prisma → Python defaults → Python fetch mapping → Frontend) or it breaks
- Graph nodes must reuse existing labels (17 node types) before creating new ones
- New graph schema changes must update 6 files (schema doc, Cypher prompt, colors, sizes, filter dropdown, legend)
- Results must be filtered against RoE scope before storage
- All log lines must follow the `[symbol][ToolName] message` format
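Two of these rules can be sketched in a few lines of Python (illustrative only; RedAmon's real implementations live inside the recon modules):

```python
import fnmatch

def in_scope(host: str, scope_patterns: list[str]) -> bool:
    # RoE scope filter sketch: a result is stored only if it matches
    # at least one allowed glob pattern from the engagement scope.
    return any(fnmatch.fnmatch(host, pattern) for pattern in scope_patterns)

def log_line(symbol: str, tool: str, message: str) -> str:
    # Enforced log format: [symbol][ToolName] message
    return f"[{symbol}][{tool}] {message}"
```

Filtering before storage (not after) matters: out-of-scope hosts must never reach the graph database in the first place.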
We recommend using Claude Code (Anthropic's agentic coding CLI) with Claude Opus 4.6 for tool integrations. This combination handles RedAmon's multi-layer architecture well because:
- 1M token context window — fits the entire integration prompt + all referenced files simultaneously
- Agentic execution — reads files, writes code, runs commands, and iterates autonomously
- Codebase awareness — searches across Python, TypeScript, Prisma, Docker, and YAML files to find every touchpoint
- Install Claude Code: `npm install -g @anthropic-ai/claude-code`
- Navigate to your RedAmon fork: `cd /path/to/redamon`
- Launch: `claude`
- Ensure you're using Opus 4.6 (the default model for Claude Code)
This iterative workflow uses the AI agent's ability to review its own output. Each loop catches mistakes the previous step missed. Do not skip the review loops — they're where the AI catches its own errors.
flowchart TD
A["① PROMPT"] --> B["② PLAN"]
B --> C["③ REVIEW PLAN"]
C --> D["④ IMPLEMENT"]
D --> E["⑤ DEEP REVIEW"]
E -->|Issues found| F["Fix"]
F --> E
E -->|Clean| G["⑥ UNIT TESTS"]
G --> H["⑦ END-TO-END UI TEST"]
H --> I(["✅ PR Ready"])
style A fill:#2563eb,color:#fff
style I fill:#16a34a,color:#fff
style F fill:#dc2626,color:#fff
Feed the integration prompt to Claude Code along with your tool specification. Copy the full content of the appropriate prompt file and replace [TOOL_NAME] with your tool:
For an agentic tool:
<paste full content of readmes/coding_agent_prompts/PROMPT.ADD_AGENTIC_TOOL.md>
Integrate Nikto into the RedAmon agentic system.
For a recon pipeline tool:
<paste full content of readmes/coding_agent_prompts/PROMPT.ADD_RECON_TOOL.md>
Integrate Subfinder into the RedAmon recon pipeline.
The AI will research the tool and produce a detailed implementation plan.
Let the AI generate its full plan. It will:
- Research the tool's CLI, output format, and dependencies
- Choose the integration type (A/B/C/D for agentic, or Docker/subprocess/API for recon)
- List every file that needs modification with specific changes
- Identify phase restrictions, dangerous tool classification, and RoE categories
Do not let it write code yet. The plan is the blueprint — it needs review first.
Before any code is written, ask:
"Review the plan you just created. Look for design issues, missing edge cases, missing files from the checklist, and potential bugs. Cross-reference against every item in the Phase 2 implementation checklist. Do not write code yet."
This catches:
- Missing files (e.g., forgot `ToolExecutionCard.tsx` for API key tools)
- Wrong integration type (e.g., chose Type B when the tool needs stateful sessions → Type C)
- Missing phase map entries, Prisma defaults, or frontend matrix rows
- Incorrect port assignments (collision with existing MCP servers)
Once the plan is clean, let the AI implement it:
"The plan looks good. Implement it now, following the exact checklist order."
The AI will modify all required files. For a typical Type B agentic tool, this means ~6-8 files. For a Type D API tool, ~12-15 files.
This is the most important step. After implementation, run iterative deep reviews:
"Do a deep review of all the code you just wrote. Check for: bugs, logic errors, security issues, missing error handling, incorrect imports, wrong variable names, inconsistencies between layers (Prisma field names vs Python keys vs frontend props), and edge cases."
After the AI fixes issues:
"Do another deep review. Focus on anything you might have missed or introduced during the last round of fixes."
Repeat until the review comes back clean. Typically takes 2-3 rounds. Common catches:
- Round 1: Missing `@default()` in Prisma, wrong camelCase↔SCREAMING_SNAKE mapping
- Round 2: Frontend `onChange` handler missing fallback default, duplicate key in Tool Matrix
- Round 3: Clean — ready for tests
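The Round 1 naming mismatch is mechanical enough to check with a throwaway helper like this one (purely illustrative, not part of RedAmon):

```python
import re

def camel_to_screaming_snake(name: str) -> str:
    # Insert "_" before each interior uppercase letter, then uppercase all:
    # a quick way to verify a Prisma camelCase field maps to its Python key.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).upper()
```

Running every new Prisma field name through a check like this catches the mapping drift before a deep review has to.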
"Write and execute unit tests for the code you implemented. Test the tool runner function, the settings fetch mapping, and the API endpoint if applicable."
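A minimal sketch of what such a unit test might look like, using a hypothetical settings-fetch mapping (all names invented for this example):

```python
def fetch_settings_mapping(raw: dict) -> dict:
    # Hypothetical mapping under test: Prisma camelCase fields -> Python
    # keys, with fallback defaults when the database row omits a field.
    return {
        "nikto_timeout": raw.get("niktoTimeout", 120),
        "nikto_enabled": raw.get("niktoEnabled", True),
    }

def test_mapping_applies_defaults():
    assert fetch_settings_mapping({}) == {"nikto_timeout": 120,
                                          "nikto_enabled": True}

def test_mapping_reads_overrides():
    assert fetch_settings_mapping({"niktoTimeout": 300})["nikto_timeout"] == 300

test_mapping_applies_defaults()
test_mapping_reads_overrides()
```

Tests at this layer are cheap insurance against the cross-layer naming bugs the deep reviews keep catching.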
The final gate before submitting your PR. Build, deploy, and verify the full integration works through the browser — because unit tests can't catch a broken dropdown, a missing Tool Matrix row, or a settings page that doesn't save.
- Build and start the stack:

  ```
  docker compose build webapp kali-sandbox
  docker compose up -d
  docker compose exec webapp npx prisma db push
  docker compose restart agent kali-sandbox
  ```
- Verify the UI end-to-end:
  - Open http://localhost:3000 → create or open a project
  - Project Settings → AI Agent → Tool Matrix — confirm the new tool appears with the correct phase checkboxes. Toggle phases on/off and save — verify the setting persists after page reload.
  - Global Settings (if API key tool) — confirm the key input field appears, enter a test key, save, reload — verify it's masked and persisted.
  - Tool Matrix warnings (if API key tool) — with no key set, confirm the yellow warning icon appears next to the tool. Set the key and verify the warning disappears.
  - Agent chat — start a session and ask the agent to use the new tool. Verify: the agent sees the tool, calls it with correct arguments, output renders in the chat timeline, and execution cards display properly.
  - Phase restriction — disable the tool in the current phase via Tool Matrix, then ask the agent to use it. Verify the agent refuses.
  - Dangerous tool confirmation (if applicable) — verify the Allow/Deny prompt appears before execution.
- Container logs — check `docker compose logs kali-sandbox` and `docker compose logs agent` for errors.
"I've built and deployed the stack. Walk me through every UI verification I should perform to confirm the integration is fully working end-to-end."
Only after all UI checks pass is the PR ready for submission.
Here's what a complete Claude Code session looks like for integrating a tool:
You: <paste PROMPT.ADD_AGENTIC_TOOL.md>
Integrate Nikto web scanner into the RedAmon agentic system.
AI: [Researches Nikto, reads existing files, produces plan]
→ Recommends Type B (new MCP tool on network_recon_server)
→ Lists 7 files to modify
You: Review the plan you just created. Look for design issues,
missing edge cases, and potential bugs before we write any code.
AI: [Reviews plan]
→ Catches: forgot stealth_rules.py entry, wrong phase assignment
→ Updates plan
You: Implement the reviewed plan.
AI: [Modifies 8 files: network_recon_server.py, tool_registry.py,
project_settings.py, stealth_rules.py, ToolMatrixSection.tsx,
schema.prisma, execute_plan_node.py, kali-sandbox/Dockerfile]
You: Do a deep review of all the code you just wrote.
AI: [Finds 3 issues: missing ANSI strip in output parser,
wrong timeout value, Prisma default missing new tool]
→ Fixes all three
You: Do another deep review.
AI: [Finds 1 issue: Tool Matrix row missing label field]
→ Fixes it
You: Do another deep review.
AI: "All code is clean. No issues found."
You: Write and execute unit tests for the code you implemented.
AI: [Writes and runs tests — all pass]
Then, step away from the AI and verify it yourself:
```
docker compose build webapp kali-sandbox
docker compose up -d
docker compose restart agent kali-sandbox
```

Open http://localhost:3000 and manually check:
- Tool Matrix row appears with correct phase checkboxes ✓
- Phase toggles persist after save and page reload ✓
- Agent chat — ask the agent to use the tool, output renders correctly ✓
- Phase restriction — disable the tool's phase, agent refuses to use it ✓
- Container logs (`docker compose logs agent kali-sandbox`) — no errors ✓
Result: A PR that touches every required file, follows every naming convention, handles every edge case, and passes review on the first submission.
| Tip | Why |
|---|---|
| Paste the full prompt, not a summary | The prompts contain line-number references, file paths, and cross-references that the AI uses to navigate the codebase accurately |
| Don't skip the plan review | The plan review catches architectural mistakes that are expensive to fix after implementation |
| Run at least 2 deep reviews | The first review catches obvious bugs; the second catches bugs introduced by the first round of fixes |
| One tool per session | Context stays focused; the AI doesn't confuse files or settings between tools |
| Let the AI read reference files | When the prompt says "read ShodanToolManager as reference", let the AI do it — it copies patterns more accurately than you can describe them |
| Build and test in Docker | Never run npm install or npx locally — always docker compose build and docker compose exec. The container environment is the source of truth |
| You want to... | Use this prompt |
|---|---|
| Add a tool the AI agent uses in chat (exploitation, scanning, OSINT) | PROMPT.ADD_AGENTIC_TOOL.md |
| Add a tool to the automated recon pipeline (runs before the agent) | PROMPT.ADD_RECON_TOOL.md |
| Add a tool that appears in both (e.g., Nmap is in recon AND agentic) | Use both prompts in separate sessions |
Once your PR is ready:
- Verify locally: `docker compose build` → `docker compose up -d` → test in browser
- Push your branch: `git push origin feature/add-<tool>-integration`
- Open a PR against `master` using the PR template
- In your PR description, mention which prompt you used and which integration type you chose
See CONTRIBUTING.md for branching conventions, commit message format, and the full PR process.