Running Reconnaissance
The reconnaissance pipeline is RedAmon's core scanning engine — a fully automated, parallelized process that maps your target's entire attack surface using a fan-out / fan-in architecture. Independent modules run concurrently via ThreadPoolExecutor, while data-dependent steps run sequentially. This page explains how to launch a scan, monitor its progress, and understand the results.
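The fan-out / fan-in pattern described above can be sketched in a few lines of Python. This is an illustrative reduction, not RedAmon's actual code — the module names and return values are hypothetical stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

def run_group(modules, target):
    """Fan out: submit every independent module at once; fan in: collect each result."""
    with ThreadPoolExecutor(max_workers=len(modules)) as pool:
        futures = {name: pool.submit(fn, target) for name, fn in modules.items()}
        return {name: fut.result() for name, fut in futures.items()}

# Hypothetical stand-ins for WHOIS, subdomain discovery, and URLScan modules
modules = {
    "whois": lambda t: f"whois:{t}",
    "discovery": lambda t: sorted({f"www.{t}", f"api.{t}"}),
    "urlscan": lambda t: f"urlscan:{t}",
}
merged = run_group(modules, "example.com")
```

Because `result()` blocks until each future completes, the group as a whole finishes only when its slowest module does — which is exactly why data-dependent steps are placed in later groups instead.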
Make sure you have:
- A user selected (see User Management)
- A project created with a target domain or IP/CIDR targets configured (see Creating a Project)
- The Red Zone open with your project selected (see Red Zone)
- On the Red Zone, locate the Recon Actions group (blue) in the toolbar
- Click the "Start Recon" button
A confirmation modal appears showing:
- Your project name and target domain
- Current graph statistics (how many nodes of each type already exist, if any)

- Click "Confirm" to start the scan
The "Start Recon" button changes to a spinner while the scan is running.
Once the scan starts, a Logs button (terminal icon) appears in the Recon Actions group.
- Click the Logs button to open the Logs Drawer on the right side
- Watch the real-time output as each phase progresses

The logs drawer shows:
- Current phase with phase number (e.g., "Phase 3: HTTP Probing")
- Log messages streaming in real-time as the scan progresses
- A Clear button to reset the log display
While the reconnaissance runs, the graph canvas auto-refreshes every 5 seconds. You'll see nodes appearing and connecting in real-time:
- First, Domain and Subdomain nodes appear (GROUP 1 -- parallel: WHOIS + 5 discovery tools + URLScan)
- Then IP nodes connect to subdomains (GROUP 1 -- DNS with 20 parallel workers)
- ExternalDomain nodes appear from URLScan enrichment (GROUP 1)
- New Subdomain, IP, Port, and Endpoint nodes appear from multi-engine search (GROUP 2b -- Uncover queries up to 13 search engines)
- Port nodes attach to IPs, Shodan enrichment data merges in (GROUP 3 -- parallel: Naabu + Shodan)
- Threat intelligence properties populate on IP and Domain nodes -- OTX pulse data, VirusTotal reputation scores, Censys service data, FOFA/ZoomEye/Netlas/CriminalIP host intelligence (GROUP 3b -- 7 tools parallel)
- Port nodes get enriched with product/version/CPE, Technology and Vulnerability nodes appear from NSE scripts (GROUP 3.5 — Nmap service detection)
- BaseURL, Service, and Technology nodes appear (GROUP 4 — HTTP Probe)
- Endpoint and Parameter nodes branch out (GROUP 5 — parallel: Katana + Hakrawler + GAU + Kiterunner, then jsluice + FFuf + Arjun)
- Vulnerability and CVE nodes connect to affected resources (GROUP 6 — Nuclei + MITRE)
When the scan completes:
- The spinner stops and the "Start Recon" button reappears
- A Download button (download icon) appears in the Recon Actions group
- Click it to download the complete results as a JSON file (`recon_{projectId}.json`)
The pipeline is organized into execution groups. Modules within each group run concurrently; groups execute sequentially because later groups depend on earlier results. Graph DB updates run in a dedicated background thread so the main pipeline is never blocked.
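A minimal sketch of this execution model — sequential groups feeding a dedicated graph-writer thread through a queue. The names (`graph_writer`, `updates`) are illustrative, not RedAmon's internals:

```python
import queue
import threading

updates = queue.Queue()
written = []

def graph_writer():
    # Dedicated background writer: the main pipeline never waits on graph I/O.
    while True:
        node = updates.get()
        if node is None:          # sentinel posted after the final group
            break
        written.append(node)      # stand-in for a Neo4j MERGE

writer = threading.Thread(target=graph_writer, daemon=True)
writer.start()

# Groups run sequentially; modules inside a group would fan out in practice.
for group in (["whois", "discovery"], ["port_scan", "shodan"]):
    for module in group:
        updates.put(f"{module}-node")

updates.put(None)                  # tell the writer to drain and stop
writer.join()
```

The queue decouples producers from the writer: slow graph transactions back up in the queue instead of stalling scanning modules.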
| Settings Tab | Phase | Tools | Type | Execution |
|---|---|---|---|---|
| Discovery & OSINT | Subdomain Discovery | crt.sh, HackerTarget, Subfinder, Amass, Knockpy | Passive* | 5 tools parallel |
| | Wildcard Filtering | Puredns | Active | Sequential |
| | WHOIS + URLScan | python-whois, URLScan.io API | Passive | Parallel |
| | DNS Resolution | dnspython | Passive | 20 parallel workers |
| | OSINT Enrichment | Shodan / InternetDB | Passive | Parallel with port scan |
| | Uncover Expansion | ProjectDiscovery Uncover (13 engines) | Passive | Before port scan (GROUP 2b) |
| | Threat Intel Enrichment | Censys, FOFA, OTX (AlienVault), Netlas, VirusTotal, ZoomEye, CriminalIP | Passive | 7 tools parallel (GROUP 3b) |
| Port Scanning | Port Scanning | Masscan, Naabu | Active | Both parallel |
| Nmap Service Detection | Service Version Detection | Nmap (-sV, --script vuln) | Active | Sequential per target |
| HTTP Probing | HTTP Probing | httpx | Active | Internal parallel |
| | Tech Detection | Wappalyzer | Passive | Sequential (post-probe) |
| | Banner Grabbing | Custom (Python sockets: SSH, FTP, SMTP, MySQL, etc.) | Active | Parallel workers |
| Resource Enum | Web Crawling | Katana, Hakrawler | Active | Parallel |
| | Archive Discovery | GAU (Wayback, CommonCrawl, OTX) | Passive | Parallel with crawlers |
| | Parameter Mining | ParamSpider (Wayback CDX) | Passive | Parallel with crawlers |
| | JS Analysis | jsluice | Passive | Sequential (post-crawl) |
| | Directory Fuzzing | FFuf | Active | Sequential (post-jsluice) |
| | Parameter Discovery | Arjun | Active | Methods parallel (GET/POST/JSON/XML) |
| | API Discovery | Kiterunner | Active | Sequential per wordlist |
| Vulnerability Scanning | Vulnerability Scanning | Nuclei (9,000+ templates + DAST + custom template upload) | Active | Internal parallel |
| Security Checks | Security Checks | WAF bypass, direct IP access, TLS expiry, missing headers, cache-control | Active | Parallel workers |
| CVE & MITRE | CVE Enrichment | NVD API, Vulners API | Passive | Sequential |
| | MITRE Enrichment | CWE / CAPEC mapping | Passive | Sequential |
*Amass can run in active mode when configured. Knockpy performs active DNS probing.
Each phase builds on the previous group's output. You can control which modules run via the Scan Modules setting in your project configuration.
The groups below describe the domain mode pipeline (the default). When a project uses IP/CIDR mode ("Start from IP" enabled), GROUP 1 is replaced:
| | Domain Mode | IP/CIDR Mode |
|---|---|---|
| GROUP 1 | Subdomain discovery (5 tools in parallel) + DNS (20 workers) + WHOIS + URLScan | CIDR expansion → Reverse DNS (PTR) per IP → IP WHOIS |
| Graph root | Real Domain node | Mock Domain node (ip-targets.{project_id}) |
| GAU | Available | Skipped (archives index by domain) |
| GROUPs 3-6 | Unchanged | Unchanged |
In IP mode, each target IP is resolved via PTR to discover its hostname. When no PTR record exists, a mock hostname is generated (e.g., 192-168-1-1). The remaining groups (port scan through MITRE enrichment) run identically.
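The PTR-with-fallback behavior can be approximated like this; `resolve_ptr` and `mock_hostname` are hypothetical names, but the fallback format matches the example above:

```python
import socket

def mock_hostname(ip: str) -> str:
    """Fallback hostname when no PTR record exists, e.g. 192.168.1.1 -> 192-168-1-1."""
    return ip.replace(".", "-")

def resolve_ptr(ip: str) -> str:
    """Reverse DNS (PTR) lookup, falling back to a generated mock hostname."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:  # covers herror/gaierror: NXDOMAIN, unreachable resolver, etc.
        return mock_hostname(ip)
```

The mock hostname keeps downstream modules (which expect one hostname per IP) working even for addresses with no reverse record.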
Purpose: Map the target's subdomain landscape. All three top-level tasks (WHOIS, discovery, URLScan) run concurrently. Within discovery, all 5 tools run in parallel.
Techniques used (all concurrent):
- Certificate Transparency via crt.sh — finds certificates issued for the domain
- HackerTarget API — passive DNS lookup
- Subfinder — passive subdomain enumeration using 50+ online sources (certificate logs, DNS databases, web archives)
- Amass — OWASP Amass subdomain enumeration using 50+ data sources (certificate logs, DNS databases, web archives, WHOIS). Supports optional active mode (zone transfers, certificate grabs) and DNS brute forcing
- Knockpy — active subdomain brute-forcing (if `useBruteforceForSubdomains` is enabled)
- WHOIS Lookup — registrar, dates, contacts, name servers (runs in parallel with discovery)
- URLScan.io — historical scan data, subdomains, IPs, TLS metadata (runs in parallel with discovery)
- DNS Resolution — A, AAAA, MX, NS, TXT, CNAME, SOA records for every discovered subdomain (20 parallel workers)
Output: Domain, Subdomain, IP, and DNSRecord nodes in the graph.
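The pipeline uses dnspython for this step; as a rough stdlib approximation, the 20-worker resolution looks like the sketch below (`resolve_a` and `resolve_all` are illustrative names, and only A records are shown):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def resolve_a(host: str):
    """Resolve IPv4 A records for one host; return (host, [ips]), or (host, []) on failure."""
    try:
        infos = socket.getaddrinfo(host, None, family=socket.AF_INET)
        return host, sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return host, []

def resolve_all(hosts, workers=20):
    """Resolve every discovered subdomain with a bounded worker pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(resolve_a, hosts))
```

Bounding the pool at 20 keeps resolver load predictable: thousands of subdomains queue up rather than opening thousands of simultaneous lookups.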
If a specific `subdomainList` is configured, the pipeline skips active discovery and only resolves those subdomains (WHOIS + URLScan still run in parallel). In IP mode, this group is replaced by reverse DNS lookups and IP WHOIS — see above.
Shodan enrichment (runs in GROUP 3 alongside port scan):
- Host Lookup — OS, ISP, organization, geolocation, and known vulnerabilities per IP
- Reverse DNS — discover hostnames missed by standard enumeration
- Domain DNS — subdomain enumeration via Shodan's DNS records (paid plan required)
- Passive CVEs — extract known CVEs from host data without active scanning
URLScan.io enrichment (runs in GROUP 1 alongside discovery):
- Queries historical scan data from URLScan.io's Search API
- Discovers subdomains, IP addresses, URL paths, TLS metadata, server technologies, and domain age
- Collects external domains encountered in historical scans for situational awareness
- Works without API key (public results) or with key (higher rate limits)
ExternalDomain nodes: Throughout the pipeline, multiple modules collect out-of-scope domains (URLScan historical data, HTTP probe redirects, Katana/GAU crawling). At the end of the pipeline, these are aggregated, deduplicated, and stored as ExternalDomain nodes linked to the root Domain.
Both modules are independently toggleable in the Discovery & OSINT tab of project settings. If URLScan enrichment runs, the `urlscan` provider is automatically removed from GAU to avoid duplicate data.
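As an illustration of the enrichment logic, a parser that splits URLScan-style search hits into in-scope subdomains and external domains might look like this. The `results[].page.domain` shape mirrors URLScan's Search API responses, but treat the field names as assumptions:

```python
def split_domains(results, root):
    """Partition historical-scan hits into in-scope subdomains vs external domains."""
    subs, external = set(), set()
    for hit in results:
        domain = hit.get("page", {}).get("domain", "")
        if domain == root or domain.endswith("." + root):
            subs.add(domain)        # in scope: the root itself or one of its subdomains
        elif domain:
            external.add(domain)    # out of scope: candidate ExternalDomain node
    return subs, external
```

This is the same in-scope/out-of-scope split that later feeds ExternalDomain aggregation at the end of the pipeline.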
Purpose: Expand the target surface by querying up to 13 search engines via ProjectDiscovery's uncover tool. Runs before Shodan and port scanning so newly discovered assets are processed by all downstream modules.
How it works:
- Builds a `provider-config.yaml` containing only engines that have valid API keys
- Runs the `projectdiscovery/uncover` Docker container with domain-based and SSL cert queries
- Deduplicates results by (IP, port) and handles engine quirks (Google URL-in-IP, PublicWWW host-only)
- Filters non-routable and CDN IPs via `ip_filter.py`
- Merges discovered subdomains into `dns.subdomains` and IPs into `metadata.expanded_ips`
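The key-filtering step could be sketched as below. `build_provider_config` is a hypothetical helper; the engine-name-to-key-list shape follows uncover's provider config convention, but verify it against the uncover documentation:

```python
def build_provider_config(api_keys):
    """Include only engines whose API key is actually configured (non-empty)."""
    supported = ("shodan", "censys", "fofa", "zoomeye", "netlas",
                 "criminalip", "quake", "hunter", "publicwww")
    return {engine: [api_keys[engine]]          # uncover expects a list of keys per engine
            for engine in supported
            if api_keys.get(engine)}
```

Only engines present in the resulting config are queried, which is why no extra setup is needed when Shodan, Censys, FOFA, etc. keys already exist.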
Engines (only those with configured keys are used):
| Engine | Key Source | Notes |
|---|---|---|
| Shodan | Reuses existing Shodan key | Search-based discovery (different from direct host lookup) |
| Censys | Reuses existing Censys token | Platform API v3 |
| FOFA | Reuses existing FOFA key | |
| ZoomEye | Reuses existing ZoomEye key | |
| Netlas | Reuses existing Netlas key | |
| CriminalIP | Reuses existing CriminalIP key | |
| Quake | Uncover-specific key | 360 Quake cyberspace search |
| Hunter | Uncover-specific key | Qianxin Hunter |
| PublicWWW | Uncover-specific key | Source code search engine |
| HunterHow | Uncover-specific key | hunter.how internet search |
| Google | Uncover-specific key + CX | Google Custom Search JSON API |
| Onyphe | Uncover-specific key | Cyber defense search engine |
| Driftnet | Uncover-specific key | Port and service discovery |
No extra keys needed if you already have Shodan, Censys, FOFA, etc. configured -- uncover reuses them automatically. The 7 uncover-specific engines are optional extras for broader coverage.
IP filtering: Non-routable IPs (RFC 1918, CGNAT, loopback) and CDN IPs are automatically filtered before results enter the pipeline. This prevents wasting API credits on downstream enrichment of unusable addresses.
Output: Subdomain, IP, Port, and Endpoint nodes in the graph. Discovered hosts are injected into the pipeline so GROUP 3+ modules process them.
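The stdlib `ipaddress` module already encodes the special-purpose registries, so the non-routable check reduces to roughly this (CDN filtering, which needs provider IP range lists, is omitted here):

```python
import ipaddress

def is_routable(ip: str) -> bool:
    """True only for globally routable addresses; drops RFC 1918, CGNAT, loopback, etc."""
    try:
        addr = ipaddress.ip_address(ip)
    except ValueError:
        return False                 # not a valid IP at all
    return addr.is_global            # False for private, CGNAT (100.64/10), loopback, reserved
```

Filtering before enrichment matters because each surviving IP triggers paid or rate-limited API calls downstream.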
Purpose: Discover open ports and enrich IPs with Shodan intelligence. Both tasks run concurrently.
Port Scan (Naabu) capabilities:
- SYN scanning (default) with CONNECT fallback
- Top-N port selection (100, 1000, or custom ranges)
- CDN/WAF detection (Cloudflare, Akamai, AWS CloudFront)
- Passive mode via Shodan InternetDB (no packets sent)
- IANA service name mapping (15,000+ entries)
Output: Port nodes linked to IP nodes. Enriched IP nodes from Shodan (OS, ISP, geolocation, passive CVEs).
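The CONNECT fallback listed above boils down to a timed TCP handshake per port. A minimal sketch (not Naabu's implementation — `connect_scan` is an illustrative name):

```python
import socket

def connect_scan(host: str, port: int, timeout: float = 1.0) -> bool:
    """TCP CONNECT check: True if the port completes a handshake within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:                  # refused, timed out, unreachable, etc.
        return False
```

SYN scanning avoids completing the handshake (needing raw sockets and privileges), which is why CONNECT is the unprivileged fallback.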
Purpose: Passively enrich discovered IPs and domains with threat intelligence from seven specialized OSINT platforms. All seven tools are dispatched to a `ThreadPoolExecutor(max_workers=5)`, so up to five queries run at once. This group runs in parallel with GROUP 3 (port scanning) — neither blocks the other.
Tools (all concurrent):
| Tool | Source | Input | Key Intelligence |
|---|---|---|---|
| Censys | Censys Search API v2 | IPs | Open ports, services, TLS certificate chains, geolocation, ASN, OS fingerprint |
| FOFA | FOFA Search API | Domain/IPs | IP:port pairs, HTTP titles, server headers, geolocation, certificate info, protocol details |
| OTX (AlienVault) | OTX Indicators API v1 | IPs, Domains | Threat reputation, malware families, MITRE ATT&CK IDs, passive DNS history, pulse data |
| Netlas | Netlas Responses API | Domain/IPs | Ports, HTTP metadata, geolocation (lat/lon, timezone), TLS certs, DNS records, WHOIS |
| VirusTotal | VirusTotal API v3 | Domains, IPs | Reputation score, AV analysis stats, categories, tags, JARM fingerprint |
| ZoomEye | ZoomEye API | Domain/IPs | Ports, service banners, device/OS, web app fingerprints, geolocation, ASN, SSL info |
| CriminalIP | Criminal IP API v1 | IPs | Risk score, threat tags (VPN/Tor/proxy/C2/scanner), geo, ISP, abuse history |
OTX is enabled by default — it supports anonymous requests (1,000 req/hr) without an API key. All other tools require an API key configured in Global Settings > API Keys.
Rate limiting: Each tool detects HTTP 429 responses and stops further queries for that scan. VirusTotal free-tier users are limited to 4 requests/minute; the pipeline automatically waits 65 seconds and retries on rate limit. CriminalIP retries after 2 seconds.
Key rotation: FOFA, OTX, Netlas, VirusTotal, ZoomEye, and CriminalIP support automatic round-robin key rotation. Configure additional keys in Global Settings to multiply effective rate limits.
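Round-robin rotation with 429-aware skipping can be modeled like this; `KeyRotator` is an illustrative class, not RedAmon's actual implementation:

```python
import itertools

class KeyRotator:
    """Round-robin over configured API keys, skipping keys that hit HTTP 429."""
    def __init__(self, keys):
        self._exhausted = set()
        self._cycle = itertools.cycle(keys)
        self._count = len(keys)

    def next_key(self):
        # Try at most one full lap; if every key is rate-limited, give up.
        for _ in range(self._count):
            key = next(self._cycle)
            if key not in self._exhausted:
                return key
        return None

    def mark_rate_limited(self, key):
        self._exhausted.add(key)
```

With N keys configured, the effective rate limit is roughly N times a single key's, since consecutive requests land on different keys.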
Output: Threat intelligence properties added to existing IP and Domain nodes in Neo4j. Data from each tool is stored as properties on the relevant node (no new node types). Results are also included in the recon_domain.json output file under per-tool keys.
Purpose: Deep service version detection and NSE vulnerability script scanning on all discovered open ports. Runs after port_scan merge (Masscan + Naabu results combined) so it only probes ports already confirmed as open.
Capabilities:
- Service version detection (`-sV`) -- identifies product name, version, and CPE for services running on open ports (e.g. `vsftpd 2.3.4`, `Apache Tomcat/8.5.19`)
- NSE vulnerability scripts (`--script vuln`) -- runs Nmap Scripting Engine vulnerability checks against discovered services, extracting CVE IDs from script output
- Configurable timing -- T1 (Sneaky) through T5 (Insane), with per-host and total timeout controls
- Stealth mode -- automatically reduces timing to T2 (Polite) and disables NSE scripts
Graph enrichment:
- Port nodes enriched with `product`, `version`, `cpe`, and an `nmap_scanned` flag
- Technology nodes created from detected services (`(Service)-[:USES_TECHNOLOGY]->(Technology)`, `(Port)-[:HAS_TECHNOLOGY]->(Technology)`)
- Vulnerability nodes created from NSE findings (`(Vulnerability)-[:AFFECTS]->(Port)`, `(Vulnerability)-[:FOUND_ON]->(Technology)`)
- CVE nodes created from NSE-detected CVEs (`(Vulnerability)-[:HAS_CVE]->(CVE)`, `(Technology)-[:HAS_KNOWN_CVE]->(CVE)`)
Output: Enriched Port and Service nodes, new Technology/Vulnerability/CVE nodes from Nmap. Detected service versions also feed into the CVE lookup pipeline (GROUP 6) for NVD/Vulners enrichment.
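Extracting CVE IDs from NSE script output is essentially a regex pass; a minimal, hypothetical version:

```python
import re

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")

def extract_cves(nse_output: str):
    """Pull unique CVE IDs from Nmap --script vuln output, preserving first-seen order."""
    seen = []
    for cve in CVE_RE.findall(nse_output):
        if cve not in seen:
            seen.append(cve)
    return seen
```

Deduplication matters because scripts like `vulners` often repeat the same CVE once per matching reference URL.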
Only runs when `NMAP_ENABLED` is true and port scan results contain discovered ports. Configure in the Nmap Service Detection tab of project settings.
Purpose: Determine which services are live and what software they run.
httpx probing:
- Status codes, content types, page titles, server headers
- TLS certificate inspection (subject, issuer, expiry, ciphers, JARM)
- Response times, word counts, line counts
Technology detection (dual engine):
- httpx built-in fingerprinting for major frameworks
- Wappalyzer second pass (6,000+ fingerprints) for CMS plugins, JS libraries, analytics tools
Banner grabbing:
- Raw socket connections for non-HTTP services (SSH, FTP, SMTP, MySQL, Redis)
- Protocol-specific probe strings for version extraction
Output: BaseURL, Service, Technology, Certificate, Header nodes.
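Banner grabbing for protocols that speak first (SSH, FTP, SMTP greet before the client sends anything) is a connect-and-read. A simplified sketch without the protocol-specific probe strings mentioned above:

```python
import socket

def grab_banner(host: str, port: int, timeout: float = 2.0) -> str:
    """Read the service's opening banner; return '' if nothing arrives or connect fails."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(1024).decode(errors="replace").strip()
    except OSError:
        return ""
```

For protocols that stay silent until the client talks (e.g. HTTP, MySQL handshake variants), a probe string would be sent before the `recv`.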
Purpose: Discover every reachable endpoint and hidden parameter. Four tools run simultaneously, then jsluice, FFuf, and Arjun run sequentially.
| Tool | Type | Description |
|---|---|---|
| Katana | Active | Web crawler following links to configurable depth, optionally with JavaScript rendering |
| Hakrawler | Active | DOM-aware web crawler via Docker, discovers links and forms |
| GAU | Passive | Queries Wayback Machine, Common Crawl, AlienVault OTX, URLScan.io for historical URLs |
| Kiterunner | Active | API brute-forcer testing REST/GraphQL route wordlists |
| jsluice | Passive | JavaScript analysis — extracts URLs, endpoints, and embedded secrets (AWS keys, API tokens, etc.) from .js files discovered by Katana/Hakrawler |
| FFuf | Active | Directory/endpoint fuzzing using wordlists (SecLists + custom uploads) to discover hidden content |
| Arjun | Active | Hidden HTTP parameter discovery — tests ~25,000 parameter names against discovered endpoints. Multiple methods (GET/POST/JSON/XML) run in parallel |
Katana, Hakrawler, GAU, and Kiterunner run in parallel. Once crawling completes, jsluice analyzes the discovered JavaScript files sequentially, FFuf brute-forces directory paths using wordlists, then Arjun discovers hidden parameters on the discovered endpoints (with selected methods running in parallel).
Results are merged, deduplicated, and classified:
- Categories: auth, file_access, api, dynamic, static, admin
- Parameter typing: id, file, search, auth_param
Output: Endpoint, Parameter, and Secret nodes linked to BaseURL nodes.
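The classification step might use keyword heuristics along these lines — the exact keyword lists below are invented for illustration; only the category and parameter-type names come from the pipeline:

```python
def classify_endpoint(path: str) -> str:
    """Bucket a URL path into one of the documented categories (illustrative keywords)."""
    p = path.lower()
    if any(k in p for k in ("/login", "/signin", "/oauth", "/auth")):
        return "auth"
    if any(k in p for k in ("/admin", "/manage")):
        return "admin"
    if "/api/" in p or "/graphql" in p:
        return "api"
    if p.endswith((".css", ".png", ".jpg", ".js")):
        return "static"
    if any(k in p for k in ("/download", "/file", "/export")):
        return "file_access"
    return "dynamic"

def classify_param(name: str) -> str:
    """Type a parameter name into the documented buckets (illustrative keywords)."""
    n = name.lower()
    if n in ("id", "uid", "user_id") or n.endswith("_id"):
        return "id"
    if n in ("file", "path", "page", "template"):
        return "file"
    if n in ("q", "query", "search", "keyword"):
        return "search"
    if n in ("token", "key", "session", "apikey"):
        return "auth_param"
    return "other"
```

These labels are what make later phases targeted — e.g. `file`-typed parameters are prime LFI candidates for Nuclei's DAST mode.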
Purpose: Test discovered endpoints for security vulnerabilities.
Capabilities:
- 9,000+ community templates for known CVEs, misconfigurations, exposed panels
- DAST mode — active fuzzing with XSS, SQLi, RCE, LFI, SSRF, SSTI payloads
- Severity filtering — scan for critical, high, medium, and/or low findings
- Interactsh — out-of-band detection for blind vulnerabilities
- CVE enrichment — cross-references findings against NVD for CVSS scores
30+ custom security checks (configurable individually):
- Direct IP access, missing security headers (CSP, HSTS, etc.)
- TLS certificate expiry, DNS security (SPF, DMARC, DNSSEC, zone transfer)
- Open services (Redis no-auth, Kubernetes API, SMTP open relay)
- Insecure form actions, missing rate limiting
Output: Vulnerability and CVE nodes linked to Endpoints and Parameters.
MITRE Enrichment (runs automatically after Nuclei):
- Maps every CVE to its corresponding CWE weakness and CAPEC attack patterns
- Uses the CVE2CAPEC repository (auto-updated with 24-hour cache TTL)
- Provides attack pattern classification for every vulnerability found
Additional MITRE output: MitreData (CWE) and Capec nodes linked to CVE nodes.
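The 24-hour cache TTL behavior can be modeled with a small time-keyed store; `TTLCache` is illustrative, not the project's actual cache:

```python
import time

class TTLCache:
    """Tiny TTL cache in the spirit of the 24-hour CVE2CAPEC mapping cache."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self._store[key] = (now, value)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is None or now - entry[0] > self.ttl:
            return None              # missing or expired: caller re-downloads the mapping
        return entry[1]
```

An expired entry simply returns `None`, prompting a fresh download of the CVE→CWE/CAPEC mapping at most once per TTL window.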
Duration varies based on target size, network conditions, and scan settings:
| Target Type | Approximate Duration |
|---|---|
| Small (1-5 subdomains, few ports) | 5-15 minutes |
| Medium (10-50 subdomains) | 15-45 minutes |
| Large (100+ subdomains) | 1-3 hours |
| IP mode (single IP) | 5-10 minutes |
| IP mode (CIDR /24 = 254 hosts) | 30-90 minutes |
Key factors affecting duration:
- Bruteforce for subdomains adds significant time for large domains
- Katana depth > 2 increases crawling time exponentially
- DAST mode doubles vulnerability scanning time
- GAU with verification adds 30-60 seconds per domain
Once the scan is complete, you can:
- Explore the graph — click nodes to inspect their properties, filter by type using the bottom bar
- Switch to Data Table — view all findings in a searchable, sortable table with Excel export
- Run GVM scan — complement web-layer findings with network-level vulnerability testing (see GVM Vulnerability Scanning)
- Run GitHub Hunt — search for leaked secrets (see GitHub Secret Hunting)
- Run TruffleHog — scan repositories with 700+ secret detectors and live API verification (see TruffleHog Secret Scanning)
- Use the AI Agent — ask the agent to analyze findings, identify attack paths, and exploit vulnerabilities (see AI Agent Guide)
- GVM Vulnerability Scanning — add network-level vulnerability testing
- AI Agent Guide — let the AI analyze and act on your findings