diff --git a/skills/firecrawl-agent/SKILL.md b/skills/firecrawl-agent/SKILL.md
deleted file mode 100644
index e310dfc..0000000
--- a/skills/firecrawl-agent/SKILL.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-name: firecrawl-agent
-description: |
-  AI-powered autonomous data extraction that navigates complex sites and returns structured JSON. Use this skill when the user wants structured data from websites, needs to extract pricing tiers, product listings, directory entries, or any data as JSON with a schema. Triggers on "extract structured data", "get all the products", "pull pricing info", "extract as JSON", or when the user provides a JSON schema for website data. More powerful than simple scraping for multi-page structured extraction.
-allowed-tools:
-  - Bash(firecrawl *)
-  - Bash(npx firecrawl *)
----
-
-# firecrawl agent
-
-AI-powered autonomous extraction. The agent navigates sites and extracts structured data (takes 2-5 minutes).
-
-## When to use
-
-- You need structured data from complex multi-page sites
-- Manual scraping would require navigating many pages
-- You want the AI to figure out where the data lives
-
-## Quick start
-
-```bash
-# Extract structured data
-firecrawl agent "extract all pricing tiers" --wait -o .firecrawl/pricing.json
-
-# With a JSON schema for structured output
-firecrawl agent "extract products" --schema '{"type":"object","properties":{"name":{"type":"string"},"price":{"type":"number"}}}' --wait -o .firecrawl/products.json
-
-# Focus on specific pages
-firecrawl agent "get feature list" --urls "<url>" --wait -o .firecrawl/features.json
-```
-
-## Options
-
-| Option                 | Description                               |
-| ---------------------- | ----------------------------------------- |
-| `--urls <urls>`        | Starting URLs for the agent               |
-| `--model <model>`      | Model to use: spark-1-mini or spark-1-pro |
-| `--schema <schema>`    | JSON schema for structured output         |
-| `--schema-file <path>` | Path to JSON schema file                  |
-| `--max-credits <n>`    | Credit limit for this agent run           |
-| `--wait`               | Wait for agent to complete                |
-| `--pretty`             | Pretty print JSON output                  |
-| `-o, --output <path>`  | Output file path                          |
-
-## Tips
-
-- Always use `--wait` to get results inline. Without it, returns a job ID.
-- Use `--schema` for predictable, structured output — otherwise the agent returns freeform data.
-- Agent runs consume more credits than simple scrapes. Use `--max-credits` to cap spending.
-- For simple single-page extraction, prefer `scrape` — it's faster and cheaper.
-
-## See also
-
-- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — simpler single-page extraction
-- [firecrawl-instruct](../firecrawl-instruct/SKILL.md) — scrape + interact for manual page interaction (more control)
-- [firecrawl-crawl](../firecrawl-crawl/SKILL.md) — bulk extraction without AI
diff --git a/skills/firecrawl-cli/SKILL.md b/skills/firecrawl-cli/SKILL.md
index 41f4cdd..5755483 100644
--- a/skills/firecrawl-cli/SKILL.md
+++ b/skills/firecrawl-cli/SKILL.md
@@ -71,17 +71,31 @@ For detailed command reference, run `firecrawl --help`.
 ## When to Load References
 
-- **Searching the web or finding sources first** -> [firecrawl-search](../firecrawl-search/SKILL.md)
-- **Scraping a known URL** -> [firecrawl-scrape](../firecrawl-scrape/SKILL.md)
-- **Finding URLs on a known site** -> [firecrawl-map](../firecrawl-map/SKILL.md)
-- **Bulk extraction from a docs section or site** -> [firecrawl-crawl](../firecrawl-crawl/SKILL.md)
-- **AI-powered structured extraction from complex sites** -> [firecrawl-agent](../firecrawl-agent/SKILL.md)
-- **Clicks, forms, login, pagination, or post-scrape browser actions** -> [firecrawl-instruct](../firecrawl-instruct/SKILL.md)
-- **Downloading a site to local files** -> [firecrawl-download](../firecrawl-download/SKILL.md)
+- **Searching the web or finding sources first** -> [references/search.md](references/search.md)
+- **Scraping a known URL** -> [references/scrape.md](references/scrape.md)
+- **Finding URLs on a known site** -> [references/map.md](references/map.md)
+- **Bulk extraction from a docs section or site** -> [references/crawl.md](references/crawl.md)
+- **AI-powered structured extraction from complex sites** -> [references/agent.md](references/agent.md)
+- **Clicks, forms, login, pagination, or post-scrape browser actions** -> [references/interact.md](references/interact.md)
+- **Downloading a site to local files** -> [references/download.md](references/download.md)
 - **Install, auth, or setup problems** -> [rules/install.md](rules/install.md)
 - **Output handling and safe file-reading patterns** -> [rules/security.md](rules/security.md)
 - **Integrating Firecrawl into an app, adding `FIRECRAWL_API_KEY` to `.env`, or choosing endpoint usage in product code** -> install the Firecrawl skills repo with `npx skills add firecrawl/skills`
+
+## References
+
+| Need | Reference |
+| ---- | --------- |
+| Search the web or discover sources | [references/search.md](references/search.md) |
+| Scrape a known URL | [references/scrape.md](references/scrape.md) |
+| Find URLs on a known site | [references/map.md](references/map.md) |
+| Crawl a site section | [references/crawl.md](references/crawl.md) |
+| Extract structured data with AI | [references/agent.md](references/agent.md) |
+| Interact with a page after scraping | [references/interact.md](references/interact.md) |
+| Download a site to local files | [references/download.md](references/download.md) |
+| Fix install or auth issues | [rules/install.md](rules/install.md) |
+| Review output-handling guidance | [rules/security.md](rules/security.md) |
+
 ## Output & Organization
 
 Unless the user specifies to return in context, write results to `.firecrawl/` with `-o`. Add `.firecrawl/` to `.gitignore`. Always quote URLs - shell interprets `?` and `&` as special characters.
diff --git a/skills/firecrawl-cli/references/agent.md b/skills/firecrawl-cli/references/agent.md
new file mode 100644
index 0000000..ace189c
--- /dev/null
+++ b/skills/firecrawl-cli/references/agent.md
@@ -0,0 +1,35 @@
+# Agent Reference
+
+Use this when manual scraping would require navigating many pages or when the user wants structured JSON output with a schema.
+
+## Quick start
+
+```bash
+# Extract structured data
+firecrawl agent "extract all pricing tiers" --wait -o .firecrawl/pricing.json
+
+# With a JSON schema
+firecrawl agent "extract products" --schema '{"type":"object","properties":{"name":{"type":"string"},"price":{"type":"number"}}}' --wait -o .firecrawl/products.json
+
+# Focus on specific pages
+firecrawl agent "get feature list" --urls "<url>" --wait -o .firecrawl/features.json
+```
+
+## Options
+
+| Option                 | Description                               |
+| ---------------------- | ----------------------------------------- |
+| `--urls <urls>`        | Starting URLs for the agent               |
+| `--model <model>`      | Model to use: spark-1-mini or spark-1-pro |
+| `--schema <schema>`    | JSON schema for structured output         |
+| `--schema-file <path>` | Path to JSON schema file                  |
+| `--max-credits <n>`    | Credit limit for this agent run           |
+| `--wait`               | Wait for agent to complete                |
+| `--pretty`             | Pretty print JSON output                  |
+| `-o, --output <path>`  | Output file path                          |
+
+## Tips
+
+- Use `--schema` for predictable output whenever possible.
+- Agent runs usually cost more and take longer than plain scrape or crawl.
+- For single-page extraction, prefer `scrape` first.
diff --git a/skills/firecrawl-cli/references/crawl.md b/skills/firecrawl-cli/references/crawl.md
new file mode 100644
index 0000000..ca88a90
--- /dev/null
+++ b/skills/firecrawl-cli/references/crawl.md
@@ -0,0 +1,37 @@
+# Crawl Reference
+
+Use this when you need content from many pages on one site, such as a docs section or large content area.
+
+## Quick start
+
+```bash
+# Crawl a docs section
+firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json
+
+# Full crawl with depth limit
+firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json
+
+# Check status of a running crawl
+firecrawl crawl <job-id>
+```
+
+## Options
+
+| Option                    | Description                                 |
+| ------------------------- | ------------------------------------------- |
+| `--wait`                  | Wait for crawl to complete before returning |
+| `--progress`              | Show progress while waiting                 |
+| `--limit <n>`             | Max pages to crawl                          |
+| `--max-depth <n>`         | Max link depth to follow                    |
+| `--include-paths <paths>` | Only crawl URLs matching these paths        |
+| `--exclude-paths <paths>` | Skip URLs matching these paths              |
+| `--delay <ms>`            | Delay between requests                      |
+| `--max-concurrency <n>`   | Max parallel crawl workers                  |
+| `--pretty`                | Pretty print JSON output                    |
+| `-o, --output <path>`     | Output file path                            |
+
+## Tips
+
+- Use `--wait` when you need results in the current session.
+- Scope crawls with `--include-paths` instead of crawling an entire domain unnecessarily.
+- Large crawls consume credits per page, so check `firecrawl credit-usage` first.
diff --git a/skills/firecrawl-cli/references/download.md b/skills/firecrawl-cli/references/download.md
new file mode 100644
index 0000000..14134d7
--- /dev/null
+++ b/skills/firecrawl-cli/references/download.md
@@ -0,0 +1,36 @@
+# Download Reference
+
+Use this when the goal is to save an entire site or section as local files rather than just extract content once.
+
+## Quick start
+
+```bash
+# Interactive wizard
+firecrawl download https://docs.example.com
+
+# With screenshots
+firecrawl download https://docs.example.com --screenshot --limit 20 -y
+
+# Filter to specific sections
+firecrawl download https://docs.example.com --include-paths "/features,/sdks"
+
+# Skip translations
+firecrawl download https://docs.example.com --exclude-paths "/zh,/ja,/fr,/es,/pt-BR"
+```
+
+## Options
+
+| Option                    | Description                                 |
+| ------------------------- | ------------------------------------------- |
+| `--limit <n>`             | Max pages to download                       |
+| `--search <query>`        | Filter URLs by search query                 |
+| `--include-paths <paths>` | Only download matching paths                |
+| `--exclude-paths <paths>` | Skip matching paths                         |
+| `--allow-subdomains`      | Include subdomain pages                     |
+| `-y`                      | Skip confirmation prompt in automated flows |
+
+## Tips
+
+- Always pass `-y` in automated flows to skip the confirmation prompt.
+- All scrape options still work with download.
+- Use download when the end goal is local files, not just extraction output.
diff --git a/skills/firecrawl-cli/references/interact.md b/skills/firecrawl-cli/references/interact.md
new file mode 100644
index 0000000..6020135
--- /dev/null
+++ b/skills/firecrawl-cli/references/interact.md
@@ -0,0 +1,37 @@
+# Interact Reference
+
+Use this when content requires clicks, forms, login, pagination, or multi-step navigation and plain scrape is not enough.
+
+## Quick start
+
+```bash
+# 1. Scrape a page
+firecrawl scrape "<url>"
+
+# 2. Interact with natural language
+firecrawl interact --prompt "Click the login button"
+firecrawl interact --prompt "Fill in the email field with test@example.com"
+
+# 3. Or use code for precise control
+firecrawl interact --code "agent-browser click @e5" --language bash
+
+# 4. Stop the session
+firecrawl interact stop
+```
+
+## Options
+
+| Option                | Description                            |
+| --------------------- | -------------------------------------- |
+| `--prompt <prompt>`   | Natural language instruction           |
+| `--code <code>`       | Code to execute in the browser session |
+| `--language <lang>`   | Language for code: bash, python, node  |
+| `--timeout <seconds>` | Execution timeout                      |
+| `--scrape-id <id>`    | Target a specific scrape               |
+| `-o, --output <path>` | Output file path                       |
+
+## Tips
+
+- Always scrape first. `interact` needs a scrape ID from a previous scrape.
+- Use `firecrawl interact stop` to free resources when done.
+- Do not use interact for general web search. Use `search` for that.
diff --git a/skills/firecrawl-cli/references/map.md b/skills/firecrawl-cli/references/map.md
new file mode 100644
index 0000000..d1cc584
--- /dev/null
+++ b/skills/firecrawl-cli/references/map.md
@@ -0,0 +1,29 @@
+# Map Reference
+
+Use this when the user knows the site but not the exact page, or wants to list and filter URLs before scraping.
+
+## Quick start
+
+```bash
+# Find a specific page on a large site
+firecrawl map "<url>" --search "authentication" -o .firecrawl/filtered.txt
+
+# Get all URLs
+firecrawl map "<url>" --limit 500 --json -o .firecrawl/urls.json
+```
+
+## Options
+
+| Option                 | Description                  |
+| ---------------------- | ---------------------------- |
+| `--limit <n>`          | Max number of URLs to return |
+| `--search <query>`     | Filter URLs by search query  |
+| `--sitemap <mode>`     | Sitemap handling strategy    |
+| `--include-subdomains` | Include subdomain URLs       |
+| `--json`               | Output as JSON               |
+| `-o, --output <path>`  | Output file path             |
+
+## Tips
+
+- `map --search` plus `scrape` is a common pattern for large docs sites.
+- If the goal is to download or extract lots of pages, graduate to `crawl` or `download`.
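The map-then-scrape pattern above can be sketched end to end. This is a minimal illustration, not part of the patch: the real `firecrawl` calls are left commented out, the sample file stands in for what `map --search ... -o` would save, and the URLs plus the one-URL-per-line layout are assumptions for the sake of the example.

```shell
# Sketch of the map-then-scrape pattern (firecrawl calls shown but not run).
mkdir -p .firecrawl

# Normally: firecrawl map "https://docs.example.com" --search "auth" -o .firecrawl/auth-urls.txt
# Stand-in for map's output, assuming one URL per line:
cat > .firecrawl/auth-urls.txt <<'EOF'
https://docs.example.com/docs/api/authentication
https://docs.example.com/docs/guides/oauth
EOF

# Take the top match and scrape only that page.
url="$(head -n 1 .firecrawl/auth-urls.txt)"
echo "scraping: $url"
# firecrawl scrape "$url" --only-main-content -o .firecrawl/authentication.md
```

The point of the pattern is that only one page is actually scraped, no matter how many URLs map returns.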
diff --git a/skills/firecrawl-cli/references/scrape.md b/skills/firecrawl-cli/references/scrape.md
new file mode 100644
index 0000000..0ac425b
--- /dev/null
+++ b/skills/firecrawl-cli/references/scrape.md
@@ -0,0 +1,38 @@
+# Scrape Reference
+
+Use this when you already have a URL and want content from a static page or JS-rendered SPA.
+
+## Quick start
+
+```bash
+# Basic markdown extraction
+firecrawl scrape "<url>" -o .firecrawl/page.md
+
+# Main content only
+firecrawl scrape "<url>" --only-main-content -o .firecrawl/page.md
+
+# Wait for JS to render
+firecrawl scrape "<url>" --wait-for 3000 -o .firecrawl/page.md
+
+# Multiple URLs
+firecrawl scrape https://example.com https://example.com/blog https://example.com/docs
+```
+
+## Options
+
+| Option                   | Description                                                      |
+| ------------------------ | ---------------------------------------------------------------- |
+| `-f, --format <formats>` | Output formats: markdown, html, rawHtml, links, screenshot, json |
+| `-Q, --query <query>`    | Ask a question about the page content (5 credits)                |
+| `-H`                     | Include HTTP headers in output                                   |
+| `--only-main-content`    | Strip nav, footer, sidebar and keep the main content             |
+| `--wait-for <ms>`        | Wait for JS rendering before scraping                            |
+| `--include-tags <tags>`  | Only include these HTML tags                                     |
+| `--exclude-tags <tags>`  | Exclude these HTML tags                                          |
+| `-o, --output <path>`    | Output file path                                                 |
+
+## Tips
+
+- Prefer plain scrape over `--query` when you want the full page content to inspect yourself.
+- Try scrape before interact. Escalate only when you truly need clicks, forms, login, or pagination.
+- Multiple URLs are scraped concurrently. Check `firecrawl --status` for the current concurrency limit.
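The "prefer plain scrape over `--query`" tip can be made concrete. The sketch below is not part of the patch: the real scrape call is commented out, and the pricing markdown is invented to stand in for what `firecrawl scrape <url> -o` would save.

```shell
# Sketch: save the page once, then answer questions locally with grep
# instead of spending 5 extra credits per --query call.
mkdir -p .firecrawl

# Normally: firecrawl scrape "https://example.com/pricing" -o .firecrawl/pricing.md
# Stand-in for the saved page (contents are made up):
cat > .firecrawl/pricing.md <<'EOF'
# Pricing
Free tier: 500 credits per month.
Enterprise plan: custom pricing, contact sales.
EOF

# Any number of follow-up questions are then free:
grep -i "enterprise" .firecrawl/pricing.md
```

One scrape, many local searches; `--query` only wins when you need a single answer and never want the page on disk.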
diff --git a/skills/firecrawl-search/SKILL.md b/skills/firecrawl-cli/references/search.md
similarity index 50%
rename from skills/firecrawl-search/SKILL.md
rename to skills/firecrawl-cli/references/search.md
index aeea8ed..0928392 100644
--- a/skills/firecrawl-search/SKILL.md
+++ b/skills/firecrawl-cli/references/search.md
@@ -1,21 +1,6 @@
----
-name: firecrawl-search
-description: |
-  Web search with full page content extraction. Use this skill whenever the user asks to search the web, find articles, research a topic, look something up, find recent news, discover sources, or says "search for", "find me", "look up", "what are people saying about", or "find articles about". Returns real search results with optional full-page markdown — not just snippets. Provides capabilities beyond Claude's built-in WebSearch.
-allowed-tools:
-  - Bash(firecrawl *)
-  - Bash(npx firecrawl *)
----
+# Search Reference
 
-# firecrawl search
-
-Web search with optional content scraping. Returns search results as JSON, optionally with full page content.
-
-## When to use
-
-- You don't have a specific URL yet
-- You need to find pages, answer questions, or discover sources
-- First step in the [workflow escalation pattern](firecrawl-cli): search → scrape → map → crawl → interact
+Use this when you do not have a specific URL yet and need to find pages, answer questions, discover sources, or gather recent results.
 
 ## Quick start
@@ -47,13 +32,6 @@ firecrawl search "your query" --sources news --tbs qdr:d -o .firecrawl/news.json
 
 ## Tips
 
-- **`--scrape` fetches full content** — don't re-scrape URLs from search results. This saves credits and avoids redundant fetches.
+- `--scrape` already fetches full page content, so do not re-scrape the same result URLs.
 - Always write results to `.firecrawl/` with `-o` to avoid context window bloat.
-- Use `jq` to extract URLs or titles: `jq -r '.data.web[].url' .firecrawl/search.json`
-- Naming convention: `.firecrawl/search-{query}.json` or `.firecrawl/search-{query}-scraped.json`
-
-## See also
-
-- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — scrape a specific URL
-- [firecrawl-map](../firecrawl-map/SKILL.md) — discover URLs within a site
-- [firecrawl-crawl](../firecrawl-crawl/SKILL.md) — bulk extract from a site
+- Use `jq -r '.data.web[].url' .firecrawl/search.json` to extract URLs quickly.
diff --git a/skills/firecrawl-crawl/SKILL.md b/skills/firecrawl-crawl/SKILL.md
deleted file mode 100644
index ca6e6b5..0000000
--- a/skills/firecrawl-crawl/SKILL.md
+++ /dev/null
@@ -1,58 +0,0 @@
----
-name: firecrawl-crawl
-description: |
-  Bulk extract content from an entire website or site section. Use this skill when the user wants to crawl a site, extract all pages from a docs section, bulk-scrape multiple pages following links, or says "crawl", "get all the pages", "extract everything under /docs", "bulk extract", or needs content from many pages on the same site. Handles depth limits, path filtering, and concurrent extraction.
-allowed-tools:
-  - Bash(firecrawl *)
-  - Bash(npx firecrawl *)
----
-
-# firecrawl crawl
-
-Bulk extract content from a website. Crawls pages following links up to a depth/limit.
-
-## When to use
-
-- You need content from many pages on a site (e.g., all `/docs/`)
-- You want to extract an entire site section
-- Step 4 in the [workflow escalation pattern](firecrawl-cli): search → scrape → map → **crawl** → interact
-
-## Quick start
-
-```bash
-# Crawl a docs section
-firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json
-
-# Full crawl with depth limit
-firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json
-
-# Check status of a running crawl
-firecrawl crawl <job-id>
-```
-
-## Options
-
-| Option                    | Description                                 |
-| ------------------------- | ------------------------------------------- |
-| `--wait`                  | Wait for crawl to complete before returning |
-| `--progress`              | Show progress while waiting                 |
-| `--limit <n>`             | Max pages to crawl                          |
-| `--max-depth <n>`         | Max link depth to follow                    |
-| `--include-paths <paths>` | Only crawl URLs matching these paths        |
-| `--exclude-paths <paths>` | Skip URLs matching these paths              |
-| `--delay <ms>`            | Delay between requests                      |
-| `--max-concurrency <n>`   | Max parallel crawl workers                  |
-| `--pretty`                | Pretty print JSON output                    |
-| `-o, --output <path>`     | Output file path                            |
-
-## Tips
-
-- Always use `--wait` when you need the results immediately. Without it, crawl returns a job ID for async polling.
-- Use `--include-paths` to scope the crawl — don't crawl an entire site when you only need one section.
-- Crawl consumes credits per page. Check `firecrawl credit-usage` before large crawls.
-
-## See also
-
-- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — scrape individual pages
-- [firecrawl-map](../firecrawl-map/SKILL.md) — discover URLs before deciding to crawl
-- [firecrawl-download](../firecrawl-download/SKILL.md) — download site to local files (uses map + scrape)
diff --git a/skills/firecrawl-download/SKILL.md b/skills/firecrawl-download/SKILL.md
deleted file mode 100644
index d2beeb7..0000000
--- a/skills/firecrawl-download/SKILL.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-name: firecrawl-download
-description: |
-  Download an entire website as local files — markdown, screenshots, or multiple formats per page. Use this skill when the user wants to save a site locally, download documentation for offline use, bulk-save pages as files, or says "download the site", "save as local files", "offline copy", "download all the docs", or "save for reference". Combines site mapping and scraping into organized local directories.
-allowed-tools:
-  - Bash(firecrawl *)
-  - Bash(npx firecrawl *)
----
-
-# firecrawl download
-
-> **Experimental.** Convenience command that combines `map` + `scrape` to save an entire site as local files.
-
-Maps the site first to discover pages, then scrapes each one into nested directories under `.firecrawl/`. All scrape options work with download. Always pass `-y` to skip the confirmation prompt.
-
-## When to use
-
-- You want to save an entire site (or section) to local files
-- You need offline access to documentation or content
-- Bulk content extraction with organized file structure
-
-## Quick start
-
-```bash
-# Interactive wizard (picks format, screenshots, paths for you)
-firecrawl download https://docs.example.com
-
-# With screenshots
-firecrawl download https://docs.example.com --screenshot --limit 20 -y
-
-# Multiple formats (each saved as its own file per page)
-firecrawl download https://docs.example.com --format markdown,links --screenshot --limit 20 -y
-# Creates per page: index.md + links.txt + screenshot.png
-
-# Filter to specific sections
-firecrawl download https://docs.example.com --include-paths "/features,/sdks"
-
-# Skip translations
-firecrawl download https://docs.example.com --exclude-paths "/zh,/ja,/fr,/es,/pt-BR"
-
-# Full combo
-firecrawl download https://docs.example.com \
-  --include-paths "/features,/sdks" \
-  --exclude-paths "/zh,/ja" \
-  --only-main-content \
-  --screenshot \
-  -y
-```
-
-## Download options
-
-| Option                    | Description                                              |
-| ------------------------- | -------------------------------------------------------- |
-| `--limit <n>`             | Max pages to download                                    |
-| `--search <query>`        | Filter URLs by search query                              |
-| `--include-paths <paths>` | Only download matching paths                             |
-| `--exclude-paths <paths>` | Skip matching paths                                      |
-| `--allow-subdomains`      | Include subdomain pages                                  |
-| `-y`                      | Skip confirmation prompt (always use in automated flows) |
-
-## Scrape options (all work with download)
-
-`-f <formats>`, `-H`, `-S`, `--screenshot`, `--full-page-screenshot`, `--only-main-content`, `--include-tags`, `--exclude-tags`, `--wait-for`, `--max-age`, `--country`, `--languages`
-
-## See also
-
-- [firecrawl-map](../firecrawl-map/SKILL.md) — just discover URLs without downloading
-- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — scrape individual pages
-- [firecrawl-crawl](../firecrawl-crawl/SKILL.md) — bulk extract as JSON (not local files)
diff --git a/skills/firecrawl-instruct/SKILL.md b/skills/firecrawl-instruct/SKILL.md
deleted file mode 100644
index c5fcb6f..0000000
--- a/skills/firecrawl-instruct/SKILL.md
+++ /dev/null
@@ -1,83 +0,0 @@
----
-name: firecrawl-instruct
-description: |
-  Control and interact with a live browser session on any scraped page — click buttons, fill forms, navigate flows, and extract data using natural language prompts or code. Replaces the old firecrawl-browser command. Use when the user needs to interact with a webpage beyond simple scraping: logging into a site, submitting forms, clicking through pagination, handling infinite scroll, navigating multi-step checkout or wizard flows, or when a regular scrape failed because content is behind JavaScript interaction. Also useful for authenticated scraping via profiles. Triggers on "browser", "instruct", "click", "fill out the form", "log in to", "sign in", "submit", "paginated", "next page", "infinite scroll", "interact with the page", "navigate to", "open a session", or "scrape failed".
-allowed-tools:
-  - Bash(firecrawl *)
-  - Bash(npx firecrawl *)
----
-
-# firecrawl instruct
-
-Interact with scraped pages in a live browser session. Scrape a page first, then use natural language prompts or code to click, fill forms, navigate, and extract data.
-
-## When to use
-
-- Content requires interaction: clicks, form fills, pagination, login
-- `scrape` failed because content is behind JavaScript interaction
-- You need to navigate a multi-step flow
-- Last resort in the [workflow escalation pattern](firecrawl-cli): search → scrape → map → crawl → **instruct**
-- **Never use instruct for web searches** — use `search` instead
-
-## Quick start
-
-```bash
-# 1. Scrape a page (scrape ID is saved automatically)
-firecrawl scrape "<url>"
-
-# 2. Interact with the page using natural language
-firecrawl interact --prompt "Click the login button"
-firecrawl interact --prompt "Fill in the email field with test@example.com"
-firecrawl interact --prompt "Extract the pricing table"
-
-# 3. Or use code for precise control
-firecrawl interact --code "agent-browser click @e5" --language bash
-firecrawl interact --code "agent-browser snapshot -i" --language bash
-
-# 4. Stop the session when done
-firecrawl interact stop
-```
-
-## Options
-
-| Option                | Description                                       |
-| --------------------- | ------------------------------------------------- |
-| `--prompt <prompt>`   | Natural language instruction (use this OR --code) |
-| `--code <code>`       | Code to execute in the browser session            |
-| `--language <lang>`   | Language for code: bash, python, node             |
-| `--timeout <seconds>` | Execution timeout (default: 30, max: 300)         |
-| `--scrape-id <id>`    | Target a specific scrape (default: last scrape)   |
-| `-o, --output <path>` | Output file path                                  |
-
-## Profiles
-
-Use `--profile` on the scrape to persist browser state (cookies, localStorage) across scrapes:
-
-```bash
-# Session 1: Login and save state
-firecrawl scrape "https://app.example.com/login" --profile my-app
-firecrawl interact --prompt "Fill in email with user@example.com and click login"
-
-# Session 2: Come back authenticated
-firecrawl scrape "https://app.example.com/dashboard" --profile my-app
-firecrawl interact --prompt "Extract the dashboard data"
-```
-
-Read-only reconnect (no writes to profile state):
-
-```bash
-firecrawl scrape "https://app.example.com" --profile my-app --no-save-changes
-```
-
-## Tips
-
-- Always scrape first — `interact` requires a scrape ID from a previous `firecrawl scrape` call
-- The scrape ID is saved automatically, so you don't need `--scrape-id` for subsequent interact calls
-- Use `firecrawl interact stop` to free resources when done
-- For parallel work, scrape multiple pages and interact with each using `--scrape-id`
-
-## See also
-
-- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — try scrape first, escalate to instruct only when needed
-- [firecrawl-search](../firecrawl-search/SKILL.md) — for web searches (never use instruct for searching)
-- [firecrawl-agent](../firecrawl-agent/SKILL.md) — AI-powered extraction (less manual control)
diff --git a/skills/firecrawl-map/SKILL.md b/skills/firecrawl-map/SKILL.md
deleted file mode 100644
index 77eaacf..0000000
--- a/skills/firecrawl-map/SKILL.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-name: firecrawl-map
-description: |
-  Discover and list all URLs on a website, with optional search filtering. Use this skill when the user wants to find a specific page on a large site, list all URLs, see the site structure, find where something is on a domain, or says "map the site", "find the URL for", "what pages are on", or "list all pages". Essential when the user knows which site but not which exact page.
-allowed-tools:
-  - Bash(firecrawl *)
-  - Bash(npx firecrawl *)
----
-
-# firecrawl map
-
-Discover URLs on a site. Use `--search` to find a specific page within a large site.
-
-## When to use
-
-- You need to find a specific subpage on a large site
-- You want a list of all URLs on a site before scraping or crawling
-- Step 3 in the [workflow escalation pattern](firecrawl-cli): search → scrape → **map** → crawl → interact
-
-## Quick start
-
-```bash
-# Find a specific page on a large site
-firecrawl map "<url>" --search "authentication" -o .firecrawl/filtered.txt
-
-# Get all URLs
-firecrawl map "<url>" --limit 500 --json -o .firecrawl/urls.json
-```
-
-## Options
-
-| Option                 | Description                  |
-| ---------------------- | ---------------------------- |
-| `--limit <n>`          | Max number of URLs to return |
-| `--search <query>`     | Filter URLs by search query  |
-| `--sitemap <mode>`     | Sitemap handling strategy    |
-| `--include-subdomains` | Include subdomain URLs       |
-| `--json`               | Output as JSON               |
-| `-o, --output <path>`  | Output file path             |
-
-## Tips
-
-- **Map + scrape is a common pattern**: use `map --search` to find the right URL, then `scrape` it.
-- Example: `map https://docs.example.com --search "auth"` → found `/docs/api/authentication` → `scrape` that URL.
-
-## See also
-
-- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — scrape the URLs you discover
-- [firecrawl-crawl](../firecrawl-crawl/SKILL.md) — bulk extract instead of map + scrape
-- [firecrawl-download](../firecrawl-download/SKILL.md) — download entire site (uses map internally)
diff --git a/skills/firecrawl-scrape/SKILL.md b/skills/firecrawl-scrape/SKILL.md
deleted file mode 100644
index bf5008d..0000000
--- a/skills/firecrawl-scrape/SKILL.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-name: firecrawl-scrape
-description: |
-  Extract clean markdown from any URL, including JavaScript-rendered SPAs. Use this skill whenever the user provides a URL and wants its content, says "scrape", "grab", "fetch", "pull", "get the page", "extract from this URL", or "read this webpage". Handles JS-rendered pages, multiple concurrent URLs, and returns LLM-optimized markdown. Use this instead of WebFetch for any webpage content extraction.
-allowed-tools:
-  - Bash(firecrawl *)
-  - Bash(npx firecrawl *)
----
-
-# firecrawl scrape
-
-Scrape one or more URLs. Returns clean, LLM-optimized markdown. Multiple URLs are scraped concurrently.
-
-## When to use
-
-- You have a specific URL and want its content
-- The page is static or JS-rendered (SPA)
-- Step 2 in the [workflow escalation pattern](firecrawl-cli): search → **scrape** → map → crawl → interact
-
-## Quick start
-
-```bash
-# Basic markdown extraction
-firecrawl scrape "<url>" -o .firecrawl/page.md
-
-# Main content only, no nav/footer
-firecrawl scrape "<url>" --only-main-content -o .firecrawl/page.md
-
-# Wait for JS to render, then scrape
-firecrawl scrape "<url>" --wait-for 3000 -o .firecrawl/page.md
-
-# Multiple URLs (each saved to .firecrawl/)
-firecrawl scrape https://example.com https://example.com/blog https://example.com/docs
-
-# Get markdown and links together
-firecrawl scrape "<url>" --format markdown,links -o .firecrawl/page.json
-
-# Ask a question about the page
-firecrawl scrape "https://example.com/pricing" --query "What is the enterprise plan price?"
-```
-
-## Options
-
-| Option                   | Description                                                      |
-| ------------------------ | ---------------------------------------------------------------- |
-| `-f, --format <formats>` | Output formats: markdown, html, rawHtml, links, screenshot, json |
-| `-Q, --query <query>`    | Ask a question about the page content (5 credits)                |
-| `-H`                     | Include HTTP headers in output                                   |
-| `--only-main-content`    | Strip nav, footer, sidebar — main content only                   |
-| `--wait-for <ms>`        | Wait for JS rendering before scraping                            |
-| `--include-tags <tags>`  | Only include these HTML tags                                     |
-| `--exclude-tags <tags>`  | Exclude these HTML tags                                          |
-| `-o, --output <path>`    | Output file path                                                 |
-
-## Tips
-
-- **Prefer plain scrape over `--query`.** Scrape to a file, then use `grep`, `head`, or read the markdown directly — you can search and reason over the full content yourself. Use `--query` only when you want a single targeted answer without saving the page (costs 5 extra credits).
-- **Try scrape before interact.** Scrape handles static pages and JS-rendered SPAs. Only escalate to `interact` when you need interaction (clicks, form fills, pagination).
-- Multiple URLs are scraped concurrently — check `firecrawl --status` for your concurrency limit.
-- Single format outputs raw content. Multiple formats (e.g., `--format markdown,links`) output JSON.
-- Always quote URLs — shell interprets `?` and `&` as special characters.
-- Naming convention: `.firecrawl/{site}-{path}.md`
-
-## See also
-
-- [firecrawl-search](../firecrawl-search/SKILL.md) — find pages when you don't have a URL
-- [firecrawl-instruct](../firecrawl-instruct/SKILL.md) — when scrape can't get the content, use `interact` to click, fill forms, etc.
-- [firecrawl-download](../firecrawl-download/SKILL.md) — bulk download an entire site to local files