MCP server + Claude Code plugin for ComfyUI — execute workflows, generate images, visualize pipelines, manage models, control VRAM, and explore custom nodes, all from your AI coding assistant.
Works on macOS, Linux, and Windows. Auto-detects your ComfyUI installation and port.
31 MCP tools | 10 slash commands | 4 knowledge skills | 3 autonomous agents | 3 hooks
1. Install ComfyUI (if you haven't already): ComfyUI Desktop or from source
2. Add the MCP server to your Claude Code config (`~/.claude/settings.json`):

```json
{
  "mcpServers": {
    "comfyui": {
      "command": "npx",
      "args": ["-y", "comfyui-mcp"],
      "env": {
        "CIVITAI_API_TOKEN": ""
      }
    }
  }
}
```

3. Start using it. With ComfyUI running, ask Claude to generate an image:
> Generate an image of a sunset over mountains
Claude will find (or download) a checkpoint, build a workflow, execute it, and return the image.
Note: This runs as a standalone MCP server — no need to clone this repo. `npx` will download and run it automatically.
This package also ships as a Claude Code plugin, providing slash commands, skills, agents, and hooks on top of the MCP tools.
```bash
claude plugin install comfyui-mcp
```

| Command | Description |
|---|---|
| `/comfy:gen <prompt>` | Generate an image from a text description — auto-selects checkpoint, builds workflow, returns image |
| `/comfy:viz <workflow>` | Visualize a workflow as a Mermaid diagram with nodes grouped by category |
| `/comfy:node-skill <pack>` | Generate a Claude skill for a custom node pack from a Registry ID or GitHub URL |
| `/comfy:debug [prompt_id]` | Diagnose why a workflow failed — reads history, logs, traces root cause, suggests fixes |
| `/comfy:batch <prompt, params>` | Parameter sweep generation across cfg, sampler, steps, seed, etc. |
| `/comfy:convert <file>` | Convert between UI format and API format workflows |
| `/comfy:install <pack>` | Install a custom node pack — git clone, pip install, optional restart |
| `/comfy:gallery [filter]` | Browse generated outputs with metadata — filter by date, count, or filename |
| `/comfy:compare <a vs b>` | Diff two workflows side by side — shows added/removed nodes and changed parameters |
| `/comfy:recipe <name> <prompt>` | Multi-step recipes: portrait, hires-fix, style-transfer, product-shot |
| Skill | Description |
|---|---|
| comfyui-core | Workflow format, node types, data flow patterns, pipeline architecture, MCP tool usage guide |
| prompt-engineering | CLIP weight syntax (word:1.3), BREAK tokens, embeddings, model-specific prompting for SD1.5/SDXL/Flux/SD3 |
| troubleshooting | Common error catalog — OOM, dtype mismatches, missing nodes, NaN tensors, black images, CUDA errors, with VRAM estimates per model |
| model-compatibility | Compatibility matrix — loaders, resolutions, CFG, samplers, ControlNets, LoRAs, and VAEs per model family (SD1.5/SDXL/Turbo/Lightning/Flux/SD3/LTXV) |
| Agent | Model | Description |
|---|---|---|
| comfy-explorer | Sonnet | Researches custom node packs — reads docs, queries /object_info, generates comprehensive skill files |
| comfy-debugger | Sonnet | Autonomously diagnoses workflow failures — gathers logs + history, identifies failing node, checks models + custom nodes, proposes and optionally applies fixes |
| comfy-optimizer | Sonnet | Analyzes workflows for performance — detects redundant nodes, VRAM waste, wrong CFG/steps for model family, precision issues, suggests optimizations |
| Event | Trigger | Action |
|---|---|---|
| PreToolUse | `enqueue_workflow` | VRAM watchdog — checks GPU memory via `/system_stats` and warns if < 1GB free before execution |
| PreToolUse | `stop_comfyui`, `restart_comfyui` | Save warning — prompts user to save unsaved workflow changes before stopping ComfyUI |
| PostToolUse | Any comfyui tool | Job completion notify — checks for completed jobs and injects completion summaries into the conversation |
| Script | Description |
|---|---|
| `monitor-progress.mjs` | Progress monitor — connects to ComfyUI's WebSocket for real-time step progress (e.g., `step 5/14 (36%)`). Run as a background Bash task after enqueuing workflows. Reports completion with output filenames, errors with node details. Replaces polling `get_job_status` in a loop. |
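Under the hood this kind of monitor is small: connect to ComfyUI's `/ws` endpoint and react to `progress` and `executing` messages. A minimal TypeScript sketch (assumes Node 22+ for the global `WebSocket`; the shipped `.mjs` script also reports output filenames and error details):

```typescript
// Render "step 5/14 (36%)"-style progress lines from ComfyUI progress events.
function formatProgress(value: number, max: number): string {
  const pct = Math.round((value / max) * 100);
  return `step ${value}/${max} (${pct}%)`;
}

// Sketch of the monitor loop. ComfyUI pushes JSON text frames such as
// {type:"progress",data:{value,max}} and {type:"executing",data:{node,prompt_id}};
// a null "executing" node signals that the prompt finished.
function monitor(host = "127.0.0.1", port = 8188): void {
  const ws = new WebSocket(`ws://${host}:${port}/ws?clientId=${crypto.randomUUID()}`);
  ws.addEventListener("message", (event) => {
    if (typeof event.data !== "string") return; // skip binary preview frames
    const msg = JSON.parse(event.data);
    if (msg.type === "progress") {
      console.error(formatProgress(msg.data.value, msg.data.max));
    } else if (msg.type === "executing" && msg.data.node === null) {
      console.error(`finished: ${msg.data.prompt_id}`);
      ws.close();
    }
  });
}

// monitor(); // uncomment to attach to a running ComfyUI instance
```

Logging to stderr keeps the output visible in a background Bash task without interfering with anything reading stdout.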
31 tools organized into 13 categories:
| Tool | Description |
|---|---|
| `enqueue_workflow` | Submit a workflow (API format JSON) — returns `prompt_id` immediately, non-blocking |
| `get_job_status` | Check execution status of a job by prompt ID |
| `get_queue` | View the current execution queue (running + pending) |
| `cancel_job` | Interrupt the currently running job |
| `get_system_stats` | Get system info — GPU, VRAM, Python version, OS |
| Tool | Description |
|---|---|
| `visualize_workflow` | Convert a workflow to a Mermaid flowchart with nodes grouped by category |
| `mermaid_to_workflow` | Convert a Mermaid diagram back to executable workflow JSON |
| Tool | Description |
|---|---|
| `create_workflow` | Generate a workflow from templates: txt2img, img2img, upscale, inpaint |
| `modify_workflow` | Apply operations: set_input, add_node, remove_node, connect, insert_between |
| `get_node_info` | Query available node types from ComfyUI's `/object_info` endpoint |
| Tool | Description |
|---|---|
| `validate_workflow` | Dry-run validation — checks missing nodes, broken connections, invalid output indices, missing model files |
| Tool | Description |
|---|---|
| `list_workflows` | List saved workflows from ComfyUI's user library |
| `get_workflow` | Load a specific saved workflow by filename |
| `save_workflow` | Save a workflow to the ComfyUI user library |
| Tool | Description |
|---|---|
| `upload_image` | Copy a local image into ComfyUI's `input/` directory for img2img, inpaint, or ControlNet |
| `workflow_from_image` | Extract embedded workflow metadata from a ComfyUI-generated PNG (reads `prompt` and `workflow` tEXt chunks) |
| `list_output_images` | Browse recently generated images from the output directory, sorted newest-first |
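The metadata extraction relies on the PNG container format: after the 8-byte signature, a PNG is a sequence of length/type/data/CRC chunks, and ComfyUI writes its JSON into `tEXt` chunks keyed `prompt` and `workflow`. A minimal sketch of the chunk walk (Node `Buffer`, error handling omitted):

```typescript
// Walk PNG chunks and collect tEXt entries (keyword NUL-separated from text).
// ComfyUI-generated PNGs carry "prompt" (API format) and "workflow" (UI format).
function readTextChunks(png: Buffer): Record<string, string> {
  const out: Record<string, string> = {};
  let offset = 8; // skip the 8-byte PNG signature
  while (offset + 8 <= png.length) {
    const length = png.readUInt32BE(offset); // 4-byte data length
    const type = png.toString("latin1", offset + 4, offset + 8); // 4-byte chunk type
    if (type === "tEXt") {
      const data = png.subarray(offset + 8, offset + 8 + length);
      const nul = data.indexOf(0); // keyword and text are NUL-separated
      out[data.toString("latin1", 0, nul)] = data.toString("latin1", nul + 1);
    }
    offset += 12 + length; // length + type + data + 4-byte CRC
  }
  return out;
}
```

`JSON.parse` on the `prompt` value then yields an executable API-format workflow.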
| Tool | Description |
|---|---|
| `search_models` | Search HuggingFace for compatible models (checkpoints, LoRAs, VAEs, etc.) |
| `download_model` | Download a model from a URL to the correct ComfyUI subdirectory |
| `list_local_models` | List installed models by type: checkpoints, loras, vae, upscale_models, controlnet, embeddings, clip, unet |
| Tool | Description |
|---|---|
| `clear_vram` | Free GPU VRAM by unloading cached models — calls ComfyUI's `/free` endpoint, reports before/after stats |
| `get_embeddings` | List installed textual inversion embeddings |
| Tool | Description |
|---|---|
| `search_custom_nodes` | Search the ComfyUI Registry for custom node packs by keyword |
| `get_node_pack_details` | Get full details of a custom node pack (description, author, nodes, install info) |
| `generate_node_skill` | Generate a Claude skill `.md` file from a Registry ID or GitHub URL |
| Tool | Description |
|---|---|
| `get_logs` | Get ComfyUI server logs with optional keyword filter (e.g., error, warning, a node name) |
| `get_history` | Get execution history with full error details, Python tracebacks, timing, and cached node info |
| Tool | Description |
|---|---|
| `stop_comfyui` | Stop the running ComfyUI process (saves PID and launch args for restart) |
| `start_comfyui` | Start ComfyUI using info saved from a previous stop |
| `restart_comfyui` | Stop and restart ComfyUI, preserving all launch arguments |
| Tool | Description |
|---|---|
| `suggest_settings` | Suggest proven sampler/scheduler/steps/CFG settings from local generation history — query by model family, LoRA hash, or text search |
| `generation_stats` | Show local generation tracking statistics — total runs, unique combos, breakdown by model family |
Every `enqueue_workflow` call automatically logs settings to a local SQLite database (`generations.db`). Repeated settings combos get a `reuse_count` bump instead of duplicate rows, creating a natural popularity signal. Models and LoRAs are identified by content hash (AutoV2 / SHA256), not filenames — so renamed files still group together.
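Conceptually the dedup reduces every run to a canonical key and upserts on it; the field set below is illustrative, not the real schema:

```typescript
import { createHash } from "node:crypto";

// Illustrative settings shape; the actual tracker records more fields.
interface GenSettings {
  modelHash: string; // AutoV2/SHA256 content hash, not the filename
  sampler: string;
  scheduler: string;
  steps: number;
  cfg: number;
}

// Identical combos hash to the same key, so a repeat run can bump reuse_count
// (an SQL upsert on this key) instead of inserting a duplicate row.
function settingsKey(s: GenSettings): string {
  const canonical = JSON.stringify([s.modelHash, s.sampler, s.scheduler, s.steps, s.cfg]);
  return createHash("sha256").update(canonical).digest("hex");
}
```

Hashing the model file rather than its name is what lets a renamed checkpoint keep contributing to the same key.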
```bash
# View local stats from the CLI
npm run generations:stats
```

Community-maintained preset library (`model-settings.json`) with research-backed sampler, scheduler, steps, and CFG values for 10+ model families. User overrides live in `model-settings.user.jsonc` (auto-created from template on install, gitignored).
> /comfy:gen a cyberpunk city at night with neon lights
Claude will:
- Check installed checkpoints (download one if needed)
- Build a txt2img workflow with your prompt
- Execute it on ComfyUI
- Return the generated image
> /comfy:viz ~/workflows/my-workflow.json
Produces a Mermaid diagram with nodes grouped by category:
```mermaid
flowchart LR
  subgraph Loaders
    1["CheckpointLoaderSimple"]
  end
  subgraph Conditioning
    2(["Positive Prompt"])
    3(["Negative Prompt"])
  end
  subgraph Sampling
    5{{"KSampler<br/>steps:20 cfg:8"}}
  end
  1 -->|MODEL| 5
  2 -->|CONDITIONING| 5
  3 -->|CONDITIONING| 5
```
> /comfy:debug
Automatically reads the last execution history and logs, identifies the failing node, checks for missing models or node packs, and suggests a fix.
> /comfy:debug abc123-def456
Diagnose a specific execution by prompt ID.
> /comfy:batch a cat in a field, cfg:5-10:2, sampler:euler,dpmpp_2m
Generates a grid of images across all parameter combinations and presents a summary table with results.
Supported sweep parameters: cfg, steps, sampler, scheduler, seed, denoise, width, height.
> /comfy:recipe hires-fix a dramatic fantasy landscape with castles
Runs a two-pass pipeline: txt2img at 512x768, then img2img upscale to 1024x1536 with detail enhancement.
Available recipes:
| Recipe | Description |
|---|---|
| `portrait` | Generate at 1024x1024, then 2x upscale to 2048x2048 |
| `hires-fix` | Low-res generation → img2img upscale with denoise 0.4-0.5 |
| `style-transfer` | Apply a style prompt to an existing image via img2img |
| `product-shot` | Product image with clean white background |
> /comfy:convert ~/workflows/my-ui-workflow.json
Converts between ComfyUI's UI format (`nodes` + `links` arrays) and API format (node IDs mapped to `{class_type, inputs}`).
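For orientation, the API format keys each node by ID and expresses connections as `[source_node_id, output_index]` pairs. A hypothetical two-node fragment (the checkpoint filename is illustrative):

```typescript
// Node "3" (a KSampler) takes its "model" input from output 0 of node "4".
// Real txt2img graphs also wire positive/negative conditioning and a latent.
const apiFragment = {
  "4": {
    class_type: "CheckpointLoaderSimple",
    inputs: { ckpt_name: "sd_xl_base_1.0.safetensors" }, // illustrative filename
  },
  "3": {
    class_type: "KSampler",
    inputs: { model: ["4", 0], seed: 42, steps: 20, cfg: 8, sampler_name: "euler" },
  },
};
```

The UI format additionally stores canvas positions and widget state, which is why the two formats are not interchangeable without conversion.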
> /comfy:install comfyui-impact-pack
Searches the registry, shows details, clones the repo to custom_nodes/, installs dependencies, and offers to restart ComfyUI.
> /comfy:gallery last 5
> /comfy:gallery today
Lists recent outputs with embedded metadata — shows checkpoint, prompt, seed, steps, CFG, sampler for each image.
> /comfy:compare workflow-a.json vs workflow-b.json
Shows added/removed nodes, changed parameters (old → new values), and optional Mermaid diagrams for visual comparison.
> Validate this workflow before I run it
Checks for missing node types, broken connections, invalid output indices, and missing model files — without executing.
> What checkpoints do I have installed?
> Search HuggingFace for SDXL turbo models
> Download this model to my checkpoints folder
> Free my VRAM
> What embeddings do I have?
> Extract the workflow from this image: ~/outputs/ComfyUI_00042_.png
Reads the PNG metadata chunks to recover the exact workflow and prompt used to generate the image.
> /comfy:node-skill comfyui-impact-pack
Generates a comprehensive skill file documenting every node, its inputs/outputs, and usage patterns.
> Restart ComfyUI
> Stop ComfyUI
> Start ComfyUI back up
Set your API key to use Comfy Cloud instead of local ComfyUI. No local installation needed — workflows run on cloud GPUs.
```bash
claude mcp add comfyui-cloud \
  --env COMFYUI_API_KEY=your-key-here \
  -- npx -y comfyui-cloud-mcp
```

Or in your Claude Code config (`~/.claude/settings.json`):
```json
{
  "mcpServers": {
    "comfyui-cloud": {
      "command": "npx",
      "args": ["-y", "comfyui-cloud-mcp"],
      "env": {
        "COMFYUI_API_KEY": "your-key-here"
      }
    }
  }
}
```

When `COMFYUI_API_KEY` is set, the MCP server automatically switches to cloud mode:
- All workflow submissions go to `https://cloud.comfy.org` (or a custom `COMFYUI_CLOUD_URL`)
- Local port detection is skipped — no local ComfyUI instance required
- Job status is polled via the cloud API instead of WebSocket
- Authentication is handled via the `X-API-Key` header on all requests
| Category | Status | Notes |
|---|---|---|
| Workflow execution | Supported | enqueue_workflow, get_job_status, get_history |
| System stats | Supported | Returns cloud placeholder data |
| Queue management | Partial | get_queue returns empty (cloud manages its own queue); cancel_job requires prompt_id |
| Samplers/schedulers | Supported | Returns common defaults |
| Workflow visualization | Supported | Pure logic, no server dependency |
| Workflow composition | Supported | Template-based, no server dependency |
| Registry search | Supported | Uses external registry API |
These tools require a local ComfyUI installation and will return clear error messages in cloud mode:
- Filesystem tools: `list_workflows`, `get_workflow`, `save_workflow`, `upload_image`, `list_output_images`, `workflow_from_image`
- Local model management: `list_local_models`, `download_model`, `get_checkpoints`, `get_loras`, `get_vaes`
- Memory management: `clear_vram`, `get_embeddings`
- Process control: `stop_comfyui`, `start_comfyui`, `restart_comfyui`
- Diagnostics: `get_logs` (server logs are not exposed by the cloud API)
- Node info: `get_node_info` (depends on installed nodes on the cloud worker)
The server auto-detects your ComfyUI installation and port. Override with environment variables if needed:
| Variable | Default | Description |
|---|---|---|
| `COMFYUI_HOST` | `127.0.0.1` | ComfyUI server address |
| `COMFYUI_PORT` | (auto-detect) | ComfyUI server port (tries 8188, then 8000) |
| `COMFYUI_PATH` | (auto-detect) | Path to ComfyUI data directory |
| `COMFYUI_API_KEY` | | Comfy Cloud API key (enables cloud mode when set) |
| `COMFYUI_CLOUD_URL` | `https://cloud.comfy.org` | Comfy Cloud API endpoint (override for self-hosted) |
| `CIVITAI_API_TOKEN` | | CivitAI API token for model downloads |
| `HUGGINGFACE_TOKEN` | | HuggingFace token for higher API rate limits |
| `GITHUB_TOKEN` | | GitHub token for skill generation (avoids rate limits) |
| `LOG_LEVEL` | `info` | Logging verbosity: debug, info, warn, error |
Port: Probes 8188 (CLI default) then 8000 (Desktop app default) via `/system_stats`.

Path: Checks common locations in order:

- `~/Documents/ComfyUI` (macOS/Windows Desktop app data directory)
- `~/Library/Application Support/ComfyUI` (macOS)
- `~/AppData/Local/Programs/ComfyUI/resources/ComfyUI` (Windows Desktop app install)
- `~/AppData/Local/ComfyUI` (Windows)
- `~/ComfyUI`, `~/code/ComfyUI`, `~/projects/ComfyUI`, `~/src/ComfyUI`
- `/opt/ComfyUI`, `~/.local/share/ComfyUI` (Linux)
- Scans `~/Documents` and `~/My Documents` for any directory containing "ComfyUI"

Set `COMFYUI_PATH` to skip detection and use an explicit path.
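The port probe is just an HTTP GET against each candidate's `/system_stats`, roughly (a sketch, not the server's exact code):

```typescript
// Try candidate ports in order and return the first one whose /system_stats
// endpoint responds; null means no local ComfyUI was found.
async function detectPort(
  host = "127.0.0.1",
  candidates = [8188, 8000] // CLI default first, then Desktop app default
): Promise<number | null> {
  for (const port of candidates) {
    try {
      const res = await fetch(`http://${host}:${port}/system_stats`, {
        signal: AbortSignal.timeout(1000), // don't hang on filtered ports
      });
      if (res.ok) return port;
    } catch {
      // connection refused or timeout: try the next candidate
    }
  }
  return null;
}
```

Setting `COMFYUI_PORT` simply replaces the candidate list with a single port.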
The server communicates with ComfyUI through its REST API and WebSocket interface:
- WebSocket — enqueue workflows, receive real-time progress updates (step-by-step via background monitor script), get execution results
- REST API — system stats, node definitions (`/object_info`), logs, history, queue management, workflow library, VRAM control (`/free`), embeddings
- File system — read/write models directory, detect installation paths, upload images, extract PNG metadata, browse outputs
- External APIs — HuggingFace (model search), ComfyUI Registry (custom node discovery), GitHub (skill generation), CivitAI (model downloads)
All communication with the MCP client (Claude Code) happens over stdio using the Model Context Protocol. Logs go to stderr to avoid polluting the protocol stream.
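The reason logs must avoid stdout: the MCP client parses stdout as protocol frames, so a single stray `console.log` can corrupt the session. A sketch of a stdio-safe logger with a `LOG_LEVEL`-style threshold (hypothetical shape, not the project's exact `logger.ts`):

```typescript
type Level = "debug" | "info" | "warn" | "error";
const LEVELS: Level[] = ["debug", "info", "warn", "error"];

// Write to stderr only; stdout is reserved for MCP protocol frames.
// Returns the emitted line (or null if filtered) to make the filter testable.
function log(level: Level, msg: string, threshold: Level = "info"): string | null {
  if (LEVELS.indexOf(level) < LEVELS.indexOf(threshold)) return null; // below verbosity
  const line = `[${new Date().toISOString()}] ${level}: ${msg}`;
  process.stderr.write(line + "\n");
  return line;
}
```

Wiring `threshold` to the `LOG_LEVEL` environment variable gives the verbosity control described in the configuration table.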
```bash
git clone https://github.com/artokun/comfyui-mcp.git
cd comfyui-mcp
npm install
```

| Script | Description |
|---|---|
| `npm run dev` | Run from source with tsx (hot reload) |
| `npm run build` | Compile TypeScript to `dist/` |
| `npm start` | Run compiled output |
| `npm test` | Run unit tests (vitest) |
| `npm run test:integration` | Run integration tests (requires running ComfyUI) |
| `npm run lint` | Type-check without emitting |
| `npm run generations:stats` | Show local generation tracking statistics |
Point Claude Code at your local build instead of the npm package:
```json
{
  "mcpServers": {
    "comfyui": {
      "command": "node",
      "args": ["/path/to/comfyui-mcp/dist/index.js"],
      "env": {}
    }
  }
}
```

Or test the plugin directly:

```bash
claude --plugin-dir ./plugin
```

```
model-settings.json                # Community-maintained model presets (shipped)
model-settings.user.jsonc.example  # User override template (copied on install)
scripts/
  postinstall.mjs                  # Auto-creates user config from template
  generation-stats.mjs             # CLI: npm run generations:stats
src/
  index.ts                         # MCP server entry point (stdio transport)
  config.ts                        # Auto-detection & environment config
  comfyui/
    client.ts                      # ComfyUI WebSocket/HTTP client wrapper
    types.ts                       # TypeScript interfaces
  services/
    workflow-executor.ts           # Execute workflows, handle images & errors
    workflow-composer.ts           # Templates (txt2img, img2img, upscale, inpaint)
    workflow-validator.ts          # Dry-run validation (missing nodes, models, connections)
    image-management.ts            # Upload images, extract PNG metadata, list outputs
    mermaid-converter.ts           # Workflow → Mermaid diagram
    mermaid-parser.ts              # Mermaid diagram → workflow
    model-resolver.ts              # HuggingFace search, local models, downloads
    generation-tracker.ts          # SQLite generation log, settings dedup, stats
    file-hasher.ts                 # SHA256 hashing of .safetensors with cache
    civitai-lookup.ts              # CivitAI API lookup by content hash
    workflow-settings-extractor.ts # Extract settings from workflow JSON
    process-control.ts             # Stop, start, restart ComfyUI process
    registry-client.ts             # ComfyUI Registry API
    skill-generator.ts             # Generate node pack skill docs
  tools/                           # MCP tool registration (one file per group)
    workflow-execute.ts            # enqueue_workflow, get_system_stats
    workflow-visualize.ts          # visualize_workflow, mermaid_to_workflow
    workflow-compose.ts            # create_workflow, modify_workflow, get_node_info
    workflow-validate.ts           # validate_workflow
    workflow-library.ts            # list_workflows, get_workflow, save_workflow
    image-management.ts            # upload_image, workflow_from_image, list_output_images
    model-management.ts            # search_models, download_model, list_local_models
    memory-management.ts           # clear_vram, get_embeddings
    registry-search.ts             # search_custom_nodes, get_node_pack_details
    skill-generator.ts             # generate_node_skill
    generation-tracker.ts          # suggest_settings, generation_stats
    diagnostics.ts                 # get_logs, get_history
    process-control.ts             # stop_comfyui, start_comfyui, restart_comfyui
    index.ts                       # Registers all tool groups
  utils/
    errors.ts                      # Custom error hierarchy with MCP integration
    logger.ts                      # stderr-only logging (safe for stdio transport)
    image.ts                       # Base64 encoding utilities
plugin/
  .claude-plugin/                  # Plugin manifest
  .mcp.json                        # MCP server config for plugin
  commands/                        # Slash commands
    gen.md                         # /comfy:gen — image generation
    viz.md                         # /comfy:viz — workflow visualization
    node-skill.md                  # /comfy:node-skill — skill generation
    debug.md                       # /comfy:debug — failure diagnosis
    batch.md                       # /comfy:batch — parameter sweeps
    convert.md                     # /comfy:convert — format conversion
    install.md                     # /comfy:install — node pack installation
    gallery.md                     # /comfy:gallery — output browser
    compare.md                     # /comfy:compare — workflow diff
    recipe.md                      # /comfy:recipe — multi-step pipelines
  skills/                          # Knowledge bases
    comfyui-core/                  # Workflow format, node types, pipeline patterns
    prompt-engineering/            # CLIP syntax, model-specific prompting
    troubleshooting/               # Error catalog with patterns and fixes
    model-compatibility/           # Compatibility matrix per model family
  agents/                          # Autonomous agents
    explorer.md                    # Research custom node packs, generate skills
    debugger.md                    # Diagnose workflow failures
    optimizer.md                   # Analyze and optimize workflows
  hooks/                           # Pre/post tool-use hooks
    hooks.json                     # Hook configuration
    vram-check.mjs                 # VRAM watchdog before execution
    save-warning.mjs               # Save prompt before stop/restart
    job-complete-notify.mjs        # Job completion notification via temp files
  scripts/                         # Background scripts
    monitor-progress.mjs           # Real-time WebSocket progress monitor
```
**"ComfyUI not detected on ports 8188, 8000"**
Make sure ComfyUI is running. The Desktop app uses port 8000 by default; the CLI uses 8188. Set `COMFYUI_PORT` if you're using a custom port.

**"COMFYUI_PATH is not configured"**
Auto-detection couldn't find your ComfyUI data directory. Set `COMFYUI_PATH` to the directory containing your `models/` folder (e.g., `~/Documents/ComfyUI`).

**"Multiple ComfyUI installations detected"**
This is informational — the server uses the first one found. Set `COMFYUI_PATH` to pick a specific installation.

**Model downloads fail**
For HuggingFace gated models, set `HUGGINGFACE_TOKEN`. For CivitAI, set `CIVITAI_API_TOKEN`.

**Workflow execution errors**
Use `/comfy:debug` to automatically diagnose failures, or use `get_history` / `get_logs` directly to see detailed error messages including Python tracebacks from ComfyUI.

**Out of memory (OOM)**
Use `clear_vram` to free GPU memory before running large workflows. The VRAM watchdog hook warns automatically when memory is critically low. See the troubleshooting skill for model-specific VRAM estimates.

**Missing custom nodes**
Use `/comfy:install <pack>` to install missing node packs from the registry. The debug command detects and suggests missing packs automatically.
- Fork the repository
- Create a feature branch (`git checkout -b feat/my-feature`)
- Make your changes and ensure `npm run lint` passes
- Submit a pull request
MIT — see LICENSE for details.