A local-first platform for creating, validating, and curating AI agent skills as standards-compliant artifacts.
Skill Fleet transforms natural language descriptions into production-ready agent skills using an intelligent three-phase workflow: Understanding → Generation → Validation. Built on DSPy for reliable, optimizable LLM programs.
```bash
# Create a skill in minutes
uv run skill-fleet chat "Create a React hooks mastery skill for intermediate developers"
```

Traditional prompt engineering creates fragile, unversioned prompts that break when models change. Skill Fleet creates structured, validated, reusable artifacts that agentskills.io-compliant agents can consume.
| Feature | Traditional Prompts | Skill Fleet |
|---|---|---|
| Creation | Manual, trial-and-error | AI-assisted, structured workflow |
| Validation | Ad-hoc testing | Multi-phase validation with quality gates |
| Format | Plain text | agentskills.io compliant SKILL.md |
| Dependencies | Implicit | Explicitly declared and validated |
| Versioning | None | Git-tracked with promotion workflow |
| Discovery | None | Hierarchical taxonomy with search |
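For illustration, an agentskills.io-style `SKILL.md` might look like the sketch below. The exact frontmatter fields are defined by the standard; the section headings shown here are assumptions for illustration, not the canonical layout.

```markdown
---
name: react-hooks-mastery
description: Patterns and pitfalls for React hooks, aimed at intermediate developers.
---

# React Hooks Mastery

## When to use this skill
...

## Instructions
...
```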
- Python 3.12+
- uv package manager
- Git
- API Key: Google (Gemini) or LiteLLM proxy
```bash
# Clone and setup
git clone https://github.com/qredence/skill-fleet.git
cd skill-fleet
uv sync --group dev

# Configure environment
cp .env.example .env
# Edit .env and add your GOOGLE_API_KEY or LITELLM credentials
```

```bash
# Start the API server
uv run skill-fleet serve

# In another terminal, create a skill interactively
uv run skill-fleet chat "Create a skill for Python decorators"

# Or create non-interactively with auto-approval
uv run skill-fleet create "Build a React testing skill" --auto-approve

# Validate the generated skill
uv run skill-fleet validate skills/_drafts/<job_id>

# Promote to taxonomy when ready
uv run skill-fleet promote <job_id>
```

```bash
# Terminal 1 (repo root): start FastAPI
uv run skill-fleet serve --auto-accept

# Terminal 2 (repo root): start the frontend
cd src/frontend
bun install
# Optional override (defaults to http://127.0.0.1:8000)
export VITE_API_BASE_URL=http://127.0.0.1:8000
bun run dev
```

The frontend package implements the dark chat-ai frame (4041:3) and now wires the chat UI to:
- `POST /api/v1/chat/stream` (primary SSE transport)
- `POST /api/v1/chat/message` (fallback transport)
- `POST /api/v1/agent/stream` (ReAct SSE transport, feature-flagged)
- `POST /api/v1/agent/message` (ReAct fallback transport)
Notes:
- Runtime is split: Vite frontend + FastAPI backend (no static mount in this pass).
- Session history/listing endpoints remain backend stubs (`/api/v1/chat/session*`).
- Toggle the frontend backend mode with `VITE_CHAT_BACKEND_MODE=chat|agent` (default: `chat`).
- Optional frontend user context for agent calls: `VITE_CHAT_USER_ID` (default: `default`).
- In `agent` mode, ReAct orchestrates the skill lifecycle across existing endpoints:
  - starts `/api/v1/skills` workflows
  - tracks `/api/v1/jobs/{job_id}`
  - handles `/api/v1/hitl/{job_id}/prompt|response`
  - optionally promotes via `/api/v1/drafts/{job_id}/promote`
- The agent stream emits workflow-aware events (`workflow_status`, `hitl_required`, `hitl_submitted`, `workflow_complete`) while preserving `stream`/`prediction` compatibility.
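The workflow-aware event types above can be routed with a small dispatcher on the client. A minimal sketch, assuming illustrative payload fields (`type`, `phase`, `job_id`) rather than the actual event schema:

```python
# Minimal dispatcher for the agent stream's workflow-aware event types.
# The payload fields used here (type, phase, job_id) are illustrative assumptions.
from typing import Any, Callable

Handler = Callable[[dict[str, Any]], str]

def handle_event(event: dict[str, Any]) -> str:
    """Route one agent-stream event to a short summary string by its type."""
    handlers: dict[str, Handler] = {
        "workflow_status": lambda e: f"status: {e.get('phase', 'unknown')}",
        "hitl_required": lambda e: f"awaiting human input for job {e.get('job_id')}",
        "hitl_submitted": lambda e: "human response recorded",
        "workflow_complete": lambda e: f"done: {e.get('job_id')}",
    }
    handler = handlers.get(event.get("type", ""))
    return handler(event) if handler else f"unhandled event: {event.get('type')}"
```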
Skill Fleet uses a three-phase workflow powered by DSPy:
```text
User Request
    ↓
┌─────────────────────────────────────────────────────────┐
│ Phase 1: Understanding                                  │
│ - Extract requirements                                  │
│ - Analyze intent                                        │
│ - Build execution plan                                  │
└─────────────────────────────────────────────────────────┘
    ↓
┌─────────────────────────────────────────────────────────┐
│ Phase 2: Generation                                     │
│ - Create SKILL.md with YAML frontmatter                 │
│ - Generate code examples                                │
│ - Apply category-specific templates                     │
└─────────────────────────────────────────────────────────┘
    ↓
┌─────────────────────────────────────────────────────────┐
│ Phase 3: Validation                                     │
│ - Structure validation                                  │
│ - Compliance checking                                   │
│ - Quality assessment (Best-of-N)                        │
└─────────────────────────────────────────────────────────┘
    ↓
Draft Ready for Review → Promote to Taxonomy
```
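The control flow above can be sketched as plain functions, without the DSPy wiring. This is a toy skeleton: the phase internals, the quality-scoring rule, and the candidate count are invented for illustration; only the three-phase shape and Best-of-N selection mirror the diagram.

```python
# Skeleton of the three-phase flow: understand -> generate (Best-of-N) -> validate.
# Phase internals are placeholders; only the control flow mirrors the diagram.
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    score: float  # quality score assigned in the validation phase

def understand(request: str) -> dict:
    """Phase 1: reduce the request to a simple plan (placeholder)."""
    return {"topic": request, "steps": ["outline", "examples"]}

def generate(plan: dict, n: int = 3) -> list[str]:
    """Phase 2: produce N candidate drafts (placeholder candidates)."""
    return [f"SKILL.md for {plan['topic']} (candidate {i})" for i in range(n)]

def validate(candidate: str) -> Draft:
    """Phase 3: score a candidate; here, a toy length-based rule."""
    return Draft(content=candidate, score=float(len(candidate)))

def run_workflow(request: str) -> Draft:
    plan = understand(request)
    drafts = [validate(c) for c in generate(plan)]
    return max(drafts, key=lambda d: d.score)  # Best-of-N selection
```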
- Draft Phase: Skills are generated into `skills/_drafts/<job_id>/`
- Review Phase: Human-in-the-loop (HITL) for feedback and refinement
- Promotion Phase: Validated skills are moved to stable taxonomy paths
| Command | Description | Example |
|---|---|---|
| `serve` | Start FastAPI server | `uv run skill-fleet serve --reload` |
| `dev` | Start server + TUI | `uv run skill-fleet dev` |
| `chat` | Interactive skill creation | `uv run skill-fleet chat "Create..."` |
| `create` | Non-interactive creation | `uv run skill-fleet create "..." --auto-approve` |
| `validate` | Validate skill directory via API | `uv run skill-fleet validate skills/_drafts/<job_id>/<skill-name>` |
| `promote` | Promote draft to taxonomy | `uv run skill-fleet promote job_123` |
| `generate-xml` | Export skills as XML via API | `uv run skill-fleet generate-xml` |
| `analytics` | Show usage analytics via API | `uv run skill-fleet analytics --user-id all` |
```bash
# Development mode with auto-reload
uv run skill-fleet serve --reload

# Skip database initialization
uv run skill-fleet serve --skip-db-init

# Custom port
uv run skill-fleet serve --port 8080
```

```bash
# Validate with JSON output for scripting
uv run skill-fleet validate ./my-skill --json

# Disable LLM-backed validation (rule-based only)
uv run skill-fleet validate ./my-skill --no-llm

# Override the API server URL
uv run skill-fleet validate ./my-skill --api-url http://localhost:8000
```

Create a `.env` file (copy from `.env.example`):
| Variable | Required | Description |
|---|---|---|
| `GOOGLE_API_KEY` | Yes* | Gemini API key (or use LiteLLM) |
| `LITELLM_API_KEY` | Yes* | LiteLLM proxy API key |
| `LITELLM_BASE_URL` | With LiteLLM | LiteLLM proxy root (e.g. `http://localhost:4000`) |
| `DATABASE_URL` | Production | PostgreSQL connection string |
| `SKILL_FLEET_ENV` | No | `development` (default) or `production` |
| `SKILL_FLEET_CORS_ORIGINS` | Production | Comma-separated allowed origins |
| `SKILL_FLEET_API_URL` | CLI | API base URL for API-first CLI commands |
| `SKILL_FLEET_USER_ID` | CLI | Default user ID for CLI context |
\* Choose either the Google API key **or** LiteLLM credentials.

Note: `LITELLM_BASE_URL` must point to the LiteLLM proxy root (or its optional `/v1` path). Provider endpoints like `.../generateContent` are invalid and will error.
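A minimal `.env` for local development might look like the following sketch (all values are placeholders; pick option A or B, per the either/or rule above):

```dotenv
# Option A: direct Gemini access
GOOGLE_API_KEY=your-gemini-key

# Option B: route through a LiteLLM proxy instead
# LITELLM_API_KEY=your-litellm-key
# LITELLM_BASE_URL=http://localhost:4000

SKILL_FLEET_ENV=development
```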
Development Mode (`SKILL_FLEET_ENV=development`):
- SQLite fallback (no `DATABASE_URL` required)
- CORS allows `*`
- Debug logging
- Auto-reload enabled
Production Mode (`SKILL_FLEET_ENV=production`):
- PostgreSQL required
- Explicit CORS origins required
- Structured logging
- Security headers enabled
When the server is running, access interactive documentation:
- Swagger UI: `http://localhost:8000/docs`
- ReDoc: `http://localhost:8000/redoc`
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/api/v1/skills` | POST | Create skill (returns job ID) |
| `/api/v1/skills/validate` | POST | Validate by taxonomy-relative skill path |
| `/api/v1/skills/{id}` | GET | Get skill by ID |
| `/api/v1/skills/{id}/validate` | POST | Validate existing skill |
| `/api/v1/jobs/{id}` | GET | Get job status and results |
| `/api/v1/taxonomy` | GET | List taxonomy categories |
| `/api/v1/taxonomy/xml` | GET | Export `<available_skills>` XML |
| `/api/v1/analytics` | GET | Usage analytics |
| `/api/v1/analytics/recommendations` | GET | Personalized recommendations |
| `/api/v1/hitl/responses` | POST | Submit HITL response |
For real-time progress updates:
```bash
curl -X POST http://localhost:8000/api/v1/skills/stream \
  -H "Content-Type: application/json" \
  -d '{
    "task_description": "Create a Python asyncio skill",
    "user_id": "developer-1",
    "enable_hitl": true
  }'
```

Returns Server-Sent Events (SSE) with:
- Phase transitions (Understanding → Generation → Validation)
- Real-time reasoning and thoughts
- Progress updates
- HITL suspension points
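SSE responses arrive as `event:`/`data:` lines separated by blank lines. A minimal parser sketch for such a stream, following the generic SSE wire format rather than this API's exact event names:

```python
import json

def parse_sse(raw: str) -> list[dict]:
    """Parse a Server-Sent Events payload into a list of {event, data} dicts."""
    events = []
    for block in raw.strip().split("\n\n"):
        event = {"event": "message", "data": ""}  # "message" is the SSE default
        for line in block.splitlines():
            if line.startswith("event:"):
                event["event"] = line[len("event:"):].strip()
            elif line.startswith("data:"):
                event["data"] += line[len("data:"):].strip()
        # Decode JSON data payloads when possible; keep raw text otherwise.
        try:
            event["data"] = json.loads(event["data"])
        except (json.JSONDecodeError, TypeError):
            pass
        events.append(event)
    return events
```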
```bash
# Install dependencies
uv sync --group dev

# Run linting and formatting
uv run ruff check --fix .
uv run ruff format .

# Run the type checker
uv run ty check

# Run tests
uv run pytest

# Run a specific test
uv run pytest tests/unit/test_async_utils.py -v

# Frontend package checks
cd src/frontend
bun run typecheck
bun run build
bun run test
```

Frontend runtime env:

- `VITE_API_BASE_URL` (default: `http://127.0.0.1:8000`)
```bash
# Install hooks
uv run pre-commit install

# Run manually
uv run pre-commit run --all-files
```

```text
src/skill_fleet/
├── api/            # FastAPI application
│   ├── v1/         # API endpoints (skills, jobs, HITL)
│   └── services/   # Business logic layer
├── cli/            # Typer CLI application
├── core/           # Core business logic
│   ├── modules/    # DSPy modules (generation, validation)
│   ├── signatures/ # DSPy signatures
│   └── workflows/  # Workflow orchestration
├── taxonomy/       # Taxonomy management
└── infrastructure/ # Database, monitoring, tracing
```
| Document | Description |
|---|---|
| `docs/README.md` | Documentation index and navigation |
| `docs/tutorials/getting-started.md` | Step-by-step onboarding |
| `docs/how-to-guides/create-a-skill.md` | End-to-end creation guide |
| `docs/how-to-guides/validate-a-skill.md` | Validation details |
| `docs/reference/api/endpoints.md` | Complete API reference |
| `AGENTS.md` | Development workflow guide |
| `SECURITY.md` | Security policy |
We welcome contributions! Please see docs/explanation/development/contributing.md for guidelines.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Run tests and linting (`uv run pre-commit run --all-files`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Apache License 2.0. See LICENSE for details.
- DSPy - The framework powering our LLM programs
- agentskills.io - The skill standard we implement
- LiteLLM - Proxy for multiple LLM providers
Version: 0.3.6 Status: Alpha Last Updated: 2026-02-03
Built with ❤️ by the Qredence team