GitHub Action for multi-model consensus verification in CI/CD pipelines.
LLM Council provides AI-powered quality gates for your CI/CD pipelines. Instead of relying on a single model's judgment, it:
- Gathers opinions from multiple LLMs (GPT, Claude, Gemini, etc.)
- Anonymizes and cross-evaluates each response
- Synthesizes a consensus verdict with confidence score
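The flow above can be sketched in a few lines of Python. Everything here (the function name, the averaging rule, the stubbed cross-evaluation) is illustrative only and is not llm-council-core's actual API:

```python
import random
import statistics

def council_verdict(prompt, models, threshold=0.8):
    """Illustrative consensus flow: gather, anonymize, cross-evaluate, synthesize.

    `models` maps a model name to a callable returning (answer, score).
    The real library's interface differs; this only mirrors the idea.
    """
    # 1. Gather opinions from each council member.
    opinions = [ask(prompt) for ask in models.values()]

    # 2. Anonymize: shuffle answers so evaluators can't tell who wrote what.
    answers = [answer for answer, _ in opinions]
    random.shuffle(answers)

    # 3. Cross-evaluate: each answer gets scored (stubbed here as the
    #    score that arrived with the answer).
    scores = [score for _, score in opinions]

    # 4. Synthesize: average the scores into one aggregate confidence.
    confidence = statistics.mean(scores)
    verdict = "PASS" if confidence >= threshold else "FAIL"
    return verdict, confidence
```

For example, two stub "models" that both approve with scores 0.9 and 0.7 would yield a PASS at the default 0.8 threshold but a FAIL at 0.9.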
```yaml
name: Quality Gate
on: [pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: amiable-dev/llm-council-action@v1
        with:
          snapshot: ${{ github.sha }}
          confidence-threshold: 0.8
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
```

| Input | Description | Required | Default |
|---|---|---|---|
| `command` | Command: `gate`, `verify`, or `review` | No | `gate` |
| `snapshot` | Git SHA to verify | No | `${{ github.sha }}` |
| `file-paths` | Specific files to verify (space-separated) | No | - |
| `confidence-threshold` | Minimum confidence to pass (0.0-1.0) | No | `0.8` |
| `rubric-focus` | Focus: Security, Performance, Testing, General | No | `General` |
| `version` | llm-council-core version | No | `0.24.5` |
| `python-version` | Python version | No | `3.11` |
| `fail-on-unclear` | Fail when verdict is UNCLEAR | No | `false` |
| Output | Description |
|---|---|
| `verdict` | `PASS`, `FAIL`, or `UNCLEAR` |
| `exit-code` | `0`=PASS, `1`=FAIL, `2`=UNCLEAR |
| `confidence` | Aggregate confidence score (0.0-1.0) |
| `summary` | Human-readable evaluation summary |
| `transcript-path` | Path to verification transcript |
| Code | Verdict | Action |
|---|---|---|
| 0 | PASS | Continue pipeline |
| 1 | FAIL | Block pipeline |
| 2 | UNCLEAR | Require human review (configurable) |
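The exit-code convention can be expressed as a small helper. Note the `fail_on_unclear` escalation shown here (UNCLEAR becoming a hard exit 1) is one plausible reading of the `fail-on-unclear` input, sketched for illustration rather than taken from the action's source:

```python
def gate_exit_code(verdict: str, fail_on_unclear: bool = False) -> int:
    """Map a council verdict to the exit-code convention above.

    0 = PASS, 1 = FAIL, 2 = UNCLEAR. With fail_on_unclear set, an
    UNCLEAR verdict is treated as a hard failure instead of exit 2.
    """
    codes = {"PASS": 0, "FAIL": 1, "UNCLEAR": 2}
    code = codes.get(verdict.upper(), 2)  # unknown verdicts need review too
    if code == 2 and fail_on_unclear:
        return 1
    return code
```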
Security-focused review of specific files:

```yaml
- uses: amiable-dev/llm-council-action@v1
  with:
    command: review
    file-paths: src/auth.py src/api.py
    rubric-focus: Security
    confidence-threshold: 0.85
  env:
    OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
```

Verify a feature alongside its tests:

```yaml
- uses: amiable-dev/llm-council-action@v1
  with:
    command: verify
    file-paths: src/feature.py tests/test_feature.py
    confidence-threshold: 0.75
  env:
    OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
```

Strict gate that also fails on UNCLEAR verdicts:

```yaml
- uses: amiable-dev/llm-council-action@v1
  with:
    fail-on-unclear: true
    confidence-threshold: 0.9
  env:
    OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
```

Use the action's outputs in a later step, for example to comment on the pull request:

```yaml
- uses: amiable-dev/llm-council-action@v1
  id: council
  with:
    snapshot: ${{ github.sha }}
  env:
    OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
- name: Comment on PR
  if: github.event_name == 'pull_request'
  uses: actions/github-script@v7
  with:
    script: |
      const verdict = '${{ steps.council.outputs.verdict }}';
      const confidence = '${{ steps.council.outputs.confidence }}';
      await github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.issue.number,
        body: `## LLM Council: ${verdict}\nConfidence: ${confidence}`
      });
```

The action supports multiple LLM providers via environment variables:
| Variable | Provider |
|---|---|
| `OPENROUTER_API_KEY` | OpenRouter (recommended; access to 100+ models) |
| `OPENAI_API_KEY` | OpenAI directly |
| `ANTHROPIC_API_KEY` | Anthropic directly |

Recommended: Use OpenRouter for access to multiple models with a single API key.
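A wrapper script could pick a provider from whichever key is present. The precedence below (OpenRouter first) is an assumption made for this sketch, not documented behavior of the action:

```python
import os

def resolve_provider(env=os.environ):
    """Pick a provider based on which API key is set in the environment.

    The OpenRouter-first precedence is illustrative only.
    """
    order = [
        ("OPENROUTER_API_KEY", "openrouter"),
        ("OPENAI_API_KEY", "openai"),
        ("ANTHROPIC_API_KEY", "anthropic"),
    ]
    for var, provider in order:
        if env.get(var):
            return provider
    raise RuntimeError("No LLM provider API key found in the environment")
```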
1. Sign up at OpenRouter (recommended) or your preferred provider
2. Create an API key from the dashboard
3. Note the key for the next step
Via GitHub UI:

1. Go to your repository → Settings → Secrets and variables → Actions
2. Click **New repository secret**
3. Name: `OPENROUTER_API_KEY`
4. Value: your API key from Step 1
5. Click **Add secret**

Via GitHub CLI:

```bash
gh secret set OPENROUTER_API_KEY --repo your-org/your-repo
# Paste your API key when prompted
```

Create `.github/workflows/council-gate.yml`:
```yaml
name: Council Quality Gate
on: [pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: amiable-dev/llm-council-action@v1
        with:
          snapshot: ${{ github.sha }}
          confidence-threshold: 0.8
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
```

If you're maintaining a fork or want automatic version sync between llm-council and this action:
1. Create a Fine-Grained PAT (least privilege)
   - Go to https://github.com/settings/personal-access-tokens/new
   - Token name: `llm-council-action-sync`
   - Expiration: 90 days (or your preference)
   - Repository access: "Only select repositories" → select `llm-council-action`
   - Permissions:
     - Contents: Read and write (required for pushing commits and tags)
     - Metadata: Read (auto-selected)
   - Click "Generate token"

   Alternative: Classic PAT with the `public_repo` scope only (not the full `repo` scope)

2. Add the PAT to the llm-council repo

   ```bash
   gh secret set ACTION_REPO_PAT --repo your-org/llm-council
   # Paste the PAT when prompted
   ```

3. How it works
   - When a new version is released to PyPI, `sync-action.yml` triggers
   - It updates the default version in `action.yml`
   - It moves the `v1` tag to the latest commit
Manual sync (if not using `ACTION_REPO_PAT`):

```bash
# In llm-council-action repo
VERSION="0.24.6"  # New version
sed -i "s/default: '[0-9.]*'/default: '$VERSION'/" action.yml
git commit -am "chore: sync with llm-council-core v$VERSION"
git push origin main
git tag -f v1 && git push -f origin v1
```

The action automatically generates a GitHub Step Summary with:
- Verdict and confidence score
- Threshold comparison
- Full evaluation output (collapsible)
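Under the hood, any step can produce such a report by appending markdown to the file GitHub exposes through the `GITHUB_STEP_SUMMARY` environment variable. The sketch below illustrates that mechanism; the report layout and function name are hypothetical, not the action's actual code:

```python
import os

def write_step_summary(verdict, confidence, threshold, details):
    """Append a markdown report to the GitHub Step Summary file.

    Returns the report text, or None when not running inside GitHub
    Actions (i.e. GITHUB_STEP_SUMMARY is unset).
    """
    path = os.environ.get("GITHUB_STEP_SUMMARY")
    if not path:
        return None
    report = (
        f"## LLM Council: {verdict}\n\n"
        f"Confidence: **{confidence:.2f}** (threshold {threshold:.2f})\n\n"
        "<details><summary>Full evaluation output</summary>\n\n"
        f"{details}\n\n</details>\n"
    )
    # Append rather than overwrite: other steps may have written already.
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(report)
    return report
```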
- First run: ~15-20s (pip install + cache warm-up)
- Cached runs: ~3-5s (cache hit)
- Evaluation time: Depends on council size (typically 30-60s)
- llm-council-core - Core library
- Documentation - Full documentation
- ADR-034 - Agent Skills specification
MIT - See LICENSE