🤖 This is an automated review generated by an AI-powered OSS reviewer bot.
If you'd like to opt out of future reviews, add the label no-bot-review to this repo.
If anything is inaccurate or unhelpful, feel free to close this issue or leave a comment.
Hey @kasperjunge! 👋 This is a really cool project — a package manager for AI agent skills is a genuinely useful idea, and the execution looks solid. Here's some friendly feedback to help push it even further!
🌟 Strengths
1. Excellent developer experience out of the box. The README is clear, well-structured, and leads with compelling examples. The command table (agr add, agr sync, agr list, etc.) makes discoverability effortless, and the npm-style mental model (agr sync = npm install) will resonate immediately with developers.
2. The CI/CD pipeline in publish.yml is genuinely well-designed. The gate of running ruff check, ruff format --check, and pytest before building and publishing is exactly the right pattern. Using PyPI trusted publishing (OIDC via id-token: write) instead of a stored secret is a security best practice that many projects skip — nice work here.
3. Clean SDK surface. Exporting Skill, SkillInfo, cache, list_skills, and skill_info from agr/__init__.py and using a proper __all__ shows thoughtfulness about the public API, making it easy for others to build on top of agr programmatically.
💡 Suggestions
1. Add a dedicated CI workflow that runs on every PR, not just on tag pushes.
Currently, quality checks only run in publish.yml, which is tag-triggered. A PR opened against main won't get automatic test/lint feedback. Adding a ci.yml that runs on pull_request and push to main (with uv run pytest -m "not e2e and not network and not slow") would catch regressions much earlier.
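A minimal `ci.yml` along those lines might look like the following sketch (job and step names are illustrative; mirror the actual setup steps from your `publish.yml` quality job):

```yaml
name: CI

on:
  pull_request:
  push:
    branches: [main]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      - run: uv sync --all-groups
      - run: uv run ruff check
      - run: uv run ruff format --check
      - run: uv run pytest -m "not e2e and not network and not slow"
```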
2. Expand test coverage to include integration-style tests for the CommandResult flow.
With 54 test files, that's a good foundation, but the CommandResult dataclass in agr/commands/__init__.py suggests commands return structured results — it'd be great to have tests that assert on those success and message fields for common flows (e.g., "skill already installed", "GitHub repo not found"). This makes refactoring much safer.
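To illustrate the test pattern, here's a self-contained sketch. The `CommandResult` stand-in mirrors the fields mentioned above (the real class lives in `agr/commands/__init__.py` and may differ), and `add_skill` is a purely hypothetical command flow:

```python
from dataclasses import dataclass


# Stand-in mirroring the CommandResult described above; the real class
# in agr/commands/__init__.py may carry additional fields.
@dataclass
class CommandResult:
    success: bool
    message: str


def add_skill(name: str, installed: set[str]) -> CommandResult:
    # Hypothetical command flow, used only to show the assertion pattern.
    if name in installed:
        return CommandResult(success=False, message=f"skill '{name}' already installed")
    installed.add(name)
    return CommandResult(success=True, message=f"installed '{name}'")


def test_add_skill_reports_already_installed():
    result = add_skill("hello-world", {"hello-world"})
    assert result.success is False
    assert "already installed" in result.message


test_add_skill_reports_already_installed()
```

Asserting on both `success` and a substring of `message` keeps the tests resilient to minor wording changes while still catching behavioral regressions.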
3. Consider adding a CONTRIBUTING.md at the repo root (not just inside agent_docs/). GitHub surfaces root-level CONTRIBUTING.md automatically in the "new issue" and "new PR" flows. Moving or symlinking agent_docs/CONTRIBUTING.md to the root would lower the barrier for first-time contributors significantly.
⚡ Quick Wins
1. Add a test coverage badge to the README. You already have pytest-cov in [dependency-groups] — just wire it up: uv run pytest --cov=agr --cov-report=xml and connect it to Codecov or Coveralls. It's a one-badge addition to the README header, alongside the existing PyPI and License badges.
2. Pin action versions to full SHAs in your workflows. You're currently using actions/checkout@v4, actions/setup-python@v5, etc. (floating tags). Pinning to commit SHAs (e.g., actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683) protects against supply-chain attacks if a tag is force-pushed. Tools like Dependabot for Actions or pin-github-actions can automate this.
🔒 QA & Security
Testing: pytest is configured with good marker separation (e2e, network, slow) — that's a mature pattern. respx for mocking HTTP calls is a great choice given the httpx dependency. Consider adding a pytest.ini_options addopts entry like addopts = "-m 'not e2e and not network and not slow'" so developers running plain pytest locally don't accidentally hit the network.
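In `pyproject.toml` that could look like the following (marker names taken from the existing configuration; descriptions are illustrative):

```toml
[tool.pytest.ini_options]
addopts = "-m 'not e2e and not network and not slow'"
markers = [
    "e2e: end-to-end tests",
    "network: tests that make real network calls",
    "slow: long-running tests",
]
```

CI jobs that do need the excluded markers can still opt in explicitly, e.g. `uv run pytest -m e2e`, since a `-m` flag on the command line overrides the one in `addopts`.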
CI/CD: The publish.yml quality gate is solid. The gap is the missing PR-time workflow (see Suggestion #1). The docs.yml is clean and appropriately path-filtered.
Code Quality: ruff for both linting and formatting is a great modern choice. ty (type checker) is in dev deps — consider adding uv run ty check agr agrx as a step in the quality job in publish.yml to close the loop on static analysis.
Security: Two concrete things to add:

- `SECURITY.md` — even a simple one explaining how to report vulnerabilities privately (GitHub has a built-in "Report a vulnerability" feature you can enable in Settings → Security).
- Dependabot for both pip and GitHub Actions — add `.github/dependabot.yml`:

```yaml
version: 2
updates:
  - package-ecosystem: pip
    directory: "/"
    schedule:
      interval: weekly
  - package-ecosystem: github-actions
    directory: "/"
    schedule:
      interval: weekly
```
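For the SECURITY.md suggestion, a minimal sketch could be as short as this (wording is illustrative — adapt it to how you actually want reports routed):

```markdown
# Security Policy

## Reporting a Vulnerability

Please do not open a public issue for security problems. Instead, use
GitHub's private "Report a vulnerability" form on this repository's
Security tab, or contact the maintainer directly.
```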
Dependencies: Deps use >= lower bounds without upper bounds, which is flexible but could bite on a major version bump. For a CLI tool this is generally fine, but worth monitoring — Dependabot will help here.
Overall, this is a well-structured project with good bones. The quality tooling choices (ruff, uv, trusted publishing) show real care. The suggestions above are incremental polish, not structural problems. Keep it up! 🚀
🚀 Get AI Code Review on Every PR — Free
Just like this OSS review, you can have Claude AI automatically review every Pull Request.
No server needed — runs entirely on GitHub Actions with a 30-second setup.
🤖 pr-review — GitHub Actions AI Code Review Bot
| Feature | Details |
| --- | --- |
| Cost | $0 infrastructure (GitHub Actions free tier) |
| Trigger | Auto-runs on every PR open / update |
| Checks | Bugs · Security (OWASP) · Performance (N+1) · Quality · Error handling · Testability |
| Output | 🔴 Critical · 🟠 Major · 🟡 Minor · 🔵 Info inline comments |
⚡ 30-second setup
```shell
# 1. Copy the workflow & script
mkdir -p .github/workflows scripts
curl -sSL https://raw.githubusercontent.com/noivan0/pr-review/main/.github/workflows/pr-review.yml \
  -o .github/workflows/pr-review.yml
curl -sSL https://raw.githubusercontent.com/noivan0/pr-review/main/scripts/pr_reviewer.py \
  -o scripts/pr_reviewer.py

# 2. Add a GitHub Secret
# Repo → Settings → Secrets → Actions → New repository secret
# Name: ANTHROPIC_API_KEY   Value: sk-ant-...

# 3. Open a PR — AI review starts automatically!
```
📌 Full docs & self-hosted runner guide: https://github.com/noivan0/pr-review