Align Codacy coverage upload and Sonar hotspot blocking#113
Conversation
📝 Walkthrough

The pull request extends the Sonar quality gate check to detect and report open security hotspots, adds corresponding test coverage, and integrates Codacy coverage uploads into the CI workflow. A Sonar workflow step label was also updated to reflect hotspot evaluation.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Script as Sonar Check Script
    participant API as Sonar API
    participant Eval as Status Evaluator
    participant Render as Markdown Renderer
    Script->>API: Query status (issues, gate)
    API-->>Script: Return open_issues, quality_gate
    Script->>API: Query hotspots
    API-->>Script: Return open_hotspots count
    Script->>Eval: Evaluate with issues, gate, hotspots
    Eval-->>Script: Assessment result + findings
    Script->>Render: Render with all metrics
    Render-->>Script: Markdown output with hotspots line
    Script->>Script: Return payload with hotspots
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 2
🧹 Nitpick comments (1)
apps/api/tests/test_scripts_quality_gates.py (1)
161-180: Assert the hotspot request URL in this test.

This only proves tuple plumbing via ordered mock responses. It would still pass if `_query_sonar_status()` hit the wrong endpoint or dropped `status=TO_REVIEW` / branch scoping.

🧪 Tighten the assertion
```diff
 def test_sonar_query_status_reads_hotspots(monkeypatch):
     module = _load_module("check_sonar_zero")
     responses = [
         {"paging": {"total": 4}},
         {"projectStatus": {"status": "OK"}},
         {"paging": {"total": 2}},
     ]
-    monkeypatch.setattr(module, "_request_json", lambda _url, _auth: responses.pop(0))
+    urls: list[str] = []
+
+    def fake_request(url, _auth):
+        urls.append(url)
+        return responses.pop(0)
+
+    monkeypatch.setattr(module, "_request_json", fake_request)
@@
     _expect(open_issues == 4, "Expected open issues to be parsed")
     _expect(quality_gate == "OK", "Expected quality gate to be parsed")
     _expect(open_hotspots == 2, "Expected open hotspots to be parsed")
+    _expect("/api/hotspots/search?" in urls[2], "Expected hotspot search endpoint")
+    _expect("status=TO_REVIEW" in urls[2], "Expected hotspot status filter")
+    _expect("branch=main" in urls[2], "Expected branch scoping")
```
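Outside the repo, the ordered-response-plus-URL-capture pattern in the suggestion can be sketched standalone. The endpoint strings and the `fake_request` name below are illustrative assumptions, not taken from the project:

```python
# Sketch: a fake HTTP helper that serves canned responses in order while
# recording every requested URL, so a test can assert on the endpoints hit.
responses = [
    {"paging": {"total": 4}},
    {"projectStatus": {"status": "OK"}},
    {"paging": {"total": 2}},
]
urls: list[str] = []

def fake_request(url: str, _auth: object) -> dict:
    urls.append(url)          # capture the URL for later assertions
    return responses.pop(0)   # then serve the next canned payload

# Simulate the three queries a checker like _query_sonar_status might make
# (URLs here are assumptions for the sketch, not the script's real ones).
fake_request("https://sonarcloud.io/api/issues/search?branch=main", None)
fake_request("https://sonarcloud.io/api/qualitygates/project_status?branch=main", None)
open_hotspots = fake_request(
    "https://sonarcloud.io/api/hotspots/search?status=TO_REVIEW&branch=main", None
)["paging"]["total"]

assert open_hotspots == 2
assert "/api/hotspots/search?" in urls[2]
assert "status=TO_REVIEW" in urls[2] and "branch=main" in urls[2]
```

This is exactly why capturing the URL matters: the count assertions alone would pass no matter which endpoint served the third response.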
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/codecov-analytics.yml:
- Around line 94-103: The workflow is using the account-scoped CODACY_API_TOKEN
for a single-repo coverage upload; replace it with the repository-scoped
CODACY_PROJECT_TOKEN everywhere in this job (env block and the runtime check) so
the job exports CODACY_PROJECT_TOKEN instead of CODACY_API_TOKEN, validates that
CODACY_PROJECT_TOKEN is present, and fails if missing; leave references to
CODACY_USERNAME/CODACY_PROJECT_NAME unchanged, but ensure any downstream steps
that read the token consume CODACY_PROJECT_TOKEN.
In `@scripts/quality/check_sonar_zero.py`:
- Around line 133-136: The CLI flag --ignore-open-issues is misleading because
open security hotspots (open_hotspots) still trigger failures; update either the
flag name or help text and/or the check logic so behavior matches the
description: either rename the flag to something explicit like
--ignore-open-hotspots (and update all references) or change the help text to
clarify that hotspots are treated as quality-gate failures, and if desired
modify the conditional that appends to findings (the open_hotspots check) to
respect the ignore_open_issues boolean; reference variables/strings:
--ignore-open-issues, ignore_open_issues, open_issues, open_hotspots, and
findings to locate the code to change.
---
Nitpick comments:
In `@apps/api/tests/test_scripts_quality_gates.py`:
- Around line 161-180: The test currently only relies on ordered mock responses;
update the monkeypatch of module._request_json inside
test_sonar_query_status_reads_hotspots to capture the requested URL and assert
it contains the expected SonarCloud endpoint and query parameters (e.g. the
hotspots endpoint path, status=TO_REVIEW and branch=main) before returning the
mocked responses; keep using module._query_sonar_status and the same response
sequence but add an assertion on the captured URL to ensure the hotspot request
is to the correct endpoint and includes the branch and status parameters.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cd723f43-0ce0-4de5-a1ff-9b2e3a280c99
📒 Files selected for processing (4)
- .github/workflows/codecov-analytics.yml
- .github/workflows/sonar-zero.yml
- apps/api/tests/test_scripts_quality_gates.py
- scripts/quality/check_sonar_zero.py
```yaml
        env:
          CODACY_API_TOKEN: ${{ secrets.CODACY_API_TOKEN }}
          CODACY_ORGANIZATION_PROVIDER: gh
          CODACY_USERNAME: Prekzursil
          CODACY_PROJECT_NAME: ${{ github.event.repository.name }}
        run: |
          if [ -z "${CODACY_API_TOKEN}" ]; then
            echo "Missing CODACY_API_TOKEN" >&2
            exit 1
          fi
```
🧩 Analysis chain
🌐 Web query:
Codacy coverage reporter authentication CODACY_PROJECT_TOKEN single repository
💡 Result:
For a single repository, Codacy Coverage Reporter authenticates using a Project (repository) API token, exposed as CODACY_PROJECT_TOKEN (or passed as --project-token). You don’t need CODACY_API_TOKEN unless you’re automating uploads for multiple repositories from the same workflow/account. [1][3]
Single-repo (recommended)
- Create/get the Project/Repository API token in Codacy (Project/Repository settings → Integrations → Project API token), then set it in CI as:
  - Environment variable: `CODACY_PROJECT_TOKEN=<token>` [1]
  - Or CLI flag: `codacy-coverage-reporter report --project-token <token> ...` [3]

GitHub Actions example (single repo)
- Store the token as a repo secret named `CODACY_PROJECT_TOKEN`, then pass it as `project-token: ${{ secrets.CODACY_PROJECT_TOKEN }}`. [2]

When to use CODACY_API_TOKEN instead
- If you’re uploading coverage for multiple repositories, use an Account API token (`CODACY_API_TOKEN`) plus `CODACY_ORGANIZATION_PROVIDER`, `CODACY_USERNAME`, and `CODACY_PROJECT_NAME`. [1][2]

Sources
- Codacy docs – Coverage Reporter setup/authentication (single repo vs multiple repos) [1]
- `codacy/codacy-coverage-reporter-action` README (`project-token` vs `api-token`) [2]
- Codacy Coverage Reporter CLI help/options (`--project-token`, `--api-token`, etc.) [3]
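As a concrete illustration of the single-repo setup described above, a CI step might guard on the repository-scoped token before invoking the reporter. This is a sketch: the token value and report path are placeholders, and the step only prints the command rather than downloading the reporter:

```shell
# Guard on the repository-scoped token, then invoke the reporter.
# Placeholder values; in CI the token would come from a repo secret.
CODACY_PROJECT_TOKEN="example-project-token"
report_file="coverage.xml"

if [ -z "${CODACY_PROJECT_TOKEN}" ]; then
  echo "Missing CODACY_PROJECT_TOKEN" >&2
  exit 1
fi

# Dry run: show the intended command instead of running the real reporter.
upload_cmd="codacy-coverage-reporter report --project-token <redacted> -r ${report_file}"
echo "${upload_cmd}"
```

The guard mirrors the existing workflow's missing-token check, just scoped to the narrower secret.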
Use a repository-scoped Codacy token.
This workflow uploads coverage for a single repository but uses CODACY_API_TOKEN, which Codacy documents for multi-repository automation. For single-repo uploads, CODACY_PROJECT_TOKEN is the recommended approach and avoids granting unnecessary account-level access in CI.
Narrow the secret scope
```diff
       - name: Upload coverage to Codacy
         if: ${{ always() }}
         env:
-          CODACY_API_TOKEN: ${{ secrets.CODACY_API_TOKEN }}
-          CODACY_ORGANIZATION_PROVIDER: gh
-          CODACY_USERNAME: Prekzursil
-          CODACY_PROJECT_NAME: ${{ github.event.repository.name }}
+          CODACY_PROJECT_TOKEN: ${{ secrets.CODACY_PROJECT_TOKEN }}
         run: |
-          if [ -z "${CODACY_API_TOKEN}" ]; then
-            echo "Missing CODACY_API_TOKEN" >&2
+          if [ -z "${CODACY_PROJECT_TOKEN}" ]; then
+            echo "Missing CODACY_PROJECT_TOKEN" >&2
             exit 1
           fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
        env:
          CODACY_PROJECT_TOKEN: ${{ secrets.CODACY_PROJECT_TOKEN }}
        run: |
          if [ -z "${CODACY_PROJECT_TOKEN}" ]; then
            echo "Missing CODACY_PROJECT_TOKEN" >&2
            exit 1
          fi
```
```python
    if not ignore_open_issues and open_issues != 0:
        findings.append(f"Sonar reports {open_issues} open issues (expected 0).")
    if open_hotspots != 0:
        findings.append(f"Sonar reports {open_hotspots} open security hotspots pending review (expected 0).")
```
`--ignore-open-issues` is now misleading.
Line 135 still fails when hotspots are present, so the CLI help on Lines 45-47 no longer matches the behavior. Please update the flag text or rename the flag so callers do not read “quality gate only” and then get a hotspot failure anyway.
📝 Suggested text update
```diff
 parser.add_argument(
     "--ignore-open-issues",
     action="store_true",
-    help="Skip open-issue enforcement and evaluate quality gate only.",
+    help="Skip open-issue enforcement; quality-gate and hotspot checks still apply.",
 )
```
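If the intent is instead for the flag to silence both counters, one possible shape is to gate both checks behind the same boolean. This is a sketch using the variable names quoted in the snippet above; `evaluate_findings` is a hypothetical helper, not the script's actual function:

```python
def evaluate_findings(open_issues: int, open_hotspots: int, ignore_open_issues: bool) -> list[str]:
    """Sketch: gate both the issue and hotspot checks behind the same flag."""
    findings: list[str] = []
    if not ignore_open_issues:
        if open_issues != 0:
            findings.append(f"Sonar reports {open_issues} open issues (expected 0).")
        if open_hotspots != 0:
            findings.append(
                f"Sonar reports {open_hotspots} open security hotspots pending review (expected 0)."
            )
    return findings

# With the flag set, hotspots no longer fail the check; without it, both do.
assert evaluate_findings(0, 2, ignore_open_issues=True) == []
assert len(evaluate_findings(4, 2, ignore_open_issues=False)) == 2
```

Either this change or the help-text fix above would resolve the mismatch; doing both would change behavior, so pick one deliberately.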
Coverage summary from Codacy — see diff coverage on Codacy.

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch. Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified.
Codecov Report: ✅ All modified and coverable lines are covered by tests.



Adds Codacy coverage upload to the existing Codecov workflow and extends Sonar Zero to block unresolved security hotspots.
Co-authored-by: Codex <noreply@openai.com>
Summary by CodeRabbit
New Features
Improvements
Chores