Problem
`ensure_label()` runs `gh label list --json name` every time it is called:
```python
def ensure_label(name: str, color: str = "0075ca", description: str = "") -> None:
    result = gh("label", "list", "--json", "name")
    existing = {label["name"] for label in json.loads(result.stdout)}
    if name not in existing:
        ...
```
`create_issue()` calls `ensure_label()` for each label on each issue. When posting 5 issues with 3 labels each, that is 15 `gh label list` calls, all returning the same data. Each call is a subprocess plus a GitHub API round-trip. This adds unnecessary latency and API rate-limit consumption to every scan that posts issues.
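To make the cost concrete, here is a minimal sketch of the hot path as described above. The function names mirror the issue, but the bodies are stand-ins (the fake `gh_label_list()` just counts calls instead of shelling out):

```python
# Sketch of the redundant-read hot path; shapes are assumed, not from the codebase.
calls = {"label_list": 0}

def gh_label_list() -> set[str]:
    calls["label_list"] += 1          # stands in for the subprocess + API round-trip
    return {"bug", "docs"}

def ensure_label(name: str) -> None:
    existing = gh_label_list()        # re-fetched on every single call
    if name not in existing:
        pass                          # would create the label here

def create_issue(labels: list[str]) -> None:
    for label in labels:
        ensure_label(label)

for _ in range(5):                    # 5 issues x 3 labels each
    create_issue(["bug", "docs", "triage"])
print(calls["label_list"])            # 15 identical list calls
```

Every one of those 15 calls returns the same data, which is what the cache below is meant to eliminate.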
Definition of done
You know this is fixed when:
- Running a scan that posts multiple issues with overlapping labels shows only one `gh label list` call in the process output (not one per label per issue)
- Labels that were already confirmed to exist in the current run are not re-checked
- Labels that are created during the run are tracked in the cache so subsequent issues don't re-check them
Out of scope
- Caching across runs (within a single process is enough)
- Caching other `gh` calls (this is specific to the label list hot path)
- Changes to how labels are created (only the redundant read is the problem)