Conversation
📝 Walkthrough
The webhook receives PR events, and fetching (fetch) is separated from analysis (process) …
Changes
Sequence Diagram(s)sequenceDiagram
participant GitHub as GitHub App
participant Webhook as API Webhook Handler
participant Install as Installation Handler
participant Pull as Pull Handler
participant DB as Tenant DB
participant Durably as Durably
GitHub->>Webhook: POST (installation / pull_request / etc.)
alt installation event
Webhook->>Install: runInstallationWebhookInTransaction(event, payload)
Install->>DB: findActiveLinkByInstallation / update githubAppLinks / integrations
Install-->>Webhook: organizationId | null
alt orgId returned
Webhook->>Webhook: clearOrgCache(organizationId)
end
else pull_request event
Webhook->>Pull: handlePullWebhookEvent(event, payload)
Pull->>DB: findActiveLinkByInstallation -> lookup repository by owner/repo
alt repository tracked
Pull->>Durably: durably.jobs.crawl.trigger(..., concurrencyKey: crawlConcurrencyKey(orgId))
Durably-->>Pull: Job queued
end
end
Webhook-->>GitHub: HTTP 204
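The installation/pull_request branching in the diagram above can be sketched as a plain dispatcher. The handler names are taken from the diagram; their signatures here are assumptions, not the actual module API:

```typescript
// Toy dispatcher mirroring the webhook sequence diagram. Handler names come
// from the diagram (runInstallationWebhookInTransaction, handlePullWebhookEvent,
// clearOrgCache); their signatures are assumed for illustration.
type Handlers = {
  runInstallationWebhookInTransaction: (event: string, payload: unknown) => Promise<string | null>
  handlePullWebhookEvent: (event: string, payload: unknown) => Promise<void>
  clearOrgCache: (organizationId: string) => void
}

async function dispatchWebhook(event: string, payload: unknown, h: Handlers): Promise<number> {
  if (event === 'installation') {
    // Installation events may change the org link; clear the cache if one was touched.
    const organizationId = await h.runInstallationWebhookInTransaction(event, payload)
    if (organizationId !== null) h.clearOrgCache(organizationId)
  } else if (event.startsWith('pull_request')) {
    // Covers pull_request / pull_request_review / pull_request_review_comment.
    await h.handlePullWebhookEvent(event, payload)
  }
  return 204 // GitHub only needs an acknowledgement
}
```

Either branch ends in the same HTTP 204 acknowledgement; the real handlers do the DB and job-trigger work shown in the diagram.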
sequenceDiagram
participant Crawl as Crawl Job
participant Store as PR Store
participant Durably as Durably
participant Process as Process Job
participant DB as Database
Crawl->>Crawl: fetch PR metadata + files
Crawl->>Store: savePrData(prWithFiles, fetchedAt)
Store->>DB: upsert githubRawData (only if excluded.fetchedAt >= existing)
Crawl->>Crawl: determine updatedPrNumbers / pullCount
Crawl->>Durably: decide shouldTriggerFullOrgProcessJob(...)
alt full-org
Durably->>Process: trigger process(orgId, scopes omitted, concurrencyKey: processConcurrencyKey)
else scoped
Durably->>Process: trigger process(orgId, scopes: [{repositoryId, prNumbers}], concurrencyKey: processConcurrencyKey)
end
Process->>DB: analyze & upsert & export (filtered by scopes)
Process->>Durably: trigger classify job
Process-->>Durably: { pullCount }
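The `excluded.fetchedAt >= existing` upsert guard in the diagram can be sketched as a pure comparison; the actual SQL upsert is elided, and `shouldKeepIncoming` is a hypothetical helper name:

```typescript
// Sketch of the freshness guard used when upserting githubRawData: an incoming
// snapshot wins only if its fetchedAt is at least as new as the stored one.
// ISO-8601 UTC timestamps compare correctly as plain strings.
function shouldKeepIncoming(existingFetchedAt: string | null, incomingFetchedAt: string): boolean {
  if (existingFetchedAt === null) return true // nothing stored yet
  return incomingFetchedAt >= existingFetchedAt
}
```

This is the invariant the backfill bug discussed below violates when `fetchedAt` is captured after the fetch instead of at snapshot time.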
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
batch/github/backfill-repo.ts (1)
33-43: ⚠️ Potential issue | 🟠 Major
The `files` backfill path lets `fetchedAt` bypass the freshness guard. Using the update time (`now`) from line 42 as `fetchedAt` for the stale PR snapshot fetched at line 33 can overwrite newer data that a concurrent run saved first with older JSON. `fetchedAt` should be pinned to the moment the snapshot was taken.

Suggested fix:

```diff
 if (options?.files) {
+  const fetchedAt = new Date().toISOString()
   const prs = await store.loader.pullrequests()
@@
-  const fetchedAt = new Date().toISOString()
   await store.updatePrMetadata([{ pr: prWithFiles, fetchedAt }])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `batch/github/backfill-repo.ts` around lines 33-43, the code sets fetchedAt after awaiting fetcher.files, which can cause a stale-snapshot overwrite; capture the timestamp immediately when you start handling each PR (e.g., compute fetchedAt = new Date().toISOString() at the top of the for-loop or right after reading pr) before calling fetcher.files, then use that fetchedAt when building prWithFiles and calling store.updatePrMetadata; keep references to prs, pr, fetcher.files, fetchedAt, and store.updatePrMetadata to locate and change the logic.

app/services/durably.server.ts (1)
22-27: ⚠️ Potential issue | 🟠 Major
Rerun compatibility for legacy `recalculate` runs may be lost. Because the `recalculate` job registration is removed at lines 22-27, pending/failed `recalculate` runs created before the deploy can no longer be re-executed and will fail (throughout the `retainRuns: '7d'` window). Keeping an alias registration during the migration period is safer.

Minimal compatibility fix:

```diff
 jobs: {
   backfill: backfillJob,
   classify: classifyJob,
   crawl: crawlJob,
+  recalculate: processJob, // temporary alias during migration window
   process: processJob,
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `app/services/durably.server.ts` around lines 22-27, the job registration removed the legacy "recalculate" alias, breaking rerun compatibility for existing pending/failed runs; restore a "recalculate" entry in the jobs object so older runs still map to a handler (e.g., add recalculate: recalculateJob or map it to the new equivalent handler such as recalculate: processJob if you consolidated logic), keeping the same jobs object shape (jobs: { backfill, classify, crawl, process, recalculate }) so runs created before deployment can still be retried during the retainRuns window.
🧹 Nitpick comments (2)
app/services/jobs/process.server.ts (1)
33-35: Early return for an empty `scopes` array.
When `scopes` is explicitly passed as an empty array, skipping processing and returning `pullCount: 0` is appropriate. Note, however, the behavioral difference between `scopes: []` and `scopes: undefined`:
- `undefined`: full-org processing
- `[]`: process nothing

This design appears intentional, but consider stating it explicitly in the documentation.
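The distinction can be sketched as follows; `countPulls` is a hypothetical helper for illustration, not the actual job code:

```typescript
type Scope = { repositoryId: string; prNumbers: number[] }

// undefined means "process the whole org"; an explicit [] means "process nothing".
function countPulls(scopes: Scope[] | undefined, fullOrgPullCount: number): number {
  if (scopes === undefined) return fullOrgPullCount // full-org processing
  if (scopes.length === 0) return 0                 // explicit opt-out
  return scopes.reduce((n, s) => n + s.prNumbers.length, 0)
}
```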
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `app/services/jobs/process.server.ts` around lines 33-35, add explicit documentation (JSDoc or an inline comment) near the function that reads input.scopes (referencing input.scopes and the early return that yields { pullCount: 0 }) to clarify the intended semantics: when scopes is undefined the job should perform a full-org processing, whereas when scopes is an explicit empty array ([]) the job intentionally skips work and returns pullCount: 0; update any public API or README entries that describe the function's inputs to reflect this difference so callers know to pass undefined for full processing and [] to explicitly opt out.

app/routes/$orgSlug/settings/repositories/$repository/$pull/index.tsx (1)
130-171: The refresh-logic change is sound. Consider adding error handling around `triggerAndWait`.
The move from inline processing to the `process` job, and the introduction of the `fetchedAt` guard, follow this PR's design direction. However, `triggerAndWait` blocks the HTTP request until the job completes, so there is room to improve the user experience when the job fails. Currently a job failure surfaces as a generic 500 error; showing the user a clearer error message would be better.

♻️ Suggested error handling:

```diff
 const { durably } = await import('~/app/services/durably.server')
-await durably.jobs.process.triggerAndWait(
-  {
-    organizationId: organization.id,
-    scopes: [{ repositoryId, prNumbers: [pullId] }],
-  },
-  {
-    concurrencyKey: processConcurrencyKey(organization.id),
-    labels: { organizationId: organization.id },
-  },
-)
+try {
+  await durably.jobs.process.triggerAndWait(
+    {
+      organizationId: organization.id,
+      scopes: [{ repositoryId, prNumbers: [pullId] }],
+    },
+    {
+      concurrencyKey: processConcurrencyKey(organization.id),
+      labels: { organizationId: organization.id },
+    },
+  )
+} catch (e) {
+  throw new Response(`Process job failed: ${getErrorMessage(e)}`, { status: 500 })
+}
 return { intent: 'refresh' as const, success: true }
```

Note: `getErrorMessage` follows the pattern already used in the data-management file, where it is imported; this file needs the same import.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `app/routes/$orgSlug/settings/repositories/$repository/$pull/index.tsx` around lines 130-171, the call to durably.jobs.process.triggerAndWait can fail and currently bubbles up as a generic 500; wrap the await durably.jobs.process.triggerAndWait(...) call in a try/catch, import and use the existing getErrorMessage helper to extract a user-friendly message, log the original error and return a clear failure result (e.g., return { intent: 'refresh', success: false, error: getErrorMessage(err) } or throw a Response with that message) instead of letting an unhandled exception propagate; reference durably.jobs.process.triggerAndWait, processConcurrencyKey, and getErrorMessage when locating where to add the try/catch and error handling.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@app/services/durably.server.ts`:
- Around line 22-27: The job registration removed the legacy "recalculate"
alias, breaking rerun compatibility for existing pending/failed runs; restore a
"recalculate" entry in the jobs object so older runs still map to a handler
(e.g., add recalculate: recalculateJob or map it to the new equivalent handler
such as recalculate: processJob if you consolidated logic), keeping the same
jobs object shape (jobs: { backfill, classify, crawl, process, recalculate }) so
runs created before deployment can still be retried during the retainRuns
window.
In `@batch/github/backfill-repo.ts`:
- Around line 33-43: The code sets fetchedAt after awaiting fetcher.files, which
can cause a stale-snapshot overwrite; capture the timestamp immediately when you
start handling each PR (e.g., compute fetchedAt = new Date().toISOString() at
the top of the for-loop or right after reading pr) before calling fetcher.files,
then use that fetchedAt when building prWithFiles and calling
store.updatePrMetadata; keep references to prs, pr, fetcher.files, fetchedAt,
and store.updatePrMetadata to locate and change the logic.
---
Nitpick comments:
In `@app/routes/$orgSlug/settings/repositories/$repository/$pull/index.tsx`:
- Around line 130-171: The call to durably.jobs.process.triggerAndWait can fail
and currently bubbles up as a generic 500; wrap the await
durably.jobs.process.triggerAndWait(...) call in a try/catch, import and use the
existing getErrorMessage helper to extract a user-friendly message, log the
original error and return a clear failure result (e.g., return { intent:
'refresh', success: false, error: getErrorMessage(err) } or throw a Response
with that message) instead of letting an unhandled exception propagate;
reference durably.jobs.process.triggerAndWait, processConcurrencyKey, and
getErrorMessage when locating where to add the try/catch and error handling.
In `@app/services/jobs/process.server.ts`:
- Around line 33-35: Add explicit documentation (JSDoc or an inline comment)
near the function that reads input.scopes (referencing input.scopes and the
early return that yields { pullCount: 0 }) to clarify the intended semantics:
when scopes is undefined the job should perform a full-org processing, whereas
when scopes is an explicit empty array ([]) the job intentionally skips work and
returns pullCount: 0; update any public API or README entries that describe the
function's inputs to reflect this difference so callers know to pass undefined
for full processing and [] to explicitly opt out.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3f7b73cd-f9ef-4e5e-9d4f-50d34b4a2883
📒 Files selected for processing (30)
- .prettierignore
- .takt/config.yaml
- .takt/tasks.yaml
- app/routes/$orgSlug/settings/data-management/+components/job-history.tsx
- app/routes/$orgSlug/settings/data-management/index.tsx
- app/routes/$orgSlug/settings/repositories/$repository/$pull/index.tsx
- app/routes/api.github.webhook.test.ts
- app/routes/api.github.webhook.ts
- app/services/durably.server.ts
- app/services/github-webhook-installation.server.ts
- app/services/github-webhook-pull.server.ts
- app/services/github-webhook-shared.server.ts
- app/services/github-webhook.server.test.ts
- app/services/github-webhook.server.ts
- app/services/jobs/concurrency-keys.server.ts
- app/services/jobs/crawl-process-handoff.server.test.ts
- app/services/jobs/crawl-process-handoff.server.ts
- app/services/jobs/crawl.server.ts
- app/services/jobs/process.server.ts
- app/services/jobs/recalculate.server.ts
- app/services/jobs/shared-steps.server.ts
- batch/cli.ts
- batch/commands/backfill.ts
- batch/commands/crawl.ts
- batch/commands/process.ts
- batch/db/mutations.ts
- batch/github/backfill-repo.ts
- batch/github/store.test.ts
- batch/github/store.ts
- batch/job-scheduler.ts
💤 Files with no reviewable changes (2)
- app/routes/api.github.webhook.ts
- app/services/jobs/recalculate.server.ts
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/routes/$orgSlug/settings/repositories/$repository/$pull/index.tsx`:
- Around line 161-165: pullId (parsed as pull: zx.NumAsString) is a string but
prNumbers in durably.jobs.process.triggerAndWait must be an array of numbers
(process.server.ts uses z.array(z.number())); convert pullId to a number before
passing it to prNumbers (e.g., Number(...) or parseInt and handle NaN) so the
call to durably.jobs.process.triggerAndWait({ organizationId, scopes: [{
repositoryId, prNumbers: [/* numeric value */] }] }) supplies a numeric PR ID
and matches downstream Set/compare logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 415b705a-2b68-4d3a-84d5-46694c55ddfd
📒 Files selected for processing (3)
- app/routes/$orgSlug/settings/repositories/$repository/$pull/index.tsx
- app/services/durably.server.ts
- batch/github/backfill-repo.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- app/services/durably.server.ts
- batch/github/backfill-repo.ts
- Prevent raw-data overwrites by stale data with a fetchedAt guard
- Refactor the crawl job to be fetch-only
- Consolidate analyze/upsert/export/classify into a new process job
- Retire the recalculate job and fold it into process
- Extend the webhook handler to trigger fetch + process on PR events
- Compress triggers with coalesce: 'skip' (N webhooks → at most 2 runs)
- Serialize the process job per org via concurrencyKey

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
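The "N webhooks → at most 2 runs" claim follows from the coalesce: 'skip' semantics, which can be modeled as a toy state machine. This is an illustration of the assumed behavior, not Durably's actual implementation:

```typescript
// Toy model of coalesce: 'skip': while a run is active, repeated triggers
// collapse into a single pending run, so a burst of N triggers yields at
// most 2 runs (the active one plus one follow-up).
class CoalescingTrigger {
  private active = false
  private pending = false
  runs = 0

  trigger(): void {
    if (this.active) {
      this.pending = true // coalesced: skip starting another run
      return
    }
    this.active = true
    this.runs++
  }

  finish(): void {
    this.active = false
    if (this.pending) {
      this.pending = false
      this.trigger() // start the single deferred run
    }
  }
}
```

The follow-up run matters: it re-reads state written after the first run started, so no webhook is lost even though most triggers are skipped.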
With edit, implement fails because it cannot run pnpm or delete files. The coder persona requires shell execution, so full is correct.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Consolidate findActiveLinkByInstallation into shared (removes the duplicated query)
- Centralize concurrency keys in helper functions (removes string literals scattered across 8 sites)
- Delete unreachable code (scopes.length === 0) inside crawl's trigger-process

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- backfill-repo: capture fetchedAt before fetcher.files() (prevents stale writes)
- durably: keep the recalculate alias during the migration window (compatibility for pending/failed runs)
- $pull route: add error handling around triggerAndWait

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CodeRabbit finding: the string-typed pullId from zx.NumAsString was passed as-is to prNumbers, which expects z.array(z.number()). Convert with Number() so the types match.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
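The type mismatch can be sketched as a small conversion helper; `toPrNumber` is a hypothetical name, and the actual fix may simply inline `Number(pullId)`:

```typescript
// Hypothetical helper illustrating the fix: route params parsed with
// zx.NumAsString arrive as strings, but the process job's input schema
// validates prNumbers with z.array(z.number()).
function toPrNumber(pullId: string): number {
  const n = Number(pullId)
  if (!Number.isInteger(n) || n <= 0) {
    throw new Error(`invalid PR number: ${pullId}`)
  }
  return n
}
```

Validating rather than blindly converting also keeps NaN out of the downstream Set/compare logic.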
Force-pushed 34f8dcf to 1d13ca5
Summary
Closes #255
Use GitHub App webhooks to update PR data in near real time, replacing the current 30-minute polling.
Key changes
- Compare fetchedAt to prevent overwrites by stale data
- Trigger fetch + process on pull_request / pull_request_review / pull_request_review_comment
- crawlConcurrencyKey() / processConcurrencyKey() replace string literals scattered across 8 sites

Architecture
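The helpers can be as simple as the sketch below. The actual key format in concurrency-keys.server.ts is not shown in this PR, so the string shapes here are assumptions:

```typescript
// Hypothetical shapes for the concurrency-key helpers: one key per org per
// job family, so crawl and process runs are each serialized per organization
// while different orgs (and different job families) still run in parallel.
const crawlConcurrencyKey = (organizationId: string): string => `crawl:${organizationId}`
const processConcurrencyKey = (organizationId: string): string => `process:${organizationId}`
```

Centralizing the format in two functions is what removes the eight scattered string literals: every trigger site calls the helper instead of rebuilding the key by hand.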
Test plan
- pnpm validate passes (42 files, 310 tests)
- The process command works

🤖 Generated with Claude Code
Summary by CodeRabbit
New features
Improvements