
feat: Webhook によるリアルタイム PR 更新 #273

Open
coji wants to merge 5 commits into main from feat/webhook-realtime-pr-update

Conversation

@coji
Owner

@coji coji commented Mar 31, 2026

Summary

Closes #255

Use the GitHub App webhook to update PR data in near real time, moving away from the current 30-minute polling.

Main changes

  • fetchedAt guard: compare fetchedAt when saving raw data to prevent overwrites by stale data
  • crawl job becomes fetch-only: analyze/upsert/export split out into the new process job
  • new process job: replaces recalculate, supporting both full-org and scoped runs
  • recalculate removed: job definition, CLI, and UI all consolidated into process
  • webhook handler extended: pull_request / pull_request_review / pull_request_review_comment now trigger fetch + process
  • coalesce: 'skip': compresses N webhooks into at most 2 runs (1 running + 1 pending)
  • concurrency keys centralized: crawlConcurrencyKey() / processConcurrencyKey() replace string literals scattered across 8 call sites
  • webhook handler split: responsibilities separated into installation handlers, PR handlers, and shared utilities
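As a rough sketch, the fetchedAt guard described above amounts to a freshness comparison before overwriting stored raw data. The function name and row shape below are illustrative, not the actual store.ts implementation; normalized ISO-8601 UTC timestamps compare correctly as plain strings:

```typescript
// Illustrative freshness guard: a write only proceeds when the incoming
// snapshot is at least as fresh as what is already stored.
interface RawDataRow {
  payload: unknown
  fetchedAt: string // ISO-8601 UTC timestamp captured when the data was fetched
}

function shouldOverwriteRawData(
  incoming: RawDataRow,
  existing: RawDataRow | undefined,
): boolean {
  if (existing === undefined) return true // first write always succeeds
  // Normalized ISO-8601 strings order lexicographically, so >= compares time.
  return incoming.fetchedAt >= existing.fetchedAt
}
```

The PR enforces the same idea at the database level with an upsert condition (excluded.fetchedAt >= githubRawData.fetchedAt), so the guard holds even under concurrent writers.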

Architecture

webhook → crawl job (fetch only) → process job (analyze/upsert/export/classify)
                                    ↑
scheduler (hourly) → crawl job ────┘
                                    ↑
Data Management UI ────────────────┘
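The coalesce: 'skip' behavior feeding this pipeline can be modeled as a tiny queue: a trigger is skipped whenever a pending run already exists, so a burst of webhooks collapses to at most one running plus one pending run. This is a toy model of the semantics only; the real coalescing is provided by Durably, not by this sketch:

```typescript
// Toy model of coalesce: 'skip' — at most 1 running + 1 pending run per key.
type RunState = 'running' | 'pending'

class CoalescingQueue {
  private runs: RunState[] = []

  // Returns true if a run was enqueued, false if the trigger was skipped.
  trigger(): boolean {
    if (this.runs.includes('pending')) return false // coalesce: skip
    this.runs.push(this.runs.includes('running') ? 'pending' : 'running')
    return true
  }

  get size(): number {
    return this.runs.length
  }
}
```

Ten webhook deliveries in a burst therefore produce at most two runs, and the pending run picks up the latest raw data thanks to the fetchedAt guard.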

Test plan

  • pnpm validate passes (42 files, 310 tests)
  • a PR event received via webhook automatically updates that PR's data
  • process can be triggered manually from the Data Management page
  • the CLI process command works
  • the scheduled crawl keeps working

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New features

    • Introduced an asynchronous "process" job, moving the pull request processing pipeline to a job-driven model.
    • Extended GitHub webhook handling so PR-related events automatically kick off processing.
    • Added a fetch timestamp (fetchedAt) to track data freshness.
  • Improvements

    • Unified and cleaned up job concurrency-key management and scheduling.
    • Updated CLI and admin UI wording from "recalculate" to "process".

@coderabbitai

coderabbitai bot commented Mar 31, 2026

Warning

Rate limit exceeded

@coji has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 15 minutes and 51 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 15 minutes and 51 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 979462fc-d72b-41d8-8e4d-225cbafc2d97

📥 Commits

Reviewing files that changed from the base of the PR and between 34f8dcf and 1d13ca5.

📒 Files selected for processing (30)
  • .prettierignore
  • .takt/config.yaml
  • .takt/tasks.yaml
  • app/routes/$orgSlug/settings/data-management/+components/job-history.tsx
  • app/routes/$orgSlug/settings/data-management/index.tsx
  • app/routes/$orgSlug/settings/repositories/$repository/$pull/index.tsx
  • app/routes/api.github.webhook.test.ts
  • app/routes/api.github.webhook.ts
  • app/services/durably.server.ts
  • app/services/github-webhook-installation.server.ts
  • app/services/github-webhook-pull.server.ts
  • app/services/github-webhook-shared.server.ts
  • app/services/github-webhook.server.test.ts
  • app/services/github-webhook.server.ts
  • app/services/jobs/concurrency-keys.server.ts
  • app/services/jobs/crawl-process-handoff.server.test.ts
  • app/services/jobs/crawl-process-handoff.server.ts
  • app/services/jobs/crawl.server.ts
  • app/services/jobs/process.server.ts
  • app/services/jobs/recalculate.server.ts
  • app/services/jobs/shared-steps.server.ts
  • batch/cli.ts
  • batch/commands/backfill.ts
  • batch/commands/crawl.ts
  • batch/commands/process.ts
  • batch/db/mutations.ts
  • batch/github/backfill-repo.ts
  • batch/github/store.test.ts
  • batch/github/store.ts
  • batch/job-scheduler.ts
📝 Walkthrough


Receives PR events via webhook and introduces a process job to separate fetching from analysis/processing. recalculate is removed and crawl becomes fetch-only; raw data gains a fetchedAt field to prevent stale overwrites. Webhook handling and Durably job invocations are switched over accordingly.

Changes

Cohort / File(s) | Summary
Job definitions and queue
app/services/jobs/process.server.ts, app/services/jobs/recalculate.server.ts, app/services/jobs/crawl.server.ts, app/services/jobs/crawl-process-handoff.server.ts, app/services/jobs/concurrency-keys.server.ts
Added the process job and removed recalculate. crawl is now fetch-only. Added concurrency-key helpers (crawlConcurrencyKey / processConcurrencyKey).
Durably registration and wiring
app/services/durably.server.ts, batch/job-scheduler.ts
Switched Durably registration to process. The legacy recalculate is kept as a compatibility alias. The scheduler uses crawlConcurrencyKey.
Webhook handling split
app/services/github-webhook.server.ts, app/services/github-webhook-shared.server.ts, app/services/github-webhook-installation.server.ts, app/services/github-webhook-pull.server.ts, app/routes/api.github.webhook.ts, app/routes/api.github.webhook.test.ts, app/services/github-webhook.server.test.ts
Moved installation logic into a transactional service (runInstallationWebhookInTransaction); PR events kick off crawl through a dedicated handler. All webhooks are now forwarded to processing; tests added and adjusted.
Storage (fetchedAt) and backfill
batch/github/store.ts, batch/github/store.test.ts, batch/github/backfill-repo.ts
Added fetchedAt to savePrData / updatePrMetadata. The DB upsert guards freshness with an excluded.fetchedAt >= githubRawData.fetchedAt condition. Tests updated to verify the freshness logic.
UI/route renames and behavior changes
app/routes/$orgSlug/settings/data-management/index.tsx, app/routes/$orgSlug/settings/data-management/+components/job-history.tsx
Renamed Recalculate to Process across names, intents, and buttons. Job names and error messages updated to process. index.tsx resolves the concurrencyKey via the helper.
Repository PR refresh flow
app/routes/$orgSlug/settings/repositories/$repository/$pull/index.tsx
refresh now saves PR metadata + files and drops the synchronous analyze/upsert. Instead, it starts asynchronous processing via durably.jobs.process.triggerAndWait (using processConcurrencyKey).
CLI/batch renames and argument changes
batch/cli.ts, batch/commands/process.ts, batch/commands/crawl.ts, batch/commands/backfill.ts
Replaced the recalculate command with process. Changed crawl's --repo flag to --repository and added repositoryId resolution. Updated Durably calls to process.
Minor test/task/config fixes
.prettierignore, .takt/config.yaml, .takt/tasks.yaml, batch/db/mutations.ts, batch/github/store.test.ts, app/services/jobs/crawl-process-handoff.server.test.ts
Added .claude/ to the Prettier ignore list, adjusted .takt settings, updated doc comments, and added tests (shouldTriggerFullOrgProcessJob etc.).
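The concurrency-key helpers above are presumably thin wrappers around a fixed key format. Only the helper names (crawlConcurrencyKey / processConcurrencyKey) come from the PR; the key format below is an assumption for illustration:

```typescript
// Hypothetical key format — the real helpers live in
// app/services/jobs/concurrency-keys.server.ts.
function crawlConcurrencyKey(organizationId: string): string {
  return `crawl:${organizationId}`
}

function processConcurrencyKey(organizationId: string): string {
  return `process:${organizationId}`
}
```

Centralizing the format in one module means a later change to the key shape touches one file instead of the 8 call sites the PR description mentions.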

Sequence Diagram(s)

sequenceDiagram
    participant GitHub as GitHub App
    participant Webhook as API Webhook Handler
    participant Install as Installation Handler
    participant Pull as Pull Handler
    participant DB as Tenant DB
    participant Durably as Durably

    GitHub->>Webhook: POST (installation / pull_request / etc.)
    alt installation event
        Webhook->>Install: runInstallationWebhookInTransaction(event, payload)
        Install->>DB: findActiveLinkByInstallation / update githubAppLinks / integrations
        Install-->>Webhook: organizationId | null
        alt orgId returned
            Webhook->>Webhook: clearOrgCache(organizationId)
        end
    else pull_request event
        Webhook->>Pull: handlePullWebhookEvent(event, payload)
        Pull->>DB: findActiveLinkByInstallation -> lookup repository by owner/repo
        alt repository tracked
            Pull->>Durably: durably.jobs.crawl.trigger(..., concurrencyKey: crawlConcurrencyKey(orgId))
            Durably-->>Pull: Job queued
        end
    end
    Webhook-->>GitHub: HTTP 204
sequenceDiagram
    participant Crawl as Crawl Job
    participant Store as PR Store
    participant Durably as Durably
    participant Process as Process Job
    participant DB as Database

    Crawl->>Crawl: fetch PR metadata + files
    Crawl->>Store: savePrData(prWithFiles, fetchedAt)
    Store->>DB: upsert githubRawData (only if excluded.fetchedAt >= existing)
    Crawl->>Crawl: determine updatedPrNumbers / pullCount
    Crawl->>Durably: decide shouldTriggerFullOrgProcessJob(...)
    alt full-org
        Durably->>Process: trigger process(orgId, scopes omitted, concurrencyKey: processConcurrencyKey)
    else scoped
        Durably->>Process: trigger process(orgId, scopes: [{repositoryId, prNumbers}], concurrencyKey: processConcurrencyKey)
    end
    Process->>DB: analyze & upsert & export (filtered by scopes)
    Process->>Durably: trigger classify job
    Process-->>Durably: { pullCount }
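The full-org vs scoped branch in the diagram above can be sketched as a pure decision function. A shouldTriggerFullOrgProcessJob helper exists in crawl-process-handoff.server.ts, but its actual criteria are not shown here; the threshold logic below is an assumption for illustration:

```typescript
interface Scope {
  repositoryId: string
  prNumbers: number[]
}

// Decide how to trigger the process job after a crawl:
//  - null       → nothing changed, trigger no run
//  - {}         → scopes omitted, full-org processing
//  - { scopes } → scoped processing of just the changed PRs
function buildProcessTrigger(
  updated: Scope[],
  fullOrgThreshold = 50, // hypothetical cutoff
): { scopes?: Scope[] } | null {
  const total = updated.reduce((n, s) => n + s.prNumbers.length, 0)
  if (total === 0) return null
  if (total >= fullOrgThreshold) return {} // full org is cheaper at this size
  return { scopes: updated }
}
```

Whatever the real criteria, keeping the decision pure makes it directly unit-testable, which matches the shouldTriggerFullOrgProcessJob tests this PR adds.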

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related issues

Possibly related PRs

Poem

🐇 古い再計算は眠りへ、
新しき Process が扉を開く、
webhook 来たりて fetchedAt を守り、
crawl は取るのみ、process は磨く、
さあリアルタイムへ跳ねるよ! 🥕✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 17.86%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled.
Title check | ✅ Passed | The PR title 'feat: Webhook によるリアルタイム PR 更新' accurately summarizes the main change: implementing real-time PR updates via GitHub App webhooks, which is the primary objective of this changeset.
Linked Issues check | ✅ Passed | The PR fully implements the requirements from issue #255: fetchedAt guard added to store.ts/store.test.ts, process job created in process.server.ts, crawl job refactored to a fetch-only role, recalculate job removed and consolidated into process, webhook handlers extended for pull_request/pull_request_review/pull_request_review_comment events, and durably job wiring updated with concurrency keys.
Out of Scope Changes check | ✅ Passed | All changes are within scope of issue #255's webhook real-time update implementation: webhook event handling expansion, job architecture refactoring (crawl/process separation), fetchedAt guards, concurrency key consolidation, and supporting CLI/batch command updates. No unrelated refactoring or feature additions detected.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch feat/webhook-realtime-pr-update

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
batch/github/backfill-repo.ts (1)

33-43: ⚠️ Potential issue | 🟠 Major

The files backfill path lets fetchedAt slip past the freshness guard.

Using the update time (now) at Line 42 as the fetchedAt for the stale pr snapshot obtained at Line 33 means newer data saved concurrently by another writer can later be overwritten with older JSON. fetchedAt should be pinned at the moment the snapshot is taken.

Suggested fix
   if (options?.files) {
+    const fetchedAt = new Date().toISOString()
     const prs = await store.loader.pullrequests()
@@
-        const fetchedAt = new Date().toISOString()
         await store.updatePrMetadata([{ pr: prWithFiles, fetchedAt }])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@batch/github/backfill-repo.ts` around lines 33 - 43, The code sets fetchedAt
after awaiting fetcher.files, which can cause a stale-snapshot overwrite;
capture the timestamp immediately when you start handling each PR (e.g., compute
fetchedAt = new Date().toISOString() at the top of the for-loop or right after
reading pr) before calling fetcher.files, then use that fetchedAt when building
prWithFiles and calling store.updatePrMetadata; keep references to prs, pr,
fetcher.files, fetchedAt, and store.updatePrMetadata to locate and change the
logic.
app/services/durably.server.ts (1)

22-27: ⚠️ Potential issue | 🟠 Major

Rerun compatibility for recalculate runs may be lost.

Because the recalculate job registration disappears at Lines 22-27, pending/failed recalculate runs created before this deploy can no longer be re-run and will fail (for the duration of retainRuns: '7d'). Keeping an alias registration during the migration window is safer.

Minimal change to preserve compatibility
   jobs: {
     backfill: backfillJob,
     classify: classifyJob,
     crawl: crawlJob,
+    recalculate: processJob, // temporary alias during migration window
     process: processJob,
   },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/services/durably.server.ts` around lines 22 - 27, The job registration
removed the legacy "recalculate" alias, breaking rerun compatibility for
existing pending/failed runs; restore a "recalculate" entry in the jobs object
so older runs still map to a handler (e.g., add recalculate: recalculateJob or
map it to the new equivalent handler such as recalculate: processJob if you
consolidated logic), keeping the same jobs object shape (jobs: { backfill,
classify, crawl, process, recalculate }) so runs created before deployment can
still be retried during the retainRuns window.
🧹 Nitpick comments (2)
app/services/jobs/process.server.ts (1)

33-35: Early return for an empty scopes array.

When scopes is explicitly passed as an empty array, skipping processing and returning pullCount: 0 is appropriate. Note, however, the behavioral difference between scopes: [] and scopes: undefined:

  • undefined: full-org processing
  • []: process nothing

This design appears intentional, but consider documenting it explicitly.
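The undefined-versus-empty distinction described above can be pinned down in a small helper. The names here are illustrative; the real check lives in process.server.ts:

```typescript
interface ProcessInput {
  organizationId: string
  scopes?: { repositoryId: string; prNumbers: number[] }[]
}

// scopes === undefined → full-org run; scopes === [] → explicit no-op.
function resolveProcessMode(input: ProcessInput): 'full-org' | 'skip' | 'scoped' {
  if (input.scopes === undefined) return 'full-org'
  if (input.scopes.length === 0) return 'skip'
  return 'scoped'
}
```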

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/services/jobs/process.server.ts` around lines 33 - 35, Add explicit
documentation (JSDoc or an inline comment) near the function that reads
input.scopes (referencing input.scopes and the early return that yields {
pullCount: 0 }) to clarify the intended semantics: when scopes is undefined the
job should perform a full-org processing, whereas when scopes is an explicit
empty array ([]) the job intentionally skips work and returns pullCount: 0;
update any public API or README entries that describe the function's inputs to
reflect this difference so callers know to pass undefined for full processing
and [] to explicitly opt out.
app/routes/$orgSlug/settings/repositories/$repository/$pull/index.tsx (1)

130-171: The refresh-logic change is sound. Consider adding error handling around triggerAndWait.

Moving from inline processing to the process job, together with the fetchedAt guard, follows the PR's design direction. However, triggerAndWait blocks the HTTP request until the job completes, so the user experience on job failure can be improved.

Currently a job failure surfaces as a generic 500 error; showing the user a clearer error message would be better.

♻️ Suggested error-handling addition
       const { durably } = await import('~/app/services/durably.server')
-      await durably.jobs.process.triggerAndWait(
-        {
-          organizationId: organization.id,
-          scopes: [{ repositoryId, prNumbers: [pullId] }],
-        },
-        {
-          concurrencyKey: processConcurrencyKey(organization.id),
-          labels: { organizationId: organization.id },
-        },
-      )
+      try {
+        await durably.jobs.process.triggerAndWait(
+          {
+            organizationId: organization.id,
+            scopes: [{ repositoryId, prNumbers: [pullId] }],
+          },
+          {
+            concurrencyKey: processConcurrencyKey(organization.id),
+            labels: { organizationId: organization.id },
+          },
+        )
+      } catch (e) {
+        throw new Response(`Process job failed: ${getErrorMessage(e)}`, { status: 500 })
+      }

       return { intent: 'refresh' as const, success: true }

getErrorMessage follows the pattern already used in a file where it is imported (data-management). A matching import is needed in this file as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/routes/`$orgSlug/settings/repositories/$repository/$pull/index.tsx around
lines 130 - 171, The call to durably.jobs.process.triggerAndWait can fail and
currently bubbles up as a generic 500; wrap the await
durably.jobs.process.triggerAndWait(...) call in a try/catch, import and use the
existing getErrorMessage helper to extract a user-friendly message, log the
original error and return a clear failure result (e.g., return { intent:
'refresh', success: false, error: getErrorMessage(err) } or throw a Response
with that message) instead of letting an unhandled exception propagate;
reference durably.jobs.process.triggerAndWait, processConcurrencyKey, and
getErrorMessage when locating where to add the try/catch and error handling.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@app/services/durably.server.ts`:
- Around line 22-27: The job registration removed the legacy "recalculate"
alias, breaking rerun compatibility for existing pending/failed runs; restore a
"recalculate" entry in the jobs object so older runs still map to a handler
(e.g., add recalculate: recalculateJob or map it to the new equivalent handler
such as recalculate: processJob if you consolidated logic), keeping the same
jobs object shape (jobs: { backfill, classify, crawl, process, recalculate }) so
runs created before deployment can still be retried during the retainRuns
window.

In `@batch/github/backfill-repo.ts`:
- Around line 33-43: The code sets fetchedAt after awaiting fetcher.files, which
can cause a stale-snapshot overwrite; capture the timestamp immediately when you
start handling each PR (e.g., compute fetchedAt = new Date().toISOString() at
the top of the for-loop or right after reading pr) before calling fetcher.files,
then use that fetchedAt when building prWithFiles and calling
store.updatePrMetadata; keep references to prs, pr, fetcher.files, fetchedAt,
and store.updatePrMetadata to locate and change the logic.

---

Nitpick comments:
In `@app/routes/`$orgSlug/settings/repositories/$repository/$pull/index.tsx:
- Around line 130-171: The call to durably.jobs.process.triggerAndWait can fail
and currently bubbles up as a generic 500; wrap the await
durably.jobs.process.triggerAndWait(...) call in a try/catch, import and use the
existing getErrorMessage helper to extract a user-friendly message, log the
original error and return a clear failure result (e.g., return { intent:
'refresh', success: false, error: getErrorMessage(err) } or throw a Response
with that message) instead of letting an unhandled exception propagate;
reference durably.jobs.process.triggerAndWait, processConcurrencyKey, and
getErrorMessage when locating where to add the try/catch and error handling.

In `@app/services/jobs/process.server.ts`:
- Around line 33-35: Add explicit documentation (JSDoc or an inline comment)
near the function that reads input.scopes (referencing input.scopes and the
early return that yields { pullCount: 0 }) to clarify the intended semantics:
when scopes is undefined the job should perform a full-org processing, whereas
when scopes is an explicit empty array ([]) the job intentionally skips work and
returns pullCount: 0; update any public API or README entries that describe the
function's inputs to reflect this difference so callers know to pass undefined
for full processing and [] to explicitly opt out.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3f7b73cd-f9ef-4e5e-9d4f-50d34b4a2883

📥 Commits

Reviewing files that changed from the base of the PR and between 3893102 and 185c6da.

📒 Files selected for processing (30)
  • .prettierignore
  • .takt/config.yaml
  • .takt/tasks.yaml
  • app/routes/$orgSlug/settings/data-management/+components/job-history.tsx
  • app/routes/$orgSlug/settings/data-management/index.tsx
  • app/routes/$orgSlug/settings/repositories/$repository/$pull/index.tsx
  • app/routes/api.github.webhook.test.ts
  • app/routes/api.github.webhook.ts
  • app/services/durably.server.ts
  • app/services/github-webhook-installation.server.ts
  • app/services/github-webhook-pull.server.ts
  • app/services/github-webhook-shared.server.ts
  • app/services/github-webhook.server.test.ts
  • app/services/github-webhook.server.ts
  • app/services/jobs/concurrency-keys.server.ts
  • app/services/jobs/crawl-process-handoff.server.test.ts
  • app/services/jobs/crawl-process-handoff.server.ts
  • app/services/jobs/crawl.server.ts
  • app/services/jobs/process.server.ts
  • app/services/jobs/recalculate.server.ts
  • app/services/jobs/shared-steps.server.ts
  • batch/cli.ts
  • batch/commands/backfill.ts
  • batch/commands/crawl.ts
  • batch/commands/process.ts
  • batch/db/mutations.ts
  • batch/github/backfill-repo.ts
  • batch/github/store.test.ts
  • batch/github/store.ts
  • batch/job-scheduler.ts
💤 Files with no reviewable changes (2)
  • app/routes/api.github.webhook.ts
  • app/services/jobs/recalculate.server.ts


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/routes/`$orgSlug/settings/repositories/$repository/$pull/index.tsx:
- Around line 161-165: pullId (parsed as pull: zx.NumAsString) is a string but
prNumbers in durably.jobs.process.triggerAndWait must be an array of numbers
(process.server.ts uses z.array(z.number())); convert pullId to a number before
passing it to prNumbers (e.g., Number(...) or parseInt and handle NaN) so the
call to durably.jobs.process.triggerAndWait({ organizationId, scopes: [{
repositoryId, prNumbers: [/* numeric value */] }] }) supplies a numeric PR ID
and matches downstream Set/compare logic.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 415b705a-2b68-4d3a-84d5-46694c55ddfd

📥 Commits

Reviewing files that changed from the base of the PR and between 185c6da and 34f8dcf.

📒 Files selected for processing (3)
  • app/routes/$orgSlug/settings/repositories/$repository/$pull/index.tsx
  • app/services/durably.server.ts
  • batch/github/backfill-repo.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • app/services/durably.server.ts
  • batch/github/backfill-repo.ts

coji and others added 5 commits April 6, 2026 12:14
- fetchedAt guard prevents stale data from overwriting raw data
- refactor the crawl job to be fetch-only
- new process job consolidates analyze/upsert/export/classify
- remove the recalculate job, consolidating it into process
- extend the webhook handler to trigger fetch + process on PR events
- compress triggers with coalesce: 'skip' (N webhooks → at most 2 runs)
- the process job is serialized per org via concurrencyKey

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
With edit, implement fails because pnpm cannot be run and files cannot be deleted.
The coder persona requires shell execution, so full is correct.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- consolidate findActiveLinkByInstallation into shared (removes the duplicated query)
- centralize concurrency keys in helper functions (removes string literals scattered across 8 call sites)
- delete unreachable code (scopes.length === 0) inside crawl's trigger-process

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- backfill-repo: capture fetchedAt before fetcher.files() (prevents stale writes)
- durably: keep the recalculate alias during the migration window (compat for pending/failed runs)
- $pull route: add error handling around triggerAndWait

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CodeRabbit finding: the string-typed pullId from zx.NumAsString was being passed
directly to prNumbers, which is z.array(z.number()). Convert it with Number() so the types match.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coji coji force-pushed the feat/webhook-realtime-pr-update branch from 34f8dcf to 1d13ca5 Compare April 6, 2026 03:18


Development

Successfully merging this pull request may close these issues.

Webhook によるリアルタイム PR 更新

1 participant