Replace OpenNext Cloudflare build with vinext #58
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough

Replaces the OpenNext/Next.js Cloudflare build with a vinext + Vite workflow, centralizes Cloudflare env resolution via resolveCloudflareEnv, introduces a new worker entry (worker/index.ts), removes OpenNext artifacts and patches, updates the archive modal/pagination flow, CI, tests, docs, and package/tooling for the vinext migration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant Resolver as resolveCloudflareEnv
    participant Workers as cloudflare:workers
    participant Override as GlobalOverride
    App->>Resolver: request env
    Resolver->>Workers: loadCloudflareWorkersEnv()
    alt Workers available
        Workers-->>Resolver: env
        Resolver-->>App: env
    else
        Workers-->>Resolver: undefined
        Resolver->>Override: loadGlobalEnvOverride()
        alt Override present
            Override-->>Resolver: env
            Resolver-->>App: env
        else
            Resolver-->>App: undefined
        end
    end
```
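The resolution order in the diagram above can be sketched in a few lines. This is a hypothetical reconstruction for illustration, not the actual `src/lib/cloudflare-context.ts`; the helper names and the `__DOOM_INDEX_CLOUDFLARE_ENV__` global follow conventions that appear elsewhere in this review.

```typescript
// Sketch only: assumed names, mirroring the sequence diagram above.
type CloudflareEnv = Record<string, unknown>;

// On Workers this would surface `import { env } from "cloudflare:workers"`;
// outside that runtime (unit tests, Node) the import fails and we fall through.
async function loadCloudflareWorkersEnv(): Promise<CloudflareEnv | undefined> {
  try {
    const specifier = "cloudflare:workers";
    const mod = (await import(specifier)) as { env?: CloudflareEnv };
    return mod.env;
  } catch {
    return undefined; // not running on the Workers runtime
  }
}

// Test preloads can stash a stub env on globalThis under an agreed key.
function loadGlobalEnvOverride(): CloudflareEnv | undefined {
  return (globalThis as { __DOOM_INDEX_CLOUDFLARE_ENV__?: CloudflareEnv })
    .__DOOM_INDEX_CLOUDFLARE_ENV__;
}

async function resolveCloudflareEnv(): Promise<CloudflareEnv | undefined> {
  // Workers env wins; the global override is the fallback; otherwise undefined.
  return (await loadCloudflareWorkersEnv()) ?? loadGlobalEnvOverride();
}
```

Callers then treat an `undefined` result as "no Cloudflare context available" rather than catching a thrown error.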
```mermaid
sequenceDiagram
    participant Client as Browser
    participant Worker as Cloudflare Worker (worker/index.ts)
    participant Vinext as Vinext Router
    participant Cron as scheduled handler
    Client->>Worker: HTTP request (e.g., /archive)
    Worker->>Vinext: delegate to handler.fetch
    Vinext-->>Worker: response
    Worker-->>Client: deliver response
    Cron->>Worker: scheduled event
    Worker-->>Cron: run scheduled handler
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Deploying with Cloudflare Workers

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ❌ Deployment failed (View logs) | doom-index | e988104 | Mar 15 2026, 01:18 AM |
Actionable comments posted: 5
🧹 Nitpick comments (4)
src/app/opengraph-image.tsx (1)
457-461: Consider: redundant env resolution in fallback path.

The fallback block re-resolves Cloudflare env even though the main block already attempted this. If the context resolution itself failed, this will fail again. However, this is acceptable since the main block could fail for other reasons (DB, fetch, transform), and the ultimate black-image fallback at line 543 handles total failure gracefully.
💡 Optional: Cache the env from the main block

If you want to avoid redundant resolution, you could lift `env` outside the try block:

```diff
 export default async function Image(): Promise<Response> {
   const startTime = Date.now();
   logger.info("ogp.start");
+  let env: Cloudflare.Env | undefined;
   try {
     logger.info("ogp.step-init-context");
-    const env = await resolveCloudflareEnv();
+    env = await resolveCloudflareEnv();
     if (!env) {
       throw new Error("Cloudflare context not available");
     }
     // ... rest of main flow
   } catch (error) {
     // ... error logging
     try {
-      const env = await resolveCloudflareEnv();
-      if (!env) {
-        throw new Error("Cloudflare context not available");
-      }
+      if (!env) {
+        env = await resolveCloudflareEnv();
+      }
+      if (!env) {
+        throw new Error("Cloudflare context not available");
+      }
       // ... rest of fallback flow
```

This is optional since the current implementation is correct and the extra resolution is low-cost.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/opengraph-image.tsx` around lines 457-461: the code re-calls resolveCloudflareEnv() in the fallback path even though the same env was already resolved earlier; to avoid redundant resolution, lift the env variable out of the try scope and reuse it for the fallbackAssetsFetcher instead of re-invoking resolveCloudflareEnv(), i.e. keep a single env reference produced by resolveCloudflareEnv() and use that for fallbackAssetsFetcher (referencing env, resolveCloudflareEnv, and fallbackAssetsFetcher to locate the spots to change).

tests/unit/lib/cache.test.ts (1)

137-147: Consider extracting a tiny helper for cache override/restore.

The new `cachesBeforeTest` + try/finally pattern is correct, but it's duplicated several times. A small helper (e.g., `withCachesOverride`) would reduce repetition and make these tests easier to maintain.

Also applies to: 210-221, 344-358, 429-439
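A minimal sketch of such a helper, assuming the async-callback style used in these tests (the name `withCachesOverride` is the reviewer's suggestion, not existing code):

```typescript
// Hypothetical helper: centralize the save/override/restore dance around
// globalThis.caches so individual tests stop repeating try/finally blocks.
type CacheStorageLike = unknown;

async function withCachesOverride<T>(
  override: CacheStorageLike,
  fn: () => Promise<T>,
): Promise<T> {
  const g = globalThis as { caches?: CacheStorageLike };
  const cachesBeforeTest = g.caches; // save whatever the runtime provides
  g.caches = override;
  try {
    return await fn();
  } finally {
    g.caches = cachesBeforeTest; // always restore, even when fn throws
  }
}
```

Tests then collapse to calls like `await withCachesOverride(undefined, async () => { /* import and assert */ })`.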
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/lib/cache.test.ts` around lines 137-147: extract a small helper (e.g., withCachesOverride) that accepts an override value and an async callback, saves the original (cachesBeforeTest), sets globalThis.caches to the override, awaits the callback, and finally restores the original value; replace the repeated try/finally blocks in tests with calls to withCachesOverride(undefined, async () => { const { get } = await import("@/lib/cache"); const result = await get<string>("test-key"); expect(result).toBeNull(); }) and similarly for the other occurrences (lines referencing get/import), so all cache override/restore logic is centralized and reused.

src/server/db/index.ts (1)

41-46: Unnecessary try-catch block around property access.

After the null check at lines 36-39, `env` is guaranteed to be defined. Accessing `env.DB` is a simple property access that cannot throw. Since `resolveCloudflareEnv()` never throws (it returns `undefined` on failure), this try-catch block is unreachable dead code.

♻️ Proposed simplification
```diff
   if (!env) {
     logger.error("Failed to get Cloudflare context");
     throw new Error("Failed to get Cloudflare context for D1 binding");
   }
-  try {
-    binding = (env as DbBindings).DB;
-  } catch (error) {
-    logger.error("Failed to get Cloudflare context", { error });
-    throw new Error("Failed to get Cloudflare context for D1 binding");
-  }
+  binding = (env as DbBindings).DB;
   if (!binding) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/server/db/index.ts` around lines 41-46: the try-catch around accessing env.DB is unnecessary because resolveCloudflareEnv() returns undefined rather than throwing; remove the try-catch and directly assign binding = (env as DbBindings).DB (referencing env, DbBindings, and binding) and eliminate the logger.error and thrown Error in that block; rely on the existing null/undefined check around env (and any subsequent guard) to handle the failure path instead.

src/components/archive/archive-content.tsx (1)

39-53: Extract the archive query builder before it drifts.

This `page`/`from`/`to` serialization now exists here and in `src/components/archive/painting-item.tsx`. A shared helper will keep modal-open and modal-close URLs aligned when archive state changes.
Verify each finding against the current code and only fix it if needed. In `@src/components/archive/archive-content.tsx` around lines 39 - 53, Extract the page/from/to serialization logic into a shared helper (e.g., buildArchiveQuery or useArchiveQueryParams) and replace the inline useMemo in archive-content (closeHref) and the duplicate logic in painting-item.tsx with calls to that helper; the helper should accept an object or parameters (page, from, to) and return the same query string or full path so both closeHref in the component and the modal-open link in painting-item use identical URL generation logic.
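One possible shape for the shared helper suggested above. The signature and the name `buildArchiveQuery` are assumptions; the serialization rules mirror the duplicated logic described in the comment (page only when > 1, from/to only when set):

```typescript
// Hypothetical shared helper for archive URL state. Both closeHref in
// archive-content.tsx and the modal-open link in painting-item.tsx could
// call this so their query strings can never drift apart.
interface ArchiveQueryState {
  page?: number;
  from?: string;
  to?: string;
}

function buildArchiveQuery({ page, from, to }: ArchiveQueryState): string {
  const params = new URLSearchParams();
  if (page !== undefined && page > 1) params.set("page", String(page));
  if (from) params.set("from", from);
  if (to) params.set("to", to);
  const qs = params.toString();
  return qs ? `?${qs}` : ""; // empty string when everything is at its default
}
```

Usage would look like `` const closeHref = `/archive${buildArchiveQuery({ page, from, to })}` ``.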
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2d2672ef-7d4f-4cb3-a840-5af2c73316ba
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (53)
- .github/workflows/bundle-analyzer.yml
- .github/workflows/ci.yml
- .oxfmtrc.json
- README.md
- docs/STRUCTURE.md
- docs/TECH.md
- eslint.config.ts
- knip.json
- next.config.ts
- open-next.config.ts
- package.json
- scripts/analyze-bundle.ts
- scripts/patch-opennext-setimmediate.ts
- scripts/patch-opennext-webpack-runtime.ts
- scripts/webpack/stub.cjs
- src/app/api/trpc/[trpc]/route.ts
- src/app/archive/@modal/(.)[id]/page.tsx
- src/app/archive/@modal/default.tsx
- src/app/archive/[id]/page.tsx
- src/app/archive/layout.tsx
- src/app/archive/page.tsx
- src/app/opengraph-image.tsx
- src/components/archive/archive-content.tsx
- src/components/archive/archive-detail-modal.tsx
- src/components/archive/archive-grid.tsx
- src/components/archive/archive-loader.tsx
- src/components/archive/painting-item.tsx
- src/cron.ts
- src/hooks/use-viewer.ts
- src/lib/cloudflare-context.ts
- src/lib/workers-ai-client.ts
- src/proxy.ts
- src/server/db/index.ts
- src/server/trpc/context.ts
- src/types/open-next.d.ts
- src/worker.ts
- tests/integration/lib/workers-ai-client.integration.test.ts
- tests/integration/services/token-analysis-service.integration.test.ts
- tests/preload.ts
- tests/unit/components/archive/archive-grid.test.tsx
- tests/unit/db/index.test.ts
- tests/unit/lib/cache.test.ts
- tests/unit/lib/cloudflare-context.test.ts
- tests/unit/lib/opennextjs-cloudflare-mock-empty-env.ts
- tests/unit/lib/workers-ai-client-cloudflare.test.ts
- tests/unit/lib/workers-ai-client.test.ts
- tests/unit/next-config.test.ts
- tests/unit/server/trpc/context.test.ts
- tsconfig.json
- vite.config.ts
- worker/index.ts
- wrangler.jsonc
- wrangler.toml
💤 Files with no reviewable changes (12)
- src/app/archive/@modal/(.)[id]/page.tsx
- tests/unit/lib/opennextjs-cloudflare-mock-empty-env.ts
- wrangler.toml
- scripts/webpack/stub.cjs
- scripts/analyze-bundle.ts
- scripts/patch-opennext-webpack-runtime.ts
- src/app/archive/@modal/default.tsx
- eslint.config.ts
- src/types/open-next.d.ts
- open-next.config.ts
- src/worker.ts
- scripts/patch-opennext-setimmediate.ts
```tsx
  const detailHref = useMemo(() => {
    const params = new URLSearchParams();
    if (page > 1) {
      params.set("page", String(page));
    }
    if (from) {
      params.set("from", from);
    }
    if (to) {
      params.set("to", to);
    }
    params.set("selected", item.id);
    return `/archive?${params.toString()}`;
  }, [from, item.id, page, to]);
```
Keep /archive/[id] as the canonical archive item URL.
Routing tiles to /archive?selected=... preserves list state, but it also makes copy/open-in-new-tab land on src/app/archive/page.tsx, which only has generic archive metadata. The item-specific metadata and OG tags still live in src/app/archive/[id]/page.tsx, so this change regresses sharing and canonical permalinks for archive items. Keep the modal flow as an enhancement, but retain a canonical /archive/[id] target.
Also applies to: 57-57
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/archive/painting-item.tsx` around lines 31 - 44, The current
detailHref useMemo builds a query-targeted URL (/archive?selected=...) which
breaks canonical/permalink routing; change detailHref to point at the canonical
item route `/archive/${item.id}` and, if needed, append the state-preserving
query string for page/from/to (keep params for page, from, to but remove
`selected`), e.g. return `/archive/${item.id}${paramsString}`; update the other
similar link-generation occurrence mentioned (same pattern at the second
occurrence) so both use the canonical `/archive/${item.id}` path with optional
page/from/to query params.
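The fix described in the prompt above can be sketched as a standalone function (in the component it would live inside the existing `useMemo`; the function name is illustrative):

```typescript
// Hypothetical sketch of the canonical-permalink fix: route to
// /archive/<id> and carry only the state-preserving params, dropping
// `selected` entirely.
function buildDetailHref(itemId: string, page: number, from?: string, to?: string): string {
  const params = new URLSearchParams();
  if (page > 1) params.set("page", String(page));
  if (from) params.set("from", from);
  if (to) params.set("to", to);
  const qs = params.toString();
  return `/archive/${encodeURIComponent(itemId)}${qs ? `?${qs}` : ""}`;
}
```

Copy-link and open-in-new-tab then land on `src/app/archive/[id]/page.tsx` with its item-specific metadata, while the intercepting modal route still handles in-app navigation.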
```ts
  (globalThis as CloudflareTestGlobal).__DOOM_INDEX_CLOUDFLARE_ENV__ = {
    DB: testD1Db as unknown as D1Database,
    ASSETS: {} as Fetcher,
    VIEWER_KV: {} as KVNamespace,
    AI: {} as Ai,
  } as CloudflareEnv;
});
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Show declared Cloudflare env contract and current preload stub side-by-side
rg -n -A40 -B2 '^interface Env' cloudflare-env.d.ts
rg -n -A20 -B4 '__DOOM_INDEX_CLOUDFLARE_ENV__\s*=\s*{' tests/preload.ts
```

Repository: doom-protocol/doom-index
Length of output: 205

🏁 Script executed:

```bash
#!/bin/bash
# Find and read the cloudflare-env.d.ts file
echo "=== cloudflare-env.d.ts ==="
fd cloudflare-env.d.ts -x cat {}
echo ""
echo "=== tests/preload.ts (lines around __DOOM_INDEX_CLOUDFLARE_ENV__) ==="
fd preload.ts tests -x cat -n {}
```

Repository: doom-protocol/doom-index
Length of output: 50381

🏁 Script executed:

```bash
cat -n tests/preload.ts
```

Repository: doom-protocol/doom-index
Length of output: 5526

🏁 Script executed:

```bash
# Verify the exact type definition being used and count total fields
rg -c "^\s+[A-Z_]+:" cloudflare-env.d.ts | head -1
```

Repository: doom-protocol/doom-index
Length of output: 70
Avoid force-casting a partial test env to CloudflareEnv.
Lines 119–124 only set four fields, then cast to CloudflareEnv. This bypasses the full binding contract and can hide missing-env failures in integration tests. The Cloudflare.Env interface requires 28 bindings; the stub provides only 4.
Safer fixture pattern
```diff
- (globalThis as CloudflareTestGlobal).__DOOM_INDEX_CLOUDFLARE_ENV__ = {
-   DB: testD1Db as unknown as D1Database,
-   ASSETS: {} as Fetcher,
-   VIEWER_KV: {} as KVNamespace,
-   AI: {} as Ai,
- } as CloudflareEnv;
+ const testCloudflareEnv: CloudflareEnv = {
+   DB: testD1Db as unknown as D1Database,
+   ASSETS: {} as Fetcher,
+   VIEWER_KV: {} as KVNamespace,
+   AI: {} as Ai,
+   NEXT_INC_CACHE_R2_BUCKET: {} as R2Bucket,
+   IMAGES: {} as ImagesBinding,
+   ARWEAVE_GATEWAY_BASE_URL: "https://permagate.io",
+   NODE_ENV: "test",
+   NEXTJS_ENV: "test",
+   NEXT_PUBLIC_BASE_URL: "http://localhost:8787",
+   LOG_LEVEL: "DEBUG",
+   IMAGE_MODEL: "runware:400@1",
+   NEXT_PUBLIC_GENERATION_INTERVAL_MS: "600000",
+   NEXT_PRIVATE_DEBUG_CACHE: "false",
+   RUNWARE_API_KEY: "test",
+   CLOUDFLARE_ACCOUNT_ID: "test",
+   R2_ACCESS_KEY_ID: "test",
+   R2_SECRET_ACCESS_KEY: "test",
+   CLOUDFLARE_DATABASE_ID: "test",
+   CLOUDFLARE_D1_TOKEN: "test",
+   TAVILY_API_KEY: "test",
+   COINGECKO_API_KEY: "test",
+   SLACK_WEBHOOK_URL: "test",
+   CACHE_PURGE_API_TOKEN: "test",
+   CACHE_PURGE_ZONE_ID: "test",
+   ADMIN_SECRET: "test",
+   ARDRIVE_TURBO_SECRET_KEY: "test",
+   WORKER_SELF_REFERENCE: {} as Fetcher,
+ };
+ (globalThis as CloudflareTestGlobal).__DOOM_INDEX_CLOUDFLARE_ENV__ = testCloudflareEnv;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/preload.ts` around lines 119 - 125, The test is force-casting a partial
fixture to CloudflareEnv (globalThis.__DOOM_INDEX_CLOUDFLARE_ENV__) which hides
missing bindings; stop asserting a full CloudflareEnv for a 4-field stub.
Instead create a proper test fixture or builder that returns either a fully
populated CloudflareEnv or use Partial<CloudflareEnv> for the test variable and
update usages to accept Partial where appropriate; locate the assignment to
__DOOM_INDEX_CLOUDFLARE_ENV__ in tests/preload.ts and replace the cast to
CloudflareEnv with a safer approach (e.g., buildTestEnv() that fills required
bindings or mark the value as Partial<CloudflareEnv>) and adjust any consumers
that expect CloudflareEnv (functions referencing D1Database, Fetcher,
KVNamespace, Ai) to accept the partial fixture or to throw/assert when a real
binding is required.
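One way to make the suggested builder fail loudly when a test touches a binding it never stubbed is a Proxy over the partial fixture. The name `buildTestEnv` comes from the prompt above; the Proxy approach is an assumption, not code from the PR:

```typescript
// Hypothetical fixture builder: the four real bindings are provided; any
// other property access throws instead of silently yielding undefined, which
// surfaces missing-env failures at the point of use.
type PartialEnv = Record<string, unknown>;

function buildTestEnv(overrides: PartialEnv): PartialEnv {
  return new Proxy(overrides, {
    get(target, prop: string | symbol) {
      if (typeof prop === "symbol" || prop in target) {
        return Reflect.get(target, prop);
      }
      throw new Error(`Test env binding not stubbed: ${String(prop)}`);
    },
  });
}
```

The preload would then do `globalThis.__DOOM_INDEX_CLOUDFLARE_ENV__ = buildTestEnv({ DB: testD1Db, ASSETS: {}, VIEWER_KV: {}, AI: {} })`, optionally cast once at the assignment site.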
worker/index.ts (Outdated)
```ts
interface Env {
  ASSETS: Fetcher;
  IMAGES: {
    input: (stream: ReadableStream) => {
      transform: (options: Record<string, unknown>) => {
        output: (options: { format: string; quality: number }) => Promise<{ response: () => Response }>;
      };
    };
  };
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify Cloudflare.Env is defined and includes required bindings
echo "=== Checking Cloudflare.Env definition ==="
rg -n "interface Env \{" cloudflare-env.d.ts | head -5
echo ""
echo "=== Checking handleScheduledEvent parameter type ==="
rg -n "handleScheduledEvent" src/cron.ts | head -5
```

Repository: doom-protocol/doom-index
Length of output: 257

🏁 Script executed:

```bash
#!/bin/bash
# Check Cloudflare.Env interface definition
echo "=== Cloudflare.Env interface (lines 35-55) ==="
sed -n '35,55p' cloudflare-env.d.ts
echo ""
echo "=== handleScheduledEvent function signature ==="
sed -n '72,85p' src/cron.ts
echo ""
echo "=== Checking imports in worker/index.ts ==="
head -20 worker/index.ts
```

Repository: doom-protocol/doom-index
Length of output: 1849

🏁 Script executed:

```bash
#!/bin/bash
# Check how env is passed to handleScheduledEvent in worker/index.ts
echo "=== worker/index.ts lines 20-45 (handler exports) ==="
sed -n '20,45p' worker/index.ts
echo ""
echo "=== Check for any ImagesBinding type definition ==="
rg -n "ImagesBinding" cloudflare-env.d.ts | head -3
```

Repository: doom-protocol/doom-index
Length of output: 1115

🏁 Script executed:

```bash
#!/bin/bash
# Check ImagesBinding definition
echo "=== ImagesBinding type definition ==="
rg -n "type ImagesBinding\|interface ImagesBinding" cloudflare-env.d.ts -A 10 | head -20
echo ""
echo "=== Verify if scheduled handler has type annotation for env ==="
sed -n '40,50p' worker/index.ts
echo ""
echo "=== Check the full worker/index.ts to see the export structure ==="
wc -l worker/index.ts
```

Repository: doom-protocol/doom-index
Length of output: 352

🏁 Script executed:

```bash
#!/bin/bash
# Check the actual ImagesBinding definition and how it's used
echo "=== Find ImagesBinding definition ==="
rg -n "type ImagesBinding\|interface ImagesBinding" cloudflare-env.d.ts -A 15 | head -30
echo ""
echo "=== Full context of worker/index.ts to see exact handler structure ==="
cat -n worker/index.ts
```

Repository: doom-protocol/doom-index
Length of output: 1954

🏁 Script executed:

```bash
#!/bin/bash
# Check if the export default object has any type annotation
echo "=== Checking export structure and any type annotations ==="
cat -n worker/index.ts | grep -A 30 "export default"
echo ""
echo "=== Check what Cloudflare.Env actually includes (full interface) ==="
sed -n '38,70p' cloudflare-env.d.ts
echo ""
echo "=== Verify ImagesBinding is not a type alias ==="
rg -n "ImagesBinding" cloudflare-env.d.ts -B 2 -A 8 | head -40
```

Repository: doom-protocol/doom-index
Length of output: 3559

🏁 Script executed:

```bash
#!/bin/bash
# Check the complete ImagesBinding definition
echo "=== ImagesBinding interface definition ==="
sed -n '11043,11080p' cloudflare-env.d.ts
echo ""
echo "=== Check if handler.fetch is typed in vinext or uses the Env type parameter ==="
rg -n "from.*vinext" worker/index.ts
echo ""
echo "=== Check if there are any eslint-disable comments in worker/index.ts ==="
grep -n "eslint-disable" worker/index.ts
```

Repository: doom-protocol/doom-index
Length of output: 1984
Replace local Env interface with Cloudflare.Env.
The local Env interface is both incomplete and incorrect. It's missing bindings (DB, AI, VIEWER_KV, NEXT_INC_CACHE_R2_BUCKET, environment variables) that handleScheduledEvent expects, and its IMAGES type doesn't match the actual ImagesBinding structure from cloudflare-env.d.ts. This creates type safety gaps even though Wrangler provides the full environment at runtime.
🔧 Proposed fix

```diff
 import handler from "vinext/server/app-router-entry";
 import { DEFAULT_DEVICE_SIZES, DEFAULT_IMAGE_SIZES, handleImageOptimization } from "vinext/server/image-optimization";
 import { handleScheduledEvent } from "../src/cron";
-interface Env {
-  ASSETS: Fetcher;
-  IMAGES: {
-    input: (stream: ReadableStream) => {
-      transform: (options: Record<string, unknown>) => {
-        output: (options: { format: string; quality: number }) => Promise<{ response: () => Response }>;
-      };
-    };
-  };
-}
 interface ExecutionContext {
   waitUntil: (promise: Promise<unknown>) => void;
   passThroughOnException: () => void;
 }
 export default {
-  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
+  async fetch(request: Request, env: Cloudflare.Env, ctx: ExecutionContext): Promise<Response> {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-interface Env {
-  ASSETS: Fetcher;
-  IMAGES: {
-    input: (stream: ReadableStream) => {
-      transform: (options: Record<string, unknown>) => {
-        output: (options: { format: string; quality: number }) => Promise<{ response: () => Response }>;
-      };
-    };
-  };
-}
+import handler from "vinext/server/app-router-entry";
+import { DEFAULT_DEVICE_SIZES, DEFAULT_IMAGE_SIZES, handleImageOptimization } from "vinext/server/image-optimization";
+import { handleScheduledEvent } from "../src/cron";
+interface ExecutionContext {
+  waitUntil: (promise: Promise<unknown>) => void;
+  passThroughOnException: () => void;
+}
+export default {
+  async fetch(request: Request, env: Cloudflare.Env, ctx: ExecutionContext): Promise<Response> {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@worker/index.ts` around lines 5 - 14, The local Env interface is incomplete
and mismatched to the runtime; replace the custom interface declaration named
"Env" with the provided Cloudflare.Env type and update any references to use
that type; specifically remove or rename the local IMAGES type and adopt the
official ImagesBinding shape from cloudflare-env.d.ts, and ensure the env passed
into handleScheduledEvent (and any usages that reference DB, AI, VIEWER_KV,
NEXT_INC_CACHE_R2_BUCKET, and other environment variables) uses Cloudflare.Env
so those bindings are present and correctly typed.
worker/index.ts (Outdated)
```ts
fetchAsset: async (path) => env.ASSETS.fetch(new Request(new URL(path, request.url))),
transformImage: async (body, { width, format, quality }) => {
  const result = await env.IMAGES.input(body)
    .transform(width > 0 ? { width } : {})
    .output({ format, quality });
  return result.response();
},
```
🧩 Analysis chain
🌐 Web query: Cloudflare Workers Images binding API input transform output

💡 Result:

The Cloudflare Workers Images binding is a small fluent API that builds an image-processing "pipeline":

1) Input

Configure the binding (Wrangler/Dashboard) as IMAGES, then create a pipeline from input bytes:

- `env.IMAGES.input(image)` where `image` is provided as a `ReadableStream` (common when you `fetch()` from R2/Assets/etc.). The docs describe the input as a `ReadableStream`. [1]
- You can also pass raw bytes (for example an uploaded file read via `arrayBuffer()`), as shown in Cloudflare's reference example: `env.IMAGES.input(fileBuffer)`. [2]

2) Transform (and compose)

Chain operations:

- `.transform({...})` to apply transformation parameters (resize, rotate, blur, etc.). [1]
- `.draw(overlayPipelineOrStream, drawOptions)` to overlay another image (which can itself be `env.IMAGES.input(...).transform(...)`). [1]

You can call `.transform()` multiple times; conceptually each call adds a step/node in the operation graph. [2]

3) Output

Finish with:

- `.output({ format: "image/avif" | "image/webp" | "image/jpeg" | ..., quality?, anim? })`
- `format` is required (no default fallback). [1]
- `quality` is 1-100; `anim: false` can reduce animations to a still. [1]

Then materialize the result as a standard Workers Response:

- `(...).response()` returns a `Response` whose `body` you can return to the client or stream into R2 (`await env.R2.put(name, response.body)`). [1][2]

Minimal example:

```ts
const out = (
  await env.IMAGES
    .input(inputBytesOrStream)
    .transform({ width: 512 })
    .output({ format: "image/avif", quality: 75 })
).response();
return out;
```

Sources: Cloudflare Images binding docs [1] and Cloudflare's end-to-end Worker example using input(fileBuffer) ... output(...) ... response() and uploading the response body to R2 [2].

References

- [1] Cloudflare Docs — "Bind to Workers API" (Images binding)
- [2] Cloudflare Blog (2025-04-03) — "Improve your media pipelines with the Images binding for Cloudflare Workers"
🏁 Script executed:

```bash
sed -n '7,13p' worker/index.ts
```

Repository: doom-protocol/doom-index
Length of output: 303

🏁 Script executed:

```bash
sed -n '1,40p' worker/index.ts
```

Repository: doom-protocol/doom-index
Length of output: 1440

🏁 Script executed:

```bash
grep -n "eslint-disable" worker/index.ts
```

Repository: doom-protocol/doom-index
Length of output: 50
Fix Env.IMAGES interface to match Cloudflare's ImagesBinding API.
The interface definition (lines 7–13) does not accurately reflect the Cloudflare Images binding. Specifically:
- `input()` should accept `ReadableStream | BufferSource`, not just `ReadableStream`
- `quality` in the `output()` options should be optional, not required
- The `anim` parameter (optional) is missing from the `output()` options

Update the interface to:

```ts
input: (stream: ReadableStream | BufferSource) => { ... };
output: (options: { format: string; quality?: number; anim?: boolean }) => Promise<{ response: () => Response }>;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@worker/index.ts` around lines 30 - 36, Update the Env.IMAGES binding types to
match Cloudflare ImagesBinding: change the input signature on Env.IMAGES (and
any related types used by transformImage) to accept ReadableStream |
BufferSource, and update the output method options to make quality optional and
add the optional anim boolean (i.e., output: (options: { format: string;
quality?: number; anim?: boolean }) => Promise<{ response: () => Response }>);
adjust any callers such as transformImage to compile with the new signatures.
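For reference, the corrected signatures can be written out as a local interface and exercised with a toy mock of the fluent `input().transform().output()` chain. The `*Like` names and the mock are illustrative only; the authoritative types are the generated ones in cloudflare-env.d.ts:

```typescript
// Illustrative shape matching the corrected signatures above; prefer the
// generated ImagesBinding type from cloudflare-env.d.ts in real code.
interface ImagesOutputLike {
  response: () => Response;
}

interface ImagesTransformerLike {
  transform: (options: Record<string, unknown>) => ImagesTransformerLike;
  output: (options: { format: string; quality?: number; anim?: boolean }) => Promise<ImagesOutputLike>;
}

interface ImagesBindingLike {
  // Accept both streams and raw bytes, per the documented binding.
  input: (source: ReadableStream | BufferSource) => ImagesTransformerLike;
}

// Toy in-memory mock: echoes the requested format back as the content-type
// so the chain can be exercised without the Workers runtime.
const mockImages: ImagesBindingLike = {
  input: () => {
    const pipeline: ImagesTransformerLike = {
      transform: () => pipeline,
      output: async ({ format }) => ({
        response: () => new Response(null, { headers: { "content-type": format } }),
      }),
    };
    return pipeline;
  },
};
```

A mock with this shape also makes the `transformImage` callback in worker/index.ts testable without a deployed Images binding.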
| "r2_buckets": [ | ||
| { | ||
| "binding": "NEXT_INC_CACHE_R2_BUCKET", | ||
| "bucket_name": "doom-index-next-inc-cache", | ||
| "preview_bucket_name": "doom-index-next-inc-cache", | ||
| }, | ||
| ], | ||
| "d1_databases": [ | ||
| { | ||
| "binding": "DB", | ||
| "database_name": "doom-index", | ||
| "database_id": "f953738f-09c7-47bd-8235-2affa996db8e", | ||
| }, | ||
| ], | ||
| "kv_namespaces": [ | ||
| { | ||
| "binding": "VIEWER_KV", | ||
| "id": "8c588d03a8ab4cd19e0c458dcabf1bec", | ||
| }, |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify whether top-level and env.dev identifiers are the same
rg -n -A2 -B2 '"(bucket_name|database_id|id)"\s*:' wrangler.jsonc
Length of output: 1027
Use isolated dev bindings instead of production resource IDs.
The env.dev section at lines 88, 96, and 102 reuses the same R2 bucket name, D1 database ID, and KV namespace ID as production. This risks unintended mutations to production data during local development.
Create separate dev-only resources with distinct identifiers:
- R2 bucket: `doom-index-next-inc-cache-dev`
- D1 database ID: (create separate dev database)
- KV namespace ID: (create separate dev namespace)
Suggested config adjustment
"r2_buckets": [
{
"binding": "NEXT_INC_CACHE_R2_BUCKET",
- "bucket_name": "doom-index-next-inc-cache",
- "preview_bucket_name": "doom-index-next-inc-cache",
+ "bucket_name": "doom-index-next-inc-cache-dev",
+ "preview_bucket_name": "doom-index-next-inc-cache-dev",
},
],
"d1_databases": [
{
"binding": "DB",
- "database_name": "doom-index",
- "database_id": "f953738f-09c7-47bd-8235-2affa996db8e",
+ "database_name": "doom-index-dev",
+ "database_id": "<dev-d1-database-id>",
},
],
"kv_namespaces": [
{
"binding": "VIEWER_KV",
- "id": "8c588d03a8ab4cd19e0c458dcabf1bec",
+ "id": "<dev-kv-namespace-id>",
},
],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@wrangler.jsonc` around lines 85 - 103, The dev environment in wrangler.jsonc
is reusing production resource identifiers (r2_buckets doom-index-next-inc-cache
with binding NEXT_INC_CACHE_R2_BUCKET, d1_databases entry DB with database_id
f953738f-09c7-47bd-8235-2affa996db8e, and kv_namespaces VIEWER_KV id
8c588d03a8ab4cd19e0c458dcabf1bec); replace these with dev-only resources to
avoid mutating production: create and reference a separate R2 bucket name
doom-index-next-inc-cache-dev for the NEXT_INC_CACHE_R2_BUCKET binding,
provision a distinct D1 database and update the DB database_id to the new dev
ID, and provision a separate KV namespace and update the VIEWER_KV id to the new
dev namespace ID in the env.dev block.
Actionable comments posted: 1
🧹 Nitpick comments (1)
package.json (1)
22-22: Align type generation config with deploy config source-of-truth.
Line 22 generates types from `wrangler.jsonc`, while deploy uses `dist/server/wrangler.json` (generated by `vinext build` at lines 10, 34). Since vinext may transform bindings in the generated config, `cloudflare-env.d.ts` can drift from runtime bindings, as evidenced by the manual cron trigger sync requirement in `src/cron.ts`.
Consider either generating types from the deployed config (after build) or explicitly documenting/validating that both configs always expose the same bindings.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` at line 22, The cf-typegen script currently points at wrangler.jsonc which can diverge from the generated runtime config; update the type generation to use the built deploy config (dist/server/wrangler.json) so cloudflare-env.d.ts matches runtime bindings, and ensure this runs after the vinext build (e.g., create/adjust a script that runs vinext build before invoking wrangler types against dist/server/wrangler.json); optionally add a small validation step that compares binding keys between wrangler.jsonc and dist/server/wrangler.json and fails the build if they differ to prevent drift (referencing the npm script name "cf-typegen", the output file "cloudflare-env.d.ts", and the generated config "dist/server/wrangler.json").
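The binding-key comparison suggested above could be a small pure function run in CI after `vinext build`; a sketch under the assumption that both configs parse to plain JSON objects (the section list is illustrative, not the full wrangler schema):

```typescript
// Returns the names of binding-bearing sections whose binding sets differ
// between two wrangler configs (e.g. wrangler.jsonc vs dist/server/wrangler.json).
type WranglerConfig = Record<string, unknown>;

const BINDING_SECTIONS = ["r2_buckets", "d1_databases", "kv_namespaces"] as const;

function bindingNames(config: WranglerConfig, section: string): string[] {
  const entries = config[section];
  if (!Array.isArray(entries)) return [];
  // Each entry in these sections carries a "binding" name; collect and sort them.
  return entries.map((entry) => String((entry as { binding?: string }).binding ?? "")).sort();
}

function diffBindingSections(a: WranglerConfig, b: WranglerConfig): string[] {
  return BINDING_SECTIONS.filter(
    (section) => JSON.stringify(bindingNames(a, section)) !== JSON.stringify(bindingNames(b, section)),
  );
}
```

A guard script could read both files (for example with `Bun.file(...).json()`) and exit non-zero whenever `diffBindingSections` returns a non-empty list, failing the build before drift reaches production.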
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@package.json`:
- Line 36: Update the npm script "vinext:check" so it uses the pinned vinext
version instead of `@latest`: change the script value from "bunx vinext@latest
check" to reference the package version declared in dependencies (e.g., "bunx
vinext@^0.0.30 check" or the exact version from the "vinext" entry) to ensure CI
reproducibility; modify the "vinext:check" script string accordingly.
---
Nitpick comments:
In `@package.json`:
- Line 22: The cf-typegen script currently points at wrangler.jsonc which can
diverge from the generated runtime config; update the type generation to use the
built deploy config (dist/server/wrangler.json) so cloudflare-env.d.ts matches
runtime bindings, and ensure this runs after the vinext build (e.g.,
create/adjust a script that runs vinext build before invoking wrangler types
against dist/server/wrangler.json); optionally add a small validation step that
compares binding keys between wrangler.jsonc and dist/server/wrangler.json and
fails the build if they differ to prevent drift (referencing the npm script name
"cf-typegen", the output file "cloudflare-env.d.ts", and the generated config
"dist/server/wrangler.json").
package.json
Outdated
"analyze:bundle": "bun run scripts/analyze-bundle.ts",
"deploy:web": "bunx vinext build && bunx wrangler deploy --config dist/server/wrangler.json",
"deploy": "bun run deploy:web",
"vinext:check": "bunx vinext@latest check",
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify whether any script still uses floating `@latest` binaries.
python - <<'PY'
import json
from pathlib import Path
pkg = json.loads(Path("package.json").read_text())
scripts = pkg.get("scripts", {})
floating = {k:v for k,v in scripts.items() if "@latest" in v}
print("Scripts using `@latest`:")
for k,v in floating.items():
print(f" - {k}: {v}")
if not floating:
print(" - none")
PY
Length of output: 132
🏁 Script executed:
sed -n '100,110p' package.json
Length of output: 322
Use the pinned vinext version instead of @latest to maintain CI reproducibility.
Line 36 uses `bunx vinext@latest check`, which fetches the latest version from npm and bypasses your pinned dependency at line 103 (`"vinext": "^0.0.30"`). This can cause unexpected build failures when upstream releases introduce breaking changes.
Suggested fix
- "vinext:check": "bunx vinext@latest check",
+ "vinext:check": "bunx vinext check",
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- "vinext:check": "bunx vinext@latest check",
+ "vinext:check": "bunx vinext check",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` at line 36, Update the npm script "vinext:check" so it uses the
pinned vinext version instead of `@latest`: change the script value from "bunx
vinext@latest check" to reference the package version declared in dependencies
(e.g., "bunx vinext@^0.0.30 check" or the exact version from the "vinext" entry)
to ensure CI reproducibility; modify the "vinext:check" script string
accordingly.
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/proxy.ts (1)
39-41: Consider updating cache key prefix for consistency.
The cache key prefix still uses `"middleware:404:"` while the function has been renamed to `proxy`. This doesn't affect functionality but creates a naming inconsistency.
♻️ Suggested rename
function get404CacheKey(pathname: string): string {
-  return `middleware:404:${pathname}`;
+  return `proxy:404:${pathname}`;
}
Note: Changing this would invalidate existing cached entries, so you may prefer to keep the current prefix if cache continuity matters during deployment.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/proxy.ts` around lines 39 - 41, Update the cache key prefix in function get404CacheKey to match the renamed module: replace the hardcoded "middleware:404:" prefix with "proxy:404:" (i.e., change the returned string built from pathname in get404CacheKey). Also search for any other occurrences of "middleware:404:" and update them to "proxy:404:" to keep keys consistent, being aware this will invalidate existing cached entries.
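If cache continuity during the rollout matters, a transitional dual-read is one option: write under the new prefix while still falling back to the legacy one. A sketch under an assumed minimal get/put cache surface (`CacheLike` is hypothetical, not the project's actual cache client):

```typescript
const NEW_PREFIX = "proxy:404:";
const LEGACY_PREFIX = "middleware:404:";

interface CacheLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// Reads the new key first, falls back to the legacy key, and re-homes any
// legacy hit under the new prefix so the fallback can be deleted later.
async function get404WithFallback(cache: CacheLike, pathname: string): Promise<string | null> {
  const fresh = await cache.get(NEW_PREFIX + pathname);
  if (fresh !== null) return fresh;
  const legacy = await cache.get(LEGACY_PREFIX + pathname);
  if (legacy !== null) await cache.put(NEW_PREFIX + pathname, legacy);
  return legacy;
}
```

Once legacy entries have expired, the fallback branch (and `LEGACY_PREFIX`) can be removed, leaving only the renamed prefix.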
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/ui/progressive-image.tsx`:
- Around line 138-140: The Image component currently forces unoptimized which
disables Next.js optimization and negates responsive sizes; update the
ProgressiveImage component to accept an optional unoptimized prop (e.g., in
ProgressiveImageProps / the ProgressiveImage function) with a sensible default
(false) and pass that prop into the underlying <Image> instead of the hardcoded
value, ensuring the resolvedSizes/sizes logic continues to use the passed sizes;
then update callers (e.g., where painting items render ProgressiveImage) to pass
unoptimized only when explicitly needed.
In `@src/mdx-components.tsx`:
- Around line 15-20: The MdxImage component currently passes an empty string to
next/image which can break rendering; update the MdxImage function to
early-return null when src is falsy or an empty string (e.g., !src || src.trim()
=== ""), so the component does not render the <Image> or surrounding <figure>
when there is no usable source; locate MdxImage in src/mdx-components.tsx and
add this guard at the top of the function before rendering.
---
Nitpick comments:
In `@src/proxy.ts`:
- Around line 39-41: Update the cache key prefix in function get404CacheKey to
match the renamed module: replace the hardcoded "middleware:404:" prefix with
"proxy:404:" (i.e., change the returned string built from pathname in
get404CacheKey). Also search for any other occurrences of "middleware:404:" and
update them to "proxy:404:" to keep keys consistent, being aware this will
invalidate existing cached entries.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7b43fb71-eb1f-48c0-9f59-41884f50190c
📒 Files selected for processing (8)
- next.config.ts
- src/components/ui/progressive-image.tsx
- src/mdx-components.tsx
- src/proxy.ts
- tests/unit/components/archive/archive-grid.test.tsx
- tests/unit/components/ui/progressive-image.test.tsx
- tests/unit/next-config.test.ts
- worker/index.ts
💤 Files with no reviewable changes (2)
- tests/unit/next-config.test.ts
- next.config.ts
sizes={resolvedSizes}
unoptimized
loading={loading ?? "lazy"}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Check Next image config / loader usage"
config_files="$(fd 'next\.config\.(ts|js|mjs|cjs)$' -HI || true)"
if [ -n "$config_files" ]; then
echo "$config_files"
# shellcheck disable=SC2086
rg -n 'images\s*:|loader|unoptimized' $config_files
else
echo "No next.config.* found"
fi
echo "Check ProgressiveImage usage on archive route"
rg -n 'ProgressiveImage|ARCHIVE_GRID_SIZES|sizes=|unoptimized' \
src/components/ui/progressive-image.tsx \
src/components/archive/painting-item.tsx
Length of output: 845
🏁 Script executed:
cat -n src/components/ui/progressive-image.tsx | head -160
Length of output: 4807
🏁 Script executed:
cat -n src/components/archive/painting-item.tsx | head -90
Length of output: 3397
🏁 Script executed:
# Check for eslint-disable comments in progressive-image.tsx
rg -n 'eslint-disable' src/components/ui/progressive-image.tsx
Length of output: 50
🏁 Script executed:
rg -n 'ProgressiveImage' --type ts --type tsx -A 3 -B 1
Length of output: 95
🏁 Script executed:
rg -n 'ProgressiveImage' -A 3 -B 1 src/
Length of output: 1508
🏁 Script executed:
rg -n 'Worker|worker|edge' src/components/ui/progressive-image.tsx
Length of output: 50
🏁 Script executed:
# Check if there are any comments explaining the unoptimized setting
cat -n src/components/ui/progressive-image.tsx | sed -n '130,145p'
Length of output: 704
Make unoptimized a configurable prop to avoid negating responsive image benefits.
At line 139, `unoptimized` is hardcoded to `true`, which disables Next.js image optimization. This directly undermines the responsive `sizes` attribute passed from the archive grid (`ARCHIVE_GRID_SIZES` at src/components/archive/painting-item.tsx:72). On a high-traffic archive route with many images, this prevents responsive image optimization benefits.
Suggested fix:
Configurable unoptimized prop
interface ProgressiveImageProps {
src: string;
sources?: string[];
alt: string;
className?: string;
fill?: boolean;
width?: number;
height?: number;
sizes?: ResponsiveSizes;
loading?: "eager" | "lazy";
+ unoptimized?: boolean;
onLoad?: () => void;
onError?: (error: SyntheticEvent<HTMLImageElement>) => void;
fallback?: ReactNode;
skeleton?: ReactNode;
logContext?: Record<string, unknown>;
}
export const ProgressiveImage: FC<ProgressiveImageProps> = ({
src,
sources,
alt,
className = "",
fill = false,
width,
height,
sizes,
loading,
+ unoptimized = false,
onLoad,
onError,
fallback,
skeleton,
logContext,
}) => {
// ...
<Image
// ...
sizes={resolvedSizes}
- unoptimized
+ unoptimized={unoptimized}
loading={loading ?? "lazy"}
/>🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/ui/progressive-image.tsx` around lines 138 - 140, The Image
component currently forces unoptimized which disables Next.js optimization and
negates responsive sizes; update the ProgressiveImage component to accept an
optional unoptimized prop (e.g., in ProgressiveImageProps / the ProgressiveImage
function) with a sensible default (false) and pass that prop into the underlying
<Image> instead of the hardcoded value, ensuring the resolvedSizes/sizes logic
continues to use the passed sizes; then update callers (e.g., where painting
items render ProgressiveImage) to pass unoptimized only when explicitly needed.
function MdxImage({ src, alt }: { src?: string; alt?: string }) {
  return (
    <figure className="my-6 flex flex-col items-center">
      <Image
        src={src ?? ""}
        alt={alt ?? ""}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "1) Find Markdown/MDX image syntax with empty URL"
fd -e md -e mdx | xargs -r rg -nP '!\[[^\]]*\]\(\s*\)' || true
echo "2) Find JSX <img> in MDX files without src attribute"
fd -e mdx | xargs -r rg -nP '<img(?![^>]*\bsrc=)[^>]*>' || true
echo "3) Confirm current fallback-to-empty-src behavior"
rg -n 'function MdxImage|src\?: string|src=\{src \?\? ""\}' src/mdx-components.tsx
Length of output: 315
🏁 Script executed:
cat -n src/mdx-components.tsx
Length of output: 6063
🏁 Script executed:
rg -n "MdxImage" src/mdx-components.tsx
Length of output: 230
🏁 Script executed:
fd -type f -name "*.tsx" -o -name "*.ts" -o -name "*.jsx" -o -name "*.js" | xargs rg -l "MdxImage" 2>/dev/null | head -20
Length of output: 278
Guard next/image against missing src to avoid runtime failures.
Line 19 passes src="" (empty string) to next/image, which causes rendering errors. When MDX processes images without a usable source, the component should return null instead of rendering a broken image element.
Suggested fix
function MdxImage({ src, alt }: { src?: string; alt?: string }) {
+ if (!src) {
+ return null;
+ }
+
return (
<figure className="my-6 flex flex-col items-center">
<Image
- src={src ?? ""}
+ src={src}
alt={alt ?? ""}
width={320}
height={240}
unoptimized
loading="lazy"
className="h-auto max-w-xs border border-[#ddd]"
/>
{alt && <figcaption className="mt-2 text-center text-sm text-[#666] italic">{alt}</figcaption>}
</figure>
);
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mdx-components.tsx` around lines 15 - 20, The MdxImage component
currently passes an empty string to next/image which can break rendering; update
the MdxImage function to early-return null when src is falsy or an empty string
(e.g., !src || src.trim() === ""), so the component does not render the <Image>
or surrounding <figure> when there is no usable source; locate MdxImage in
src/mdx-components.tsx and add this guard at the top of the function before
rendering.
- Added 'dist/' to .gitignore to exclude distribution files.
- Introduced a new rule for using Bun instead of Node.js and related tools, detailing commands and usage examples for various scenarios.
- Created a new document for vinext evaluation groundwork, outlining changes and migration strategies for future development.
- Introduced a new .env.example file to standardize environment variable configuration.
- Updated references in various files to use .env.example instead of .dev.vars or .example.vars.
- Adjusted README and documentation to reflect the new environment setup process.
- Enhanced CI/CD workflows to utilize the new .env.example for testing and deployment.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.agents/rules/use-bun-instead-of-node-vite-npm-pnpm.mdc:
- Line 41: The rule text "Use HTML imports with `Bun.serve()`. Don't use
`vite`." conflicts with the repo which uses vinext and a `vite.config.ts` with
Vite plugins (e.g. `@cloudflare/vite-plugin`, `@tailwindcss/vite`); update the
rule to either (A) allow Vite when used as part of vinext/framework tooling by
adding an explicit exception mentioning "vinext" and `vite.config.ts` and the
listed plugins, or (B) remove the blanket "Don't use `vite`" restriction and
replace it with guidance that Bun HTML imports are preferred unless the project
uses vinext/Vite-based tooling (mention vinext and the Vite plugins as
examples). Ensure the rule text references vinext, `vite.config.ts`, and the
plugin names so future reviewers understand the exception.
In `@docs/TECH.md`:
- Line 57: The documentation's Cloudflare Workers cron entry is inconsistent
with wrangler.jsonc; update docs/TECH.md to clarify environment-specific
schedules by either replacing the single "10分ごとトリガ - `*/10 * * * *`" line with
explicit entries for production and development (e.g., production `"0 * * * *"`
and dev `"*/10 * * * *"`) or note that the documented cron refers to the dev
environment; reference the actual cron strings shown in wrangler.jsonc and label
them accordingly so readers know which schedule applies to production vs dev.
In `@package.json`:
- Line 71: package.json currently pins "@cloudflare/vite-plugin": "^1.28.0",
which declares peer deps for Vite ^6/7 and is incompatible with Vite 8; update
package.json to use a `@cloudflare/vite-plugin` release that lists Vite ^8 (or
alternatively downgrade your Vite dependency to a 7.x version) and then run the
package manager to refresh lockfiles (locate the dependency by the name
"@cloudflare/vite-plugin" and update the version string and reinstall).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 36627f7f-1257-41e0-9c4d-d755a1befe26
📒 Files selected for processing (17)
- .agents/rules/use-bun-instead-of-node-vite-npm-pnpm.mdc
- .gitignore
- .oxfmtrc.json
- README.md
- cloudflare-env.d.ts
- docs/PRODUCT.md
- docs/TECH.md
- docs/analysis/vinext-evaluation.md
- docs/guides/opennextjs-env-vars.md
- eslint.config.ts
- package.json
- src/app/about/page.tsx
- src/lib/cloudflare-image.ts
- src/types/cloudflare-env.d.ts
- tests/unit/cloudflare-env-types.test.ts
- tsconfig.json
- wrangler.jsonc
💤 Files with no reviewable changes (1)
- docs/guides/opennextjs-env-vars.md
✅ Files skipped from review due to trivial changes (3)
- docs/PRODUCT.md
- docs/analysis/vinext-evaluation.md
- src/lib/cloudflare-image.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- eslint.config.ts
- wrangler.jsonc
- tsconfig.json
## Frontend

Use HTML imports with `Bun.serve()`. Don't use `vite`. HTML imports fully support React, CSS, Tailwind.
Conflicting guidance: "Don't use vite" contradicts the project's actual tooling.
This PR introduces vinext which is built on Vite, and the project now has vite.config.ts with Vite plugins (@cloudflare/vite-plugin, @tailwindcss/vite, etc.). The guidance to avoid Vite conflicts with the current architecture.
Consider clarifying that Vite is acceptable when used as part of vinext/framework tooling, or remove the Vite restriction since it doesn't apply to this project's build pipeline.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.agents/rules/use-bun-instead-of-node-vite-npm-pnpm.mdc at line 41, The rule
text "Use HTML imports with `Bun.serve()`. Don't use `vite`." conflicts with the
repo which uses vinext and a `vite.config.ts` with Vite plugins (e.g.
`@cloudflare/vite-plugin`, `@tailwindcss/vite`); update the rule to either (A)
allow Vite when used as part of vinext/framework tooling by adding an explicit
exception mentioning "vinext" and `vite.config.ts` and the listed plugins, or
(B) remove the blanket "Don't use `vite`" restriction and replace it with
guidance that Bun HTML imports are preferred unless the project uses
vinext/Vite-based tooling (mention vinext and the Vite plugins as examples).
Ensure the rule text references vinext, `vite.config.ts`, and the plugin names
so future reviewers understand the exception.
@@ -57,13 +57,13 @@ updated: 2026-03-13
- Cloudflare Workers (Cron: trigger every 10 minutes - `*/10 * * * *`)
Cron schedule in documentation may be outdated.
Line 57 states "10分ごとトリガ - */10 * * * *" (every 10 minutes), but wrangler.jsonc shows the production cron as "0 * * * *" (hourly). The dev environment uses "*/10 * * * *".
Consider clarifying which environment this refers to, or update to reflect the production schedule.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/TECH.md` at line 57, The documentation's Cloudflare Workers cron entry
is inconsistent with wrangler.jsonc; update docs/TECH.md to clarify
environment-specific schedules by either replacing the single "10分ごとトリガ - `*/10
* * * *`" line with explicit entries for production and development (e.g.,
production `"0 * * * *"` and dev `"*/10 * * * *"`) or note that the documented
cron refers to the dev environment; reference the actual cron strings shown in
wrangler.jsonc and label them accordingly so readers know which schedule applies
to production vs dev.
"zustand": "^5.0.11"
},
"devDependencies": {
  "@cloudflare/vite-plugin": "^1.28.0",
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if `@cloudflare/vite-plugin` specifies peer dependency on vite
npm view @cloudflare/vite-plugin@1.28.0 peerDependencies 2>/dev/null || echo "Could not fetch peer dependencies"
Length of output: 117
🏁 Script executed:
cd /tmp/repo && grep -A 2 -B 2 '"vite"' package.json
Length of output: 127
🏁 Script executed:
git ls-files | grep -E "^package\.json$"
Length of output: 80
🏁 Script executed:
grep -n '"vite"' package.json
Length of output: 92
@cloudflare/vite-plugin@^1.28.0 is incompatible with Vite 8.
The plugin specifies peer dependencies for Vite ^6.1.0 || ^7.0.0, but the project requires Vite ^8.0.0. Upgrade the plugin to a version that supports Vite 8, or downgrade Vite to 7.x.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` at line 71, package.json currently pins
"@cloudflare/vite-plugin": "^1.28.0", which declares peer deps for Vite ^6/7 and
is incompatible with Vite 8; update package.json to use a
`@cloudflare/vite-plugin` release that lists Vite ^8 (or alternatively downgrade
your Vite dependency to a 7.x version) and then run the package manager to
refresh lockfiles (locate the dependency by the name "@cloudflare/vite-plugin"
and update the version string and reinstall).
Actionable comments posted: 5
🧹 Nitpick comments (9)
tests/unit/app/providers.test.tsx (1)
78-156: Consider logging or asserting on stderr for easier debugging.
The test captures
`stderr` (line 142) but never inspects it. If the SSR render produces React warnings or the subprocess encounters errors that don't cause a non-zero exit code, they'll be silently swallowed, making test failures harder to diagnose.
♻️ Suggested improvement
  expect(result.exitCode).toBe(0);
+ const stderr = new TextDecoder().decode(result.stderr).trim();
+ if (stderr) {
+   console.error("SSR test stderr:", stderr);
+ }
  const output = JSON.parse(new TextDecoder().decode(result.stdout).trim()) as {
Alternatively, you could assert
`stderr` is empty to catch unexpected warnings:
expect(new TextDecoder().decode(result.stderr).trim()).toBe("");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/app/providers.test.tsx` around lines 78 - 156, The test spawns a Bun subprocess and captures stderr into the result object but never inspects it; update the test that defines result (the Bun.spawnSync call in the "renders the same provider tree..." spec) to either assert that new TextDecoder().decode(result.stderr).trim() is empty or to log the stderr content on failure so React warnings or subprocess errors aren't silently swallowed; locate the result variable and add a short assertion or a failure message referencing result.stderr immediately after checking result.exitCode before parsing result.stdout.tests/unit/lib/trpc/vanilla-client.test.ts (1)
3-55: Consider adding a no-`baseUrl` fallback test case.
This suite currently only locks the explicit
`baseUrl` branch. Adding one test for behavior when `baseUrl` is omitted will prevent regressions in `getWorkerSafeBaseUrl` fallback semantics.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/lib/trpc/vanilla-client.test.ts` around lines 3 - 55, Add a new test case mirroring the existing one but omitting baseUrl to exercise the fallback path: import the same test harness that spawns Bun and import createVanillaTRPCClient from src/lib/trpc/vanilla-client.ts, but instead of throwing when "@/utils/url" is imported, mock that module to export the worker fallback (e.g., getWorkerSafeBaseUrl or getBaseUrl) returning "http://localhost:8787"; then construct the client with no baseUrl and assert client.links is present and its resolved url equals "http://localhost:8787/api/trpc". Ensure you reference createVanillaTRPCClient and the "@/utils/url" export name (getWorkerSafeBaseUrl/getBaseUrl) so the test targets the fallback branch.src/app/opengraph-image.tsx (1)
453-458: Consider caching the resolved env to avoid duplicate async resolution.
The env is resolved at line 402 in the main try block. When an error occurs and we enter the catch block, we resolve env again at line 453. While this is functionally correct (the original
`env` is out of scope), the double resolution adds latency to the error path.
If the main flow's env resolution succeeded before the actual image processing failed, you could preserve
`env` in a higher scope to reuse it in the fallback path.
♻️ Suggested refactor to cache env
export default async function Image(): Promise<Response> {
  const startTime = Date.now();
  logger.info("ogp.start");
+ let env: CloudflareEnv | undefined;
  try {
    logger.info("ogp.step-init-context");
-   const env = await resolveCloudflareEnv();
+   env = await resolveCloudflareEnv();
    if (!env) {
      throw new Error("Cloudflare context not available");
    }
    // ... rest of main flow
  } catch (error) {
    // ... error logging
    try {
-     const env = await resolveCloudflareEnv();
      if (!env) {
-       throw new Error("Cloudflare context not available");
+       env = await resolveCloudflareEnv();
+       if (!env) {
+         throw new Error("Cloudflare context not available");
+       }
      }
      const fallbackAssetsFetcher = env.ASSETS;
      // ...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/opengraph-image.tsx` around lines 453 - 458, The code resolves Cloudflare env twice (once in the main try and again in the catch), causing extra async latency; cache the resolved env in a higher scope so the catch path can reuse it instead of calling resolveCloudflareEnv() again. Concretely, declare a let env: ReturnType<typeof resolveCloudflareEnv> | null = null outside the try, assign env = await resolveCloudflareEnv() in the main flow (or on first use), and in the catch check and reuse that cached env when setting fallbackAssetsFetcher and fallbackImagesBinding (references: resolveCloudflareEnv, env, fallbackAssetsFetcher, fallbackImagesBinding).src/app/globals.css (2)
7-7: CSS variable defined but not used in body selector.
The
`--font-cinzel-decorative` variable is defined on Line 7 but the `body` selector (Lines 33-35) uses the font name directly. Consider using the variable for consistency:
♻️ Optional refactor
html,
body {
  /* ... */
  font-family:
-   Cinzel Decorative,
-   serif;
+   var(--font-cinzel-decorative);
}
Also applies to: 33-36
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/globals.css` at line 7, The CSS variable --font-cinzel-decorative is defined but not used; update the body selector to reference var(--font-cinzel-decorative) instead of the hardcoded "Cinzel Decorative" string (replace the direct font-family usage in the body rule), ensuring consistency with other variables and avoiding duplication; verify no other selectors still use the raw font name and replace them similarly if present.
1-1: Consider keeping `next/font/google` for better font loading performance.
Switching from
`next/font/google` to a CSS `@import` loses Next.js's font optimization benefits:
- `next/font` provides automatic `font-display: swap`, self-hosting, and eliminates layout shift
- CSS
`@import url()` blocks rendering until the font loads and requires an external request
&display=swapto the URL to mitigate render-blocking:♻️ Suggested improvement
-@import url("https://fonts.googleapis.com/css2?family=Cinzel+Decorative:wght@400;700;900&display=swap"); +@import url("https://fonts.googleapis.com/css2?family=Cinzel+Decorative:wght@400;700;900&display=swap");Actually, I see
&display=swapis already present — good! The performance impact is mitigated.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/globals.css` at line 1, The CSS file now uses a raw `@import` for Cinzel Decorative which bypasses Next.js font optimizations; revert to using next/font/google by replacing the `@import` with the Next.js font API (e.g., import { Cinzel_Decorative } from 'next/font/google' and instantiate with weight and display options) so you get self-hosting and font-display: swap, or if switching to the `@import` was intentional for vinext compatibility, explicitly ensure the URL includes &display=swap (the `@import` line in globals.css) and document the reason for this decision in the PR description.tests/unit/app/layout.test.ts (1)
4-10: Test validates ViewTransition boundary; consider robustness improvements.
This test ensures the root
ViewTransitionboundary remains in the app layout, which is important for the vinext migration. However, the exact string matching is brittle:
- The test checks for
'import { ViewTransition } from "react";'but would fail if the import is reformatted (e.g., multi-line, single quotes, or additional imports in the same statement).Consider a more flexible approach using regex patterns:
♻️ Optional improvement for robustness
 it("keeps the root ViewTransition boundary in the app layout", async () => {
   const source = await Bun.file("src/app/layout.tsx").text();
-  expect(source.includes('import { ViewTransition } from "react";')).toBe(true);
+  expect(/import\s*\{[^}]*ViewTransition[^}]*\}\s*from\s*["']react["']/.test(source)).toBe(true);
   expect(source.includes("<ViewTransition>")).toBe(true);
   expect(source.includes("</ViewTransition>")).toBe(true);
 });
That said, the current implementation is acceptable for a focused regression test.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/app/layout.test.ts` around lines 4 - 10, The test's string includes for the ViewTransition import and tags are brittle; update the test inside the "keeps the root ViewTransition boundary in the app layout" it block to use regex-based assertions instead of exact string includes: match the import with a regex that allows whitespace, additional imports and either quote style while looking for ViewTransition (e.g., a pattern that finds "import" with "{ ViewTransition }" from "react"), and assert opening and closing tags with regexes that allow attributes/whitespace (e.g., match /<ViewTransition\b[^>]*>/ and /<\/ViewTransition>/); keep the same test name and file (layout.test.ts) and only replace the expect(...) checks to make them robust.
src/components/gallery/gallery-scene.tsx (1)
151-169: Handle lazy-import failures explicitly. Line 158 has no rejection handling. A failed chunk import can silently break mint UI state for the session.
🛠️ Suggested patch
 useEffect(() => {
   if (!isMintFeatureVisible || MintFeatureRootComponent) {
     return;
   }

   let isMounted = true;
-  void import("@/features/mint/mint-feature-root").then((mod) => {
-    if (!isMounted) {
-      return;
-    }
-
-    setMintFeatureRootComponent(() => mod.MintFeatureRoot);
-  });
+  void import("@/features/mint/mint-feature-root")
+    .then((mod) => {
+      if (!isMounted) {
+        return;
+      }
+      setMintFeatureRootComponent(() => mod.MintFeatureRoot);
+    })
+    .catch((error) => {
+      logger.error("gallery-scene.mint-feature-root.import-failed", { error });
+    });

   return () => {
     isMounted = false;
   };
 }, [MintFeatureRootComponent, isMintFeatureVisible]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/gallery/gallery-scene.tsx` around lines 151 - 169, The lazy import inside the useEffect that loads "@/features/mint/mint-feature-root" (the promise returned in the effect that calls setMintFeatureRootComponent) lacks rejection handling; update that code to append a .catch(...) (or use try/catch with async/await) to handle import failures, check isMounted before touching state, log the error (console.error or your app logger) with context, and set a safe fallback state (e.g., keep/set MintFeatureRootComponent to null or flip an "mintLoadError" flag) so the mint UI doesn't remain broken; target the useEffect that references MintFeatureRootComponent, isMintFeatureVisible, and setMintFeatureRootComponent.
vite.config.ts (2)
8-44: `shimReactCanary` plugin is well-structured but will break when React officially exports these features. The `export * from "react"` combined with explicit `export const ViewTransition` will cause a duplicate export error once React officially adds these canary features as named exports. Consider checking if the export already exists before adding the fallback, or use a more defensive approach:
Suggested improvement
 load(id) {
   if (id === "\0virtual:react-with-canary") {
     return `
-      export * from "react";
-      export { default } from "react";
       import React from "react";
+      export * from "react";
+      export { default } from "react";
-      export const ViewTransition = React.ViewTransition || (({ children }) => children);
-      export const addTransitionType = React.addTransitionType || (() => {});
+      const _ViewTransition = React.ViewTransition ?? (({ children }) => children);
+      const _addTransitionType = React.addTransitionType ?? (() => {});
+      export { _ViewTransition as ViewTransition, _addTransitionType as addTransitionType };
     `;
   }
 },
export *barrel.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vite.config.ts` around lines 8 - 44, The plugin shimReactCanary currently uses `export * from "react"` plus unconditional `export const ViewTransition`/`addTransitionType`, which will conflict when React later provides those named exports; update `shimReactCanary` (the load handler for "\0virtual:react-with-canary") to import React as a namespace (e.g., import * as React) and then conditionally export the names only if they don't already exist (e.g., export const ViewTransition = React.ViewTransition || fallback), or alternatively avoid `export * from "react"` and re-export only the existing React bindings plus your fallbacks; ensure the load output checks React's exports rather than unconditionally declaring named exports to prevent duplicate-export errors when React adds them.
41-42: Returning `map: null` disables source map support for transformed files. For simple text replacements this is acceptable, but may complicate debugging in development. Consider using `magic-string` for proper source map preservation if debugging transformed files becomes difficult.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vite.config.ts` around lines 41 - 42, The transform currently returns { code: transformedCode, map: null } which disables source maps; update the transformer that creates transformedCode (the code.replace call and transformedCode variable) to preserve source maps by using a MagicString instance: create a MagicString from the original code, perform the replacement on it (or overwrite the changed range), then return { code: magicString.toString(), map: magicString.generateMap({ hires: true }) } instead of map: null so development debugging retains accurate source maps.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@package.json`:
- Line 97: Update the dependency entry for vinext in package.json to a released
version; replace the non-existent "vinext": "^0.0.30" with the latest published
version "vinext": "^0.0.29" (or update to a planned 0.0.30 only if you're also
publishing that release), then run npm/yarn install to verify resolution.
In `@src/app/opengraph-image.tsx`:
- Around line 485-519: The catch block for fallbackTransformError should not
retry renderPaintingOnCanvas with the same fallbackImagesBinding (since it
already failed); instead log the error and return the untransformed fallback
buffer directly. Replace the redundant getFallbackImageDataUrl + fetch
round-trip by calling getFallbackImageBuffer(fallbackAssetsFetcher) to obtain
the ArrayBuffer, and skip calling renderPaintingOnCanvas; still fetch
frame/background only if needed elsewhere, but do not pass fallbackImagesBinding
into renderPaintingOnCanvas again. Ensure the logger
(ogp.fallback-transform-error / ogp.fallback-completed) keeps
willUseOriginalFallback: true and that the Response returns the raw fallback
buffer with the same headers.
In `@src/lib/trpc/vanilla-client.ts`:
- Around line 22-23: The function in vanilla-client.ts that falls back to
returning "http://localhost:8787" when location.origin is unavailable should not
hardcode localhost; instead require an explicit base URL (e.g., a baseUrl
argument or use a configured env var like process.env.BASE_URL) and throw a
clear error if neither location.origin nor an explicit base URL is present.
Locate the function that references location.origin / returns
"http://localhost:8787" (the base URL resolution helper in vanilla-client.ts),
change it to prefer: 1) the provided baseUrl parameter, 2) a runtime
env-configured origin, and 3) if neither exists, throw an error explaining that
baseUrl must be supplied for non-window/non-worker contexts so SSR/Node cannot
silently call localhost. Ensure error messaging mentions the function name to
aid debugging.
In `@tests/unit/components/gallery-scene-mint-boundary.test.tsx`:
- Around line 178-183: The test currently parses the entire result.stdout into
JSON when assigning output, which breaks if incidental logs are present; change
the parsing to extract the JSON payload robustly (for example, decode
result.stdout, split into lines, find the last non-empty line or the line that
starts with '{' and ends with '}', then JSON.parse that substring) and assign
that parsed object to output; update the code that references result.stdout /
output in the test so it uses this resilient parsing approach (look for the
variable assignment to output and the use of new
TextDecoder().decode(result.stdout) in this test).
In `@tests/unit/hooks/use-viewer.test.tsx`:
- Around line 68-71: The test assumes result.stdout is pure JSON; instead decode
result.stdout with TextDecoder() and robustly extract the JSON payload by
splitting the decoded string into lines, iterating (preferably from last to
first) and attempting JSON.parse on each trimmed line until one parses
successfully, then assign that parsed value to output; update the code
referencing output and result.stdout so stray child-process logs no longer cause
a parse error (handle JSON.parse exceptions during the iteration and only throw
if no line yields valid JSON).
---
Nitpick comments:
In `@src/app/globals.css`:
- Line 7: The CSS variable --font-cinzel-decorative is defined but not used;
update the body selector to reference var(--font-cinzel-decorative) instead of
the hardcoded "Cinzel Decorative" string (replace the direct font-family usage
in the body rule), ensuring consistency with other variables and avoiding
duplication; verify no other selectors still use the raw font name and replace
them similarly if present.
- Line 1: The CSS file now uses a raw `@import` for Cinzel Decorative which
bypasses Next.js font optimizations; revert to using next/font/google by
replacing the `@import` with the Next.js font API (e.g., import {
Cinzel_Decorative } from 'next/font/google' and instantiate with weight and
display options) so you get self-hosting and font-display: swap, or if switching
to the `@import` was intentional for vinext compatibility, explicitly ensure the
URL includes &display=swap (the `@import` line in globals.css) and document the
reason for this decision in the PR description.
In `@src/app/opengraph-image.tsx`:
- Around line 453-458: The code resolves Cloudflare env twice (once in the main
try and again in the catch), causing extra async latency; cache the resolved env
in a higher scope so the catch path can reuse it instead of calling
resolveCloudflareEnv() again. Concretely, declare a let env: ReturnType<typeof
resolveCloudflareEnv> | null = null outside the try, assign env = await
resolveCloudflareEnv() in the main flow (or on first use), and in the catch
check and reuse that cached env when setting fallbackAssetsFetcher and
fallbackImagesBinding (references: resolveCloudflareEnv, env,
fallbackAssetsFetcher, fallbackImagesBinding).
In `@src/components/gallery/gallery-scene.tsx`:
- Around line 151-169: The lazy import inside the useEffect that loads
"@/features/mint/mint-feature-root" (the promise returned in the effect that
calls setMintFeatureRootComponent) lacks rejection handling; update that code to
append a .catch(...) (or use try/catch with async/await) to handle import
failures, check isMounted before touching state, log the error (console.error or
your app logger) with context, and set a safe fallback state (e.g., keep/set
MintFeatureRootComponent to null or flip an "mintLoadError" flag) so the mint UI
doesn't remain broken; target the useEffect that references
MintFeatureRootComponent, isMintFeatureVisible, and setMintFeatureRootComponent.
In `@tests/unit/app/layout.test.ts`:
- Around line 4-10: The test's string includes for the ViewTransition import and
tags are brittle; update the test inside the "keeps the root ViewTransition
boundary in the app layout" it block to use regex-based assertions instead of
exact string includes: match the import with a regex that allows whitespace,
additional imports and either quote style while looking for ViewTransition
(e.g., a pattern that finds "import" with "{ ViewTransition }" from "react"),
and assert opening and closing tags with regexes that allow
attributes/whitespace (e.g., match /<ViewTransition\b[^>]*>/ and
/<\/ViewTransition>/); keep the same test name and file (layout.test.ts) and
only replace the expect(...) checks to make them robust.
In `@tests/unit/app/providers.test.tsx`:
- Around line 78-156: The test spawns a Bun subprocess and captures stderr into
the result object but never inspects it; update the test that defines result
(the Bun.spawnSync call in the "renders the same provider tree..." spec) to
either assert that new TextDecoder().decode(result.stderr).trim() is empty or to
log the stderr content on failure so React warnings or subprocess errors aren't
silently swallowed; locate the result variable and add a short assertion or a
failure message referencing result.stderr immediately after checking
result.exitCode before parsing result.stdout.
In `@tests/unit/lib/trpc/vanilla-client.test.ts`:
- Around line 3-55: Add a new test case mirroring the existing one but omitting
baseUrl to exercise the fallback path: import the same test harness that spawns
Bun and import createVanillaTRPCClient from src/lib/trpc/vanilla-client.ts, but
instead of throwing when "@/utils/url" is imported, mock that module to export
the worker fallback (e.g., getWorkerSafeBaseUrl or getBaseUrl) returning
"http://localhost:8787"; then construct the client with no baseUrl and assert
client.links is present and its resolved url equals
"http://localhost:8787/api/trpc". Ensure you reference createVanillaTRPCClient
and the "@/utils/url" export name (getWorkerSafeBaseUrl/getBaseUrl) so the test
targets the fallback branch.
In `@vite.config.ts`:
- Around line 8-44: The plugin shimReactCanary currently uses `export * from
"react"` plus unconditional `export const ViewTransition`/`addTransitionType`,
which will conflict when React later provides those named exports; update
`shimReactCanary` (the load handler for "\0virtual:react-with-canary") to import
React as a namespace (e.g., import * as React) and then conditionally export the
names only if they don't already exist (e.g., export const ViewTransition =
React.ViewTransition || fallback), or alternatively avoid `export * from
"react"` and re-export only the existing React bindings plus your fallbacks;
ensure the load output checks React's exports rather than unconditionally
declaring named exports to prevent duplicate-export errors when React adds them.
- Around line 41-42: The transform currently returns { code: transformedCode,
map: null } which disables source maps; update the transformer that creates
transformedCode (the code.replace call and transformedCode variable) to preserve
source maps by using a MagicString instance: create a MagicString from the
original code, perform the replacement on it (or overwrite the changed range),
then return { code: magicString.toString(), map: magicString.generateMap({
hires: true }) } instead of map: null so development debugging retains accurate
source maps.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: db307985-6946-4053-ba60-98ebfd7189a0
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (31)
- .claude/settings.local.json
- .env.example
- .github/workflows/bundle-analyzer.yml
- .github/workflows/ci.yml
- .gitignore
- AGENTS.md
- README.md
- docs/guides/runware-models.md
- docs/specs/doom-index/design.md
- docs/specs/doom-index/requirements.md
- docs/specs/doom-index/tasks.md
- docs/specs/dynamic-prompt/tasks.md
- package.json
- scripts/gen-img.ts
- scripts/upload-metadata-ardrive.ts
- src/app/about/page.tsx
- src/app/globals.css
- src/app/layout.tsx
- src/app/opengraph-image.tsx
- src/components/gallery/gallery-scene.tsx
- src/components/gallery/lights.tsx
- src/components/home/home-view.tsx
- src/components/ui/header.tsx
- src/hooks/use-viewer.ts
- src/lib/trpc/vanilla-client.ts
- tests/unit/app/layout.test.ts
- tests/unit/app/providers.test.tsx
- tests/unit/components/gallery-scene-mint-boundary.test.tsx
- tests/unit/hooks/use-viewer.test.tsx
- tests/unit/lib/trpc/vanilla-client.test.ts
- vite.config.ts
✅ Files skipped from review due to trivial changes (3)
- docs/specs/doom-index/requirements.md
- scripts/upload-metadata-ardrive.ts
- docs/guides/runware-models.md
🚧 Files skipped from review as they are similar to previous changes (4)
- src/hooks/use-viewer.ts
- .github/workflows/bundle-analyzer.yml
- .github/workflows/ci.yml
- .gitignore
  "react-server-dom-webpack": "^19.2.4",
  "tailwindcss": "^4.2.1",
  "typescript": "^5.9.3",
  "vinext": "^0.0.30",
🧩 Analysis chain
🌐 Web query:
vinext npm latest version 2026
💡 Result:
As of March 14, 2026, the latest published release/tag for cloudflare/vinext is v0.0.29 (tagged March 12, 2026). [1]
If you want to verify what npm currently serves as latest, run: npm view vinext version. [2]
Sources:
[1] GitHub tags for cloudflare/vinext (github.com)
[2] npm docs for npm view (shows registry “latest” by default) (docs.npmjs.com)
Citations:
- 1: https://github.com/cloudflare/vinext/tags
- 2: https://docs.npmjs.com/cli/v6/commands/npm-view?utm_source=openai
Fix version mismatch: vinext@^0.0.30 does not exist (latest is v0.0.29).
The specified version has not been released to npm. Update to "^0.0.29" unless version 0.0.30 is intentionally planned for imminent release. The 0.0.x series remains experimental per project documentation, but the current constraint will cause installation failures or unexpected version resolution.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` at line 97, Update the dependency entry for vinext in
package.json to a released version; replace the non-existent "vinext": "^0.0.30"
with the latest published version "vinext": "^0.0.29" (or update to a planned
0.0.30 only if you're also publishing that release), then run npm/yarn install
to verify resolution.
src/app/opengraph-image.tsx (outdated)
} catch (fallbackTransformError) {
  const fallbackTransformErrorMessage =
    fallbackTransformError instanceof Error ? fallbackTransformError.message : String(fallbackTransformError);
  logger.error("ogp.fallback-transform-error", {
    reason: `Fallback image transformation failed: ${fallbackTransformErrorMessage}`,
    error: fallbackTransformErrorMessage,
    willUseOriginalFallback: true,
  });

  const fallbackDataUrl = await getFallbackImageDataUrl(fallbackAssetsFetcher);
  const fallbackBuffer = await fetch(fallbackDataUrl).then(async (r) => r.arrayBuffer());
  const [frameBuffer, backgroundBuffer] = await Promise.all([
    getFrameImageBuffer(fallbackAssetsFetcher),
    getBackgroundImageBuffer(fallbackAssetsFetcher),
  ]);

  logger.info("ogp.fallback-completed", {
    durationMs: Date.now() - startTime,
  });

  const transformedFallback = await renderPaintingOnCanvas(
    fallbackBuffer,
    fallbackImagesBinding,
    frameBuffer,
    backgroundBuffer,
  );

  return new Response(transformedFallback, {
    status: 200,
    headers: {
      "Content-Type": "image/png",
      "Cache-Control": `public, max-age=${String(CACHE_TTL_SECONDS.ONE_MINUTE)}`,
    },
  });
}
Logic error: Retrying renderPaintingOnCanvas after transformation failure will likely fail again.
The catch block at line 485 handles fallbackTransformError (when renderPaintingOnCanvas fails), but then attempts to call renderPaintingOnCanvas again at lines 505-510 using the same fallbackImagesBinding. If the IMAGES binding transformation failed once, retrying with the same binding will produce the same error.
Additionally, lines 494-495 perform a redundant round-trip: getFallbackImageDataUrl fetches the image and encodes to data URL, then fetch(fallbackDataUrl) decodes it back to ArrayBuffer. You could call getFallbackImageBuffer directly, though this is moot if the retry logic is fixed.
The log at line 491 states willUseOriginalFallback: true, suggesting the intent was to return the untransformed fallback when transformation fails. If so, the code should skip renderPaintingOnCanvas and return the raw fallback buffer directly.
🐛 Proposed fix: Return untransformed fallback when transformation fails
} catch (fallbackTransformError) {
const fallbackTransformErrorMessage =
fallbackTransformError instanceof Error ? fallbackTransformError.message : String(fallbackTransformError);
logger.error("ogp.fallback-transform-error", {
reason: `Fallback image transformation failed: ${fallbackTransformErrorMessage}`,
error: fallbackTransformErrorMessage,
willUseOriginalFallback: true,
});
- const fallbackDataUrl = await getFallbackImageDataUrl(fallbackAssetsFetcher);
- const fallbackBuffer = await fetch(fallbackDataUrl).then(async (r) => r.arrayBuffer());
- const [frameBuffer, backgroundBuffer] = await Promise.all([
- getFrameImageBuffer(fallbackAssetsFetcher),
- getBackgroundImageBuffer(fallbackAssetsFetcher),
- ]);
+ // Return untransformed fallback image since IMAGES binding transformation failed
+ const fallbackBuffer = await getFallbackImageBuffer(fallbackAssetsFetcher);
logger.info("ogp.fallback-completed", {
durationMs: Date.now() - startTime,
});
- const transformedFallback = await renderPaintingOnCanvas(
- fallbackBuffer,
- fallbackImagesBinding,
- frameBuffer,
- backgroundBuffer,
- );
-
- return new Response(transformedFallback, {
+ return new Response(fallbackBuffer, {
status: 200,
headers: {
"Content-Type": "image/png",
"Cache-Control": `public, max-age=${String(CACHE_TTL_SECONDS.ONE_MINUTE)}`,
},
});
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/app/opengraph-image.tsx` around lines 485 - 519, The catch block for
fallbackTransformError should not retry renderPaintingOnCanvas with the same
fallbackImagesBinding (since it already failed); instead log the error and
return the untransformed fallback buffer directly. Replace the redundant
getFallbackImageDataUrl + fetch round-trip by calling
getFallbackImageBuffer(fallbackAssetsFetcher) to obtain the ArrayBuffer, and
skip calling renderPaintingOnCanvas; still fetch frame/background only if needed
elsewhere, but do not pass fallbackImagesBinding into renderPaintingOnCanvas
again. Ensure the logger (ogp.fallback-transform-error / ogp.fallback-completed)
keeps willUseOriginalFallback: true and that the Response returns the raw
fallback buffer with the same headers.
  return "http://localhost:8787";
}
Avoid defaulting to localhost when origin cannot be resolved.
Line 22 hardcodes http://localhost:8787 for non-window/non-worker contexts. In production SSR/Node execution, this can silently route tRPC calls to localhost and fail at runtime. Prefer requiring baseUrl (or using an explicit env-configured origin) when location.origin is unavailable.
💡 Proposed fix
function getWorkerSafeBaseUrl(baseUrl?: string): string {
if (baseUrl) {
return baseUrl;
}
- if (typeof self !== "undefined" && "location" in self) {
- return self.location.origin;
- }
-
- if (typeof window !== "undefined") {
- return window.location.origin;
- }
-
- return "http://localhost:8787";
+ const origin =
+ typeof self !== "undefined" && "location" in self
+ ? self.location.origin
+ : typeof window !== "undefined"
+ ? window.location.origin
+ : undefined;
+
+ if (origin && origin !== "null" && origin !== "undefined") {
+ return origin;
+ }
+
+ throw new Error("createVanillaTRPCClient requires `baseUrl` when location.origin is unavailable.");
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
function getWorkerSafeBaseUrl(baseUrl?: string): string {
  if (baseUrl) {
    return baseUrl;
  }

  const origin =
    typeof self !== "undefined" && "location" in self
      ? self.location.origin
      : typeof window !== "undefined"
        ? window.location.origin
        : undefined;

  if (origin && origin !== "null" && origin !== "undefined") {
    return origin;
  }

  throw new Error("createVanillaTRPCClient requires `baseUrl` when location.origin is unavailable.");
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/trpc/vanilla-client.ts` around lines 22 - 23, The function in
vanilla-client.ts that falls back to returning "http://localhost:8787" when
location.origin is unavailable should not hardcode localhost; instead require an
explicit base URL (e.g., a baseUrl argument or use a configured env var like
process.env.BASE_URL) and throw a clear error if neither location.origin nor an
explicit base URL is present. Locate the function that references
location.origin / returns "http://localhost:8787" (the base URL resolution
helper in vanilla-client.ts), change it to prefer: 1) the provided baseUrl
parameter, 2) a runtime env-configured origin, and 3) if neither exists, throw
an error explaining that baseUrl must be supplied for non-window/non-worker
contexts so SSR/Node cannot silently call localhost. Ensure error messaging
mentions the function name to aid debugging.
const output = JSON.parse(new TextDecoder().decode(result.stdout).trim()) as {
  importCountAfterClick: number;
  importCountBeforeClick: number;
  initialRootVisible: boolean;
  openedRootVisible: boolean;
};
Make stdout JSON parsing resilient to incidental logs.
Line 178 parses the entire stdout as JSON. Any extra output line can fail the test unnecessarily.
✅ Suggested patch
- const output = JSON.parse(new TextDecoder().decode(result.stdout).trim()) as {
+ const stdoutText = new TextDecoder().decode(result.stdout).trim();
+ const lastLine = stdoutText.split("\n").at(-1);
+ if (!lastLine) {
+ throw new Error("No JSON output received from child process");
+ }
+ const output = JSON.parse(lastLine) as {
importCountAfterClick: number;
importCountBeforeClick: number;
initialRootVisible: boolean;
openedRootVisible: boolean;
};
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const stdoutText = new TextDecoder().decode(result.stdout).trim();
const lastLine = stdoutText.split("\n").at(-1);
if (!lastLine) {
  throw new Error("No JSON output received from child process");
}
const output = JSON.parse(lastLine) as {
  importCountAfterClick: number;
  importCountBeforeClick: number;
  initialRootVisible: boolean;
  openedRootVisible: boolean;
};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/components/gallery-scene-mint-boundary.test.tsx` around lines 178
- 183, The test currently parses the entire result.stdout into JSON when
assigning output, which breaks if incidental logs are present; change the
parsing to extract the JSON payload robustly (for example, decode result.stdout,
split into lines, find the last non-empty line or the line that starts with '{'
and ends with '}', then JSON.parse that substring) and assign that parsed object
to output; update the code that references result.stdout / output in the test so
it uses this resilient parsing approach (look for the variable assignment to
output and the use of new TextDecoder().decode(result.stdout) in this test).
const output = JSON.parse(new TextDecoder().decode(result.stdout).trim()) as Array<{
  options?: { type?: string };
  url: string;
}>;
Harden JSON parsing against incidental child-process logs
Line 68 assumes stdout is pure JSON. If any extra log line appears, this fails with a parse error unrelated to the behavior under test.
💡 Suggested fix
- const output = JSON.parse(new TextDecoder().decode(result.stdout).trim()) as Array<{
+ const stdout = new TextDecoder().decode(result.stdout);
+ const lines = stdout.split(/\r?\n/).map((line) => line.trim()).filter(Boolean);
+ const payload = lines.at(-1);
+ expect(payload).toBeDefined();
+ const output = JSON.parse(payload!) as Array<{
options?: { type?: string };
url: string;
}>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/hooks/use-viewer.test.tsx` around lines 68 - 71, The test assumes
result.stdout is pure JSON; instead decode result.stdout with TextDecoder() and
robustly extract the JSON payload by splitting the decoded string into lines,
iterating (preferably from last to first) and attempting JSON.parse on each
trimmed line until one parses successfully, then assign that parsed value to
output; update the code referencing output and result.stdout so stray
child-process logs no longer cause a parse error (handle JSON.parse exceptions
during the iteration and only throw if no line yields valid JSON).
Latest head … the remaining … Given that, the current prime suspect is the build settings on the Cloudflare Workers Builds side. The Cloudflare docs also say that for preview builds … the required configuration is at least one of the following.

That this is not a regression on the repo side …
Closes #54.
This replaces the OpenNext/webpack Cloudflare path with vinext + Vite and deploys the built worker from `dist/server/wrangler.json`. It removes obsolete OpenNext files, patch scripts, `wrangler.toml`, and dead test helpers, while consolidating cron into `worker/index.ts`. It also migrates Cloudflare binding resolution to `cloudflare:workers` and updates docs, CI, and bundle reporting for the new build flow.

Verification: `bun run lint`, `bun run typecheck`, `bun run format`, `bun run build:cf`, and targeted Bun test suites covering Cloudflare bindings, cache behavior, archive grid props, DB/TRPC context, Workers AI, and `next.config`.

Summary by CodeRabbit