Pull request overview
Note
Copilot was unable to run its full agentic suite in this review.
Adds routing for an /api/edb proxy endpoint and updates request path extraction logic to rely on a rewrite-populated path query parameter.
Changes:
- Added Vercel rewrites for `/api/edb` and `/api/edb/*` to forward to an `edb-proxy` handler.
- Updated the EDB handler to compute `subPath` from `req.query.path` instead of parsing `req.url`.
- Adjusted the relative import path for `maybeInjectDefaultEtherscanKey`.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| vercel.json | Adds rewrites intended to route /api/edb traffic through an edb-proxy endpoint. |
| api/edb/[...path].ts | Changes how the downstream sub-path is derived (now from req.query.path) and updates a relative import. |
```json
        "destination": "/api/edb-proxy"
      },
      {
        "source": "/api/edb/:path*",
        "destination": "/api/edb-proxy?path=:path*"
```
The rewrites send /api/edb and /api/edb/* requests to /api/edb-proxy, but the updated request handling logic is in api/edb/[...path].ts (which would normally be reached at /api/edb/*). With these rewrites in place, this handler won’t receive the traffic (and therefore won’t see req.query.path) unless there is a separate /api/edb-proxy serverless function that contains the same logic. Fix by either (a) changing the rewrite destination to /api/edb?path=:path* (or similar) so it hits this file, or (b) moving/duplicating this handler’s logic into the /api/edb-proxy function so the destination matches the implementation.
Suggested change:

```diff
-        "destination": "/api/edb-proxy"
-      },
-      {
-        "source": "/api/edb/:path*",
-        "destination": "/api/edb-proxy?path=:path*"
+        "destination": "/api/edb?path="
+      },
+      {
+        "source": "/api/edb/:path*",
+        "destination": "/api/edb?path=:path*"
```
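Option (a) relies on the rewrite populating `req.query.path` for the catch-all handler. A minimal sketch of how the handler might normalize that value; `subPathFromQuery` is a hypothetical helper, and the `string | string[]` input type reflects how a catch-all query value can arrive in Vercel's Node runtime:

```typescript
// Hypothetical normalizer for a rewrite-populated `path` query value.
// A catch-all segment may arrive as a single string or as an array of
// path segments; join segments back into a downstream sub-path.
function subPathFromQuery(path: string | string[] | undefined): string {
  if (path === undefined) return "";
  return Array.isArray(path) ? path.join("/") : path;
}

console.log(subPathFromQuery(["v1", "status"])); // "v1/status"
console.log(subPathFromQuery("health"));         // "health"
console.log(subPathFromQuery(undefined));        // ""
```

This keeps the proxy logic independent of whether the request reached the handler directly or via the rewrite.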
Refactor and optimize code structure across multiple components
Pull request overview
Copilot reviewed 65 out of 67 changed files in this pull request and generated 1 comment.
```ts
export { ensureArray } from "./simulationArtifactTypes";
import { ensureArray } from "./simulationArtifactTypes";
```
The module both re-exports ensureArray from ./simulationArtifactTypes and also imports the same symbol from the same module for local use. This is redundant and makes the export surface harder to follow. Prefer a single import and then re-export the imported binding (or import under a different name) so there’s only one linkage to ./simulationArtifactTypes.
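The single-linkage pattern the comment suggests can be sketched as follows; `ensureArray` is inlined here as a stand-in for the real `./simulationArtifactTypes` module (and `toArtifactList` is a hypothetical local consumer) so the snippet is self-contained:

```typescript
// Stand-in for the real export of "./simulationArtifactTypes".
function ensureArray<T>(value: T | T[]): T[] {
  return Array.isArray(value) ? value : [value];
}

// In the reviewed module, the idea is a single import:
//   import { ensureArray } from "./simulationArtifactTypes";
// followed by a re-export of that same binding, rather than a separate
// `export { ... } from` plus an import of the same symbol:
export { ensureArray };

// Local use goes through the same binding (hypothetical helper).
export function toArtifactList(input: string | string[]): string[] {
  return ensureArray(input);
}
```

With one import and one re-export, there is exactly one linkage to the source module, so renames and refactors touch a single line.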
Pull request overview
Copilot reviewed 69 out of 71 changed files in this pull request and generated 5 comments.
```ts
// Refine around the chosen Hook to find the best nearby one.
// Skip refinement when the eval-probe already confirmed Hook snapshots
// — they are verified to support eval and the refine batch (~51 RPCs)
// would just add latency without improving the result.
const evalProbeConfirmed = nonEntryHooks.length === 0 && hookIdsInRange.length > 0;
if (nearestFromTargeted !== null && step > 1 && !evalProbeConfirmed) {
  const REFINE_WINDOW = 25;
```
evalProbeConfirmed is computed as nonEntryHooks.length === 0 && hookIdsInRange.length > 0, which is true even if the eval-probe did not run or found no better Hook snapshots. That can incorrectly skip the refinement batch and leave you with only entry-point Hooks (often empty locals). Consider tracking whether the eval-probe actually produced hookProbeIds.length > 0 and only skipping refinement in that case.
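One way to implement that suggestion, sketched with hypothetical names modeled on the snippet (`hookProbeIds` standing in for the ids the eval-probe actually produced):

```typescript
// The probe counts as confirmed only when it actually ran and produced
// Hook snapshot ids; an empty probe result must not suppress the
// refinement batch.
function isEvalProbeConfirmed(
  hookProbeIds: number[],
  nonEntryHooks: number[],
  hookIdsInRange: number[],
): boolean {
  return (
    hookProbeIds.length > 0 &&
    nonEntryHooks.length === 0 &&
    hookIdsInRange.length > 0
  );
}

// Probe never ran: refinement should still happen.
console.log(isEvalProbeConfirmed([], [], [7]));     // false
// Probe ran and confirmed snapshots: safe to skip refinement.
console.log(isEvalProbeConfirmed([3, 4], [], [7])); // true
```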
```ts
console.error('[useDebugPrep] auto-connect failed:', err);
if (prepareIdRef.current === prepareId) {
  setPrepState((prev) => ({
    ...prev,
    error: 'Live session was evicted before connection completed. Click Open Debugger to use trace-based debugging.',
  }));
}
```
In the auto-connect failure path, prepState stays status: 'ready' while setting only error. This leaves a stale "ready" state even though no session is connected, and downstream logic (e.g., click handlers) may refuse to restart prep because it thinks prep is already ready. Consider setting status: 'failed' (or resetting to idle via cancel) and clearing sessionId/snapshotCount so users can retry preparation cleanly.
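A sketch of the suggested failure-path update; `PrepState`, its fields, and `markAutoConnectFailed` are assumptions modeled on the snippet, not the app's real types:

```typescript
// Hypothetical shape of the prep state used in the snippet above.
type PrepState = {
  status: "idle" | "preparing" | "ready" | "failed";
  sessionId?: string;
  snapshotCount?: number;
  error?: string;
};

// On auto-connect failure, mark the prep failed and drop the stale
// session fields so downstream logic does not treat it as still ready.
function markAutoConnectFailed(prev: PrepState, message: string): PrepState {
  return {
    ...prev,
    status: "failed",
    sessionId: undefined,
    snapshotCount: undefined,
    error: message,
  };
}

const next = markAutoConnectFailed(
  { status: "ready", sessionId: "s1", snapshotCount: 12 },
  "Live session was evicted before connection completed.",
);
console.log(next.status, next.sessionId); // "failed" undefined
```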
```ts
let preppedSessionDead = false;
if (
  debugPrepForSimulation?.status === "ready" &&
  debugPrepForSimulation.sessionId &&
  debugSession?.sessionId !== debugPrepForSimulation.sessionId
) {
  try {
    await connectToSession({
      sessionId: debugPrepForSimulation.sessionId,
      rpcPort: 0,
      snapshotCount: debugPrepForSimulation.snapshotCount ?? 0,
      chainId,
      simulationId,
    });
    openDebugWindow();
    return;
  } catch (err) {
    console.warn("[handleOpenDebug] Failed to connect to prepped session:", err);
    preppedSessionDead = true;
  }
```
If connecting to the "ready" prepped session fails, preppedSessionDead is set and later the code skips startReplayDebugPreparation entirely. This can strand users: the stale prepState remains ready, but no live session is usable and no new prep is started. A safer flow is to call cancelDebugPrep() on failure (or otherwise mark the prepState failed) and then allow startReplayDebugPreparation(simulationId) to run so a new live session can be prepared.
Fixed in e70d6b6. When connecting to the ready prepped session fails, cancelDebugPrep() is now called to clear the stale ready state, and the !preppedSessionDead guard on startReplayDebugPreparation is removed so a fresh prep can be started automatically.
```ts
if (debugRequested && !preppedSessionDead) {
  if (startReplayDebugPreparation(simulationId)) {
    showSuccess(
```
The debugRequested && !preppedSessionDead guard prevents retrying preparation when the previous ready session was evicted. Instead of skipping prep when preppedSessionDead is true, reset/cancel the stale prep state and proceed with preparation so "Open Debugger" can recover automatically.
Already addressed in e70d6b6 — cancelDebugPrep() is called in the catch block to clear the stale prep state, and the !preppedSessionDead guard was removed from the if (debugRequested) check so startReplayDebugPreparation always runs when appropriate.
… in ExecutionStackTrace
…r predicate
Agent-Logs-Url: https://github.com/Timidan/hexkit/sessions/770a26dc-858f-4df0-a6ba-71d03e39aa88
Co-authored-by: Timidan <25876969+Timidan@users.noreply.github.com>

…s dead
Agent-Logs-Url: https://github.com/Timidan/hexkit/sessions/c9c0abc9-6d82-415a-90be-0fc87c8a4dc6
Co-authored-by: Timidan <25876969+Timidan@users.noreply.github.com>
No description provided.