Added support for live transcription and gemini 3 models. #66
shivankacker wants to merge 6 commits into develop from
Conversation
Walkthrough

Adds live speech-to-text: a new NotesScribe component and NotesScribeProvider, a useLiveTranscription hook (Google and OpenAI realtime branches) with a RecordingResult type, and a global useControlState hook. Introduces TncDialog, adds API.liveTranscription endpoints to the API, extends types and quota flags for live transcription, registers NotesScribe in the manifest, adjusts AI model constants and benchmark audio URLs, and surfaces live-related UI and quota controls across history, quotas, usage, and related components.

Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Pull request overview
Adds live transcription support (OpenAI Realtime + a Google WS middleware) and expands the available Gemini 3 model options/costing, alongside a small benchmark audio URL update.
Changes:
- Added Gemini 3/3.1 preview models to the model-cost constants.
- Added a live transcription token API, session types, a useLiveTranscription hook, and a new NoteMessageInput federated component.
- Updated benchmark audio URLs to use the CDN + .webm extension.
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| src/utils/constants.ts | Adds Gemini 3/3.1 preview model cost entries. |
| src/utils/benchmark-constants.ts | Switches benchmark audio URLs to CDN-hosted .webm. |
| src/utils/api.ts | Adds API.liveTranscription.getToken() endpoint wrapper. |
| src/types.ts | Introduces live transcription session types (OpenAI + Google). |
| src/manifest.tsx | Exposes new federated component and loosens component typing. |
| src/hooks/useLiveTranscription.ts | New hook implementing WS + audio capture for live transcription. |
| src/components/NotesScribe.tsx | New icon-only mic UI component wired to live transcription hook. |
```ts
      const workletNode = new AudioWorkletNode(
        audioContext,
        "pcm16-processor",
      );
      workletRef.current = workletNode;
      source.connect(workletNode);
      workletNode.connect(audioContext.destination);

      workletNode.port.onmessage = (e) => sendAudio(e.data);
    } catch {
      // Fallback to ScriptProcessorNode (deprecated but widely supported)
      const processor = audioContext.createScriptProcessor(4096, 1, 1);
      source.connect(processor);
      processor.connect(audioContext.destination);
```
The microphone capture graph is connected to audioContext.destination (both in the AudioWorklet and ScriptProcessor fallback). This will audibly route the user’s mic back to their speakers/headphones and can cause echo/feedback, especially while recording. Route through a muted GainNode (gain=0) or avoid connecting to the destination while still keeping the node alive for processing.
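The muted-gain routing suggested above can be sketched as a small wiring helper. This is an illustrative sketch, not code from the PR: the function name `connectCaptureGraph` and the minimal structural interfaces are assumptions that let the wiring logic be exercised without a real browser AudioContext.

```typescript
// Minimal structural types standing in for Web Audio nodes.
interface AudioNodeLike {
  connect(dest: AudioNodeLike): void;
}
interface GainNodeLike extends AudioNodeLike {
  gain: { value: number };
}
interface AudioContextLike {
  destination: AudioNodeLike;
  createGain(): GainNodeLike;
}

// Route the processing node through a muted GainNode so the graph stays
// connected to the destination (some browsers suspend dangling processors)
// without echoing the mic back to the speakers.
function connectCaptureGraph(
  ctx: AudioContextLike,
  source: AudioNodeLike,
  processor: AudioNodeLike,
): GainNodeLike {
  const mute = ctx.createGain();
  mute.gain.value = 0; // silence the monitoring path
  source.connect(processor);
  processor.connect(mute);
  mute.connect(ctx.destination);
  return mute;
}
```

The same shape works for both the AudioWorklet path and the ScriptProcessor fallback, since only the middle `processor` node differs.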
```ts
const cleanup = useCallback(() => {
  if (workletRef.current) {
    workletRef.current.disconnect();
    workletRef.current = null;
  }
  if (sourceRef.current) {
    sourceRef.current.disconnect();
    sourceRef.current = null;
  }
  if (streamRef.current) {
    streamRef.current.getTracks().forEach((track) => track.stop());
    streamRef.current = null;
  }
  if (audioContextRef.current) {
    audioContextRef.current.close();
    audioContextRef.current = null;
  }
  if (wsRef.current) {
    wsRef.current.close();
    wsRef.current = null;
  }
  setIsConnected(false);
  setIsRecording(false);
}, []);
```
The hook allocates a WebSocket, AudioContext, and microphone MediaStream, but there’s no unmount cleanup. If a component using this hook unmounts while recording/connecting (route changes, conditional render), the mic/WebSocket can remain active. Add a useEffect with a cleanup function that calls stopRecording()/cleanup() on unmount.
@copilot apply changes based on this feedback
Added a useEffect with a cleanup function in commit 28aea4c that calls cleanup() on unmount. This ensures the WebSocket, AudioContext, and microphone MediaStream are properly released when the component unmounts (e.g., on route changes or conditional renders).
```ts
const startRecording = useCallback(async () => {
  setError(null);
  setTranscript("");
  itemsRef.current.clear();
  orderRef.current = [];

  try {
    const session = await API.liveTranscription.getToken({
      facility_id: options.facilityId,
      language: options.language,
    });

    if (isGoogleSession(session)) {
      await startGoogleRecording(session);
    } else {
      await startOpenAIRecording(session);
    }
  } catch (err: unknown) {
```
startRecording() can be invoked multiple times before isRecording flips to true (e.g., double-click or rapid toggles while the WS/audio setup is in-flight). Because refs are overwritten, this can leak the previous WebSocket/AudioContext/MediaStream and leave orphan connections. Consider guarding with an isStarting flag (or early-return when wsRef.current/audioContextRef.current is already set) and/or calling cleanup() before starting a new session.
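The re-entrancy guard described above can be sketched as a wrapper around the session setup. This is a hypothetical sketch, not the PR's code: `makeGuardedStart`, `beginSession`, and the flag object are illustrative names standing in for an `isStartingRef` inside the real `startRecording`.

```typescript
// Wrap an async session starter so that concurrent calls made before the
// first one finishes are rejected instead of overwriting the refs that
// hold the WebSocket/AudioContext/MediaStream.
function makeGuardedStart(beginSession: () => Promise<void>) {
  const ref = { isStarting: false, isActive: false };
  return async function startRecording(): Promise<boolean> {
    // Early-return while a previous start is in flight or still active.
    if (ref.isStarting || ref.isActive) return false;
    ref.isStarting = true;
    try {
      await beginSession();
      ref.isActive = true;
      return true; // this call won; resources are owned by this session
    } finally {
      ref.isStarting = false;
    }
  };
}
```

In the hook itself, the flag would live on a `useRef` so double-clicks during WS/audio setup become no-ops; alternatively `cleanup()` could be called up front to tear down any half-started session first.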
@copilot apply changes based on this feedback
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/manifest.tsx (1)
12-17: ⚠️ Potential issue | 🟠 Major — Don't use `any` for component props in the exported manifest interface.

The `components` field in the `Manifest` interface erases prop type information by using `React.FC<any>`. Both `NoteMessageInput` (which accepts `className`, `message`, and `setMessage`) and `Scribe` (which accepts `formState` and `setFormState`) have explicitly typed props. Since this manifest is re-exported from `src/main.tsx`, federated consumers lose compile-time prop validation. Define a proper component type map instead of using `any`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/manifest.tsx` around lines 12 - 17, The Manifest.components typing currently erases prop types by using React.FC<any>; update the Manifest interface to preserve component prop types by replacing the loose any with a typed component map (e.g., a union or generic record keyed by component names) that uses the actual prop interfaces for NoteMessageInput and Scribe (and any other exported components) so federated consumers retain compile-time validation; locate the Manifest interface in src/manifest.tsx and replace components: Record<string, React.LazyExoticComponent<React.FC<any>>> with a strongly typed map referencing the real prop types (or make Manifest generic) and update exports in src/main.tsx accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/NotesScribe.tsx`:
- Around line 73-85: The microphone Button is icon-only and lacks an accessible
name; update the Button in NotesScribe (the element rendering MicrophoneIcon and
using props className, isRecording, and onClick={handleToggleRecording}) to
include an appropriate accessible name and state: add a dynamic aria-label such
as aria-label={isRecording ? "Stop transcription" : "Start transcription"} and
also include aria-pressed={isRecording} (or aria-checked) to convey toggle state
to assistive tech.
- Around line 40-44: The effect in NotesScribe that syncs transcript to parent
only sets when transcript is truthy, so when startRecording() resets transcript
to "" the parent draft isn't cleared and stale text remains; update the
useEffect (the effect that depends on transcript and calls setMessage) to always
call setMessage(transcript) (including empty strings) so the parent draft is
cleared on session reset—locate the useEffect referencing transcript and
setMessage and remove the truthy-only guard so it assigns the transcript value
unconditionally.
In `@src/hooks/useLiveTranscription.ts`:
- Around line 191-217: The "error" message handler in ws.onmessage (inside
useLiveTranscription) currently only calls setError but leaves
audio/WebSocket/audio context active; update the "error" case in the
ws.onmessage switch to call cleanup() (the same cleanup used by ws.onerror) in
addition to setting error state so audio capture and resources are stopped
immediately; ensure this change is applied for both provider branches (Google
provider's ws.onmessage block and the OpenAI provider's equivalent) so cleanup()
is invoked whenever msg.type === "error".
- Around line 278-329: The committed ordering isn't applied because items pushed
into orderRef.current by the "conversation.item.input_audio_transcription.delta"
and "conversation.item.input_audio_transcription.completed" handlers prevent
insertInOrder from repositioning them when "input_audio_buffer.committed"
arrives; fix by changing insertInOrder (or the committed-case logic) to remove
any existing occurrence of msg.item_id from orderRef.current before inserting at
the correct spot using previous_item_id so committed ordering is enforced;
update symbols: modify insertInOrder(...) to check for and remove an existing
itemId in orderRef.current (or in the "input_audio_buffer.committed" case remove
the id first) and then perform the insertion using previous_item_id, keeping
itemsRef.current updates as-is.
- Around line 124-129: The code currently appends the CARE_ACCESS_TOKEN_KEY
value to session.url and creates a WebSocket with credentials in the query
string; change this to create the WebSocket without the token (new
WebSocket(session.url)) and authenticate by sending the token in an initial
authenticated message after open (e.g., on ws.onopen send JSON { type: "auth",
token }) or by using a supported subprotocol if the backend implements it;
update the logic around useLiveTranscription to retrieve token from localStorage
(still via CARE_ACCESS_TOKEN_KEY), open ws using session.url, and perform
authentication over the socket immediately after connection instead of embedding
the token in the URL (ensure backend is updated to accept this authentication
method).
- Around line 95-118: The cleanup function created with useCallback (cleanup) is
never registered to run on unmount; add a useEffect inside the
useLiveTranscription hook that returns the cleanup function so resources are
released on unmount (i.e., useEffect(() => { return cleanup; }, [cleanup]) or
useEffect(() => () => cleanup(), [cleanup])); keep cleanup memoized via its
current useCallback dependency array so the effect dependency is stable. Ensure
you do not call cleanup immediately — the effect must return it.
- Around line 483-492: stopRecording currently always sends {"type":"stop"};
update logic to track the active provider in startRecording (e.g., set a
provider string like "google" or "openai" on a ref or state such as
activeProviderRef in useLiveTranscription) and then in stopRecording, check
wsRef.current?.readyState === WebSocket.OPEN and send provider-specific cancel
messages: send {"type":"stop"} for Google and {"type":"response.cancel"} for
OpenAI Realtime. Keep the existing try/catch around wsRef.current.send and
preserve calling cleanup() afterward; reference startRecording, stopRecording,
wsRef, and the new activeProviderRef (or equivalent) when making the changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 34081edd-9a81-4162-b12a-eddac06bb79d
📒 Files selected for processing (7)
- src/components/NotesScribe.tsx
- src/hooks/useLiveTranscription.ts
- src/manifest.tsx
- src/types.ts
- src/utils/api.ts
- src/utils/benchmark-constants.ts
- src/utils/constants.ts
```ts
const cleanup = useCallback(() => {
  if (workletRef.current) {
    workletRef.current.disconnect();
    workletRef.current = null;
  }
  if (sourceRef.current) {
    sourceRef.current.disconnect();
    sourceRef.current = null;
  }
  if (streamRef.current) {
    streamRef.current.getTracks().forEach((track) => track.stop());
    streamRef.current = null;
  }
  if (audioContextRef.current) {
    audioContextRef.current.close();
    audioContextRef.current = null;
  }
  if (wsRef.current) {
    wsRef.current.close();
    wsRef.current = null;
  }
  setIsConnected(false);
  setIsRecording(false);
}, []);
```
Register cleanup on unmount with correct effect pattern.
The hook defines a cleanup function but never registers it to run when the component unmounts. Resources like the microphone, audio context, and websocket can remain open after the UI disappears. This violates the coding guideline: src/**/*.{ts,tsx}: Handle cleanup in useEffect hooks.
Note: a subtly wrong variant, `useEffect(() => { cleanup(); }, [cleanup])`, calls cleanup immediately on mount (not desired). The correct pattern is to return the cleanup function so it runs on unmount:

```ts
useEffect(() => {
  return cleanup;
}, [cleanup]);
```

Or explicitly wrapped:

```ts
useEffect(() => {
  return () => cleanup();
}, [cleanup]);
```
src/hooks/useLiveTranscription.ts
Outdated
src/hooks/useLiveTranscription.ts
Outdated

```ts
const token = localStorage.getItem(CARE_ACCESS_TOKEN_KEY);
if (!token) throw new Error("No access token available");

// Build WS URL with JWT token
const wsUrl = `${session.url}?token=${encodeURIComponent(token)}`;
const ws = new WebSocket(wsUrl);
```
Avoid putting the CARE bearer token in the WebSocket URL query string.
Query-string credentials are routinely captured in proxy logs, access logs, and browser history, creating a security vulnerability. The backend should provide a URL without embedded credentials and support authentication via WebSocket headers, subprotocols, or an authenticated bootstrap message instead. This is a known security anti-pattern that needs to be addressed at the API design level.
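One way to move the credential off the URL is an authenticated bootstrap message sent as soon as the socket opens. This is a sketch under a stated assumption: the `{ type: "auth", token }` payload is an illustrative shape the backend would have to be updated to accept, not an existing protocol.

```typescript
// Structural stand-in for the parts of WebSocket the sketch needs.
interface WsLike {
  onopen: (() => void) | null;
  send(data: string): void;
}

// Chain an onopen handler that sends the bearer token over the socket,
// so the connection URL itself carries no credentials.
function authenticateOnOpen(ws: WsLike, token: string): void {
  const prev = ws.onopen;
  ws.onopen = () => {
    ws.send(JSON.stringify({ type: "auth", token })); // assumed message shape
    prev?.(); // preserve any handler the hook already installed
  };
}
```

With this, the hook would open `new WebSocket(session.url)` (no query string), still reading the token from localStorage via `CARE_ACCESS_TOKEN_KEY`, and authenticate immediately after connection.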
```ts
// Handle incoming transcript messages
ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);

  switch (msg.type) {
    case "transcript": {
      if (msg.is_final) {
        setTranscript((prev) =>
          prev ? `${prev} ${msg.text}` : msg.text,
        );
      }
      break;
    }
    case "error": {
      console.error("Google STT error:", msg.detail);
      setError(msg.detail || "Transcription error");
      break;
    }
  }
};

ws.onclose = () => setIsConnected(false);
ws.onerror = () => {
  setError("WebSocket error");
  cleanup();
};
```
Stop audio capture on provider-level errors.
When the server sends an "error" message, the code only sets error state but leaves the audio capture, WebSocket, and audio context active. This causes audio to continue streaming to a failed transcription session. Both the Google provider (line 206) and OpenAI provider (line 334) need to call cleanup() in their error handlers to immediately stop audio capture.
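The fix can be isolated as a small message-routing helper. This is an illustrative sketch, not the hook's actual `onmessage` body: `handleProviderMessage` and the narrowed message union are assumed names, but the behavior shown — calling `cleanup()` on an `"error"` message — is the change being requested.

```typescript
// Narrowed message shape for the sketch.
type ProviderMessage =
  | { type: "transcript"; is_final: boolean; text: string }
  | { type: "error"; detail?: string };

// On a provider-level error, record the error AND tear the session down,
// so audio capture does not keep streaming into a failed session.
function handleProviderMessage(
  msg: ProviderMessage,
  setError: (e: string) => void,
  cleanup: () => void,
): void {
  if (msg.type === "error") {
    setError(msg.detail || "Transcription error");
    cleanup(); // stop mic capture and close the socket immediately
  }
}
```

The same `cleanup()` call would be added to the OpenAI branch's `"error"` case, mirroring what `ws.onerror` already does.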
```ts
case "conversation.item.input_audio_transcription.delta": {
  const item = itemsRef.current.get(msg.item_id);
  if (item) {
    item.delta += msg.delta;
  } else {
    itemsRef.current.set(msg.item_id, {
      itemId: msg.item_id,
      previousItemId: null,
      delta: msg.delta,
      transcript: null,
    });
    if (!orderRef.current.includes(msg.item_id)) {
      orderRef.current.push(msg.item_id);
    }
  }
  setTranscript(buildOrderedTranscript());
  break;
}

case "conversation.item.input_audio_transcription.completed": {
  const item = itemsRef.current.get(msg.item_id);
  if (item) {
    item.transcript = msg.transcript;
  } else {
    itemsRef.current.set(msg.item_id, {
      itemId: msg.item_id,
      previousItemId: null,
      delta: "",
      transcript: msg.transcript,
    });
    if (!orderRef.current.includes(msg.item_id)) {
      orderRef.current.push(msg.item_id);
    }
  }
  setTranscript(buildOrderedTranscript());
  break;
}

case "input_audio_buffer.committed": {
  const existing = itemsRef.current.get(msg.item_id);
  if (existing) {
    existing.previousItemId = msg.previous_item_id ?? null;
  } else {
    itemsRef.current.set(msg.item_id, {
      itemId: msg.item_id,
      previousItemId: msg.previous_item_id ?? null,
      delta: "",
      transcript: null,
    });
  }
  insertInOrder(msg.item_id, msg.previous_item_id ?? null);
  break;
```
Committed ordering is never applied to items already in the order array.
Items added via delta/completed handlers (lines 289-291, 308-310) are immediately pushed to orderRef.current. When input_audio_buffer.committed arrives later with previous_item_id, insertInOrder() exits early at line 73 because the item already exists in the order array. The item never gets repositioned, so transcript order becomes event-arrival order instead of the correct committed order.
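The repositioning logic can be sketched as a pure function over the order array. This is a sketch of the proposed fix, not the hook's current code: `order` stands in for `orderRef.current`, and the key change is removing any existing occurrence of the item before re-inserting it at its committed position.

```typescript
// Insert itemId after previousItemId, repositioning it if it already
// exists in the order (e.g. because a delta event arrived first).
function insertInOrder(
  order: string[],
  itemId: string,
  previousItemId: string | null,
): void {
  const existing = order.indexOf(itemId);
  if (existing !== -1) order.splice(existing, 1); // allow repositioning
  if (!previousItemId) {
    order.unshift(itemId); // no predecessor: item belongs at the front
    return;
  }
  const prevIdx = order.indexOf(previousItemId);
  if (prevIdx === -1) {
    order.push(itemId); // predecessor not seen yet: append as a fallback
  } else {
    order.splice(prevIdx + 1, 0, itemId);
  }
}
```

With this, an item whose deltas arrived before its predecessor's is moved into the committed position once `input_audio_buffer.committed` names its `previous_item_id`, so the transcript follows committed order rather than event-arrival order.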
src/hooks/useLiveTranscription.ts
Outdated
```ts
const stopRecording = useCallback(() => {
  // Send stop signal for Google provider before cleanup
  if (wsRef.current?.readyState === WebSocket.OPEN) {
    try {
      wsRef.current.send(JSON.stringify({ type: "stop" }));
    } catch {
      // Ignore send errors during cleanup
    }
  }
  cleanup();
```
🌐 Web query:
OpenAI realtime API stop signal message format
💡 Result:
The "stop signal" in OpenAI's Realtime API refers to the client event used to cancel an ongoing model response, which is the `response.cancel` event. This is sent by the client over WebSocket or WebRTC to interrupt the model's generation (e.g., for user interruptions via VAD). Message format (JSON):

{ "type": "response.cancel", "event_id": "optional-client-generated-id" }

No response_id or other fields are required; it cancels the current in-progress response. The server responds with a `response.done` event (status: "cancelled") or an error if no response is active.

To fully stop audio output (especially in WebRTC/SIP), follow with `output_audio_buffer.clear`:

{ "type": "output_audio_buffer.clear", "event_id": "optional-client-generated-id" }

This triggers the `output_audio_buffer.cleared` server event. For truncation on interruption (to remove unplayed assistant content from conversation history), use `conversation.item.truncate` after detecting `input_audio_buffer.speech_started` or `conversation.interrupted`, specifying the `item_id` and `audio_samples_played` (number of samples played before the interrupt).

Sources confirm this from official API reference pages on client events, with examples in interruption guides. There are no traditional "stop" sequences as in the Chat Completions API; Realtime uses an event-based protocol.
Citations:
- 1: https://developers.openai.com/api/docs/api-reference/realtime-client-events/response/cancel
- 2: https://platform.openai.com/docs/api-reference/realtime-client-events/session
- 3: https://developers.openai.com/api/reference/resources/realtime/client-events/
- 4: https://developers.openai.com/api/docs/guides/realtime-conversations
- 5: https://developers.openai.com/api/docs/guides/realtime-conversations/
- 6: https://developers.openai.com/api/reference/resources/realtime/client-events
Track active provider and send provider-specific stop signals.
stopRecording sends {"type":"stop"} to all websockets, but OpenAI Realtime API uses {"type":"response.cancel"} to cancel responses. Google STT expects {"type":"stop"}. Store which provider is active during startRecording and conditionally send the appropriate message type in stopRecording.
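The provider-aware selection can be reduced to a tiny pure helper. This is an illustrative sketch: `selectStopMessage` and the `"google" | "openai"` provider tag are assumed names, while the OpenAI payload follows the Realtime API's `response.cancel` client event described above.

```typescript
type Provider = "google" | "openai";

// Choose the provider-specific teardown message to send before cleanup().
function selectStopMessage(provider: Provider): string {
  return provider === "openai"
    ? JSON.stringify({ type: "response.cancel" }) // Realtime API cancel event
    : JSON.stringify({ type: "stop" }); // Google STT middleware stop signal
}
```

In the hook, `startRecording` would record the provider on something like an `activeProviderRef`, and `stopRecording` would send `selectStopMessage(activeProviderRef.current)` inside its existing try/catch before calling `cleanup()`.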
Deploying care-scribe-fe with Cloudflare Pages

| | |
|---|---|
| Latest commit | 28aea4c |
| Status | ✅ Deploy successful! |
| Preview URL | https://a2fdff55.care-scribe-fe.pages.dev |
| Branch Preview URL | https://added-gemini-3.care-scribe-fe.pages.dev |
Actionable comments posted: 2
♻️ Duplicate comments (6)
src/hooks/useLiveTranscription.ts (4)
1-1: ⚠️ Potential issue | 🔴 Critical — Register `cleanup` on unmount.

The hook tears down the recorder, stream, audio context, and socket, but nothing returns `cleanup` from an effect. Unmounting mid-session can leave the mic and websocket alive.

Possible fix

```diff
-import { useCallback, useRef, useState } from "react";
+import { useCallback, useEffect, useRef, useState } from "react";
 ...
   const cleanup = useCallback(() => {
     if (mediaRecorderRef.current) {
       if (mediaRecorderRef.current.state !== "inactive") {
         try {
           mediaRecorderRef.current.stop();
@@
     setIsConnected(false);
     setIsRecording(false);
   }, []);
+
+  useEffect(() => {
+    return () => cleanup();
+  }, [cleanup]);
```

As per coding guidelines, "Handle cleanup in useEffect hooks".
Also applies to: 113-151
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/hooks/useLiveTranscription.ts` at line 1, The useLiveTranscription hook currently builds a cleanup function that stops recorder, stream, audioContext and socket but never registers it on unmount; update the effect(s) in useLiveTranscription to return the cleanup function (or call it in a useEffect with empty deps) so it runs on component unmount, ensuring recorder, stream, audioContext and socket are closed (reference symbols: cleanup, recorder, stream, audioContext, socket, useLiveTranscription).
88-109: ⚠️ Potential issue | 🟠 Major

Committed ordering never repositions existing items.

Delta/completed events add IDs to orderRef.current before "input_audio_buffer.committed" arrives. Because insertInOrder() bails out on existing IDs, the committed order is never applied.

Possible fix:

```diff
   const insertInOrder = useCallback(
     (itemId: string, previousItemId: string | null) => {
       const order = orderRef.current;
-      if (order.includes(itemId)) return;
+      const existingIdx = order.indexOf(itemId);
+      if (existingIdx !== -1) {
+        order.splice(existingIdx, 1);
+      }
       if (!previousItemId) {
-        if (order.length === 0) {
-          order.push(itemId);
-        } else {
-          order.push(itemId);
-        }
+        order.unshift(itemId);
       } else {
         const prevIdx = order.indexOf(previousItemId);
         if (prevIdx === -1) {
           order.push(itemId);
         } else {
```
Verify each finding against the current code and only fix it if needed. In `@src/hooks/useLiveTranscription.ts` around lines 88 - 109, insertInOrder currently returns early if itemId already exists, preventing later "committed" events from reordering items; change it so that when an existing itemId is found in orderRef.current it is removed first, then re-inserted according to previousItemId (if previousItemId is null push to end or start as intended, else find prevIdx and splice after it, falling back to push if prevIdx === -1). Update the logic inside insertInOrder to remove any existing occurrence before deciding where to insert so committed ordering can reposition items correctly.
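For reference, the repositioning behavior can be sketched as a standalone function (plain array instead of the hook's orderRef.current, so it can be exercised in isolation — this mirrors the suggested fix, not the code currently in the PR):

```typescript
// Insert itemId after previousItemId, repositioning it if it already exists.
// A null previousItemId means "no predecessor", so the item leads the list.
function insertInOrder(
  order: string[],
  itemId: string,
  previousItemId: string | null,
): void {
  const existingIdx = order.indexOf(itemId);
  if (existingIdx !== -1) order.splice(existingIdx, 1); // allow repositioning

  if (!previousItemId) {
    order.unshift(itemId);
    return;
  }
  const prevIdx = order.indexOf(previousItemId);
  if (prevIdx === -1) {
    order.push(itemId); // unknown predecessor: fall back to appending
  } else {
    order.splice(prevIdx + 1, 0, itemId);
  }
}
```

With this shape, a late "input_audio_buffer.committed" event can move an item that a delta event already registered.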
240-243: ⚠️ Potential issue | 🟠 Major

Stop capture when the provider sends an "error" event.

Both provider branches only set error state here. Audio capture keeps running until a lower-level websocket failure happens, so the client can continue streaming into an already failed session.

Possible fix:

```diff
       case "error": {
         console.error("Google STT error:", msg.detail);
         setError(msg.detail || "Transcription error");
+        cleanup();
         break;
       }
 ...
       case "error": {
         console.error("Realtime API error:", msg.error);
         setError(msg.error?.message || "Transcription error");
+        cleanup();
         break;
       }
```

Also applies to: 377-380
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/hooks/useLiveTranscription.ts` around lines 240 - 243, In the "error" case where you currently call setError(msg.detail || "Transcription error") (seen in the provider message handler cases at the block around setError and the similar block at lines ~377-380), also stop the ongoing audio capture and mark capturing as stopped; specifically invoke the existing stopCapture() (or if not available, call whatever routine closes the capture stream and websocket and setIsCapturing(false)) so the client stops sending audio into the failed session and performs any cleanup.
157-159: ⚠️ Potential issue | 🟠 Major

Avoid putting the Google session token in the websocket URL.

Query-string credentials are routinely captured in proxy/access logs and browser tooling. This should move to an authenticated bootstrap message or another backend-supported channel.
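A minimal sketch of the bootstrap alternative — note the `{ type: "auth", token }` first frame is a hypothetical protocol the middleware would need to support, not something the backend currently implements:

```typescript
// Hypothetical auth frame shape; the real middleware protocol is an assumption.
type AuthMessage = { type: "auth"; token: string };

function buildAuthMessage(token: string): string {
  const msg: AuthMessage = { type: "auth", token };
  return JSON.stringify(msg);
}

// Narrow socket surface so the wiring is testable without a live connection.
interface SocketLike {
  send(data: string): void;
  onopen: (() => void) | null;
}

// Connect to the bare URL (no token in the query string), then authenticate
// with the first frame once the socket opens.
function bootstrapAuth(ws: SocketLike, token: string): void {
  ws.onopen = () => ws.send(buildAuthMessage(token));
}
```

The token then travels only in the encrypted websocket payload, never in access logs.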
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/hooks/useLiveTranscription.ts` around lines 157 - 159, The code is embedding session.token in the websocket URL (wsUrl) which exposes credentials; instead open the WebSocket to session.url without the token and send an authenticated bootstrap message after the connection is established (e.g., on ws.onopen send a JSON auth message with the token or a short-lived auth handle), then have the server validate that message and proceed; update useLiveTranscription.ts to stop constructing wsUrl with token and instead create new WebSocket(session.url) and send the token via an auth bootstrap message (reference: wsUrl, session.token, WebSocket, and the ws onopen/onmessage handlers).

src/components/NotesScribe.tsx (2)
164-175: ⚠️ Potential issue | 🟠 Major

Label the microphone control for assistive tech.

This is still an icon-only button, so screen readers get no name for the start/stop transcription action.
Possible fix
```diff
         <Button
+          aria-label={
+            isRecording ? "Stop live transcription" : "Start live transcription"
+          }
+          aria-pressed={isRecording}
           className={cn(
             className,
             "size-10 shrink-0",
```
Verify each finding against the current code and only fix it if needed. In `@src/components/NotesScribe.tsx` around lines 164 - 175, The microphone toggle Button (the JSX Button using cn, className, isRecording, handleToggleRecording, isStarting) is icon-only and lacks an accessible name; add an accessible label by supplying a descriptive aria-label (e.g., "Start transcription" / "Stop transcription" based on isRecording) or include visually-hidden text inside the Button so screen readers can announce the action, ensuring the aria-label updates when isRecording changes and keeping the existing props (onClick=handleToggleRecording, disabled=isStarting) intact.
63-67: ⚠️ Potential issue | 🟠 Major

Sync transcript resets back into noteMessage.

This effect still ignores "", so a new recording session keeps the previous transcript in shared state until the next chunk arrives.

Possible fix:

```diff
+  const hasSyncedTranscript = useRef(false);
+
   // Update the shared message state with the live transcript
   useEffect(() => {
-    if (transcript) {
-      setMessage(transcript);
-    }
+    if (!hasSyncedTranscript.current) {
+      hasSyncedTranscript.current = true;
+      return;
+    }
+    setMessage(transcript);
   }, [transcript, setMessage]);
```
Verify each finding against the current code and only fix it if needed. In `@src/components/NotesScribe.tsx` around lines 63 - 67, The effect currently skips updating note state when transcript is an empty string, leaving old text around; in the useEffect block that references transcript and setMessage, change the guard so it only skips on null/undefined (e.g., use transcript != null or transcript !== undefined) instead of falsy checks, so setMessage is called with "" to correctly reset noteMessage on a new recording session; update the useEffect around transcript and setMessage accordingly.
🧹 Nitpick comments (2)
src/types.ts (1)
6-29: Consider adding a provider discriminator to OpenAILiveTranscriptionSession.

GoogleLiveTranscriptionSession has provider: "google" for type discrimination, but OpenAILiveTranscriptionSession lacks a corresponding provider: "openai" field. This makes runtime type narrowing in the union rely on checking for the presence of other fields (e.g., client_secret vs url).

Adding a discriminator would enable cleaner type guards:

```ts
if (session.provider === "google") { ... } else if (session.provider === "openai") { ... }
```

♻️ Proposed change:

```diff
 export type OpenAILiveTranscriptionSession = {
+  provider: "openai";
   session_id: string;
   id: string;
   model: string;
   client_secret: {
     value: string;
     expires_at: number;
   };
 };
```
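Self-contained illustration of the narrowing this buys — the session shapes below are trimmed-down assumptions based on this thread, not the full types in src/types.ts:

```typescript
// Trimmed, hypothetical session shapes with the proposed discriminator.
type GoogleLiveTranscriptionSession = {
  provider: "google";
  url: string;
  token: string;
};

type OpenAILiveTranscriptionSession = {
  provider: "openai";
  client_secret: { value: string; expires_at: number };
};

type LiveTranscriptionSession =
  | GoogleLiveTranscriptionSession
  | OpenAILiveTranscriptionSession;

// The literal `provider` field lets the compiler narrow the union —
// no structural probing for client_secret vs url.
function credentialOf(session: LiveTranscriptionSession): string {
  return session.provider === "google"
    ? session.token
    : session.client_secret.value;
}
```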
Verify each finding against the current code and only fix it if needed. In `@src/types.ts` around lines 6 - 29, Add a provider discriminator "openai" to the OpenAILiveTranscriptionSession type so the LiveTranscriptionSession union can be narrowed by session.provider; update the OpenAILiveTranscriptionSession type definition to include provider: "openai" and then update any code that constructs or parses OpenAILiveTranscriptionSession objects (factories, deserializers, tests) to set provider accordingly and adjust any type guards or switches that currently detect OpenAI sessions by checking for client_secret instead of session.provider.

src/pages/Usage.tsx (1)
164-172: Consider using a distinct icon for live transcription.

Both OCR (line 156) and live transcription use CameraIcon. Consider using a different icon (e.g., MicrophoneIcon, RadioIcon, or ActivityLogIcon from Radix) to visually distinguish live transcription from OCR in the overview.
Verify each finding against the current code and only fix it if needed. In `@src/pages/Usage.tsx` around lines 164 - 172, The live transcription row currently reuses CameraIcon; change its icon to a distinct one (e.g., MicrophoneIcon or RadioIcon) by updating the object with label t("live_transcription") to use <MicrophoneIcon /> (or another chosen Radix icon) and add that icon to the component imports (e.g., include MicrophoneIcon from the Radix icons package) in Usage.tsx; if CameraIcon is no longer used elsewhere remove it from the imports, otherwise keep both.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/TncDialog.tsx`:
- Around line 46-58: The Accept button is enabled while the terms body (tnc) is
still loading; disable the Button until tnc is available by checking tnc before
allowing acceptance. Update the Button rendered in DialogFooter (the Button that
calls onAccept() and onOpenChange(false)) to set a disabled prop when tnc is
falsy or equals the placeholder ("LOADING..."), and ensure any onClick handler
early-returns if tnc is not present so onAccept/onOpenChange cannot be invoked
when terms are missing.
- Around line 43-47: TncDialog currently injects backend HTML directly via
dangerouslySetInnerHTML using the tnc variable and enables the accept button
while tnc is undefined; add defense-in-depth sanitization and a loading-gated
disabled state: install and import DOMPurify, sanitize tnc before passing it to
dangerouslySetInnerHTML inside the TncDialog component (use
DOMPurify.sanitize(tnc || "")), and change the accept button (the dialog's
accept/confirm control) to be disabled when tnc is falsy or equals the loading
placeholder so users cannot accept before content loads; ensure you still render
a safe fallback like "LOADING..." but only as plain text, not as unsanitized
HTML.
---
Duplicate comments:
In `@src/components/NotesScribe.tsx`:
- Around line 164-175: The microphone toggle Button (the JSX Button using cn,
className, isRecording, handleToggleRecording, isStarting) is icon-only and
lacks an accessible name; add an accessible label by supplying a descriptive
aria-label (e.g., "Start transcription" / "Stop transcription" based on
isRecording) or include visually-hidden text inside the Button so screen readers
can announce the action, ensuring the aria-label updates when isRecording
changes and keeping the existing props (onClick=handleToggleRecording,
disabled=isStarting) intact.
- Around line 63-67: The effect currently skips updating note state when
transcript is an empty string, leaving old text around; in the useEffect block
that references transcript and setMessage, change the guard so it only skips on
null/undefined (e.g., use transcript != null or transcript !== undefined)
instead of falsy checks, so setMessage is called with "" to correctly reset
noteMessage on a new recording session; update the useEffect around transcript
and setMessage accordingly.
In `@src/hooks/useLiveTranscription.ts`:
- Line 1: The useLiveTranscription hook currently builds a cleanup function that
stops recorder, stream, audioContext and socket but never registers it on
unmount; update the effect(s) in useLiveTranscription to return the cleanup
function (or call it in a useEffect with empty deps) so it runs on component
unmount, ensuring recorder, stream, audioContext and socket are closed
(reference symbols: cleanup, recorder, stream, audioContext, socket,
useLiveTranscription).
- Around line 88-109: insertInOrder currently returns early if itemId already
exists, preventing later "committed" events from reordering items; change it so
that when an existing itemId is found in orderRef.current it is removed first,
then re-inserted according to previousItemId (if previousItemId is null push to
end or start as intended, else find prevIdx and splice after it, falling back to
push if prevIdx === -1). Update the logic inside insertInOrder to remove any
existing occurrence before deciding where to insert so committed ordering can
reposition items correctly.
- Around line 240-243: In the "error" case where you currently call
setError(msg.detail || "Transcription error") (seen in the provider message
handler cases at the block around setError and the similar block at lines
~377-380), also stop the ongoing audio capture and mark capturing as stopped;
specifically invoke the existing stopCapture() (or if not available, call
whatever routine closes the capture stream and websocket and
setIsCapturing(false)) so the client stops sending audio into the failed session
and performs any cleanup.
- Around line 157-159: The code is embedding session.token in the websocket URL
(wsUrl) which exposes credentials; instead open the WebSocket to session.url
without the token and send an authenticated bootstrap message after the
connection is established (e.g., on ws.onopen send a JSON auth message with the
token or a short-lived auth handle), then have the server validate that message
and proceed; update useLiveTranscription.ts to stop constructing wsUrl with
token and instead create new WebSocket(session.url) and send the token via an
auth bootstrap message (reference: wsUrl, session.token, WebSocket, and the ws
onopen/onmessage handlers).
---
Nitpick comments:
In `@src/pages/Usage.tsx`:
- Around line 164-172: The live transcription row currently reuses CameraIcon;
change its icon to a distinct one (e.g., MicrophoneIcon or RadioIcon) by
updating the object with label t("live_transcription") to use <MicrophoneIcon />
(or another chosen Radix icon) and add that icon to the component imports (e.g.,
include MicrophoneIcon from the Radix icons package) in Usage.tsx; if CameraIcon
is no longer used elsewhere remove it from the imports, otherwise keep both.
In `@src/types.ts`:
- Around line 6-29: Add a provider discriminator "openai" to the
OpenAILiveTranscriptionSession type so the LiveTranscriptionSession union can be
narrowed by session.provider; update the OpenAILiveTranscriptionSession type
definition to include provider: "openai" and then update any code that
constructs or parses OpenAILiveTranscriptionSession objects (factories,
deserializers, tests) to set provider accordingly and adjust any type guards or
switches that currently detect OpenAI sessions by checking for client_secret
instead of session.provider.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 63cf91a0-8b76-4811-a89f-307ce4a68c85
📒 Files selected for processing (14)
- public/locale/en.json
- src/components/Controller.tsx
- src/components/History.tsx
- src/components/NotesScribe.tsx
- src/components/QuotaSheet.tsx
- src/components/TncDialog.tsx
- src/hooks/useControlState.ts
- src/hooks/useLiveTranscription.ts
- src/pages/HistoryDetails.tsx
- src/pages/HistoryList.tsx
- src/pages/Quotas.tsx
- src/pages/Usage.tsx
- src/types.ts
- src/utils/api.ts
✅ Files skipped from review due to trivial changes (2)
- public/locale/en.json
- src/components/History.tsx
…es on unmount Agent-Logs-Url: https://github.com/ohcnetwork/care_scribe_fe/sessions/bd0ab99e-aa3e-4d7f-978e-d47a6af25a1e Co-authored-by: shivankacker <23238460+shivankacker@users.noreply.github.com>
Actionable comments posted: 3
🧹 Nitpick comments (1)
src/hooks/useControlState.ts (1)
13-18: Define shared control state keys in src/types.ts to enable type-safe registry lookups.

The current Map<string, ControlStore> registry with as ControlStore<T> casting permits type mismatches at the call site. For example, useControlState<number>("noteMessage", 0) would compile and incorrectly reuse an existing ControlStore<string> from a prior call. Define a discriminated or branded key type in src/types.ts (e.g., type ControlStateKey<T> = Brand<string, T>) and refactor StoreRegistry to enforce type consistency per key, eliminating the cast in getOrCreateStore.

Aligns with: `**/*.{ts,tsx}`: Use strict type checking; `src/**/*.{ts,tsx}`: Define shared types in src/types.ts.
Verify each finding against the current code and only fix it if needed. In `@src/hooks/useControlState.ts` around lines 13 - 18, The registry currently uses Map<string, ControlStore> which allows unsafe casts in useControlState/getOrCreateStore and lets callers reuse a ControlStore with the wrong generic; define a typed key in src/types.ts (e.g., export type ControlStateKey<T> = Brand<string, T> or a discriminated union of known keys) and change StoreRegistry to Map<ControlStateKey<any>, ControlStore<any>> (or Map<ControlStateKey<T>, ControlStore<T>> where appropriate) so lookups are type-safe; update useControlState, getOrCreateStore and the Window.__CARE_CONTROL_STORES__ declaration to use ControlStateKey and remove the as ControlStore<T> cast so the compiler enforces that useControlState<T>("someKey") resolves to a ControlStore<T>.
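A branded-key sketch of the suggestion — Brand, ControlStateKey, and controlKey are hypothetical names that do not exist in the codebase; they show how the key itself can carry the value type so call sites are checked:

```typescript
// Phantom brand: a ControlStateKey<T> is a string that remembers its value type.
declare const brandTag: unique symbol;
type Brand<K, T> = K & { readonly [brandTag]: T };
type ControlStateKey<T> = Brand<string, T>;

function controlKey<T>(name: string): ControlStateKey<T> {
  return name as ControlStateKey<T>;
}

type ControlStore<T> = { value: T };

const registry = new Map<string, ControlStore<unknown>>();

// One internal cast remains, but callers can no longer pair a key with the
// wrong value type: the key's brand fixes T at the call site.
function getOrCreateStore<T>(
  key: ControlStateKey<T>,
  initial: T,
): ControlStore<T> {
  let store = registry.get(key) as ControlStore<T> | undefined;
  if (!store) {
    store = { value: initial };
    registry.set(key, store as ControlStore<unknown>);
  }
  return store;
}

// Usage: noteMessageKey is permanently a string-valued key.
const noteMessageKey = controlKey<string>("noteMessage");
// getOrCreateStore(noteMessageKey, 0); // would not compile
```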
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/NotesScribe.tsx`:
- Around line 64-69: The effect currently replaces the shared draft with the
start-of-recording snapshot (messageBeforeRecording.current) plus the new
transcript, overwriting any edits made after recording started; change the
update to merge the transcript into the current draft instead of using the
initial snapshot—inside the useEffect that depends on transcript, call
setMessage with a functional updater that reads the latest message (prev) and
returns a merged string (e.g., combine prev and transcript with a space), or
alternatively read the current noteMessage from the shared store before
appending; avoid relying solely on messageBeforeRecording.current so later edits
are preserved.
- Around line 97-100: The onload handler for the XHR in NotesScribe.tsx
currently treats only status === 200 as success, causing valid 2xx responses
like 201/204 to be rejected; update the xhr.onload callback (the anonymous
function assigned to xhr.onload) to consider any status in the 200–299 range as
success (e.g., check xhr.status >= 200 && xhr.status < 300) and call resolve()
for those statuses, otherwise reject(new Error(...)) as before so successful
uploads don’t surface false errors or skip the upload_completed patch.
- Around line 57-61: The effect in NotesScribe that assigns the DOM node to the
shared ref (useEffect watching container and containerRef) never clears it on
unmount; update the effect (the useEffect that reads container.current and
writes containerRef.current) to return a cleanup function that resets
containerRef.current to null (or undefined) so consumers don’t retain a detached
element when NotesScribe unmounts; keep the same effect location and references
(container, containerRef) and ensure the cleanup runs on unmount/dep change.
---
Nitpick comments:
In `@src/hooks/useControlState.ts`:
- Around line 13-18: The registry currently uses Map<string, ControlStore> which
allows unsafe casts in useControlState/getOrCreateStore and lets callers reuse a
ControlStore with the wrong generic; define a typed key in src/types.ts (e.g.,
export type ControlStateKey<T> = Brand<string, T> or a discriminated union of
known keys) and change StoreRegistry to Map<ControlStateKey<any>,
ControlStore<any>> (or Map<ControlStateKey<T>, ControlStore<T>> where
appropriate) so lookups are type-safe; update useControlState, getOrCreateStore
and the Window.__CARE_CONTROL_STORES__ declaration to use ControlStateKey and
remove the as ControlStore<T> cast so the compiler enforces that
useControlState<T>("someKey") resolves to a ControlStore<T>.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8d6453db-04dd-4f62-b2ca-b944da4886a4
📒 Files selected for processing (2)
- src/components/NotesScribe.tsx
- src/hooks/useControlState.ts
```ts
  useEffect(() => {
    if (container.current) {
      containerRef.current = container.current;
    }
  }, [container, containerRef]);
```
Clear the shared container ref on unmount.
This effect publishes a DOM node into shared context but never resets it, so consumers can keep a detached element after NotesScribe unmounts.
🧹 Minimal fix
```diff
   useEffect(() => {
-    if (container.current) {
-      containerRef.current = container.current;
+    const node = container.current;
+    if (node) {
+      containerRef.current = node;
     }
+    return () => {
+      if (containerRef.current === node) {
+        containerRef.current = null;
+      }
+    };
   }, [container, containerRef]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/NotesScribe.tsx` around lines 57 - 61, The effect in
NotesScribe that assigns the DOM node to the shared ref (useEffect watching
container and containerRef) never clears it on unmount; update the effect (the
useEffect that reads container.current and writes containerRef.current) to
return a cleanup function that resets containerRef.current to null (or
undefined) so consumers don’t retain a detached element when NotesScribe
unmounts; keep the same effect location and references (container, containerRef)
and ensure the cleanup runs on unmount/dep change.
```ts
  useEffect(() => {
    if (transcript) {
      const prefix = messageBeforeRecording.current;
      setMessage(prefix ? `${prefix} ${transcript}` : transcript);
    }
  }, [transcript, setMessage]);
```
Don't overwrite later draft updates with the start-of-recording snapshot.
Because noteMessage lives in a shared control store, this effect discards any other updates that happen after recording starts and replaces them with messageBeforeRecording.current + transcript on the next chunk. Keep the transcript portion separate, or merge against the current draft instead of the initial snapshot.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/NotesScribe.tsx` around lines 64 - 69, The effect currently
replaces the shared draft with the start-of-recording snapshot
(messageBeforeRecording.current) plus the new transcript, overwriting any edits
made after recording started; change the update to merge the transcript into the
current draft instead of using the initial snapshot—inside the useEffect that
depends on transcript, call setMessage with a functional updater that reads the
latest message (prev) and returns a merged string (e.g., combine prev and
transcript with a space), or alternatively read the current noteMessage from the
shared store before appending; avoid relying solely on
messageBeforeRecording.current so later edits are preserved.
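The merge can live in a pure helper; this assumes setMessage accepts a functional updater like React's setState (the shared control store may need to grow that overload first):

```typescript
// Merge a live transcript into whatever the draft says *now*, rather than
// into the snapshot taken when recording started.
function mergeTranscript(prev: string, transcript: string): string {
  if (!transcript) return prev; // empty chunk: keep the current draft
  return prev ? `${prev} ${transcript}` : transcript;
}

// Inside the effect, instead of overwriting from the snapshot:
//   setMessage((prev) => mergeTranscript(prev, transcript));
```

Edits made mid-recording then survive, because `prev` is read at update time instead of at recording start.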
```ts
      xhr.onload = () =>
        xhr.status === 200
          ? resolve()
          : reject(new Error(`Upload failed: ${xhr.status}`));
```
Treat any 2xx signed-URL response as success.
This currently rejects valid 201/204 uploads, which can surface a false error toast and skip the upload_completed patch even though the blob was stored successfully.
✅ Small fix
```diff
       xhr.onload = () =>
-        xhr.status === 200
+        xhr.status >= 200 && xhr.status < 300
           ? resolve()
           : reject(new Error(`Upload failed: ${xhr.status}`));
```
Verify each finding against the current code and only fix it if needed.
In `@src/components/NotesScribe.tsx` around lines 97 - 100, The onload handler for
the XHR in NotesScribe.tsx currently treats only status === 200 as success,
causing valid 2xx responses like 201/204 to be rejected; update the xhr.onload
callback (the anonymous function assigned to xhr.onload) to consider any status
in the 200–299 range as success (e.g., check xhr.status >= 200 && xhr.status <
300) and call resolve() for those statuses, otherwise reject(new Error(...)) as
before so successful uploads don’t surface false errors or skip the
upload_completed patch.
Summary by CodeRabbit
New Features
Updates
Localization