Conversation
📝 Walkthrough

Adds demo-mode content injection and demo mounting, implements interrupt handling for running agent tasks, adds retry logic for LLM calls, expands the system prompt, and updates frontend message types and rendering to support memory messages and thumbnails.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Client as Client (UI)
participant Server as WebSocket (main.py)
participant Agent as Agent (agent.py)
participant FS as File System (./demo)
participant API as Gemini API
Client->>Server: send message
activate Server
Server->>Agent: start _process_and_respond task
activate Agent
alt demo not yet injected and DEMO_ENABLED=true
Agent->>FS: load demo content
FS-->>Agent: demo text + images
Agent->>API: send demo injection (with retry)
activate API
API-->>Agent: response + memory metadata
deactivate API
Agent->>Server: emit memory_injected event
Server->>Client: send memory message (thumbnail + count)
end
Agent->>API: send user message (with retry on 429/503)
activate API
loop retry on transient errors
API-->>Agent: error (429/503)
Agent->>Agent: exponential backoff delay
end
API-->>Agent: response
deactivate API
Agent->>Server: emit assistant response
Server->>Client: display assistant message
deactivate Agent
deactivate Server
sequenceDiagram
participant Client as Client (UI)
participant Server as WebSocket (main.py)
participant Agent as Agent (agent.py)
Note over Server: processing_task active
Client->>Server: send interrupt message
activate Server
alt processing_task exists
Server->>Agent: queue interrupt with context
activate Agent
Agent->>Agent: set interrupt flag / enqueue context
Agent-->>Server: send interrupt_received status
deactivate Agent
else no active task
Server-->>Client: notify no_active_task
end
deactivate Server
Note over Agent: Next follow-up includes interrupt context
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docker-compose.yml (1)
46-55: Duplicate `volumes` key will cause data loss.

The `volumes` key appears twice in the `python-agent` service (lines 46-48 and 53-55). In YAML, duplicate keys result in the second occurrence overriding the first, meaning the `./screenshots:/tmp/screenshots` mount will be silently dropped. This will break screenshot capture functionality.

Merge both volume definitions into a single block.
🔧 Proposed fix
  python-agent:
    build: ./services/python-agent
    ports:
      - "8000:8000"
    volumes:
      - ./screenshots:/tmp/screenshots
      - ./logs:/app/logs
+     - ./demo:/app/demo:ro
    environment:
      - GEMINI_API_KEY=${GEMINI_API_KEY}
      - MCP_SERVER_URL=http://playwright-browser:3001
      - DEMO_ENABLED=${DEMO_ENABLED:-false}
-   volumes:
-     - ./logs:/app/logs
-     - ./demo:/app/demo:ro
    depends_on:
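The silent last-wins behavior is easy to demonstrate outside Docker Compose. This hedged sketch uses Python's stdlib `json` module, which resolves duplicate object keys the same way most YAML loaders do (the Compose file itself is YAML; this is only an analogy):

```python
import json

# Duplicate keys don't raise an error; the later value silently wins,
# which is exactly how the second `volumes:` block drops the first one.
doc = '{"volumes": ["./screenshots:/tmp/screenshots"], "volumes": ["./logs:/app/logs"]}'
parsed = json.loads(doc)
print(parsed)  # only the second "volumes" list survives
```

This is why a linter-level check (like the YAMLlint `key-duplicates` error flagged below) is the only reliable guard: the parser itself gives no warning.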
🤖 Fix all issues with AI agents
In `@demo/prompt.txt`:
- Line 3: In demo/prompt.txt update the sentence "Pick 3 startups from the YC
2025 batch, check how much they raised by doing a google search in a separate
tab, and add their names and most recent raise to the google docs..." to
correctly capitalize product names: change "google search" to "Google search"
(or "Google Search") and "google docs" to "Google Docs", preserving the rest of
the sentence and punctuation.
In `@services/nextjs-webapp/src/components/ChatWindow.tsx`:
- Around line 333-337: The code in ChatWindow.tsx renders "+{message.imageCount
- 1} more images" whenever message.imageCount is truthy, which produces "+0 more
images" when imageCount === 1; change the guard to only render that span when
message.imageCount > 1 (or compute remaining = message.imageCount - 1 and render
only if remaining > 0) so the extra-images indicator appears only when there are
actually additional images.
In `@services/python-agent/agent.py`:
- Around line 196-221: The is_interrupted flag can remain true forever when
interrupt() is called without context; update _check_interrupt() to clear
self.is_interrupted unconditionally before returning (after checking/consuming
self.interrupt_queue) so a plain interrupt signal is consumed even if
get_nowait() yields None, and then remove the redundant reset of
self.is_interrupted in process_message (so only _check_interrupt handles
clearing the flag); reference the methods interrupt, _check_interrupt, the
attribute is_interrupted and the interrupt_queue in your changes.
In `@services/python-agent/main.py`:
- Around line 74-109: The task currently catches all exceptions in
_process_and_respond, which causes asyncio.CancelledError to be logged and a
response attempted on a closed websocket; modify _process_and_respond to
explicitly handle asyncio.CancelledError by either re-raising it or returning
immediately (so it doesn't hit the generic except), ensure asyncio is imported,
and keep the generic except Exception block for other errors; also avoid sending
on websocket after cancellation by exiting the coroutine before any
websocket.send_json calls in the cancellation branch.
🧹 Nitpick comments (2)
services/nextjs-webapp/src/components/ChatWindow.tsx (1)
189-220: Consider visual indication when send button will interrupt.

The logic correctly sends an 'interrupt' message when `isLoading` is true, but the button still shows "Send". Users might not realize they're interrupting the agent. Consider changing the button text or styling when in interrupt mode.

💡 Example enhancement

<button
  onClick={sendMessage}
  disabled={!isConnected || !input.trim()}
  className={`${isLoading ? 'bg-orange-600 hover:bg-orange-700' : 'bg-blue-600 hover:bg-blue-700'} disabled:bg-gray-700 disabled:cursor-not-allowed text-white px-4 py-2 rounded-lg transition-colors`}
>
  {isLoading ? 'Interrupt' : 'Send'}
</button>

services/python-agent/agent.py (1)
223-263: LGTM!

The retry implementation with exponential backoff correctly handles rate limiting and service unavailability.
Minor optional improvement: Adding jitter to the delay would help prevent thundering herd issues if multiple agents hit rate limits simultaneously.
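A minimal sketch of what that jittered backoff could look like. Names such as `TransientError` and `call_with_retry` are illustrative, not the actual identifiers in agent.py:

```python
import asyncio
import random

class TransientError(Exception):
    """Illustrative stand-in for a 429/503-style failure."""

async def call_with_retry(send_fn, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry transient failures with capped exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return await send_fn()
        except TransientError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            delay = min(base_delay * 2 ** attempt, max_delay)
            # Randomizing the delay desynchronizes agents that all hit a
            # rate limit at the same moment (the "thundering herd").
            await asyncio.sleep(delay * random.uniform(0.5, 1.5))
```

With a function that fails twice and then succeeds, the wrapper returns the eventual result after two short sleeps instead of propagating the transient error.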
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (6)
demo/step1.png is excluded by !**/*.png
demo/step2.png is excluded by !**/*.png
demo/step3.png is excluded by !**/*.png
demo/step4.png is excluded by !**/*.png
demo/step5.png is excluded by !**/*.png
demo/step6.png is excluded by !**/*.png
📒 Files selected for processing (9)
.env.example
demo/description.txt
demo/prompt.txt
docker-compose.yml
services/nextjs-webapp/.gitignore
services/nextjs-webapp/src/components/ChatWindow.tsx
services/python-agent/agent.py
services/python-agent/main.py
services/python-agent/prompts.py
🧰 Additional context used
📓 Path-based instructions (1)
services/python-agent/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
services/python-agent/**/*.py: Run Ruff linting and formatting tool after making changes to Python code in services/python-agent/ with command: `cd services/python-agent && uvx ruff check --fix .`
Run ty type checking tool after making changes to Python code in services/python-agent/ with command: `cd services/python-agent && uvx ty check .`
Files:
services/python-agent/main.py
services/python-agent/prompts.py
services/python-agent/agent.py
🧬 Code graph analysis (1)
services/python-agent/main.py (1)
services/python-agent/agent.py (3)
inject_demo_content (169-189)
process_message (537-663)
interrupt (196-206)
🪛 LanguageTool
demo/prompt.txt
[uncategorized] ~3-~3: “Google” is a proper noun and needs to be capitalized.
Context: ..., check how much they raised by doing a google search in a separate tab, and add their...
(A_GOOGLE)
[uncategorized] ~3-~3: Did you mean “Google Docs”?
Context: ...heir names and most recent raise to the google docs at https://docs.google.com/document/d/1...
(GOOGLE_PRODUCTS)
🪛 YAMLlint (1.37.1)
docker-compose.yml
[error] 53-53: duplication of key "volumes" in mapping
(key-duplicates)
🔇 Additional comments (15)
demo/description.txt (1)
1-1: LGTM!

The demo description file provides a clear task instruction for the demo mode.
services/nextjs-webapp/.gitignore (1)
1-1: LGTM!

Ignoring `.vercel` is appropriate for Next.js projects deployed to Vercel. Consider whether other common entries (like `.next`, `node_modules`, `out/`) should also be added here or are already covered by a parent `.gitignore`.

services/python-agent/prompts.py (1)
45-83: LGTM!

The enhanced system prompt structure is well-organized with clear sections for response format, guidelines, and error recovery. The JSON schema is properly documented and the double-brace escaping in the f-string is correct.
services/nextjs-webapp/src/components/ChatWindow.tsx (3)
6-13: LGTM!

Good extension of the Message interface to support the new `memory` role with optional media metadata.
72-86: LGTM!

Memory injection handling is properly guarded with the `data.metadata` check before appending the message.
90-101: LGTM!

Good UX to notify users when an interrupt has been received and will be applied.
.env.example (1)
4-6: LGTM!

The new environment variable is well-documented with a clear description, and the default value of `false` is a safe, opt-in approach for the demo feature.

services/python-agent/main.py (3)
130-131: LGTM!

Task tracking variable is properly initialized for managing concurrent processing.
166-182: Messages treated as interrupts are not queued for processing.

When a new message arrives while processing is active (lines 171-177), it's queued as interrupt context only. The message content is injected into the current processing loop but won't be processed as a standalone request after the current task completes. This means users must resend their message after the interrupt takes effect.
If this is intentional behavior, consider documenting it. Otherwise, you may want to queue the message for processing after the current task finishes.
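If queuing is the desired behavior, one way to sketch it (hypothetical names; the real handler lives in main.py's websocket loop):

```python
import asyncio

class Session:
    """Sketch of queueing messages that arrive mid-task so they aren't lost."""

    def __init__(self):
        self.processing_task = None  # currently-running asyncio.Task, if any
        self.pending = asyncio.Queue()

    async def on_message(self, text, start_processing):
        # If a task is already running, keep the message for later replay
        # instead of treating it purely as interrupt context.
        if self.processing_task and not self.processing_task.done():
            await self.pending.put(text)
            return "queued"
        self.processing_task = asyncio.create_task(start_processing(text))
        return "processing"
```

After the current task completes, the server could drain `pending` and start a new task per entry, so users wouldn't need to resend.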
189-191: LGTM with optional improvement.

Task cancellation on disconnect is correctly implemented. For cleaner shutdown, you could optionally await the cancelled task to ensure it finishes before cleanup, but the current approach is acceptable.
services/python-agent/agent.py (5)
35-48: LGTM!

Retry configuration and demo settings are well-structured with sensible defaults. The boolean parsing for `DEMO_ENABLED` handles common truthy string values appropriately.
84-167: LGTM!

The demo content loading logic is well-structured with proper early returns, per-image error handling, and metadata generation for UI display. The defensive checks for missing files/directories are appropriate.
169-189: LGTM!

The method correctly guards against re-injection and handles the case where demo content doesn't exist by still setting the flag to prevent repeated load attempts.
627-635: LGTM!

The interrupt context injection is properly integrated into the agentic loop, allowing users to provide additional guidance during tool execution. The formatting clearly marks it as user-provided context for the LLM.
58-68: The model name `gemini-3-pro-preview` is valid and available as of January 2026. It was launched in November 2025 and remains in preview status across Google's developer surfaces. No action required.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
@@ -0,0 +1,3 @@
+Find funding of a YC 2025 winter startup at https://www.ycombinator.com/companies?batch=Winter%202025.
+
+Pick 3 startups from the YC 2025 batch, check how much they raised by doing a google search in a separate tab, and add their names and most recent raise to the google docs at https://docs.google.com/document/d/1IzFp0tiDhllwW_mdyCQKkvNY7uKFYNulge1A4gAvCYs/edit?tab=t.0 on a new line.
\ No newline at end of file
Fix capitalization of proper nouns.
"google search" should be "Google search" and "google docs" should be "Google Docs" as these are proper product names.
📝 Suggested fix
-Pick 3 startups from the YC 2025 batch, check how much they raised by doing a google search in a separate tab, and add their names and most recent raise to the google docs at https://docs.google.com/document/d/1IzFp0tiDhllwW_mdyCQKkvNY7uKFYNulge1A4gAvCYs/edit?tab=t.0 on a new line.
+Pick 3 startups from the YC 2025 batch, check how much they raised by doing a Google search in a separate tab, and add their names and most recent raise to the Google Docs at https://docs.google.com/document/d/1IzFp0tiDhllwW_mdyCQKkvNY7uKFYNulge1A4gAvCYs/edit?tab=t.0 on a new line.
{message.imageCount && (
  <span className="text-xs text-purple-300 mt-1 block">
    +{message.imageCount - 1} more images
  </span>
)}
Guard against displaying "+0 more images" when imageCount is 1.
When imageCount is 1, this will display "+0 more images" which isn't helpful. Consider showing this only when there are actually additional images.
📝 Suggested fix
- {message.imageCount && (
+ {message.imageCount && message.imageCount > 1 && (
    <span className="text-xs text-purple-300 mt-1 block">
      +{message.imageCount - 1} more images
    </span>
  )}
async def interrupt(self, context: str | None = None) -> None:
    """Signal the agent to pause and optionally inject context.

    Args:
        context: Optional text context to inject into the conversation.
            The agent will incorporate this at the next iteration.
    """
    self.is_interrupted = True
    if context:
        await self.interrupt_queue.put(context)
    logger.info(f"Interrupt signal received (with context: {context is not None})")

async def _check_interrupt(self) -> str | None:
    """Check if interrupted and return any pending context.

    Returns:
        The interrupt context string if available, None otherwise.
    """
    if not self.is_interrupted:
        return None

    try:
        context = self.interrupt_queue.get_nowait()
        return context
    except asyncio.QueueEmpty:
        return None
is_interrupted flag may not reset when interrupt has no context.
If interrupt() is called without context (just the signal), is_interrupted is set to True, but _check_interrupt() returns None. In process_message (lines 629-632), the flag is only reset when interrupt_context is not None. This leaves is_interrupted=True indefinitely when no context is provided.
Consider resetting is_interrupted in _check_interrupt() unconditionally:
🔧 Proposed fix
 async def _check_interrupt(self) -> str | None:
     """Check if interrupted and return any pending context.

     Returns:
         The interrupt context string if available, None otherwise.
     """
     if not self.is_interrupted:
         return None

+    self.is_interrupted = False  # Reset flag regardless of context

     try:
         context = self.interrupt_queue.get_nowait()
         return context
     except asyncio.QueueEmpty:
         return None

And remove the redundant reset in process_message:
 if interrupt_context is not None:
     logger.info(f"Injecting interrupt context: {interrupt_context[:100]}...")
     follow_up_parts.append(f"\n\n[USER INTERRUPT - Additional context from user]: {interrupt_context}")
-    self.is_interrupted = False  # Reset for next iteration
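The fixed semantics can be exercised with a tiny stub. `AgentStub` here is hypothetical and mirrors only the two methods under discussion, with the flag cleared unconditionally as the fix proposes:

```python
import asyncio

class AgentStub:
    """Minimal stand-in for the agent's interrupt plumbing (illustrative only)."""

    def __init__(self):
        self.is_interrupted = False
        self.interrupt_queue = asyncio.Queue()

    async def interrupt(self, context=None):
        self.is_interrupted = True
        if context:
            await self.interrupt_queue.put(context)

    async def _check_interrupt(self):
        if not self.is_interrupted:
            return None
        self.is_interrupted = False  # consume the signal even with no context
        try:
            return self.interrupt_queue.get_nowait()
        except asyncio.QueueEmpty:
            return None
```

A context-less interrupt now yields `None` once and leaves the flag cleared, while an interrupt with context still delivers its payload.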
In `@services/python-agent/agent.py` around lines 196 - 221, The is_interrupted
flag can remain true forever when interrupt() is called without context; update
_check_interrupt() to clear self.is_interrupted unconditionally before returning
(after checking/consuming self.interrupt_queue) so a plain interrupt signal is
consumed even if get_nowait() yields None, and then remove the redundant reset
of self.is_interrupted in process_message (so only _check_interrupt handles
clearing the flag); reference the methods interrupt, _check_interrupt, the
attribute is_interrupted and the interrupt_queue in your changes.
async def _process_and_respond(
    agent: BrowserAgent,
    websocket: WebSocket,
    user_content: str
) -> None:
    """Process a message and send the response. Runs as a background task."""
    try:
        # Check if we should inject demo content (first message only)
        if not agent.demo_injected:
            demo_metadata = await agent.inject_demo_content()
            if demo_metadata:
                await websocket.send_json({
                    "type": "memory_injected",
                    "metadata": demo_metadata
                })
                logger.info("Sent memory_injected message to client")

        # Send status update
        await websocket.send_json({
            "type": "status",
            "content": "thinking"
        })

        # Process with agent
        response = await agent.process_message(user_content)
        await websocket.send_json({
            "type": "response",
            "content": response
        })
    except Exception as e:
        logger.error(f"Agent error: {e}")
        await websocket.send_json({
            "type": "response",
            "content": f"Sorry, I encountered an error: {str(e)}"
        })
Handle CancelledError to avoid logging noise on interrupt.
When the task is cancelled (line 191), asyncio.CancelledError propagates through the coroutine. Note that on Python 3.8+ CancelledError derives from BaseException, so the generic except Exception block won't actually catch it; still, an explicit handler documents the cancellation path and guarantees no error response is attempted on a websocket that may already be closed.
🔧 Proposed fix
 async def _process_and_respond(
     agent: BrowserAgent,
     websocket: WebSocket,
     user_content: str
 ) -> None:
     """Process a message and send the response. Runs as a background task."""
     try:
         # Check if we should inject demo content (first message only)
         if not agent.demo_injected:
             demo_metadata = await agent.inject_demo_content()
             if demo_metadata:
                 await websocket.send_json({
                     "type": "memory_injected",
                     "metadata": demo_metadata
                 })
                 logger.info("Sent memory_injected message to client")

         # Send status update
         await websocket.send_json({
             "type": "status",
             "content": "thinking"
         })

         # Process with agent
         response = await agent.process_message(user_content)
         await websocket.send_json({
             "type": "response",
             "content": response
         })
+    except asyncio.CancelledError:
+        logger.info("Processing task was cancelled")
+        raise  # Re-raise to properly cancel the task
     except Exception as e:
         logger.error(f"Agent error: {e}")
-        await websocket.send_json({
-            "type": "response",
-            "content": f"Sorry, I encountered an error: {str(e)}"
-        })
+        try:
+            await websocket.send_json({
+                "type": "response",
+                "content": f"Sorry, I encountered an error: {str(e)}"
+            })
+        except Exception:
+            logger.debug("Failed to send error response to client")
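The cancellation behavior the fix relies on can be verified with a small runnable sketch, stdlib only and independent of the service code:

```python
import asyncio

async def worker(log):
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        log.append("cancelled")
        raise  # re-raising keeps the task in a properly-cancelled state
    except Exception:
        log.append("generic")  # not reached on 3.8+: CancelledError is a BaseException

async def main():
    log = []
    task = asyncio.create_task(worker(log))
    await asyncio.sleep(0)  # let the worker reach its await point
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return log, task.cancelled()

log, cancelled = asyncio.run(main())
```

The explicit `except asyncio.CancelledError` branch runs and, because it re-raises, `task.cancelled()` reports True, so the server-side cleanup at line 191 observes a cleanly cancelled task.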
…ry limits in Python agent
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@services/nextjs-webapp/src/components/ChatWindow.tsx`:
- Around line 372-376: The textarea stays expanded after setInput('') because
clearing the value doesn't trigger onChange to shrink the element; add a ref to
the textarea (e.g., inputRef) and, after clearing input in your send handler
(where you call setInput('')), explicitly reset the element height
(inputRef.current.style.height = 'auto' or a desired min height) or add a
useEffect that watches the input state and collapses the textarea when input ===
''. Also keep the existing onChange resizing logic (which should use
inputRef.current or e.target) so manual typing still auto-resizes.
♻️ Duplicate comments (2)
services/python-agent/agent.py (1)
196-221: Interrupt flag can remain stuck true when no context is provided.

Same issue as previously noted: `_check_interrupt()` returns `None` without clearing `is_interrupted` if the queue is empty, so a context-less interrupt can keep the flag set indefinitely. Consider resetting the flag unconditionally inside `_check_interrupt()` and removing the manual reset in `process_message`.

Also applies to: 627-633
services/nextjs-webapp/src/components/ChatWindow.tsx (1)
321-337: Guard against showing "+0 more images".

When `imageCount === 1`, the UI shows "+0 more images." This is confusing. Render only when there are actually additional images.

🧩 Suggested fix

- {message.imageCount && (
+ {message.imageCount && message.imageCount > 1 && (
    <span className="text-xs text-purple-300 mt-1 block">
      +{message.imageCount - 1} more images
    </span>
  )}
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
docker-compose.yml
services/nextjs-webapp/src/components/ChatWindow.tsx
services/python-agent/agent.py
🧰 Additional context used
📓 Path-based instructions (1)
services/python-agent/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
services/python-agent/**/*.py: Run Ruff linting and formatting tool after making changes to Python code in services/python-agent/ with command: `cd services/python-agent && uvx ruff check --fix .`
Run ty type checking tool after making changes to Python code in services/python-agent/ with command: `cd services/python-agent && uvx ty check .`
Files:
services/python-agent/agent.py
🧬 Code graph analysis (1)
services/python-agent/agent.py (2)
services/python-agent/mcp_client.py (1)
get_tools_for_llm (254-263)

services/python-agent/prompts.py (1)
build_system_prompt(34-83)
🔇 Additional comments (9)
docker-compose.yml (1)
46-53: Demo mount + DEMO_ENABLED wiring looks good.

Read-only demo volume and the env toggle are clear and safe.
services/python-agent/agent.py (4)
84-189: Demo content load/inject flow is clean and well-guarded.

Nice use of DEMO_ENABLED gating and metadata shaping for UI.
223-260: Retry wrapper with capped exponential backoff looks solid.
552-553: Switching message sends to the retry wrapper improves resiliency.

Also applies to: 635-636
1-8: Please run Ruff and Ty after these Python changes.

As per repo guidelines for `services/python-agent/**/*.py`, verify with:

cd services/python-agent && uvx ruff check --fix .
cd services/python-agent && uvx ty check .

services/nextjs-webapp/src/components/ChatWindow.tsx (4)
6-13: Message shape updates look consistent.

The new `memory` role and optional `thumbnail`/`imageCount` fields align cleanly with the rendering logic below.
72-101: Memory injection + interrupt status handling integrates cleanly.

Appending immediate memory messages and adding an interrupt status system message fits the flow well.
189-217: Confirm `isLoading` is cleared after an interrupt completes.

`isLoading` only resets on `data.type === 'response'`. If an interrupt ends the run without a response, the spinner may persist. Please verify the backend always sends a response (or an explicit "done" status) that clears loading in this path.
383-386: Send button disable logic matches the new interrupt flow.

Allowing sends while loading (when input is non-empty) lines up with the interrupt behavior.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
onChange={(e) => {
  setInput(e.target.value)
  e.target.style.height = 'auto'
  e.target.style.height = Math.min(e.target.scrollHeight, 120) + 'px'
}}
Textarea auto-resize doesn’t reset after send.
When setInput('') runs, the height stays expanded because no onChange fires. This leaves a tall empty box after sending multiline input.
🧩 Suggested fix
@@
-  const messagesEndRef = useRef<HTMLDivElement>(null)
+  const messagesEndRef = useRef<HTMLDivElement>(null)
+  const inputRef = useRef<HTMLTextAreaElement>(null)
@@
-  setInput('')
+  setInput('')
+  if (inputRef.current) inputRef.current.style.height = 'auto'
@@
-  <textarea
+  <textarea
+    ref={inputRef}
     value={input}
     onChange={(e) => {
       setInput(e.target.value)
       e.target.style.height = 'auto'
       e.target.style.height = Math.min(e.target.scrollHeight, 120) + 'px'
     }}
Summary by CodeRabbit
New Features
Improvements
Configuration
✏️ Tip: You can customize this high-level summary in your review settings.