⚡ Bolt: Use AsyncGroq for non-blocking LLM calls #73

Open
Adityasingh-8858 wants to merge 1 commit into main from bolt-async-groq-17164051398032665602

Conversation

@Adityasingh-8858
Collaborator

💡 What: Replaced the synchronous Groq client with AsyncGroq inside FastAPI endpoints. Added the required await statements for the chat.completions.create calls.
🎯 Why: In FastAPI, using synchronous blocking operations (like external HTTP requests to the Groq API) inside an async def endpoint prevents the main event loop from processing any other incoming requests. This leads to severe performance degradation under concurrent loads.
📊 Impact: Prevents event loop blocking during external AI network requests, which enables the backend to handle concurrent requests seamlessly and scale efficiently.
🔬 Measurement: Verify by running the backend test suite (cd backend && PYTHONPATH=. pytest -v -p anyio -p asyncio -o asyncio_mode=auto). All tests pass, confirming the async adaptation is functionally identical while no longer blocking the event loop. Additionally, load testing concurrent requests to /initiate-transfer would demonstrate concurrent throughput rather than sequential waiting.
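
For illustration, a minimal sketch of the pattern this PR adopts. The endpoint body, model string, and response handling here are placeholders, not the exact code in `backend/main.py`:

```python
# Minimal sketch, assuming a module-level client as in backend/main.py.
# The model name and the shape of the endpoint are illustrative only.
import os

from fastapi import FastAPI
from groq import AsyncGroq

GROQ_API_KEY = os.environ["GROQ_API_KEY"]

app = FastAPI()
groq_client = AsyncGroq(api_key=GROQ_API_KEY)

@app.post("/initiate-transfer")
async def initiate_transfer(prompt: str):
    # Awaiting the async client yields control back to the event loop
    # while the Groq HTTP request is in flight, so other requests
    # can be served concurrently.
    chat_completion = await groq_client.chat.completions.create(
        model="llama-3.1-8b-instant",
        messages=[{"role": "user", "content": prompt}],
    )
    return {"reply": chat_completion.choices[0].message.content}
```

With the synchronous `Groq` client, the same call would hold the event loop for the full round-trip to the Groq API; with `AsyncGroq` and `await`, the loop stays free during network I/O.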


PR created automatically by Jules for task 17164051398032665602 started by @Deepaksingh7238

Identified that using the synchronous `Groq` client inside FastAPI `async def` endpoints blocked the event loop. Switched to `AsyncGroq` and used `await` to maintain concurrency and scalability for the `/initiate-transfer` and `/ai-voice` endpoints. Added a performance optimization comment and logged the learning in `.jules/bolt.md`.

Co-authored-by: Deepaksingh7238 <110552872+Deepaksingh7238@users.noreply.github.com>
Copilot AI review requested due to automatic review settings April 18, 2026 15:56
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.


Copilot AI left a comment


Pull request overview

This PR updates the FastAPI backend to use Groq’s async client so LLM completion calls don’t block the event loop, improving concurrency under load.

Changes:

  • Switched Groq client import/initialization from Groq to AsyncGroq.
  • Added await to chat.completions.create(...) calls in async endpoints.
  • Added an internal Bolt note documenting the optimization.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File | Description
---- | -----------
backend/main.py | Converts Groq completion calls to non-blocking async calls within FastAPI endpoints.
.jules/bolt.md | Documents the rationale and action taken for the async Groq optimization.


Comment thread: backend/main.py
Comment on lines 503 to 507

      # Attempt real Groq call
      global groq_client
      if groq_client is None:
-         groq_client = Groq(api_key=GROQ_API_KEY)
+         groq_client = AsyncGroq(api_key=GROQ_API_KEY)


Copilot AI Apr 18, 2026


Lazy-initializing the global groq_client without synchronization can race under concurrent requests (multiple AsyncGroq instances may be created and one discarded without being closed). To avoid resource leaks and non-deterministic initialization, initialize the client once in lifespan or guard the initialization with an asyncio.Lock and ensure any extra created client is closed.
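For illustration, the lock-guarded variant could look like the sketch below. The names `get_groq_client` and `_client_lock` are hypothetical, not code from this PR, and `GROQ_API_KEY` is assumed to be defined at module level as in `backend/main.py`:

```python
# Sketch of lock-guarded lazy initialization to avoid creating
# multiple AsyncGroq instances under concurrent first requests.
import asyncio

from groq import AsyncGroq

groq_client: AsyncGroq | None = None
_client_lock = asyncio.Lock()

async def get_groq_client() -> AsyncGroq:
    global groq_client
    if groq_client is None:
        async with _client_lock:
            # Double-check inside the lock: a concurrent request may
            # have initialized the client while we were waiting.
            if groq_client is None:
                groq_client = AsyncGroq(api_key=GROQ_API_KEY)
    return groq_client
```

Endpoints would then call `await get_groq_client()` instead of touching the global directly, so at most one client is ever constructed.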

Comment thread: backend/main.py
Comment on lines 317 to 321

      global groq_client
      if groq_client is None:
-         groq_client = Groq(api_key=GROQ_API_KEY)
-     chat_completion = groq_client.chat.completions.create(
+         groq_client = AsyncGroq(api_key=GROQ_API_KEY)
+     # Optimization: Using AsyncGroq and await to prevent blocking the FastAPI event loop
+     chat_completion = await groq_client.chat.completions.create(

Copilot AI Apr 18, 2026


groq_client is a global AsyncGroq client but it’s never closed on shutdown. Async clients typically hold an underlying HTTP connection pool; leaving it open can leak sockets and can prevent clean shutdowns. Consider instantiating AsyncGroq during app startup (lifespan) and awaiting the SDK’s async close method during shutdown (or use an async context manager) instead of leaving a global client unclosed.
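A hedged sketch of the lifespan approach, assuming the SDK's async client exposes an awaitable `close()` (as stainless-generated async clients such as this one typically do); storing the client on `app.state` rather than in a module-level global is an illustrative choice, not what the PR currently does:

```python
# Sketch: create the client once at startup, close it at shutdown.
import os
from contextlib import asynccontextmanager

from fastapi import FastAPI
from groq import AsyncGroq

@asynccontextmanager
async def lifespan(app: FastAPI):
    app.state.groq_client = AsyncGroq(api_key=os.environ["GROQ_API_KEY"])
    try:
        yield
    finally:
        # Release the underlying HTTP connection pool cleanly.
        await app.state.groq_client.close()

app = FastAPI(lifespan=lifespan)
```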

