Handle API rate limiting #57

Open

hi-rai wants to merge 1 commit into main from dev/himanshu/handle-api-rate-limiting
Conversation


@hi-rai hi-rai commented Mar 3, 2026

  1. Implemented retry with exponential backoff for 429, 502, and 503 HTTP statuses
  2. Increased the result upload batch size from 50 to 500
  3. Switched to the newer batch file upload API, which allows a maximum of 100 files and 100 MiB cumulative size per request. Also reduced the file upload concurrency from 10 to 3

This fixes https://github.com/Hypersequent/tms-issues/issues/2341
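For reference, a minimal sketch of the retry behavior in point 1 (exponential backoff with jitter, plus honoring a `Retry-After` header). The name `withHttpRetry` matches the utility added in this PR, but the internals and defaults shown here are illustrative, not the actual code in `src/api/utils.ts`:

```typescript
// Illustrative sketch only: modeled on the PR description (retries on
// 429/502/503 with exponential backoff, jitter, and Retry-After support);
// this is not the actual implementation in src/api/utils.ts.

interface MinimalResponse {
  status: number
  headers: { get(name: string): string | null }
}

type Fetcher = (url: string, init?: unknown) => Promise<MinimalResponse>

interface RetryOptions {
  maxRetries?: number          // retry attempts after the initial request
  baseDelayMs?: number         // starting point for the backoff window
  retryableStatuses?: number[] // HTTP statuses that trigger a retry
}

// Full-jitter backoff: a random delay in [0, baseDelayMs * 2^attempt)
export function backoffDelay(attempt: number, baseDelayMs: number): number {
  return Math.random() * baseDelayMs * 2 ** attempt
}

export function withHttpRetry(fetcher: Fetcher, opts: RetryOptions = {}): Fetcher {
  const {
    maxRetries = 3,
    baseDelayMs = 500,
    retryableStatuses = [429, 502, 503],
  } = opts

  return async (url, init) => {
    for (let attempt = 0; ; attempt++) {
      const res = await fetcher(url, init)
      if (!retryableStatuses.includes(res.status) || attempt >= maxRetries) {
        return res
      }
      // Prefer the server's Retry-After hint (in seconds) when present;
      // Number(null) and Number('garbage') both fail the > 0 check.
      const retryAfter = Number(res.headers.get('Retry-After'))
      const delayMs =
        retryAfter > 0 ? retryAfter * 1000 : backoffDelay(attempt, baseDelayMs)
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
}
```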

@hi-rai hi-rai self-assigned this Mar 3, 2026
@gemini-code-assist

Summary of Changes


This pull request significantly enhances the robustness and efficiency of API interactions, particularly for file and result uploads. By introducing an intelligent retry mechanism with exponential backoff, the system becomes more resilient to transient network issues and API rate limiting. Furthermore, the adoption of a new batch file upload API and an increased result upload batch size will lead to fewer API calls and faster processing of large data sets, ultimately improving overall performance and reliability.

Highlights

  • API Rate Limiting and Retries: Implemented an HTTP retry mechanism with exponential backoff for API calls encountering 429 (Too Many Requests), 502 (Bad Gateway), and 503 (Service Unavailable) status codes, enhancing system resilience.
  • Batch File Upload API Migration: Migrated to a newer batch file upload API endpoint, allowing multiple files to be uploaded in a single request, improving efficiency and reducing API call overhead.
  • Result Upload Batch Size Increase: Increased the batch size for result uploads from 50 to 500, which will lead to fewer API calls and faster processing of results.
  • Optimized File Upload Concurrency: Adjusted file upload concurrency from 10 to 3 concurrent batch uploads, aligning with the new batch upload API's capabilities and limits (max 100 files or 100 MiB per batch).
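The concurrency limit above can be sketched as a small worker-pool helper. `mapWithConcurrency` is a hypothetical name for illustration, not code from the PR; a limit of 3 would mirror `MAX_CONCURRENT_BATCH_UPLOADS`:

```typescript
// Sketch of concurrency-limited uploads, assuming a pool of `limit` workers
// that each pull the next item until the list is exhausted. Hypothetical
// helper; the PR's ResultUploader may implement this differently.
export async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length)
  let next = 0
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    async () => {
      while (next < items.length) {
        const i = next++ // claim the next index (single-threaded, so no race)
        results[i] = await fn(items[i])
      }
    },
  )
  await Promise.all(workers)
  return results // results keep the input order
}
```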
Changelog
  • src/api/file.ts
    • Refactored uploadFile to uploadFiles to support multiple file uploads in a single request.
    • Updated the API endpoint from /api/public/v0/file to /api/public/v0/file/batch.
    • Modified the response handling to process an array of uploaded files.
  • src/api/index.ts
    • Imported the new withHttpRetry utility.
    • Wrapped the API fetcher with withHttpRetry to enable automatic retries for all API calls.
  • src/api/utils.ts
    • Added a new withHttpRetry higher-order function that wraps a fetch function.
    • Implemented retry logic with exponential backoff, jitter, and handling of Retry-After headers.
    • Configured default retryable HTTP status codes (429, 502, 503).
  • src/tests/http-retry.spec.ts
    • Added a comprehensive suite of unit tests for the withHttpRetry function.
    • Tests cover successful responses, non-retryable errors, retries on specific status codes, max retries, Retry-After header handling, exponential backoff, and custom options.
  • src/tests/result-upload.spec.ts
    • Updated the mock server to correctly handle requests to the new /api/public/v0/file/batch endpoint.
    • Adjusted the setMaxResultsInRequest value in afterEach hook from 50 to 500.
    • Modified countFileUploadApiCalls to target the new batch upload endpoint.
    • Updated assertions in attachment upload tests to reflect batching (e.g., expect(numFileUploadCalls()).toBe(1) instead of 5 or 4).
  • src/utils/result-upload/ResultUploader.ts
    • Modified constants: MAX_CONCURRENT_FILE_UPLOADS replaced by MAX_CONCURRENT_BATCH_UPLOADS (set to 3), and new constants MAX_BATCH_SIZE_BYTES (100 MiB) and MAX_BATCH_FILE_COUNT (100) were added.
    • Increased MAX_RESULTS_IN_REQUEST from 50 to 500.
    • Implemented logic to group attachments into batches based on size and file count limits.
    • Changed the attachment upload process to use the new batch upload API and process batches concurrently.
    • Adjusted progress tracking and attachment mapping to handle batch results.
  • src/utils/result-upload/junitXmlParser.ts
    • Added Math.round() to the calculation of timeTaken to ensure integer milliseconds.
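The batching rule described for ResultUploader (close a batch before it would exceed 100 files or 100 MiB) can be sketched as a standalone helper. `groupIntoBatches` is an illustrative name, not the PR's actual code:

```typescript
// Illustrative sketch of the batching limits from this PR (100 files /
// 100 MiB cumulative per batch). Hypothetical helper, not the PR's code.
const MAX_BATCH_FILE_COUNT = 100
const MAX_BATCH_SIZE_BYTES = 100 * 1024 * 1024 // 100 MiB

export function groupIntoBatches<T>(
  items: T[],
  sizeOf: (item: T) => number,
): T[][] {
  const batches: T[][] = []
  let current: T[] = []
  let currentSize = 0
  for (const item of items) {
    const size = sizeOf(item)
    // Close the current batch when adding this item would exceed a limit
    if (
      current.length > 0 &&
      (currentSize + size > MAX_BATCH_SIZE_BYTES ||
        current.length >= MAX_BATCH_FILE_COUNT)
    ) {
      batches.push(current)
      current = []
      currentSize = 0
    }
    current.push(item)
    currentSize += size
  }
  if (current.length > 0) batches.push(current)
  return batches
}
```

Note that, as in the PR, an item larger than `MAX_BATCH_SIZE_BYTES` still lands in a batch of its own rather than being rejected client-side; the review comment below discusses this trade-off.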
@hi-rai hi-rai requested a review from AndrianBdn March 3, 2026 13:11

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new withHttpRetry utility function that wraps fetch to provide automatic retries with exponential backoff, jitter, and Retry-After header support for specific HTTP status codes (429, 502, 503). This retry mechanism is then integrated into the main API client.

Additionally, the file upload API has been refactored to support batch uploads, allowing multiple files to be sent in a single request to a new /file/batch endpoint. The ResultUploader now groups attachments into batches based on size and count limits before uploading them concurrently.

Corresponding test cases for the withHttpRetry utility and updates to existing result upload tests to reflect the new batching behavior were also added. A minor fix was included to round timeTaken to the nearest millisecond in the JUnit XML parser. The reviewer noted that the current batching logic in ResultUploader could still create a batch exceeding MAX_BATCH_SIZE_BYTES if a single attachment is larger than this limit, suggesting an explicit check for individual attachment sizes before batching to prevent API errors and provide clearer feedback.

Comment on lines +194 to +207
		for (const item of allAttachments) {
			const size = item.attachment.buffer!.byteLength
			if (
				currentBatch.length > 0 &&
				(currentBatchSize + size > MAX_BATCH_SIZE_BYTES ||
					currentBatch.length >= MAX_BATCH_FILE_COUNT)
			) {
				batches.push(currentBatch)
				currentBatch = []
				currentBatchSize = 0
			}
			currentBatch.push(item)
			currentBatchSize += size
		}


high

The current batching logic can create a batch that exceeds MAX_BATCH_SIZE_BYTES if a single attachment is larger than this limit. The check currentBatch.length > 0 allows any file to be added to an empty batch, regardless of its size. This could lead to API errors that are hard to debug.

It would be better to validate the size of each attachment individually before adding it to a batch and provide a clear error message if it's too large.

		for (const item of allAttachments) {
			const size = item.attachment.buffer!.byteLength

			if (size > MAX_BATCH_SIZE_BYTES) {
				const errorMsg =
					'Attachment "' +
					item.attachment.filename +
					'" (' +
					(size / 1024 / 1024).toFixed(2) +
					' MiB) is larger than the maximum batch size of ' +
					MAX_BATCH_SIZE_BYTES / 1024 / 1024 +
					' MiB.'
				if (this.args.force) {
					printError(errorMsg + ' Skipping file.')
					continue
				}
				printErrorThenExit(errorMsg)
			}

			if (
				currentBatch.length > 0 &&
				(currentBatchSize + size > MAX_BATCH_SIZE_BYTES ||
					currentBatch.length >= MAX_BATCH_FILE_COUNT)
			) {
				batches.push(currentBatch)
				currentBatch = []
				currentBatchSize = 0
			}
			currentBatch.push(item)
			currentBatchSize += size
		}

Contributor Author


This size limit is only used as a batching heuristic.
If a file is larger than that, we leave it to the server to reject. Currently the server doesn't allow files greater than 50 MB, but if we change that in the future, no changes would be required in the CLI tool.

@hi-rai hi-rai force-pushed the dev/himanshu/handle-api-rate-limiting branch from 8832450 to aa969c7 Compare March 3, 2026 13:14
@satvik007 satvik007 added this to the 26W10 milestone Mar 4, 2026