Feature Request
Support a batch-processing API, as offered by the major providers, e.g.:
https://platform.claude.com/docs/en/build-with-claude/batch-processing
https://ai.google.dev/gemini-api/docs/batch-api
https://platform.openai.com/docs/guides/batch
Motivation
Batch APIs let you send asynchronous groups of requests at 50% lower cost, with a separate pool of significantly higher rate limits and a clear 24-hour turnaround time. They are ideal for processing jobs that don't require immediate responses.
Proposal
Nothing concrete as of now. I noticed there is already a JSONL decoder in rig/rig/rig-core/src/providers/anthropic/decoders/jsonl.rs (line 1 at a9be758), which might be a starting point. Its module comment already anticipates this feature:

//! JSONL is currently not used, it might be used when Anthropic batches beta feature is used.
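For illustration only, a minimal sketch of how batch requests could be serialized to the JSONL payload the batch endpoints expect. All type and function names here are hypothetical (not part of rig's API), and a real implementation would use serde_json rather than hand-built JSON strings:

```rust
/// Hypothetical batch entry: one line of the JSONL payload.
/// The field shape loosely follows Anthropic's Message Batches
/// request format (custom_id + params); this is a sketch, not rig's API.
struct BatchEntry {
    custom_id: String,
    model: String,
    prompt: String,
}

/// Escape a string for embedding inside a JSON string literal.
fn json_escape(s: &str) -> String {
    s.chars()
        .flat_map(|c| match c {
            '"' => vec!['\\', '"'],
            '\\' => vec!['\\', '\\'],
            '\n' => vec!['\\', 'n'],
            other => vec![other],
        })
        .collect()
}

/// Encode entries as JSONL: one JSON object per line, newline-separated.
/// serde_json::to_string would replace the format! call in practice.
fn to_jsonl(entries: &[BatchEntry]) -> String {
    entries
        .iter()
        .map(|e| {
            format!(
                r#"{{"custom_id":"{}","params":{{"model":"{}","max_tokens":1024,"messages":[{{"role":"user","content":"{}"}}]}}}}"#,
                json_escape(&e.custom_id),
                json_escape(&e.model),
                json_escape(&e.prompt)
            )
        })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let entries = vec![BatchEntry {
        custom_id: "req-1".into(),
        model: "claude-sonnet-4-5".into(),
        prompt: "Hello".into(),
    }];
    let jsonl = to_jsonl(&entries);
    println!("{jsonl}");
}
```

The inverse direction (streaming batch results back) is where the existing JSONL decoder would come in: each line of the results file is an independent JSON object that can be decoded as it arrives.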
Alternatives
No