Problem: Multiple concurrent workflows writing to a database can overwhelm the target system.
Solution: Use a dedicated batcher workflow to aggregate writes from many workflows into efficient bulk operations.
- Batch Aggregation: Central batcher collects write requests from concurrent workflows
- Time-Based Batching: Writes batched every 20 seconds (configurable)
- Acknowledgment: Each workflow receives confirmation when write completes
- Load Reduction: N individual DB calls → 1 batch operation
- Exactly-Once Processing: Request deduplication prevents duplicate writes
- Robust Continue-as-New: Proper workflow restart to prevent unbounded event history growth
Main Service (main-service/):
- Handles transactional business logic workflows
- Queue: `main-workflow-queue`
Batcher Service (batcher-service/):
- Aggregates and batches write operations
- Queue: `batcher-queue`
```
temporal-batching-demo/
├── README.md
├── pyproject.toml
├── uv.lock
│
├── main-service/
│   ├── workflow.py      # Transactional main workflow with timeout handling
│   ├── activities.py    # Business activities
│   ├── worker.py        # Main service worker
│   └── starter.py       # Start main workflows
│
└── batcher-service/
    ├── workflow.py      # Batcher workflow with proper continue-as-new
    ├── activities.py    # Batch write activities
    ├── worker.py        # Batcher worker
    └── starter.py       # Start batcher service with enhanced monitoring
```
1. Install dependencies:

   ```bash
   uv sync
   ```

2. Start Temporal:

   ```bash
   temporal server start-dev
   ```

3. Run workers (separate terminals):

   ```bash
   # Terminal 1
   uv run python batcher-service/worker.py

   # Terminal 2
   uv run python main-service/worker.py
   ```

4. Start services:

   ```bash
   # Terminal 3
   uv run python batcher-service/starter.py

   # Terminal 4
   uv run python main-service/starter.py --workflows 10
   ```

5. View results:
   - Console: batch write operations and health monitoring output
   - Web UI: http://localhost:8233
How it works:
- Main workflows start and process business logic
- Each signals the batcher with write requests (with request IDs for deduplication)
- Batcher collects requests for 20 seconds or until 100 requests
- Single batch write operation executes (printed to console)
- Batcher confirms completion to all workflows
- Main workflows complete successfully, or fail if write not confirmed within 2 minutes
- Batcher uses Temporal's recommendations for continue-as-new timing to maintain optimal performance
Transactional integrity:
- Main workflows fail if the write request cannot be submitted
- Main workflows fail if write confirmation not received within 2 minutes
- Main workflows fail if database write reports failure
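The two-minute confirmation window can be illustrated with a plain-asyncio analogue. In the actual workflow this role is played by a Temporal wait-with-timeout, not raw asyncio; the names below are illustrative:

```python
import asyncio

WRITE_CONFIRMATION_TIMEOUT_S = 120  # 2 minutes

async def await_write_confirmation(confirmed: asyncio.Event) -> None:
    """Fail the caller if the batcher never confirms the write in time."""
    try:
        await asyncio.wait_for(confirmed.wait(), timeout=WRITE_CONFIRMATION_TIMEOUT_S)
    except TimeoutError:
        raise RuntimeError("write not confirmed within 2 minutes") from None

async def demo() -> str:
    confirmed = asyncio.Event()
    confirmed.set()  # simulate the batcher's confirmation signal arriving
    await await_write_confirmation(confirmed)
    return "confirmed"

print(asyncio.run(demo()))  # confirmed
```

Raising (rather than retrying forever) is what lets the main workflow fail cleanly and surface the problem.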
Exactly-once processing:
- Request deduplication using deterministic request IDs
- Duplicate signals are safely ignored
- Request IDs removed after successful processing
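A minimal sketch of this dedup scheme, assuming a hash-based ID (the demo's actual ID derivation may differ):

```python
import hashlib

def request_id(workflow_id: str, payload: str) -> str:
    """Deterministic ID: the same logical write always hashes to the same ID."""
    return hashlib.sha256(f"{workflow_id}:{payload}".encode()).hexdigest()[:16]

class Deduplicator:
    def __init__(self) -> None:
        self._seen: set[str] = set()

    def accept(self, rid: str) -> bool:
        """Return True the first time an ID is seen; duplicates are ignored."""
        if rid in self._seen:
            return False
        self._seen.add(rid)
        return True

    def mark_processed(self, rid: str) -> None:
        """Drop the ID after its batch succeeds to keep memory bounded."""
        self._seen.discard(rid)

dedup = Deduplicator()
rid = request_id("order-wf-42", "INSERT order 42")
print(dedup.accept(rid))  # True  (first delivery)
print(dedup.accept(rid))  # False (redelivered signal safely ignored)
```

Because signal redelivery happens within a batch window, removing IDs after processing trades a small reopened-dedup window for bounded memory.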
Robust continue-as-new:
- Uses Temporal's `workflow.info().is_continue_as_new_suggested()` for optimal timing
- State cleanup and signal handler synchronization before continuing
- Bounded memory usage with smart deduplication cleanup
- Safety mechanisms prevent infinite continue-as-new loops
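The restart decision can be sketched as a pure function that combines Temporal's suggestion with the safety cap. In the real workflow `suggested` comes from `workflow.info().is_continue_as_new_suggested()`; the guard's behavior at the cap (stop restarting) is an assumption here, and the names are illustrative:

```python
MAX_CAN_CYCLES = 10  # safety limit from the configuration section

def should_continue_as_new(suggested: bool, pending: int, cycles: int) -> bool:
    """Restart only when Temporal suggests it, no writes are in flight,
    and the safety cap hasn't been exhausted."""
    if cycles >= MAX_CAN_CYCLES:
        return False  # refuse further restarts: prevents a runaway loop
    return suggested and pending == 0

print(should_continue_as_new(True, 0, 3))   # True
print(should_continue_as_new(True, 5, 3))   # False: drain pending writes first
print(should_continue_as_new(True, 0, 10))  # False: safety limit reached
```

Waiting for `pending == 0` is what allows state to be cleaned up and signal handlers to drain before the workflow restarts.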
Reliable confirmations:
- Comprehensive retry logic for signal delivery failures
- Individual confirmation failures don't affect other workflows
- Proper exception handling causes workflow failures rather than endless retries
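One way to keep a single failed confirmation from affecting the rest of the batch is a bounded per-recipient retry that reports failure instead of raising. A hedged sketch, not the demo's actual implementation:

```python
import time
from typing import Callable

def send_with_retry(send: Callable[[str], None], request_id: str,
                    attempts: int = 3, backoff_s: float = 0.0) -> bool:
    """Try to deliver one confirmation; give up after `attempts` failures.
    Returning False (instead of raising) keeps one bad delivery from
    failing the confirmations owed to other workflows."""
    for attempt in range(attempts):
        try:
            send(request_id)
            return True
        except Exception:
            if backoff_s:
                time.sleep(backoff_s * (attempt + 1))
    return False

calls = {"n": 0}
def flaky(_rid: str) -> None:
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")

print(send_with_retry(flaky, "req-1"))  # True, on the third attempt
```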
The starter script provides real-time monitoring:
- Pending write requests
- Processed batch count
- Session-level signal counts
- Active deduplication set size
- Continue-as-new cycle counter
- Temporal's continue-as-new suggestions
- Continue-as-new event detection and handle updates
Scaling options:
- Single Batcher: one batcher workflow processes all requests
- Shard Batchers: multiple batcher workflows with deterministic routing
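Deterministic routing for the sharded variant can be sketched as hashing the caller's workflow ID onto a fixed shard count, so retries and duplicate signals from the same workflow always reach the same batcher. Shard count and ID format here are illustrative assumptions:

```python
import hashlib

NUM_SHARDS = 4  # illustrative shard count

def batcher_for(workflow_id: str, num_shards: int = NUM_SHARDS) -> str:
    """Map a caller deterministically to one batcher workflow ID."""
    digest = hashlib.sha256(workflow_id.encode()).digest()
    shard = int.from_bytes(digest[:4], "big") % num_shards
    return f"batcher-workflow-{shard}"

print(batcher_for("order-wf-42"))
print(batcher_for("order-wf-42") == batcher_for("order-wf-42"))  # True: stable routing
```

A stable hash (rather than Python's randomized `hash()`) matters here: the route must survive worker restarts for deduplication to keep working.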
Known limitations:
- Hardcoded Configuration: batcher workflow ID and parameters are hardcoded
- Confirmation Delivery: No retry logic for confirmation signals back to caller workflows (TODO: add retry for production)
Configuration defaults:
- Batch Size: 100 requests (configurable)
- Batch Timeout: 20 seconds (configurable)
- Write Confirmation Timeout: 2 minutes (ensures transactional integrity)
- Continue-as-New: Driven by Temporal's built-in suggestions
- Continue-as-New Safety Limit: 10 cycles (prevents runaway loops)
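The defaults above could be gathered into one configuration object instead of being hardcoded, which would also address the limitation noted earlier. A sketch with illustrative field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BatcherConfig:
    """Defaults mirror the values listed above; field names are illustrative."""
    batch_size: int = 100                    # requests per bulk write
    batch_timeout_s: int = 20                # flush window
    write_confirmation_timeout_s: int = 120  # 2 minutes
    max_continue_as_new_cycles: int = 10     # safety limit

cfg = BatcherConfig()
print(cfg.batch_size, cfg.batch_timeout_s)  # 100 20
```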