An agent-based travel demand model that replaces traditional discrete choice models (logit, nested logit) with LLM prompts. Each synthetic agent is given a persona and asked — via structured LLM calls — to choose a work/school location, generate activities, schedule them, pick destinations, and select travel modes. The result is a full set of daily trip chains that can be assigned to a road network.
- Python 3.12+
- uv package manager
- An API key for at least one supported LLM provider
This project uses LLMs to power agent decisions. It supports four providers:
- Gemini API (default)
- Anthropic API
- OpenAI API
- xAI Grok API
Set the API key for the provider you want to use:
```shell
# For Gemini (default)
export GEMINI_API_KEY=your_key_here

# For Anthropic
export ANTHROPIC_API_KEY=your_key_here

# For OpenAI
export OPENAI_API_KEY=your_key_here

# For xAI Grok
export XAI_API_KEY=your_key_here
```

To make this permanent, add the line to your shell config (e.g. `~/.bashrc` or `~/.zshrc`).
The provider is selected automatically based on the model name. Names starting with `claude` use Anthropic; names starting with `gpt-`, `o1`, or `o3` use OpenAI; names starting with `grok-` use xAI; everything else uses Gemini:
```python
from aibm import Agent

# Uses Gemini (default)
agent = Agent(name="Alice")

# Uses Anthropic
agent = Agent(name="Alice", model="claude-sonnet-4-20250514")

# Uses OpenAI
agent = Agent(name="Alice", model="gpt-4o")

# Uses xAI Grok
agent = Agent(name="Alice", model="grok-4-1")
```

```shell
# Install all dependencies (package + pipeline + dev tools)
uv sync --group pipeline

# Set your API key
export OPENAI_API_KEY=your_key_here

# Run the full pipeline (download data, synthesize population, simulate, assign)
uv run snakemake --cores 1 -s workflow/Snakefile
```

Install the package in editable mode with dev tools:

```shell
uv sync
```

Run tests:

```shell
uv run pytest
```

Run a script:

```shell
uv run python scripts/example.py
```

Rough estimates for simulating 200 households (~500 agents) on the Walcheren example model:
| Model | Approximate cost | Notes |
|-------|------------------|-------|
| gpt-4o-mini | ~$0.50–1.00 | Recommended for development |
| gemini-2.5-flash-lite | ~$0.20–0.30 | Good budget option |
| gpt-4o | ~$5–10 | Higher quality, much more expensive |
| claude-sonnet-4-20250514 | ~$5–10 | Similar to gpt-4o |
| claude-haiku | ~$3.60 | |
Costs depend on prompt complexity and the number of discretionary activities generated. The `n_households` setting in `workflow/config.yaml` controls sample size.
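As a rough rule of thumb, assuming cost scales approximately linearly with agent count (it also depends on the factors above), you can extrapolate from the table. The per-run figures below are illustrative midpoints of the ranges, not measured prices:

```python
# Back-of-envelope cost extrapolation from the table above.
# Assumes roughly linear scaling with agent count -- an approximation only.

REFERENCE_AGENTS = 500  # ~200 households on the Walcheren example

# Illustrative midpoints of the approximate cost ranges (USD per full run)
COST_PER_RUN = {
    "gpt-4o-mini": 0.75,
    "gemini-2.5-flash-lite": 0.25,
    "gpt-4o": 7.50,
}

def estimate_cost(model: str, n_agents: int) -> float:
    """Linearly scale the reference run cost to a different population size."""
    return COST_PER_RUN[model] * n_agents / REFERENCE_AGENTS

print(f"${estimate_cost('gpt-4o-mini', 5000):.2f}")  # $7.50 for 10x the population
```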
To work with the Jupyter notebooks, install the notebooks group:

```shell
uv sync --group notebooks
```

Launch JupyterLab:

```shell
uv run jupyter lab
```

The `notebooks/` directory contains hands-on explorations of the model components:
- `synthetic_population.ipynb` — manually build a small population of zones, households, and agents
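For a flavour of what such a manually built population can look like, here is a minimal sketch using plain dataclasses. The class names and fields are illustrative, not the package's actual API; see the notebook for the real classes:

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for the package's zone/household/agent types.
@dataclass
class Zone:
    name: str

@dataclass
class Agent:
    name: str
    age: int

@dataclass
class Household:
    home_zone: Zone
    members: list[Agent] = field(default_factory=list)

middelburg = Zone("Middelburg")
household = Household(
    home_zone=middelburg,
    members=[Agent("Alice", 34), Agent("Bob", 8)],
)
print(len(household.members))  # 2
```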
```shell
uv run ruff check src tests
uv run ruff format src tests
```

Activate pre-commit hooks (runs ruff automatically on every git commit):

```shell
uv run pre-commit install
```

```mermaid
flowchart TD
    A[Population Synthesis] --> B[Per Household]
    subgraph B[Per Household]
        direction TB
        subgraph C[Per Agent]
            C1[Persona Generation] --> C2[Zone Choice<br/>work / school]
            C2 --> C3[Activity Generation]
            C3 --> C4[Schedule Activities]
            C4 --> C5[Plan Discretionary]
        end
        C5 --> D[Joint Activities]
        D --> E[Escort Trips]
        E --> F[Vehicle Allocation]
        F --> G[Build Tours]
        G --> H[Mode Choice]
    end
    H --> I[Network Assignment]
```
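The per-agent chain in the diagram can be read as a sequence of decision steps, each feeding the next. A minimal sketch (function names and return values are hypothetical, not the package's API; in the real model each step is an LLM call):

```python
# Hypothetical sketch of the per-agent decision chain from the diagram.
# Each step enriches the agent's state dict; placeholders stand in for LLM calls.

def generate_persona(agent: dict) -> dict:
    return {**agent, "persona": f"{agent['name']}, full-time worker"}

def choose_zone(agent: dict) -> dict:
    return {**agent, "work_zone": "Middelburg"}

def generate_activities(agent: dict) -> dict:
    return {**agent, "activities": ["work", "shopping"]}

STEPS = [generate_persona, choose_zone, generate_activities]

def simulate_agent(agent: dict) -> dict:
    # Run the decision steps in order, threading the agent state through.
    for step in STEPS:
        agent = step(agent)
    return agent

result = simulate_agent({"name": "Alice"})
print(result["work_zone"])  # Middelburg
```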
The package is used to develop an example model for the Walcheren region in the Netherlands. Walcheren consists of municipalities Middelburg, Veere and Vlissingen.
- Demographic data for population synthesis from CBS Vierkantstatistieken. Place the zip in `data/raw/`.
Install the pipeline dependencies and run Snakemake:
```shell
uv sync --group pipeline
uv run snakemake --cores 1 -s workflow/Snakefile
```

The pipeline steps are:
- `download_boundaries` — fetch Walcheren municipality polygons from PDOK
- `filter_grid` — spatial-filter the CBS 100 m grid to Walcheren
- `clean` — handle anonymisation, remap age groups, derive household size distributions
- `build_specs` — convert cleaned data to `ZoneSpec` objects
- `synthesize` — generate the synthetic population
Output lands in `data/processed/walcheren_population.parquet`.
The pipeline runs a full cross-product of three independent dimensions. Expensive shared steps (network download, grid processing, population synthesis, skim matrices, POIs) run once and are reused. Only `simulate` and `assign_network` re-run per scenario.
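The scenario cross-product with provider opt-outs can be sketched in a few lines. The config dicts here are hypothetical in-memory stand-ins; the real pipeline reads these lists from `workflow/config.yaml` and the provider YAMLs:

```python
from itertools import product

# Hypothetical in-memory versions of the workflow config lists.
providers = {
    "gpt_4o_mini": {},                                     # no restriction
    "claude_haiku_4_5": {"only_iterations": ["baseline"]}, # opts out of variants
}
iterations = ["baseline", "my_variant"]
policies = ["baseline"]

# Every provider x iteration x policy combination, minus provider opt-outs.
scenarios = [
    f"{prov}__{it}__{pol}"
    for (prov, cfg), it, pol in product(providers.items(), iterations, policies)
    if it in cfg.get("only_iterations", iterations)
]
print(len(scenarios))  # 3: gpt_4o_mini runs both iterations, claude only baseline
```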
How it works:

| Dimension | Directory | Controls |
|-----------|-----------|----------|
| Provider | `workflow/providers/` | LLM model, API key, rate limits |
| Iteration | `workflow/iterations/` | Prompt variants and simulation settings |
| Policy | `workflow/policies/` | Network/infrastructure interventions |
The baseline scenario ships with the project and applies no overrides.
Active scenarios are controlled by three lists in `workflow/config.yaml`:

```yaml
providers:
  - gpt_4o_mini
  - claude_haiku_4_5
iterations:
  - baseline
policies:
  - baseline
```

The pipeline runs every valid combination. Providers can opt out of specific iterations via `only_iterations:` in their provider YAML.

Adding a scenario (e.g. to test a different model):
- Create `workflow/iterations/my_variant.yaml` (can override any `simulation:` key):

  ```yaml
  simulation:
    prompts:
      mode_choice:
        instructions: "..."
  ```

- Add `my_variant` to the `iterations:` list in `workflow/config.yaml`.
Policies model real-world transport interventions by overriding network or transit config. Any key from `workflow/config.yaml` can be overridden.

- Create `workflow/policies/my_policy.yaml`:

  ```yaml
  # Example: e-bike adoption raises cycling speed by 30 %
  network:
    bike_speed_kmh: 23.4
  ```

- Add `my_policy` to the `policies:` list in `workflow/config.yaml`.
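The 23.4 km/h in the example is consistent with an assumed baseline cycling speed of 18 km/h (a hypothetical default; check `workflow/config.yaml` for the real value):

```python
baseline_bike_speed_kmh = 18.0   # assumed baseline; verify against workflow/config.yaml
increase = 0.30                  # +30 % from e-bike adoption
print(f"{baseline_bike_speed_kmh * (1 + increase):.1f} km/h")  # 23.4 km/h
```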
The pipeline will automatically rebuild the affected skim matrices and re-run all provider×iteration combinations under the new policy.
The baseline policy ships with an empty YAML (no overrides) and must
always be present.
Visualise simulation results on an interactive map.
Prepare the data (converts pipeline parquet output to JSON for the browser):

```shell
# For a specific scenario (provider__iteration__policy)
uv run python webapp/prepare_data.py \
    --config workflow/config.yaml \
    --scenario gpt_4o_mini__baseline__baseline
```

The app is fully static — open `webapp/static/index.html` directly in your browser, or serve it with any static file server:

```shell
# Python's built-in server
cd webapp/static && python -m http.server 8000
```

Then open http://localhost:8000 in your browser.

To customise the app content, edit these two files:

- `webapp/static/content/about.md` — article shown in the "About this project" overlay
- `webapp/static/config.json` — GitHub and LinkedIn URLs shown as icon links in the sidebar
Deployment: The `webapp/static/` directory is deployed as-is to Cloudflare Pages.