Merged
78 changes: 78 additions & 0 deletions .ci-smoke/results.json
Original file line number Diff line number Diff line change
@@ -0,0 +1,78 @@
{
"greedy": {
"train": {
"episodes": 1,
"mean_return": -139.8,
"std_return": 0.0,
"asset_survival_rate": 0.0,
"containment_success_rate": 0.0,
"mean_final_burned_area": 557.0,
"mean_time_to_containment": 150.0,
"mean_resource_efficiency": 19.892857142857142,
"variance_across_episodes": 0.0,
"mean_normalized_burn_ratio": 0.911620294599018
},
"val": {
"episodes": 1,
"mean_return": -139.8,
"std_return": 0.0,
"asset_survival_rate": 0.0,
"containment_success_rate": 0.0,
"mean_final_burned_area": 557.0,
"mean_time_to_containment": 150.0,
"mean_resource_efficiency": 19.892857142857142,
"variance_across_episodes": 0.0,
"mean_normalized_burn_ratio": 0.911620294599018
},
"holdout": {
"episodes": 1,
"mean_return": -139.8,
"std_return": 0.0,
"asset_survival_rate": 0.0,
"containment_success_rate": 0.0,
"mean_final_burned_area": 557.0,
"mean_time_to_containment": 150.0,
"mean_resource_efficiency": 19.892857142857142,
"variance_across_episodes": 0.0,
"mean_normalized_burn_ratio": 0.911620294599018
}
},
"random": {
"train": {
"episodes": 1,
"mean_return": -297.3999999999999,
"std_return": 0.0,
"asset_survival_rate": 0.0,
"containment_success_rate": 0.0,
"mean_final_burned_area": 595.0,
"mean_time_to_containment": 150.0,
"mean_resource_efficiency": 21.25,
"variance_across_episodes": 0.0,
"mean_normalized_burn_ratio": 0.9738134206219312
},
"val": {
"episodes": 1,
"mean_return": -257.60000000000014,
"std_return": 0.0,
"asset_survival_rate": 0.0,
"containment_success_rate": 0.0,
"mean_final_burned_area": 581.0,
"mean_time_to_containment": 150.0,
"mean_resource_efficiency": 20.75,
"variance_across_episodes": 0.0,
"mean_normalized_burn_ratio": 0.9509001636661211
},
"holdout": {
"episodes": 1,
"mean_return": -266.00000000000006,
"std_return": 0.0,
"asset_survival_rate": 0.0,
"containment_success_rate": 0.0,
"mean_final_burned_area": 578.0,
"mean_time_to_containment": 150.0,
"mean_resource_efficiency": 20.642857142857142,
"variance_across_episodes": 0.0,
"mean_normalized_burn_ratio": 0.9459901800327333
}
}
}
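The committed smoke results above are nested `agent → split → metrics`. As a small illustrative sketch (the `return_gap` helper is hypothetical, not part of the repo), the difference in `mean_return` between the two baselines can be pulled out like this, using an inline excerpt with the same structure:

```python
def return_gap(results: dict, split: str) -> float:
    """Greedy-minus-random difference in mean_return for one split."""
    return results["greedy"][split]["mean_return"] - results["random"][split]["mean_return"]

# Inline excerpt mirroring the agent -> split -> metrics layout of results.json.
results = {
    "greedy": {"train": {"mean_return": -139.8}},
    "random": {"train": {"mean_return": -297.4}},
}
print(round(return_gap(results, "train"), 1))  # 157.6
```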
1 change: 1 addition & 0 deletions .ci-smoke/scenario_parameter_records_seeded_holdout.json
@@ -0,0 +1 @@
{"schema_version": 3, "split": "holdout", "record_count": 1, "records": [{"record_id": "ci-holdout", "base_spread_prob": 0.14, "severity_bucket": "medium", "wind_direction": "E", "wind_strength": 0.35, "ignition_seed": 101, "layout_seed": 202, "split": "holdout"}]}
1 change: 1 addition & 0 deletions .ci-smoke/scenario_parameter_records_seeded_train.json
@@ -0,0 +1 @@
{"schema_version": 3, "split": "train", "record_count": 1, "records": [{"record_id": "ci-train", "base_spread_prob": 0.14, "severity_bucket": "medium", "wind_direction": "E", "wind_strength": 0.35, "ignition_seed": 101, "layout_seed": 202, "split": "train"}]}
1 change: 1 addition & 0 deletions .ci-smoke/scenario_parameter_records_seeded_val.json
@@ -0,0 +1 @@
{"schema_version": 3, "split": "val", "record_count": 1, "records": [{"record_id": "ci-val", "base_spread_prob": 0.14, "severity_bucket": "medium", "wind_direction": "E", "wind_strength": 0.35, "ignition_seed": 101, "layout_seed": 202, "split": "val"}]}
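Each seeded split file above shares one schema (`schema_version: 3`). A hedged validation sketch — the `check_split_file` helper is illustrative, not a repo utility — that checks the fields the builder writes:

```python
import json

# Fields present in every seeded scenario record committed above.
REQUIRED = {"record_id", "base_spread_prob", "severity_bucket",
            "wind_direction", "wind_strength", "ignition_seed",
            "layout_seed", "split"}

def check_split_file(text: str) -> int:
    """Validate one seeded split payload and return its record count."""
    payload = json.loads(text)
    assert payload["schema_version"] == 3
    assert payload["record_count"] == len(payload["records"])
    for rec in payload["records"]:
        assert REQUIRED <= rec.keys()
        assert rec["split"] == payload["split"]
    return payload["record_count"]

# The val split file, verbatim.
sample = '{"schema_version": 3, "split": "val", "record_count": 1, "records": [{"record_id": "ci-val", "base_spread_prob": 0.14, "severity_bucket": "medium", "wind_direction": "E", "wind_strength": 0.35, "ignition_seed": 101, "layout_seed": 202, "split": "val"}]}'
print(check_split_file(sample))  # 1
```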
47 changes: 39 additions & 8 deletions .github/workflows/ci.yml
@@ -19,14 +19,45 @@ jobs:
- uses: actions/checkout@v4
- uses: astral-sh/setup-uv@v4
- run: uv sync
- name: Verify env imports and runs
- name: Build tiny seeded split datasets
run: |
uv run python -c "
from src.models.fire_env import WildfireEnv
env = WildfireEnv()
obs, _ = env.reset(seed=42)
assert obs.shape == (636,)
for _ in range(10):
obs, r, done, trunc, info = env.step(env.action_space.sample())
print('smoke test passed')
import json
from pathlib import Path

out = Path('.ci-smoke')
out.mkdir(exist_ok=True)

base = {
'record_id': 'ci-record',
'base_spread_prob': 0.14,
'severity_bucket': 'medium',
'wind_direction': 'E',
'wind_strength': 0.35,
'ignition_seed': 101,
'layout_seed': 202,
}

for split in ('train', 'val', 'holdout'):
payload = {
'schema_version': 3,
'split': split,
'record_count': 1,
'records': [{**base, 'record_id': f'ci-{split}', 'split': split}],
}
(out / f'scenario_parameter_records_seeded_{split}.json').write_text(
json.dumps(payload)
)

print('tiny seeded datasets ready')
"
- name: Tiny evaluator smoke test
run: |
uv run python -m src.models.evaluate_agents \
--agents greedy,random \
--episodes 1 \
--seeds 42 \
--train-dataset .ci-smoke/scenario_parameter_records_seeded_train.json \
--val-dataset .ci-smoke/scenario_parameter_records_seeded_val.json \
--holdout-dataset .ci-smoke/scenario_parameter_records_seeded_holdout.json \
--output .ci-smoke/results.json
15 changes: 10 additions & 5 deletions README.md
@@ -44,8 +44,11 @@ We build the static dataset at `src/ingestion/static_dataset.py`. The script:
- computes offline environment variables and writes `scenario_parameter_records.json` plus split files in `data/static`. The environment variables written are:
- `base_spread_prob`
- `severity_bucket`
- `wind_dir_deg`
- `wind_direction` (8-direction string)
- `wind_strength`
- `ignition_seed`
- `layout_seed`
- writes seeded benchmark variants (`scenario_parameter_records_seeded.json` and `scenario_parameter_records_seeded_{train|val|holdout}.json`) for reproducible initialization; the seeded holdout export currently contains a single unique held-out record.
- With the following extra fields stored:
- `spread_rate_1h_m`
- `spread_score`
@@ -93,6 +96,8 @@ We run this command to ingest our dataset (with a large cap to avoid split t
uv run python -m src.ingestion.static_dataset --target-count 50000 --raw-alberta-csv data/static/fp-historical-wildfire-data-2006-2025.csv
```

This command builds the dataset from the CSV and generates initialization seeds for ignition and asset layout in the corresponding environment. CFFDRS features were deliberately excluded to reduce confounding variables and to avoid bias from incomplete CFFDRS coverage for some specific fires.

If CFFDRS for the selected year is sparse, the builder still runs and writes records without supplementary CFFDRS enrichment.

Optionally, test with a smaller target count:
@@ -112,10 +117,10 @@ uv run python -m src.ingestion.static_dataset --fire-records path/to/fire_record
After building the dataset, you can train by running:

```bash
uv run python -m src.models.train_rl_agent --scenario-dataset data/static/scenario_parameter_records_train.json --val-dataset data/static/scenario_parameter_records_val.json --holdout-dataset data/static/scenario_parameter_records_holdout.json
uv run python -m src.models.train_rl_agent --scenario-dataset data/static/scenario_parameter_records_seeded_train.json --val-dataset data/static/scenario_parameter_records_seeded_val.json --holdout-dataset data/static/scenario_parameter_records_seeded_holdout.json
```

The scenario parameter file can then be consumed by `FireEnv` and PPO training.
The seeded scenario parameter files are the canonical benchmark inputs for `FireEnv` and PPO training.

The builder also writes year-based split files for the benchmark:

@@ -126,13 +131,13 @@ The builder also writes year-based split files for the benchmark:
Training command:

```bash
uv run python -m src.models.train_rl_agent --scenario-dataset data/static/scenario_parameter_records_train.json --val-dataset data/static/scenario_parameter_records_val.json --holdout-dataset data/static/scenario_parameter_records_holdout.json
uv run python -m src.models.train_rl_agent --scenario-dataset data/static/scenario_parameter_records_seeded_train.json --val-dataset data/static/scenario_parameter_records_seeded_val.json --holdout-dataset data/static/scenario_parameter_records_seeded_holdout.json
```

General split benchmark evaluation (PPO + baselines):

```bash
uv run python -m src.models.evaluate_agents --agents ppo,greedy,random --train-dataset data/static/scenario_parameter_records_train.json --val-dataset data/static/scenario_parameter_records_val.json --holdout-dataset data/static/scenario_parameter_records_holdout.json --episodes 20 --seeds 42,43,44
uv run python -m src.models.evaluate_agents --agents ppo,greedy,random --train-dataset data/static/scenario_parameter_records_seeded_train.json --val-dataset data/static/scenario_parameter_records_seeded_val.json --holdout-dataset data/static/scenario_parameter_records_seeded_holdout.json --episodes 20 --seeds 42,43,44
```
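The evaluator writes nested JSON of the same shape as `.ci-smoke/results.json`. As a hedged sketch (the `summarize` helper is hypothetical, not part of `evaluate_agents`), the per-agent, per-split returns in the `--output` file could be flattened for a quick comparison:

```python
def summarize(results: dict) -> list[tuple[str, str, float]]:
    """Flatten (agent, split, mean_return) rows from evaluator output."""
    rows = []
    for agent, splits in sorted(results.items()):
        for split, metrics in sorted(splits.items()):
            rows.append((agent, split, metrics["mean_return"]))
    return rows

# Tiny inline stand-in for the JSON written via --output.
demo = {
    "greedy": {"holdout": {"mean_return": -139.8}},
    "random": {"holdout": {"mean_return": -266.0}},
}
for agent, split, ret in summarize(demo):
    print(f"{agent:8s} {split:8s} {ret:8.1f}")
```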

The dataset builder prints cleaning/drop summaries to stdout and uses progress bars when `tqdm` is available.