diff --git a/README.md b/README.md
index b30e566ed..a5833339a 100644
--- a/README.md
+++ b/README.md
@@ -43,7 +43,7 @@ VirtualShip is a command line simulator allowing students to plan and conduct a
- Surface drifters
- Argo float deployments
-
+Along the way, students will encounter realistic problems that may occur during an oceanographic expedition, requiring them to adapt their plans accordingly. Examples include delays due to equipment failures, pre-departure logistical issues, or safety drills.
## Installation
diff --git a/docs/user-guide/assignments/Sail_the_ship.ipynb b/docs/user-guide/assignments/Sail_the_ship.ipynb
index 2a2d85772..43342b0a8 100644
--- a/docs/user-guide/assignments/Sail_the_ship.ipynb
+++ b/docs/user-guide/assignments/Sail_the_ship.ipynb
@@ -89,7 +89,7 @@
"**Follow the instructions [here](https://virtualship.readthedocs.io/en/latest/user-guide/tutorials/surf_research_cloud_setup.html) to set up your SURF RC environment for VirtualShip.**\n",
"\n",
"
\n",
- "**NOTE**: If you have Anaconda installed on your local machine and would like to run VirtualShip locally instead of on SURF RC, please see [VirtualShip - Installation](https://virtualship.readthedocs.io/en/latest/#installation) for instructions.\n",
+ "**Note**: If you have Anaconda installed on your local machine and would like to run VirtualShip locally instead of on SURF RC, please see [VirtualShip - Installation](https://virtualship.readthedocs.io/en/latest/#installation) for instructions.\n",
"
"
]
},
@@ -113,7 +113,7 @@
"### Upload the coordinates to your virtual machine\n",
"\n",
" \n",
- "**IMPORTANT**: _If you have not done so already_, make sure you create a folder for your group's expedition data in the persistent storage on SURF RC (i.e. the `data/storage/` folder). You can do so by running `mkdir /data/storage/{your-group-name}` in Terminal, replacing `{your-group-name}` with your actual group name, or by using the \"New Folder\" button in the JupyterLab file explorer panel.\n",
+    "**Important**: _If you have not done so already_, make sure you create a folder for your group's expedition data in the persistent storage on SURF RC (i.e. the `/data/storage/` folder). You can do so by running `mkdir /data/storage/{your-group-name}` in Terminal, replacing `{your-group-name}` with your actual group name, or by using the \"New Folder\" button in the JupyterLab file explorer panel.\n",
"
\n",
"\n",
"Back in the SURF RC JupyterLab interface, use the **file explorer** on the left hand side to navigate to the directory where your group will be running your expedition (i.e. `data/storage/{your-group-name}`). \n",
@@ -130,7 +130,7 @@
"Open a Terminal window if you do not already have one open. Remember, this can be done from the Launcher tab by clicking on \"Terminal\" button under the \"Other\" section, or by going to the \"File\" menu --> \"New\" --> \"Terminal\".\n",
"\n",
" \n",
- "**IMPORTANT**: Once in Terminal, navigate to where you would like your expedition to be run on your (virtual) machine. You can do so by `cd /data/storage/{your-group-name}`, replacing `{your-group-name}` with your actual group name. This is where you will be working from for the rest of the session.\n",
+ "**Important**: Once in Terminal, navigate to where you would like your expedition to be run on your (virtual) machine. You can do so by `cd /data/storage/{your-group-name}`, replacing `{your-group-name}` with your actual group name. This is where you will be working from for the rest of the session.\n",
"
\n",
"\n",
"Now enter the following command in the Terminal (changing `EXPEDITION_NAME` to something more meaningful for your group's expedition):\n",
@@ -138,7 +138,7 @@
"`virtualship init EXPEDITION_NAME --from-mfp {CoordinatesExport}.xlsx`\n",
"\n",
" \n",
- "**TIP**: The `{CoordinatesExport}.xlsx` in the command above refers to the .xlsx file exported from MFP and uploaded to your virtual machine earlier. Replace the filename with the name of your .xlsx file.\n",
+ "**Tip**: The `{CoordinatesExport}.xlsx` in the command above refers to the .xlsx file exported from MFP and uploaded to your virtual machine earlier. Replace the filename with the name of your .xlsx file.\n",
"
\n",
"\n",
"This will create a folder/directory called `EXPEDITION_NAME` (or what you have changed this to) with a single file: `expedition.yaml`. This file contains details on the ship and instrument configurations, as well as the expedition schedule based on the sampling site coordinates that you specified in your MFP export. The `--from-mfp` flag indicates that the exported coordinates should be used."
@@ -151,7 +151,7 @@
"## 5) Expedition scheduling & ship configuration\n",
"\n",
" \n",
- "**TIP**: From here, you should replace any references to `EXPEDITION_NAME` with the actual name you used for your expedition when running any `virtualship` commands.\n",
+ "**Tip**: From here, you should replace any references to `EXPEDITION_NAME` with the actual name you used for your expedition when running any `virtualship` commands.\n",
"
\n",
"\n",
"The next step is to finalise the expedition schedule plan, including setting times and instrument selection choices for each waypoint, as well as configuring the ship (including any underway measurement instruments). \n",
@@ -172,7 +172,7 @@
"### Waypoint datetimes\n",
"\n",
" \n",
- "**NOTE**: VirtualShip supports running experiments in the years 1993 through to the present day by leveraging the suite of products available on the Copernicus Marine Data Store.\n",
+ "**Note**: VirtualShip supports running experiments in the years 1993 through to the present day by leveraging the suite of products available on the Copernicus Marine Data Store.\n",
"
\n",
"\n",
"You will need to enter dates and times for each of the sampling stations/waypoints selected in the MFP route planning stage. This can be done under _Schedule Editor_ > _Waypoints & Instrument Selection_ in the planning tool.\n",
@@ -180,11 +180,11 @@
"Each waypoint has its own sub-panel for parameter inputs (click on it to expand the selection options). Here, the time for each waypoint can be inputted. There is also an option to adjust the latitude/longitude coordinates and you can add or remove waypoints.\n",
"\n",
" \n",
- "**NOTE**: It is important to ensure that the timings for each station are realistic. There must be enough time for the ship to travel to each site at the prescribed speed (10 knots). The expedition schedule will be automatically verified when you press _Save Changes_ in the planning tool.\n",
+ "**Note**: It is important to ensure that the timings for each station are realistic. There must be enough time for the ship to travel to each site at the prescribed speed (10 knots). The expedition schedule will be automatically verified when you press _Save Changes_ in the planning tool.\n",
"
\n",
"\n",
" \n",
- "**TIP**: The MFP route planning tool will give estimated durations of sailing between sites at the 10 knots sailing speed. This can be useful to refer back to when planning the expedition timings and entering these into the `virtualship plan` tool.\n",
+ "**Tip**: The MFP route planning tool will give estimated durations of sailing between sites at the 10 knots sailing speed. This can be useful to refer back to when planning the expedition timings and entering these into the `virtualship plan` tool.\n",
"
\n",
"\n",
"### Instrument selection\n",
@@ -192,7 +192,7 @@
"You should now consider which measurements are to be taken at each sampling site (think about those required for your chosen research question), and therefore which instruments need to be selected in the planning tool at each waypoint.\n",
"\n",
" \n",
- "**TIP**: Click [here](https://virtualship.readthedocs.io/en/latest/user-guide/assignments/Research_proposal_intro.html#Measurement-Options) for more information on what measurement options are available, and a brief introduction to each instrument.\n",
+ "**Tip**: Click [here](https://virtualship.readthedocs.io/en/latest/user-guide/assignments/Research_proposal_intro.html#Measurement-Options) for more information on what measurement options are available, and a brief introduction to each instrument.\n",
"
\n",
"\n",
"You can make instrument selections for each waypoint in the same sub-panels as the [waypoint time](#waypoint-datetimes) selection by simply switching each on or off. Multiple instruments are allowed at each waypoint.\n",
@@ -203,11 +203,11 @@
"When you are happy with your ship configuration and schedule plan, press _Save Changes_ at the bottom of the planning tool.\n",
"\n",
" \n",
- "**NOTE**: On pressing _Save Changes_ the tool will check the selections are valid (for example that the ship will be able to reach each waypoint in time). If they are, the changes will be saved to the `expedition.yaml` file, ready for the next steps. If your selections are invalid you should be provided with information on how to fix them.\n",
+ "**Note**: On pressing _Save Changes_ the tool will check the selections are valid (for example that the ship will be able to reach each waypoint in time). If they are, the changes will be saved to the `expedition.yaml` file, ready for the next steps. If your selections are invalid you should be provided with information on how to fix them.\n",
"
\n",
"\n",
" \n",
- "**CAUTION**: The `virtualship plan` tool will check that the ship can reach each waypoint according to the prescribed ship speed (10 knots). However, before the ultimate simulation step (i.e. step 6 below) there will be a final, automated check that the schedule also accounts for the time taken to conduct the measurements at each site (e.g. a CTD cast in deeper waters will take longer). Therefore, we recommend to take this extra time into account at this stage of the planning by estimating how long each measurement will take and adding this time on.\n",
+    "**Caution**: The `virtualship plan` tool will check that the ship can reach each waypoint according to the prescribed ship speed (10 knots). However, before the simulation step itself (i.e. step 6 below) there will be a final, automated check that the schedule also accounts for the time taken to conduct the measurements at each site (e.g. a CTD cast in deeper waters will take longer). Therefore, we recommend taking this extra time into account at this stage of planning by estimating how long each measurement will take and adding it on.\n",
"
"
]
},
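The caution above suggests budgeting extra station time for measurements. As a rough, back-of-the-envelope illustration (the 1 m/s winch speed and the `ctd_cast_time` helper are assumptions for illustration only, not VirtualShip parameters):

```python
from datetime import timedelta

def ctd_cast_time(depth_m: float, winch_speed_mps: float = 1.0) -> timedelta:
    """Rough CTD cast duration: down-cast plus up-cast at the winch speed."""
    return timedelta(seconds=2 * depth_m / winch_speed_mps)

# A 2000 m cast takes roughly an hour; budget this on top of the sailing time.
print(ctd_cast_time(2000))  # 1:06:40
```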
@@ -220,7 +220,7 @@
"You are now ready to run your virtual expedition! This stage will take all the measurements for each of instruments you selected at each waypoint in your expedition schedule, using input data sourced from the [Copernicus Marine Data Store](https://data.marine.copernicus.eu/products).\n",
"\n",
" \n",
- "**NOTE**: You will need to register for a Copernicus Marine Service account (you can do so [here](https://data.marine.copernicus.eu/register)), if you have not done so already.\n",
+ "**Note**: You will need to register for a Copernicus Marine Service account (you can do so [here](https://data.marine.copernicus.eu/register)), if you have not done so already.\n",
"
\n",
"\n",
"You can run your expedition simulation using the command: \n",
@@ -232,7 +232,11 @@
"Small simulations (e.g. small space-time domains and fewer instrument deployments) will be relatively fast. For large, complex expeditions, it _could_ take up to an hour to simulate the measurements depending on your choices. Waiting for simulation is a great time to practice your level of patience. A skill much needed in oceanographic fieldwork ;-)\n",
"\n",
" \n",
- "**TIP**: Not using underway instruments will speed up the simulation time considerably. So, if you do not plan to use underway temperature/salinity or ADCP measurements, make sure to switch these off in the planning tool before running the expedition.\n",
+ "**Tip**: Not using underway instruments will speed up the simulation time considerably. So, if you do not plan to use underway temperature/salinity or ADCP measurements, make sure to switch these off in the planning tool before running the expedition.\n",
+ "
\n",
+ "\n",
+ " \n",
+ "**Important**: VirtualShip will encounter 'problems' during the expedition, which simulate the various challenges and unexpected events that can occur during real-life oceanographic expeditions (e.g. instrument and/or equipment failure, logistical challenges etc.). These may require your intervention to ensure your expedition schedule can continue!\n",
"
"
]
},
diff --git a/docs/user-guide/quickstart.md b/docs/user-guide/quickstart.md
index 2d927df0e..4ae6cca1c 100644
--- a/docs/user-guide/quickstart.md
+++ b/docs/user-guide/quickstart.md
@@ -4,7 +4,7 @@ Welcome to this Quickstart to using VirtualShip. In this guide we will conduct a
This Quickstart is available as an instructional video below, or you can continue with the step-by-step guide.
-```{warning}
+```{caution}
Please note the video below may show output from an earlier version of VirtualShip, as the codebase is in active development. The instructions in the video are still generally applicable for the current version. However, for the most up-to-date instructions, please follow the text in this Quickstart guide.
```
@@ -150,6 +150,16 @@ Small simulations (e.g. small space-time domains and fewer instrument deployment
Why not browse through previous real-life [blogs and expedition reports](https://virtualship.readthedocs.io/en/latest/user-guide/assignments/Sail_the_ship.html#Reporting) in the meantime?!
+#### Encountering 'problems' during the expedition (configurable)
+
+By default, VirtualShip will encounter 'problems' during the expedition, which simulate the various challenges and unexpected events that can occur during real-life oceanographic expeditions (e.g. instrument and/or equipment failure, logistical challenges etc.) and may require your intervention to ensure your expedition schedule can continue.
+
+The 'problems' add authenticity to the simulation. However, if you require a 'problem'-free expedition, you can run the simulation with the "problem level" (`prob-level`) set to 0 (i.e. `virtualship run EXPEDITION_NAME --prob-level 0`).
+
+```{tip}
+For maximum authenticity, you can set `--prob-level 2`, which will scale the number of problems encountered with the complexity of your expedition (longer duration, more waypoints, and more instruments will lead to more problems). By default, `prob-level` is 1, which limits the number of problems to a maximum of 2, regardless of the expedition complexity.
+```
+
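The level semantics described above could be sketched as follows (illustrative pseudologic only; `select_problem_count` and its complexity scaling are hypothetical, not the VirtualShip implementation):

```python
import random

def select_problem_count(prob_level: int, complexity: int) -> int:
    """Illustrative sketch of the documented prob-level semantics (not the real implementation)."""
    if prob_level == 0:
        return 0  # 'problem'-free expedition
    if prob_level == 1:
        return random.randint(1, 2)  # capped at 2, regardless of complexity
    # level 2: scale with expedition complexity (more waypoints/instruments -> more problems)
    return max(1, complexity // 3)
```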
#### Using pre-downloaded data (optional)
By default, VirtualShip will stream data 'on-the-fly' from the Copernicus Marine Data Store, meaning no prior data download is necessary. However, if you prefer to use pre-downloaded data instead (e.g. due to limited internet connection or wanting to use different input data) you can do so by running `virtualship run EXPEDITION_NAME --from-data `.
diff --git a/src/virtualship/cli/_run.py b/src/virtualship/cli/_run.py
index f07fbab27..8164dae98 100644
--- a/src/virtualship/cli/_run.py
+++ b/src/virtualship/cli/_run.py
@@ -4,29 +4,36 @@
import os
import shutil
import time
+from datetime import datetime
from pathlib import Path
import copernicusmarine
-import pyproj
from virtualship.expedition.simulate_schedule import (
MeasurementsToSimulate,
ScheduleProblem,
simulate_schedule,
)
-from virtualship.models import Schedule
-from virtualship.models.checkpoint import Checkpoint
+from virtualship.make_realistic.problems.simulator import ProblemSimulator
+from virtualship.models import Checkpoint, Schedule
+from virtualship.models.expedition import Expedition
from virtualship.utils import (
+ CACHE,
CHECKPOINT,
+ EXPEDITION,
+ EXPEDITION_IDENTIFIER,
+ EXPEDITION_LATEST,
+ PROBLEMS_ENCOUNTERED,
+ PROJECTION,
+ REPORT,
+ RESULTS,
+ SELECTED_PROBLEMS,
_get_expedition,
+ _save_checkpoint,
expedition_cost,
get_instrument_class,
)
-# projection used to sail between waypoints
-projection = pyproj.Geod(ellps="WGS84")
-
-
# parcels logger (suppress INFO messages to prevent log being flooded)
external_logger = logging.getLogger("parcels.tools.loggers")
external_logger.setLevel(logging.WARNING)
@@ -35,7 +42,9 @@
logging.getLogger("copernicusmarine").setLevel("ERROR")
-def _run(expedition_dir: str | Path, from_data: Path | None = None) -> None:
+def _run(
+ expedition_dir: str | Path, prob_level: int, from_data: Path | None = None
+) -> None:
"""
Perform an expedition, providing terminal feedback and file output.
@@ -50,8 +59,8 @@ def _run(expedition_dir: str | Path, from_data: Path | None = None) -> None:
print("╚═════════════════════════════════════════════════╝")
if from_data is None:
- # TODO: caution, if collaborative environments, will this mean everyone uses the same credentials file?
- # TODO: need to think about how to deal with this for when using collaborative environments AND streaming data via copernicusmarine
+    # TODO: caution, in collaborative environments (or on a shared machine), multiple users will share the same copernicusmarine credentials file
+    # TODO: handle this if/when collaborative environments (shared machine) stream data from the Copernicus Marine Service
COPERNICUS_CREDS_FILE = os.path.expandvars(
"$HOME/.copernicusmarine/.copernicusmarine-credentials"
)
@@ -73,7 +82,15 @@ def _run(expedition_dir: str | Path, from_data: Path | None = None) -> None:
expedition = _get_expedition(expedition_dir)
- # Verify instruments_config file is consistent with schedule
+    # unique id used to detect whether the expedition has 'changed' since the last run (avoids re-selecting problems when the user tweaks the schedule to deal with problems already encountered)
+ expedition_id = _unique_id(expedition, expedition_dir)
+
+ # dedicated problems directory for this expedition
+ problems_dir = expedition_dir.joinpath(
+ CACHE, PROBLEMS_ENCOUNTERED.format(expedition_id=expedition_id)
+ )
+
+ # verify instruments_config file is consistent with schedule
expedition.instruments_config.verify(expedition)
# load last checkpoint
@@ -81,8 +98,8 @@ def _run(expedition_dir: str | Path, from_data: Path | None = None) -> None:
if checkpoint is None:
checkpoint = Checkpoint(past_schedule=Schedule(waypoints=[]))
- # verify that schedule and checkpoint match
- checkpoint.verify(expedition.schedule)
+ # verify that schedule and checkpoint match, and that problems have been resolved
+ checkpoint.verify(expedition, problems_dir)
print("\n---- WAYPOINT VERIFICATION ----")
@@ -93,29 +110,28 @@ def _run(expedition_dir: str | Path, from_data: Path | None = None) -> None:
# simulate the schedule
schedule_results = simulate_schedule(
- projection=projection,
+ projection=PROJECTION,
expedition=expedition,
)
+
+    # handle cases where the user-defined schedule is infeasible (i.e. not enough time between waypoints; distinct from simulated 'problems')
if isinstance(schedule_results, ScheduleProblem):
print(
- f"SIMULATION PAUSED: update your schedule (`virtualship plan`) and continue the expedition by executing the `virtualship run` command again.\nCheckpoint has been saved to {expedition_dir.joinpath(CHECKPOINT)}."
+ f"Please update your schedule (`virtualship plan` or directly in {EXPEDITION}) and continue the expedition by executing the `virtualship run` command again.\nCheckpoint has been saved to {expedition_dir.joinpath(CHECKPOINT)}."
)
_save_checkpoint(
Checkpoint(
- past_schedule=Schedule(
- waypoints=expedition.schedule.waypoints[
- : schedule_results.failed_waypoint_i
- ]
- )
+ past_schedule=expedition.schedule,
+ failed_waypoint_i=schedule_results.failed_waypoint_i,
),
expedition_dir,
)
return
# delete and create results directory
- if os.path.exists(expedition_dir.joinpath("results")):
- shutil.rmtree(expedition_dir.joinpath("results"))
- os.makedirs(expedition_dir.joinpath("results"))
+ if os.path.exists(expedition_dir.joinpath(RESULTS)):
+ shutil.rmtree(expedition_dir.joinpath(RESULTS))
+ os.makedirs(expedition_dir.joinpath(RESULTS))
print("\n----- EXPEDITION SUMMARY ------")
@@ -124,32 +140,77 @@ def _run(expedition_dir: str | Path, from_data: Path | None = None) -> None:
print("\n--- MEASUREMENT SIMULATIONS ---")
- # simulate measurements
- print("\nSimulating measurements. This may take a while...\n")
-
+ # identify instruments in expedition
instruments_in_expedition = expedition.get_instruments()
- for itype in instruments_in_expedition:
- # get instrument class
- instrument_class = get_instrument_class(itype)
- if instrument_class is None:
- raise RuntimeError(f"No instrument class found for type {itype}.")
-
- # get measurements to simulate
- attr = MeasurementsToSimulate.get_attr_for_instrumenttype(itype)
- measurements = getattr(schedule_results.measurements_to_simulate, attr)
-
- # initialise instrument
- instrument = instrument_class(
- expedition=expedition,
- from_data=Path(from_data) if from_data is not None else None,
- )
+ # initialise problem simulator
+ problem_simulator = ProblemSimulator(expedition, expedition_dir)
- # execute simulation
- instrument.execute(
- measurements=measurements,
- out_path=expedition_dir.joinpath("results", f"{itype.name.lower()}.zarr"),
+    # reload previously encountered problems if they exist (i.e. the expedition is unchanged since the last run), else select new problems and cache them
+ if os.path.exists(problems_dir.joinpath(SELECTED_PROBLEMS)):
+ problems = problem_simulator.load_selected_problems(
+ problems_dir.joinpath(SELECTED_PROBLEMS)
+ )
+ else:
+ problems = problem_simulator.select_problems(
+ instruments_in_expedition, prob_level
)
+        if problems:
+            problem_simulator.cache_selected_problems(
+                problems, problems_dir.joinpath(SELECTED_PROBLEMS)
+            )
+
+ # simulate instrument measurements
+ print("\nSimulating measurements. This may take a while...\n")
+
+ for itype in instruments_in_expedition:
+ try:
+ # get instrument class
+ instrument_class = get_instrument_class(itype)
+ if instrument_class is None:
+ raise RuntimeError(f"No instrument class found for type {itype}.")
+
+ # execute problem simulations for this instrument type
+ if problems:
+                # announce the upcoming instrument unless the first problem is pre-departure
+                if not getattr(problems["problem_class"][0], "pre_departure", False):
+                    print(f"\033[4mUp next\033[0m: {itype.name} measurements...\n")
+
+ problem_simulator.execute(
+ problems,
+ instrument_type_validation=itype,
+ log_dir=problems_dir,
+ )
+
+ # get measurements to simulate
+ attr = MeasurementsToSimulate.get_attr_for_instrumenttype(itype)
+ measurements = getattr(schedule_results.measurements_to_simulate, attr)
+
+ # initialise instrument
+ instrument = instrument_class(
+ expedition=expedition,
+ from_data=Path(from_data) if from_data is not None else None,
+ )
+
+ # execute simulation
+ instrument.execute(
+ measurements=measurements,
+ out_path=expedition_dir.joinpath(RESULTS, f"{itype.name.lower()}.zarr"),
+ )
+ except Exception as e:
+ # clean up if unexpected error occurs
+ if os.path.exists(problems_dir):
+ shutil.rmtree(problems_dir)
+ if expedition_dir.joinpath(CHECKPOINT).exists():
+ os.remove(expedition_dir.joinpath(CHECKPOINT))
+
+ raise RuntimeError(
+ f"An unexpected error occurred while simulating measurements: {e}. Please report this issue, with a description and the traceback, "
+ "to the VirtualShip issue tracker at: https://github.com/OceanParcels/virtualship/issues"
+ ) from e
print("\nAll measurement simulations are complete.")
@@ -158,6 +219,20 @@ def _run(expedition_dir: str | Path, from_data: Path | None = None) -> None:
print(
f"Your measurements can be found in the '{expedition_dir}/results' directory."
)
+
+ if problems:
+ ProblemSimulator.post_expedition_report(
+ problems, expedition_dir.joinpath(RESULTS, REPORT)
+ )
+ print("\n----- RECORD OF PROBLEMS ENCOUNTERED ------")
+ print(
+ f"\nA post-expedition report of problems encountered during the expedition is saved in: {expedition_dir.joinpath(RESULTS, REPORT)}"
+ )
+
+ # delete checkpoint file (in case it interferes with any future re-runs)
+ if os.path.exists(expedition_dir.joinpath(CHECKPOINT)):
+ os.remove(expedition_dir.joinpath(CHECKPOINT))
+
print("\n------------- END -------------\n")
# end timing
@@ -166,6 +241,39 @@ def _run(expedition_dir: str | Path, from_data: Path | None = None) -> None:
print(f"[TIMER] Expedition completed in {elapsed / 60.0:.2f} minutes.")
+def _unique_id(expedition: Expedition, expedition_dir: Path) -> str:
+ """
+    Return a unique id for the expedition (a datetime stamp), used to determine whether the expedition has 'changed' since the last run.
+
+    Also write the id to file on the first run, or when instruments have been added since the last run.
+ """
+ current_instruments = expedition.get_instruments()
+ new_id = datetime.now().strftime("%Y%m%d%H%M%S")
+ previous_id = None
+
+ cache_dir = expedition_dir.joinpath(CACHE)
+ if not cache_dir.exists():
+ cache_dir.mkdir()
+ id_path = cache_dir.joinpath(EXPEDITION_IDENTIFIER)
+ last_expedition_path = cache_dir.joinpath(EXPEDITION_LATEST)
+
+ if id_path.exists():
+ previous_id = id_path.read_text().strip()
+ last_expedition = Expedition.from_yaml(last_expedition_path)
+ last_instruments = last_expedition.get_instruments()
+
+        added_instruments = set(current_instruments) - set(last_instruments)
+        if not added_instruments:
+            # no additions: keep the previous id so previously encountered problems are re-used
+            return previous_id
+
+    id_path.write_text(new_id)
+
+ return new_id
+
+
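The id-reuse decision in `_unique_id` reduces to a set comparison. A minimal standalone sketch of just that decision (`next_expedition_id` is a hypothetical name; file I/O omitted):

```python
from datetime import datetime

def next_expedition_id(current, previous, previous_id):
    """Keep the previous id unless instrument types were added since the last run."""
    added = set(current) - set(previous)
    if previous_id is not None and not added:
        return previous_id  # unchanged -> previously encountered problems are re-used
    return datetime.now().strftime("%Y%m%d%H%M%S")  # changed -> fresh problems next run

# Removing an instrument keeps the id; adding one mints a new timestamped id.
print(next_expedition_id(["CTD"], ["CTD", "ADCP"], "20240101000000"))  # 20240101000000
```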
def _load_checkpoint(expedition_dir: Path) -> Checkpoint | None:
file_path = expedition_dir.joinpath(CHECKPOINT)
try:
@@ -174,11 +282,6 @@ def _load_checkpoint(expedition_dir: Path) -> Checkpoint | None:
return None
-def _save_checkpoint(checkpoint: Checkpoint, expedition_dir: Path) -> None:
- file_path = expedition_dir.joinpath(CHECKPOINT)
- checkpoint.to_yaml(file_path)
-
-
def _write_expedition_cost(expedition, schedule_results, expedition_dir):
"""Calculate the expedition cost, write it to a file, and print summary."""
assert expedition.schedule.waypoints[0].time is not None, (
@@ -186,6 +289,6 @@ def _write_expedition_cost(expedition, schedule_results, expedition_dir):
)
time_past = schedule_results.time - expedition.schedule.waypoints[0].time
cost = expedition_cost(schedule_results, time_past)
- with open(expedition_dir.joinpath("results", "cost.txt"), "w") as file:
+ with open(expedition_dir.joinpath(RESULTS, "cost.txt"), "w") as file:
file.writelines(f"cost: {cost} US$")
print(f"\nExpedition duration: {time_past}\nExpedition cost: US$ {cost:,.0f}.")
diff --git a/src/virtualship/cli/commands.py b/src/virtualship/cli/commands.py
index f349dc6cf..be088ed5c 100644
--- a/src/virtualship/cli/commands.py
+++ b/src/virtualship/cli/commands.py
@@ -82,6 +82,17 @@ def plan(path):
"path",
type=click.Path(exists=True, file_okay=False, dir_okay=True, readable=True),
)
+@click.option(
+ "--prob-level",
+ type=click.IntRange(0, 2),
+ default=1,
+ help="Set the problem level for the expedition simulation [default = 1].\n\n"
+ "Level 0 = No problems encountered during the expedition.\n\n"
+ "Level 1 = 1-2 problems encountered.\n\n"
+ "Level 2 = 1 or more problems encountered, depending on expedition length and complexity, where longer and more complex expeditions will encounter more problems.\n\n"
+    "N.B.: If an expedition has already been run with problems encountered, changing `--prob-level` on a subsequent re-run will have no effect (previously encountered problems will be re-used). To select new problems (or to skip problems altogether), delete the 'problems_encountered' directory in the expedition directory before re-running with a new `--prob-level`.\n\n"
+ "Changing waypoint locations and/or instrument types will also result in new problems being selected on the next run.",
+)
@click.option(
"--from-data",
type=str,
@@ -92,6 +103,6 @@ def plan(path):
"Assumes that variable names at least contain the standard Copernicus Marine variable name as a substring. "
"Will also take the first file found containing the variable name substring. CAUTION if multiple files contain the same variable name substring.",
)
-def run(path, from_data):
+def run(path, prob_level, from_data):
"""Execute the expedition simulations."""
- _run(Path(path), from_data)
+ _run(Path(path), prob_level, from_data)
diff --git a/src/virtualship/expedition/simulate_schedule.py b/src/virtualship/expedition/simulate_schedule.py
index e450fcc7c..94fccbc2a 100644
--- a/src/virtualship/expedition/simulate_schedule.py
+++ b/src/virtualship/expedition/simulate_schedule.py
@@ -20,6 +20,7 @@
Spacetime,
Waypoint,
)
+from virtualship.utils import _calc_sail_time
@dataclass
@@ -115,6 +116,8 @@ def __init__(self, projection: pyproj.Geod, expedition: Expedition) -> None:
self._next_ship_underwater_st_time = self._time
def simulate(self) -> ScheduleOk | ScheduleProblem:
+ # TODO: instrument config mapping (as introduced in #269) should be helpful for refactoring here...
+
for wp_i, waypoint in enumerate(self._expedition.schedule.waypoints):
# sail towards waypoint
self._progress_time_traveling_towards(waypoint.location)
@@ -122,9 +125,9 @@ def simulate(self) -> ScheduleOk | ScheduleProblem:
# check if waypoint was reached in time
if waypoint.time is not None and self._time > waypoint.time:
print(
- f"Waypoint {wp_i + 1} could not be reached in time. Current time: {self._time}. Waypoint time: {waypoint.time}."
+ f"\nWaypoint {wp_i + 1} could not be reached in time. Current time: {self._time}. Waypoint time: {waypoint.time}."
"\n\nHave you ensured that your schedule includes sufficient time for taking measurements, e.g. CTD casts (in addition to the time it takes to sail between waypoints)?\n"
- "**Note**, the `virtualship plan` tool will not account for measurement times when verifying the schedule, only the time it takes to sail between waypoints.\n"
+ "\nHint: previous schedule verification checks (e.g. in the `virtualship plan` tool or after dealing with unexpected problems during the expedition) will not account for measurement times, only the time it takes to sail between waypoints.\n"
)
return ScheduleProblem(self._time, wp_i)
else:
@@ -140,22 +143,13 @@ def simulate(self) -> ScheduleOk | ScheduleProblem:
return ScheduleOk(self._time, self._measurements_to_simulate)
def _progress_time_traveling_towards(self, location: Location) -> None:
- geodinv: tuple[float, float, float] = self._projection.inv(
- lons1=self._location.lon,
- lats1=self._location.lat,
- lons2=location.lon,
- lats2=location.lat,
- )
- ship_speed_meter_per_second = (
- self._expedition.ship_config.ship_speed_knots * 1852 / 3600
- )
- azimuth1 = geodinv[0]
- distance_to_next_waypoint = geodinv[2]
- time_to_reach = timedelta(
- seconds=distance_to_next_waypoint / ship_speed_meter_per_second
+ time_to_reach, azimuth1, ship_speed_meter_per_second = _calc_sail_time(
+ self._location,
+ location,
+ self._expedition.ship_config.ship_speed_knots,
+ self._projection,
)
end_time = self._time + time_to_reach
-
# note all ADCP measurements
if self._expedition.instruments_config.adcp_config is not None:
location = self._location
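The `_calc_sail_time` helper consolidates the geodesy that was previously inlined above. The knots-to-speed conversion it relies on (visible in the removed lines) is a one-liner; a stdlib-only sketch of that piece (`sail_time` is an illustrative name; the real helper also returns the forward azimuth from `pyproj`):

```python
from datetime import timedelta

def sail_time(distance_m: float, ship_speed_knots: float) -> timedelta:
    # 1 knot = 1 nautical mile per hour = 1852 m / 3600 s
    speed_mps = ship_speed_knots * 1852 / 3600
    return timedelta(seconds=distance_m / speed_mps)

# ~111 km (about one degree of latitude) at 10 knots takes roughly 6 hours
print(sail_time(111_000, 10))
```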
diff --git a/src/virtualship/instruments/base.py b/src/virtualship/instruments/base.py
index 984e4abf5..2ca1b7836 100644
--- a/src/virtualship/instruments/base.py
+++ b/src/virtualship/instruments/base.py
@@ -67,7 +67,10 @@ def __init__(
)
self.wp_times = wp_times
- self.min_time, self.max_time = wp_times[0], wp_times[-1]
+ self.min_time, self.max_time = (
+ wp_times[0],
+ wp_times[-1] + timedelta(days=1),
+        )  # pad the end time by one day to avoid time-boundary edge issues
self.min_lat, self.max_lat = min(wp_lats), max(wp_lats)
self.min_lon, self.max_lon = min(wp_lons), max(wp_lons)
diff --git a/src/virtualship/make_realistic/problems/scenarios.py b/src/virtualship/make_realistic/problems/scenarios.py
new file mode 100644
index 000000000..7d7ec7d6b
--- /dev/null
+++ b/src/virtualship/make_realistic/problems/scenarios.py
@@ -0,0 +1,244 @@
+from __future__ import annotations
+
+from dataclasses import dataclass
+from datetime import timedelta
+
+from virtualship.instruments.types import InstrumentType
+
+# =====================================================
+# SECTION: Problem Classes
+# =====================================================
+
+
+@dataclass
+class GeneralProblem:
+    """Base class for general problems. Can occur pre-departure or during the expedition."""
+
+ short_name: str
+ message: str
+ delay_duration: timedelta
+ pre_departure: bool # True if problem occurs before expedition departure, False if during expedition
+
+    # TODO: could add an (abstract) method to check if problem is valid for given waypoint, e.g. location (tropical waters etc.)
+
+
+@dataclass
+class InstrumentProblem:
+ """Base class for instrument-specific problems. Cannot occur before expedition departure."""
+
+ short_name: str
+ message: str
+ delay_duration: timedelta
+ instrument_type: InstrumentType
+
+    # TODO: could add an (abstract) method to check if problem is valid for given waypoint, e.g. location
+
+
+# =====================================================
+# SECTION: General Problems
+# =====================================================
+
+GENERAL_PROBLEMS = [
+ GeneralProblem( # Problem: Scheduled food delivery is delayed.
+ short_name="scheduled_food_delivery_delayed",
+ message=(
+ "The scheduled food delivery prior to departure has not arrived. Until the supply truck reaches the pier, "
+ "we cannot leave. Once it arrives, unloading and stowing the provisions in the ship's cold storage "
+ "will also take additional time. These combined delays postpone departure by 5 hours."
+ ),
+ delay_duration=timedelta(hours=5.0),
+ pre_departure=True,
+ ),
+ GeneralProblem( # Problem: Sudden initiation of a mandatory safety drill.
+ short_name="safety_drill_initiated",
+ message=(
+ "A miscommunication with the ship's captain results in the sudden initiation of a mandatory safety drill. "
+ "The emergency vessel must be lowered and tested while the ship remains stationary, pausing all scientific "
+ "operations for the duration of the exercise. The drill introduces a delay of 2 hours."
+ ),
+ delay_duration=timedelta(hours=2.0),
+ pre_departure=False,
+ ),
+ GeneralProblem( # Problem: Fuel delivery tanker delayed.
+ short_name="fuel_delivery_tanker_delayed",
+ message=(
+ "The fuel tanker expected to deliver fuel has not arrived. Until the tanker reaches the pier, "
+ "we cannot leave. Once it arrives, securing the fuel lines in the ship's tanks and fueling operations "
+ "will also take additional time. These combined delays postpone departure by 5 hours."
+ ),
+ delay_duration=timedelta(hours=5.0),
+ pre_departure=True,
+ ),
+ GeneralProblem( # Problem: Marine mammals observed in deployment area.
+ short_name="marine_mammals_observed",
+ message=(
+ "A pod of dolphins is observed swimming directly beneath the planned deployment area. "
+ "To avoid risk to wildlife and comply with environmental protocols, all operations "
+ "must pause until the animals move away from the vicinity. This results in a delay of about 2 hours."
+ ),
+ delay_duration=timedelta(hours=2),
+ pre_departure=False,
+ ),
+ GeneralProblem( # Problem: Ballast pump failure during ballasting operations.
+ short_name="ballast_pump_failure",
+ message=(
+ "One of the ship's ballast pumps suddenly stops responding during routine ballasting operations. "
+ "Without the pump, the vessel cannot safely adjust trim or compensate for equipment movements on deck. "
+ "Engineering isolates the faulty pump and performs a rapid inspection. Temporary repairs allow limited "
+ "functionality, but the interruption causes a delay of 4 hours."
+ ),
+ delay_duration=timedelta(hours=4.0),
+ pre_departure=False,
+ ),
+ GeneralProblem( # Problem: Bow thruster's power converter fault during station-keeping.
+ short_name="bow_thruster_power_converter_fault",
+ message=(
+ "The bow thruster's power converter reports a fault during station-keeping operations. "
+ "Dynamic positioning becomes less stable, forcing a temporary suspension of high-precision sampling. "
+ "Engineers troubleshoot the converter and perform a reset, resulting in a delay of 4 hours."
+ ),
+ delay_duration=timedelta(hours=4.0),
+ pre_departure=False,
+ ),
+ GeneralProblem( # Problem: Hydraulic fluid leak from A-frame actuator.
+ short_name="hydraulic_fluid_leak_aframe_actuator",
+ message=(
+ "A crew member notices hydraulic fluid leaking from the A-frame actuator during equipment checks. "
+ "The leak must be isolated immediately to prevent environmental contamination or mechanical failure. "
+ "Engineering replaces a faulty hose and repressurizes the system. This repair causes a delay of about 6 hours."
+ ),
+ delay_duration=timedelta(hours=6.0),
+ pre_departure=False,
+ ),
+ GeneralProblem( # Problem: Main engine's cooling water intake blocked.
+ short_name="engine_cooling_intake_blocked",
+ message=(
+ "The main engine's cooling water intake alarms indicate reduced flow, likely caused by marine debris "
+ "or biological fouling. The vessel must temporarily slow down while engineering clears the obstruction "
+ "and flushes the intake. This results in a delay of 4 hours."
+ ),
+ delay_duration=timedelta(hours=4.0),
+ pre_departure=False,
+ ),
+]
+
+# TODO: draft problem below, but needs a method to adjust ETA based on reduced speed (future PR)
+# GeneralProblem(
+# short_name="engine_overheat_speed_reduction",
+# message=(
+# "One of the main engines has overheated. To prevent further damage, the engineering team orders a reduction "
+# "in vessel speed until the engine can be inspected and repaired in port. The ship will now operate at a "
+# "reduced cruising speed of 8.5 knots for the remainder of the transit."
+# )
+# delay_duration: None = None # speed reduction affects ETA instead of fixed delay
+# ship_speed_knots: float = 8.5
+# )
+
+
+# TODO: draft problem below, but needs a method to check if waypoint is in tropical waters (future PR)
+# GeneralProblem(
+# short_name="venomous_centipede_onboard",
+# message=(
+# "A venomous centipede is discovered onboard while operating in tropical waters. "
+# "One crew member becomes ill after contact with the creature and receives medical attention, "
+# "prompting a full search of the vessel to ensure no further danger. "
+# "The medical response and search efforts cause an operational delay of about 2 hours."
+# )
+# delay_duration: timedelta = timedelta(hours=2.0)
+# pre_departure: bool = False
+# )
+
+
+# =====================================================
+# SECTION: Instrument-specific Problems
+# =====================================================
+
+INSTRUMENT_PROBLEMS = [
+ InstrumentProblem( # Problem: CTD cable jammed in winch drum.
+ short_name="ctd_cable_jammed",
+ message=(
+ "During preparation for the next CTD cast, the CTD cable becomes jammed in the winch drum. "
+ "Attempts to free it are unsuccessful, and the crew determines that the entire cable must be "
+ "replaced before deployment can continue. This repair is time-consuming and results in a delay "
+ "of 5 hours."
+ ),
+ delay_duration=timedelta(hours=5.0),
+ instrument_type=InstrumentType.CTD,
+ ),
+ InstrumentProblem( # Problem: ADCP returns invalid data.
+ short_name="adcp_invalid_data",
+ message=(
+ "The hull-mounted ADCP begins returning invalid velocity data. Engineering suspects damage to the cable "
+ "from recent maintenance activities. The ship must hold position while a technician enters the cable "
+ "compartment to perform an inspection and continuity test. This diagnostic procedure results in a delay "
+ "of 2 hours."
+ ),
+ delay_duration=timedelta(hours=2.0),
+ instrument_type=InstrumentType.ADCP,
+ ),
+ InstrumentProblem( # Problem: CTD temperature sensor failure.
+ short_name="ctd_temperature_sensor_failure",
+ message=(
+ "The primary temperature sensor on the CTD begins returning inconsistent readings. "
+ "Troubleshooting confirms that the sensor has malfunctioned. A spare unit can be installed, "
+ "but integrating and verifying the replacement will pause operations. "
+ "This procedure leads to an estimated delay of 3 hours."
+ ),
+ delay_duration=timedelta(hours=3.0),
+ instrument_type=InstrumentType.CTD,
+ ),
+ InstrumentProblem( # Problem: CTD salinity sensor failure.
+ short_name="ctd_salinity_sensor_failure",
+ message=(
+ "The CTD's primary salinity sensor fails and must be replaced with a backup. After installation, "
+ "a mandatory calibration cast to a minimum depth of 1000 meters is required to verify sensor accuracy. "
+ "Both the replacement and calibration activities result in a total delay of roughly 4 hours."
+ ),
+ delay_duration=timedelta(hours=4.0),
+ instrument_type=InstrumentType.CTD,
+ ),
+ InstrumentProblem( # Problem: CTD winch hydraulic pressure drop.
+ short_name="ctd_winch_hydraulic_pressure_drop",
+ message=(
+ "The CTD winch begins to lose hydraulic pressure during routine checks prior to deployment. "
+ "The engineering crew must stop operations to diagnose the hydraulic pump and replenish or repair "
+ "the system. Until pressure is restored to operational levels, the winch cannot safely be used. "
+ "This results in an estimated delay of 2.5 hours."
+ ),
+ delay_duration=timedelta(hours=2.5),
+ instrument_type=InstrumentType.CTD,
+ ),
+ InstrumentProblem( # Problem: CTD rosette trigger failure.
+ short_name="ctd_rosette_trigger_failure",
+ message=(
+ "During a CTD cast, the rosette's bottle-triggering mechanism fails to actuate. "
+ "No discrete water samples can be collected during this cast. The rosette must be brought back "
+ "on deck for inspection and manual testing of the trigger system. This results in an operational "
+ "delay of 3.5 hours."
+ ),
+ delay_duration=timedelta(hours=3.5),
+ instrument_type=InstrumentType.CTD,
+ ),
+ InstrumentProblem( # Problem: Drifter fails to establish satellite connection before deployment.
+ short_name="drifter_satellite_connection_failure",
+ message=(
+ "The drifter scheduled for deployment fails to establish a satellite connection during "
+ "pre-launch checks. To improve signal acquisition, the float must be moved to a higher location on deck "
+ "with fewer obstructions. The team waits for the satellite connection to be established, resulting in a delay "
+ "of 2 hours."
+ ),
+ delay_duration=timedelta(hours=2.0),
+ instrument_type=InstrumentType.DRIFTER,
+ ),
+ InstrumentProblem( # Problem: Argo float fails to establish satellite connection before deployment.
+ short_name="argo_float_satellite_connection_failure",
+ message=(
+ "The Argo float scheduled for deployment fails to establish a satellite connection during "
+ "pre-launch checks. To improve signal acquisition, the float must be moved to a higher location on deck "
+ "with fewer obstructions. The team waits for the satellite connection to be established, resulting in a delay "
+ "of 2 hours."
+ ),
+ delay_duration=timedelta(hours=2.0),
+ instrument_type=InstrumentType.ARGO_FLOAT,
+ ),
+]
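The registry pattern above — flat lists of problem definitions keyed by `short_name` — makes it cheap to filter problems by the instruments actually on board, which is exactly what `ProblemSimulator.select_problems` does. A simplified standalone sketch (the `Problem` dataclass and string instrument names stand in for the real `InstrumentProblem`/`InstrumentType`):

```python
from dataclasses import dataclass
from datetime import timedelta


@dataclass(frozen=True)
class Problem:
    short_name: str
    delay_duration: timedelta
    instrument_type: str  # stand-in for the InstrumentType enum


REGISTRY = [
    Problem("ctd_cable_jammed", timedelta(hours=5), "CTD"),
    Problem("adcp_invalid_data", timedelta(hours=2), "ADCP"),
    Problem("drifter_satellite_connection_failure", timedelta(hours=2), "DRIFTER"),
]


def valid_for(instruments_on_board):
    # mirrors the comprehension over INSTRUMENT_PROBLEMS in select_problems
    return [p for p in REGISTRY if p.instrument_type in instruments_on_board]


names = [p.short_name for p in valid_for({"CTD", "ADCP"})]
# -> ["ctd_cable_jammed", "adcp_invalid_data"]
```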
diff --git a/src/virtualship/make_realistic/problems/simulator.py b/src/virtualship/make_realistic/problems/simulator.py
new file mode 100644
index 000000000..26e33bc72
--- /dev/null
+++ b/src/virtualship/make_realistic/problems/simulator.py
@@ -0,0 +1,529 @@
+from __future__ import annotations
+
+import json
+import os
+import random
+import sys
+import time
+from pathlib import Path
+from typing import TYPE_CHECKING
+
+from rich import box
+from rich.console import Console
+from rich.live import Live
+from rich.spinner import Spinner
+from rich.table import Table
+from yaspin import yaspin
+
+from virtualship.instruments.types import InstrumentType
+from virtualship.make_realistic.problems.scenarios import (
+ GENERAL_PROBLEMS,
+ INSTRUMENT_PROBLEMS,
+ GeneralProblem,
+ InstrumentProblem,
+)
+from virtualship.models.checkpoint import Checkpoint
+from virtualship.utils import (
+ CACHE,
+ EXPEDITION,
+ EXPEDITION_LATEST,
+ EXPEDITION_ORIGINAL,
+ PROJECTION,
+ _calc_sail_time,
+ _calc_wp_stationkeeping_time,
+ _make_hash,
+ _save_checkpoint,
+)
+
+if TYPE_CHECKING:
+ from virtualship.models.expedition import Expedition
+
+LOG_MESSAGING = {
+ "pre_departure": "Hang on! There could be a pre-departure problem in-port...",
+ "during_expedition": "Oh no, a problem has occurred during the expedition, at waypoint {waypoint}...!",
+ "schedule_problems": "This problem will cause a delay of {delay_duration} hours {problem_wp}. The next waypoint therefore cannot be reached in time. Please account for this in your schedule (`virtualship plan` or directly in {expedition_yaml}), then continue the expedition by executing the `virtualship run` command again.\n",
+ "problem_avoided": "Phew! You had enough contingency time scheduled to avoid delays from this problem.\n",
+}
+
+
+# default problem weights for problems simulator (i.e. add +1 problem for every n days/waypoints/instruments in expedition)
+PROBLEM_WEIGHTS = {
+ "every_ndays": 7,
+ "every_nwaypoints": 6,
+ "every_ninstruments": 3,
+}
+
+
+class ProblemSimulator:
+ """Handle problem simulation during expedition."""
+
+ def __init__(self, expedition: Expedition, expedition_dir: str | Path):
+        """Initialise ProblemSimulator with an expedition and its directory."""
+ self.expedition = expedition
+ self.expedition_dir = Path(expedition_dir)
+
+ def select_problems(
+ self,
+ instruments_in_expedition: set[InstrumentType],
+ prob_level: int,
+ ) -> dict[str, list[GeneralProblem | InstrumentProblem] | None] | None:
+ """
+        Select problems (general and instrument-specific). When prob_level = 2, the number of problems is determined by expedition length, instrument count, etc.
+
+ If only one waypoint, return just a pre-departure problem.
+
+ Map each selected problem to a random waypoint (or None if pre-departure). Finally, cache the suite of problems to a directory (expedition-specific via hash) for reference.
+ """
+ valid_instrument_problems = [
+ problem
+ for problem in INSTRUMENT_PROBLEMS
+ if problem.instrument_type in instruments_in_expedition
+ ]
+
+ pre_departure_problems = [
+ p
+ for p in GENERAL_PROBLEMS
+ if isinstance(p, GeneralProblem) and p.pre_departure
+ ]
+
+ num_waypoints = len(self.expedition.schedule.waypoints)
+ num_instruments = len(instruments_in_expedition)
+ expedition_duration_days = (
+ self.expedition.schedule.waypoints[-1].time
+ - self.expedition.schedule.waypoints[0].time
+ ).days
+
+ # if only one waypoint, return just a pre-departure problem
+ if num_waypoints < 2:
+ return {
+ "problem_class": [random.choice(pre_departure_problems)],
+ "waypoint_i": [None],
+ }
+
+ if prob_level == 0:
+ num_problems = 0
+ elif prob_level == 1:
+ num_problems = random.randint(1, 2)
+
+ elif prob_level == 2:
+ base = 1
+ extra = ( # i.e. +1 problem for every n days/waypoints/instruments (tunable above)
+ (expedition_duration_days // PROBLEM_WEIGHTS["every_ndays"])
+ + (num_waypoints // PROBLEM_WEIGHTS["every_nwaypoints"])
+ + (num_instruments // PROBLEM_WEIGHTS["every_ninstruments"])
+ )
+ num_problems = base + extra
+ num_problems = min(
+ num_problems, len(GENERAL_PROBLEMS) + len(valid_instrument_problems)
+ )
+        else:
+            raise ValueError(f"Invalid prob_level: {prob_level}. Expected 0, 1, or 2.")
+
+ selected_problems = []
+ problems_sorted = None
+        if num_problems > 0:
+            # shuffle a copy so the module-level GENERAL_PROBLEMS list is not mutated
+            general_pool = list(GENERAL_PROBLEMS)
+            random.shuffle(general_pool)
+            random.shuffle(valid_instrument_problems)
+
+            # bias towards more instrument problems when there are more instruments
+            instrument_bias = min(0.7, num_instruments / (num_instruments + 2))
+            n_instrument = round(num_problems * instrument_bias)
+            n_general = min(len(general_pool), num_problems - n_instrument)
+            n_instrument = (
+                num_problems - n_general
+            )  # recalc in case n_general was capped to len(general_pool)
+
+            selected_problems.extend(general_pool[:n_general])
+            selected_problems.extend(valid_instrument_problems[:n_instrument])
+
+ # allow only one pre-departure problem to occur; replace any extras with non-pre-departure problems
+ selected_pre_departure = [
+ p
+ for p in selected_problems
+ if isinstance(p, GeneralProblem) and p.pre_departure
+ ]
+ if len(selected_pre_departure) > 1:
+ to_keep = random.choice(selected_pre_departure)
+ num_to_replace = len(selected_pre_departure) - 1
+ # remove all but one pre_departure problem
+ selected_problems = [
+ problem
+ for problem in selected_problems
+ if not (
+ isinstance(problem, GeneralProblem)
+ and problem.pre_departure
+ and problem is not to_keep
+ )
+ ]
+ # available non-pre_departure problems not already selected
+ available_general = [
+ p
+ for p in GENERAL_PROBLEMS
+ if not p.pre_departure and p not in selected_problems
+ ]
+ available_instrument = [
+ p for p in valid_instrument_problems if p not in selected_problems
+ ]
+ available_replacements = available_general + available_instrument
+ random.shuffle(available_replacements)
+ selected_problems.extend(available_replacements[:num_to_replace])
+
+ # map each problem to a [random] waypoint (or None if pre-departure)
+ # limited to one per waypoint, else complicates scheduling and contingency checking
+ waypoint_idxs = []
+ unassigned_problems = []
+ available_idxs = list(
+ range(len(self.expedition.schedule.waypoints) - 1)
+ ) # exclude last waypoint (problem there would have no impact on scheduling)
+
+ # TODO: if incorporate departure and arrival port/waypoints in future, bear in mind index selection here may need to change
+ for problem in selected_problems:
+ if getattr(problem, "pre_departure", False):
+ waypoint_idxs.append(None)
+ else:
+ if available_idxs:
+ wp_select = random.choice(available_idxs)
+ waypoint_idxs.append(wp_select)
+ available_idxs.remove(wp_select) # each waypoint only used once
+ else:
+ unassigned_problems.append(
+ problem
+ ) # if run out of available waypoints, remove problem from selection
+
+ # remove any problems that couldn't be assigned a waypoint (i.e. if more problems than available waypoints)
+ if unassigned_problems:
+ selected_problems = [
+ p for p in selected_problems if p not in unassigned_problems
+ ]
+
+ # pair problems with their waypoint indices and sort by waypoint index (pre-departure first)
+ paired = sorted(
+ zip(selected_problems, waypoint_idxs, strict=True),
+ key=lambda x: (x[1] is not None, x[1] if x[1] is not None else -1),
+ )
+ problems_sorted = {
+ "problem_class": [p for p, _ in paired],
+ "waypoint_i": [w for _, w in paired],
+ }
+
+ return problems_sorted if selected_problems else None
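The `prob_level = 2` sizing heuristic in `select_problems` can be checked in isolation. A sketch with the defaults copied from `PROBLEM_WEIGHTS` (the `pool_size` argument stands in for `len(GENERAL_PROBLEMS) + len(valid_instrument_problems)`):

```python
def num_problems_level2(
    duration_days,
    n_waypoints,
    n_instruments,
    pool_size,
    every_ndays=7,
    every_nwaypoints=6,
    every_ninstruments=3,
):
    """One base problem, +1 for every n days / waypoints / instruments, capped at the pool size."""
    extra = (
        duration_days // every_ndays
        + n_waypoints // every_nwaypoints
        + n_instruments // every_ninstruments
    )
    return min(1 + extra, pool_size)


# a 14-day, 12-waypoint, 3-instrument expedition: 1 + 2 + 2 + 1 = 6 problems
n = num_problems_level2(14, 12, 3, pool_size=16)
```

The cap matters because each problem consumes a distinct waypoint and a distinct entry from the problem registries, so the count can never exceed what is available.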
+
+ def execute(
+ self,
+ problems: dict[str, list[GeneralProblem | InstrumentProblem] | None],
+ instrument_type_validation: InstrumentType | None,
+ log_dir: Path,
+ log_delay: float = 4.0,
+ ):
+ """
+ Execute the selected problems, returning messaging and delay times.
+
+        N.B. a problem_waypoint_i is different from the failed_waypoint_i defined in the Checkpoint class; failed_waypoint_i is the waypoint index after the problem_waypoint_i where the problem occurred, as this is when scheduling issues would be encountered.
+ """
+        # TODO: when prob_level = 2 and general problems occur at later waypoints: could artificially delay their propagation until later in the simulation? Otherwise they are front-loaded at the start of the simulation... Instrument problems are fine because they only propagate when the instrument is simulated...
+
+ for problem, problem_waypoint_i in zip(
+ problems["problem_class"], problems["waypoint_i"], strict=True
+ ):
+ # skip if instrument problem but `p.instrument_type` does not match `instrument_type_validation` (i.e. the current instrument being simulated in the expedition, e.g. from _run.py)
+ if (
+ isinstance(problem, InstrumentProblem)
+ and problem.instrument_type is not instrument_type_validation
+ ):
+ continue
+
+ problem_hash = _make_hash(problem.message + str(problem_waypoint_i), 8)
+ hash_fpath = log_dir.joinpath(f"problem_{problem_hash}.json")
+ if hash_fpath.exists():
+ continue # problem * waypoint combination has already occurred; don't repeat
+
+ if isinstance(problem, GeneralProblem) and problem.pre_departure:
+ alert_msg = LOG_MESSAGING["pre_departure"]
+
+ else:
+ alert_msg = LOG_MESSAGING["during_expedition"].format(
+ waypoint=int(problem_waypoint_i) + 1
+ )
+
+ # log problem occurrence, save to checkpoint, and pause simulation
+ self._log_problem(
+ problem,
+ problem_waypoint_i,
+ alert_msg,
+ problem_hash,
+ hash_fpath,
+ log_delay,
+ )
+
+ # cache original expedition for reference and/or restoring later if needed (checkpoint.yaml [written in _log_problem] can be overwritten if multiple problems occur so is not a persistent record of original schedule)
+ self._cache_original_expedition(self.expedition)
+
+ @staticmethod
+ def cache_selected_problems(
+ problems: dict[str, list[GeneralProblem | InstrumentProblem] | None],
+ selected_problems_fpath: str,
+ ) -> None:
+ """Cache suite of problems to json, for reference."""
+ # make dir to contain problem jsons (unique to expedition)
+ os.makedirs(Path(selected_problems_fpath).parent, exist_ok=True)
+
+ # cache dict of selected_problems to json
+ with open(
+ selected_problems_fpath,
+ "w",
+ encoding="utf-8",
+ ) as f:
+ json.dump(
+ {
+ "problem_class": [p.short_name for p in problems["problem_class"]],
+ "waypoint_i": problems["waypoint_i"],
+ "timestamp": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
+ },
+ f,
+ indent=4,
+ )
+
+ @staticmethod
+ def post_expedition_report(
+ problems: dict[str, list[GeneralProblem | InstrumentProblem] | None],
+ report_fpath: str | Path,
+ ) -> None:
+        """Produce human-readable post-expedition report (.txt), including problems that occurred (their full messages), the waypoint, and what delay they caused."""
+ for problem, problem_waypoint_i in zip(
+ problems["problem_class"], problems["waypoint_i"], strict=True
+ ):
+ affected_wp = (
+ "in-port" if problem_waypoint_i is None else f"{problem_waypoint_i + 1}"
+ )
+ delay_hours = problem.delay_duration.total_seconds() / 3600.0
+ with open(report_fpath, "a", encoding="utf-8") as f:
+ f.write("---\n")
+ f.write(f"Waypoint: {affected_wp}\n")
+ f.write(f"Problem: {problem.message}\n")
+ f.write(f"Delay caused: {delay_hours} hours\n\n")
+
+ @staticmethod
+ def load_selected_problems(
+ selected_problems_fpath: str,
+ ) -> dict[str, list[GeneralProblem | InstrumentProblem] | None]:
+ """Load previously selected problem classes from json."""
+ with open(
+ selected_problems_fpath,
+ encoding="utf-8",
+ ) as f:
+ problems_json = json.load(f)
+
+ # extract selected problem classes from their names (using the lookups preserves order they were saved in)
+ selected_problems = {"problem_class": [], "waypoint_i": []}
+ general_problems_lookup = {cls.short_name: cls for cls in GENERAL_PROBLEMS}
+ instrument_problems_lookup = {
+ cls.short_name: cls for cls in INSTRUMENT_PROBLEMS
+ }
+
+ for cls_name, wp_idx in zip(
+ problems_json["problem_class"], problems_json["waypoint_i"], strict=True
+ ):
+ if cls_name in general_problems_lookup:
+ selected_problems["problem_class"].append(
+ general_problems_lookup[cls_name]
+ )
+ elif cls_name in instrument_problems_lookup:
+ selected_problems["problem_class"].append(
+ instrument_problems_lookup[cls_name]
+ )
+ else:
+ raise ValueError(
+ f"Problem class '{cls_name}' not found in known problem registries."
+ )
+ selected_problems["waypoint_i"].append(wp_idx)
+
+ return selected_problems
+
+ def _log_problem(
+ self,
+ problem: GeneralProblem | InstrumentProblem,
+ problem_waypoint_i: int | None,
+ alert_msg: str,
+ problem_hash: str,
+ hash_fpath: Path,
+ log_delay: float,
+ ):
+ """Log problem occurrence with spinner and delay, save to checkpoint, write hash."""
+ time.sleep(3.0) # brief pause before spinner
+ with yaspin(text=alert_msg) as spinner:
+ time.sleep(log_delay)
+ spinner.ok("💥 ")
+
+ self._hash_to_json(
+ problem,
+ problem_hash,
+ problem_waypoint_i,
+ hash_fpath,
+ )
+
+ has_contingency = self._has_contingency(problem, problem_waypoint_i)
+
+ if has_contingency:
+ impact_str = LOG_MESSAGING["problem_avoided"]
+ result_str = "The expedition will carry on shortly as planned."
+
+ # update problem json to resolved = True
+ with open(hash_fpath, encoding="utf-8") as f:
+ problem_json = json.load(f)
+ problem_json["resolved"] = True
+ with open(hash_fpath, "w", encoding="utf-8") as f_out:
+ json.dump(problem_json, f_out, indent=4)
+
+ else:
+ affected = (
+ "in-port"
+ if problem_waypoint_i is None
+ else f"at waypoint {problem_waypoint_i + 1}"
+ )
+
+            impact_str = f"Not enough contingency time scheduled to mitigate the delay of {problem.delay_duration.total_seconds() / 3600.0} hours occurring {affected} (future waypoint(s) would be reached too late).\n"
+ result_str = LOG_MESSAGING["schedule_problems"].format(
+ delay_duration=problem.delay_duration.total_seconds() / 3600.0,
+ problem_wp=affected,
+ expedition_yaml=EXPEDITION,
+ )
+
+ # save checkpoint
+ checkpoint = Checkpoint(
+ past_schedule=self.expedition.schedule,
+ failed_waypoint_i=problem_waypoint_i + 1
+ if problem_waypoint_i is not None
+ else 0,
+ ) # failed waypoint index then becomes the one after the one where the problem occurred; as this is when scheduling issues would be run into; for pre-departure problems this is the first waypoint
+ _save_checkpoint(checkpoint, self.expedition_dir)
+
+ # save latest version of expedition (overwrites previous)
+ self.expedition.to_yaml(self.expedition_dir.joinpath(CACHE, EXPEDITION_LATEST))
+
+        # display tabular output in the console
+ self._tabular_outputter(
+ problem_str=problem.message,
+ impact_str=impact_str,
+ result_str=result_str,
+ has_contingency=has_contingency,
+ )
+
+ if has_contingency:
+ return # continue expedition as normal
+ else:
+ sys.exit(0) # pause simulation
+
+ def _has_contingency(
+ self,
+ problem: InstrumentProblem | GeneralProblem,
+ problem_waypoint_i: int | None,
+ ) -> bool:
+ """Determine if enough contingency time has been scheduled to avoid delay affecting the waypoint immediately after the problem."""
+ if problem_waypoint_i is None:
+ return False # pre-departure problems always cause delay to first waypoint
+
+ else:
+ curr_wp = self.expedition.schedule.waypoints[problem_waypoint_i]
+ next_wp = self.expedition.schedule.waypoints[problem_waypoint_i + 1]
+
+ wp_stationkeeping_time = _calc_wp_stationkeeping_time(
+ curr_wp.instrument, self.expedition
+ )
+
+ scheduled_time_diff = next_wp.time - curr_wp.time
+
+ sail_time = _calc_sail_time(
+ curr_wp.location,
+ next_wp.location,
+ ship_speed_knots=self.expedition.ship_config.ship_speed_knots,
+ projection=PROJECTION,
+ )[0]
+
+ return (
+ scheduled_time_diff
+ > sail_time + wp_stationkeeping_time + problem.delay_duration
+ )
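The inequality at the heart of `_has_contingency` above is plain timedelta arithmetic: the gap scheduled between two waypoints must strictly exceed sail time plus on-station time plus the problem's delay. A standalone sketch (helper name hypothetical):

```python
from datetime import timedelta


def has_contingency(scheduled_gap, sail_time, stationkeeping_time, delay):
    """True only if the scheduled gap between waypoints strictly exceeds
    sail time + on-station (instrument) time + the problem's delay."""
    return scheduled_gap > sail_time + stationkeeping_time + delay


ok = has_contingency(
    scheduled_gap=timedelta(hours=20),
    sail_time=timedelta(hours=10),
    stationkeeping_time=timedelta(hours=3),
    delay=timedelta(hours=5),
)
# 20h > 10h + 3h + 5h -> True
```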
+
+ def _make_checkpoint(self, failed_waypoint_i: int | None = None) -> Checkpoint:
+ """Make checkpoint, also handling pre-departure."""
+ return Checkpoint(
+ past_schedule=self.expedition.schedule, failed_waypoint_i=failed_waypoint_i
+ )
+
+ def _cache_original_expedition(self, expedition: Expedition):
+ """Cache original schedule to file for user's reference."""
+ path = self.expedition_dir.joinpath(CACHE, EXPEDITION_ORIGINAL)
+ if path.exists():
+ return # don't overwrite if already cached
+ expedition.to_yaml(path)
+ print(f"\nOriginal expedition.yaml cached to {path}.\n")
+
+ @staticmethod
+ def _hash_to_json(
+ problem: InstrumentProblem | GeneralProblem,
+ problem_hash: str,
+ problem_waypoint_i: int | None,
+ hash_path: Path,
+ ) -> dict:
+ """Convert problem details + hash to json."""
+ hash_data = {
+ "problem_hash": problem_hash,
+ "message": problem.message,
+ "problem_waypoint_i": problem_waypoint_i,
+ "delay_duration_hours": problem.delay_duration.total_seconds() / 3600.0,
+ "timestamp": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
+ "resolved": False,
+ }
+ with open(hash_path, "w", encoding="utf-8") as f:
+ json.dump(hash_data, f, indent=4)
+
+ @staticmethod
+ def _tabular_outputter(problem_str, impact_str, result_str, has_contingency: bool):
+        """Display the problem, impact, and result in a live-updating table. Sleep times are included to improve readability and engagement for the user."""
+ console = Console()
+ console.print() # line break before table
+
+ col_kwargs = dict(ratio=1, no_wrap=False, max_width=None, justify="left")
+
+ def make_table(problem, impact, result, col_kwargs, colour_results=False):
+ table = Table(box=box.SIMPLE, expand=True)
+ table.add_column("Problem Encountered", **col_kwargs)
+ table.add_column("Impact on schedule", **col_kwargs)
+
+ if colour_results:
+ style = "green1" if has_contingency else "red1"
+ table.add_column("Result", style=style, **col_kwargs)
+ else:
+ table.add_column("Result", **col_kwargs)
+
+ table.add_row(problem, impact, result)
+ return table
+
+ empty_spinner = Spinner("dots", text="")
+ impact_spinner = Spinner("dots", text="Assessing impact on schedule...")
+
+ with Live(console=console, refresh_per_second=10) as live:
+ # stage 0: empty table
+ table = make_table(empty_spinner, empty_spinner, empty_spinner, col_kwargs)
+ live.update(table)
+ time.sleep(3.0)
+
+ # stage 1: show problem
+ table = make_table(problem_str, empty_spinner, empty_spinner, col_kwargs)
+ live.update(table)
+ time.sleep(3.0)
+
+ # stage 2: spinner in "Impact on schedule" column
+ table = make_table(problem_str, impact_spinner, empty_spinner, col_kwargs)
+ live.update(table)
+ time.sleep(7.0)
+
+ # stage 3: table with problem and impact-investigation complete
+ table = make_table(problem_str, impact_str, empty_spinner, col_kwargs)
+ live.update(table)
+ time.sleep(4.0)
+
+ # stage 4: complete table with problem, impact, and result (give final outcome colour based on fail/success)
+ table = make_table(
+ problem_str, impact_str, result_str, col_kwargs, colour_results=True
+ )
+ live.update(table)
+ time.sleep(3.0)
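The once-only guard in `ProblemSimulator.execute` hashes the problem message plus waypoint index and uses a marker file per hash so a (problem, waypoint) pair never fires twice across reruns of `virtualship run`. A sketch of that pattern using `hashlib` directly (the `_make_hash` utility itself is not shown in this diff, so the short-digest behaviour here is an assumption):

```python
import hashlib
import tempfile
from pathlib import Path


def make_hash(text, length=8):
    # assumed behaviour of the _make_hash utility: short, stable digest of the input
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:length]


def should_fire(log_dir, message, waypoint_i):
    """Fire a problem only the first time this (message, waypoint) pair is seen."""
    marker = Path(log_dir) / f"problem_{make_hash(message + str(waypoint_i))}.json"
    if marker.exists():
        return False  # already occurred on a previous run; don't repeat
    marker.write_text("{}")  # the real code writes full problem details here
    return True


with tempfile.TemporaryDirectory() as d:
    first = should_fire(d, "ctd cable jammed", 2)
    second = should_fire(d, "ctd cable jammed", 2)
# first is True, second is False
```

Because the marker files live alongside the expedition cache, the guard survives the `sys.exit(0)` pause-and-resume cycle the simulator relies on.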
diff --git a/src/virtualship/models/__init__.py b/src/virtualship/models/__init__.py
index d61c17194..7a106ba60 100644
--- a/src/virtualship/models/__init__.py
+++ b/src/virtualship/models/__init__.py
@@ -1,5 +1,6 @@
"""Pydantic models and data classes used to configure virtualship (i.e., in the configuration files or settings)."""
+from .checkpoint import Checkpoint
from .expedition import (
ADCPConfig,
ArgoFloatConfig,
@@ -34,4 +35,5 @@
"Spacetime",
"Expedition",
"InstrumentsConfig",
+ "Checkpoint",
]
diff --git a/src/virtualship/models/checkpoint.py b/src/virtualship/models/checkpoint.py
index 98fe1ae0a..700c714f5 100644
--- a/src/virtualship/models/checkpoint.py
+++ b/src/virtualship/models/checkpoint.py
@@ -2,6 +2,8 @@
from __future__ import annotations
+import json
+from datetime import timedelta
from pathlib import Path
import pydantic
@@ -9,7 +11,13 @@
from virtualship.errors import CheckpointError
from virtualship.instruments.types import InstrumentType
-from virtualship.models import Schedule
+from virtualship.models.expedition import Expedition, Schedule
+from virtualship.utils import (
+ EXPEDITION,
+ PROJECTION,
+ _calc_sail_time,
+ _calc_wp_stationkeeping_time,
+)
class _YamlDumper(yaml.SafeDumper):
@@ -29,6 +37,7 @@ class Checkpoint(pydantic.BaseModel):
"""
past_schedule: Schedule
+ failed_waypoint_i: int | None = None
def to_yaml(self, file_path: str | Path) -> None:
"""
@@ -51,24 +60,122 @@ def from_yaml(cls, file_path: str | Path) -> Checkpoint:
data = yaml.safe_load(file)
return Checkpoint(**data)
- def verify(self, schedule: Schedule) -> None:
+ def verify(self, expedition: Expedition, problems_dir: Path) -> None:
"""
- Verify that the given schedule matches the checkpoint's past schedule.
-
- This method checks if the waypoints in the given schedule match the waypoints
- in the checkpoint's past schedule up to the length of the past schedule.
- If there's a mismatch, it raises a CheckpointError.
+        Verify that the given schedule matches the checkpoint's past schedule, and that any problems have been resolved.
- :param schedule: The schedule to verify against the checkpoint.
- :type schedule: Schedule
- :raises CheckpointError: If the past waypoints in the given schedule
- have been changed compared to the checkpoint.
- :return: None
+        Addresses changes made by the user in response to both i) scheduling issues arising from insufficient time for the ship to travel between waypoints, and ii) problems encountered during simulation.
"""
- if (
- not schedule.waypoints[: len(self.past_schedule.waypoints)]
- == self.past_schedule.waypoints
+ new_schedule = expedition.schedule
+
+        # 1) check that past waypoints have not been changed, unless it is a pre-departure problem
+ if self.failed_waypoint_i is None:
+ pass
+        elif (
+            new_schedule.waypoints[: self.failed_waypoint_i]
+            != self.past_schedule.waypoints[: self.failed_waypoint_i]
):
raise CheckpointError(
- "Past waypoints in schedule have been changed! Restore past schedule and only change future waypoints."
+                f"Past waypoints in schedule have been changed! Restore the past schedule and only change future waypoints (waypoint {self.failed_waypoint_i + 1} onwards)."
)
+
+ # 2) check that problems have been resolved in the new schedule
+ hash_fpaths = [
+ str(path.resolve()) for path in problems_dir.glob("problem_*.json")
+ ]
+
+        if hash_fpaths:
+ for file in hash_fpaths:
+ with open(file, encoding="utf-8") as f:
+ problem = json.load(f)
+                if problem["resolved"]:
+                    continue
+                else:
+ # check if delay has been accounted for in the new schedule (at waypoint immediately after problem waypoint; or first waypoint if pre-departure problem)
+ delay_duration = timedelta(
+ hours=float(problem["delay_duration_hours"])
+ )
+
+ problem_waypoint = (
+ new_schedule.waypoints[0]
+ if problem["problem_waypoint_i"] is None
+ else new_schedule.waypoints[problem["problem_waypoint_i"]]
+ )
+
+                    # pre-departure problem: check that the whole delay duration has been added to the first waypoint time (by comparing against the past schedule)
+ if problem["problem_waypoint_i"] is None:
+ time_diff = (
+ problem_waypoint.time - self.past_schedule.waypoints[0].time
+ )
+ resolved = time_diff >= delay_duration
+
+                    # problem at a later waypoint: check the newly scheduled time exceeds sail time + delay duration + instrument deployment time (rather than requiring the whole delay duration to be added, as there may be _some_ contingency time already scheduled)
+ else:
+ failed_waypoint = new_schedule.waypoints[self.failed_waypoint_i]
+
+ scheduled_time = failed_waypoint.time - problem_waypoint.time
+
+ stationkeeping_time = _calc_wp_stationkeeping_time(
+ problem_waypoint.instrument,
+ expedition,
+ ) # total time required to deploy instruments at problem waypoint
+
+ sail_time = _calc_sail_time(
+ problem_waypoint.location,
+ failed_waypoint.location,
+ ship_speed_knots=expedition.ship_config.ship_speed_knots,
+ projection=PROJECTION,
+ )[0]
+
+ min_time_required = (
+ sail_time + delay_duration + stationkeeping_time
+ )
+
+ resolved = scheduled_time >= min_time_required
+
+ if resolved:
+ print(
+ "\n\n🎉 Previous problem has been resolved in the schedule.\n"
+ )
+
+ # save back to json file changing the resolved status to True
+ problem["resolved"] = True
+ with open(file, "w", encoding="utf-8") as f_out:
+ json.dump(problem, f_out, indent=4)
+
+ # only handle the first unresolved problem found; others will be handled in subsequent runs but are not yet known to the user
+ break
+
+ else:
+ problem_wp_str = (
+ "in-port"
+ if problem["problem_waypoint_i"] is None
+ else f"at waypoint {problem['problem_waypoint_i'] + 1}"
+ )
+ affected_wp_str = (
+ "1"
+ if problem["problem_waypoint_i"] is None
+ else f"{problem['problem_waypoint_i'] + 2}"
+ )
+ time_elapsed = (
+ (sail_time + delay_duration + stationkeeping_time)
+ if problem["problem_waypoint_i"] is not None
+ else delay_duration
+ )
+ failed_waypoint_time = (
+ failed_waypoint.time
+ if problem["problem_waypoint_i"] is not None
+ else new_schedule.waypoints[0].time
+ )
+ current_time = problem_waypoint.time + time_elapsed
+
+ raise CheckpointError(
+                        f"The problem encountered in the previous simulation has not been resolved in the schedule! Please adjust the schedule to account for delays caused by the problem (by using `virtualship plan` or directly editing the {EXPEDITION} file).\n\n"
+ f"The problem was associated with a delay duration of {problem['delay_duration_hours']} hours {problem_wp_str} (meaning waypoint {affected_wp_str} could not be reached in time). "
+ f"Currently, the ship would reach waypoint {affected_wp_str} at {current_time}, but the scheduled time is {failed_waypoint_time}."
+ + (
+ f"\n\nHint: don't forget to factor in the time required to deploy the instruments {problem_wp_str} when rescheduling waypoint {affected_wp_str}."
+ if problem["problem_waypoint_i"] is not None
+ else ""
+ )
+ )
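Reviewer note: `Checkpoint.verify` above keys on three fields of each `problem_*.json` file. A minimal sketch of that shape, for reference while reviewing (field names are taken from the diff; the filename and values are illustrative):

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

# Minimal sketch of a problem_*.json file as consumed by Checkpoint.verify;
# field names come from the code above, the values are made up.
problem = {
    "resolved": False,            # flipped to True once the delay is accounted for
    "delay_duration_hours": 5.0,  # delay introduced by the problem
    "problem_waypoint_i": None,   # None => pre-departure problem, else a waypoint index
}

with TemporaryDirectory() as tmp:
    fpath = Path(tmp) / "problem_deadbeef.json"  # "deadbeef" stands in for a real hash
    fpath.write_text(json.dumps(problem, indent=4), encoding="utf-8")
    loaded = json.loads(fpath.read_text(encoding="utf-8"))
```

Any other fields a problem file carries are ignored by `verify`, so this is the minimal contract the tests below also rely on.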
diff --git a/src/virtualship/models/expedition.py b/src/virtualship/models/expedition.py
index b8f65558f..3184fcaa8 100644
--- a/src/virtualship/models/expedition.py
+++ b/src/virtualship/models/expedition.py
@@ -12,9 +12,11 @@
from virtualship.errors import InstrumentsConfigError, ScheduleError
from virtualship.instruments.types import InstrumentType
from virtualship.utils import (
+ _calc_sail_time,
_get_bathy_data,
_get_waypoint_latlons,
_validate_numeric_to_timedelta,
+ register_instrument_config,
)
from .location import Location
@@ -165,23 +167,22 @@ def verify(
if wp.instrument is InstrumentType.CTD:
time += timedelta(minutes=20)
- geodinv: tuple[float, float, float] = projection.inv(
- wp.location.lon,
- wp.location.lat,
- wp_next.location.lon,
- wp_next.location.lat,
- )
- distance = geodinv[2]
+ time_to_reach = _calc_sail_time(
+ wp.location,
+ wp_next.location,
+ ship_speed,
+ projection,
+ )[0]
- time_to_reach = timedelta(seconds=distance / ship_speed * 3600 / 1852)
arrival_time = time + time_to_reach
if wp_next.time is None:
time = arrival_time
elif arrival_time > wp_next.time:
raise ScheduleError(
- f"Waypoint planning is not valid: would arrive too late at waypoint number {wp_i + 2}. "
- f"location: {wp_next.location} time: {wp_next.time} instrument: {wp_next.instrument}"
+ f"Waypoint planning is not valid: would arrive too late at waypoint {wp_i + 2}. "
+ f"Location: {wp_next.location} Time: {wp_next.time}. "
+ f"Currently projected to arrive at: {arrival_time}."
)
else:
time = wp_next.time
@@ -204,6 +205,7 @@ def serialize_instrument(self, instrument):
return instrument.value if instrument else None
+@register_instrument_config(InstrumentType.ARGO_FLOAT)
class ArgoFloatConfig(pydantic.BaseModel):
"""Configuration for argos floats."""
@@ -244,6 +246,7 @@ def _validate_stationkeeping_time(cls, value: int | float | timedelta) -> timede
model_config = pydantic.ConfigDict(populate_by_name=True)
+@register_instrument_config(InstrumentType.ADCP)
class ADCPConfig(pydantic.BaseModel):
"""Configuration for ADCP instrument."""
@@ -266,6 +269,7 @@ def _validate_period(cls, value: int | float | timedelta) -> timedelta:
return _validate_numeric_to_timedelta(value, "minutes")
+@register_instrument_config(InstrumentType.CTD)
class CTDConfig(pydantic.BaseModel):
"""Configuration for CTD instrument."""
@@ -288,6 +292,7 @@ def _validate_stationkeeping_time(cls, value: int | float | timedelta) -> timede
return _validate_numeric_to_timedelta(value, "minutes")
+@register_instrument_config(InstrumentType.CTD_BGC)
class CTD_BGCConfig(pydantic.BaseModel):
"""Configuration for CTD_BGC instrument."""
@@ -310,6 +315,7 @@ def _validate_stationkeeping_time(cls, value: int | float | timedelta) -> timede
return _validate_numeric_to_timedelta(value, "minutes")
+@register_instrument_config(InstrumentType.UNDERWATER_ST)
class ShipUnderwaterSTConfig(pydantic.BaseModel):
"""Configuration for underwater ST."""
@@ -330,6 +336,7 @@ def _validate_period(cls, value: int | float | timedelta) -> timedelta:
return _validate_numeric_to_timedelta(value, "minutes")
+@register_instrument_config(InstrumentType.DRIFTER)
class DrifterConfig(pydantic.BaseModel):
"""Configuration for drifters."""
@@ -364,6 +371,7 @@ def _validate_stationkeeping_time(cls, value: int | float | timedelta) -> timede
return _validate_numeric_to_timedelta(value, "minutes")
+@register_instrument_config(InstrumentType.XBT)
class XBTConfig(pydantic.BaseModel):
"""Configuration for xbt instrument."""
diff --git a/src/virtualship/utils.py b/src/virtualship/utils.py
index 2879855e4..ed13fae10 100644
--- a/src/virtualship/utils.py
+++ b/src/virtualship/utils.py
@@ -1,6 +1,7 @@
from __future__ import annotations
import glob
+import hashlib
import os
import re
import warnings
@@ -12,23 +13,127 @@
import copernicusmarine
import numpy as np
+import pyproj
import xarray as xr
from parcels import FieldSet
from virtualship.errors import CopernicusCatalogueError
if TYPE_CHECKING:
- from virtualship.expedition.simulate_schedule import ScheduleOk
- from virtualship.models import Expedition
-
+ from virtualship.expedition.simulate_schedule import (
+ ScheduleOk,
+ )
+ from virtualship.models import Expedition, Location
+ from virtualship.models.checkpoint import Checkpoint
import pandas as pd
import yaml
from pydantic import BaseModel
from yaspin import Spinner
+# =====================================================
+# SECTION: simulation constants
+# =====================================================
+
EXPEDITION = "expedition.yaml"
CHECKPOINT = "checkpoint.yaml"
+RESULTS = "results"
+
+# projection used to sail between waypoints
+PROJECTION = pyproj.Geod(ellps="WGS84")
+
+# caching for problems module
+CACHE = "cache"
+EXPEDITION_IDENTIFIER = "id_latest.txt"
+PROBLEMS_ENCOUNTERED = "problems_encountered_" + "{expedition_id}"
+SELECTED_PROBLEMS = "selected_problems.json"
+REPORT = "post_expedition_report.txt"
+
+EXPEDITION_ORIGINAL = "expedition_original.yaml"
+EXPEDITION_LATEST = "expedition_latest.yaml"
+
+# =====================================================
+# SECTION: Copernicus Marine Service constants
+# =====================================================
+
+# Copernicus Marine product IDs
+
+PRODUCT_IDS = {
+ "phys": {
+ "reanalysis": "cmems_mod_glo_phy_my_0.083deg_P1D-m",
+ "reanalysis_interim": "cmems_mod_glo_phy_myint_0.083deg_P1D-m",
+ "analysis": "cmems_mod_glo_phy_anfc_0.083deg_P1D-m",
+ },
+ "bgc": {
+ "reanalysis": "cmems_mod_glo_bgc_my_0.25deg_P1D-m",
+ "reanalysis_interim": "cmems_mod_glo_bgc_myint_0.25deg_P1D-m",
+ "analysis": None, # will be set per variable
+ },
+}
+
+BGC_ANALYSIS_IDS = {
+ "o2": "cmems_mod_glo_bgc-bio_anfc_0.25deg_P1D-m",
+ "chl": "cmems_mod_glo_bgc-pft_anfc_0.25deg_P1D-m",
+ "no3": "cmems_mod_glo_bgc-nut_anfc_0.25deg_P1D-m",
+ "po4": "cmems_mod_glo_bgc-nut_anfc_0.25deg_P1D-m",
+ "ph": "cmems_mod_glo_bgc-car_anfc_0.25deg_P1D-m",
+ "phyc": "cmems_mod_glo_bgc-pft_anfc_0.25deg_P1D-m",
+ "nppv": "cmems_mod_glo_bgc-bio_anfc_0.25deg_P1D-m",
+}
+
+MONTHLY_BGC_REANALYSIS_IDS = {
+ "ph": "cmems_mod_glo_bgc_my_0.25deg_P1M-m",
+ "phyc": "cmems_mod_glo_bgc_my_0.25deg_P1M-m",
+}
+MONTHLY_BGC_REANALYSIS_INTERIM_IDS = {
+ "ph": "cmems_mod_glo_bgc_myint_0.25deg_P1M-m",
+ "phyc": "cmems_mod_glo_bgc_myint_0.25deg_P1M-m",
+}
+
+# variables used in VirtualShip which are physical or biogeochemical variables, respectively
+COPERNICUSMARINE_PHYS_VARIABLES = ["uo", "vo", "so", "thetao"]
+COPERNICUSMARINE_BGC_VARIABLES = ["o2", "chl", "no3", "po4", "ph", "phyc", "nppv"]
+
+BATHYMETRY_ID = "cmems_mod_glo_phy_my_0.083deg_static"
+
+
+# =====================================================
+# SECTION: decorators / dynamic registries and mapping
+# =====================================================
+
+# helpful for dynamic access in different parts of the codebase
+
+# main instrument (simulation) class registry and registration utilities
+INSTRUMENT_CLASS_MAP = {}
+
+
+def register_instrument(instrument_type):
+ def decorator(cls):
+ INSTRUMENT_CLASS_MAP[instrument_type] = cls
+ return cls
+
+ return decorator
+
+
+def get_instrument_class(instrument_type):
+ return INSTRUMENT_CLASS_MAP.get(instrument_type)
+
+
+# map from instrument type to instrument config (pydantic BaseModel) class name
+INSTRUMENT_CONFIG_MAP = {}
+
+
+def register_instrument_config(instrument_type):
+ def decorator(cls):
+ INSTRUMENT_CONFIG_MAP[instrument_type] = cls.__name__
+ return cls
+
+ return decorator
+
+
+# =====================================================
+# SECTION: helper functions
+# =====================================================
def load_static_file(name: str) -> str:
@@ -215,40 +320,6 @@ def _get_expedition(expedition_dir: Path) -> Expedition:
) from e
-# custom ship spinner
-ship_spinner = Spinner(
- interval=240,
- frames=[
- " 🚢 ",
- " 🚢 ",
- " 🚢 ",
- " 🚢 ",
- " 🚢",
- " 🚢 ",
- " 🚢 ",
- " 🚢 ",
- " 🚢 ",
- "🚢 ",
- ],
-)
-
-
-# InstrumentType -> Instrument registry and registration utilities.
-INSTRUMENT_CLASS_MAP = {}
-
-
-def register_instrument(instrument_type):
- def decorator(cls):
- INSTRUMENT_CLASS_MAP[instrument_type] = cls
- return cls
-
- return decorator
-
-
-def get_instrument_class(instrument_type):
- return INSTRUMENT_CLASS_MAP.get(instrument_type)
-
-
def add_dummy_UV(fieldset: FieldSet):
"""Add a dummy U and V field to a FieldSet to satisfy parcels FieldSet completeness checks."""
if "U" not in fieldset.__dict__.keys():
@@ -272,47 +343,6 @@ def add_dummy_UV(fieldset: FieldSet):
) from None
-# Copernicus Marine product IDs
-
-PRODUCT_IDS = {
- "phys": {
- "reanalysis": "cmems_mod_glo_phy_my_0.083deg_P1D-m",
- "reanalysis_interim": "cmems_mod_glo_phy_myint_0.083deg_P1D-m",
- "analysis": "cmems_mod_glo_phy_anfc_0.083deg_P1D-m",
- },
- "bgc": {
- "reanalysis": "cmems_mod_glo_bgc_my_0.25deg_P1D-m",
- "reanalysis_interim": "cmems_mod_glo_bgc_myint_0.25deg_P1D-m",
- "analysis": None, # will be set per variable
- },
-}
-
-BGC_ANALYSIS_IDS = {
- "o2": "cmems_mod_glo_bgc-bio_anfc_0.25deg_P1D-m",
- "chl": "cmems_mod_glo_bgc-pft_anfc_0.25deg_P1D-m",
- "no3": "cmems_mod_glo_bgc-nut_anfc_0.25deg_P1D-m",
- "po4": "cmems_mod_glo_bgc-nut_anfc_0.25deg_P1D-m",
- "ph": "cmems_mod_glo_bgc-car_anfc_0.25deg_P1D-m",
- "phyc": "cmems_mod_glo_bgc-pft_anfc_0.25deg_P1D-m",
- "nppv": "cmems_mod_glo_bgc-bio_anfc_0.25deg_P1D-m",
-}
-
-MONTHLY_BGC_REANALYSIS_IDS = {
- "ph": "cmems_mod_glo_bgc_my_0.25deg_P1M-m",
- "phyc": "cmems_mod_glo_bgc_my_0.25deg_P1M-m",
-}
-MONTHLY_BGC_REANALYSIS_INTERIM_IDS = {
- "ph": "cmems_mod_glo_bgc_myint_0.25deg_P1M-m",
- "phyc": "cmems_mod_glo_bgc_myint_0.25deg_P1M-m",
-}
-
-# variables used in VirtualShip which are physical or biogeochemical variables, respectively
-COPERNICUSMARINE_PHYS_VARIABLES = ["uo", "vo", "so", "thetao"]
-COPERNICUSMARINE_BGC_VARIABLES = ["o2", "chl", "no3", "po4", "ph", "phyc", "nppv"]
-
-BATHYMETRY_ID = "cmems_mod_glo_phy_my_0.083deg_static"
-
-
def _select_product_id(
physical: bool,
schedule_start,
@@ -552,3 +582,103 @@ def _get_waypoint_latlons(waypoints):
strict=True,
)
return wp_lats, wp_lons
+
+
+def _save_checkpoint(checkpoint: Checkpoint, expedition_dir: Path) -> None:
+ file_path = expedition_dir.joinpath(CHECKPOINT)
+ checkpoint.to_yaml(file_path)
+
+
+def _calc_sail_time(
+ location1: Location,
+ location2: Location,
+ ship_speed_knots: float,
+ projection: pyproj.Geod,
+) -> tuple[timedelta, float, float]:
+    """Calculate sail time between two waypoint locations; return (sail time, forward azimuth in degrees, ship speed in m/s)."""
+ geodinv: tuple[float, float, float] = projection.inv(
+ lons1=location1.longitude,
+ lats1=location1.latitude,
+ lons2=location2.longitude,
+ lats2=location2.latitude,
+ )
+ ship_speed_meter_per_second = ship_speed_knots * 1852 / 3600
+ distance_to_next_waypoint = geodinv[2]
+ return (
+ timedelta(seconds=distance_to_next_waypoint / ship_speed_meter_per_second),
+ geodinv[0],
+ ship_speed_meter_per_second,
+ )
+
+
+def _calc_wp_stationkeeping_time(
+ wp_instrument_types: list,
+ expedition: Expedition,
+ instrument_config_map: dict = INSTRUMENT_CONFIG_MAP,
+) -> timedelta:
+ """For a given waypoint (and the instruments present at this waypoint), calculate how much time is required to carry out all instrument deployments."""
+ from virtualship.instruments.types import InstrumentType # avoid circular imports
+
+ # TODO: this can be removed if/when CTD and CTD_BGC are merged to a single instrument
+ both_ctd_and_bgc = (
+ InstrumentType.CTD in wp_instrument_types
+ and InstrumentType.CTD_BGC in wp_instrument_types
+ )
+
+ # extract configs for all instruments present in expedition
+ valid_instrument_configs = [
+ iconfig
+ for _, iconfig in expedition.instruments_config.__dict__.items()
+ if iconfig
+ ]
+
+ # extract configs for instruments present in given waypoint
+ wp_instrument_configs = []
+ for iconfig in valid_instrument_configs:
+ for itype in wp_instrument_types:
+ if instrument_config_map[itype] == iconfig.__class__.__name__:
+ wp_instrument_configs.append(iconfig)
+
+ # get wp total stationkeeping time
+ cumulative_stationkeeping_time = timedelta()
+ for iconfig in wp_instrument_configs:
+ if (
+ both_ctd_and_bgc
+ and iconfig.__class__.__name__
+ == INSTRUMENT_CONFIG_MAP[InstrumentType.CTD_BGC]
+ ):
+ continue # only need to add time cost once if both CTD and CTD_BGC are being taken; in reality they would be done on the same instrument
+ if hasattr(iconfig, "stationkeeping_time"):
+ cumulative_stationkeeping_time += iconfig.stationkeeping_time
+
+ return cumulative_stationkeeping_time
+
+
+def _make_hash(s: str, length: int) -> str:
+ """Make unique hash for problem occurrence."""
+ assert length % 2 == 0, "Length must be even."
+ half_length = length // 2
+ return hashlib.shake_128(s.encode("utf-8")).hexdigest(half_length)
+
+
+# =====================================================
+# SECTION: misc.
+# =====================================================
+
+
+# custom ship spinner
+ship_spinner = Spinner(
+ interval=240,
+ frames=[
+ " 🚢 ",
+ " 🚢 ",
+ " 🚢 ",
+ " 🚢 ",
+ " 🚢",
+ " 🚢 ",
+ " 🚢 ",
+ " 🚢 ",
+ " 🚢 ",
+ "🚢 ",
+ ],
+)
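Reviewer note on `_make_hash` above: `shake_128.hexdigest(n)` emits `2*n` hex characters, which is why the requested length must be even. A standalone sketch of the same behaviour (the input string is illustrative):

```python
import hashlib

def make_hash(s: str, length: int) -> str:
    # mirrors _make_hash above: hexdigest(n) yields 2*n hex characters
    assert length % 2 == 0, "Length must be even."
    return hashlib.shake_128(s.encode("utf-8")).hexdigest(length // 2)

h = make_hash("problem:engine_failure:wp3", 8)
assert len(h) == 8                                      # exactly `length` hex chars
assert h == make_hash("problem:engine_failure:wp3", 8)  # deterministic
```

Being deterministic, the hash gives each problem occurrence a stable, reproducible filename across runs.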
diff --git a/tests/cli/test_run.py b/tests/cli/test_run.py
index 190442347..d546cae8c 100644
--- a/tests/cli/test_run.py
+++ b/tests/cli/test_run.py
@@ -53,7 +53,9 @@ def test_run(tmp_path, monkeypatch):
fake_data_dir = tmp_path / "fake_data"
fake_data_dir.mkdir()
- _run(expedition_dir, from_data=fake_data_dir)
+ _run(
+ expedition_dir, prob_level=0, from_data=fake_data_dir
+ ) # problems turned off here
results_dir = expedition_dir / "results"
diff --git a/tests/expedition/test_expedition.py b/tests/expedition/test_expedition.py
index 90027e8ec..314b9db89 100644
--- a/tests/expedition/test_expedition.py
+++ b/tests/expedition/test_expedition.py
@@ -199,7 +199,7 @@ def test_verify_on_land():
]
),
ScheduleError,
- "Waypoint planning is not valid: would arrive too late at waypoint number 2...",
+ "Waypoint planning is not valid: would arrive too late at waypoint 2\\.",
id="NotEnoughTime",
),
],
diff --git a/tests/make_realistic/problems/test_scenarios.py b/tests/make_realistic/problems/test_scenarios.py
new file mode 100644
index 000000000..294d1ed71
--- /dev/null
+++ b/tests/make_realistic/problems/test_scenarios.py
@@ -0,0 +1,45 @@
+from datetime import timedelta
+
+from virtualship.instruments.types import InstrumentType
+from virtualship.make_realistic.problems.scenarios import (
+ GENERAL_PROBLEMS,
+ INSTRUMENT_PROBLEMS,
+ GeneralProblem,
+ InstrumentProblem,
+)
+
+
+def _assert_general_problem_class(cls):
+ assert isinstance(cls, GeneralProblem)
+
+ # required attributes and types
+ assert isinstance(cls.message, str)
+ assert cls.message.strip(), "message should not be empty"
+
+ assert isinstance(cls.delay_duration, timedelta)
+ assert isinstance(cls.pre_departure, bool)
+
+
+def _assert_instrument_problem_class(cls):
+ assert isinstance(cls, InstrumentProblem)
+
+ # required attributes and types
+ assert isinstance(cls.message, str)
+ assert cls.message.strip(), "message should not be empty"
+
+ assert isinstance(cls.delay_duration, timedelta)
+ assert isinstance(cls.instrument_type, InstrumentType)
+
+
+def test_general_problems():
+ assert GENERAL_PROBLEMS, "GENERAL_PROBLEMS should not be empty"
+
+ for cls in GENERAL_PROBLEMS:
+ _assert_general_problem_class(cls)
+
+
+def test_instrument_problems():
+ assert INSTRUMENT_PROBLEMS, "INSTRUMENT_PROBLEMS should not be empty"
+
+ for cls in INSTRUMENT_PROBLEMS:
+ _assert_instrument_problem_class(cls)
diff --git a/tests/make_realistic/problems/test_simulator.py b/tests/make_realistic/problems/test_simulator.py
new file mode 100644
index 000000000..6d08635ee
--- /dev/null
+++ b/tests/make_realistic/problems/test_simulator.py
@@ -0,0 +1,257 @@
+import json
+import random
+from datetime import datetime, timedelta
+
+from virtualship.instruments.types import InstrumentType
+from virtualship.make_realistic.problems.scenarios import (
+ GENERAL_PROBLEMS,
+ GeneralProblem,
+ InstrumentProblem,
+)
+from virtualship.make_realistic.problems.simulator import ProblemSimulator
+from virtualship.models.expedition import (
+ Expedition,
+ InstrumentsConfig,
+ Schedule,
+ ShipConfig,
+ Waypoint,
+)
+from virtualship.models.location import Location
+from virtualship.utils import REPORT
+
+
+def _make_simple_expedition(
+ num_waypoints: int = 2, distance_scale: float = 1.0, no_instruments: bool = False
+) -> Expedition:
+    """Function rather than fixture, to allow configurability across tests."""
+ sample_datetime = datetime(2024, 1, 1, 0, 0, 0)
+ instruments_non_underway = [inst for inst in InstrumentType if not inst.is_underway]
+
+ waypoints = []
+ for i in range(num_waypoints):
+ wp = Waypoint(
+ location=Location(
+ latitude=0.0 + i * distance_scale, longitude=0.0 + i * distance_scale
+ ),
+ time=sample_datetime + timedelta(days=i),
+ instrument=[]
+ if no_instruments
+ else random.sample(instruments_non_underway, 3),
+ )
+ waypoints.append(wp)
+
+ schedule = Schedule(waypoints=waypoints)
+ instruments = InstrumentsConfig()
+ ship = ShipConfig(ship_speed_knots=10.0)
+ return Expedition(
+ schedule=schedule, instruments_config=instruments, ship_config=ship
+ )
+
+
+def test_select_problems_single_waypoint_returns_pre_departure(tmp_path):
+ expedition = _make_simple_expedition(num_waypoints=1)
+ instruments_in_expedition = expedition.get_instruments()
+ simulator = ProblemSimulator(expedition, str(tmp_path))
+ problems = simulator.select_problems(instruments_in_expedition, prob_level=2)
+
+ assert isinstance(problems, dict)
+ assert len(problems["problem_class"]) == 1
+ assert problems["waypoint_i"] == [None]
+
+ problem_cls = problems["problem_class"][0]
+ assert isinstance(problem_cls, GeneralProblem)
+ assert getattr(problem_cls, "pre_departure", False) is True
+
+
+def test_no_instruments_no_instruments_problems(tmp_path):
+ expedition = _make_simple_expedition(num_waypoints=2, no_instruments=True)
+ instruments_in_expedition = expedition.get_instruments()
+ assert len(instruments_in_expedition) == 0, "Expedition should have no instruments"
+
+ simulator = ProblemSimulator(expedition, str(tmp_path))
+ problems = simulator.select_problems(instruments_in_expedition, prob_level=2)
+
+ has_instrument_problems = any(
+ isinstance(cls, InstrumentProblem) for cls in problems["problem_class"]
+ )
+ assert not has_instrument_problems, (
+ "Should not select instrument problems when no instruments are present"
+ )
+
+
+def test_select_problems_prob_level_zero():
+ expedition = _make_simple_expedition(num_waypoints=2)
+ instruments_in_expedition = expedition.get_instruments()
+ simulator = ProblemSimulator(expedition, ".")
+
+ problems = simulator.select_problems(instruments_in_expedition, prob_level=0)
+ assert problems is None
+
+
+def test_cache_and_load_selected_problems_roundtrip(tmp_path):
+ expedition = _make_simple_expedition(num_waypoints=2)
+ simulator = ProblemSimulator(expedition, str(tmp_path))
+
+ # pick two general problems (registry should contain entries)
+ problem1 = GENERAL_PROBLEMS[0]
+ problem2 = GENERAL_PROBLEMS[1] if len(GENERAL_PROBLEMS) > 1 else problem1
+
+ problems = {"problem_class": [problem1, problem2], "waypoint_i": [None, 0]}
+
+ sel_fpath = tmp_path / "subdir" / "selected_problems.json"
+ simulator.cache_selected_problems(problems, str(sel_fpath))
+
+ assert sel_fpath.exists()
+ with open(sel_fpath, encoding="utf-8") as f:
+ data = json.load(f)
+ assert "problem_class" in data and "waypoint_i" in data
+
+ # now load via simulator, verify class names map back to original selected problem classes
+ loaded = simulator.load_selected_problems(str(sel_fpath))
+ assert loaded["waypoint_i"] == problems["waypoint_i"]
+ assert [c.short_name for c in problems["problem_class"]] == [
+ c.short_name for c in loaded["problem_class"]
+ ]
+
+
+def test_hash_to_json(tmp_path):
+ expedition = _make_simple_expedition(num_waypoints=2)
+ simulator = ProblemSimulator(expedition, str(tmp_path))
+
+ any_problem = GENERAL_PROBLEMS[0]
+
+ hash_path = tmp_path / "problem_hash.json"
+ simulator._hash_to_json(
+ any_problem, "deadbeef", None, hash_path
+ ) # "deadbeef" as sub for hex in test
+
+ assert hash_path.exists()
+ with open(hash_path, encoding="utf-8") as f:
+ obj = json.load(f)
+ assert obj["problem_hash"] == "deadbeef"
+ assert "message" in obj and "delay_duration_hours" in obj
+ assert obj["resolved"] is False
+
+
+def test_has_contingency_pre_departure(tmp_path):
+ expedition = _make_simple_expedition(num_waypoints=2)
+ simulator = ProblemSimulator(expedition, str(tmp_path))
+
+    pre_departure_problem = next(
+        (gp for gp in GENERAL_PROBLEMS if getattr(gp, "pre_departure", False)), None
+    )
+ assert pre_departure_problem is not None, (
+ "Need at least one pre-departure problem class in the general problem registry"
+ )
+
+ # _has_contingency should return False for pre-departure (waypoint = None)
+ assert simulator._has_contingency(pre_departure_problem, None) is False
+
+
+def test_select_problems_prob_levels(tmp_path):
+ expedition = _make_simple_expedition(num_waypoints=3)
+ instruments_in_expedition = expedition.get_instruments()
+ simulator = ProblemSimulator(expedition, str(tmp_path))
+
+ for level in range(3): # prob levels 0, 1, 2
+ problems = simulator.select_problems(
+ instruments_in_expedition, prob_level=level
+ )
+ if level == 0:
+ assert problems is None
+ else:
+ assert isinstance(problems, dict)
+ assert len(problems["problem_class"]) > 0
+ assert len(problems["waypoint_i"]) == len(problems["problem_class"])
+ if level == 1:
+ assert len(problems["problem_class"]) <= 2
+
+
+def test_prob_level_two_more_problems(tmp_path):
+ prob_level = 2
+
+ short_expedition = _make_simple_expedition(
+ num_waypoints=2
+ ) # short in terms of number of waypoints
+ instruments_in_short_expedition = short_expedition.get_instruments()
+ simulator_short = ProblemSimulator(short_expedition, str(tmp_path))
+
+ long_expedition = _make_simple_expedition(num_waypoints=12)
+ instruments_in_long_expedition = long_expedition.get_instruments()
+ simulator_long = ProblemSimulator(long_expedition, str(tmp_path))
+
+ problems_short = simulator_short.select_problems(
+ instruments_in_short_expedition, prob_level=prob_level
+ )
+ problems_long = simulator_long.select_problems(
+ instruments_in_long_expedition, prob_level=prob_level
+ )
+
+ assert len(problems_long["problem_class"]) >= len(
+ problems_short["problem_class"]
+ ), "Longer expedition should have more problems than shorter one at prob_level=2"
+
+
+def test_unique_waypoint_assignment(tmp_path):
+ expedition = _make_simple_expedition(num_waypoints=12)
+ instruments_in_expedition = expedition.get_instruments()
+ simulator = ProblemSimulator(expedition, str(tmp_path))
+
+ problems = simulator.select_problems(instruments_in_expedition, prob_level=2)
+ waypoint_indices = problems["waypoint_i"]
+
+ # filter None (pre-departure) and check uniqueness of waypoint indices
+ non_none_indices = [i for i in waypoint_indices if i is not None]
+ assert len(non_none_indices) == len(set(non_none_indices)), (
+ "Each problem should be assigned a unique waypoint index (excluding pre-departure problems)"
+ )
+
+
+def test_has_contingency_during_expedition(tmp_path):
+ # expedition with long distance between waypoints
+ long_wp_expedition = _make_simple_expedition(num_waypoints=2, distance_scale=3.0)
+ long_simulator = ProblemSimulator(long_wp_expedition, str(tmp_path))
+ # short distance
+ short_wp_expedition = _make_simple_expedition(num_waypoints=2, distance_scale=0.01)
+ short_simulator = ProblemSimulator(short_wp_expedition, str(tmp_path))
+
+ # a during-expedition general problem
+    problem_cls = next(
+        (c for c in GENERAL_PROBLEMS if not getattr(c, "pre_departure", False)), None
+    )
+
+ assert problem_cls is not None, (
+ "Need at least one non-pre-departure problem class in the general problem registry"
+ )
+
+ # short distance expedition should have contingency, long distance should not (given time between waypoints and ship speed is constant)
+ assert short_simulator._has_contingency(problem_cls, problem_waypoint_i=0) is True
+ assert long_simulator._has_contingency(problem_cls, problem_waypoint_i=0) is False
+
+
+def test_post_expedition_report(tmp_path):
+ expedition = _make_simple_expedition(
+ num_waypoints=12
+ ) # longer expedition to increase likelihood of multiple problems at prob_level=2
+ instruments_in_expedition = expedition.get_instruments()
+
+ simulator = ProblemSimulator(expedition, str(tmp_path))
+ problems = simulator.select_problems(instruments_in_expedition, prob_level=2)
+
+ report_path = tmp_path / REPORT
+ simulator.post_expedition_report(problems, report_path)
+
+ assert report_path.exists()
+ with open(report_path, encoding="utf-8") as f:
+ content = f.read()
+
+ assert content.count("Problem:") == len(problems["problem_class"]), (
+ "Number of reported problems should match number of selected problems."
+ )
+ assert content.count("Delay caused:") == len(problems["problem_class"]), (
+ "Number of reported delay durations should match number of selected problems."
+ )
+ for problem in problems["problem_class"]:
+ assert problem.message in content, (
+ "Problem messages in report should match those of selected problems."
+ )
diff --git a/tests/test_checkpoint.py b/tests/test_checkpoint.py
new file mode 100644
index 000000000..f84693c96
--- /dev/null
+++ b/tests/test_checkpoint.py
@@ -0,0 +1,128 @@
+import json
+from datetime import datetime
+from pathlib import Path
+
+import pytest
+
+from virtualship.models.checkpoint import Checkpoint
+from virtualship.models.expedition import Expedition, Schedule, Waypoint
+from virtualship.models.location import Location
+from virtualship.utils import get_example_expedition
+
+
+@pytest.fixture
+def expedition(tmp_file):
+ with open(tmp_file, "w") as file:
+ file.write(get_example_expedition())
+ return Expedition.from_yaml(tmp_file)
+
+
+def make_dummy_checkpoint(failed_waypoint_i=None):
+ wp1 = Waypoint(
+ location=Location(latitude=0.0, longitude=0.0),
+ time=datetime(2024, 2, 1, 10, 0, 0),
+ instrument=[],
+ )
+ wp2 = Waypoint(
+ location=Location(latitude=1.0, longitude=1.0),
+ time=datetime(2024, 2, 1, 12, 0, 0),
+ instrument=[],
+ )
+
+ schedule = Schedule(waypoints=[wp1, wp2])
+ return Checkpoint(past_schedule=schedule, failed_waypoint_i=failed_waypoint_i)
+
+
+def test_to_and_from_yaml(tmp_path):
+ cp = make_dummy_checkpoint()
+ file_path = tmp_path / "checkpoint.yaml"
+ cp.to_yaml(file_path)
+ loaded = Checkpoint.from_yaml(file_path)
+
+ assert isinstance(loaded, Checkpoint)
+ assert loaded.past_schedule.waypoints[0].time == cp.past_schedule.waypoints[0].time
+
+
+def test_verify_no_failed_waypoint(expedition, tmp_path):
+    cp = make_dummy_checkpoint(failed_waypoint_i=None)
+    cp.verify(expedition, tmp_path)  # should not raise errors
+
+
+def test_verify_past_waypoints_changed(expedition, tmp_path):
+    cp = make_dummy_checkpoint(failed_waypoint_i=1)
+
+    # change past waypoints
+    new_wp1 = Waypoint(
+        location=Location(latitude=0.0, longitude=0.0),
+        time=datetime(2024, 2, 1, 11, 0, 0),
+        instrument=None,
+    )
+    new_wp2 = Waypoint(
+        location=Location(latitude=1.0, longitude=1.0),
+        time=datetime(2024, 2, 1, 12, 0, 0),
+        instrument=None,
+    )
+    new_schedule = Schedule(waypoints=[new_wp1, new_wp2])
+    expedition.schedule = new_schedule
+
+    with pytest.raises(Exception) as excinfo:
+        cp.verify(expedition, tmp_path)
+ assert "Past waypoints in schedule have been changed" in str(excinfo.value)
+
+
+@pytest.mark.parametrize(
+ "delay_duration_hours, should_resolve",
+ [
+ (1.0, True), # problem resolved
+ (5.0, False), # problem unresolved
+ ],
+)
+def test_verify_problem_resolution(
+ tmp_path,
+ expedition,
+ delay_duration_hours,
+ should_resolve,
+):
+ wp1 = Waypoint(
+ location=Location(latitude=0.0, longitude=0.0),
+ time=datetime(2024, 2, 1, 10, 0, 0),
+ instrument=[],
+ )
+ wp2 = Waypoint(
+ location=Location(latitude=1.0, longitude=1.0),
+ time=datetime(2024, 2, 1, 12, 0, 0),
+ instrument=[],
+ )
+ past_schedule = Schedule(waypoints=[wp1, wp2])
+ cp = Checkpoint(past_schedule=past_schedule, failed_waypoint_i=1)
+
+ # new schedule
+ new_wp1 = wp1
+ new_wp2 = Waypoint(
+ location=Location(latitude=1.0, longitude=1.0),
+ time=datetime(2024, 2, 1, 20, 0, 0),
+ instrument=[],
+ )
+ new_schedule = Schedule(waypoints=[new_wp1, new_wp2])
+ expedition.schedule = new_schedule
+
+ # unresolved problem file
+ problem = {
+ "resolved": False,
+ "delay_duration_hours": delay_duration_hours,
+ "problem_waypoint_i": 0,
+ }
+ problem_file = tmp_path / "problem_1.json"
+ with open(problem_file, "w") as f:
+ json.dump(problem, f)
+
+ # check if resolution is detected correctly
+ if should_resolve:
+ cp.verify(expedition, tmp_path)
+ with open(problem_file) as f:
+ updated = json.load(f)
+ assert updated["resolved"] is True
+ else:
+ with pytest.raises(Exception) as excinfo:
+ cp.verify(expedition, tmp_path)
+ assert "has not been resolved in the schedule" in str(excinfo.value)
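The problem-file handling that `test_verify_problem_resolution` exercises can be sketched standalone. Field names (`resolved`, `delay_duration_hours`, `problem_waypoint_i`) mirror the dict written in that test; the real schema is defined by virtualship and may differ, so treat this as a minimal sketch of the round trip, not the library's actual behavior:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

# Sketch of the problem-file round trip the test above relies on.
# Field names mirror the dict in test_verify_problem_resolution; the
# real schema lives in virtualship and may differ.
with TemporaryDirectory() as tmp:
    problem_file = Path(tmp) / "problem_1.json"
    problem_file.write_text(json.dumps({
        "resolved": False,
        "delay_duration_hours": 1.0,
        "problem_waypoint_i": 0,
    }))

    # Simulate what Checkpoint.verify is expected to do on success:
    # flip "resolved" to True and write the file back in place.
    data = json.loads(problem_file.read_text())
    data["resolved"] = True
    problem_file.write_text(json.dumps(data))

    final = json.loads(problem_file.read_text())

print(final["resolved"])  # True
```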
diff --git a/tests/test_utils.py b/tests/test_utils.py
index deca66d5c..b3ad75043 100644
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -7,8 +7,13 @@
from parcels import FieldSet
import virtualship.utils
+from virtualship.instruments.types import InstrumentType
from virtualship.models.expedition import Expedition
+from virtualship.models.location import Location
from virtualship.utils import (
+ PROJECTION,
+ _calc_sail_time,
+ _calc_wp_stationkeeping_time,
_find_nc_file_with_variable,
_get_bathy_data,
_select_product_id,
@@ -236,3 +241,100 @@ def test_data_dir_and_filename_compliance():
assert 'elif all("P1M" in s for s in all_files):' in utils_code, (
"Expected check for 'P1M' in all_files not found in _find_files_in_timerange. This indicates a drift between docs and implementation."
)
+
+
+def test_calc_sail_time():
+    projection = PROJECTION
+    LATITUDE = 0.0  # constant at equator
+
+ location1 = Location(latitude=LATITUDE, longitude=0.0)
+ location2 = Location(latitude=LATITUDE, longitude=1.0)
+ ship_speed_knots = 10.0
+
+ sail_time, _, ship_speed_ms = _calc_sail_time(
+ location1, location2, ship_speed_knots, projection
+ )
+
+ # should be approximately 21638 seconds (6 hours, 0 minutes, 38 seconds)
+ assert abs(sail_time.total_seconds() - 21638) < 10 # small tolerance
+
+ calculated_distance_m = ship_speed_ms * sail_time.total_seconds()
+    assert (
+        abs(calculated_distance_m - 111319) < 100
+    )  # 1 degree of longitude at the equator ≈ 111319 m; allow small tolerance
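The expected 21638-second value above follows from simple kinematics. A back-of-envelope check, independent of virtualship's projection code and assuming the standard equatorial degree length (~111319 m, WGS84) and the definition 1 knot = 1852 m/h:

```python
from datetime import timedelta

# Back-of-envelope check of the expected value in test_calc_sail_time.
# Assumes 1 degree of longitude at the equator ~= 111319 m and
# 1 knot = 1852 m/h; this is not virtualship's projection code.
distance_m = 111_319.0
ship_speed_knots = 10.0
ship_speed_ms = ship_speed_knots * 1852.0 / 3600.0  # ~5.144 m/s

sail_time = timedelta(seconds=distance_m / ship_speed_ms)
print(sail_time)  # roughly 6:00:38.7, i.e. ~21639 s, within the 10 s tolerance
```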
+
+
+def test_calc_wp_stationkeeping_time(expedition, monkeypatch):
+ """Test _calc_wp_stationkeeping_time for correct stationkeeping time calculation."""
+
+ class DummyInstrumentsConfig:
+ def __init__(self, ctd, ctd_bgc, argo, xbt):
+ self.ctd = ctd
+ self.ctd_bgc = ctd_bgc
+ self.argo = argo
+ self.xbt = xbt
+
+ class CTDConfig:
+ stationkeeping_time = datetime.timedelta(minutes=50)
+
+ class CTD_BGCConfig:
+ stationkeeping_time = datetime.timedelta(minutes=50)
+
+ class ArgoFloatConfig:
+ stationkeeping_time = datetime.timedelta(minutes=20)
+
+ class XBTConfig: # has no stationkeeping time
+ deceleration_coefficient = 0.1
+
+    monkeypatch.setattr(
+        "virtualship.utils.INSTRUMENT_CONFIG_MAP",
+        {
+            InstrumentType.CTD: CTDConfig,
+            InstrumentType.CTD_BGC: CTD_BGCConfig,
+            InstrumentType.ARGO_FLOAT: ArgoFloatConfig,
+            InstrumentType.XBT: XBTConfig,
+        },
+    )
+
+ # Create a dummy expedition with instruments_config containing the dummy configs
+ instruments_config = DummyInstrumentsConfig(
+ ctd=CTDConfig(),
+ ctd_bgc=CTD_BGCConfig(),
+ argo=ArgoFloatConfig(),
+ xbt=XBTConfig(),
+ )
+ expedition.instruments_config = (
+ instruments_config # overwrite instruments_config with test dummy
+ )
+
+ # instruments at a given waypoint
+ wp_instrument_types_all = [
+ InstrumentType.CTD,
+ InstrumentType.CTD_BGC,
+ InstrumentType.ARGO_FLOAT,
+ InstrumentType.XBT,
+ ]
+
+ # all dummy instruments
+ stationkeeping_time_all = _calc_wp_stationkeeping_time(
+ wp_instrument_types_all, expedition
+ )
+    # CTD and CTD_BGC share one stationkeeping slot, so their 50 min is
+    # counted once; XBT contributes nothing.
+    assert (
+        stationkeeping_time_all
+        == CTDConfig.stationkeeping_time + ArgoFloatConfig.stationkeeping_time
+    )
+
+ # xbt only (no stationkeeping time)
+ wp_instrument_types_xbt = [InstrumentType.XBT]
+ stationkeeping_time_xbt = _calc_wp_stationkeeping_time(
+ wp_instrument_types_xbt, expedition
+ )
+ assert stationkeeping_time_xbt == datetime.timedelta(0), (
+ "XBT should have zero stationkeeping time"
+ )