This file records changes to the codebase, grouped by version release. Unreleased changes are generally only present during development (relevant parts of the changelog can be written and saved in that section before a version number has been assigned).
- Updated internal functions for Python 3.14 and Pandas 3.0.0
- Changed initialization method to increase prioritization for characteristics with zero value. If a characteristic has a zero value, this will now automatically force included compartments to have a zero value. In practice, this means that it is less likely that frameworks will cause negative initial popsizes, so some frameworks that previously did not work will now initialize correctly.
- Added `at.TimeSeries.clear()` to reset a time series while preserving the units. This function can be useful when updating the value of databook quantities programmatically.
- Added a separate numerical tolerance used for initialization (`at.model.model_settings['initialization_tolerance']`) which permits more approximate initializations while still maintaining the same numerical tolerance for the rest of the integration.
- Added another logging level (`at.VERBOSE`) which enables more targeted additional output
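A custom log level like `at.VERBOSE` typically sits between the standard `DEBUG` and `INFO` levels. A minimal sketch using only the standard library `logging` module (the numeric value 15 and the logger name are assumptions for illustration, not Atomica's actual definition):

```python
import logging

# Assumed value between DEBUG (10) and INFO (20); Atomica's actual
# at.VERBOSE value may differ.
VERBOSE = 15
logging.addLevelName(VERBOSE, "VERBOSE")

logger = logging.getLogger("atomica_demo")
logger.setLevel(VERBOSE)

# At this level, VERBOSE and INFO messages are emitted but DEBUG is not
assert logger.isEnabledFor(VERBOSE)
assert logger.isEnabledFor(logging.INFO)
assert not logger.isEnabledFor(logging.DEBUG)
```

This is why an intermediate level enables "more targeted additional output": it is chattier than `INFO` without enabling the full firehose of `DEBUG` messages.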
Backwards-compatibility notes
- Some initializations might show numerical (e.g., `1e-10`) differences in their values due to the new algorithm. In a small number of cases (depending on the framework), it is possible that the updated initialization method could result in a slightly different initialization.
- The default initialization tolerance is now `1e-3` instead of `1e-6`, so some models that previously raised a `BadInitialization` error will now run without error. Users should note that if it is necessary to guarantee an exact initialization, this tolerance should be reduced.
- Fix bug in creating databook if framework 'Databook pages' sheet contains multiple code names mapping to the same full name.
- Switched from `setup.py` to `pyproject.toml`-based installation.
- Framework definitions of compartments, characteristics, and parameters support a new column 'databook default all'. If set to 'y', then when a databook is produced, the data entry table will only contain a record for 'All' instead of having population-specific rows. Further manual editing of the tables is supported as normal.
- Automatic calibration can now selectively weight parts of the time series to select or prioritise a subset of time points.
- Added YAML-based calibration support to Atomica, covered in Tutorial 7 in the online documentation.
- `ProjectSettings` now computes the simulation time vector in a more robust way to reduce edge cases where the reported `sim_dt` doesn't match the input.
- `ParameterSet.load_calibration()` now clears any existing initialization if the calibration being loaded does not contain an initialization. Previously, the absence of an 'initialization' sheet in the calibration would be treated as not making any change to the initialization. This could cause calibrations to become mixed if a calibration without an initialization was loaded after a calibration with an initialization. Now, a missing initialization sheet is treated as meaning 'no initialization' and any existing initialization will be cleared when the calibration is loaded.
- `Project.load_databook()` will no longer populate `Project.databook` when a `ProjectData` instance is supplied rather than a spreadsheet. The intention of `Project.databook`, as opposed to `Project.data.to_spreadsheet()`, is that the original databook may contain comments or other content that is not preserved when the databook is loaded into a `ProjectData` instance. Therefore, `Project.databook` serves as a record of the original inputs. However, in previous versions of Atomica, if a `ProjectData` instance was provided rather than a spreadsheet, `ProjectData.to_spreadsheet()` would be used to populate `Project.databook`. For large databooks, this can be computationally expensive and particularly affects the use case of passing in a preloaded databook to improve performance. Since converting the `ProjectData` to a spreadsheet upon loading offers no functional difference to creating the spreadsheet from `Project.data` when required, Atomica no longer performs this conversion upfront.
- `PlotData.time_aggregate()` now only interpolates if necessary, using simulated time points as much as possible.
Backwards-compatibility notes
- In some edge cases, the simulation time points in the output may be different. In those cases, the spacing of simulation time points in the model output would not previously have matched the model input, although the correct time step was used to calculate parameter values. In these cases, there may be an extra time point in the model output. Re-running the model should produce results that are close to the original results.
- If accessing `Project.databook`, in some cases this may now be `None` rather than an `sc.Spreadsheet()`. If that occurs, `Project.data.to_spreadsheet()` should be used to produce an equivalent spreadsheet.
- Time aggregation of `PlotData` may produce slightly different results due to the more accurate selection of time points in this version.
- If aggregating characteristics with a denominator, weighted aggregations use the denominator of the quantity rather than the total population size to perform the weighting. This can be useful for quantities that are proportions of things other than the population size, e.g., the proportion of active infections that are diagnosed.
- If aggregating characteristics with a denominator, weighted aggregation will be used by default (rather than 'average')
- Output aggregation of durations now uses 'sum' instead of 'average' by default
- If no aggregation method is specified, the aggregation method will now be selected separately for each output
- Population aggregation of non-number units now uses 'weighted' by default rather than 'average'
Backwards-compatibility notes
- Results obtained when aggregating characteristics with a denominator using the 'weighted' method will change. To reproduce the previous results, it is necessary to perform the population size weighting manually by removing the aggregation, then extracting the population sizes, and using those to aggregate the outputs. This is considered to be a rare use case because the updated result is a more useful weighting compared to the previous result.
- Results obtained when aggregating characteristics with a denominator without explicitly specifying a method will change, because 'weighted' aggregation is now used by default.
- Support entering `'total'` as the population name in auto-calibration measurables to calibrate aggregated values across populations in the model to aggregate values entered in the databook under a 'Total' population
- Enable automated calibration of transfers and update the documentation to cover this feature
- Added an option to save initial compartment sizes inside a `ParameterSet`. Importantly, this saved representation allows setting the initial subcompartment sizes for a `TimedCompartment`. It therefore offers the possibility of initializing the model in a steady state computed from a previous simulation run, which would not be possible to initialize conventionally because standard initialization uniformly distributes people into the subcompartments of a timed compartment.
- Added `ParameterSet.make_constant` to facilitate constructing copies of `ParameterSet` instances that are constant over time.
- Updated `at.Project()` to explicitly take in the settings arguments for `sim_start`, `sim_end`, and `sim_dt`. These are now applied after databooks are loaded, fixing a bug where these arguments would get overwritten when loading the databook.
- `at.calibrate` now supports passing any additional arguments into the optimization function (e.g., `sc.asd`), allowing additional options for customizing the optimization.
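Forwarding extra keyword arguments to a pluggable optimization function is a standard `**kwargs` pattern. A hedged sketch of the idea in plain Python (the function names and options below are illustrative stand-ins, not Atomica's or Sciris's actual signatures):

```python
# Illustrative stand-in for an optimizer such as sc.asd; the option names
# (maxiters, stepsize) are assumptions for this sketch.
def my_optimizer(objective, x0, maxiters=100, stepsize=0.1):
    # A real optimizer would iterate; here we just record what it received
    return {"x": x0, "maxiters": maxiters, "stepsize": stepsize}

def calibrate(objective, x0, optim_func=my_optimizer, **kwargs):
    # Any keyword arguments not consumed by calibrate() are passed
    # straight through to the optimization function
    return optim_func(objective, x0, **kwargs)

result = calibrate(lambda x: x ** 2, 1.0, maxiters=500)
assert result["maxiters"] == 500
```

The caller can thus tune optimizer-specific options without the wrapper needing to enumerate them.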
- Updated various Pandas operations to improve compatibility with Pandas 2.2.0
- Replaced 'probability' units with 'rate' units in many of the library example frameworks
- For many of those parameters, also removed the maximum upper limit value of 1 as such parameters should not generally have this limit
- Numerical indices are no longer inadvertently added to framework dataframes (compartments, characteristics, parameters, and interactions).
Backwards-compatibility notes
- Removing the upper limit of 1 on parameters that were in 'probability' units may change the output of models using the library example frameworks. The updated results should be considered more realistic because the 'probability' parameters were actually behaving as rates, and therefore should not have had an upper limit imposed in the first place.
- Added ability to provide a row in the databook for 'all' (or 'All') populations, as shorthand for entering the same value in every population. This option serves as a fallback value if population-specific values have also been provided.
- Added optional `n_cols` argument to `at.plot_series` to allow series plots to appear in a single figure using subplots
- `ProjectData.get_ts()` now correctly handles retrieving time series data from interactions and transfers with underscores in their name
- `ProjectData` now validates that transfer and interaction names are unique
- Improved handling of Excel cell references if population names and framework variable full names overlap
- Added ability to place '#ignore' directives in places other than the first column of a spreadsheet, to selectively ignore parts of lines. Added documentation to cover this functionality.
- Fixed a bug where cells that were documented as being ignored actually populated content
- Warning messages now print the file/line they were generated from
- Added extra documentation on parallels with explicit ODE compartment models
- Improve error message if calibration file contains duplicate entries
- Transfer parameters no longer raise an error if specified in 'Duration' units
- Transfer parameters in rate units are no longer limited to a maximum value of 1
Backwards-compatibility notes
- Transfers in rate units with a databook value greater than 1 were internally limited to a value of 1 previously. Models with such transfers will produce different results. This is expected to be uncommon, as most models have transfer parameters with values less than 1.
- Some numerical errors in `model.py` (particularly relating to errors/warnings in parameter functions) are now caught and printed with more informative error messages.
- Change the table parsing routine again to resolve further edge cases, restore removal of leading and trailing spaces from cells in the framework, and improve performance. The original `None` behaviour has consequently been restored (undoing the change in 1.26.2), although it is still recommended that `pandas.isna()` is used instead of checking for `None`.
- Switch to `sc.gitinfo` from Sciris. The git commit hash recorded in Atomica objects will now only contain the first 7 characters. Code that uses `at.fast_gitinfo` should use `sc.gitinfo` instead.
- Improve robustness of the table parsing routine used by `at.Framework`. In some cases, the data type of empty cells is now `NaN` rather than `None`. This affects any code that either checks for the contents being `None` or which relies on `None` being treated as `False` in conditional statements, e.g., `if <contents>:`. Affected code should instead use `pandas.isna()`, which handles `None`, `NaN`, and `pd.NA`.
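The reason `if <contents>:` breaks under this change is that `NaN` is truthy, so an empty cell stored as `NaN` looks non-empty. A minimal standard-library sketch of the pitfall and the fix (real code should use `pandas.isna()`, which additionally handles `pd.NA`; the `isna` helper here is illustrative):

```python
import math

nan = float("nan")

# NaN is truthy, so `if <contents>:` treats an empty (NaN) cell as non-empty
assert bool(nan) is True
# and NaN is not None, so `contents is None` misses it as well
assert nan is not None

# An isna-style check covering both None and float NaN (illustrative;
# pandas.isna() also handles pd.NA)
def isna(value):
    return value is None or (isinstance(value, float) and math.isnan(value))

assert isna(None) and isna(nan)
assert not isna("comp_name")
```

This is why the changelog recommends migrating all empty-cell checks to `pandas.isna()` rather than comparing against `None`.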
- Improve numerical robustness of `SpendingPackageAdjustment` under certain edge cases
- Fix bug in cumulative plot labelling that could result in the axis label containing more than one 'Cumulative' prefix
- Allow initializing compartments and characteristics with a 0 value by setting a default value without needing to add the quantity to the databook. This simplifies initialization of models that have large numbers of compartments that should always be initialized with a 0 value, without needing to add many databook entries or extra initialization characteristics.
- Allow framework variables with single characters (previously, all code names had to be at least two characters long)
- Improve handling of automatic number of workers if a number is provided instead of a collection of inputs
- Add `optim_args` argument to `at.optimize`, which allows arguments to be passed to optimization functions such as ASD
- The "Databook pages" sheet in the framework is now optional. If a compartment, characteristic, or parameter has a "Databook page" that does not appear in the "Databook pages" sheet (or if the "Databook pages" sheet is missing entirely), the specified page will be created with the specified name as both the code name and full name. As the "Databook pages" sheet is created and populated with these names during framework validation, downstream code expecting the sheet to exist should not require any changes.
- Fix array size error for junctions belonging to a duration group (some otherwise valid frameworks previously raised an error when running the model)
- Fix missing cells/NaNs in equivalent spending caused by numerical precision errors
- Unpin `matplotlib` version in `setup.py`
- Improve exported results link labelling for transfers
- Implemented variable total spend in `SpendingPackageAdjustment`
- Optimized performance for `SpendingPackageAdjustment` if proportions are fixed by adding a `fix_props` flag that skips adding `Adjustables` for the proportions
- Improved framework validation robustness when dataframe cells contain NA-like values (`np.nan` or `pd.NA`) instead of just `None`
- Program number eligible defaults to 0 if target compartments are missing (rather than raising a key error)
- `ProgramSet` spreadsheet constructor is now a class method to allow inheritance
- Fixed bug where program overwrites that impact a transition parameter via at least one intermediate parameter did not impact outcomes
- Improved `SpendingPackageAdjustment` performance, although varying total spend is not yet supported
- Fix bug in program fractional coverage where not all programs were constrained to a peak coverage of 1
- Update calls to `sc.asd()` to be compatible with Sciris v1.2.3
- Update installation instructions to use `pip` rather than `setup.py` directly
- Improve handling of unspecified timescales in plotting routines
- Unfreeze `pandas` dependency because upstream has fixed some regressions that affected `atomica`
- Replace deprecated `sc.SItickformatter` usage
- Fix bug in program coverage overwrite timestep scaling. Coverage overwrites must always be provided in dimensionless units
- Improved framework validation (informative errors raised in some additional cases)
- Calibrations can be loaded for mismatched frameworks/databooks - missing or extra entries will be skipped without raising an error
- Implemented `Population.__contains__` to easily check whether variables are defined in a population
- Improved error message when plotting if requesting an output that is not defined in all populations
- Add `ParameterSet.y_factors` as a property attribute to quickly access and set y-factors.
- Fix bug in `ProgramSet.remove_program()`: this function would previously raise an error
- Added methods `ParameterSet.calibration_spreadsheet()`, `ParameterSet.save_calibration()`, and `ParameterSet.load_calibration()` to allow saving calibration scale factors to spreadsheets that can be edited externally.
- Fix plotting routines that were previously checking for missing timescales by checking for `None` values, and were thus missing `np.nan` values. This issue was introduced around version 1.24.1, when framework validation began guaranteeing that the parameter timescale is a numeric type, causing missing timescales to be populated with `nan` rather than `None`.
- Add library framework for malaria
Backwards-compatibility notes
- Any code checking for missing timescales by checking for a `None` value should instead use `pd.isna()` to check for `nan` or `None` values
- Fix a bug in validation that ensures parameters in 'proportion' units cannot have a timescale. Previously, frameworks with this error would incorrectly pass validation
- Added validation of plots sheet in framework file
- Allow validating a framework multiple times
- Fix an edge case with timed transitions and split transition matrices
Backwards-compatibility notes
- In rare cases, if an existing framework file contains an error that was not previously detected, it may now produce an error when loaded. Such errors indicate problems in the framework that should be debugged as usual.
- Fix bug where program outcomes were not correctly applied if overwriting a function parameter that does not impact any transitions
- Added `at.stop_logging()` and an optional `reset` argument to `at.start_logging()`
- `at.ProgramSet` now stores all compartments with a `non_targetable` flag in `at.ProgramSet.comps` so that it can read/write workbooks for models that use coverage scenarios only.
- Exporting results now includes some time-aggregated quantities (summed over the year, rather than providing annualized values as of Jan 1)
- Added log level commands to `calibrate()` and `reconcile()` so that they respect `at.Quiet()` in the same way as `optimize()` already does.
- Update documentation to support Sphinx 3
- Added `__all__` variables to modules so that module variables are no longer accidentally imported into the top-level Atomica module.
- Renamed the `defaults.py` module to `demos.py`
Backwards-compatibility notes
- Code that relied on module variables being imported into Atomica may now fail. For example, `atomica.version` was accidentally set to the version string rather than referencing the `atomica.version` module. Such code should be updated by finding the relevant module variable and referencing that instead - for example, replacing `atomica.version` with `atomica.version.version`.
- Plotting settings are no longer imported into `atomica.settings` by accident - instead, they should be accessed via `atomica.plotting.settings` only. The same usage pattern applies to settings in other modules like calibration and cascades.
- `Project.calibrate()` no longer saves the new calibration to the project by default
Backwards-compatibility notes
- Add the explicit argument `save_to_project=True` to `calibrate()` to match previous behaviour
- Drop version constraint for `openpyxl` to support both version 2.5 and >2.5
- Add equality operator to `at.TimeSeries`
- Support passing in arrays to the `at.TimeSeries` constructor
- Refactored `ProgramSet.save(filename, folder)` to `ProgramSet.save(fname)` so that it now matches `ProjectData` and `ProjectFramework`
- `at.atomica_path` returns a `Path` object. The `trailingsep` argument has been removed as it is not relevant when returning a path. Legacy code may experience a `TypeError: unsupported operand type(s) for +: 'WindowsPath' and 'str'` or similar if the output of `at.atomica_path` is concatenated by addition. In general, this can be resolved by replacing `+` with `/`, e.g. `at.LIBRARY_PATH+'foo.xlsx'` becomes `at.LIBRARY_PATH/'foo.xlsx'`.
- `at.parent_dir` returns a `Path` object
- `Project.load_databook` now supports passing in a `ProjectData` instance (as does the `Project` constructor)
- Remove unused `num_interpops` argument from `Project.create_databook()`
- Add support for '>' in transition matrix to represent junction residual
- Renamed `calibration.py:perform_autofit()` to `calibration.py:calibrate()`
- Added debug-level log messages, which can be viewed by setting `atomica.logger.setLevel(1)`
- Added changelog