From f9c290a1acf0121e07fdef497774562041927635 Mon Sep 17 00:00:00 2001 From: jlnav Date: Tue, 14 Apr 2026 13:15:35 -0500 Subject: [PATCH 01/34] remove some mentions of pyyaml and tomli. bump the version in the spack install example. make overview-usecases more personally straightforward. --- docs/advanced_installation.rst | 19 ++---- docs/introduction_latex.rst | 2 - docs/overview_usecases.rst | 102 ++++++++++----------------------- 3 files changed, 37 insertions(+), 86 deletions(-) diff --git a/docs/advanced_installation.rst b/docs/advanced_installation.rst index 060435b564..8151cb31fa 100644 --- a/docs/advanced_installation.rst +++ b/docs/advanced_installation.rst @@ -10,8 +10,6 @@ automatically installed alongside libEnsemble: * NumPy_ ``>= 1.21`` * psutil_ ``>= 5.9.4`` * `pydantic`_ ``>= 2`` -* pyyaml_ ``>= v6.0`` -* tomli_ ``>= 1.2.1`` * gest-api_ ``>= 0.1,<0.2`` We recommend installing in a virtual environment from ``uv``, ``conda`` or another source. @@ -108,12 +106,12 @@ Further recommendations for selected HPC systems are given in the The above command will install the latest release of libEnsemble with the required dependencies only. Other optional dependencies can be specified through variants. The following - line installs libEnsemble version 0.7.2 with some common variants + line installs libEnsemble version 1.5.0 with some common variants (e.g., using :doc:`APOSMM<../examples/aposmm>`): .. code-block:: bash - spack install py-libensemble @0.7.2 +mpi +scipy +mpmath +petsc4py +nlopt + spack install py-libensemble @1.5.0 +mpi +scipy +mpmath +petsc4py +nlopt The list of variants can be found by running:: @@ -121,7 +119,7 @@ Further recommendations for selected HPC systems are given in the On some platforms you may wish to run libEnsemble without ``mpi4py``, using a serial PETSc build. 
This is often preferable if running on - the launch nodes of a three-tier system (e.g., Summit):: + the launch nodes of a three-tier system:: spack install py-libensemble +scipy +mpmath +petsc4py ^py-petsc4py~mpi ^petsc~mpi~hdf5~hypre~superlu-dist @@ -171,13 +169,10 @@ Further recommendations for selected HPC systems are given in the ``Python`` and the packages distributed with it (e.g., ``numpy``), and will often include the system MPI library. -Optional Dependencies for Additional Features ---------------------------------------------- +Globus Compute +-------------- -The following packages may be installed separately to enable additional features: - -* pyyaml_ and tomli_ - Parameterize libEnsemble via yaml or toml -* `Globus Compute`_ - Submit simulation or generator function instances to remote Globus Compute endpoints +`Globus Compute`_ may be installed optionally to submit simulation function instances to remote Globus Compute endpoints. .. _conda-forge: https://conda-forge.org/ .. _Conda: https://docs.conda.io/en/latest/ @@ -191,9 +186,7 @@ The following packages may be installed separately to enable additional features .. _pydantic: https://docs.pydantic.dev/1.10/ .. _PyPI: https://pypi.org .. _Python: http://www.python.org -.. _pyyaml: https://pyyaml.org/ .. _Spack: https://spack.readthedocs.io/en/latest .. _spack_libe: https://github.com/Libensemble/spack_libe -.. _tomli: https://pypi.org/project/tomli/ .. _tqdm: https://tqdm.github.io/ .. _uv: https://docs.astral.sh/uv/ diff --git a/docs/introduction_latex.rst b/docs/introduction_latex.rst index e7750bac5f..512282dbfe 100644 --- a/docs/introduction_latex.rst +++ b/docs/introduction_latex.rst @@ -39,7 +39,6 @@ .. _pytest-timeout: https://pypi.org/project/pytest-timeout/ .. _pytest: https://pypi.org/project/pytest/ .. _Python: http://www.python.org -.. _pyyaml: https://pyyaml.org/ .. _Quickstart: https://libensemble.readthedocs.io/en/main/introduction.html .. 
_ReadtheDocs: http://libensemble.readthedocs.org/ .. _SciPy: http://www.scipy.org @@ -51,7 +50,6 @@ .. _SWIG: http://swig.org/ .. _tarball: https://github.com/Libensemble/libensemble/releases/latest .. _Tasmanian: https://github.com/ORNL/Tasmanian -.. _tomli: https://pypi.org/project/tomli/ .. _tqdm: https://tqdm.github.io/ .. _user guide: https://libensemble.readthedocs.io/en/latest/programming_libE.html .. _VTMOP: https://github.com/Libensemble/libe-community-examples#vtmop diff --git a/docs/overview_usecases.rst b/docs/overview_usecases.rst index 5467bab3eb..b1c31885d2 100644 --- a/docs/overview_usecases.rst +++ b/docs/overview_usecases.rst @@ -5,8 +5,8 @@ Manager, Workers, Generators, and Simulators ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. begin_overview_rst_tag -libEnsemble's **manager** allocates work to **workers**, -which perform computations via **generators** and **simulators**: +libEnsemble's **manager** allocates work from **generators** to **workers**, +which perform computations via **simulators**: * :ref:`generator`: Generates inputs for the *simulator* * :ref:`simulator`: Performs an evaluation using parameters from the *generator* @@ -18,18 +18,11 @@ which perform computations via **generators** and **simulators**: | -.. figure:: images/diagram_with_persis.png - :alt: libE component diagram - :align: center - :scale: 40 - -| - An :doc:`executor` interface is available so generators and simulators can launch and monitor external applications. -libEnsemble uses a NumPy structured array called the :ref:`history array` -to record all simulations and generated values. +All simulations and generated values are recorded in a NumPy +structured array called the :ref:`history array`. 
Allocator Function ~~~~~~~~~~~~~~~~~~ @@ -37,9 +30,6 @@ Allocator Function * :ref:`allocator`: Decides whether a simulator or generator should be invoked (and with what inputs/resources) as workers become available -The default allocator (``alloc_f``) prompts workers to run the highest-priority simulator work. -If a worker is idle and no simulator work is available, that worker is prompted to query the generator. - The default allocator is appropriate for the majority of use cases but can be customized for users interested in more advanced allocation strategies. @@ -47,58 +37,47 @@ Example Use Cases ~~~~~~~~~~~~~~~~~ .. begin_usecases_rst_tag -Below are some expected libEnsemble use cases that we support (or are working -to support): - .. dropdown:: **Click Here for Use-Cases** * A user wants to optimize a simulation calculation. The simulation may already be using parallel resources but not a large fraction of a computer. libEnsemble can coordinate concurrent evaluations of the - simulation ``sim_f`` at multiple parameter values based on candidate parameter - values produced by ``gen_f`` (possibly after each ``sim_f`` output). + simulator at multiple parameter values based on candidate parameter + values produced by the generator (possibly after each simulator output). - * A user has a ``gen_f`` that produces meshes for a - ``sim_f``. Based on the ``sim_f`` output, the ``gen_f`` can refine a mesh or + * A user has a generator that produces meshes for a + simulator. Based on the simulator output, the generator can refine a mesh or produce a new mesh. libEnsemble ensures that generated meshes can be reused by multiple simulations without requiring data movement. - * A user wants to evaluate a simulation ``sim_f`` with different sets of + * A user wants to evaluate a simulation with different sets of parameters, each drawn from a set of possible values. Some parameter values are known to cause the simulation to fail. 
libEnsemble can stop unresponsive evaluations and recover computational resources for future - evaluations. The ``gen_f`` can update the sampling strategy after discovering - regions where evaluations of ``sim_f`` fail. + evaluations. The generator can update the sampling strategy after discovering + regions where evaluations of the simulator fail. - * A user has a simulation ``sim_f`` that requires calculating multiple - expensive quantities, some of which depend on other quantities. The ``sim_f`` + * A user has a simulation that requires calculating multiple + expensive quantities, some of which depend on other quantities. The simulator can monitor intermediate quantities to stop related calculations early and preempt future calculations associated with poor parameter values. - * A user has a ``sim_f`` with multiple fidelities, where higher-fidelity - evaluations require more computational resources. A ``gen_f``/``alloc_f`` - pair decides which parameters should be evaluated and - at what fidelity level. libEnsemble coordinates these evaluations without - requiring the user to write parallel code. + * A user has a simulation with multiple fidelities, where higher-fidelity + evaluations require more computational resources. The generator and allocator + decide which parameters should be evaluated and at what fidelity level. libEnsemble + coordinates these evaluations without requiring the user to write parallel code. - * A user wishes to identify multiple local optima for a ``sim_f``. In addition, + * A user wishes to identify multiple local optima for a simulation. In addition, sensitivity analysis is desired at each identified optimum. libEnsemble can - use points from the APOSMM ``gen_f`` to identify optima. After a point is - determined to be an optimum, a different ``gen_f`` can generate the - parameter sets required for sensitivity analysis of ``sim_f``. + use points from the APOSMM generator to identify optima. 
After a point is + determined to be an optimum, a different generator can generate the + parameter sets required for sensitivity analysis of the simulation. - Combinations of these use cases are also supported. For example, libEnsemble - can be used to solve optimization problems where simulations fail - frequently. + Combinations of these use cases are also supported. Glossary ~~~~~~~~ -Here we define some terms used throughout libEnsemble's code and documentation. -Although many of these terms seem straightforward, defining them helps reduce -confusion when communicating about libEnsemble and -its capabilities. - .. dropdown:: **Click Here for Glossary** :open: @@ -107,46 +86,27 @@ its capabilities. workers and collects their output. * **Worker**: libEnsemble processes responsible for performing units of work, - which may include executing tasks or submitting external jobs. Workers run - generation and simulation routines and return results to the manager. - - * **Calling Script**: libEnsemble is typically imported, parameterized, and - initiated in a single Python file referred to as a *calling script*. ``sim_f`` - and ``gen_f`` functions are commonly configured and parameterized here. - - * **User function**: A generator, simulator, or allocation function. These - Python functions govern the libEnsemble workflow. They - must conform to the libEnsemble API for each respective user function, but otherwise can - be created or modified by the user. - libEnsemble includes many examples of each type. + which may include executing tasks or submitting external jobs. Workers typically + run simulators and return results to the manager. * **Executor**: The executor provides a simple, portable interface for - launching and managing user tasks (applications). Multiple executors are + launching and managing tasks (applications). Multiple executors are available, including the base ``Executor`` and ``MPIExecutor``. 
* **Submit**: To enqueue or indicate that one or more jobs or tasks should be launched. When using the libEnsemble Executor, a *submitted* task is either executed immediately or queued for execution. - * **Tasks**: Subprocesses or independent units of work. Workers perform - tasks as directed by the manager. Tasks may include launching external - programs for execution using the Executor. - - * **Persistent**: Typically, a worker communicates with the manager - before and after initiating a user ``gen_f`` or ``sim_f`` calculation. Persistent user - functions instead communicate directly with the manager during execution, - allowing them to maintain and update data structures efficiently. These - calculations and their assigned workers are referred to as *persistent*. + * **Tasks**: Subprocesses or independent units of work. Tasks result from + launching external programs for execution using the Executor. - * **Resource Manager**: libEnsemble includes a built-in resource manager that can detect - (or be provided with) available resources (e.g., a node list). Resources are - divided among workers using *resource sets* and can be dynamically - reassigned. + * **Resource Manager**: libEnsemble includes a resource manager that can detect + (or be provided with) available resources (e.g., a list of nodes). *Resource sets* are + divided among workers and can be dynamically reassigned. * **Resource Set**: The smallest unit of resources that can be assigned (and dynamically reassigned) to workers. By default this is the provisioned resources - divided by the number of workers. It can also be set - explicitly using the ``num_resource_sets`` ``libE_specs`` option. + divided by the number of workers. It can also be set explicitly using the ``num_resource_sets`` ``libE_specs`` option. * **Slot**: Resource sets enumerated on a node (starting from zero). 
If a resource set spans multiple nodes, each node is considered to have slot From b1391e7727679d24fda5fad1c34934e10b6858f5 Mon Sep 17 00:00:00 2001 From: jlnav Date: Tue, 14 Apr 2026 13:46:21 -0500 Subject: [PATCH 02/34] modernize initial programming_libe example. Adjust ensemble.py to only take class-versions of specs. mypy fixes --- docs/programming_libE.rst | 2 - libensemble/ensemble.py | 124 +++++++++++++++----------------------- libensemble/specs.py | 2 +- 3 files changed, 50 insertions(+), 78 deletions(-) diff --git a/docs/programming_libE.rst b/docs/programming_libE.rst index f4ffaecac6..e385ff91f8 100644 --- a/docs/programming_libE.rst +++ b/docs/programming_libE.rst @@ -1,8 +1,6 @@ Constructing Workflows ====================== -We now give greater detail in programming with libEnsemble. - .. toctree:: :maxdepth: 2 :caption: The Basics diff --git a/libensemble/ensemble.py b/libensemble/ensemble.py index fe079c99a9..96e4c3733f 100644 --- a/libensemble/ensemble.py +++ b/libensemble/ensemble.py @@ -32,7 +32,7 @@ class Ensemble: """ The primary object for a libEnsemble workflow. - Parses and validates settings, sets up logging, and maintains output. + Parses and validates settings and maintains output. .. dropdown:: Example :open: @@ -40,44 +40,38 @@ class Ensemble: .. 
code-block:: python :linenos: - import numpy as np + from gest_api.vocs import VOCS from libensemble import Ensemble - from libensemble.gen_funcs.sampling import latin_hypercube_sample + from libensemble.gen_classes.sampling import UniformSample from libensemble.sim_funcs.simple_sim import norm_eval - from libensemble.specs import ExitCriteria, GenSpecs, LibeSpecs, SimSpecs + from libensemble.specs import ExitCriteria, GenSpecs, SimSpecs - libE_specs = LibeSpecs(nworkers=4) - sampling = Ensemble(libE_specs=libE_specs) + sampling = Ensemble(parse_args=True) sampling.sim_specs = SimSpecs( sim_f=norm_eval, inputs=["x"], outputs=[("f", float)], ) + + vocs = VOCS( + variables={"x": [-3, 3]}, + objectives={"f": "EXPLORE"}, + ) + + generator = UniformSample(vocs=vocs) + sampling.gen_specs = GenSpecs( - gen_f=latin_hypercube_sample, - outputs=[("x", float, (1,))], - user={ - "gen_batch_size": 50, - "lb": np.array([-3]), - "ub": np.array([3]), - }, + gen_f=generator, + batch_size=50, ) - sampling.add_random_streams() sampling.exit_criteria = ExitCriteria(sim_max=100) if __name__ == "__main__": sampling.run() sampling.save_output(__file__) - - Run the above example via ``python this_file.py``. - - Instead of using the libE_specs line, you can also use ``sampling = Ensemble(parse_args=True)`` - and run via ``python this_file.py -n 4`` (4 workers). The ``parse_args=True`` parameter - instructs the Ensemble class to read command-line arguments. - Configure by: .. dropdown:: Option 1: Providing parameters on instantiation @@ -117,25 +111,25 @@ class Ensemble: Parameters ---------- - sim_specs: :obj:`dict` or :class:`SimSpecs` + sim_specs: :class:`SimSpecs` - Specifications for the simulation function + Specifications for the simulator function. - gen_specs: :obj:`dict` or :class:`GenSpecs`, Optional + gen_specs: :class:`GenSpecs`, Optional - Specifications for the generator function + Specifications for the generator.
- exit_criteria: :obj:`dict` or :class:`ExitCriteria`, Optional + exit_criteria: :class:`ExitCriteria`, Optional - Tell libEnsemble when to stop a run + Tell libEnsemble when to stop a run. - libE_specs: :obj:`dict` or :class:`LibeSpecs`, Optional + libE_specs: :class:`LibeSpecs`, Optional - Specifications for libEnsemble + Specifications for libEnsemble. - alloc_specs: :obj:`dict` or :class:`AllocSpecs`, Optional + alloc_specs: :class:`AllocSpecs`, Optional - Specifications for the allocation function + Specifications for the allocation function. persis_info: :obj:`dict`, Optional @@ -144,12 +138,12 @@ class Ensemble: executor: :class:`Executor`, Optional - libEnsemble Executor instance for use within simulation or generator functions + libEnsemble Executor instance for use within simulation or generator functions. H0: `NumPy structured array `_, Optional A libEnsemble history to be prepended to this run's history - :ref:`(example)` + :ref:`(example)`. parse_args: bool, Optional @@ -161,24 +155,20 @@ def __init__( self, - sim_specs: SimSpecs | dict | None = SimSpecs(), - gen_specs: GenSpecs | dict | None = GenSpecs(), - exit_criteria: ExitCriteria | dict | None = {}, - libE_specs: LibeSpecs | dict | None = LibeSpecs(), - alloc_specs: AllocSpecs | dict | None = AllocSpecs(), - persis_info: dict | None = {}, + sim_specs: SimSpecs = SimSpecs(), + gen_specs: GenSpecs = GenSpecs(), + exit_criteria: ExitCriteria = ExitCriteria(), + libE_specs: LibeSpecs = LibeSpecs(), + alloc_specs: AllocSpecs = AllocSpecs(), + persis_info: dict = {}, executor: Executor | None = None, H0: npt.NDArray | None = None, - parse_args: bool | None = False, + parse_args: bool = False, ): self.sim_specs = sim_specs self.gen_specs = gen_specs self.exit_criteria = exit_criteria - self._libE_specs: LibeSpecs | None = None - if isinstance(libE_specs, dict): - self._libE_specs = LibeSpecs(**libE_specs) - else: - self._libE_specs = libE_specs + self._libE_specs: LibeSpecs = libE_specs
self.alloc_specs = alloc_specs self.persis_info = persis_info self.executor = executor @@ -224,33 +214,26 @@ def ready(self) -> bool: return all([i for i in [self.exit_criteria, self._libE_specs, self.sim_specs]]) @property - def libE_specs(self) -> LibeSpecs | None: + def libE_specs(self) -> LibeSpecs: return self._libE_specs @libE_specs.setter def libE_specs(self, new_specs): - # We need to deal with libE_specs being specified as dict or class, and - # "not" overwrite the internal libE_specs["comms"]. - # Respect everything if libE_specs isn't set if not hasattr(self, "_libE_specs") or not self._libE_specs: - if isinstance(new_specs, dict): - self._libE_specs = LibeSpecs(**new_specs) - else: - self._libE_specs = new_specs + self._libE_specs = new_specs return # Cast new libE_specs temporarily to dict - if not isinstance(new_specs, dict): # exclude_defaults should only be enabled with Pydantic v2 - if new_specs.comms != "mpi" and new_specs.comms != self._libE_specs.comms: # passing in a non-default comms - raise ValueError(OVERWRITE_COMMS_WARN) - platform_specs_set = False - if new_specs.platform_specs != {}: # bugginess across Pydantic versions for recursively casting to dict - platform_specs_set = True - platform_specs = new_specs.platform_specs - new_specs = specs_dump(new_specs, exclude_none=True, exclude_defaults=True) - if platform_specs_set: - new_specs["platform_specs"] = specs_dump(platform_specs, exclude_none=True) + if new_specs.comms != "mpi" and new_specs.comms != self._libE_specs.comms: # passing in a non-default comms + raise ValueError(OVERWRITE_COMMS_WARN) + platform_specs_set = False + if new_specs.platform_specs != {}: # bugginess across Pydantic versions for recursively casting to dict + platform_specs_set = True + platform_specs = new_specs.platform_specs + new_specs = specs_dump(new_specs, exclude_none=True, exclude_defaults=True) + if platform_specs_set: + new_specs["platform_specs"] = specs_dump(platform_specs, exclude_none=True) # Unset 
"comms" if we already have a libE_specs that contains that field, that came from parse_args if new_specs.get("comms") and hasattr(self._libE_specs, "comms"): @@ -269,10 +252,10 @@ def run(self) -> tuple[npt.NDArray, dict, int]: Manager--worker intercommunications are parsed from the ``comms`` key of :ref:`libE_specs`. An MPI runtime is assumed by default - if ``--comms local`` wasn't specified on the command-line or in ``libE_specs``. + if ``-n N`` wasn't specified on the command-line or ``comms="local"`` in ``libE_specs``. If a MPI communicator was provided in ``libE_specs``, then each ``.run()`` call - will initiate intercommunications on a **duplicate** of that communicator. + will initiate on a **duplicate** of that communicator. Otherwise, a duplicate of ``COMM_WORLD`` will be used. Returns @@ -368,8 +351,7 @@ def save_output(self, basename: str, append_attrs: bool = True): Format: ``_results_History_length=_evals=_ranks=`` """ if self.is_manager: - if self._get_option("libE_specs", "workflow_dir_path"): - assert self.libE_specs is not None + if getattr(self.libE_specs, "workflow_dir_path", False): save_libE_output( self.H, self.persis_info, @@ -380,11 +362,3 @@ def save_output(self, basename: str, append_attrs: bool = True): ) else: save_libE_output(self.H, self.persis_info, basename, self.nworkers, append_attrs=append_attrs) - - def _get_option(self, specs, name): - """Gets a specs value, underlying spec is either a dict or a class""" - attr = getattr(self, specs) - if isinstance(attr, dict): - return attr.get(name) - else: - return getattr(attr, name) diff --git a/libensemble/specs.py b/libensemble/specs.py index 08b02462bb..f7560c4893 100644 --- a/libensemble/specs.py +++ b/libensemble/specs.py @@ -434,7 +434,7 @@ class LibeSpecs(BaseModel): ``False`` by default to protect results. """ - workflow_dir_path: str | Path | None = "." + workflow_dir_path: str | Path = "." """ Optional path to the workflow directory. 
""" From 011981395c1e2699ed97a0c509547dcbc0629178 Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 17 Apr 2026 14:00:08 -0500 Subject: [PATCH 03/34] nitpicky --- docs/nitpicky | 12 ++++++++++++ libensemble/generators.py | 5 ++++- 2 files changed, 16 insertions(+), 1 deletion(-) diff --git a/docs/nitpicky b/docs/nitpicky index e43a0760bb..66bac90c00 100644 --- a/docs/nitpicky +++ b/docs/nitpicky @@ -57,3 +57,15 @@ py:meth libensemble.tools.save_libE_output # Types specifying objects that can dramatically vary py:class comm py:class communicator + +# Additional nitpicky targets from recent Sphinx warnings +py:class libensemble.resources.platforms.Lumi +py:class libensemble.resources.platforms.LumiGPU +py:class numpy._typing._array_like._ScalarT +py:class Comm +py:class npt.DTypeLike +py:class libensemble.generators.PersistentGenInterfacer +py:class gest_api.vocs.VOCS +py:class libensemble.generators.LibensembleGenerator +py:class ~_ScalarT +py:class numpy.random._generator.Generator diff --git a/libensemble/generators.py b/libensemble/generators.py index a1927b6de6..15ae0725e4 100644 --- a/libensemble/generators.py +++ b/libensemble/generators.py @@ -220,7 +220,9 @@ def finalize(self) -> None: def export( self, vocs_field_names: bool = False, as_dicts: bool = False ) -> tuple[npt.NDArray | list | None, dict | None, int | None]: - """Return the generator's results + """ + Return the generator's results. + Parameters ---------- vocs_field_names : bool, optional @@ -229,6 +231,7 @@ def export( as_dicts : bool, optional If True, return local_H as list of dictionaries instead of numpy array. Default is False. 
+ Returns ------- local_H : npt.NDArray | list From fcb005a48586574c81b337878ac86cb8b5f27b7c Mon Sep 17 00:00:00 2001 From: jlnav Date: Mon, 20 Apr 2026 10:02:28 -0500 Subject: [PATCH 04/34] trying out Furo theme in docs --- .pre-commit-config.yaml | 2 +- docs/_static/libE_logo.png | Bin 0 -> 54153 bytes docs/_static/libE_logo_white.png | Bin 0 -> 32035 bytes docs/conf.py | 39 +++++++------------------------ pixi.lock | 4 ++-- pyproject.toml | 3 ++- 6 files changed, 14 insertions(+), 34 deletions(-) create mode 100755 docs/_static/libE_logo.png create mode 100644 docs/_static/libE_logo_white.png diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index dc82efef8e..69c11918b9 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -37,4 +37,4 @@ repos: rev: v1.19.1 hooks: - id: mypy - exclude: ^libensemble/utils/(launcher|loc_stack|runners|pydantic|output_directory)\.py$|libensemble/tests/(regression_tests|functionality_tests|unit_tests|scaling_tests)/.* + exclude: ^docs/conf\.py$|libensemble/utils/(launcher|loc_stack|runners|pydantic|output_directory)\.py$|libensemble/tests/(regression_tests|functionality_tests|unit_tests|scaling_tests)/.* diff --git a/docs/_static/libE_logo.png b/docs/_static/libE_logo.png new file mode 100755 index 0000000000000000000000000000000000000000..17f051faab46f91a58f98c4dc617304e97707fcf GIT binary patch literal 54153 zcmeFY^;?wP8$C)Wr3ebb5EAMOQqo-tQpyY>-7O%Ee!l1X7n~o?To>0c%=7Ge*1hg^ue}L>3{xT}qbI||!y{Kwe((ejk1Prg zk3g4%2>j)ViFP4)BXw5Rdya>9_tyCjzE_T{2Ob^^p2~y!nqDctC)FGobm}%|;}n02 z=$QI(y@p@ZuEj6HClIdugxA(n{4~KjMURNq)LS|5UN1wwGS{ny7YW+;>@J=<4=8<( z7z@ZQ)Uedr^2Teed5$pusI2~bFLncqPUJNq0A2riweM2I^58z;wUm=8;{W|b1mim& z{eLe$=NIa@-|+&2sG|PY$mJAM>c3x^&RjtJ{eVZv4!QXE6GW`_&fgC$E&qR4xW)MY z7ZA4+c>mvr#RQVcG|-W&(3LlMQz*DRuF26Tlu{;)vXo|iG(PMqZYBo_=}o?eb8~oZ z%IfjNtidOkD(ox4dJ@)@Fv>n~X-?PQn5q2-a&b9ONC9&vn9>5pPS>|A5sF|9ra~+I zjm6eGEkxI~=<;4Mg;A!}c+oJ=IgtGQ*3<`tl1N5=PoB$>HOcHEz0(OaN$hCU;tFms 
[remaining binary patch data for docs/_static/libE_logo.png truncated]
z$(4wJ$l&6qC2yomdtn&0pvt1N(zRdp;EkDF-2d*OfVGu8_&37_jY||y_LIHKddMO+ zk+*fdY!-S9UnYP zuo@LbZp??v96WM%Q_fiZM~IVf#=Sf7aM^`+578|<5EL*PR$#4<8h)`NJYdPLUZ}+z z>$Y9sW}{gqY~}e$$77lhZ`|#s*Zq#bhrU1dG&HFQ0(B^}@+TaY*|w? z`Q$T!rmZ71SvZxv5v8ypQ*c{B%>(6BD zVT#m8yd*?OjPPot@(qL9Nij(4i^UNA2jg4NQZ9}f6{I|(bJ(DSduOuEoK-6b|H1vT z53!m*3V(!aQ<+~Q#v`ld94`O*xWrBlA~AA+R3YF#+~s<_inZ!V0ow0;n>)l4=$k^+cW^wXp8o52+m_4Ik6?g>dBVWOWEQMYiJX?-ME_Z1u zg1q_ut-v%qFpJcctf!QNzblOzFKp85CHM{9OXFpi5N`y?aZkcjqLQYp+=GZ;23S~_ z@|G+I=z87wg6MNt`KRSbxcAd^1vj>1dsmO77XerD)mjsCs+~rmZFZjRYLi7pP)5!%tLb?vf_1e3^BMZj z0J53?OAN3oqsEPgKD=6+K!y}nFy5Z_R~#3@LNJ}wJtq%miv6FKpb%z{1*%(?=-{6+}H)b%9UN^C${m*C#EcJYa^zTGCPp7GsSaP*R{+jw1GJOs$|h4 zi4-7;NaW6*enxw>Ft-8Gub&X;Z@A4F@r_48QMN*s)JSW+t|vP2NW2tMCc&QzuQhRF zA{*e@2`0HXSdCD~jCgewqOknd&Zl_oSF%28y-#kjSK#*USy*+^M&3lJv78w_M9}Jq zH+r$%`h(FIJJLQqDH<znb;E^yPoyh9e%ObBK@Wn$ubC#Q+Btf2 zv4Y4LpG7QWh@aaxcrnR?AMZ)$P5J&#_k2s8`~yW@A@W2`Ghc=hD@0-#!cK=&LYsd5 za|sqZ)1S!6X}#jPp3q+=xGh5!wR0~W$&_EmTx=(z+*;EnK`>Z7zxyQrg`J8a5b(P6 z#zG$~BeEW5Y3`vtcD)DHr`LX2u`0Yk%m_yK@fOFJ zVCp@nh7Dc^U*gR8tT1?DV<@0TVnV;xeSJ9G((pM?_@yBA;BRij9-0nSRYoV8fa?Dr zn!Y-$s^|Oq(j_5CT^a?H21#j<6eJ`sU2*}D?v_$%5v1YLofni&K|tx2l;_A`(rvLwj073x5OchQN_i6V1QylWblE1tu#^_Id2#}ET&X)eYED1a1?vzj zo9)OH-j?#DLW6DJCsnVA$F{q7Tn@vi=ydn zwxc4w6&f#Lh%RTRO*k7K`dVh*lKm{SnDy2V@-(~x?vnTbGX@>zCAVfyT59N<<)u;T zQ{w&iHL7aQ5*YMl>dh~)t_F=Ia_N>fAtl!%4F%kzpm(d`c;*nHj@9a%htSAQ|IJ%W zdN?zo(b+SfoZ&JfI^9s{__lZIgP6AKQe@FUm9k%+mk$rs=7sAo?@iv1KJbEmi-lWd zwo8ZgsPQ~)ZSmuonQ=b&bNi~k5sR3!xwCbnI}s;QHsiowz=k+Nc?}&1B7NLbG%+!W z7~8XkdRZKj-^oFU9vfQFBOlRS%xDzt@-hE{+H;Px=ce_#x!G(2Am8-F=NCd8fzUcLIki|gP`o@_vudR zEwLGZr6axTsvdvZ8{Y81p+;*q1BJSulM{1cxLxFQ!5-qH)_xTAIeV#;HEfmEP08Q%9H-8u zyT7`jUZZS#oL(LDWa-;2L)u@8YE0XA(;LL)OZD*&x*!EFRWexo-jU+{Z9b!Bx%MN1 z;!_8zqbqjAG6(foaPdL=w*e8{j49th%aUzDd|h8nJ!AWGs4#ztJFl%~0Q#Y(teX0V z&!DJ_{Fb2gn&D8TNjQODILc6wet%+Pv-RX{Be!*Px$(+;=YC8fVhF2ra?|_JztmKn zwjTvue$VyI|9vXFVn02iS=;^dX>&G}w7Ffj&i3JfiK+QPu`?c&R%pyK_y|{E;L=Is 
z8%9MZD*XlXs{Bog-{krAv6q8^P`4_{57mQ+1sp_iCVp=FjrcrHSBX?Z%pN5QnfI}e zvBRtqra446ep2kj4Mn0Zh%a&;wQlXRpmt7IxvUGctK4c~$|2ZB)XZ$QiqcJ_#K+osf z^U|)qGxXdiL}Z3r2j#~}SzmY@7d`*wTP2kC=l;R&)(V!p-xvs9oCxuskF=4XpwP2C zB0VilPW#ShhW<5D!>Xm4`WFvAq4k>`Jsr9aL3NedKD6#hlhtg4Qm_$v$rfJhm*ZKM z;eJ+SIe${csT-1sbhq3S+MWlPC~D!iXZ6&L+B{RnM8kfK9{xZ}E9uu?Z$UgQ=$pDb zzfsog%HNMkRUYed`wg#6%EI?5#cjC`9Sj*Hy>caZI6kc4ndBG#X^^ERyGh6}Zk8Zv zBD{9x8obDz?01&VjzWha%5`Hehg`}u)`nE^oHux)E$2$VlpQfO;}UIG&0SJQb`|Jr zb(rI~ua!~l^l%S5!r{_>a(zKV#iEa3r<5w?V?RItTf?zGjoVY~5@k@nf0Op4%j$`^ zx<`79)s>HOyo72evDrXI(m>r_Xi)1@X-i@XmFD_&TfWq(38f63`s$(XKa22U0J`LZ zZVi@0pAZIoKu%sz3P>uj7;Yf?{S-6m^BAQ$?8#(qgH?Q) z34DPqhv-+W2oAD~j}ZA-(yMGuj(yGDGXAa4GtHq$rcsab?k%w=mt;xCtf^}}A@T#q zKWr-lXR>7>=4tuTlK(PyHSDLFSvL5YCGKRJ^LL$cx|@!Y>Oc|0we)xXTD$MqwAf~R z(IOA#qVrIsj10-b+r!?Bq2nv$v^qwGt^srH*0g>iOK>c9M^@YWAMC8?3RfX(Z;eA% z2pGD*n~=bS0I<-^GtGyM^nT^lO#v<_Z;rx8?5k4DuD6x2$8cDwYyqhWz&pC`W{mb)>WuWtLi68x|ZIlU?)%tIScv%LnY3^Of^Jr!KG5B2=murtE6%?fK3Opxdt zzew88+5P48FH*Z%(Tk2)19-v;zw@YKlT^B}T;-khujAU=i3xT)5Q>ki{f3d|;*tWV zE&sPT<}BKbznbh)_Wv95X=%KK;4%ETF0AEL8j0dvL*_ z@&mkt&_RA0b0_8q@K|0(6SxEQDH_atswaUG67HvIqXy(K1iIl3Dnw0Ru~r*{=$oYU zE2-d@^Hq-xCZgep+dr&^D4SRmZ|f%NGM?x+kV?DWw2N9PWVC60Sjrvj$QpolePn)j zn^3x7us*SGiysjTO^yEIw@W8RebwX_-T2;QzEV-*cpoV=gsb-xcRxS9>-ib<^h7*? z^>>osiSq+rQLn5WM?G`#7^Ck59MWcQ_TwS$nxuPjMxI|irMFdz3B&xFS*jO#uhztV z(CuIOcQGTm0UP=8HGcG`wrc`tzmMl2VDrS?NvSu2ySsr`9wuYUaXaDtG4|=B1bdT( zLIb1tQN@+2{?1kZW0=C8d~Vm~zgS^gNu~P%%yewAXlN{1>#% zq6*zVUg1O3UdbcV(1{?%q|X{P;Z+EtvTy)OM}9^upk;^P!Vxhchkk}HwATL}_Z_)Q zj=gElJB^?-v7NFrb~P33sgQm$vCjpCTKe@xnG>G1nxGLAyTOqxcun({53+Z%8~6pA z9SnjQmj~46zU9r8cL$QMt> zRCCdvt1=Q9IxjJ~BNISi1OU%t!TZaj=*=sOHvp&D>-d)isT&L0Tr^DpvbmI0K1i1+ zrp*h~lK&nwC$c}Fp`s1#@w^^ax%GR#H}v}P?&)D(|4~{vmIpTlUvlVmTrM7&av~Z; zgTam^EaDl8$8tSmdmN{qoN-NAg<6Kr-UCmmL-sG5uNg3Co5$+1mFY=;r)$`nQ=R|Y z6~tFysjFzPV|zzxR`WB&=)3KKr{-@hg&dl-I=M=n!p;zOV4Y21gHgCzx4R&pNIk7>B195P2G6JKE3 z)uUFc&P{}bXno3!zjel3AC~|Y_$(t@GATG(A&n}=VHHq5F?o$^EwA{MRKuLI&=ys! 
zgnzMRdLaZKQ!(@rNxRfcIdY-lBqhdTu}ys%hO6Fk^;PY885xe!vtO2Ik%?RCo&J40 zcQx^L?zx`Db7AAeN0^XITC(X-a)<^{w>6A>Mf2dD2f5~x!F`5+4$nXFQxeY1!76o! z*LQ5DL;hc6xi4t9gm1SObeP9J0~c(FBcC?G&z?MD`-6S=wkSSU+A&XxJb24!k7`zT z0vFdju>nG0^^Fayv~I=unrU@_=W5uAD6vOLKa7b7!Am>2INT%G2t! zsinJxK82O`^hJW=bISF2gA(q#1X}PMT3i%}!n~plhYreU`fqQavJOUz*xS+;uhcWo zz8a5Ms#z6lvIt!qb^6cd`Ke`VD4FK({xrH99bEVp zM8ixUntF~`Xz`xm*U_Rwp-p8DV(W_KA%8SK2d(?Q=p1}%x6z^2XN6SOAM0~B#1joU zFR6|6lzyl0u?fFQP<=KlWB1g+Rvg2nv<`R=165Ia?$1O?%F2$h-$89mtRI%${JO+% z|A)C=k@W459szBxYx9<5^6qNyo!hs#FJYlKZcSW0HCf+k&3M(o6N*8hJ)=8gx<043 zCoyKm_Tz=1r-+4_F3azH0j#>}@n3xJl;VMUZgVla)*B;WQgL!YGvmfqKIriuNpeP+ zw1FRo)OnBDy&$EdqZ6KASyu(#;PKSIU%5-?*OHyr7HW~Hi&8D*CHrdsqF=Ex#JK=5 z#OkAR0;7P#j~N5Z3mRY&*(=8){zVgqgJ2JkLr?KGUR>soj@ue*9rIt#@u6)RIgo5( z{`-29ziSU^AJ!v?z$9goe(iB1c7uoud2?e>5AAo}c^MSgaUc4Y5y3DT8X29SPgppa z)Bm)7$Bhpg8DTwLyOh{c<3X+~7EkxO+O3%KtWTLd+(u^IcXeE(tky%9?6hEP%+R2+ zesfq-817x%T7(-~zp&m=|I}0+?b1+UB4LV-qv@SZ>H5k{^MeI7CwC}zhsd(%sO*?5 z%g4Jf)v)2K|`1ZXqnJUG)lrkfwN zPpNHc2FkP-jw{38xCt2m?8j0n1e=oVlqo!v8CWw@v!}VtsUe^~>P7s4)?RpwZ*maA z#P~YH7O;W_Ks6NtIblrk?NV+=`Pd07=Hu~425Em?-|G6K##duz^wbA4vpIKCV-l;h z%cE#5VZ%rBbLqNFq3%hb^K3+VoQ5OM(I`WS0xM?vVd}!nT78W}f^*4G`mf7TIuz$O+o94xcNF31${#&~UL zQt{ZYUzv>J7n%uLIJey%H+^{@IRX&~+x{|tSIB4d7^Wxw)tftIfPJtI!5%QUo>(;h ze((z|f`>AKgyIB9%f#RfK%_IjTWoj}SzDY@57)UPpli8v#wNjyJk8NNIz1e|T&u4A zW)JuFOxjM8<6D$@Mb?s@XSdUcEuev^QY z(8A9*?_~W+S`oi$>U;i`iD9m}kuHm>>w})8|GOhVI1poQvvg~;{u=-0m;Q+V$d*ZI zG2Gg6>=IHjQ}yuUrsEq?M?{LE8V_p( zBL=!0T4p?tF{>{yIgo~Lo9leqE~)=t9hff<yr8U!JN3vK?HQg_1TNi`*S{Am%&pNYoYLH&N+1k~<<+d94 z1~*vltx$yP^*+Ok03*6^RHx9JW@}X3foVys6iOqFeXFL~emBJ(>{x(4hi#_wf$SJ0 z>IdwDO4%~2IJV}Sh@)-JU|BXIh1{Xa+Pv}`kMByF6tZw-=3qSt_U{sY*l*ZloXFd( zQVgH%{l-DJlW2hRKw&;A$ZzDfd)eT2vFK;CGGFh%SfnU$_=jrWhXfb|`EkO_PJ9v! 
z`CHztHZ1{4jl_AWlbI-jcJlE>6!3`+oCUNHdTw`E?x#9Nw99u_9Kq%|1)pomuMBv{ z*k$t8e2&uUUcrWRf_~28T05d-kpAi~z|~9wLX8QFgKL&0?BWPN05zUBO@yjhesv|d z+AYkn)oG2F<3g$k6X!X%F(9+Kn!5x`YWJN>{SKjzh$KIuza$I@7vwzQSj<&5+# zZ?HV}TkX%=%}w9*AB+p_d}W=|Ruh9rJ}nrwke7)0nc+NwA4y6bW~)i=s?d)20;_hK z=d9-SMtznRi_Ez6L12}%W|H=H9hAF))^r>0}&y)U+(5r+2OS>!MET}cICLWt^X<{IuUdGsPi6{APm zb=Om`7Nj`gQ<(G+_D21zQo}t|T@SuwmXd(zi-%;oG~SQ-l3}f;)UBcWB4rb=Pfjkn zs!I<6CJJ6 z%vW5X<`Oq&krU#FEs~ZN`lyCO%}5OS9`W%?l)7k<*e&mRduSnb0ZITVn6Acrx_oO_ zsu3B5MjIYq@7;vEd)o)g?oeS8)$jWmA54;E z98CpD!;e*Oz_IFSHEM+|1u}P-?@W_WV^On^hFT~GP=h!UifD;d_H~8IqJ@s z7w7e(TjnRux%fr1>qf?e|HPE}%cNTmlPG-uz_Oazv|op_Q$LihJmv5v(m(%P9-%$% zGvoLr;2vF{?(_aQ&qNWQczwL+Yy(?7C6$s;>wW*cu5wf6%oXreyp%^O%$J}Vrb1=} z6rRfc$AF~xP*K2-4w_19#ax|Fu=n|af@^n`0y9a!7MD=(xukzIx*XLDqnj?4>moeE zkC>1N-H#;!;w)E*`Ok0>^>NYW9*oNWqMOg?_$Cu!S}*7Q2cCyF_^nru#OE9PKa#AV zcLULs{4lYi@5a|&6BbQ304E^bJsLov8OJ#aR)k^>mM(mbRM^FENJE%Q$$~u@aI^NHtGXw_9}Kbt=uYMGrETXopj>gk0IJ}x2u(`0;F<% z7cX^3#isUw?A5~V##=H>Rxm3Veuh79s32asx)*I^Wnpj`78d03OuLOb(--7C3|r-4?ggBDAa$roOkOy!4|=V|FTuW1;Z2AvI=m->nfR;s7-EMv=_1 zG&1qxkLYSr?eZbdR)tOCwF7ykV(edi-nlU!AFI=vaPHKk6#EU6V58>D*|l+x(2L}u z{Y*$2iE}P5vgo8^#u8c>=6j;EHv|# z$s@_1aL4s=)__`P@gaPiXZtkc7LJ1Sut3Lk)3QX??5jAnPSQn~Ih(3hdF?Do`g0ai z96-NTm!m!fg4e{o1*!4h7#J(PBdalnIaKNydvA^#B~%>@v_F}iU(-;C9IjAY91)P^j0y&GZ-i7H@WyR61I1#%mpR;gs zgB|F6$6LvI52g}sQ(5!9~%nNA$~J z%1W=xboHKIi^NNyLr@=fnuTr$0NoTq2Q~KekT>!CtL0RAil?fZU2jnC%({-KiCkDO z8zjK@*UjD8pT=*?jER^@$uL7i+2jgdCa!G}UtRvdiuj(jIIJD**#Tw0cNVGdppWJU zk)cdhqqC$)w3YSAm7XJ0HHL8l0qV}cBQePr5|q6&PsXg4R-HxfeqWFvoBia<4n$2K z0u@OV7G`kQm6=?&nDE$aG@|BZ(?o4`OP?vIo4nE6kHaD_!hh1{s)Y)X^s&hP<+<7L z!PQ`7wB>sgr;8{qq*in!zKiVxn`AJUv|eeWR=%(6OP(P||6I?4OG*#wgyRRx#Tug* zmwX8IpGq~5lxlqXwSHYTP|}vD^ks4Q52|<(771hRN1zMoE@R4R?|oekH1B0nHuH;3 zR(AYX!a#8#0E+cMG>A+$tXLv~1i(DUi?U|xBT$z2MwLJZJt1m3aLJSjiI(lP*~3W& z0mD|tjHOC%)F1VQasMVCiM^u*+qGSc^R)|GYs&F+ zC&Q_mk7XFAz)%9XzZzW+rz~YSWgqX8m!7mO$$+C2aPN!a#-_%wB`IM|9}LK3Rhf*| 
z(8RB|lr*4oP@DjyWHD)2B%N@UQ6Xd)`8jUDMlu59aVBX>0q&zvELOPQif#7841}x? z%Ll+{5H!Q&>>ECB`xZBWZ%wtFs_8xhc^8nuXwL+(1N;y}ePm6~3@VSvYHB{<oQ zMGTRY`Vc}srmkIi(CCvBQ-t7)lMn80ir+^6o|K6R#xT;1{KL6XF`Ps=uE^xrZ1r$| zlGiIm6a5JvlstpVXZ%G*6!yB+N`3EBLp=5iRbqb(jm3r#fqhn-(9N71&&BSyOaoOU zg>19J-X-1cCEea;i<9XtUu6BdKVwM&)E_p0jJ!xuPUV}9`+a~(H(#$wh#Fuq<6Wk$ z%ffrt`Hvz(9UnU^hF%efxkWfX>=Q!^ZOL&VScnP?wK?6jH6NFJ_Aj2&yEg~>b!}l6 z%jZ8h-z97a_f}ge0>_vsSrlqBSXJ28H$u_=dpgVBDn&f)sW?}#VzqR9lp&tDt#2t7 z{zr}bu28~n6lJn`X|ITqg*nks{K1aQ zRI1X>q7ry0ftoF;m=3a|cIVemkkFFI51P#5$)1ugU1rVK&l?%IZBT%WwPq#LW}Tr7 zER)SRRF6;x-WTZOP~*(_T0fNbIW`&uC=1DaJAhbY>qno}7>kb4^zfTBTF4jriNijq z-&gSu_D*;2cqM{}ecn%+wsA^BpVT>DM>su8*>IVILX6%_@H{oq$JUZ7NuPs#Xzm6kiEacAG}vp&8pL3J)xQ)WpaeAFy`V1ZvCfWlm)|xBj!lWb)pJ{{ zYiuqgMF4(^C>CTM-KYmBE>h!G-gG*gE>_?=AD%-!z4%BW?nT=+N!;}p*UD@&~@7xl0|Ocgmp=?0!NLaO|keC(nw zQkkqoMH+pTc#y^*)@+Y>x4QlK?>|Ox<3A?$BS0&Jjz&eCyHBUTP@c%6eaWvc@F}|! z&ZS>xu=t?EnlE5g`Vb8qu_9vBQ{LuHdef`M!f#gcxurZW%#9BM3AY$-8A=~L;b@v5 z5Dm)J)riMPHn^dRpO37OF+RKMxSgk&o4zjwDV2^31Q)D zIFs}lE97B&PbR(+V}pBzq(8(!vBkKqa~jp&|5Ij*B!+^OVMr!6c`8qC$;5pr|J%P? 
zBoT=M>D+EJH#yElQ^^*y=3Q;4_2Jm}Ge^_YFu;;4nSgv9(wfK=u&Gjj3_n`;cfr@d@6n0G%wGNbL!OBpDUw2@7Ft z*%59I)Wo9VmSEO#Y%D{Dt}KdA^OBf-jA&2#@0FMD9{AVF0dUKe$o4Sj}lnsMEsu|1LP?*2klJh=%Qr?nu=Cczje0J=O0x76AZ4Xpu{(u7E zIcmy;9m|5ADM{HwAsUcPz0vbRWI^@C3a?4@(DQ zEh+~NU0mw8ho}9|^xk7>$-H8#CI5!wjpNol^YulalwbNZHrX6l#NT{;Me#hzUJ+N+ z1G0|8uZvB@AN`%0DE%S{DgfN_^JZ!;hsfB3^OvJSn3tRwb`$ zYScW8__6w2Cr@?g@@LNlAZl}<)1CK;a$dj zs6LIN@V72t{uOI71furkUE@JxM~jj&I+!T|e3!YZNyJM(GEM4I8e_c1;H%-<;R^*F z-31ItYiY=m_f72^@ja7suk;^;`%M92WfLeqZgU^W0BpCI@A+-!D(^PHRur1xrZ%QV z0>tss>2wy1^X+{!*6?F+OqX(Jbz%@cKYZ{`WF+HT+G8kV6Bn|||6?w$RJK^si#e1d zs2=(kBE@;B6(ZVE2xQ;NVCej}U>7+g0N&&Lqc-4dtY}Y+<%J)`mS|VvX7q+FlBJtA z9F2%VC1nf4=$>g6%1IgbZC(j(Xm7*h)6EzDY8B@El;zOpH|BH)C@=pi(Q;SD0fu(u zWdshmT7ya(d#D^a%zAWvH6M?*Hfl}mZ|Qah9?Ebdv9+4uK|p&}-Q!Z~FIv=4ra=@) zfJFF9v}|ScyAgW?4H0BP#oP{!uIh2=qhs~I>#Bt>lKC>0*vLFcm;;>&@f!2dXHg&y zn{(zUtODWal3m8Mg0hFAvEmanc>}`b1TsLJ*3;4$ujwA~5R9m}(WU?gKT?1Ll_Do1$kuWU79GY6ad#@QGkk_T6ZwaFc zFe%XOsRk3tOED%U5}5;s?k9;m$I#udrBxH3gQMOC33%<|csaWeIVK?w%S%)sWGO(X z&$k6nz^2w1hU3H2Rs+Lior$6GYIdY+m!Qe8>HWyO$#5{`JeyGZ z0Dl4vuMq-yqP)5;Iv_zzaWXJ*9&R<@#4^M_Q{U#8;2*Yz#AQnuOhD1Nd~sJ6u!oP8Kem-v)xEKBE<#3zB&n z`bPhqq|6wf`@dsf01ek%RHx_y)m8cEr^WaT0EwNBN8*Zyn)6j7O6c8h3QGhxjpxC|x(l_a+2*Xw=NstIQSxIsf*rlQJGa z@j{ACRI6T;d=k_gsiWG~ek)_bL*eRHJc#5W#l+aR0erYiX$j$kR)p#^ry2nN39Kh8 ziY4+)Qy-V|K9(>BIaqK;US1n_V^k3$cdQodJ)C|r*2rnxNby8Wr|+zhUDVLo+S%UQ zIwJZV*sbAwZF}KiI$5!zu^u&jkJab|MzIEXh;>X*8<+#K4b+`CW>msHHNvQnR#xAx z?{yb~n14}**GeJJj!@rUz!nUzE`!wZ-f^QrS~)2D<~_UydW_EhgQ`&{i!bP20t+Le zaFqhR(HQ}u2Cjg@fB1RXK87&UMX*)|1CP-r~SqWdeZy8m3&6~C^8Fhdhwy&G8cyXPGu(2SS#DtK_0L!%#=n(Ez8G-zU$X5wIm#aBts7$ChVkG# zozeuCzD8zcUYFgL@c%=2VF7n9H#YdW$NhUXwNC|d(jf;i^k!t>)6eLjUuW6g3i@TB zBr&o17DVdIedKaeyLP;K99D4tm-~wbwri_?R5GE9`lCR;R3)9|fsi?Bu9ZIc&fQJV z9z#JdrnY-Rnj+-BSr{>bCaUp@+Kw(lh`>c0h0E_H_dkl}AIE>F;W#J9<8s$5 z;Ijd<`!hOZcVC?AdY~R@@V9#w$SK}i1H40)P*mr}fi9;)OQgGx0?CBd&;_A^B;rq_ z;OHQnbT-$_T|1QyGzn1Ex7$uq0jT^kFQ>1vYm96^@`Xc1eeG-yBR%v}_^o4MFDQ8aXGbg~Du;pX__e|6sp 
zRTDwu{l5=pkV7*$Fco~ol+U$Ufj+~g%u*(sh0JX!NtC{#6%8VZkprz8Fuc`@k;Gpl zf=Jp}7)kC;1J zTOPg_lchv}b{nEK1&&drU^FMHc+Ltl;KtqJ;d;F3tQ1 z8VKYWBcDwU1$`3-vf|tRI%|7NH)%%T5HOTye`hfd9DK_zY$9gANal;^cc6DM zJpmMYqy8_cdH0a)ln(eGXIk3a%XNwKl1`U412NX+YFp6b)b*!O!&@ZeE0?HhriPPW zHHxbQx*W$)Zx&*gmQa9C2?UD*Xgv#l_^sZdWf?(QYQO$p`_(4LPxC3?pM58V#vgU| z^wZc0UjE3A_pQXWE-BsKQ>yjz?^Ek2V2;bN#F}EBFu6xHn%!%h{s2d3{;wN_(_QmQ z6PxVnOU@JdqG&%hO}f%6QFJYe*b+cODNBOh-T9m1wU_}gMs6D^P~gEF4ZY>(Z4($9 zEI10YD++~4jj74-_E;cNZ}N#3Te&n0&7SY6oSpC82Kn?K&W;Vt%4J^4$z;oQztKj- z-{9T&91>a6K8Z9H^h_e!PqENI4yCgz3o<6|rVbtbMwfDhK#Jm~6I=936hPYY*@8ep zCYz0QRWB|~z6HK@$rCRh_$2KJgw}vJ;%M3Lf>BLU2BN6dfkXMC{i}qX&mu|lATD4s zKIkmC2(elKwVD61EbL(qW6=B?ZUw_x970~#gvyn(bMnpDczbanh8WW(=f`@R^?pPJ zYlj>cwr^$$FSWuy1;!|fTt@5zwjlnW9(WsePh()eJ^Q4P{cMu9!ymS2u=9>R z51C<{TnQUUIF=mi=XexXjDecHqVo7Jli!pc!6kq^F2CZ75ZY?|NHYK`^ch`P(m6s) z$YB>l@FC+59Waz(ZMe$}(Bw!zcw%c?nbBV2>3t_1GipWDbZEF+HHa`ZC&@!&h-+PM z+5qWoK-0bt1SGjGaqp@D-(Jm5rD2C8=8BiUr^x%6B5en&((l0#WU_tcF=Uh|aEKWA zGw#+R_bZC~9G0Q&<$rEgjS%@+X3D{3LfW>N<)-;pR%@>`TvNKj|Ep*oy96x`qRTVF%(5Hs4@>&;Lx9>oipU=j{2xOFi-Z{@K{odOvlbv6gmVbH6@j0FkeCvfa>2Ohd zLK!M7*BgB9|Fi(LWfF1DLH+;^LjrJn@+g_@P#_;Ow1GfJFqLOaj*aPP3Y^G)xjGdF zu;=E1E`&OC9C`mC%1JJorA(;VtM6)wdjGDlgQH;*Zm(^GY6?+Q^%wta^*hCcl>loB zEkdAxOkwGZX)4s5Z-Vem?U;NW?OJ=T$G8sq>%P~$%Q-G~0jo-X2RR$8Z^L699*8&~}F?I{h}ACTn5YeSnju3(?CZQv^FIbKP_Cj*wS&X&cdy{m4(X{c2d zNxd0x)JLdaq(9(j)Ly53Fpxx2_!@TbK{{!$tVgWtNoM`@Qp+Q!_TvA7;_>5L?X)XXYvhQzuWYtXi0QRt_HuGP zKnX%I)L|T&)(^xU8wB92@`nm|GP34erAMMnf2ea`ygW&VpF3+JJ`Xd;?#Bi@mTLj; zrS89Zs7{D4*r~#t&Mi~Fy|FNafw>f#t`L{y|Myy5=A$>mC|1w+(l)8rX!aa)EnXiW zEWQJ93Vs5f5>bd5Bq?SOkcNzA++#Oup(wEHBEAV`iQ}Q zyjMSrzNt?->uv6Ojim{?e(jHaOyARiJsS)C3k{a*)BN=x`%4y`TVR}ZGBdp}pE&my z@Q(MdQe&8U1pd{+FE7?|3aewyuThn?&TnF~?Ea4sNDm{om(YmY1n~k?$59eW{K}It zya?ejfVA)~bEThWz`-t43YDYdyS8HEdjVuTUEL%v@Q)Ha%S*x{I5hkCaoUp6n5)bE3+EQd)7Y3qZRP=&skea%f)e-%SiTO6cJZICni$RcXY-91R$sb* zV(2k!3q-0h#XGz(8ppK3oJnAK!hSnQ5A%K7rjJ?LO#)RdnIZF4zUs{w-*4ongWg>u$@5#yhcsn 
zNwtFo&q_OWYn}F3*dx{uLvI14lXVZVA3XsFz4J-2=6#=$Q2asi`)UjD72%17 ze;ISiAP~#ArwNfiy92=HrC1vdY42BS92`%8 zqyyVfPHHU(H~AwI5U5@hnx!=Ig=GHzg#Wt*7vpHG-BIC@9=|1`eSoxicr};&gDdQlSE|pfX7FW?BL`3ffqHt>c}4P@snJm zI^2IcCjVzz&GhEweNaHeHHXK?( zvo%m=Kl(z-=WJCND#xFyFQDsPxYKJ?gTY)b%|^645eAUN)zIx$iLjp7p(iVA}#6{JC*8 z)g2SE83u;HqZ5pee6#(_#ag!CX}Go3E+%hWbKr_^CeG#9#@k>eS!Dt zLH_%K`w-CeZeyxCp`l~4k<&vYpbFnwI65tTkctD548TPhCZA@?rJ&EvTDep3zAWiN z42j^STm*jK2cS>V*J1vw!-`xxokgu?u;Bq#Q0uy>GAnv>Os^l9+|8V3Yo?JkJ=;gl zq?)6_+5zk$!aK#JO&bAcqvIw(333ApMJo6qf3d6P;c&iOsWf!Cy9WzG1dhg8G|2$) zSrw_EK^8P^Nad2SF(LG*FZ^pkq6A>uWP5WzipG@7HRxmHqqTYby}o=jODC;lx>kO#n%N~P^U#rp9p`!*n%Hk^jRtL}QMwguDV zFSM%%H0$X&khsN!>?QXrz`gREuG2t}2phzzr+Vt8!;SY%=`Lzrzxx|RIfO#FX6a578>WhXKFpX7G~$A3rOZ_6^zJ$RE|z)LU1RqnqJt<+HlWEyWx^ z9zIKR;kQmPbXoDtSEh^)K?hjx-WkT!9+DYx0+-EvoMyXu!qXX894rf_k9(?fpAVGL zmGB_}BHR0*W;QfzveY-@d@qft`^a{aA^*vzszc73H_~JGKrKQQ*u-%Ad09HGk8ACh z-70=BKhzJhAwwPUI66x4Lx?{?L;DsU#>>&X``}NyyMR*Weg_pI(p!MM%L!~ zfO7Il&I9y-*7e2SP|No@cJ(Vu&qj(w67~lOXBQpT143HiITlu*Ue_j2=9B?F+fl8W zJvXI?meH&e0GS-2i~|~Ykv86!13(&$&hA&Erl*-W7Y4@%{I)(bX`2k$^;=mmqAuV) zf2H<;CK$chEt@l%B`L3R__J>nNZNBud&jzje$=%lx>LB;LQ~Kb0S1K} zJN*8I%UNH5kJXgD@P{=*+2v87y6^l1=*`I+zW~FJd=<)+@hbRVks0!hd`ESBa*do} zA7aRHtb2PHyiMtQi)V=!!&%zMpGGU&{rALq-#(9O-g}&fG-%q*o*>e;3MTCh?!%2d zk=xe|gxVtEJ8f%uMr0oMtGM87AkV_s;uGrX30htd4MZ(rV0^)a@vv|0 z2%QEm0768;&uts2oxoi5gy`&kw_TyjGd%^z)t*!wHCa$yrfwMySSE$WV@Jq=!gFc@ zHr86R-D=ZUR)hMIFMpBL(8!$^N6dEW^T6iF!%6krW4(W~SobmGF(nwMxE(_voeV!e zJ!hjb0Ur0#h2U)g$CiX{YI)I@K2PrdKcLp9N5Qn#h$qHA>FyA)D9*mG%AJ8|DbgS= z_h)0Md;^rgZu5>aUV)DPsh9P}nUd>JLD3z+(A}U1;nWFrbci39a>(T8TiH*6->RiX zO874kam}?wf-}GP{|wt=m_OeWQ!3`ekD4xVW8jz&hpQor2CFyLq=0A-7`x*V)jf6F z+6F3@F3DRR!o)mdnm(F`IA(gp?nx_N!SDLnk>IUAwl zhTFY~1lG@FKVc`)tl2mqqjNG!kH5w%t=T2`*oT|#2|rfq_%1M$M*;nFOAxg>#_1Pd zlnK3@aS!k<=h*fCRvu8BUe5k|UXP&c0ov63F^EkN0+5Xz;ijX=EL0PN&Y&MtgP-sV zT{0|apKz|V`fvgr3$XN$iBW64j~rRUJ*Dq_LfwE}iCz)XNj}rCJRZP~z7nt9%)$Uo ztZ0>(H#Kb3)2U~D?GLBOd`}u1{OD9M!|(dBl!4e*s)*W_%87~)ZwVY`q%XH!toeXU 
zI-_+lxtXYuqpNkp@S_vONCBNjHOcR*Z5D!m(H%g>PVZDTRSWnQ7r*9hpDeox5ZLzM zb8Br&%-POG(!kn1|y?ocM}rVj61Pv*r`8L&*Hezvl2G> zCY=c~%E}OwHZRaDz^)_|Dx_~ z7!YA+*7D#|feNghbH=6ohJZOPoj~=B@a_9L2&D7Rn#b-ZXBKSn43>Um0R==AJ7U_A zRI%pk0QB~A+Qz)337_O}R*!Vy5#lvLH+}uA2B<=!Hf69kQ6tp-mD%<+3?JQplJ9$j zhL5jV{hoQgyw|u16$-w^D&<@G18}Q;$*6-I ze6$qwwJh7aLZp*S!NKz%xm2CL7*Y)v05?lzYP^t#7r%L}7eBDFfm%KwO{PhCZ*_5d zj?_oNiUHnWGERUuw$nFtb zi5Se_k#txE|6#vn`0Inp)oy{Ng*6eJ#+AXj#bjXlN9K10SC9Z z+!4?cLC(cYW4Netl#sw|$1ZMtL{n@Jj-dL=ylBIG%wP0P>H!8;!SPY-B3o3iq}`n4 zAMYC!GrjjtY-=rt)*SrymAHO`t$&j@J~b+?2_fPsrOGHYLM4aty$7b&zqg|t20u`G zlCZU}Inry_zyAxP{O05O6D3f&8tJ;5&MQo)M#F1g5$h=$h z9?F3!Jv<8DU!m)LMFOI%au5tQEKbbSdS^5Eqvt%5+wl1S%OB$tFO-x>+&t2iaFbw7 zy>}rzS~B=>y@72PRh}g?))i2q{zS-W`dsgDO_6Ku3&FUPQ3do6Y!;ZVm&JqiPlOwC zdHi~05hFDX=amlXOAw)b7lG|U+M62+TlvWpb-v(^miM%KqIe0G8b{i`yXZpkS}L5&84Kz zIf)}CuG5zngKiAwjxP?HFCfIP%@bZb{Xr2xCIf;wyqqiS%kjp{r-(=T0DoJ-QL2hV z&s34VH4KFDHmKMX3UkK3@Wu-7!dDP+y}vt(xA+bb86y}|t5)YQ!t^Wf316ZISMjy#aD z+x$xjT8tOzmg*Z1?`(Vm4HxSnxJH4J?aXuOD{sAllRgBv!i}eC(`BTuhqwM?W8Iv! 
z>jTK8TR+e7P%Kks0z6UA9#E#_=Q(7w$q;($^ z+_Aec10-;Av{@s)Zx(sa<_=i=4NmgkhVI@Wp262Qg>ep5fF{Oc3cD(4dS`nG&G?F$-lmUcJlvaxZKzGDI1d!g ztZqd|T#4vU#y-(A9w;)Hb^}45>s`|Ud5==|``Ry-TBp{xHus^si#lLcu3@eTAMmTv z@w%lROm8mKfb{QIF@njf;7FpR%{!ZsxzATgF-E%R#GTl!9X@~C0V$xs?{c2milh1v z9N9w^=7n^mkfsJBVRr<~jj_Ew8{I4UAQJ3)zkl47K;h04uGXD`^#Ut!6LCxP^~+TL zX8;FHVpJ%}JWk2vk5>r*E6{AbdwBx%j>LSgK+<5WStVxo)x^`nOVx`eA_q6yLAg%v zYJ~dgmJEGqy}-$Q9S#DaGS{RM11{Q)<>NT93rLI%dtLZb47!d>YP8Rs+8lh3SOef% z2tV7Of(n@`;>yE)o;8_^ zGh&yXUG3-F?CtO%hwwXr{+(4(E3)?x(u6+g&Iea{z}xG3Lq(;k$ik0|ChOBL`u5Ab zmm3~#Y2Vs7Ozo1yRy#nh0>Ff#(!JI@@8{l37xCWVf33b{x9XAnz3}5AjvAH^L~G}E zS84WaFFTZEFMR-}{Aj}koaS!>w=*}TfL45V{g-z}-$!8uE`U@r|Ey`831VJETc};u zPiO&4NBj>+&X<0EWke2`V7lxi8KX#WbvjL7(Q~kAelW+p?Dg1O; zDlfhWbu;oquSxNG(Pb|Zb;tl{8vQdoPhe^MXS1BE_S4jyl`p^T!gyT4p>SsI5cv*Z zZ8pByK0=4#L-0R_$QLi2W&9N6PP|VE)Dk{M@$?Z^bU?f(1hhSr%Y6xq=QEd#g7f>T zbhJ2zrbwMyLbQSj^%HDpa5h0CDL7tqkzu00%02_;`qX=?B5=XEAA$Hec&CKyns%VP z&!FyA6@Xbm9Q0(>u;WA_GSJ@*ry-G)R!<1ZoXEoZB0-TPFc&0ZZ)qMdszX80d&f|4 zgle$=b~=Nr3ciT?2)5&1;NeM>BR>He^xEOzEE@&QQ-dAvj|qN`iw>Ht@g$Uo%9nA6 zy)gm4(o5t~KvsO$Bb1WZdBid}toKr&diz=B73P$fQ&jMCEcg2-Fc8S=^>rUu_4pi9 z76O~{&tOc%t>^tcAxlX9>;DY-hXdD3?~?q7WFT)k(}Zkrc`M+FJQG{s_GpQVn{{$u zSJq!sIEG|rV0+?>B71ZDEb~F&oxFsEK(sSSPSZfs<3q(qTl>H&ejv zb9$L^Sw(ENA6BEQW9lbxg7n5dILae-V4tjL0<0mvRu9!JBnN^_ZNA-&{mlnp|Hf;M zw4iJ!gGlblKTKQ)xsy`+<8K}76LOcqSNj}6C-@rAiYe^jlYY4e6;E8$`iqh_{z)@k zBRC+lMJQ8ff;K%LHA_1eK$orshgT~KE=~q9o}cW*-s#@U4w%U$K>a#zNV|wKM}q4> z5~r+nKk8!U#+f8vSb#RnqDsK_qmcq^hebZdDOgTo$TKd$kfZ8E`n*vQp1_D^Og0f{ zv~KEe@Vw+F^e0=ppFL^T# z!jcV*3%k?uaA|3)ocs0IYoM8{=bSQ_w;r4jC_FRfqg-AR2=SBqBJ|4n?z%ewsk8On zKEH=z@&*aynY0EIQ2}uTVNb_ARdcdA`^*E*_jr)^*;GuVRP?(_^YBL_y$!jhgg0d~ zBvd_LQqh}$6FZr<2FT6@(4nR2&fs!3O>d(H`1w3AU6!roh(aDSZ7VyD_jAap0**=e2QK zbV<`JhU6t>v=u`d6J(OyT=V13(|NT=$7I7gslI?mudL<5;-yqG`<^_!0k(i3d1fq! zsSW~- zj7zzX?LheeA=AG+ovDIQD&Lim#g1odk^?A5P{l)B(ZnN!ptTe}D$1|!cEaz=p@$n!N_IVEGua)MNY5kmJYH;5@GQxw@jM#(T4hPa?G;d#OW5Wk^&Tk+! 
z)N=hxU097K_gK79ZNf}s+&H+F1F26eMp}R)NHCY>o zi~+AudNnp~*Gu~rnZ?YV*cMFULYn(|*6zvjur(1RH>lYk959wA$Xpfm3ZL-@fG4>8 zqdXEv`Gh;n^pVDk`|}@#(?L&=bBVrIHnm7h)hJC zcKtnyh*DiKMk6`fk%s%!w|GX3%bw9mk%xHWY9_FR)Q}BkIn>o}V~B6858zc4){JE0 z!8Bpgu}<=v{u?$yFo$#8#OW;NLzWk$$WCzoo%Xd?mov1ljW7`32AGSl_|Lwktjn!= z(z5cUf6Ka?$;R-yg@j&<|1*9w$x;84#?+!$a`;6W&Zln7-xO=0O4Tcz+9})O>&NXn zPaICl*2R6j6DV>z)Hrt9Ncaw(cuwg^9?u8K3_umrOyS}_S~qmE%6;bXQEkQ@7L@Q- z^pW02sUUgOs@J`6+UORNj97iz9*E=D=&FkI_1#kwK`vxjlR%X#uIabdqPwO3Krc6t ztaMuS@@=9>zU<{NVXGrP3$Ix0FXHx! z=$m2|$DEUC#>SIZTwZ;L`mPwko_*biMNA(b84^6~7I{vT^#ZSCA)h0y+A;OX=-rln z8A_7+;%kOAX~SB+FSa%5vQ*Iq$#tQ^z4g$0yAPqp=jD?&G$~}^G?etpaQ+99<7K&m zBkxGQ%(T7CpN1XC#y6KiUPqZ{-CV5oU@lfXm)F6F4Z!dgf)aMlHoguWg(-$IROVe- zX48U;A)dk0StBV-+sLWlQw^2r=hZifAru=D=rtE-6^H1CvMSSlP1@DZNztb*U!|hr z+koE)dGc6R(?c*Qw4I!q)W+xZ$5>Z%C=>kKW0RN(um+3Mzcqq}JjR9wckJ({&1BY_ z-~EL%C^&FPIb-}0(%%x=Vfkx2_-b!4vh%*{jpqgs-!UZ`f?l@(?21Lu@s{S3P43hH zq0sx?%J84oW3HOvUBas)$_UJ@K2gq;d}i10h`U#s9%YKwglT>nndr><@*LYN@|;g_ zb;J;Xk)5u0{VF#~8-V?O_@d>K{9+Pkes0?C0qcCj8*J3(L@4sS#L{)oEn@U1@%EG> zpE<&@E&+2SXJ3~fftelmy1Fzj2T9M7Li3Wys4~JCqn_;VXscz#?Ecbm8Ujq{XKUzr zc<Aj~(r`Oaye^SQZG&(#KC9ycduqR2h6S@{K*?g>m?o*6q-uCNW zCDW|>I8uf?{EF+}L;6vnw=ff!?5M}*wV|rfY#+;W4uiqW;re~|Qi}hf#9L4Ko`DrE zGAU4>+SdZP@pQHj>>xEnj0-(Kd$puKjAoZ}d*BXRvhWQlYsa^|?RRj(CK_w*(2n3M zEkf+&=|2B-eLa?-^(UI7G?E|ig3dB?&u_&9ecWku zs&}hv<7LbKT0I-TqA~o+vt6W+7uok&x-p5@emY@BWXW-IsLD^NTP?2lExJi6H(#ii zH?Kcmc?9uIE<&YnKpYKYVaec{hFw0)TtT<4^7`nbv5JhP7mxJ&X|qgQ?$|{jNqXVI zKVAfBeS}E#mBLOxz*wcV^d>)`W@HT=Z!Kj6S7h@kW~g26wxk|9x41Q(MAkT)Rq~G>YUO@JfAbr2>5>5W4gS+zcIF#c?v^I5$D_ zAzon@#0@6t++qFc6jh~Am3x?aCxpQ8t@|>+X-;o)pVL|~=hVD58rN;e0mzuANc~qvddWzcSGU`$vn+>4(#0UpLR3rq z884R6OjEb@kj?`Czy%(M4*R^2ozGYPYSY#+$a3jGE%Ep1-5>@gTDR7~Ls+o=xt&{I zaAyrf@JAW3Q&teG{zPhH>;B&QCjwn35p4-{BPhZpRS_4yRXq5?_o>ytAzuj&Yiz5> zB?9vF)yhA2`VL2v`4zRl`+XKdHd<~agbgSdDQa}#;${_0wYW;N zsA!*B<3Yye2a)`c8eTSb;rny(Y=%Xea0BH>%}9&plNT+c8|+|z=8)Umt=g&w8e1yd{=V&^ZwtP)3j>c 
z!wGw#)84Gm_Xk;X$M4K-^n8B-;MvZIr-(Aai!`p4fcTlV%m>E2H;7tsl{T*(1GLC@ zcejEqq>{6H=9iUJ_cPzc`lAfJV`D|*x2Jwias&QKhBZSL{AS3twsf`N)Qnz%!GW-_ z3L4rf(O;@Fqx0m$F~{8cifEB-;;eV$W2>x<^C@7}H8*%^LU(vY=xtKt;jnz_7~Pzf zTpiUOSnMxV@(EH^c=F6y)s~Pu8ud`qAv?RC^?vy%9`ZKUIM8lRs zY+PcdSOp6+X|?i=^=)ZNpJre;UzEmLI9=F}8Y+0mLr$Z3tsRIAl~*W<=1T#(?z7K) zZoZCD^`;76+*?~8Gp#1FEbCrU*nh#g_g!Jq2ljrl;MUB`As)MqYKGO$q_I6mT^m6r zD$bs<+m~S|4{Qya)4Jl92^-RGhz>S7;ar?9{MF*|S?)59!@k=`(_yw@?1y2LnzKV` zBQ0r2V)Anxtq0z?5Qi=_tENZQ7uY9-yuMO=r(KQbF%w?DqeN{ae%IsSd_N_2m***J8Ax@L=)cAg! z+d%r2Z(4lmQQ!K;>2NlsW==OdTA5rpLv1;^^&T%&bx&D98rI#FkLRD{O)sd?=Hq8u zeR5PT3l{FF{zmXREk!8xHd-fDKinlghBNfcyoE`k?{GuM(f(}7;n(PDaS*FG6 z4Y2IEP2U-D$}Sl*X4vsgsjRD6&{q4B#S?u#Yl|i<*}uB4CdSp+lv!>4@n3TATv2ubm~971Jz-Y0TWVKa?@M~f} zF)N;qAL`nC(Kjgj9t|fa<>A!U%A)_heb9IIjJj>zUbg=m+!^;FvbsyDXV+5WkC@Ot ztjjM=qyRMQuNp%ZY7j2Jeh<1m?Rmktz)3^7Z;5Ab31hw$6i+^!T>3-lW!u5pGNEXEU+q^I90|yGy7L#s*I0aPlpJ3B)B~K_h9(S_xMR}>Gf+%PI=PtvDqBq ze;2x01e?9xMp8O^(}lN3F@HsU7pvt(qfObdsxsHRpeNlm({@LptM*ah0-i6uXXFzI zm_vm!iR3$2Yhye*()kVEW(ud$I=#+iVIHKkRg?2essVLerFB_6RlTv;SjC=2UgCn! 
z3&yH%t5#_sjP11$C9RPdEL}oWC#cNG!6>gwhA$7}W{&EW3+LBs>O_CV6HSJ&3XWvM z5U?@T%;WB)BHka|!A0fkROqTHH2;XTMNGB+`N`Dn0FC-#S8L&r~!Z)>6sB z4b&c&Nb>6j@A7c6-10{f3E`+S;Wrz}<;%8GEGkwky}-z* zKF~anOryvFYTUZnDNMU%+ccgtfuNW6!ei;nL_9k#@6gd=QeNHRy`AnfvR9IxKs}=0 zLZ=6NVgg5~`n<) z=s*7XX-~%`%h#E)()^RHlFz$d#JCMqg=>O;;#F=q86;nENXRDn@mz;34+%lKogOni z$l>0TaHo&fFQlurntC~C6ki-K_F8F7lDY4+OSeBNgnjEXS)yRcz{#VmrCOSFJVzap zXKnjhMb*{l)K;P2L(5;QB1NN~EyGXTKFl2P%z37@@9#QG)YN=Fv(ot~aw}>8AO$uf z+CCLlHAZgtZ+Ro%70$6NY3QXdj%k3iGzS&+p|q7frscBHM32#0LP|es8RfonloN)L zhAO*NGFq=J#ami_W6r?h9!SpGUuv|fUSQy3>$t(&=bXf-M3hc@^@R+ko9ffa_?v50 zWzZ$ONhzi)5xkO79kt5Z`bVh=-}MN8j|hH>ZGP>1oK* zpQRstdl3)mE)k{=iP6v->gktft#bs;bmCPDt5g{~mqSl09XAqOE36;ro}9W?g(}ok z9)8?TJWzp_=48#JhxtVhanSYpu0l+d`&bjdAmV^f8g|Yk7+%pC__8k+>%=Ztvr~Ay z+|_rwJ9K!QEZbKVKf!r}euy}K)S(G?)5Ut+5%htebD*A30v$J2s&0DkT&UIwB_~pKu6!Qeshlv8V zn82=>(Ra*!?@d?FC~alQ!S~ZkG89hy*W}eacj{GmB}Eh@_{O&<>eJY@1<$ajPHF4+ zK30OAtUx{-X>^S)XuS*jVmp5G#Tzwk{pF6gAAgGK-Q7jbs>8|2!gzP@3f81pTszjy z>qBIH^BvAccl@v^pvU)Zf4iq4E{7^)<0gUJ1#cq)z7aAcdV+85+V^05>`GV~sve7n!!*uH1(+S;oKBU^-c;`iw?@&ko+qx%wafh~K} zm*8!>m)e9^*_-X)2K#xleQ0_{8biHc`AX)!p3!@puf01lQ$0^(h0w#@ukq@V@<%uz zzp0}G#BUU(=t@6VgB%1Oc%M`*Gc+$Z@FAm2r^`+KKWg5$c&bC~JH0+cf{AzT{mHhL z^ZnMd3Ioqq-Y>c@$xjg>1rP&aNHhKAoTL#PmQ7H>*S^w@C!X1gKaojLg1csC*WpI2 zx%eN)>~vnKc=gcrex73n(3uom!8$k)kTQOOgM>^kg6m2cev?ZK8-{l^$H-$naZ$B| z#b8^bRb)Q$5FgSRD6g-q6Eo7r*c|6!9f}mGJg{~E1mJ<)Ku_mwY;Vjns#h$TObfpg zB_J{2ac_B@A^cU|$CVH^yBVguH*?QX73_`m;k3!c0)g`TT4WL4i4-}ak*1K`*j_|F zG59t?Op)Ci9>pxfzE-!PjvnT=JAC$hCHJ^VnksXD;uP0|V7l>D!DHWsd1CesT^&m8 zHh&Rb32!EVnKhvUntL8n(@S>u5+08=Ed*ffn@_t<1V=5JoPV;8RL&P%`Y4AYf~L(( zei(JQAx!`&yeT;1TG@<*Y=~G~pP75<^U&%1^RhcEHp`>+eu>S;N|&NAsvK({LQse~x7d^V-p)%cNBoan+|4fG~F+n9T45=(k__MYlg(UurGuGqTTw1px95 zV#?u>F8+hvQcL~i5a3^O+}vkt(a0JU&~~!3O9BQnM+oDr3bnz}!Zx>4I+p2@yF`hP zt@#t3+##M)$wp>I-wwLK{x1A1Y0O+DDPFP)=@qjl|5VoZaKIpO^0M!eA{CC4%Xw%y zq3Smt>&u&DeTH59?`U=)5_%S%pDpjHZq-D>X8rZ84HlV4-H=JPrn1>IJLEveL;<@V 
zzbcgP4mU>#;)jc-o=20m5s7~7-D`Zy*F}B(RNK-auJau5GHu+Av14+qpEznx&!+!S zi{M5SE^7vO=Z4Qcys7EEm`2GUg;{VjQ_o0Q6H+zE%>Z?9L3e(^Fh}KxO);CHdArm3 zv9PW5{!szRH3+<|e7k5luwA4wH#wUl<(?zoLfq_OyZ`rKO1dEEW8Y<`U@vWR#E#%2 z-%T<9_a{1$OUYe)T`eIhzZF~^x%$bq=JUg8+@f%-ktJoJ8Q3rnITXu$OWwCD5HZwG z^v7KfhOiF>6PJCzvp$Y(eyIu;A_r{)$))tcW&Xpx}ncGbF zEE*S#KBG_yR$igJm*?)%E16Jp0^<1or6W~1q^SEbPJFGH4N*&1)( z*&oc*vf|EX*Q?SGAih(zXBV|2b;>1X!f^*r#s-sWRNF&2QA!u6g~g(re$1S?_nbc7 z22ll!a-~!4syIkg4h2uW z86-d7ve>!0ikHr#7|^UmJim|Y4UoM784jn@9#uCi*_1*fbsBT8Vi=&VPV|WEZS`X} zxHFzklKQem*wdvCYp^pW+-Ca~i!618s5DHTUBR-*&`MRKErp-jSJQ_YwRzkb*n_?? z-4e4in>73YIM`(=h$ju`m18m6BD4WF{7O@WN??P-_*a+Xj$0QQwQ`(OP7B1+v<(dxNC5j*A5+p zqvQ5C`RVrrm>OYomR}&fZZ7Bu`UTzY>cJ0?UZwkZsqIW>cSUOKEAM|Y^Xn$eYFndZ zg!{QTT!P^tl8G5?YUOjey?V}H+EW{v-O{@q6M%d}$pdbM{uCpJ66G~9&Key2B!@DO zLa3dqZJlG$7z*hED-`S78q2JrdjZ&jE%Rb!fslqU;t1v)({(%`iN~w$0ik`{p?i?&qm5LLu%Q=^Ft& z+8c=rmwn?n4A&Ee(Z-_e=<6N?ZJWO`B z-ShS~p%h|X*RYuSl#Wm!4wrkiJ+;H?fY^Cam+UTY^`ceh@*qsbr^hs`HJ zJq3TkIkTB`B4DKC{6LZxvJ)&j<&mP01>thVOP^i4H0vE{#SB;2)=^26hEm%^i7;0@ z?I0L1kSBKG((7CL2O_gfcl)I1Y3$h~9~SMMPBmQJiqYOc z10)#}6_vh678;dS-@&D8^x%`AT}j_2|Jy5ya0QmN8ck(Kak(n|eh6kF<$iONB!$0M zq}7DYP~AqO;(XZbX*n82QMsbBp)sb-Qk}L?KHa+`hZCoizYGD% z&*l}dfiy2mlJ7CLHlul;M%OQjjo;ye>^%0I&nan~sapFL(eX6R&%>}?s6Wpt+xFf5 zgZfH^0Iaq68iBLNYnBb2#fB-tHHL6W{9cRu@~+n$#%k}&4~Mw*F3)=Tqo+3bUDk># zr;|(WiEm05-R^fp>%bbpszv~T3Pae{3jt@y!P*Ivn=?ulQ_LPz))DsP!+k;bM9zpG zAsXvf3f5Tl`pXUbdk74NJCx%I1=5@+oeI2OZ~e&JRI%t2=_q%mN?0*unHL*DJ{cl6 zJUn|Llq6!ecXW2hFojJ4^iw7r>%lEVGt8Rnh^9vlZsN0!X7M{2-}C61X_)%%^~kwl zz2-U9P~k+!N#xA2_xOmT(dw$`@{jU2+O%?bID^Z1-fGodFP=Oiu+W2?&Y`7 z-rSgv*O-q#J8G0ZJ3y@;Lo8`mbAf1tA&MPux>kUSg7rZ3!m<f<{Vx zZ8J$qs`0!H&WyE?_d~!(;vwI{@JP7d*e4pC&jAP4AJdknxU!nZl2c!^vSb?g@ew2JwN%nR6wAe`@d$DAeNZuJRk zn@C4lWsyefO0=gy7M-7Ry*KJG_c=VoJ$90WAnMhjUuMf?sfweD*8{7DL2NgOp8qAZrE|-hC=#s1XMF=1T8cL2lEe>5)$iiT_ zl4w*#cGv5%peJwMgk3h9oyp+n_+sXH)-#+z3M zCn}n3!}uQ@+F2f!^H63G`456kBM}kSlx)mu2MpDR{WqtCC|%Gop>c`v5=;0iIQqq4 
[git binary patch payload omitted]

literal 0
HcmV?d00001

diff --git a/docs/_static/libE_logo_white.png b/docs/_static/libE_logo_white.png
new file mode 100644
index 0000000000000000000000000000000000000000..220de8766e0f6e3d184809735927f3fa52811755
GIT binary patch
literal 32035

[git binary patch payload omitted]
literal 0
HcmV?d00001

diff --git a/docs/conf.py b/docs/conf.py
index 64349a4fe0..ab82bc4292 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -59,11 +59,6 @@ class AxParameterWarning(Warning):  # Ensure it's a real warning subclass
 sys.modules["ax.exceptions.core"] = MagicMock()
 sys.modules["ax.exceptions.core"].AxParameterWarning = AxParameterWarning

-# from libensemble import *
-# from libensemble.alloc_funcs import *
-# from libensemble.gen_funcs import *
-# from libensemble.sim_funcs import *
-
 # sys.path.insert(0, os.path.abspath('.'))
 sys.path.append(os.path.abspath("../libensemble"))

@@ -112,13 +107,7 @@ class AxParameterWarning(Warning):  # Ensure it's a real warning subclass
 bibtex_bibfiles = ["references.bib"]
 bibtex_default_style = "unsrt"

-# autosectionlabel_prefix_document = True
-# extensions = ['sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'sphinx.ext.imgconverter']
-# breathe_projects = { "libEnsemble": "../code/src/xml/" }
-# breathe_default_project = "libEnsemble"
-##breathe_projects_source = {"libEnsemble" : ( "../code/src/", ["libE.py", "test.cpp"] )}
-# breathe_projects_source = {"libEnsemble" : ( "../code/src/", ["test.cpp","test2.cpp"] )}
 autodoc_member_order = "bysource"
 model_show_field_summary = "bysource"

@@ -185,6 +174,7 @@ class AxParameterWarning(Warning):  # Ensure it's a real warning subclass
 # The name of the Pygments (syntax highlighting) style to use.
 pygments_style = "sphinx"
+pygments_dark_style = "monokai"

 # If true, `todo` and `todoList` produce output, else they produce nothing.
 todo_include_todos = False

@@ -210,9 +200,9 @@ class AxParameterWarning(Warning):  # Ensure it's a real warning subclass
 # html_theme = 'sphinxdoc'
 # html_theme = "sphinx_book_theme"
-html_theme = "sphinx_rtd_theme"
+html_theme = "furo"

-html_logo = "./images/libE_logo_white.png"
+# html_logo = "./images/libE_logo_white.png"
 html_favicon = "./images/libE_logo_circle.png"
 html_title = "libEnsemble"

@@ -221,7 +211,12 @@ class AxParameterWarning(Warning):  # Ensure it's a real warning subclass
 # documentation.
 #
 html_theme_options = {
-    "logo_only": True,
+    "announcement": "libEnsemble v2.0 is released, with many new features and changes.",
+    "source_repository": "https://github.com/Libensemble/libensemble/",
+    "source_branch": "main",
+    "source_directory": "docs/",
+    "light_logo": "libE_logo.png",
+    "dark_logo": "libE_logo_white.png",
 }
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
@@ -240,22 +235,6 @@ def setup(app):
     app.connect("autodoc-process-docstring", remove_noqa)


-# Custom sidebar templates, must be a dictionary that maps document names
-# to template names.
-#
-# This is required for the alabaster theme
-# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
-# html_sidebars = {
-#     '**': [
-#         'about.html',
-#         'navigation.html',
-#         'relations.html',  # needs 'show_related': True theme option to display
-#         'searchbox.html',
-#         'donate.html',
-#     ]
-# }
-
-
 # -- Options for HTMLHelp output ------------------------------------------

 # Output file base name for HTML help builder.
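The conf.py hunk above connects a ``remove_noqa`` callback to Sphinx's ``autodoc-process-docstring`` event but does not show its body, which is defined elsewhere in conf.py. A minimal sketch of such a handler might look like the following — the stripping logic here is an illustrative assumption, not libEnsemble's actual implementation, though the ``(app, what, name, obj, options, lines)`` signature is the one Sphinx documents for this event:

```python
def remove_noqa(app, what, name, obj, options, lines):
    """Strip trailing '# noqa' linter directives from autodoc'd docstrings.

    Sphinx passes `lines` as a mutable list of docstring lines;
    handlers for this event must modify it in place.
    """
    for i, line in enumerate(lines):
        if "# noqa" in line:
            # Keep everything before the directive, dropping trailing whitespace
            lines[i] = line.split("# noqa")[0].rstrip()
```

Because the handler only touches the ``lines`` list, it can be exercised without a running Sphinx application by passing placeholder arguments.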
diff --git a/pixi.lock b/pixi.lock
index 72332c8dda..ac568b30a6 100644
--- a/pixi.lock
+++ b/pixi.lock
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:60267c2464fb5cbe166f6a1310fb545e8a629e7ef3bbfe14bef3be3d75a852a4
-size 1018785
+oid sha256:7cd8df93c366ab681af45a1951d5dee0ede9c08e8e02225b526862b2ba62a0d0
+size 1022855
diff --git a/pyproject.toml b/pyproject.toml
index 3307bdb7bd..3909b16c0b 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -134,6 +134,7 @@ ipdb = ">=0.13.13,<0.14"
 mypy = ">=1.19.1,<2"
 types-psutil = ">=6.1.0.20241221,<7"
 types-pyyaml = ">=6.0.12.20250915,<7"
+furo = ">=2025.12.19,<2026"

 [tool.pixi.tasks.build-docs]
 cmd = "cd docs && make html"
@@ -243,7 +244,7 @@ extend-exclude = ["*.bib", "*.xml", "docs/nitpicky"]
 # Initial, permissive mypy configuration for libensemble.
 # Allows incremental adoption. To be tightened in future releases.
 packages = ["libensemble.utils"]
-exclude = 'libensemble/utils/(launcher|loc_stack|runners|pydantic|output_directory)\.py$|libensemble/tests/(regression_tests|functionality_tests|unit_tests|scaling_tests)/.*'
+exclude = 'docs/conf.py$|libensemble/utils/(launcher|loc_stack|runners|pydantic|output_directory)\.py$|libensemble/tests/(regression_tests|functionality_tests|unit_tests|scaling_tests)/.*'
 disable_error_code = ["import-not-found", "import-untyped"]
 ignore_missing_imports = true
 follow_imports = "skip"

From 5e0ac53d800b5183b8c4ab5fff90bddb97a7e205 Mon Sep 17 00:00:00 2001
From: jlnav
Date: Mon, 20 Apr 2026 14:45:32 -0500
Subject: [PATCH 05/34] deps

---
 pixi.lock | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pixi.lock b/pixi.lock
index ac568b30a6..3f15d4eec4 100644
--- a/pixi.lock
+++ b/pixi.lock
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7cd8df93c366ab681af45a1951d5dee0ede9c08e8e02225b526862b2ba62a0d0
+oid sha256:8c1880c602bbe256e0015f88b5384e31d4808177aac9b6179aabc26d9cf546aa
 size 1022855

From 41734b7e0abd3bfdd4703133a65d25888e056ff3 Mon Sep 17 00:00:00 2001
From: jlnav
Date: Tue, 21 Apr 2026 11:52:32 -0500
Subject: [PATCH 06/34] terse-er wording

---
 docs/overview_usecases.rst | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/docs/overview_usecases.rst b/docs/overview_usecases.rst
index b1c31885d2..360ca62ce5 100644
--- a/docs/overview_usecases.rst
+++ b/docs/overview_usecases.rst
@@ -89,19 +89,18 @@ Glossary
     which may include executing tasks or submitting external jobs. Workers
     typically run simulators and return results to the manager.

-  * **Executor**: The executor provides a simple, portable interface for
+  * **Executor**: A simple, portable interface for
     launching and managing tasks (applications). Multiple executors are
     available, including the base ``Executor`` and ``MPIExecutor``.

-  * **Submit**: To enqueue or indicate that one or more jobs or tasks should be
-    launched. When using the libEnsemble Executor, a *submitted* task is either executed
+  * **Submit**: A *submitted* task is either executed
     immediately or queued for execution.

  * **Tasks**: Subprocesses or independent units of work. Tasks result from
    launching external programs for execution using the Executor.

-  * **Resource Manager**: libEnsemble includes a resource manager that can detect
-    (or be provided with) available resources (e.g., a list of nodes). *Resource sets* are
+  * **Resource Manager**: libEnsemble module that detects
+    (or is provided with) available resources (e.g., a list of nodes). *Resource sets* are
     divided among workers and can be dynamically reassigned.
* **Resource Set**: The smallest unit of resources that can be assigned (and From f3d21255b08ab993d61f9aeb078180c06ff834f6 Mon Sep 17 00:00:00 2001 From: jlnav Date: Thu, 23 Apr 2026 09:13:31 -0500 Subject: [PATCH 07/34] tentative update of README example for v2.0 --- README.rst | 26 ++++++++++++++++++-------- 1 file changed, 18 insertions(+), 8 deletions(-) diff --git a/README.rst b/README.rst index 5f4935f941..06bc0d94b0 100644 --- a/README.rst +++ b/README.rst @@ -41,39 +41,49 @@ Basic Usage =========== Create an ``Ensemble``, then customize it with general settings, simulation and generator parameters, -and an exit condition. Run the following four-worker example via ``python this_file.py``: +and an exit condition. .. code-block:: python import numpy as np + from gest_api.vocs import VOCS from libensemble import Ensemble - from libensemble.gen_funcs.sampling import uniform_random_sample + from libensemble.gen_classes.sampling import UniformSample from libensemble.sim_funcs.six_hump_camel import six_hump_camel from libensemble.specs import ExitCriteria, GenSpecs, LibeSpecs, SimSpecs if __name__ == "__main__": + # Define problem using VOCS + vocs = VOCS( + variables={"x": [-3, 3], "y": [-2, 2]}, + objectives={"f": "MINIMIZE"}, + ) + # General settings libE_specs = LibeSpecs(nworkers=4) + # Simulation parameters sim_specs = SimSpecs( sim_f=six_hump_camel, inputs=["x"], outputs=[("f", float)], ) + # Generator parameters (standardized generator) gen_specs = GenSpecs( - gen_f=uniform_random_sample, + generator=UniformSample(vocs), + inputs=["sim_id"], + persis_in=["x", "f"], outputs=[("x", float, 2)], - user={ - "gen_batch_size": 50, - "lb": np.array([-3, -2]), - "ub": np.array([3, 2]), - }, + vocs=vocs, + user={"gen_batch_size": 50}, ) + # Exit criteria exit_criteria = ExitCriteria(sim_max=100) + # Create and run ensemble sampling = Ensemble( libE_specs=libE_specs, sim_specs=sim_specs, From ba2d6624ec90b996587dfe116837bae091e44a8c Mon Sep 17 00:00:00 2001 From: 
jlnav Date: Thu, 23 Apr 2026 09:16:20 -0500 Subject: [PATCH 08/34] remove alloc mention from overview_usecases. it's too deep, too much info, too early --- docs/overview_usecases.rst | 9 --------- 1 file changed, 9 deletions(-) diff --git a/docs/overview_usecases.rst b/docs/overview_usecases.rst index 360ca62ce5..f6a6a7c28a 100644 --- a/docs/overview_usecases.rst +++ b/docs/overview_usecases.rst @@ -24,15 +24,6 @@ can launch and monitor external applications. All simulations and generated values are recorded in a NumPy structured array called the :ref:`history array`. -Allocator Function -~~~~~~~~~~~~~~~~~~ - -* :ref:`allocator`: Decides whether a simulator or generator should be - invoked (and with what inputs/resources) as workers become available - -The default allocator is appropriate for the majority of use cases but can be customized -for users interested in more advanced allocation strategies. - Example Use Cases ~~~~~~~~~~~~~~~~~ .. begin_usecases_rst_tag From 64177b74e5bcd9cef275fa5db84d55bdc3b6f4c9 Mon Sep 17 00:00:00 2001 From: jlnav Date: Thu, 23 Apr 2026 09:21:08 -0500 Subject: [PATCH 09/34] fixes to ensemble.py docstring --- libensemble/ensemble.py | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/libensemble/ensemble.py b/libensemble/ensemble.py index 009f99f0a2..cd1f32408f 100644 --- a/libensemble/ensemble.py +++ b/libensemble/ensemble.py @@ -47,6 +47,7 @@ class Ensemble: from libensemble.specs import ExitCriteria, GenSpecs, SimSpecs sampling = Ensemble(parse_args=True) + sampling.sim_specs = SimSpecs( sim_f=norm_eval, inputs=["x"], @@ -61,7 +62,7 @@ class Ensemble: generator = UniformSample(vocs=vocs) sampling.gen_specs = GenSpecs( - gen_f=generator, + generator=generator, batch_size=50, ) @@ -79,13 +80,14 @@ class Ensemble: :linenos: from libensemble import Ensemble + from libensemble.specs import SimSpecs from my_simulator import sim_find_energy - sim_specs = { - "sim_f": sim_find_energy, - "in": ["x"], - "out": 
[("y", float)], - } + sim_specs = SimSpecs( + sim_f=sim_find_energy, + inputs=["x"], + outputs=[("y", float)], + ) experiment = Ensemble(sim_specs=sim_specs) @@ -94,7 +96,8 @@ class Ensemble: .. code-block:: python :linenos: - from libensemble import Ensemble, SimSpecs + from libensemble import Ensemble + from libensemble.specs import SimSpecs from my_simulator import sim_find_energy sim_specs = SimSpecs( From d4934187eeb191659c7053ba256bd86939faf17d Mon Sep 17 00:00:00 2001 From: jlnav Date: Thu, 23 Apr 2026 10:12:57 -0500 Subject: [PATCH 10/34] claude-assisted dramatic improvement towards ensemble.ready() --- libensemble/ensemble.py | 77 ++++++++++++++- libensemble/tests/unit_tests/test_ensemble.py | 93 +++++++++++++++++++ 2 files changed, 165 insertions(+), 5 deletions(-) diff --git a/libensemble/ensemble.py b/libensemble/ensemble.py index cd1f32408f..f963e71d1b 100644 --- a/libensemble/ensemble.py +++ b/libensemble/ensemble.py @@ -121,7 +121,7 @@ class Ensemble: Specifications for the generator. - exit_criteria: class:`ExitCriteria`, Optional + exit_criteria: :class:`ExitCriteria` Tell libEnsemble when to stop a run. @@ -140,7 +140,7 @@ class Ensemble: executor: :class:`Executor`, Optional - libEnsemble Executor instance for use within simulation or generator functions. + libEnsemble Executor instance for use within simulator functions or generators. H0: `NumPy structured array `_, Optional @@ -209,9 +209,76 @@ def _parse_args(self) -> tuple[int, bool, LibeSpecs]: return self.nworkers, self.is_manager, self._libE_specs - def ready(self) -> bool: - """Quickly verify that all necessary data has been provided""" - return all([i for i in [self.exit_criteria, self._libE_specs, self.sim_specs]]) + def ready(self) -> tuple[bool, list[str]]: + """Verify that all necessary data has been provided before calling :meth:`run`.
+ + Performs a pre-flight check on the ensemble configuration, covering: + + - A simulation callable (``sim_f`` or ``simulator``) is set on ``sim_specs``. + - At least one exit condition is configured on ``exit_criteria``. + - Workers are available (``nworkers > 0`` for local/threads/tcp comms, + or MPI comms is set, which infers workers from the MPI communicator). + - If both ``gen_specs`` and ``sim_specs`` use the classic field-name interface, + the generator output field names are a superset of the simulator input field names. + + Returns + ------- + tuple[bool, list[str]] + A 2-tuple of ``(is_ready, issues)``. + ``is_ready`` is ``True`` when all checks pass. + ``issues`` is a list of human-readable strings describing each problem found; + it is empty when ``is_ready`` is ``True``. + + Example + ------- + .. code-block:: python + + ok, issues = sampling.ready() + if not ok: + for issue in issues: + print(f" - {issue}") + """ + issues: list[str] = [] + + # --- sim_specs: a callable must be set --- + sim_callable = getattr(self.sim_specs, "sim_f", None) or getattr(self.sim_specs, "simulator", None) + if not sim_callable: + issues.append( + "sim_specs is missing a callable: set 'sim_f' (a function) or 'simulator' (a gest-api object)." + ) + + # --- exit_criteria: at least one stop condition must be set --- + ec = self.exit_criteria + if ec is None or not any( + getattr(ec, field, None) is not None for field in ("sim_max", "gen_max", "wallclock_max", "stop_val") + ): + issues.append( + "exit_criteria has no stop condition: set at least one of " + "'sim_max', 'gen_max', 'wallclock_max', or 'stop_val'." + ) + + # --- workers: must be determinable --- + comms = getattr(self._libE_specs, "comms", "mpi") + if comms in ("local", "threads", "tcp"): + if not self.nworkers: + issues.append( + f"libE_specs.comms is '{comms}' but 'nworkers' is not set. " + "Set 'libE_specs.nworkers' or pass '--nworkers N' on the command line." 
+ ) + # For 'mpi', worker count is derived from the MPI communicator at runtime; no check needed here. + + # --- cross-spec field consistency (classic interface only) --- + gen_outputs = [f[0] for f in (getattr(self.gen_specs, "outputs", None) or [])] + sim_inputs = getattr(self.sim_specs, "inputs", None) or [] + if gen_outputs and sim_inputs: + missing = [field for field in sim_inputs if field not in gen_outputs] + if missing: + issues.append( + f"sim_specs.inputs requests field(s) {missing} that are not produced " + f"by gen_specs.outputs {gen_outputs}. Check that field names match." + ) + + return not issues, issues @property def libE_specs(self) -> LibeSpecs: diff --git a/libensemble/tests/unit_tests/test_ensemble.py b/libensemble/tests/unit_tests/test_ensemble.py index 59c5fbb6a2..0070c0b722 100644 --- a/libensemble/tests/unit_tests/test_ensemble.py +++ b/libensemble/tests/unit_tests/test_ensemble.py @@ -180,6 +180,94 @@ def test_local_comms_without_nworkers(): assert not flag, "'local' ensemble without nworkers should not be created" +def test_ready_missing_sim_callable(): + """ready() should flag a missing sim callable.""" + from libensemble.ensemble import Ensemble + from libensemble.specs import ExitCriteria, LibeSpecs, SimSpecs + + e = Ensemble( + libE_specs=LibeSpecs(comms="local", nworkers=4), + sim_specs=SimSpecs(), # no sim_f or simulator + exit_criteria=ExitCriteria(sim_max=10), + ) + ok, issues = e.ready() + assert not ok, "Should not be ready without a sim callable" + assert any("sim_f" in msg for msg in issues), f"Expected sim_f mention in issues: {issues}" + + +def test_ready_missing_exit_criteria(): + """ready() should flag an exit_criteria with no stop condition.""" + from libensemble.ensemble import Ensemble + from libensemble.sim_funcs.simple_sim import norm_eval + from libensemble.specs import ExitCriteria, LibeSpecs, SimSpecs + + e = Ensemble( + libE_specs=LibeSpecs(comms="local", nworkers=4), + sim_specs=SimSpecs(sim_f=norm_eval), + 
exit_criteria=ExitCriteria(), # nothing set + ) + ok, issues = e.ready() + assert not ok, "Should not be ready with no exit condition" + assert any("exit_criteria" in msg for msg in issues), f"Expected exit_criteria mention in issues: {issues}" + + +def test_ready_missing_nworkers_local(): + """ready() should flag local comms without nworkers.""" + from libensemble.ensemble import Ensemble + from libensemble.sim_funcs.simple_sim import norm_eval + from libensemble.specs import ExitCriteria, LibeSpecs, SimSpecs + + # Bypass the constructor ValueError by using mpi comms first, + # then patch to local after construction. + e = Ensemble( + libE_specs=LibeSpecs(comms="mpi"), + sim_specs=SimSpecs(sim_f=norm_eval), + exit_criteria=ExitCriteria(sim_max=10), + ) + # Manually force comms=local and nworkers=0 on the internal specs object + e._libE_specs.comms = "local" + e._nworkers = 0 + e._libE_specs.nworkers = 0 + + ok, issues = e.ready() + assert not ok, "Should not be ready with local comms and no nworkers" + assert any("nworkers" in msg for msg in issues), f"Expected nworkers mention in issues: {issues}" + + +def test_ready_field_mismatch(): + """ready() should flag when sim_specs.inputs requests fields not in gen_specs.outputs.""" + from libensemble.ensemble import Ensemble + from libensemble.sim_funcs.simple_sim import norm_eval + from libensemble.specs import ExitCriteria, GenSpecs, LibeSpecs, SimSpecs + + e = Ensemble( + libE_specs=LibeSpecs(comms="local", nworkers=4), + sim_specs=SimSpecs(sim_f=norm_eval, inputs=["x", "z"]), + gen_specs=GenSpecs(outputs=[("x", float, (1,))]), # missing "z" + exit_criteria=ExitCriteria(sim_max=10), + ) + ok, issues = e.ready() + assert not ok, "Should not be ready with mismatched gen/sim fields" + assert any("z" in msg for msg in issues), f"Expected missing field 'z' in issues: {issues}" + + +def test_ready_happy_path(): + """ready() should return (True, []) for a fully configured ensemble.""" + from libensemble.ensemble import 
Ensemble + from libensemble.sim_funcs.simple_sim import norm_eval + from libensemble.specs import ExitCriteria, GenSpecs, LibeSpecs, SimSpecs + + e = Ensemble( + libE_specs=LibeSpecs(comms="local", nworkers=4), + sim_specs=SimSpecs(sim_f=norm_eval, inputs=["x"], outputs=[("f", float)]), + gen_specs=GenSpecs(outputs=[("x", float, (1,))]), + exit_criteria=ExitCriteria(sim_max=10), + ) + ok, issues = e.ready() + assert ok, f"Should be ready but got issues: {issues}" + assert issues == [], f"Issues should be empty but got: {issues}" + + if __name__ == "__main__": test_ensemble_init() test_ensemble_parse_args_false() @@ -188,3 +276,8 @@ def test_local_comms_without_nworkers(): test_ensemble_specs_update_libE_specs() test_ensemble_prevent_comms_overwrite() test_local_comms_without_nworkers() + test_ready_missing_sim_callable() + test_ready_missing_exit_criteria() + test_ready_missing_nworkers_local() + test_ready_field_mismatch() + test_ready_happy_path() From 117dc47be2357f33126a0b6697361fbc297ebbe1 Mon Sep 17 00:00:00 2001 From: jlnav Date: Thu, 23 Apr 2026 10:21:40 -0500 Subject: [PATCH 11/34] monospacing adjusts --- libensemble/ensemble.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/libensemble/ensemble.py b/libensemble/ensemble.py index f963e71d1b..24b47d72b0 100644 --- a/libensemble/ensemble.py +++ b/libensemble/ensemble.py @@ -380,14 +380,14 @@ def nworkers(self, value): def save_output(self, basename: str, append_attrs: bool = True): """ Writes out History array and persis_info to files. - If using a workflow_dir, will place with specified filename in that directory. + If using a ``workflow_dir_path`` in ``libE_specs``, the files are placed in that directory.
Parameters ---------- Format: ``_results_History_length=_evals=_ranks=`` - To have the filename be only the basename, set append_attrs=False + To have the filename be only the basename, set ``append_attrs=False`` Format: ``_results_History_length=_evals=_ranks=`` """ From 5bb30fb0bf10d0a73eb879a1253c06045d9a86aa Mon Sep 17 00:00:00 2001 From: jlnav Date: Thu, 23 Apr 2026 15:53:32 -0500 Subject: [PATCH 12/34] Claude-audited specs docs. Specify standardized sims/gens approaches --- docs/data_structures/alloc_specs.rst | 12 +----- docs/data_structures/gen_specs.rst | 61 +++++++++++++++++++--------- docs/data_structures/libE_specs.rst | 23 ++++------- docs/data_structures/sim_specs.rst | 49 +++++++++++++++------- libensemble/specs.py | 4 +- 5 files changed, 88 insertions(+), 61 deletions(-) diff --git a/docs/data_structures/alloc_specs.rst b/docs/data_structures/alloc_specs.rst index 159b9eacaf..f29c0e5a2d 100644 --- a/docs/data_structures/alloc_specs.rst +++ b/docs/data_structures/alloc_specs.rst @@ -19,7 +19,7 @@ Can be constructed and passed to libEnsemble as a Python class or a dictionary. * libEnsemble uses the following defaults if the user doesn't provide their own ``alloc_specs``: .. literalinclude:: ../../libensemble/specs.py - :start-at: alloc_f: Callable = start_only_persistent + :start-at: alloc_f: object = only_persistent_gens :end-before: end_alloc_tag :caption: Default settings for alloc_specs @@ -31,14 +31,4 @@ Can be constructed and passed to libEnsemble as a Python class or a dictionary. my_new_alloc = AllocSpecs() my_new_alloc.alloc_f = another_function -.. seealso:: - - `test_uniform_sampling_one_residual_at_a_time.py`_ specifies fields - to be used by the allocation function ``give_sim_work_first`` from - fast_alloc_and_pausing.py_. - - .. literalinclude:: ../../libensemble/tests/functionality_tests/test_uniform_sampling_one_residual_at_a_time.py - :start-at: alloc_specs - :end-before: end_alloc_specs_rst_tag - -.. 
_fast_alloc_and_pausing.py: https://github.com/Libensemble/libensemble/blob/develop/libensemble/alloc_funcs/fast_alloc_and_pausing.py .. _test_uniform_sampling_one_residual_at_a_time.py: https://github.com/Libensemble/libensemble/blob/develop/libensemble/tests/functionality_tests/test_uniform_sampling_one_residual_at_a_time.py diff --git a/docs/data_structures/gen_specs.rst b/docs/data_structures/gen_specs.rst index b3364e53f7..4552cf8863 100644 --- a/docs/data_structures/gen_specs.rst +++ b/docs/data_structures/gen_specs.rst @@ -5,26 +5,47 @@ Generator Specs Used to specify the generator, its inputs and outputs, and user data. -.. code-block:: python - :linenos: - - ... - import numpy as np - from libensemble import GenSpecs - from generator import gen_random_sample - - ... - - gen_specs = GenSpecs( - gen_f=gen_random_sample, - outputs=[("x", float, (1,))], - user={ - "lower": np.array([-3]), - "upper": np.array([3]), - "gen_batch_size": 5, - }, - ) - ... +.. tab-set:: + + .. tab-item:: Standardized (gest-api) + + .. code-block:: python + :linenos: + + from libensemble import GenSpecs + from libensemble.gen_classes import UniformSample + from gest_api.vocs import VOCS + + vocs = VOCS( + variables={"x": [-3.0, 3.0]}, + objectives={"y": "MINIMIZE"}, + ) + + gen_specs = GenSpecs( + generator=UniformSample(vocs), + vocs=vocs, + ) + ... + + .. tab-item:: Classic (gen_f) + + .. code-block:: python + :linenos: + + import numpy as np + from libensemble import GenSpecs + from generator import gen_random_sample + + gen_specs = GenSpecs( + gen_f=gen_random_sample, + outputs=[("x", float, (1,))], + user={ + "lower": np.array([-3]), + "upper": np.array([3]), + "gen_batch_size": 5, + }, + ) + ... .. 
autopydantic_model:: libensemble.specs.GenSpecs :model-show-json: False diff --git a/docs/data_structures/libE_specs.rst b/docs/data_structures/libE_specs.rst index 1a31b41b8d..a45635416b 100644 --- a/docs/data_structures/libE_specs.rst +++ b/docs/data_structures/libE_specs.rst @@ -19,7 +19,7 @@ libEnsemble is primarily customized by setting options within a ``LibeSpecs`` in .. tab-item:: General **comms** [str] = ``"mpi"``: - Manager/Worker communications mode: ``'mpi'``, ``'local'``, or ``'tcp'``. + Manager/Worker communications mode: ``'mpi'``, ``'local'``, ``'threads'``, or ``'tcp'``. If ``nworkers`` is specified, then ``local`` comms will be used unless a parallel MPI environment is detected. @@ -156,8 +156,8 @@ libEnsemble is primarily customized by setting options within a ``LibeSpecs`` in **profile** [bool] = ``False``: Profile manager and worker logic using ``cProfile``. - **safe_mode** [bool] = ``True``: - Prevents user functions from overwriting internal fields, but requires moderate overhead. + **safe_mode** [bool] = ``False``: + Prevents user functions from overwriting protected History fields, but requires moderate overhead. **stats_fmt** [dict]: A dictionary of options for formatting ``"libE_stats.txt"``. @@ -199,14 +199,14 @@ libEnsemble is primarily customized by setting options within a ``LibeSpecs`` in **save_H_and_persis_on_abort** [bool] = ``True``: Save states of ``H`` and ``persis_info`` to file on aborting after an exception. - **save_H_on_completion** bool | None = ``False`` + **save_H_on_completion** [bool] = ``False``: Save state of ``H`` to file upon completing a workflow. Also enabled when either ``save_every_k_sims`` or ``save_every_k_gens`` is set. - **save_H_with_date** bool | None = ``False`` - Save ``H`` filename contains date and timestamp. + **save_H_with_date** [bool] = ``False``: + ``H`` filename contains date and timestamp. 
- **H_file_prefix** str | None = ``"libE_history"`` + **H_file_prefix** [str] = ``"libE_history"``: Prefix for ``H`` filename. **use_persis_return_gen** [bool] = ``False``: @@ -255,8 +255,8 @@ libEnsemble is primarily customized by setting options within a ``LibeSpecs`` in By default the GPUs on each node are treated as a group. **use_tiles_as_gpus** [bool] = ``False``: - If ``True`` then treat a GPU tile as one GPU, assuming - ``tiles_per_GPU`` is provided in ``platform_specs`` or detected. + If ``True`` then treat a GPU tile as one GPU when GPU tiles + are provided in ``platform_specs`` or auto-detected. **enforce_worker_core_bounds** [bool] = ``False``: Permit submission of tasks with a @@ -268,11 +268,6 @@ libEnsemble is primarily customized by setting options within a ``LibeSpecs`` in Instructs libEnsemble’s MPI executor not to run applications on nodes where libEnsemble processes (manager and workers) are running. - **zero_resource_workers** [list of ints]: - List of workers (by IDs) that require no resources. For when a fixed mapping of workers - to resources is required. Otherwise, use ``num_resource_sets``. - For use with supported allocation functions. - **resource_info** [dict]: Provide resource information that will override automatically detected resources. The allowable fields are given below in "Overriding Resource Auto-Detection" diff --git a/docs/data_structures/sim_specs.rst b/docs/data_structures/sim_specs.rst index 9a023f5491..45740075bc 100644 --- a/docs/data_structures/sim_specs.rst +++ b/docs/data_structures/sim_specs.rst @@ -3,24 +3,45 @@ Simulation Specs ================ -Used to specify the simulation, its inputs and outputs, and user data. +Used to specify the simulation function, its inputs and outputs, and user data. -.. code-block:: python - :linenos: +.. tab-set:: - ... - from libensemble import SimSpecs - from simulator import sim_find_sine + .. tab-item:: Standardized (gest-api) - ... + .. 
code-block:: python + :linenos: - sim_specs = SimSpecs( - sim_f=sim_find_sine, - inputs=["x"], - outputs=[("y", float)], - user={"batch": 1234}, - ) - ... + from libensemble import SimSpecs + from gest_api.vocs import VOCS + from my_package import my_sim_callable + + vocs = VOCS( + variables={"x": [-3.0, 3.0]}, + objectives={"y": "MINIMIZE"}, + ) + + sim_specs = SimSpecs( + simulator=my_sim_callable, + vocs=vocs, + ) + ... + + .. tab-item:: Classic (sim_f) + + .. code-block:: python + :linenos: + + from libensemble import SimSpecs + from simulator import sim_find_sine + + sim_specs = SimSpecs( + sim_f=sim_find_sine, + inputs=["x"], + outputs=[("y", float)], + user={"batch": 1234}, + ) + ... .. autopydantic_model:: libensemble.specs.SimSpecs :model-show-json: False diff --git a/libensemble/specs.py b/libensemble/specs.py index 926590b478..d983948259 100644 --- a/libensemble/specs.py +++ b/libensemble/specs.py @@ -88,8 +88,8 @@ class SimSpecs(BaseModel): simulator: object | None = None """ - A pre-initialized simulator object or callable in gest-api format. - When provided, sim_f defaults to gest_api_sim wrapper. + A callable (function) in gest-api format. + When provided, ``sim_f`` defaults to the ``gest_api_sim`` wrapper. """ inputs: list[str] | None = Field(default=[], alias="in") From 1504c232f2ca1012b2438045d24e16cc61245755 Mon Sep 17 00:00:00 2001 From: jlnav Date: Thu, 23 Apr 2026 16:33:15 -0500 Subject: [PATCH 13/34] remove summit docs. move Resources and History sections to Additional Resources. 
rewrite aposmm tutorial for 2.0 --- docs/data_structures/data_structures.rst | 2 +- docs/index.rst | 2 + docs/platforms/platforms_index.rst | 1 - docs/platforms/summit.rst | 206 -------------------- docs/programming_libE.rst | 2 - docs/resource_manager/resources_index.rst | 6 +- docs/tutorials/aposmm_tutorial.rst | 217 ++++++---------------- 7 files changed, 59 insertions(+), 377 deletions(-) delete mode 100644 docs/platforms/summit.rst diff --git a/docs/data_structures/data_structures.rst b/docs/data_structures/data_structures.rst index 35a5ba0158..423010feb4 100644 --- a/docs/data_structures/data_structures.rst +++ b/docs/data_structures/data_structures.rst @@ -11,7 +11,7 @@ See :ref:`here` for instruction on constructing a complete workflow libE_specs gen_specs sim_specs + exit_criteria alloc_specs platform_specs persis_info - exit_criteria diff --git a/docs/index.rst b/docs/index.rst index 2a2c40075e..9428baf2d8 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -43,6 +43,8 @@ :maxdepth: 1 :caption: Additional References: + function_guides/history_array + resource_manager/resources_index FAQ known_issues release_notes diff --git a/docs/platforms/platforms_index.rst b/docs/platforms/platforms_index.rst index 591ed445db..f679c36e56 100644 --- a/docs/platforms/platforms_index.rst +++ b/docs/platforms/platforms_index.rst @@ -238,7 +238,6 @@ libEnsemble on specific HPC systems. improv perlmutter polaris - summit srun example_scripts diff --git a/docs/platforms/summit.rst b/docs/platforms/summit.rst deleted file mode 100644 index aed321f8e2..0000000000 --- a/docs/platforms/summit.rst +++ /dev/null @@ -1,206 +0,0 @@ -======================= -Summit (Decommissioned) -======================= - -Summit_ was an IBM AC922 system located at the Oak Ridge Leadership Computing -Facility (OLCF). Each of the approximately 4,600 compute nodes on Summit contained two -IBM POWER9 processors and six NVIDIA Volta V100 accelerators. 
- -Summit featured three tiers of nodes: login, launch, and compute nodes. - -Users on login nodes submit batch runs to the launch nodes. -Batch scripts and interactive sessions run on the launch nodes. Only the launch -nodes can submit MPI runs to the compute nodes via ``jsrun``. - -These docs are maintained to guide libEnsemble's usage on three-tier systems and/or -`jsrun` systems similar to Summit. - -Configuring Python ------------------- - -Begin by loading the Python 3 Anaconda module:: - - $ module load python - -You can now create and activate your own custom conda_ environment:: - - conda create --name myenv python=3.11 - export PYTHONNOUSERSITE=1 # Make sure get python from conda env - . activate myenv - -If you are installing any packages with extensions, ensure that the correct compiler -module is loaded. If using mpi4py_, this must be installed from source, -referencing the compiler. Currently, mpi4py must be built with gcc:: - - module load gcc - -With your environment activated, run :: - - CC=mpicc MPICC=mpicc pip install mpi4py --no-binary mpi4py - -Installing libEnsemble ----------------------- - -Obtaining libEnsemble is now as simple as ``pip install libensemble``. -Your prompt should be similar to the following line: - -.. code-block:: console - - (my_env) user@login5:~$ pip install libensemble - -.. note:: - If you encounter pip errors, run ``python -m pip install --upgrade pip`` first - -Or, you can install via ``conda``: - -.. code-block:: console - - (my_env) user@login5:~$ conda config --add channels conda-forge - (my_env) user@login5:~$ conda install -c conda-forge libensemble - -See :doc:`here<../advanced_installation>` for more information on advanced options -for installing libEnsemble. 
-Special note on resource sets and Executor submit options - ---------------------------------------------------------- - -When using the portable MPI run configuration options (e.g., num_nodes) to the -:doc:`MPIExecutor<../executor/mpi_executor>` ``submit`` function, it is important -to note that, due to the resource sets used on Summit, the options refer to -resource sets as follows: - -- num_procs (int, optional) – The total number resource sets for this run. - -- num_nodes (int, optional) – The number of nodes on which to submit the run. - -- procs_per_node (int, optional) – The number of resource sets per node. - -It is recommended that the user defines a resource set as the minimal configuration -of CPU cores/processes and GPUs. These can be added to the ``extra_args`` option -of the *submit* function. Alternatively, the portable options can be ignored and -everything expressed in ``extra_args``. - -For example, the following *jsrun* line would run three resource sets, -each having one core (with one process), and one GPU, along with some extra options:: - - jsrun -n 3 -a 1 -g 1 -c 1 --bind=packed:1 --smpiargs="-gpu" - -To express this line in the ``submit`` function may look -something like the following:: - - exctr = Executor.executor - task = exctr.submit(app_name="mycode", - num_procs=3, - extra_args="-a 1 -g 1 -c 1 --bind=packed:1 --smpiargs="-gpu"" - app_args="-i input") - -This would be equivalent to:: - - exctr = Executor.executor - task = exctr.submit(app_name="mycode", - extra_args="-n 3 -a 1 -g 1 -c 1 --bind=packed:1 --smpiargs="-gpu"" - app_args="-i input") - -The libEnsemble resource manager works out the resources available to each worker, -but unlike some other systems, ``jsrun`` on Summit dynamically schedules runs to -available slots across and within nodes. It can also queue tasks. This allows variable -size runs to easily be handled on Summit. 
If oversubscription to the `jsrun` system -is desired, then libEnsemble's resource manager can be disabled in the -calling script via:: - - libE_specs["disable_resource_manager"] = True - -In the above example, the task being submitted used three GPUs, which is half those -available on a Summit node, and thus two such tasks may be allocated to each node -(from different workers), if they were running at the same time. - -Job Submission --------------- - -Summit used LSF_ for job management and submission. For libEnsemble, the most -important command is ``bsub`` for submitting batch scripts from the login nodes -to execute on the launch nodes. - -It is recommended to run libEnsemble on the launch nodes (assuming workers are -submitting MPI applications) using the ``local`` communications mode (multiprocessing). - -Interactive Runs -^^^^^^^^^^^^^^^^ - -You can run interactively with ``bsub`` by specifying the ``-Is`` flag, -similarly to the following:: - - $ bsub -W 30 -P [project] -nnodes 8 -Is - -This will place you on a launch node. - -.. note:: - You will need to reactivate your conda virtual environment. - -Batch Runs -^^^^^^^^^^ - -Batch scripts specify run settings using ``#BSUB`` statements. The following -simple example depicts configuring and launching libEnsemble to a launch node with -multiprocessing. This script also assumes the user is using the ``parse_args()`` -convenience function from libEnsemble's :doc:`tools module<../utilities>`. - -.. code-block:: bash - - #!/bin/bash -x - #BSUB -P - #BSUB -J libe_mproc - #BSUB -W 60 - #BSUB -nnodes 128 - #BSUB -alloc_flags "smt1" - - # --- Prepare Python --- - - # Load conda module and gcc. 
- module load python - module load gcc - - # Name of conda environment - export CONDA_ENV_NAME=my_env - - # Activate conda environment - export PYTHONNOUSERSITE=1 - source activate $CONDA_ENV_NAME - - # --- Prepare libEnsemble --- - - # Name of calling script - export EXE=calling_script.py - - # Communication Method - export COMMS="--comms local" - - # Number of workers. - export NWORKERS="--nworkers 128" - - hash -r # Check no commands hashed (pip/python...) - - # Launch libE - python $EXE $COMMS $NWORKERS > out.txt 2>&1 - -With this saved as ``myscript.sh``, allocating, configuring, and queueing -libEnsemble on Summit is achieved by running :: - - $ bsub myscript.sh - -Example submission scripts are also given in the :doc:`examples`. - -Launching User Applications from libEnsemble Workers ----------------------------------------------------- - -Only the launch nodes can submit MPI runs to the compute nodes via ``jsrun``. -This can be accomplished in user simulator functions directly. However, it is highly -recommended that the :doc:`Executor<../executor/ex_index>` interface -be used inside the simulator or generator, because this provides a portable interface -with many advantages including automatic resource detection, portability, -launch failure resilience, and ease of use. - -.. _conda: https://conda.io/en/latest/ -.. _LSF: https://www.olcf.ornl.gov/wp-content/uploads/2018/12/summit_workshop_fuson.pdf -.. _mpi4py: https://mpi4py.readthedocs.io/en/stable/ -.. _Summit: https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit/ diff --git a/docs/programming_libE.rst b/docs/programming_libE.rst index e385ff91f8..03e8e97fb6 100644 --- a/docs/programming_libE.rst +++ b/docs/programming_libE.rst @@ -8,8 +8,6 @@ Constructing Workflows libe_module data_structures/data_structures history_output_logging - function_guides/history_array - resource_manager/resources_index .. 
toctree:: :caption: Writing User Functions: diff --git a/docs/resource_manager/resources_index.rst b/docs/resource_manager/resources_index.rst index 5ab1f951b3..1802d13872 100644 --- a/docs/resource_manager/resources_index.rst +++ b/docs/resource_manager/resources_index.rst @@ -7,9 +7,7 @@ libEnsemble comes with built-in resource management. This entails the detection of available resources (e.g., nodelists, core counts, and GPUs), and the allocation of resources to workers. -Resource management can be disabled by setting -``libE_specs["disable_resource_manager"] = True``. This will prevent libEnsemble -from doing any resource detection or management. +It can be disabled by setting ``libE_specs["disable_resource_manager"] = True``. .. toctree:: :maxdepth: 2 @@ -19,4 +17,4 @@ from doing any resource detection or management. overview resource_detection scheduler_module - Worker Resources Module (query resources for current worker) + worker_resources diff --git a/docs/tutorials/aposmm_tutorial.rst b/docs/tutorials/aposmm_tutorial.rst index 0837df276e..4f959cb77b 100644 --- a/docs/tutorials/aposmm_tutorial.rst +++ b/docs/tutorials/aposmm_tutorial.rst @@ -26,35 +26,20 @@ below: :align: center Create a new Python file named ``six_hump_camel.py``. This will be our -``sim_f``, incorporating the above function. Write the following: +simulator callable, incorporating the above function. Write the following: .. code-block:: python :linenos: - import numpy as np - - - def six_hump_camel(H, _, sim_specs): - """Six-Hump Camel sim_f.""" - - batch = len(H["x"]) # Num evaluations each sim_f call. 
- H_o = np.zeros(batch, dtype=sim_specs["out"]) # Define output array H - - for i, x in enumerate(H["x"]): - H_o["f"][i] = six_hump_camel_func(x) # Function evaluations placed into H - - return H_o - - def six_hump_camel_func(x): """Six-Hump Camel function definition""" - x1 = x[0] - x2 = x[1] + x1 = x["x1"] + x2 = x["x2"] term1 = (4 - 2.1 * x1**2 + (x1**4) / 3) * x1**2 term2 = x1 * x2 term3 = (-4 + 4 * x2**2) * x2**2 - return term1 + term2 + term3 + return {"f": term1 + term2 + term3} APOSMM Operations ----------------- @@ -100,160 +85,83 @@ Throughout, generated and evaluated points are appended to the ``"local_pt"`` being ``True`` if the point is part of a local optimization run, and ``"local_min"`` being ``True`` if the point has been ruled a local minimum. -APOSMM Persistence ------------------- - -APOSMM is implemented as a Persistent generator. A single worker process initiates -APOSMM so that it "persists" the course of a given libEnsemble run. - -APOSMM begins its own concurrent optimization runs, each of which independently -produces a linear sequence of points trying to find a local minimum. These -points are given to workers and evaluated by simulation routines. - -If there are more workers than optimization runs at any iteration of the -generator, additional random sample points are generated to keep the workers -busy. - -In practice, since a single worker becomes "persistent" for APOSMM, users -should initiate one more worker than the number of parallel simulations:: - - python my_aposmm_routine.py --nworkers 4 - -results in three workers running simulations and one running APSOMM. - -If running libEnsemble using `mpi4py` communications, enough MPI ranks should be -given to support libEnsemble's manager, a persistent worker to run APOSMM, and -simulation routines. The following:: - - mpiexec -n 3 python my_aposmm_routine.py - -results in only one worker process to perform simulation evaluations. 
- Calling Script -------------- -Create a new Python file named ``my_first_aposmm.py``. Start by importing NumPy, -libEnsemble routines, APOSMM, our ``sim_f``, and a specialized allocation -function: +Create a new Python file named ``my_first_aposmm.py``. Start by importing +libEnsemble classes, APOSMM, and our simulator callable: .. code-block:: python :linenos: - import numpy as np + from six_hump_camel import six_hump_camel_func - from six_hump_camel import six_hump_camel + import libensemble.gen_funcs + + libensemble.gen_funcs.rc.aposmm_optimizers = "scipy" - from libensemble.libE import libE - from libensemble.gen_funcs.persistent_aposmm import aposmm - from libensemble.alloc_funcs.persistent_aposmm_alloc import persistent_aposmm_alloc - from libensemble.tools import parse_args + from libensemble import Ensemble + from libensemble.gen_classes import APOSMM + from gest_api.vocs import VOCS + from libensemble.specs import SimSpecs, GenSpecs, ExitCriteria -This allocation function starts a single Persistent APOSMM routine and provides -``sim_f`` output for points requested by APOSMM. Points can be sampled points -or points from local optimization runs. +APOSMM supports a wide variety of external optimizers. The ``rc.aposmm_optimizers`` +statement above indicates to APOSMM which optimization method package to use, +helping prevent unnecessary imports or package installations. -APOSMM supports a wide variety of external optimizers. The following statements -set optimizer settings to ``"scipy"`` to indicate to APOSMM which optimization -method to use, and help prevent unnecessary imports or package installations: +Next, initialize the ``Ensemble`` and define our variables and objectives using +a ``VOCS`` object: .. 
code-block:: python :linenos: - import libensemble.gen_funcs + if __name__ == "__main__": + workflow = Ensemble(parse_args=True) - libensemble.gen_funcs.rc.aposmm_optimizers = "scipy" + vocs = VOCS( + variables={"x1": [-2, 2], "x2": [-1, 1], "x1_on_cube": [-2, 2], "x2_on_cube": [-1, 1]}, + objectives={"f": "MINIMIZE"}, + ) + +Notice the addition of ``x1_on_cube`` and ``x2_on_cube``. APOSMM requires variables scaled to the unit cube internally. By defining both sets of variables, APOSMM can translate between our actual domain and its internal domain. -Set up :doc:`parse_args()<../utilities>`, -our :doc:`sim_specs<../data_structures/sim_specs>`, -:doc:`gen_specs<../data_structures/gen_specs>`, -and :doc:`alloc_specs<../data_structures/alloc_specs>`: +Now, configure APOSMM. Because APOSMM internally uses variables named ``x``, ``x_on_cube``, and an objective named ``f``, we must map our ``VOCS`` fields to these internal names using ``variables_mapping``: .. code-block:: python :linenos: - nworkers, is_manager, libE_specs, _ = parse_args() - - sim_specs = { - "sim_f": six_hump_camel, # Simulation function - "in": ["x"], # Accepts "x" values - "out": [("f", float)], # Returns f(x) values - } - - gen_out = [ - ("x", float, 2), # Produces "x" values - ("x_on_cube", float, 2), # "x" values scaled to unit cube - ("sim_id", int), # Produces sim_id's for History array indexing - ("local_min", bool), # Is a point a local minimum? - ("local_pt", bool), # Is a point from a local opt run? 
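The tutorial text above notes that APOSMM requires variables scaled to the unit cube internally. The translation between the actual domain and the internal domain is the standard affine map; the helpers below are an illustrative sketch, not libEnsemble code:

```python
def to_unit_cube(x, lb, ub):
    """Map a value from the domain [lb, ub] onto [0, 1]."""
    return (x - lb) / (ub - lb)


def from_unit_cube(u, lb, ub):
    """Map a value from [0, 1] back onto [lb, ub]."""
    return lb + u * (ub - lb)


# x1 lives in [-2, 2]: the domain midpoint maps to 0.5 on the cube
u = to_unit_cube(0.0, -2.0, 2.0)
x = from_unit_cube(u, -2.0, 2.0)
```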
- ] - - gen_specs = { - "gen_f": aposmm, # APOSMM generator function - "persis_in": ["f"] + [n[0] for n in gen_out], - "out": gen_out, # Output defined like above dict - "user": { - "initial_sample_size": 100, # Random sample 100 points to start - "localopt_method": "scipy_Nelder-Mead", - "opt_return_codes": [0], # Status integers specific to localopt_method - "max_active_runs": 6, # Occur in parallel - "lb": np.array([-2, -1]), # Lower bound of search domain - "ub": np.array([2, 1]), # Upper bound of search domain - }, - } - - alloc_specs = {"alloc_f": persistent_aposmm_alloc} - -``gen_specs["user"]`` fields above that are required for APOSMM are: - - * ``"lb"`` - Search domain lower bound - * ``"ub"`` - Search domain upper bound - * ``"localopt_method"`` - Chosen local optimization method - * ``"initial_sample_size"`` - Number of uniformly sampled points generated - before local optimization runs. - * ``"opt_return_codes"`` - A list of integers that local optimization - methods return when a minimum is detected. SciPy's Nelder-Mead returns 0, - but other methods (not used in this tutorial) return 1. - -Also note the following: - - * ``gen_specs["in"]`` is empty. For other ``gen_f``'s this defines what - fields to give to the ``gen_f`` when called, but here APOSMM's - ``alloc_f`` defines those fields. - * ``"x_on_cube"`` in ``gen_specs["out"]``. APOSMM works internally on - ``"x"`` values scaled to the unit cube. To avoid back-and-forth scaling - issues, both types of ``"x"``'s are communicated back, even though the - simulation will likely use ``"x"`` values. (APOSMM performs handshake to - ensure that the ``x_on_cube`` that was given to be evaluated is the same - the one that is given back.) - * ``"sim_id"`` in ``gen_specs["out"]``. APOSMM produces points in its - local History array that it will need to update later, and can best - reference those points (and avoid a search) if APOSMM produces the IDs - itself, instead of libEnsemble. 
- -Other options and configurations for APOSMM can be found in the -APOSMM :doc:`API reference<../examples/aposmm>`. - -Set :ref:`exit_criteria` so libEnsemble knows -when to complete, and :ref:`persis_info` for -random sampling seeding: + aposmm = APOSMM( + vocs, + max_active_runs=workflow.nworkers, + variables_mapping={"x": ["x1", "x2"], "x_on_cube": ["x1_on_cube", "x2_on_cube"], "f": ["f"]}, + initial_sample_size=100, + localopt_method="scipy_Nelder-Mead", + opt_return_codes=[0], + ) -.. code-block:: python - :linenos: + workflow.gen_specs = GenSpecs( + generator=aposmm, + vocs=vocs, + batch_size=5, + initial_batch_size=10, + ) - exit_criteria = {"sim_max": 2000} - persis_info = {} +APOSMM is instantiated directly as a standardized generator. It handles its own required fields, simplifying our configurations. ``opt_return_codes`` is a list of integers that local optimization methods return when a minimum is detected. SciPy's Nelder-Mead returns 0. -Finally, add statements to :doc:`initiate libEnsemble<../libe_module>`, and quickly -check calculated minima: +Finally, we configure the simulation function, exit criteria, and run the workflow. We can also print out any points that APOSMM identified as local minima: .. code-block:: python :linenos: - if __name__ == "__main__": # required by multiprocessing on macOS and windows - H, persis_info, flag = libE(sim_specs, gen_specs, exit_criteria, persis_info, alloc_specs, libE_specs) + workflow.sim_specs = SimSpecs(simulator=six_hump_camel_func, vocs=vocs) + workflow.exit_criteria = ExitCriteria(sim_max=2000) + + H, _, _ = workflow.run() - if is_manager: - print("Minima:", H[np.where(H["local_min"])]["x"]) + if workflow.is_manager: + # We can map our variables back to an array for easy printing + minima = [[row["x1"], row["x2"]] for row in H if row["local_min"]] + print("Minima:", minima) Final Setup, Run, and Output ---------------------------- @@ -272,27 +180,10 @@ the routine. 
After a couple seconds, the output should resemble the following:: - [0] libensemble.libE (MANAGER_WARNING): - ******************************************************************************* - User generator script will be creating sim_id. - Take care to do this sequentially. - Also, any information given back for existing sim_id values will be overwritten! - So everything in gen_specs["out"] should be in gen_specs["in"]! - ******************************************************************************* - - Minima: [[ 0.08993295 -0.71265804] - [ 1.70360676 -0.79614982] - [-1.70368421 0.79606073] - [-0.08988064 0.71270945] - [-1.60699361 -0.56859108] - [ 1.60713962 0.56869567]] - -The first section labeled ``MANAGER_WARNING`` is a default libEnsemble warning -for generator functions that create ``sim_id``'s, like APOSMM. It does not -indicate a failure. + Minima: [[0.08988580227184285, -0.7126604246830723], [-0.08983226938927827, 0.7126622830878125], [-1.7036480556534283, 0.7960787201083437], [1.7035677028481488, -0.7961234727197022], [1.607106093246473, 0.5686524941018596], [-1.607102046898864, -0.568650772274404]] The local minima for the Six-Hump Camel simulation function as evaluated by -APOSMM with libEnsemble should be listed directly below the warning. +APOSMM with libEnsemble should be listed directly above. Please see the API reference :doc:`here<../examples/aposmm>` for more APOSMM configuration options and other information. 
From 8d8ad6f62f02fdbc3c6ab0300bbafb256e5ac2a3 Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 24 Apr 2026 09:23:23 -0500 Subject: [PATCH 14/34] move work_dict and worker_array info to dev guide section --- docs/dev_guide/dev_API/developer_API.rst | 2 ++ docs/{function_guides => dev_guide/dev_API}/work_dict.rst | 2 +- .../{function_guides => dev_guide/dev_API}/worker_array.rst | 0 docs/function_guides/function_guide_index.rst | 2 -- docs/function_guides/sim_gen_alloc_api.rst | 6 +++--- libensemble/tools/parse_args.py | 2 +- 6 files changed, 7 insertions(+), 7 deletions(-) rename docs/{function_guides => dev_guide/dev_API}/work_dict.rst (96%) rename docs/{function_guides => dev_guide/dev_API}/worker_array.rst (100%) diff --git a/docs/dev_guide/dev_API/developer_API.rst b/docs/dev_guide/dev_API/developer_API.rst index c09647db46..6774cbd629 100644 --- a/docs/dev_guide/dev_API/developer_API.rst +++ b/docs/dev_guide/dev_API/developer_API.rst @@ -17,3 +17,5 @@ This section documents the internal modules of libEnsemble. node_resources_module mpi_resources_module scheduler_module + work_dict + worker_array diff --git a/docs/function_guides/work_dict.rst b/docs/dev_guide/dev_API/work_dict.rst similarity index 96% rename from docs/function_guides/work_dict.rst rename to docs/dev_guide/dev_API/work_dict.rst index 4252919de0..0afeebabfb 100644 --- a/docs/function_guides/work_dict.rst +++ b/docs/dev_guide/dev_API/work_dict.rst @@ -21,7 +21,7 @@ the data given to worker ``i``. Populated in the allocation function. ``Work[i]` "persistent" [bool]: True if worker i will enter persistent mode (Default: False) The work dictionary is typically set using the ``gen_work`` or ``sim_work`` -:doc:`helper functions<../function_guides/allocator>` in the allocation function. +:doc:`helper functions<../../function_guides/allocator>` in the allocation function. 
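The shape of a single ``Work`` entry described in the relocated ``work_dict`` page can be sketched as plain data. ``H_fields``, ``persis_info``, and ``persistent`` are quoted in the page itself; the ``tag`` constant and ``libE_info["H_rows"]`` key follow libEnsemble's allocation-function conventions and should be checked against the current release:

```python
EVAL_SIM_TAG = 1  # stand-in constant for illustration only

# Data an allocation function might hand to worker 1:
Work = {
    1: {
        "H_fields": ["x"],                # history fields sent to the worker
        "persis_info": {},                # worker-specific state dictionary
        "tag": EVAL_SIM_TAG,              # marks this as simulation work
        "libE_info": {"H_rows": [0, 1]},  # which history rows to evaluate
    }
}
```

Helper functions such as ``sim_work`` typically build this structure, packing ``H_fields`` from ``sim_specs["in"]`` as described above.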
``H_fields``, for example, is usually packed from either ``sim_specs["in"]``, ``gen_specs["in"]`` or the equivalent "persis_in" variants. diff --git a/docs/function_guides/worker_array.rst b/docs/dev_guide/dev_API/worker_array.rst similarity index 100% rename from docs/function_guides/worker_array.rst rename to docs/dev_guide/dev_API/worker_array.rst diff --git a/docs/function_guides/function_guide_index.rst b/docs/function_guides/function_guide_index.rst index 621bf36d27..f223720faf 100644 --- a/docs/function_guides/function_guide_index.rst +++ b/docs/function_guides/function_guide_index.rst @@ -22,7 +22,5 @@ These guides describe common development patterns and optional components: :caption: Useful Data Structures calc_status - work_dict - worker_array .. _NumPy: http://www.numpy.org diff --git a/docs/function_guides/sim_gen_alloc_api.rst b/docs/function_guides/sim_gen_alloc_api.rst index 546806edef..76d311f48c 100644 --- a/docs/function_guides/sim_gen_alloc_api.rst +++ b/docs/function_guides/sim_gen_alloc_api.rst @@ -11,9 +11,9 @@ libEnsemble package. :doc:`See here for more in-depth guides to writing user functions` -As of v0.10.0, valid simulator and generator functions +Valid simulator and generator functions can *accept and return a smaller subset of the listed parameters and return values*. For instance, -a ``def my_simulation(one_Input) -> one_Output`` function is now accepted, +a ``def my_simulation(one_Input) -> one_Output`` function is accepted, as is ``def my_generator(Input, persis_info) -> Output, persis_info``. 
sim_f API @@ -102,7 +102,7 @@ Parameters: *********** **W**: ``numpy structured array`` - :doc:`(example)` + :doc:`(example)<../../dev_guide/dev_API/worker_array>` **H**: ``numpy structured array`` :ref:`(example)` diff --git a/libensemble/tools/parse_args.py b/libensemble/tools/parse_args.py index f89504e6ba..ca1ce53a49 100644 --- a/libensemble/tools/parse_args.py +++ b/libensemble/tools/parse_args.py @@ -149,7 +149,7 @@ def _client_parse_args(args): def parse_args(): """ - Parses command-line arguments. Use in calling script. + Parses command-line arguments. .. code-block:: python From c02acba4ef752398c6eeebea1de415dc1f53efe7 Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 24 Apr 2026 09:26:30 -0500 Subject: [PATCH 15/34] simply enough, clarify in SUPPORT.rst that issues can be opened --- SUPPORT.rst | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/SUPPORT.rst b/SUPPORT.rst index e86b6a1e6a..bf6f68d9cb 100644 --- a/SUPPORT.rst +++ b/SUPPORT.rst @@ -1,6 +1,10 @@ Support ------- +Open issues on Github at: + +* https://github.com/Libensemble/libensemble/issues + Join the libEnsemble mailing list at: * https://lists.mcs.anl.gov/mailman/listinfo/libensemble From 2be8f2337c81e432a5dea4e55e350619bf8df0f6 Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 24 Apr 2026 10:01:08 -0500 Subject: [PATCH 16/34] remove sim_gen_alloc_api - it had become redundant compared to the examples. 
Add new section for writing gest-api generators to generator.rst --- AGENTS.md | 6 +- docs/function_guides/function_guide_index.rst | 8 - docs/function_guides/generator.rst | 138 ++++++++++++----- docs/function_guides/sim_gen_alloc_api.rst | 140 ------------------ docs/index.rst | 1 + 5 files changed, 104 insertions(+), 189 deletions(-) delete mode 100644 docs/function_guides/sim_gen_alloc_api.rst diff --git a/AGENTS.md b/AGENTS.md index 75086f46c6..c45ccca322 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -43,10 +43,10 @@ Information about Generators Its fields match ``sim_specs/gen_specs["out"]`` or ``vocs`` attributes, plus additional reserved fields for metadata. - Prior to libEnsemble v1.6.0, generators were plain functions. They often ran in "persistent" mode, meaning they executed in a long-running loop, sending and receiving points to and from the manager until the ensemble was complete. -- A ``gest-api`` or "standardized" generator is a class that at a minimum implements ``suggest`` and ``ingest`` methods, and is parameterized by a ``vocs``. -- See ``libensemble/generators.py`` for more information about the ``gest-api`` standard. +- A ``gest-api`` or "standardized" generator is a class that inherits from ``gest_api.Generator``, implements ``suggest`` and ``ingest`` methods (which process lists of dictionaries, not NumPy arrays), and is parameterized by a ``vocs``. +- See ``libensemble/gen_classes/external/sampling.py`` for simple examples of the pure ``gest-api`` interface. (Note: ``libensemble.generators.LibensembleGenerator`` exists to wrap legacy NumPy-based workflows, but pure ``gest_api.Generator`` is preferred). - Generators are often used for simple sampling, optimization, calibration, uncertainty quantification, and other simulation-based tasks. 
-- **Automatic Variable Mapping**: Subclasses of ``LibensembleGenerator`` (like ``UniformSample``) automatically map all ``VOCS`` variables to a single multi-dimensional ``"x"`` field in the History array if no explicit ``variables_mapping`` is provided. +- **Automatic Variable Mapping**: When using ``LibensembleGenerator`` subclasses, they automatically map all ``VOCS`` variables to a single multi-dimensional ``"x"`` field in the History array if no explicit ``variables_mapping`` is provided. Pure ``gest_api.Generator`` classes handle variables natively. - **Mandatory Input Fields**: Even for simple generators that don't ingest data, ``gen_specs["in"]`` or ``gen_specs["persis_in"]`` must be defined if using an allocation function like ``only_persistent_gens`` that attempts to send rows. If these are empty, the manager will raise an ``AssertionError`` stating that no fields were requested to be sent. - **Default Allocator**: ``only_persistent_gens`` is the default allocator for standardized ``gest-api`` generators. It treats these generators as persistent entities that communicate throughout the run. diff --git a/docs/function_guides/function_guide_index.rst b/docs/function_guides/function_guide_index.rst index f223720faf..2423849de7 100644 --- a/docs/function_guides/function_guide_index.rst +++ b/docs/function_guides/function_guide_index.rst @@ -2,10 +2,6 @@ Writing User Functions ====================== -User functions typically require only some familiarity with NumPy_, but if they conform to -the :ref:`user function APIs`, they can incorporate methods from machine-learning, -mathematics, resource management, or other libraries/applications. - These guides describe common development patterns and optional components: .. toctree:: @@ -14,13 +10,9 @@ These guides describe common development patterns and optional components: generator simulator - allocator - sim_gen_alloc_api .. toctree:: :maxdepth: 2 :caption: Useful Data Structures calc_status - -.. 
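The ``gest-api`` contract summarized above — ``suggest`` and ``ingest`` methods that process lists of dictionaries — can be sketched without the library itself. The real class would inherit from ``gest_api.Generator`` and take a ``VOCS``; the duck-typed stand-in below runs standalone under that assumption:

```python
import random


class UniformSampleSketch:
    """Duck-typed sketch of a gest-api style generator.

    suggest() returns a list of dicts (one trial each); ingest() accepts a
    list of result dicts. The gest_api.Generator base class and VOCS object
    are omitted so the sketch is self-contained.
    """

    def __init__(self, variables):
        # variables: mapping of name -> (lower_bound, upper_bound)
        self.variables = variables
        self.rng = random.Random(1)

    def suggest(self, n_trials):
        return [
            {name: self.rng.uniform(lo, hi) for name, (lo, hi) in self.variables.items()}
            for _ in range(n_trials)
        ]

    def ingest(self, results):
        pass  # a random sampler has nothing to learn from results


gen = UniformSampleSketch({"x1": (-2, 2), "x2": (-1, 1)})
trials = gen.suggest(3)
```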
_NumPy: http://www.numpy.org diff --git a/docs/function_guides/generator.rst b/docs/function_guides/generator.rst index ad0484fbad..b9bc27218a 100644 --- a/docs/function_guides/generator.rst +++ b/docs/function_guides/generator.rst @@ -1,67 +1,129 @@ .. _funcguides-gen: -Generator Functions -=================== +Generators +========== -Generator and :ref:`Simulator functions` have relatively similar interfaces. +Generators and :ref:`Simulators` have relatively similar interfaces. Writing a Generator ------------------- -.. code-block:: python +.. tab-set:: - def my_generator(Input, persis_info, gen_specs, libE_info): - batch_size = gen_specs["user"]["batch_size"] + .. tab-item:: Standardized Generator (gest-api) - Output = np.zeros(batch_size, gen_specs["out"]) - # ... - Output["x"], persis_info = generate_next_simulation_inputs(Input["f"], persis_info) + Standardized generators are classes that inherit from ``gest_api.Generator``. + They adhere to the ``gest-api`` standard and are parameterized by a ``VOCS`` + object defining the problem's variables and objectives. - return Output, persis_info + A basic generator implements the ``suggest()`` and ``ingest()`` methods, which + operate on lists of dictionaries: -Most ``gen_f`` function definitions written by users resemble:: + .. code-block:: python + :linenos: - def my_generator(Input, persis_info, gen_specs, libE_info): + import numpy as np + from gest_api import Generator + from gest_api.vocs import VOCS -where: - * ``Input`` is a selection of the :ref:`History array`, a NumPy structured array. - * :ref:`persis_info` is a dictionary containing state information. - * :ref:`gen_specs` is a dictionary of generator parameters. - * ``libE_info`` is a dictionary containing miscellaneous entries. + class UniformSample(Generator): + """Samples over the domain specified in the VOCS.""" -Valid generator functions can accept a subset of the above parameters. 
So a very simple generator can start:: + def __init__(self, vocs: VOCS): + self.vocs = vocs + self.rng = np.random.default_rng(1) + super().__init__(vocs) - def my_generator(Input): + def _validate_vocs(self, vocs): + assert len(self.vocs.variable_names), "VOCS must contain variables." -If ``gen_specs`` was initially defined: + def suggest(self, n_trials): + output = [] + for _ in range(n_trials): + trial = {} + for key in self.vocs.variables: + trial[key] = self.rng.uniform(self.vocs.variables[key].domain[0], self.vocs.variables[key].domain[1]) + output.append(trial) + return output -.. code-block:: python + def ingest(self, calc_in): + pass # random sample so nothing to ingest + + libEnsemble's handling of standardized generators is specified using ``GenSpecs``: + + .. code-block:: python + + gen_specs = GenSpecs( + generator=UniformSample(vocs), + inputs=["sim_id"], + persis_in=["x", "f"], + outputs=[("x", float, 2)], + vocs=vocs, + user={"batch_size": 128}, + ) + + .. note:: + Ensure that ``gen_specs.inputs`` or ``gen_specs.persis_in`` requests at least one field + (like ``"sim_id"`` or ``"f"``) to be sent back, even if the generator does not + process them. + + .. tab-item:: Legacy Generator Function + + .. code-block:: python + + def my_generator(Input, persis_info, gen_specs, libE_info): + batch_size = gen_specs["user"]["batch_size"] + + Output = np.zeros(batch_size, gen_specs["out"]) + # ... + Output["x"], persis_info = generate_next_simulation_inputs(Input["f"], persis_info) + + return Output, persis_info + + Most ``gen_f`` function definitions written by users resemble:: + + def my_generator(Input, persis_info, gen_specs, libE_info): + + where: + + * ``Input`` is a selection of the :ref:`History array`, a NumPy structured array. + * :ref:`persis_info` is a dictionary containing state information. + * :ref:`gen_specs` is a dictionary of generator parameters. + * ``libE_info`` is a dictionary containing miscellaneous entries. 
+ + Valid generator functions can accept a subset of the above parameters. So a very simple generator can start:: + + def my_generator(Input): + + If ``gen_specs`` was initially defined: + + .. code-block:: python - gen_specs = GenSpecs( - gen_f=my_generator, - inputs=["f"], - outputs=["x", float, (1,)], - user={"batch_size": 128}, - ) + gen_specs = GenSpecs( + gen_f=my_generator, + inputs=["f"], + outputs=["x", float, (1,)], + user={"batch_size": 128}, + ) -Then user parameters and a *local* array of outputs may be obtained/initialized like:: + Then user parameters and a *local* array of outputs may be obtained/initialized like:: - batch_size = gen_specs["user"]["batch_size"] - Output = np.zeros(batch_size, dtype=gen_specs["out"]) + batch_size = gen_specs["user"]["batch_size"] + Output = np.zeros(batch_size, dtype=gen_specs["out"]) -This array should be populated by whatever values are generated within -the function:: + This array should be populated by whatever values are generated within + the function:: - Output["x"], persis_info = generate_next_simulation_inputs(Input["f"], persis_info) + Output["x"], persis_info = generate_next_simulation_inputs(Input["f"], persis_info) -Then return the array and ``persis_info`` to libEnsemble:: + Then return the array and ``persis_info`` to libEnsemble:: - return Output, persis_info + return Output, persis_info -Between the ``Output`` definition and the ``return``, any computation can be performed. -Users can try an :doc:`executor<../executor/overview>` to submit applications to parallel -resources, or plug in components from other libraries to serve their needs. + Between the ``Output`` definition and the ``return``, any computation can be performed. + Users can try an :doc:`executor<../executor/overview>` to submit applications to parallel + resources, or plug in components from other libraries to serve their needs. .. 
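The legacy-generator fragments above assemble into a complete, runnable function. In this minimal sketch the uniform sampling and the ``gen_specs`` values are illustrative stand-ins for whatever point-generation logic a real generator uses:

```python
import numpy as np


def my_generator(Input, persis_info, gen_specs, libE_info):
    """Minimal legacy-style generator: draws a uniform batch of "x" values."""
    batch_size = gen_specs["user"]["batch_size"]
    Output = np.zeros(batch_size, dtype=gen_specs["out"])
    # Keep RNG state in persis_info so repeated calls continue the stream
    rng = persis_info.setdefault("rng", np.random.default_rng(0))
    Output["x"] = rng.uniform(-1, 1, size=(batch_size, 1))
    return Output, persis_info


# Illustrative specs matching the fragments in the text above
gen_specs = {"out": [("x", float, (1,))], "user": {"batch_size": 4}}
Output, persis_info = my_generator(None, {}, gen_specs, {})
```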
note:: diff --git a/docs/function_guides/sim_gen_alloc_api.rst b/docs/function_guides/sim_gen_alloc_api.rst deleted file mode 100644 index 76d311f48c..0000000000 --- a/docs/function_guides/sim_gen_alloc_api.rst +++ /dev/null @@ -1,140 +0,0 @@ -User Function API ------------------ -.. _user_api: - -libEnsemble requires functions for generation, simulation, and allocation. - -While libEnsemble provides a default allocation function, the simulator and generator functions -must be specified. The required API and example arguments are given here. -:doc:`Example sim and gen functions<../examples/examples_index>` are provided in the -libEnsemble package. - -:doc:`See here for more in-depth guides to writing user functions` - -Valid simulator and generator functions -can *accept and return a smaller subset of the listed parameters and return values*. For instance, -a ``def my_simulation(one_Input) -> one_Output`` function is accepted, -as is ``def my_generator(Input, persis_info) -> Output, persis_info``. - -sim_f API -~~~~~~~~~ -.. _api_sim_f: - -The simulator function will be called by libEnsemble's workers with *up to* the following arguments and returns:: - - Out, persis_info, calc_status = sim_f(H[sim_specs["in"]][sim_ids_from_allocf], persis_info, sim_specs, libE_info) - -Parameters: -*********** - - **H**: ``numpy structured array`` - :ref:`(example)` - - **persis_info**: :obj:`dict` - :ref:`(example)` - - **sim_specs**: :obj:`dict` - :ref:`(example)` - - **libE_info**: :obj:`dict` - :ref:`(example)` - -Returns: -******** - - **H**: ``numpy structured array`` - with keys/value-sizes matching those in sim_specs["out"] - :ref:`(example)` - - **persis_info**: :obj:`dict` - :ref:`(example)` - - **calc_status**: :obj:`int`, optional - Provides a task status to the manager and the libE_stats.txt file - :ref:`(example)` - -gen_f API -~~~~~~~~~ -.. 
_api_gen_f: - -The generator function will be called by libEnsemble's workers with *up to* the following arguments and returns:: - - Out, persis_info, calc_status = gen_f(H[gen_specs["in"]][sim_ids_from_allocf], persis_info, gen_specs, libE_info) - -Parameters: -*********** - - **H**: ``numpy structured array`` - :ref:`(example)` - - **persis_info**: :obj:`dict` - :ref:`(example)` - - **gen_specs**: :obj:`dict` - :ref:`(example)` - - **libE_info**: :obj:`dict` - :ref:`(example)` - -Returns: -******** - - **H**: ``numpy structured array`` - with keys/value-sizes matching those in gen_specs["out"] - :ref:`(example)` - - **persis_info**: :obj:`dict` - :ref:`(example)` - - **calc_status**: :obj:`int`, optional - Provides a task status to the manager and the libE_stats.txt file - :ref:`(example)` - -alloc_f API -~~~~~~~~~~~ -.. _api_alloc_f: - -The allocation function will be called by libEnsemble's manager with the following API:: - - Work, persis_info, stop_flag = alloc_f(W, H, sim_specs, gen_specs, alloc_specs, persis_info, libE_info) - -Parameters: -*********** - - **W**: ``numpy structured array`` - :doc:`(example)<../../dev_guide/dev_API/worker_array>` - - **H**: ``numpy structured array`` - :ref:`(example)` - - **sim_specs**: :obj:`dict` - :ref:`(example)` - - **gen_specs**: :obj:`dict` - :ref:`(example)` - - **alloc_specs**: :obj:`dict` - :ref:`(example)` - - **persis_info**: :obj:`dict` - :ref:`(example)` - - **libE_info**: :obj:`dict` - Various statistics useful to the allocation function for determining how much - work has been evaluated, or if the routine should prepare to complete. See - the :doc:`allocation function guide` for more - information. - -Returns: -******** - - **Work**: :obj:`dict` - Dictionary with integer keys ``i`` for work to be sent to worker ``i``. 
- :ref:`(example)` - - **persis_info**: :obj:`dict` - :doc:`(example)<../data_structures/persis_info>` - - **stop_flag**: :obj:`int`, optional - Set to 1 to request libEnsemble manager to stop giving additional work after - receiving existing work diff --git a/docs/index.rst b/docs/index.rst index 9428baf2d8..9125815e2e 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -45,6 +45,7 @@ function_guides/history_array resource_manager/resources_index + function_guides/allocator FAQ known_issues release_notes From 6e1f4e6888182561664c08520dc9123909623f07 Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 24 Apr 2026 10:23:29 -0500 Subject: [PATCH 17/34] tabbed content refactoring - gen_f/sim_f references point to the function guides instead of the now-gone api --- docs/examples/alloc_funcs.rst | 4 +- docs/examples/gen_funcs.rst | 2 +- docs/examples/sim_funcs.rst | 2 +- docs/executor/overview.rst | 2 +- docs/function_guides/generator.rst | 202 ++++++++++++----------- docs/function_guides/history_array.rst | 6 +- docs/overview_usecases.rst | 4 +- docs/tutorials/aposmm_tutorial.rst | 4 +- docs/tutorials/calib_cancel_tutorial.rst | 4 +- docs/tutorials/local_sine_tutorial.rst | 2 +- 10 files changed, 116 insertions(+), 116 deletions(-) diff --git a/docs/examples/alloc_funcs.rst b/docs/examples/alloc_funcs.rst index 3734d7cb0d..0454366a81 100644 --- a/docs/examples/alloc_funcs.rst +++ b/docs/examples/alloc_funcs.rst @@ -8,9 +8,7 @@ Below are example allocation functions available in libEnsemble. Many users use these unmodified. .. IMPORTANT:: - See the API for allocation functions :ref:`here`. - - **The default allocation function changed in libEnsemble v2.0 from `give_sim_work_first` to `start_only_persistent `.** + The default allocation function changed in libEnsemble v2.0 from `give_sim_work_first` to `start_only_persistent `. .. 
note:: diff --git a/docs/examples/gen_funcs.rst b/docs/examples/gen_funcs.rst index c475cefe53..0bae6f7642 100644 --- a/docs/examples/gen_funcs.rst +++ b/docs/examples/gen_funcs.rst @@ -4,7 +4,7 @@ Generator Functions Here we list many generator functions included with libEnsemble. .. IMPORTANT:: - See the API for generator functions :ref:`here`. + See the API for generator functions :ref:`here`. Sampling -------- diff --git a/docs/examples/sim_funcs.rst b/docs/examples/sim_funcs.rst index be4374d884..0e018db472 100644 --- a/docs/examples/sim_funcs.rst +++ b/docs/examples/sim_funcs.rst @@ -8,7 +8,7 @@ function launching tasks, see the :doc:`Electrostatic Forces tutorial <../tutorials/executor_forces_tutorial>`. .. IMPORTANT:: - See the API for simulation functions :ref:`here`. + See the API for simulation functions :ref:`here`. .. role:: underline :class: underline diff --git a/docs/executor/overview.rst b/docs/executor/overview.rst index 196ba38b8b..6a0d23489d 100644 --- a/docs/executor/overview.rst +++ b/docs/executor/overview.rst @@ -2,7 +2,7 @@ Executor Overview ================= Most computationally expensive libEnsemble workflows involve launching applications -from a :ref:`sim_f` or :ref:`gen_f` running on a worker to the +from a :ref:`sim_f` or :ref:`gen_f` running on a worker to the compute nodes of a supercomputer, cluster, or other compute resource. The **Executor** provides a portable interface for running applications on any system. diff --git a/docs/function_guides/generator.rst b/docs/function_guides/generator.rst index b9bc27218a..f756b5447e 100644 --- a/docs/function_guides/generator.rst +++ b/docs/function_guides/generator.rst @@ -3,11 +3,13 @@ Generators ========== -Generators and :ref:`Simulators` have relatively similar interfaces. - Writing a Generator ------------------- +.. note:: + The `gest-api` generator interface is the recommended approach for new libEnsemble projects. 
+ The "Legacy Generator Function" interface is supported for backward compatibility but may be deprecated in a future release. + .. tab-set:: .. tab-item:: Standardized Generator (gest-api) @@ -125,150 +127,150 @@ Writing a Generator Users can try an :doc:`executor<../executor/overview>` to submit applications to parallel resources, or plug in components from other libraries to serve their needs. -.. note:: + .. note:: - State ``gen_f`` information like checkpointing should be - appended to ``persis_info``. + State ``gen_f`` information like checkpointing should be + appended to ``persis_info``. -.. _persistent-gens: + .. _persistent-gens: -Persistent Generators ---------------------- + Persistent Generators + --------------------- -While non-persistent generators return after completing their calculation, persistent -generators do the following in a loop: + While non-persistent generators return after completing their calculation, persistent + generators do the following in a loop: - 1. Receive simulation results and metadata; exit if metadata instructs. - 2. Perform analysis. - 3. Send subsequent simulation parameters. + 1. Receive simulation results and metadata; exit if metadata instructs. + 2. Perform analysis. + 3. Send subsequent simulation parameters. -Persistent generators don't need to be re-initialized on each call, but are typically -more complicated. The persistent :doc:`APOSMM<../examples/aposmm>` -optimization generator function included with libEnsemble maintains -local optimization subprocesses based on results from complete simulations. + Persistent generators don't need to be re-initialized on each call, but are typically + more complicated. The persistent :doc:`APOSMM<../examples/aposmm>` + optimization generator function included with libEnsemble maintains + local optimization subprocesses based on results from complete simulations. -Use ``GenSpecs.persis_in`` to specify fields to send back to the generator throughout the run. 
-``GenSpecs.inputs`` only describes the input fields when the function is **first called**. + Use ``GenSpecs.persis_in`` to specify fields to send back to the generator throughout the run. + ``GenSpecs.inputs`` only describes the input fields when the function is **first called**. -Functions for a persistent generator to communicate directly with the manager -are available in the :ref:`libensemble.tools.persistent_support` class. + Functions for a persistent generator to communicate directly with the manager + are available in the :ref:`libensemble.tools.persistent_support` class. -Sending/receiving data is supported by the :ref:`PersistentSupport` class:: + Sending/receiving data is supported by the :ref:`PersistentSupport` class:: - from libensemble.tools import PersistentSupport - from libensemble.message_numbers import STOP_TAG, PERSIS_STOP, EVAL_GEN_TAG, FINISHED_PERSISTENT_GEN_TAG + from libensemble.tools import PersistentSupport + from libensemble.message_numbers import STOP_TAG, PERSIS_STOP, EVAL_GEN_TAG, FINISHED_PERSISTENT_GEN_TAG - my_support = PersistentSupport(libE_info, EVAL_GEN_TAG) + my_support = PersistentSupport(libE_info, EVAL_GEN_TAG) -Implementing functions from the above class is relatively simple: + Implementing functions from the above class is relatively simple: -.. tab-set:: + .. tab-set:: - .. tab-item:: send + .. tab-item:: send - .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport - .. autofunction:: send + .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport + .. autofunction:: send - This function call typically resembles:: + This function call typically resembles:: - my_support.send(local_H_out[selected_IDs]) + my_support.send(local_H_out[selected_IDs]) - Note that this function has no return. + Note that this function has no return. - .. tab-item:: recv + .. tab-item:: recv - .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport - .. autofunction:: recv + .. 
currentmodule:: libensemble.tools.persistent_support.PersistentSupport + .. autofunction:: recv - This function call typically resembles:: + This function call typically resembles:: - tag, Work, calc_in = my_support.recv() + tag, Work, calc_in = my_support.recv() - if tag in [STOP_TAG, PERSIS_STOP]: - cleanup() - break + if tag in [STOP_TAG, PERSIS_STOP]: + cleanup() + break - The logic following the function call is typically used to break the persistent - generator's main loop and return. + The logic following the function call is typically used to break the persistent + generator's main loop and return. - .. tab-item:: send_recv + .. tab-item:: send_recv - .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport - .. autofunction:: send_recv + .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport + .. autofunction:: send_recv - This function performs both of the previous functions in a single statement. Its - usage typically resembles:: + This function performs both of the previous functions in a single statement. Its + usage typically resembles:: - tag, Work, calc_in = my_support.send_recv(local_H_out[selected_IDs]) - if tag in [STOP_TAG, PERSIS_STOP]: - cleanup() - break + tag, Work, calc_in = my_support.send_recv(local_H_out[selected_IDs]) + if tag in [STOP_TAG, PERSIS_STOP]: + cleanup() + break - Once the persistent generator's loop has been broken because of - the tag from the manager, it should return with an additional tag:: + Once the persistent generator's loop has been broken because of + the tag from the manager, it should return with an additional tag:: - return local_H_out, persis_info, FINISHED_PERSISTENT_GEN_TAG + return local_H_out, persis_info, FINISHED_PERSISTENT_GEN_TAG -See :ref:`calc_status` for more information about -the message tags. + See :ref:`calc_status` for more information about + the message tags. -.. _gen_active_recv: + .. 
_gen_active_recv: -Active receive mode -------------------- + Active receive mode + ------------------- -By default, a persistent worker is expected to -receive and send data in a *ping pong* fashion. Alternatively, -a worker can be initiated in *active receive* mode by the allocation -function (see :ref:`start_only_persistent`). -The persistent worker can then send and receive from the manager at any time. + By default, a persistent worker is expected to + receive and send data in a *ping pong* fashion. Alternatively, + a worker can be initiated in *active receive* mode by the allocation + function (see :ref:`start_only_persistent`). + The persistent worker can then send and receive from the manager at any time. -Ensure there are no communication deadlocks in this mode. In manager-worker message exchanges, only the worker-side -receive is blocking by default (a non-blocking option is available). + Ensure there are no communication deadlocks in this mode. In manager-worker message exchanges, only the worker-side + receive is blocking by default (a non-blocking option is available). -Cancelling Simulations ----------------------- + Cancelling Simulations + ---------------------- -Previously submitted simulations can be cancelled by sending a message to the manager: + Previously submitted simulations can be cancelled by sending a message to the manager: -.. currentmodule:: libensemble.tools.persistent_support.PersistentSupport -.. autofunction:: request_cancel_sim_ids + .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport + .. autofunction:: request_cancel_sim_ids -- If a generated point is cancelled by the generator **before sending** to another worker for simulation, then it won't be sent. -- If that point has **already been evaluated** by a simulation, the ``cancel_requested`` field will remain ``True``. 
-- If that point is **currently being evaluated**, a kill signal will be sent to the corresponding worker; it must be manually processed in the simulation function. + - If a generated point is cancelled by the generator **before sending** to another worker for simulation, then it won't be sent. + - If that point has **already been evaluated** by a simulation, the ``cancel_requested`` field will remain ``True``. + - If that point is **currently being evaluated**, a kill signal will be sent to the corresponding worker; it must be manually processed in the simulation function. -The :doc:`Borehole Calibration tutorial<../tutorials/calib_cancel_tutorial>` gives an example -of the capability to cancel pending simulations. + The :doc:`Borehole Calibration tutorial<../tutorials/calib_cancel_tutorial>` gives an example + of the capability to cancel pending simulations. -Modification of existing points -------------------------------- + Modification of existing points + ------------------------------- -To change existing fields of the History array, create a NumPy structured array where the ``dtype`` contains -the ``sim_id`` and the fields to be modified. Send this array with ``keep_state=True`` to the manager. -This will overwrite the manager's History array. + To change existing fields of the History array, create a NumPy structured array where the ``dtype`` contains + the ``sim_id`` and the fields to be modified. Send this array with ``keep_state=True`` to the manager. + This will overwrite the manager's History array. -For example, the cancellation function ``request_cancel_sim_ids`` could be replicated by -the following (where ``sim_ids_to_cancel`` is a list of integers): + For example, the cancellation function ``request_cancel_sim_ids`` could be replicated by + the following (where ``sim_ids_to_cancel`` is a list of integers): -.. code-block:: python + .. code-block:: python - # Send only these fields to existing H rows and libEnsemble will slot in the change. 
- H_o = np.zeros(len(sim_ids_to_cancel), dtype=[("sim_id", int), ("cancel_requested", bool)]) - H_o["sim_id"] = sim_ids_to_cancel - H_o["cancel_requested"] = True - ps.send(H_o, keep_state=True) + # Send only these fields to existing H rows and libEnsemble will slot in the change. + H_o = np.zeros(len(sim_ids_to_cancel), dtype=[("sim_id", int), ("cancel_requested", bool)]) + H_o["sim_id"] = sim_ids_to_cancel + H_o["cancel_requested"] = True + ps.send(H_o, keep_state=True) -Generator initiated shutdown ----------------------------- + Generator initiated shutdown + ---------------------------- -If using a supporting allocation function, the generator can prompt the ensemble to shutdown -by simply exiting the function (e.g., on a test for a converged value). For example, the -allocation function :ref:`start_only_persistent` closes down -the ensemble as soon as a persistent generator returns. The usual return values should be given. + If using a supporting allocation function, the generator can prompt the ensemble to shutdown + by simply exiting the function (e.g., on a test for a converged value). For example, the + allocation function :ref:`start_only_persistent` closes down + the ensemble as soon as a persistent generator returns. The usual return values should be given. -Examples --------- + Examples + -------- -Examples of non-persistent and persistent generator functions -can be found :doc:`here<../examples/gen_funcs>`. + Examples of non-persistent and persistent generator functions + can be found :doc:`here<../examples/gen_funcs>`. diff --git a/docs/function_guides/history_array.rst b/docs/function_guides/history_array.rst index 6820b6faec..f10d4c73d2 100644 --- a/docs/function_guides/history_array.rst +++ b/docs/function_guides/history_array.rst @@ -15,8 +15,8 @@ libEnsemble uses a NumPy structured array to store information about each point The manager maintains a global copy. Each row contains: - 1. Data generated by the :ref:`gen_f` - 2. 
Resultant output from the :ref:`sim_f` + 1. Data generated by the :ref:`gen_f` + 2. Resultant output from the :ref:`sim_f` 3. :ref:`Reserved fields` containing metadata When the history array is initialized, it creates fields for each @@ -136,7 +136,7 @@ reserved fields: ``sim_id``, ``sim_started``, and ``sim_ended`` are shown for br | -:ref:`gen_f` and :ref:`sim_f` functions accept a local history +:ref:`gen_f` and :ref:`sim_f` functions accept a local history array as the first argument that contains only the rows and fields specified. For new function calls these will be specified by either ``gen_specs["in"]`` or ``sim_specs["in"]``. For generators this may be empty. diff --git a/docs/overview_usecases.rst b/docs/overview_usecases.rst index f6a6a7c28a..81b2dcaa80 100644 --- a/docs/overview_usecases.rst +++ b/docs/overview_usecases.rst @@ -8,8 +8,8 @@ Manager, Workers, Generators, and Simulators libEnsemble's **manager** allocates work from **generators** to **workers**, which perform computations via **simulators**: -* :ref:`generator`: Generates inputs for the *simulator* -* :ref:`simulator`: Performs an evaluation using parameters from the *generator* +* :ref:`generator`: Generates inputs for the *simulator* +* :ref:`simulator`: Performs an evaluation using parameters from the *generator* .. figure:: images/adaptiveloop.png :alt: Adaptive loops diff --git a/docs/tutorials/aposmm_tutorial.rst b/docs/tutorials/aposmm_tutorial.rst index 4f959cb77b..cca7f13e00 100644 --- a/docs/tutorials/aposmm_tutorial.rst +++ b/docs/tutorials/aposmm_tutorial.rst @@ -5,8 +5,8 @@ Optimization with APOSMM This tutorial demonstrates libEnsemble's capability to identify multiple minima of simulation output using the built-in :doc:`APOSMM<../examples/aposmm>` (Asynchronously Parallel Optimization Solver for finding Multiple Minima) -:ref:`gen_f`. In this tutorial, we'll create a simple -simulation :ref:`sim_f` that defines a function with +:ref:`gen_f`. 
In this tutorial, we'll create a simple +simulation :ref:`sim_f` that defines a function with multiple minima, then write a libEnsemble calling script that imports APOSMM and parameterizes it to check for minima over a domain of outputs from our ``sim_f``. diff --git a/docs/tutorials/calib_cancel_tutorial.rst b/docs/tutorials/calib_cancel_tutorial.rst index c008100d73..7edae8aa96 100644 --- a/docs/tutorials/calib_cancel_tutorial.rst +++ b/docs/tutorials/calib_cancel_tutorial.rst @@ -12,7 +12,7 @@ compute resources may then be more effectively applied toward critical evaluatio For a somewhat different approach than libEnsemble's :doc:`other tutorials`, we'll emphasize the settings, functions, and data fields within the calling script, -:ref:`persistent generator`, manager, and :ref:`sim_f` +:ref:`persistent generator`, manager, and :ref:`sim_f` that make this capability possible, rather than outlining a step-by-step process. The libEnsemble regression test ``test_persistent_surmise_calib.py`` demonstrates @@ -36,7 +36,7 @@ gravitational constant, and the corresponding computer model could be the set of differential equations that govern the drop. In a case where the computation of the computer model is relatively expensive, we employ a fast surrogate model to approximate the model and to inform good parameters to test next. Here the computer -model :math:`f(\theta, x)` is accessible only through performing :ref:`sim_f` +model :math:`f(\theta, x)` is accessible only through performing :ref:`sim_f` evaluations. As a convenience for testing, the ``observed`` data values are modelled by calling the ``sim_f`` diff --git a/docs/tutorials/local_sine_tutorial.rst b/docs/tutorials/local_sine_tutorial.rst index 49b36b015b..56943a7cc3 100644 --- a/docs/tutorials/local_sine_tutorial.rst +++ b/docs/tutorials/local_sine_tutorial.rst @@ -66,7 +66,7 @@ need to write a new allocation function. .. tab-item:: 3. Simulator - Next, we'll write our simulator function or :ref:`sim_f`. 
Simulator + Next, we'll write our simulator function or :ref:`sim_f`. Simulator functions perform calculations based on values from the generator. :ref:`sim_specs` is a dictionary containing user-defined fields and parameters. From d46335c73b2b52f9f423f2783ed3ada2cbffa4d8 Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 24 Apr 2026 10:33:42 -0500 Subject: [PATCH 18/34] remove some redundant content. emphasize simulator-workers --- docs/running_libE.rst | 75 ++++++------------------------------------- 1 file changed, 10 insertions(+), 65 deletions(-) diff --git a/docs/running_libE.rst b/docs/running_libE.rst index 50e58afbe5..6e1afa3730 100644 --- a/docs/running_libE.rst +++ b/docs/running_libE.rst @@ -3,28 +3,6 @@ Running libEnsemble =================== -Introduction ------------- - -libEnsemble runs with one manager and multiple workers. Each worker may run either -a generator or simulator function (both are Python scripts). Generators -determine the parameters/inputs for simulations. Simulator functions run and -manage simulations, which often involve running a user application (see -:doc:`Executor`). - -To use libEnsemble, you will need a calling script, which in turn will specify -generator and simulator functions. Many :doc:`examples` -are available. - -There are currently three communication options for libEnsemble (determining how -the Manager and Workers communicate). These are ``local``, ``mpi``, ``tcp``. -The default is ``local`` if ``nworkers`` is specified, otherwise ``mpi``. - -Note that ``local`` comms can be used on multi-node systems, where -the :doc:`MPI executor` is used to distribute MPI applications -across the nodes. Indeed, this is the most commonly used option, even on large -supercomputers. - .. note:: You do not need the ``mpi`` communication mode to use the :doc:`MPI Executor`. The communication modes described @@ -35,22 +13,16 @@ supercomputers. .. tab-item:: Local Comms Uses Python's built-in multiprocessing_ module. 
-    The ``comms`` type ``local`` and number of workers ``nworkers`` may
-    be provided in :ref:`libE_specs`.
+    The ``comms`` type ``local`` and number of workers ``nworkers`` for running simulators
+    may be provided in :ref:`libE_specs`.

-    Then run::
+    Run::

        python myscript.py

    Or, if the script uses the :meth:`parse_args`
    function or an :class:`Ensemble` object with ``Ensemble(parse_args=True)``,
-    you can specify these on the command line::
-
-        python myscript.py --nworkers N
-
-    This will launch one manager and ``N`` workers.
-
-    The following abbreviated line is equivalent to the above::
+    this can be specified on the command line::

        python myscript.py -n N

@@ -63,8 +35,8 @@ supercomputers.
    system (e.g., Summit), ensuring the whole compute-node allocation is available for
    launching apps. Make sure there are no imports of ``mpi4py`` in your Python scripts.

-    Note that on macOS (since Python 3.8) and Windows, the default multiprocessing method
-    is ``"spawn"`` instead of ``"fork"``; to resolve many related issues, we recommend placing
+    Note that on macOS and Windows, the default multiprocessing method is ``"spawn"``
+    instead of ``"fork"``; to resolve many related issues, we recommend placing
    calling script code in an ``if __name__ == "__main__":`` block.

    **Limitations of local mode**

@@ -81,7 +53,7 @@ supercomputers.

        mpirun -np N python myscript.py

    where ``N`` is the number of processes. This will launch one manager and
-    ``N-1`` workers.
+    ``N-1`` simulator workers.

    This option requires ``mpi4py`` to be installed to interface with the MPI
    on your system. It works on a standalone system, and with both

@@ -120,7 +92,7 @@ supercomputers.

    **Limitations of TCP mode**

-    - There cannot be two calls to ``libE()`` or ``Ensemble.run()`` in the same script.
+    - There cannot be two calls to ``Ensemble.run()`` or ``libE()`` in the same script.
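The ``if __name__ == "__main__":`` advice in the hunk above follows from how the ``spawn`` start method re-imports the top-level script in every worker process. A minimal, stdlib-only sketch of the guarded pattern (independent of libEnsemble; `sim` is just a stand-in name):

```python
# Stdlib-only illustration of the main-guard recommended above: with the
# "spawn" start method (the default on macOS and Windows), worker processes
# re-import this script, so unguarded top-level code would run in each worker.
import multiprocessing as mp


def sim(x):
    # Stand-in for a simulation function.
    return x * x


if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=2) as pool:
        results = pool.map(sim, [1, 2, 3])  # map preserves input order
    print(results)  # -> [1, 4, 9]
```

Without the guard, the `Pool` creation itself would be re-executed on import in each spawned worker, which is the class of "related issues" the note refers to.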
Further Command Line Options ---------------------------- @@ -128,32 +100,6 @@ Further Command Line Options See the :meth:`parse_args` function in :doc:`Convenience Tools` for further command line options. -Persistent Workers ------------------- -.. _persis_worker: - -In a regular (non-persistent) worker, the user's generator or simulation function is called -whenever the worker receives work. A persistent worker is one that continues to run the -generator or simulation function between work units, maintaining the local data environment. - -A common use-case consists of a persistent generator (such as :doc:`persistent_aposmm`) -that maintains optimization data while generating new simulation inputs. The persistent generator runs -on a dedicated worker while in persistent mode. This requires an appropriate -:doc:`allocation function` that will run the generator as persistent. - -When running with a persistent generator, it is important to remember that a worker will be dedicated -to the generator and cannot run simulations. For example, the following run:: - - mpirun -np 3 python my_script.py - -starts one manager, one worker with a persistent generator, and one worker for running simulations. - -If this example was run as:: - - mpirun -np 2 python my_script.py - -No simulations will be able to run. - Environment Variables --------------------- @@ -166,8 +112,8 @@ set in your simulation script before the Executor *submit* command will export t to your run. For running a bash script in a sub environment when using the Executor, see the ``env_script`` option to the :doc:`MPI Executor`. -Further Run Information ------------------------ +Running on Multi-Node Systems +----------------------------- For running on multi-node platforms and supercomputers, there are alternative ways to configure libEnsemble to resources. See the :doc:`Running on HPC Systems` @@ -176,4 +122,3 @@ guide for more information, including some examples for specific systems. .. 
_mpi4py: https://mpi4py.readthedocs.io/en/stable/ .. _MPICH: https://www.mpich.org/ .. _multiprocessing: https://docs.python.org/3/library/multiprocessing.html -.. _PSI/J: https://exaworks.org/psij From 81953fef5afbdc54cdc13078d29489cc83b7b779 Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 24 Apr 2026 10:36:17 -0500 Subject: [PATCH 19/34] remove zero-resource-workers mention --- docs/platforms/platforms_index.rst | 7 ------- 1 file changed, 7 deletions(-) diff --git a/docs/platforms/platforms_index.rst b/docs/platforms/platforms_index.rst index f679c36e56..e6731e8a9e 100644 --- a/docs/platforms/platforms_index.rst +++ b/docs/platforms/platforms_index.rst @@ -116,13 +116,6 @@ The :ref:`resource manager` detects node lists from and partitions these to workers. The :doc:`MPI Executor<../executor/mpi_executor>` accesses the resources available to the current worker when launching tasks. -Zero-resource workers ---------------------- - -Users with persistent ``gen_f`` functions may notice that the persistent workers -are still automatically assigned system resources. This can be resolved by -:ref:`fixing the number of resource sets`. - Assigning GPUs -------------- From 7600b399acc139a537738473b46b5fa6a9f4e19b Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 24 Apr 2026 13:18:01 -0500 Subject: [PATCH 20/34] more pixi info. update executor overview. mypy fixes --- docs/advanced_installation.rst | 17 +++++++++++ docs/executor/overview.rst | 19 ++---------- libensemble/executors/executor.py | 42 +++++++++++++++------------ libensemble/executors/mpi_executor.py | 23 ++++++++------- 4 files changed, 56 insertions(+), 45 deletions(-) diff --git a/docs/advanced_installation.rst b/docs/advanced_installation.rst index 8151cb31fa..f638dd769c 100644 --- a/docs/advanced_installation.rst +++ b/docs/advanced_installation.rst @@ -54,6 +54,22 @@ Further recommendations for selected HPC systems are given in the uv pip install libensemble + .. 
tab-item:: pixi + + Add to your pixi_ environment:: + + pixi add libensemble + + libEnsemble is also distributed with locked pixi environments for different versions of Python + and various dependency sets, primarily for testing but also useful for guaranteed working environments. + See a list with:: + + pixi workspace environment list + + and activate with:: + + pixi shell -e + .. tab-item:: conda Install libEnsemble with Conda_ from the conda-forge channel:: @@ -183,6 +199,7 @@ Globus Compute .. _NumPy: http://www.numpy.org .. _Open MPI: https://www.open-mpi.org/ .. _psutil: https://pypi.org/project/psutil/ +.. _pixi: https://pixi.prefix.dev/latest/ .. _pydantic: https://docs.pydantic.dev/1.10/ .. _PyPI: https://pypi.org .. _Python: http://www.python.org diff --git a/docs/executor/overview.rst b/docs/executor/overview.rst index 6a0d23489d..8d8d043623 100644 --- a/docs/executor/overview.rst +++ b/docs/executor/overview.rst @@ -1,11 +1,8 @@ Executor Overview ================= -Most computationally expensive libEnsemble workflows involve launching applications -from a :ref:`sim_f` or :ref:`gen_f` running on a worker to the -compute nodes of a supercomputer, cluster, or other compute resource. - -The **Executor** provides a portable interface for running applications on any system. +The **Executor** provides a portable interface for running applications on any system and +any number of compute resources. .. dropdown:: Detailed description @@ -40,8 +37,6 @@ The **Executor** provides a portable interface for running applications on any s Basic usage ----------- -**In calling script** - To set up an MPI executor, register an MPI application, and add to the ensemble object. @@ -54,10 +49,6 @@ to the ensemble object. exctr.register_app(full_path="/path/to/my/exe", app_name="sim1") ensemble = Ensemble(executor=exctr) -If using the ``libE()`` call, the Executor in the calling script does **not** -have to be passed to the ``libE()`` function. 
It is transferred via the -``Executor.executor`` class variable. - **In user simulation function**:: def sim_func(H, persis_info, sim_specs, libE_info): @@ -178,10 +169,4 @@ which partitions resources among workers, ensuring that runs utilize different resources (e.g., nodes). Furthermore, the ``MPIExecutor`` offers resilience via the feature of re-launching tasks that fail to start because of system factors. -Various back-end mechanisms may be used by the Executor to best interact -with each system, including proxy launchers or task management systems. -Currently, these Executors launch at the application level within -an existing resource pool. However, submissions to a batch scheduler may be -supported in future Executors. - .. _concurrent futures: https://docs.python.org/library/concurrent.futures.html diff --git a/libensemble/executors/executor.py b/libensemble/executors/executor.py index 990ea2bc95..fbb7cc0841 100644 --- a/libensemble/executors/executor.py +++ b/libensemble/executors/executor.py @@ -63,7 +63,7 @@ class ExecutorException(Exception): class TimeoutExpired(Exception): """Timeout exception raised when Timeout expires""" - def __init__(self, task: str, timeout: float) -> None: + def __init__(self, task: str, timeout: float | None) -> None: self.task = task self.timeout = timeout @@ -151,9 +151,9 @@ def __init__( self.stderr = stderr or self.name + ".err" self.workdir = workdir self.dry_run = dry_run - self.runline = None + self.runline: str | None = None self.run_attempts = 0 - self.env = {} + self.env: dict[str, str] = {} self.ngpus_req = 0 def reset(self) -> None: @@ -239,6 +239,7 @@ def _set_complete(self) -> None: self.state = "FINISHED" else: self.calc_task_timing() + assert self.process is not None self.errcode = self.process.returncode self.success = self.errcode == 0 self.state = "FINISHED" if self.success else "FAILED" @@ -254,6 +255,7 @@ def poll(self) -> None: return # Poll the task + assert self.process is not None poll = 
self.process.poll() if poll is None: self.state = "RUNNING" @@ -330,7 +332,7 @@ def done(self) -> bool: self.poll() return self.finished - def kill(self, wait_time: int = 60) -> None: + def kill(self, wait_time: int | None = 60) -> None: """Kills or cancels the supplied task Parameters @@ -426,11 +428,11 @@ def __init__(self) -> None: """ self.manager_signal = None - self.default_apps = {"sim": None, "gen": None} - self.apps = {} + self.default_apps: dict[str, Application | None] = {"sim": None, "gen": None} + self.apps: dict[str, Application] = {} self.wait_time = 60 - self.list_of_tasks = [] + self.list_of_tasks: list[Task] = [] self.workerID = None self.comm = None self.last_task = 0 @@ -448,12 +450,12 @@ def serial_setup(self): pass # To be overloaded @property - def sim_default_app(self) -> Application: + def sim_default_app(self) -> Application | None: """Returns the default simulation app""" return self.default_apps["sim"] @property - def gen_default_app(self) -> Application: + def gen_default_app(self) -> Application | None: """Returns the default generator app""" return self.default_apps["gen"] @@ -468,7 +470,7 @@ def get_app(self, app_name: str) -> Application: ) return app - def default_app(self, calc_type: str) -> Application: + def default_app(self, calc_type: str) -> Application | None: """Gets the default app for a given calc type""" app = self.default_apps.get(calc_type) jassert(calc_type in ["sim", "gen"], "Unrecognized calculation type", calc_type) @@ -541,7 +543,7 @@ def register_app( jassert(calc_type in self.default_apps, "Unrecognized calculation type", calc_type) self.default_apps[calc_type] = self.apps[app_name] - def manager_poll(self) -> int: + def manager_poll(self) -> int | None: """ .. 
_manager_poll_label: @@ -552,12 +554,13 @@ def manager_poll(self) -> int: self.manager_signal = None # Reset + assert self.comm is not None # Check for messages; disregard anything but a stop signal if not self.comm.mail_flag(): - return + return None mtag, man_signal = self.comm.recv() if mtag != STOP_TAG: - return + return None # Process the signal and push back on comm (for now) self.manager_signal = man_signal @@ -580,8 +583,8 @@ def manager_kill_received(self) -> bool: def polling_loop( self, task: Task, timeout: int | None = None, delay: float = 0.1, poll_manager: bool = False ) -> int: - """Optional, blocking, generic task status polling loop. Operates until the task - finishes, times out, or is optionally killed via a manager signal. On completion, returns a + """Blocking, generic task status polling loop. Operates until the task + finishes, times out, or is killed via a manager signal. On completion, returns a presumptive :ref:`calc_status` integer. Useful for running an application via the Executor until it stops without monitoring its intermediate output. @@ -709,13 +712,13 @@ def submit( app_args: str | None = None, stdout: str | None = None, stderr: str | None = None, - dry_run: bool | None = False, - wait_on_start: bool | None = False, + dry_run: bool = False, + wait_on_start: bool = False, env_script: str | None = None, ) -> Task: """Create a new task and run as a local serial subprocess. - The created :class:`task` object is returned. + Returns :class:`task` object. 
Parameters ---------- @@ -758,6 +761,7 @@ def submit( The launched task object """ + app: Application | None = None if app_name is not None: app = self.get_app(app_name) elif calc_type is not None: @@ -765,6 +769,8 @@ def submit( else: raise ExecutorException("Either app_name or calc_type must be set") + assert app is not None + default_workdir = os.getcwd() task = Task(app, app_args, default_workdir, stdout, stderr, self.workerID, dry_run) diff --git a/libensemble/executors/mpi_executor.py b/libensemble/executors/mpi_executor.py index 4547753741..5a0190d5c4 100644 --- a/libensemble/executors/mpi_executor.py +++ b/libensemble/executors/mpi_executor.py @@ -1,9 +1,9 @@ """ This module launches and controls the running of MPI applications. -In order to create an MPI executor, the calling script should contain: +In order to create an MPI executor, the script should contain:: -.. code-block:: python + from libensemble.executors.mpi_executor import MPIExecutor exctr = MPIExecutor() @@ -17,7 +17,7 @@ import time import libensemble.utils.launcher as launcher -from libensemble.executors.executor import Executor, ExecutorException, Task +from libensemble.executors.executor import Application, Executor, ExecutorException, Task from libensemble.executors.mpi_runner import MPIRunner from libensemble.resources.mpi_resources import get_MPI_variant @@ -183,7 +183,7 @@ def _launch_with_retries( else: break - def submit( + def submit( # type: ignore[override] self, calc_type: str | None = None, app_name: str | None = None, @@ -196,18 +196,18 @@ def submit( stdout: str | None = None, stderr: str | None = None, stage_inout: str | None = None, - hyperthreads: bool | None = False, - dry_run: bool | None = False, - wait_on_start: bool | None = False, + hyperthreads: bool = False, + dry_run: bool = False, + wait_on_start: bool = False, extra_args: str | None = None, - auto_assign_gpus: bool | None = False, - match_procs_to_gpus: bool | None = False, + auto_assign_gpus: bool = False, + 
match_procs_to_gpus: bool = False, env_script: str | None = None, mpi_runner_type: str | dict | None = None, ) -> Task: """Creates a new task, and either executes or schedules execution. - The created :class:`task` object is returned. + Returns :class:`task` object. The user must supply either the app_name or calc_type arguments (app_name is recommended). All other arguments are optional. @@ -304,6 +304,7 @@ def submit( then the available resources will be divided among workers. """ + app: Application | None = None if app_name is not None: app = self.get_app(app_name) elif calc_type is not None: @@ -311,6 +312,8 @@ def submit( else: raise ExecutorException("Either app_name or calc_type must be set") + assert app is not None + default_workdir = os.getcwd() task = Task(app, app_args, default_workdir, stdout, stderr, self.workerID, dry_run) From d503a69fd50411f326ae28ad54fe3535fef0aa45 Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 24 Apr 2026 15:07:42 -0500 Subject: [PATCH 21/34] small tweaks. remove summit_submir_mproc.sh. refer to calling scripts as "top-level scripts" instead in a handful of spots --- docs/examples/alloc_funcs.rst | 4 +- docs/examples/calling_scripts.rst | 13 ++--- docs/examples/examples_index.rst | 2 +- libensemble/specs.py | 2 +- .../submission_scripts/summit_submit_mproc.sh | 52 ------------------- libensemble/tools/tools.py | 2 +- 6 files changed, 7 insertions(+), 68 deletions(-) delete mode 100755 libensemble/tests/scaling_tests/forces/submission_scripts/summit_submit_mproc.sh diff --git a/docs/examples/alloc_funcs.rst b/docs/examples/alloc_funcs.rst index 0454366a81..8c50a9153d 100644 --- a/docs/examples/alloc_funcs.rst +++ b/docs/examples/alloc_funcs.rst @@ -8,12 +8,10 @@ Below are example allocation functions available in libEnsemble. Many users use these unmodified. .. IMPORTANT:: - The default allocation function changed in libEnsemble v2.0 from `give_sim_work_first` to `start_only_persistent `. 
+ The default allocation function changed in libEnsemble v2.0 from ``give_sim_work_first`` to ``start_only_persistent``. .. note:: - The default allocation function for persistent generators is :ref:`start_only_persistent`. - The most commonly used allocation function for non-persistent generators is :ref:`give_sim_work_first`. .. role:: underline diff --git a/docs/examples/calling_scripts.rst b/docs/examples/calling_scripts.rst index 708a9d1280..a92a9d6c91 100644 --- a/docs/examples/calling_scripts.rst +++ b/docs/examples/calling_scripts.rst @@ -1,14 +1,7 @@ -Calling Scripts -=============== +Top-Level Scripts +================= -Below are example calling scripts used to populate specifications for each user -function and libEnsemble before initiating libEnsemble via the primary ``libE()`` -call. The primary libEnsemble-relevant portions have been highlighted in each -example. Non-highlighted portions may include setup routines, compilation steps -for user applications, or output processing. The first two scripts correspond to -random sampling calculations, while the third corresponds to an optimization routine. - -Many other examples of calling scripts can be found in libEnsemble's `regression tests`_. +Many other examples of top-level scripts can be found in libEnsemble's `regression tests`_. Local Sine Tutorial ------------------- diff --git a/docs/examples/examples_index.rst b/docs/examples/examples_index.rst index 1e92e21c03..5fa59a9d76 100644 --- a/docs/examples/examples_index.rst +++ b/docs/examples/examples_index.rst @@ -2,7 +2,7 @@ Overview of Examples ==================== Here we give example generation, simulation, and allocation functions for -libEnsemble, as well as example calling scripts. +libEnsemble, as well as example top-level scripts. The examples come from the libEnsemble repository and the `libEnsemble Community Repository`_. 
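The allocation-function note above names two policies; the difference can be sketched as a choice about which idle workers receive generator versus simulator work. This is a toy illustration only — the function names echo libEnsemble's, but the signatures and return values here are invented and are **not** libEnsemble's allocation-function API:

```python
# Toy sketch of the policy difference noted above (illustrative only; real
# libEnsemble allocation functions have a different signature and API).
def start_only_persistent(idle_workers, gen_started):
    """Dedicate one worker to a persistent generator; all others simulate."""
    work, workers = {}, list(idle_workers)
    if not gen_started and workers:
        work[workers.pop(0)] = "gen"  # this worker stays with the generator
    for w in workers:
        work[w] = "sim"
    return work


def give_sim_work_first(idle_workers, queued_sim_points):
    """Hand out queued simulation work first; generate only when none is queued."""
    work, points = {}, list(queued_sim_points)
    for w in idle_workers:
        work[w] = ("sim", points.pop(0)) if points else ("gen", None)
    return work


print(start_only_persistent([1, 2, 3], gen_started=False))
# -> {1: 'gen', 2: 'sim', 3: 'sim'}
```

The first policy matches the behavior described in the docs: one worker is dedicated to the persistent generator and cannot run simulations, while the second drains simulation work before asking for more points.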
diff --git a/libensemble/specs.py b/libensemble/specs.py index d983948259..05530512dc 100644 --- a/libensemble/specs.py +++ b/libensemble/specs.py @@ -682,7 +682,7 @@ def set_calc_dirs_on_input_dir(self): worker_cmd: list[str] | None = [] """ TCP Only: Split string corresponding to worker/client Python process invocation. Contains - a local Python path, calling script, and manager/server format-fields for ``manager_ip``, + a local Python path, user script, and manager/server format-fields for ``manager_ip``, ``manager_port``, ``authkey``, and ``workerID``. ``nworkers`` is specified normally. """ diff --git a/libensemble/tests/scaling_tests/forces/submission_scripts/summit_submit_mproc.sh b/libensemble/tests/scaling_tests/forces/submission_scripts/summit_submit_mproc.sh deleted file mode 100755 index 268ba64a36..0000000000 --- a/libensemble/tests/scaling_tests/forces/submission_scripts/summit_submit_mproc.sh +++ /dev/null @@ -1,52 +0,0 @@ -#!/bin/bash -x -#BSUB -P -#BSUB -J libe_mproc -#BSUB -W 20 -#BSUB -nnodes 4 -#BSUB -alloc_flags "smt1" - -# Script to run libEnsemble using multiprocessing on launch nodes. -# Assumes Conda environment is set up. - -# To be run with central job management -# - Manager and workers run on launch node. -# - Workers submit tasks to the nodes in the job available. - -# Name of calling script- -export EXE=run_libe_forces.py - -# Communication Method -export COMMS="--comms local" - -# Number of workers. -export NWORKERS="--nworkers 5" - -# Wallclock for libE. Slightly smaller than job wallclock -#export LIBE_WALLCLOCK=15 # Optional if pass to script - -# Name of Conda environment -export CONDA_ENV_NAME= - -export LIBE_PLOTS=true # Require plot scripts in $PLOT_DIR (see at end) -export PLOT_DIR=.. - -# Need these if not already loaded -# module load python -# module load gcc/4.8.5 - -# Activate conda environment -export PYTHONNOUSERSITE=1 -. 
activate $CONDA_ENV_NAME - -# hash -d python # Check pick up python in conda env -hash -r # Check no commands hashed (pip/python...) - -# Launch libE. -#python $EXE $NUM_WORKERS $LIBE_WALLCLOCK > out.txt 2>&1 -python $EXE $COMMS $NWORKERS > out.txt 2>&1 - -if [[ $LIBE_PLOTS = "true" ]]; then - python $PLOT_DIR/plot_libe_calcs_util_v_time.py - python $PLOT_DIR/plot_libe_tasks_util_v_time.py - python $PLOT_DIR/plot_libe_histogram.py -fi diff --git a/libensemble/tools/tools.py b/libensemble/tools/tools.py index 4caa408737..398255b069 100644 --- a/libensemble/tools/tools.py +++ b/libensemble/tools/tools.py @@ -1,5 +1,5 @@ """ -The libEnsemble utilities module assists in writing consistent calling scripts +The libEnsemble utilities module assists in writing consistent top-level scripts and user functions. """ From 9f7cd8917540c8af5592e051f11d90a0c9919d02 Mon Sep 17 00:00:00 2001 From: jlnav Date: Tue, 28 Apr 2026 11:09:58 -0500 Subject: [PATCH 22/34] fix persis_info examples --- docs/data_structures/persis_info.rst | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/data_structures/persis_info.rst b/docs/data_structures/persis_info.rst index d5327241f5..8d48474cb5 100644 --- a/docs/data_structures/persis_info.rst +++ b/docs/data_structures/persis_info.rst @@ -27,9 +27,9 @@ Examples: .. literalinclude:: ../../libensemble/gen_funcs/sampling.py :linenos: - :start-at: def uniform_random_sample(_, persis_info, gen_specs): - :end-before: def uniform_random_sample_with_variable_resources(_, persis_info, gen_specs): - :emphasize-lines: 17 + :start-at: def uniform_random_sample(_, persis_info, gen_specs, libE_info): + :end-before: def uniform_random_sample_with_variable_resources(_, persis_info, gen_specs, libE_info): + :emphasize-lines: 10 :caption: libensemble/libensemble/gen_funcs/sampling.py .. tab-item:: Incrementing indexes or process counts @@ -44,7 +44,7 @@ Examples: .. 
literalinclude:: ../../libensemble/alloc_funcs/start_only_persistent.py :linenos: - :start-at: avail_workers = support.avail_worker_ids(persistent=False, zero_resource_workers=True, gen_workers=True) + :start-at: avail_workers = support.avail_worker_ids(persistent=False, gen_workers=True) :end-before: return Work, persis_info, 0 :emphasize-lines: 18 :caption: libensemble/alloc_funcs/start_only_persistent.py From 4c26b0703e756d47be01aa4f9e524cf15c8cc340 Mon Sep 17 00:00:00 2001 From: jlnav Date: Tue, 28 Apr 2026 11:29:19 -0500 Subject: [PATCH 23/34] updating history_array.rst for gest-api considerations. remove more summit mentions/scripts --- docs/function_guides/history_array.rst | 82 ++++++------------- docs/platforms/example_scripts.rst | 7 -- .../summit_submit_mproc.sh | 44 ---------- 3 files changed, 25 insertions(+), 108 deletions(-) delete mode 100644 examples/libE_submission_scripts/summit_submit_mproc.sh diff --git a/docs/function_guides/history_array.rst b/docs/function_guides/history_array.rst index f10d4c73d2..d09c19c27f 100644 --- a/docs/function_guides/history_array.rst +++ b/docs/function_guides/history_array.rst @@ -15,25 +15,25 @@ libEnsemble uses a NumPy structured array to store information about each point The manager maintains a global copy. Each row contains: - 1. Data generated by the :ref:`gen_f` - 2. Resultant output from the :ref:`sim_f` + 1. Data generated by the :ref:`generator` + 2. Resultant output from the :ref:`simulator function` 3. :ref:`Reserved fields` containing metadata -When the history array is initialized, it creates fields for each -``gen_specs["out"]`` and ``sim_specs["out"]`` entry. These entries may resemble:: +**Simulator functions** (``sim_f``) must return their data as arrays with the same +:ref:`types` as ``sim_specs["out"]``. 
Alternatively, a ``simulator`` +callable in gest-api format (accepting and returning a ``dict``) can be provided via +``SimSpecs.simulator``; libEnsemble wraps it automatically and handles the NumPy +conversion. - gen_specs["out"] = [("x", float, 2), ("theta", int)] - sim_specs["out"] = [("f", float)] +**Generators** that adhere to the ``gest_api`` standard implement ``suggest()`` and +``ingest()`` methods that operate on lists of Python dictionaries. libEnsemble +automatically casts their ``dict`` outputs to NumPy for inclusion in the History array. -.. In this example, ``x`` is a two-dimensional coordinate, ``theta`` represents some -.. integer input parameter, and ``f`` is a scalar output of the simulation to be -.. run with the generated ``x`` and ``theta`` values. - -Therefore, the ``gen_f`` and ``sim_f`` must return output as NumPy -structured arrays for slotting into these fields. - -.. (The manager's history array will update any fields -.. returned to it.) +When using a ``VOCS`` object (from ``gest_api.vocs``) to parameterize ``GenSpecs`` or +``SimSpecs``, field names in the History array are derived automatically from the VOCS +variable, objective, and constraint keys. ``LibensembleGenerator`` subclasses optionally +collapse all VOCS variables into a single ``"x"`` array field (and objectives into +``"f"``) unless an explicit ``variables_mapping`` is provided. Ensure input/output field names for a function match each other or a :ref:`reserved field`:: @@ -48,45 +48,12 @@ Reserved Fields User fields and reserved fields are combined together in the final History array returned by libEnsemble. -.. Automatically tracked fields within the History array include: - -.. 1. ``sim_id``, to globally identify the point. Assigned by manager if the generator doesn't provide. -.. 2. ``cancel_requested``, - -.. The manager's history array also contains several reserved fields. These -.. include a ``sim_id`` to globally identify the point (on the manager this is -.. 
usually the same as the array index). The ``sim_id`` can be provided by the -.. user from the ``gen_f``, but is otherwise assigned by the manager as generated -.. points are received. - -.. The reserved boolean field ``cancel_requested`` can also be set in a user -.. function to request that libEnsemble cancels the evaluation of the point. - -.. The remaining reserved fields are protected (populated by libEnsemble), and -.. store information about each entry. These include boolean fields for the -.. current scheduling status of the point (``sim_started`` when the sim evaluation -.. has started out, ``sim_ended`` when sim evaluation has completed, and -.. ``gen_informed`` when the sim output has been passed back to the generator). -.. Timing fields give the time (since the epoch) corresponding to each state, and -.. when the point was generated. Other protected fields include the worker IDs on -.. which points were generated or evaluated. - -.. The user fields and the reserved fields together make up the final history array -.. returned by libEnsemble. - These reserved fields can be modified to adjust how/when a point is evaluated: * ``sim_id`` [int]: Each unit of work must have a ``sim_id``. This can be set by the generator or by the manager by default. Users should ensure these IDs are sequential and unique when running multiple generators. -.. * The generator can assign this, but users must be -.. careful to ensure that points are added in order. For example, if ``alloc_f`` -.. allows for two ``gen_f`` instances to be running simultaneously, ``alloc_f`` -.. should ensure that both don't generate points with the same ``sim_id``. -.. If the generator does not provide, then a ``sim_id`` will be assigned by the -.. manager as generated points are received. - * ``cancel_requested`` [bool]: Can be set ``True`` in a generator to request attempted cancellation of the corresponding simulation. 
@@ -114,11 +81,9 @@ The following fields are automatically populated by libEnsemble: ``kill_sent`` [bool]: ``True`` if a kill signal was sent to worker for this entry -Other than ``"sim_id"`` and ``cancel_requested``, these fields cannot be -overwritten by user functions unless ``libE_specs["safe_mode"]`` is set to ``False``. - -.. warning:: - Adjusting values in protected fields may crash libEnsemble. +Other than ``"sim_id"`` and ``"cancel_requested"``, these fields cannot be +overwritten by user functions when ``libE_specs["safe_mode"]`` is set to ``True`` +(protection is opt-in; the default value of ``safe_mode`` is ``False``). Example Workflow updating History --------------------------------- @@ -136,10 +101,13 @@ reserved fields: ``sim_id``, ``sim_started``, and ``sim_ended`` are shown for br | -:ref:`gen_f` and :ref:`sim_f` functions accept a local history -array as the first argument that contains only the rows and fields specified. -For new function calls these will be specified by either ``gen_specs["in"]`` or -``sim_specs["in"]``. For generators this may be empty. +For legacy generator functions (``gen_f``), the function accepts a local history +array slice as the first argument containing only the rows and fields specified by +``gen_specs["in"]`` (may be empty). It returns a NumPy structured array that +libEnsemble writes into H. + +For gest-api generators, ``suggest(n)`` returns a list of dicts and ``ingest(results)`` +receives a list of dicts; libEnsemble handles all conversions to and from NumPy. | diff --git a/docs/platforms/example_scripts.rst b/docs/platforms/example_scripts.rst index d534f0c662..d6d7892abd 100644 --- a/docs/platforms/example_scripts.rst +++ b/docs/platforms/example_scripts.rst @@ -95,10 +95,3 @@ SLURM - MPI / Distributed Mode (co-locate workers & MPI applications) .. 
literalinclude:: ../../examples/libE_submission_scripts/submit_distrib_mpi4py.sh :caption: /examples/libE_submission_scripts/submit_distrib_mpi4py.sh :language: bash - -Summit (Decommissioned) - On Launch Nodes with Multiprocessing -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. literalinclude:: ../../examples/libE_submission_scripts/summit_submit_mproc.sh - :caption: /examples/libE_submission_scripts/summit_submit_mproc.sh - :language: bash diff --git a/examples/libE_submission_scripts/summit_submit_mproc.sh b/examples/libE_submission_scripts/summit_submit_mproc.sh deleted file mode 100644 index ba565f6c82..0000000000 --- a/examples/libE_submission_scripts/summit_submit_mproc.sh +++ /dev/null @@ -1,44 +0,0 @@ -#!/bin/bash -x -#BSUB -P -#BSUB -J libe_mproc -#BSUB -W 30 -#BSUB -nnodes 4 -#BSUB -alloc_flags "smt1" - -# Script to run libEnsemble using multiprocessing on launch nodes. -# Assumes Conda environment is set up. - -# To be run with central job management -# - Manager and workers run on launch node. -# - Workers submit tasks to the compute nodes in the allocation. - -# Name of calling script- -export EXE=libE_calling_script.py - -# Communication Method -export COMMS="--comms local" - -# Number of workers. -export NWORKERS="--nworkers 4" - -# Wallclock for libE. (allow clean shutdown) -export LIBE_WALLCLOCK=25 # Optional if pass to script - -# Name of Conda environment -export CONDA_ENV_NAME= - -# Need these if not already loaded -# module load python -# module load gcc/4.8.5 - -# Activate conda environment -export PYTHONNOUSERSITE=1 -. activate $CONDA_ENV_NAME - -# hash -d python # Check pick up python in conda env -hash -r # Check no commands hashed (pip/python...) - -# Launch libE -# python $EXE $NUM_WORKERS > out.txt 2>&1 # No args. 
All defined in calling script -# python $EXE $COMMS $NWORKERS > out.txt 2>&1 # If calling script is using parse_args() -python $EXE $LIBE_WALLCLOCK $COMMS $NWORKERS > out.txt 2>&1 # If calling script takes wall-clock as positional arg. From c192b1c43fd649f01d44e6d500ae0d631218c024 Mon Sep 17 00:00:00 2001 From: jlnav Date: Tue, 28 Apr 2026 12:05:26 -0500 Subject: [PATCH 24/34] additional AGENTS.md considerations. New writing-a-new-simf tab --- AGENTS.md | 2 + docs/function_guides/history_array.rst | 4 +- docs/function_guides/simulator.rst | 117 ++++++++++++++++++------- 3 files changed, 87 insertions(+), 36 deletions(-) diff --git a/AGENTS.md b/AGENTS.md index c45ccca322..f5673f64a7 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -97,3 +97,5 @@ When modernizing existing libEnsemble scripts (functionality tests, regression t - **Remove Explicit `AllocSpecs`**: In libEnsemble 2.0, `only_persistent_gens` is the default allocator. Scripts that previously used `give_sim_work_first` or other simple allocators can often remove `alloc_specs` entirely when switching to standardized generators. - **Generator Placement**: By default, generators run on the manager thread (Worker 0). This means all allocated workers are available for simulation tasks unless `gen_on_worker` is explicitly set to `True` in `libE_specs`. - **Mandatory Fields**: Ensure `gen_specs["in"]` or `gen_specs["persis_in"]` includes at least one field (e.g., `["sim_id"]`) if feedback is sent back to the generator, to satisfy the allocator's requirements. +- **gest-api Simulators**: The gest-api pattern also applies to simulators. Set `SimSpecs.simulator` to a callable with signature `(input_dict: dict, **kwargs) -> dict` instead of providing a `sim_f`. libEnsemble automatically wraps it with `gest_api_sim` from `libensemble.sim_funcs.gest_api_wrapper` and handles all NumPy conversions. `SimSpecs.inputs` and `SimSpecs.outputs` can be derived automatically when `SimSpecs.vocs` is provided. 
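The gest-api simulator bullet above can be illustrated with a minimal sketch. The ``(input_dict: dict, **kwargs) -> dict`` signature comes from the text; ``toy_batch_wrap`` below is a hypothetical stand-in for the real ``gest_api_sim`` wrapper and performs none of the actual NumPy conversion:

```python
def my_simulation(input_dict: dict, **kwargs) -> dict:
    # One point in, one dict of outputs back (keyed by VOCS names).
    x1, x2 = input_dict["x1"], input_dict["x2"]
    return {"f": (x1 - 1) ** 2 + (x2 - 2) ** 2}

def toy_batch_wrap(simulator, points):
    # Hypothetical wrapper: maps the single-point callable over a batch.
    return [simulator(p) for p in points]

results = toy_batch_wrap(my_simulation, [{"x1": 1.0, "x2": 2.0}, {"x1": 0.0, "x2": 0.0}])
print(results)  # [{'f': 0.0}, {'f': 5.0}]
```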
+- **`safe_mode` is opt-in**: `libE_specs["safe_mode"]` defaults to `False`, meaning protected History fields (`gen_worker`, `gen_started_time`, `gen_ended_time`, `sim_worker`, `sim_started`, `sim_started_time`, `sim_ended`, `sim_ended_time`, `gen_informed`, `gen_informed_time`, `kill_sent`) are freely overwritable by default. Set `safe_mode=True` to enable protection. Overwriting these fields without understanding their purpose may crash libEnsemble. diff --git a/docs/function_guides/history_array.rst b/docs/function_guides/history_array.rst index d09c19c27f..03ded946d9 100644 --- a/docs/function_guides/history_array.rst +++ b/docs/function_guides/history_array.rst @@ -20,9 +20,9 @@ The manager maintains a global copy. Each row contains: 3. :ref:`Reserved fields` containing metadata **Simulator functions** (``sim_f``) must return their data as arrays with the same -:ref:`types` as ``sim_specs["out"]``. Alternatively, a ``simulator`` +dtype as ``sim_specs["out"]``. Alternatively, a ``simulator`` callable in gest-api format (accepting and returning a ``dict``) can be provided via -``SimSpecs.simulator``; libEnsemble wraps it automatically and handles the NumPy +``SimSpecs.simulator``; libEnsemble wraps it automatically and handles the dtype conversion. **Generators** that adhere to the ``gest_api`` standard implement ``suggest()`` and diff --git a/docs/function_guides/simulator.rst b/docs/function_guides/simulator.rst index 46c625488d..65181e8537 100644 --- a/docs/function_guides/simulator.rst +++ b/docs/function_guides/simulator.rst @@ -8,59 +8,108 @@ Simulator and :ref:`Generator functions` have relatively similar Writing a Simulator ------------------- -.. code-block:: python +.. note:: + The `gest-api` simulator interface is the recommended approach for new libEnsemble projects. + The "Legacy Simulator Function" interface is supported for backward compatibility but may be deprecated in a future release. + +.. tab-set:: + + .. 
tab-item:: Standardized Simulator (gest-api) + + Standardized simulators are plain callables — no base class required — with the signature:: + + def my_simulation(input_dict: dict, **kwargs) -> dict: + + They receive a single point as a Python dictionary (keyed by VOCS variable and constant + names) and return a dictionary of outputs (keyed by VOCS objective, observable, and + constraint names). + + .. code-block:: python + + def my_simulation(input_dict: dict, **kwargs) -> dict: + x1 = input_dict["x1"] + x2 = input_dict["x2"] + f = (x1 - 1) ** 2 + (x2 - 2) ** 2 + return {"f": f} + + Configure it with ``SimSpecs`` using a ``VOCS`` object. ``inputs`` and ``outputs`` + are derived automatically from the VOCS when not set explicitly: + + .. code-block:: python + + from gest_api.vocs import VOCS + from libensemble.specs import SimSpecs + + vocs = VOCS( + variables={"x1": [0, 1.0], "x2": [0, 10.0]}, + objectives={"f": "MINIMIZE"}, + ) + + sim_specs = SimSpecs( + simulator=my_simulation, + vocs=vocs, + ) + + If ``libE_info`` is needed (e.g., to access the :doc:`executor<../executor/overview>`), + declare it as a keyword argument and libEnsemble will pass it automatically:: + + def my_simulation(input_dict: dict, libE_info=None, **kwargs) -> dict: + + .. tab-item:: Legacy Simulator Function + + .. code-block:: python - def my_simulation(Input, persis_info, sim_specs, libE_info): - batch_size = sim_specs["user"]["batch_size"] + def my_simulation(Input, persis_info, sim_specs, libE_info): + batch_size = sim_specs["user"]["batch_size"] - Output = np.zeros(batch_size, sim_specs["out"]) - # ... - Output["f"], persis_info = do_a_simulation(Input["x"], persis_info) + Output = np.zeros(batch_size, sim_specs["out"]) + # ... 
+ Output["f"], persis_info = do_a_simulation(Input["x"], persis_info) - return Output, persis_info + return Output, persis_info -Most ``sim_f`` function definitions written by users resemble:: + Most ``sim_f`` function definitions written by users resemble:: - def my_simulation(Input, persis_info, sim_specs, libE_info): + def my_simulation(Input, persis_info, sim_specs, libE_info): -where: + where: - * ``Input`` is a selection of the :ref:`History array`, a NumPy structured array. - * :ref:`persis_info` is a dictionary containing state information. - * :ref:`sim_specs` is a dictionary of simulation parameters. - * ``libE_info`` is a dictionary containing libEnsemble-specific entries. + * ``Input`` is a selection of the :ref:`History array`, a NumPy structured array. + * :ref:`persis_info` is a dictionary containing state information. + * :ref:`sim_specs` is a dictionary of simulation parameters. + * ``libE_info`` is a dictionary containing libEnsemble-specific entries. -Valid simulator functions can accept a subset of the above parameters. So a very simple simulator function can start:: + Valid simulator functions can accept a subset of the above parameters. So a very simple simulator function can start:: - def my_simulation(Input): + def my_simulation(Input): -If ``sim_specs`` was initially defined: + If ``sim_specs`` was initially defined: -.. code-block:: python + .. 
code-block:: python - sim_specs = SimSpecs( - sim_f=my_simulation, - inputs=["x"], - outputs=["f", float, (1,)], - user={"batch_size": 128}, - ) + sim_specs = SimSpecs( + sim_f=my_simulation, + inputs=["x"], + outputs=["f", float, (1,)], + user={"batch_size": 128}, + ) -Then user parameters and a *local* array of outputs may be obtained/initialized like:: + Then user parameters and a *local* array of outputs may be obtained/initialized like:: - batch_size = sim_specs["user"]["batch_size"] - Output = np.zeros(batch_size, dtype=sim_specs["out"]) + batch_size = sim_specs["user"]["batch_size"] + Output = np.zeros(batch_size, dtype=sim_specs["out"]) -This array should be populated with output values from the simulation:: + This array should be populated with output values from the simulation:: - Output["f"], persis_info = do_a_simulation(Input["x"], persis_info) + Output["f"], persis_info = do_a_simulation(Input["x"], persis_info) -Then return the array and ``persis_info`` to libEnsemble:: + Then return the array and ``persis_info`` to libEnsemble:: - return Output, persis_info + return Output, persis_info -Between the ``Output`` definition and the ``return``, any computation can be performed. -Users can try an :doc:`executor<../executor/overview>` to submit applications to parallel -resources, or plug in components from other libraries to serve their needs. + Between the ``Output`` definition and the ``return``, any computation can be performed. + Users can try an :doc:`executor<../executor/overview>` to submit applications to parallel + resources, or plug in components from other libraries to serve their needs. 
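The legacy flow in the tab above can be exercised end-to-end with a toy in-process simulation. This is a sketch assuming NumPy is available; ``do_a_simulation`` is a placeholder computation, and ``persis_info`` is omitted since this toy does not use it:

```python
import numpy as np

# Specs mirroring the tab above: one float output "f", batch of 3.
sim_specs = {"out": [("f", float)], "user": {"batch_size": 3}}

# A slice of the History array such as a sim_f would receive as `Input`.
Input = np.zeros(3, dtype=[("x", float)])
Input["x"] = [0.0, 1.0, 2.0]

def do_a_simulation(x):
    # Placeholder standing in for a real simulation.
    return (x - 1.0) ** 2

batch_size = sim_specs["user"]["batch_size"]
Output = np.zeros(batch_size, dtype=sim_specs["out"])
Output["f"] = do_a_simulation(Input["x"])
print(Output["f"].tolist())  # [1.0, 0.0, 1.0]
```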
Executor -------- From 56d57e6408d37af4c54367c66c335cf054e773c7 Mon Sep 17 00:00:00 2001 From: jlnav Date: Tue, 28 Apr 2026 12:18:44 -0500 Subject: [PATCH 25/34] convert Executor pages to single page with tabs for Overview, Base, MPI --- docs/executor/ex_index.rst | 261 ++++++++++++++++++- docs/executor/executor.rst | 62 ----- docs/executor/mpi_executor.rst | 40 --- docs/executor/overview.rst | 172 ------------ docs/function_guides/generator.rst | 2 +- docs/function_guides/simulator.rst | 6 +- docs/overview_usecases.rst | 2 +- docs/platforms/bebop.rst | 2 +- docs/platforms/perlmutter.rst | 4 +- docs/platforms/platforms_index.rst | 8 +- docs/platforms/srun.rst | 6 +- docs/resource_manager/overview.rst | 4 +- docs/resource_manager/resource_detection.rst | 2 +- docs/running_libE.rst | 4 +- docs/tutorials/aposmm_tutorial.rst | 2 +- docs/tutorials/calib_cancel_tutorial.rst | 4 +- docs/tutorials/executor_forces_tutorial.rst | 6 +- 17 files changed, 279 insertions(+), 308 deletions(-) delete mode 100644 docs/executor/executor.rst delete mode 100644 docs/executor/mpi_executor.rst delete mode 100644 docs/executor/overview.rst diff --git a/docs/executor/ex_index.rst b/docs/executor/ex_index.rst index ee4698c21f..0a05698ba5 100644 --- a/docs/executor/ex_index.rst +++ b/docs/executor/ex_index.rst @@ -6,11 +6,256 @@ Executors libEnsemble's Executors can be used within user functions to provide a simple, portable interface for running and managing user applications. -.. toctree:: - :maxdepth: 2 - :titlesonly: - :caption: libEnsemble Executors: - - overview - executor - mpi_executor +.. tab-set:: + + .. tab-item:: Overview + + The **Executor** provides a portable interface for running applications on any system and + any number of compute resources. + + .. dropdown:: Detailed description + + An **Executor** interface is provided by libEnsemble to remove the burden + of system interaction from the user and improve workflow portability. 
Users
+         first register their applications to Executor instances, which then return
+         corresponding ``Task`` objects upon submission within user functions.
+
+         **Task** attributes and retrieval functions can be queried to determine
+         the status of running application instances. Functions are also provided
+         to access and interrogate files in the task's working directory.
+
+         libEnsemble's Executors and Tasks share many features and methods with
+         Python's native `concurrent futures`_ interface. Executors feature the
+         ``submit()`` function for launching apps (detailed below), but currently do
+         not support ``map()`` or ``shutdown()``. Tasks are much like ``futures``.
+         They feature the ``cancel()``, ``cancelled()``, ``running()``, ``done()``,
+         ``result()``, and ``exception()`` functions from the standard.
+
+      The main ``Executor`` class can subprocess serial applications in place,
+      while the ``MPIExecutor`` is used for running MPI applications.
+
+      Typically, users choose and parameterize their ``Executor`` objects in their
+      top-level scripts, where each executable generator or simulation application is
+      registered to it. Once in the user-side worker code (sim/gen func), the Executor
+      can be retrieved without any need to specify the type.
+
+      Once the Executor is retrieved, tasks can be submitted by specifying the
+      ``app_name`` from registration in the top-level script alongside other optional
+      parameters described in the API.
+
+      Basic usage
+      -----------
+
+      To set up an MPI executor, register an MPI application, and add it
+      to the ensemble object.
+
+      .. 
code-block:: python + + from libensemble import Ensemble + from libensemble.executors import MPIExecutor + + exctr = MPIExecutor() + exctr.register_app(full_path="/path/to/my/exe", app_name="sim1") + ensemble = Ensemble(executor=exctr) + + **In user simulation function**:: + + def sim_func(H, persis_info, sim_specs, libE_info): + + input_param = str(int(H["x"][0][0])) + exctr = libE_info["executor"] + + task = exctr.submit( + app_name="sim1", + num_procs=8, + app_args=input_param, + stdout="out.txt", + stderr="err.txt", + ) + + # Wait for task to complete + task.wait() + + Example use-cases: + + * :doc:`Electrostatic Forces example <../tutorials/executor_forces_tutorial>`: Launches the ``forces.x`` MPI application. + + * :doc:`Forces example with GPUs <../tutorials/forces_gpu_tutorial>`: Auto-assigns GPUs via executor. + + See :doc:`Running on HPC Systems<../platforms/platforms_index>` for illustrations + of how common options such as ``libE_specs["dedicated_mode"]`` affect the + run configuration on clusters and supercomputers. + + Advanced Features + ----------------- + + **Example of polling output and killing application:** + + In simulation function (sim_f). + + .. 
code-block:: python + + import time + + + def sim_func(H, persis_info, sim_specs, libE_info): + input_param = str(int(H["x"][0][0])) + exctr = libE_info["executor"] + + task = exctr.submit( + app_name="sim1", + num_procs=8, + app_args=input_param, + stdout="out.txt", + stderr="err.txt", + ) + + timeout_sec = 600 + poll_delay_sec = 1 + + while not task.finished: + # Has manager sent a finish signal + if exctr.manager_kill_received(): + task.kill() + my_cleanup() + + # Check output file for error and kill task + elif task.stdout_exists(): + if "Error" in task.read_stdout(): + task.kill() + + elif task.runtime > timeout_sec: + task.kill() # Timeout + + else: + time.sleep(poll_delay_sec) + task.poll() + + print(task.state) # state may be finished/failed/killed + + Users who wish to poll only for manager kill signals and timeouts don't necessarily + need to construct a polling loop like above, but can instead use the ``Executor`` + built-in ``polling_loop()`` method. An alternative to the above simulation function + may resemble: + + .. code-block:: python + + def sim_func(H, persis_info, sim_specs, libE_info): + input_param = str(int(H["x"][0][0])) + exctr = libE_info["executor"] + + task = exctr.submit( + app_name="sim1", + num_procs=8, + app_args=input_param, + stdout="out.txt", + stderr="err.txt", + ) + + timeout_sec = 600 + poll_delay_sec = 1 + + exctr.polling_loop(task, timeout=timeout_sec, delay=poll_delay_sec) + + print(task.state) # state may be finished/failed/killed + + The ``MPIExecutor`` autodetects system criteria such as the appropriate MPI launcher + and mechanisms to poll and kill tasks. It also has access to the resource manager, + which partitions resources among workers, ensuring that runs utilize different + resources (e.g., nodes). Furthermore, the ``MPIExecutor`` offers resilience via the + feature of re-launching tasks that fail to start because of system factors. + + .. 
_concurrent futures: https://docs.python.org/library/concurrent.futures.html + + .. tab-item:: Base Executor + + .. automodule:: executor + :no-undoc-members: + + Only for running local serial-launched applications. + To run MPI applications and use detected resources, use the `MPI Executor` tab. + + .. tab-set:: + + .. tab-item:: Base Executor + + .. autoclass:: libensemble.executors.executor.Executor + :members: + :exclude-members: serial_setup, sim_default_app, gen_default_app, get_app, default_app, set_resources, get_task, set_workerID, set_worker_info, new_tasks_timing, add_platform_info, set_gen_procs_gpus, kill, poll + + .. automethod:: __init__ + + .. tab-item:: Task + + .. _task_tag: + + Tasks are created and returned by the Executor's ``submit()``. Tasks + can be polled, killed, and waited on with the respective ``poll``, ``kill``, and ``wait`` functions. + Task information can be queried through instance attributes and query functions. + + .. autoclass:: libensemble.executors.executor.Task + :members: + :exclude-members: calc_task_timing, check_poll + + .. tab-item:: Task Attributes + + .. note:: + These should not be set directly. Tasks are launched by the Executor, + and task information can be queried through the task attributes + below and the query functions. + + :task.state: (string) The task status. One of + ("UNKNOWN"|"CREATED"|"WAITING"|"RUNNING"|"FINISHED"|"USER_KILLED"|"FAILED"|"FAILED_TO_START") + + :task.process: (process obj) The process object used by the underlying process + manager (e.g., return value of subprocess.Popen). + :task.errcode: (int) The error code (or return code) used by the underlying process manager. + :task.finished: (boolean) True means task has finished running - not whether it was successful. + :task.success: (boolean) Did task complete successfully (e.g., the return code is zero)? + :task.runtime: (int) Time in seconds that task has been running. + :task.submit_time: (int) Time since epoch that task was submitted. 
+ :task.total_time: (int) Total time from task submission to completion (only available when task is finished).
+
+         Run configuration attributes - some will be autogenerated:
+
+         :task.workdir: (string) Work directory for the task
+         :task.name: (string) Name of task - autogenerated
+         :task.app: (app obj) Use application/executable, registered using exctr.register_app
+         :task.app_args: (string) Application arguments as a string
+         :task.stdout: (string) Name of file where the standard output of the task is written (in task.workdir)
+         :task.stderr: (string) Name of file where the standard error of the task is written (in task.workdir)
+         :task.dry_run: (boolean) True if task corresponds to dry run (no actual submission)
+         :task.runline: (string) Complete, parameterized command to be subprocessed to launch app
+
+   .. tab-item:: MPI Executor
+
+      .. automodule:: mpi_executor
+         :no-undoc-members:
+
+      .. autoclass:: libensemble.executors.mpi_executor.MPIExecutor
+         :show-inheritance:
+         :inherited-members:
+         :exclude-members: serial_setup, sim_default_app, gen_default_app, get_app, default_app, set_resources, get_task, set_workerID, set_worker_info, new_tasks_timing, add_platform_info, set_gen_procs_gpus, kill, poll
+
+      Class-specific Attributes
+      -------------------------
+
+      Class-specific attributes can be set directly to alter the behavior of the MPI
+      Executor. However, they should be used with caution, because they may not
+      be implemented in other executors.
+
+      :max_launch_attempts: (int) Maximum number of launch attempts for a given
+         task. *Default: 5*.
+      :fail_time: (int or float) *Only if wait_on_start is set.* Maximum run time to failure in
+         seconds that results in relaunch. *Default: 2*.
+      :retry_delay_incr: (int or float) Delay increment between launch attempts in seconds.
+         *Default: 5*. (i.e., First retry after 5 seconds, then 10 seconds, then 15, etc...)
+
+      Example. 
To increase resilience against submission failures::
+
+        taskctrl = MPIExecutor()
+        taskctrl.max_submit_attempts = 8
+        taskctrl.fail_time = 5
+        taskctrl.retry_delay_incr = 10
+
+    .. _customizer:
diff --git a/docs/executor/executor.rst b/docs/executor/executor.rst
deleted file mode 100644
index 6784134a05..0000000000
--- a/docs/executor/executor.rst
+++ /dev/null
@@ -1,62 +0,0 @@
-Base Executor - Local apps
-==========================
-
-.. automodule:: executor
-   :no-undoc-members:
-
-See the Executor APIs for optional arguments.
-
-.. tab-set::
-
-    .. tab-item:: Base Executor
-
-        Only for running local serial-launched applications.
-        To run MPI applications and use detected resources, use the :doc:`MPIExecutor<../executor/mpi_executor>`
-
-        .. autoclass:: libensemble.executors.executor.Executor
-            :members:
-            :exclude-members: serial_setup, sim_default_app, gen_default_app, get_app, default_app, set_resources, get_task, set_workerID, set_worker_info, new_tasks_timing, add_platform_info, set_gen_procs_gpus, kill, poll
-
-            .. automethod:: __init__
-
-    .. tab-item:: Task
-
-        .. _task_tag:
-
-        Tasks are created and returned by the Executor's ``submit()``. Tasks
-        can be polled, killed, and waited on with the respective ``poll``, ``kill``, and ``wait`` functions.
-        Task information can be queried through instance attributes and query functions.
-
-        .. autoclass:: libensemble.executors.executor.Task
-            :members:
-            :exclude-members: calc_task_timing, check_poll
-
-    .. tab-item:: Task Attributes
-
-        .. note::
-            These should not be set directly. Tasks are launched by the Executor,
-            and task information can be queried through the task attributes
-            below and the query functions.
-
-        :task.state: (string) The task status. One of
-            ("UNKNOWN"|"CREATED"|"WAITING"|"RUNNING"|"FINISHED"|"USER_KILLED"|"FAILED"|"FAILED_TO_START")
-
-        :task.process: (process obj) The process object used by the underlying process
-            manager (e.g., return value of subprocess.Popen).
- :task.errcode: (int) The error code (or return code) used by the underlying process manager. - :task.finished: (boolean) True means task has finished running - not whether it was successful. - :task.success: (boolean) Did task complete successfully (e.g., the return code is zero)? - :task.runtime: (int) Time in seconds that task has been running. - :task.submit_time: (int) Time since epoch that task was submitted. - :task.total_time: (int) Total time from task submission to completion (only available when task is finished). - - Run configuration attributes - some will be autogenerated: - - :task.workdir: (string) Work directory for the task - :task.name: (string) Name of task - autogenerated - :task.app: (app obj) Use application/executable, registered using exctr.register_app - :task.app_args: (string) Application arguments as a string - :task.stdout: (string) Name of file where the standard output of the task is written (in task.workdir) - :task.stderr: (string) Name of file where the standard error of the task is written (in task.workdir) - :task.dry_run: (boolean) True if task corresponds to dry run (no actual submission) - :task.runline: (string) Complete, parameterized command to be subprocessed to launch app diff --git a/docs/executor/mpi_executor.rst b/docs/executor/mpi_executor.rst deleted file mode 100644 index 13773f5ad5..0000000000 --- a/docs/executor/mpi_executor.rst +++ /dev/null @@ -1,40 +0,0 @@ -MPI Executor - MPI apps -======================= - -.. automodule:: mpi_executor - :no-undoc-members: - -See this :doc:`example` for usage. - -.. autoclass:: libensemble.executors.mpi_executor.MPIExecutor - :show-inheritance: - :inherited-members: - :exclude-members: serial_setup, sim_default_app, gen_default_app, get_app, default_app, set_resources, get_task, set_workerID, set_worker_info, new_tasks_timing, add_platform_info, set_gen_procs_gpus, kill, poll - -.. .. automethod:: __init__ - -.. :member-order: bysource -.. 
:members: __init__, register_app, submit, manager_poll - -Class-specific Attributes -------------------------- - -Class-specific attributes can be set directly to alter the behavior of the MPI -Executor. However, they should be used with caution, because they may not -be implemented in other executors. - -:max_submit_attempts: (int) Maximum number of launch attempts for a given - task. *Default: 5*. -:fail_time: (int or float) *Only if wait_on_start is set.* Maximum run time to failure in - seconds that results in relaunch. *Default: 2*. -:retry_delay_incr: (int or float) Delay increment between launch attempts in seconds. - *Default: 5*. (i.e., First retry after 5 seconds, then 10 seconds, then 15, etc...) - -Example. To increase resilience against submission failures:: - - taskctrl = MPIExecutor() - taskctrl.max_launch_attempts = 8 - taskctrl.fail_time = 5 - taskctrl.retry_delay_incr = 10 - -.. _customizer: diff --git a/docs/executor/overview.rst b/docs/executor/overview.rst deleted file mode 100644 index 8d8d043623..0000000000 --- a/docs/executor/overview.rst +++ /dev/null @@ -1,172 +0,0 @@ -Executor Overview -================= - -The **Executor** provides a portable interface for running applications on any system and -any number of compute resources. - -.. dropdown:: Detailed description - - An **Executor** interface is provided by libEnsemble to remove the burden - of system interaction from the user and improve workflow portability. Users - first register their applications to Executor instances, which then return - corresponding ``Task`` objects upon submission within user functions. - - **Task** attributes and retrieval functions can be queried to determine - the status of running application instances. Functions are also provided - to access and interrogate files in the task's working directory. - - libEnsemble's Executors and Tasks contain many familiar features and methods - to Python's native `concurrent futures`_ interface. 
Executors feature the - ``submit()`` function for launching apps (detailed below), but currently do - not support ``map()`` or ``shutdown()``. Tasks are much like ``futures``. - They feature the ``cancel()``, ``cancelled()``, ``running()``, ``done()``, - ``result()``, and ``exception()`` functions from the standard. - - The main ``Executor`` class can subprocess serial applications in place, - while the ``MPIExecutor`` is used for running MPI applications. - - Typically, users choose and parameterize their ``Executor`` objects in their - calling scripts, where each executable generator or simulation application is - registered to it. Once in the user-side worker code (sim/gen func), the Executor - can be retrieved without any need to specify the type. - - Once the Executor is retrieved, tasks can be submitted by specifying the - ``app_name`` from registration in the calling script alongside other optional - parameters described in the API. - -Basic usage ------------ - -To set up an MPI executor, register an MPI application, and add -to the ensemble object. - -.. code-block:: python - - from libensemble import Ensemble - from libensemble.executors import MPIExecutor - - exctr = MPIExecutor() - exctr.register_app(full_path="/path/to/my/exe", app_name="sim1") - ensemble = Ensemble(executor=exctr) - -**In user simulation function**:: - - def sim_func(H, persis_info, sim_specs, libE_info): - - input_param = str(int(H["x"][0][0])) - exctr = libE_info["executor"] - - task = exctr.submit( - app_name="sim1", - num_procs=8, - app_args=input_param, - stdout="out.txt", - stderr="err.txt", - ) - - # Wait for task to complete - task.wait() - -Example use-cases: - -* :doc:`Electrostatic Forces example <../tutorials/executor_forces_tutorial>`: Launches the ``forces.x`` MPI application. - -* :doc:`Forces example with GPUs <../tutorials/forces_gpu_tutorial>`: Auto-assigns GPUs via executor. - -See the :doc:`Executor` or :doc:`MPIExecutor` interface -for the complete API. 
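[Editor's note] The comparison above to Python's ``concurrent.futures`` can be made concrete. This standard-library sketch — not libEnsemble code — shows the analogous submit-and-wait pattern; ``simulate`` is a hypothetical stand-in for a registered application:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(x):
    # Stand-in for a registered application run
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    fut = pool.submit(simulate, 7)  # analogous to exctr.submit(app_name=...)
    result = fut.result()           # analogous to task.wait() then reading output

print(fut.done(), result)
```

As in the libEnsemble Task API, the handle returned by ``submit()`` supports ``done()``, ``result()``, and ``cancel()``-style queries after submission.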
- -See :doc:`Running on HPC Systems<../platforms/platforms_index>` for illustrations -of how common options such as ``libE_specs["dedicated_mode"]`` affect the -run configuration on clusters and supercomputers. - -Advanced Features ------------------ - -**Example of polling output and killing application:** - -In simulation function (sim_f). - -.. code-block:: python - - import time - - - def sim_func(H, persis_info, sim_specs, libE_info): - input_param = str(int(H["x"][0][0])) - exctr = libE_info["executor"] - - task = exctr.submit( - app_name="sim1", - num_procs=8, - app_args=input_param, - stdout="out.txt", - stderr="err.txt", - ) - - timeout_sec = 600 - poll_delay_sec = 1 - - while not task.finished: - # Has manager sent a finish signal - if exctr.manager_kill_received(): - task.kill() - my_cleanup() - - # Check output file for error and kill task - elif task.stdout_exists(): - if "Error" in task.read_stdout(): - task.kill() - - elif task.runtime > timeout_sec: - task.kill() # Timeout - - else: - time.sleep(poll_delay_sec) - task.poll() - - print(task.state) # state may be finished/failed/killed - -.. The Executor can also be retrieved using Python's ``with`` context switching statement, -.. although this is effectively syntactical sugar to above:: -.. -.. from libensemble.executors import Executor -.. -.. with Executor.executor as exctr: -.. task = exctr.submit(app_name="sim1", num_procs=8, app_args="input.txt", -.. stdout="out.txt", stderr="err.txt") -.. ... - -Users who wish to poll only for manager kill signals and timeouts don't necessarily -need to construct a polling loop like above, but can instead use the ``Executor`` -built-in ``polling_loop()`` method. An alternative to the above simulation function -may resemble: - -.. 
code-block:: python - - def sim_func(H, persis_info, sim_specs, libE_info): - input_param = str(int(H["x"][0][0])) - exctr = libE_info["executor"] - - task = exctr.submit( - app_name="sim1", - num_procs=8, - app_args=input_param, - stdout="out.txt", - stderr="err.txt", - ) - - timeout_sec = 600 - poll_delay_sec = 1 - - exctr.polling_loop(task, timeout=timeout_sec, delay=poll_delay_sec) - - print(task.state) # state may be finished/failed/killed - -The ``MPIExecutor`` autodetects system criteria such as the appropriate MPI launcher -and mechanisms to poll and kill tasks. It also has access to the resource manager, -which partitions resources among workers, ensuring that runs utilize different -resources (e.g., nodes). Furthermore, the ``MPIExecutor`` offers resilience via the -feature of re-launching tasks that fail to start because of system factors. - -.. _concurrent futures: https://docs.python.org/library/concurrent.futures.html diff --git a/docs/function_guides/generator.rst b/docs/function_guides/generator.rst index f756b5447e..c2c8fbbeb4 100644 --- a/docs/function_guides/generator.rst +++ b/docs/function_guides/generator.rst @@ -124,7 +124,7 @@ Writing a Generator return Output, persis_info Between the ``Output`` definition and the ``return``, any computation can be performed. - Users can try an :doc:`executor<../executor/overview>` to submit applications to parallel + Users can try an :doc:`executor<../executor/ex_index>` to submit applications to parallel resources, or plug in components from other libraries to serve their needs. .. 
note:: diff --git a/docs/function_guides/simulator.rst b/docs/function_guides/simulator.rst index 65181e8537..40374c55c8 100644 --- a/docs/function_guides/simulator.rst +++ b/docs/function_guides/simulator.rst @@ -50,7 +50,7 @@ Writing a Simulator vocs=vocs, ) - If ``libE_info`` is needed (e.g., to access the :doc:`executor<../executor/overview>`), + If ``libE_info`` is needed (e.g., to access the :doc:`executor<../executor/ex_index>`), declare it as a keyword argument and libEnsemble will pass it automatically:: def my_simulation(input_dict: dict, libE_info=None, **kwargs) -> dict: @@ -108,7 +108,7 @@ Writing a Simulator return Output, persis_info Between the ``Output`` definition and the ``return``, any computation can be performed. - Users can try an :doc:`executor<../executor/overview>` to submit applications to parallel + Users can try an :doc:`executor<../executor/ex_index>` to submit applications to parallel resources, or plug in components from other libraries to serve their needs. Executor @@ -116,7 +116,7 @@ Executor libEnsemble's Executors are commonly used within simulator functions to launch and monitor applications. An excellent overview is already available -:doc:`here<../executor/overview>`. +:doc:`here<../executor/ex_index>`. See the :doc:`Ensemble with an MPI Application tutorial<../tutorials/executor_forces_tutorial>` for an additional example to try out. diff --git a/docs/overview_usecases.rst b/docs/overview_usecases.rst index 81b2dcaa80..04ebb5e14f 100644 --- a/docs/overview_usecases.rst +++ b/docs/overview_usecases.rst @@ -18,7 +18,7 @@ which perform computations via **simulators**: | -An :doc:`executor` interface is available so generators and simulators +An :doc:`executor` interface is available so generators and simulators can launch and monitor external applications. 
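[Editor's note] Executors are typically used inside simulator functions to launch and monitor applications with a poll-with-timeout loop like the one in the executor overview. A generic standard-library sketch of that loop, with ``subprocess`` standing in for a launched task (names here are illustrative, not libEnsemble's API):

```python
import subprocess
import sys
import time

# Launch a short-lived "application" (a trivial Python child process).
proc = subprocess.Popen([sys.executable, "-c", "print('done')"])

timeout_sec = 10
poll_delay_sec = 0.1
start = time.time()

# Poll until finished, killing on timeout -- the same shape as the
# executor loop built from task.poll()/task.finished/task.kill().
while proc.poll() is None:
    if time.time() - start > timeout_sec:
        proc.kill()
        break
    time.sleep(poll_delay_sec)

proc.wait()  # reap the child; ensures returncode is set on the kill path too
print("returncode:", proc.returncode)
```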
All simulations and generated values are recorded in a NumPy diff --git a/docs/platforms/bebop.rst b/docs/platforms/bebop.rst index e57172c1b3..61403eb973 100644 --- a/docs/platforms/bebop.rst +++ b/docs/platforms/bebop.rst @@ -75,7 +75,7 @@ Now run your script with four workers (one for generator and three for simulatio **three** workers to one allocated compute node, with three nodes available for the workers to launch calculations with the Executor or a launch command. This is an example of running in :doc:`centralized` mode, and, -if using the :doc:`Executor<../executor/mpi_executor>`, libEnsemble should +if using the :doc:`Executor<../executor/ex_index>`, libEnsemble should be initiated with ``libE_specs["dedicated_mode"]=True`` .. note:: diff --git a/docs/platforms/perlmutter.rst b/docs/platforms/perlmutter.rst index a2768e1d26..755e5bb7eb 100644 --- a/docs/platforms/perlmutter.rst +++ b/docs/platforms/perlmutter.rst @@ -161,14 +161,14 @@ Some FAQs specific to Perlmutter. See more on the :doc:`FAQ<../FAQ>` page. #SBATCH --gpus-per-task=1 Instead provide these to sub-tasks via the ``extra_args`` option to - the :doc:`MPIExecutor<../executor/mpi_executor>` ``submit`` function. + the :doc:`MPIExecutor<../executor/ex_index>` ``submit`` function. .. dropdown:: **GTL_DEBUG: [0] cudaHostRegister: no CUDA-capable device is detected** If using the environment variable ``MPICH_GPU_SUPPORT_ENABLED``, then ``srun`` commands, at time of writing, expect an option for allocating GPUs (e.g.~ ``--gpus-per-task=1`` would allocate one GPU to each MPI task of the MPI run). It is recommended that tasks submitted - via the :doc:`MPIExecutor<../executor/mpi_executor>` specify this in the ``extra_args`` + via the :doc:`MPIExecutor<../executor/ex_index>` specify this in the ``extra_args`` option to the ``submit`` function (rather than using an ``#SBATCH`` command). This is needed even when using setting ``CUDA_VISIBLE_DEVICES`` or other options. 
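[Editor's note] The ``extra_args`` guidance above amounts to composing per-task launch lines instead of batch-script ``#SBATCH`` directives. A minimal sketch of that composition — the ``runline`` helper is hypothetical; the real MPI Executor builds this line internally from detected resources:

```python
def runline(launcher, nprocs, extra_args, app, app_args):
    # Assemble a launch line of the shape the Executor subprocesses,
    # with per-task options (e.g., GPU flags) passed via extra_args.
    return f"{launcher} -n {nprocs} {extra_args} {app} {app_args}".strip()

line = runline("srun", 4, "--gpus-per-task=1", "/path/to/sim.x", "in.txt")
print(line)
```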
diff --git a/docs/platforms/platforms_index.rst b/docs/platforms/platforms_index.rst index e6731e8a9e..5d1cf3e128 100644 --- a/docs/platforms/platforms_index.rst +++ b/docs/platforms/platforms_index.rst @@ -19,7 +19,7 @@ Centralized Running ------------------- The default communications scheme places the manager and workers on the first node. -The :doc:`MPI Executor<../executor/mpi_executor>` can then be invoked by each +The :doc:`MPI Executor<../executor/ex_index>` can then be invoked by each simulation worker, and libEnsemble will distribute user applications across the node allocation. This is the **most common approach** where each simulation runs an MPI application. @@ -103,7 +103,7 @@ the nodes within that allocation. *How does libEnsemble know where to run tasks (user applications)?* -The libEnsemble :doc:`MPI Executor<../executor/mpi_executor>` can be initialized from the user calling +The libEnsemble :doc:`MPI Executor<../executor/ex_index>` can be initialized from the user calling script, and then used by workers to run tasks. The Executor will automatically detect the nodes available on most systems. Alternatively, the user can provide a file called **node_list** in the run directory. By default, the Executor will divide up the nodes evenly to each worker. @@ -113,7 +113,7 @@ Mapping Tasks to Resources The :ref:`resource manager` detects node lists from :ref:`common batch schedulers`, -and partitions these to workers. The :doc:`MPI Executor<../executor/mpi_executor>` +and partitions these to workers. The :doc:`MPI Executor<../executor/ex_index>` accesses the resources available to the current worker when launching tasks. Assigning GPUs @@ -138,7 +138,7 @@ System detection for resources can be overridden using the :ref:`resource_info` for more. +`custom_info` argument. See the :doc:`MPI Executor<../executor/ex_index>` for more. 
Systems with Launch/MOM Nodes ----------------------------- diff --git a/docs/platforms/srun.rst b/docs/platforms/srun.rst index 5ec8a64839..101b441bc5 100644 --- a/docs/platforms/srun.rst +++ b/docs/platforms/srun.rst @@ -11,7 +11,7 @@ Example SLURM submission scripts for various systems are given in the :doc:`examples`. Further examples are given in some of the specific platform guides (e.g., :doc:`Perlmutter guide`) -By default, the :doc:`MPIExecutor<../executor/mpi_executor>` uses ``mpirun`` +By default, the :doc:`MPIExecutor<../executor/ex_index>` uses ``mpirun`` as a priority over ``srun`` as it works better in some cases. If ``mpirun`` does not work well, then try telling the MPIExecutor to use ``srun`` when it is initiated in the calling script:: @@ -45,14 +45,14 @@ when assigning more than one worker to any given node. #SBATCH --gpus-per-task=1 Instead provide these to sub-tasks via the ``extra_args`` option to the - :doc:`MPIExecutor<../executor/mpi_executor>` ``submit`` function. + :doc:`MPIExecutor<../executor/ex_index>` ``submit`` function. .. dropdown:: **GTL_DEBUG: [0] cudaHostRegister: no CUDA-capable device is detected** If using the environment variable ``MPICH_GPU_SUPPORT_ENABLED``, then ``srun`` commands may expect an option for allocating GPUs (e.g., ``--gpus-per-task=1`` would allocate one GPU to each MPI task of the MPI run). It is recommended that tasks submitted - via the :doc:`MPIExecutor<../executor/mpi_executor>` specify this in the ``extra_args`` + via the :doc:`MPIExecutor<../executor/ex_index>` specify this in the ``extra_args`` option to the ``submit`` function (rather than using an ``#SBATCH`` command). 
If running the libEnsemble calling script with ``srun``, then it is recommended that diff --git a/docs/resource_manager/overview.rst b/docs/resource_manager/overview.rst index 556e9c0f34..f980eca3b3 100644 --- a/docs/resource_manager/overview.rst +++ b/docs/resource_manager/overview.rst @@ -9,7 +9,7 @@ libEnsemble comes with built-in resource management. This entails the core counts, and GPUs), and the allocation of resources to workers. By default, the provisioned resources are divided by the number of workers. -libEnsemble's :doc:`MPI Executor<../executor/mpi_executor>` is aware of +libEnsemble's :doc:`MPI Executor<../executor/ex_index>` is aware of these supplied resources, and if not given any of ``num_nodes``, ``num_procs``, or ``procs_per_node`` in the submit function, it will try to use all nodes and CPU cores available to the worker. @@ -119,7 +119,7 @@ Accessing resources from the simulation function In the user's simulation function, the resources supplied to the worker can be :doc:`interrogated directly via the resources class attribute`. -libEnsemble's executors (e.g., the :doc:`MPI Executor<../executor/mpi_executor>`) are +libEnsemble's executors (e.g., the :doc:`MPI Executor<../executor/ex_index>`) are aware of these supplied resources, and if not given any of ``num_nodes``, ``num_procs``, or ``procs_per_node`` in the submit function, it will try to use all nodes and CPU cores available. diff --git a/docs/resource_manager/resource_detection.rst b/docs/resource_manager/resource_detection.rst index 2048eb2793..e294b82b9f 100644 --- a/docs/resource_manager/resource_detection.rst +++ b/docs/resource_manager/resource_detection.rst @@ -4,7 +4,7 @@ Resource Detection ================== The resource manager can detect system resources, and partition -these to workers. The :doc:`MPI Executor<../executor/mpi_executor>` +these to workers. The :doc:`MPI Executor<../executor/ex_index>` accesses the resources available to the current worker when launching tasks. 
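[Editor's note] The detect-and-partition step described above can be illustrated with a simplified sketch; the real resource manager handles many node-list formats and dynamic assignment, so ``expand`` and the even split below are illustrative only:

```python
def expand(prefix, lo, hi, width=5):
    # Expand a compact scheduler-style node range, e.g. nid[00020-00023].
    return [f"{prefix}{i:0{width}d}" for i in range(lo, hi + 1)]

nodes = expand("nid", 20, 23)
workers = 2
per = len(nodes) // workers

# Even static partition of nodes across workers (the default behavior
# the text describes, absent dynamic resource sets).
assignment = {w: nodes[w * per:(w + 1) * per] for w in range(workers)}
print(assignment)
```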
Node-lists are detected by an environment variable on the following systems: diff --git a/docs/running_libE.rst b/docs/running_libE.rst index 6e1afa3730..c78dcdaabb 100644 --- a/docs/running_libE.rst +++ b/docs/running_libE.rst @@ -5,7 +5,7 @@ Running libEnsemble .. note:: You do not need the ``mpi`` communication mode to use the - :doc:`MPI Executor`. The communication modes described + :doc:`MPI Executor`. The communication modes described here only refer to how the libEnsemble manager and workers communicate. .. tab-set:: @@ -110,7 +110,7 @@ For example:: set in your simulation script before the Executor *submit* command will export the setting to your run. For running a bash script in a sub environment when using the Executor, see -the ``env_script`` option to the :doc:`MPI Executor`. +the ``env_script`` option to the :doc:`MPI Executor`. Running on Multi-Node Systems ----------------------------- diff --git a/docs/tutorials/aposmm_tutorial.rst b/docs/tutorials/aposmm_tutorial.rst index cca7f13e00..d5b3f4f04a 100644 --- a/docs/tutorials/aposmm_tutorial.rst +++ b/docs/tutorials/aposmm_tutorial.rst @@ -195,7 +195,7 @@ Applications APOSMM is not limited to evaluating minima from pure Python simulation functions. Many common libEnsemble use-cases involve using -libEnsemble's :doc:`MPI Executor<../executor/overview>` to launch user +libEnsemble's :doc:`MPI Executor<../executor/ex_index>` to launch user applications with parameters requested by APOSMM, then evaluate their output using APOSMM, and repeat until minima are identified. A currently supported example can be found in libEnsemble's `WarpX Scaling Test`_. diff --git a/docs/tutorials/calib_cancel_tutorial.rst b/docs/tutorials/calib_cancel_tutorial.rst index 7edae8aa96..316e56ba1d 100644 --- a/docs/tutorials/calib_cancel_tutorial.rst +++ b/docs/tutorials/calib_cancel_tutorial.rst @@ -213,8 +213,8 @@ by a user function, otherwise it will be ignored. 
To demonstrate this, the test captures and processes this signal from the manager. In order to do this, a compiled version of the borehole function is launched by ``sim_funcs/borehole_kills.py`` -via the :doc:`Executor<../executor/overview>`. As the borehole application used here is serial, we use the -:doc:`Executor base class<../executor/executor>` rather than the commonly used :doc:`MPIExecutor<../executor/mpi_executor>` +via the :doc:`Executor<../executor/ex_index>`. As the borehole application used here is serial, we use the +:doc:`Executor base class<../executor/ex_index>` rather than the commonly used :doc:`MPIExecutor<../executor/ex_index>` class. The base Executor submit routine simply sub-processes a serial application in-place. After the initial sample batch of evaluations has been processed, an artificial delay is added to the sub-processed borehole to allow time to receive the kill signal and terminate the application. Killed simulations will be reported at diff --git a/docs/tutorials/executor_forces_tutorial.rst b/docs/tutorials/executor_forces_tutorial.rst index a083aa2a82..9fcb1ae743 100644 --- a/docs/tutorials/executor_forces_tutorial.rst +++ b/docs/tutorials/executor_forces_tutorial.rst @@ -4,7 +4,7 @@ Ensemble with an MPI Application This tutorial highlights libEnsemble's capability to portably execute and monitor external scripts or user applications within simulation or generator -functions using the :doc:`executor<../executor/overview>`. +functions using the :doc:`executor<../executor/ex_index>`. |Open in Colab| @@ -13,7 +13,7 @@ electrostatic forces between a collection of particles. The simulator function launches instances of this executable and reads output files to determine the result. -This tutorial uses libEnsemble's :doc:`MPI Executor<../executor/mpi_executor>`, +This tutorial uses libEnsemble's :doc:`MPI Executor<../executor/ex_index>`, which automatically detects available MPI runners and resources. 
This example also uses a persistent generator. This generator runs on a @@ -49,7 +49,7 @@ generation functions and call libEnsemble. Create a Python file called :linenos: :end-at: ensemble = Ensemble -We first instantiate our :doc:`MPI Executor<../executor/mpi_executor>`. +We first instantiate our :doc:`MPI Executor<../executor/ex_index>`. Registering an application is as easy as providing the full file-path and giving it a memorable name. This Executor will later be used within our simulation function to launch the registered app. From 5e09359b21d4da35f501952c0637a72d66ef6a5b Mon Sep 17 00:00:00 2001 From: jlnav Date: Tue, 28 Apr 2026 12:33:20 -0500 Subject: [PATCH 26/34] standardized APOSMM example --- docs/examples/calling_scripts.rst | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/docs/examples/calling_scripts.rst b/docs/examples/calling_scripts.rst index a92a9d6c91..9a9ed0b1dd 100644 --- a/docs/examples/calling_scripts.rst +++ b/docs/examples/calling_scripts.rst @@ -38,15 +38,18 @@ One worker runs a persistent generator and the other four run the forces simulat :caption: tests/scaling_tests/forces/forces_simple/run_libe_forces.py :linenos: -Persistent APOSMM with Gradients --------------------------------- +APOSMM with a Standardized Generator +-------------------------------------- -This example is also from the regression tests and demonstrates configuring a -persistent run via a custom allocation function. +This example from the regression tests demonstrates the v2.0 gest-api interface: +a standardized ``APOSMM`` generator class parameterized by a ``VOCS`` object, +paired with a gest-api ``simulator`` callable. The generator runs on the manager +thread by default, leaving all workers available for simulations. -.. literalinclude:: ../../libensemble/tests/regression_tests/test_persistent_aposmm_with_grad.py +.. 
literalinclude:: ../../libensemble/tests/regression_tests/test_asktell_aposmm_nlopt.py :language: python - :caption: tests/regression_tests/test_persistent_aposmm_with_grad.py + :caption: tests/regression_tests/test_asktell_aposmm_nlopt.py :linenos: + :end-at: workflow.exit_criteria = ExitCriteria(sim_max=2000) .. _regression tests: https://github.com/Libensemble/libensemble/tree/develop/libensemble/tests/regression_tests From 9034598f491df9c4f8d3b1b77b433fbcdfd704b0 Mon Sep 17 00:00:00 2001 From: jlnav Date: Tue, 28 Apr 2026 14:29:18 -0500 Subject: [PATCH 27/34] fix test --- libensemble/tests/unit_tests/test_ensemble.py | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/libensemble/tests/unit_tests/test_ensemble.py b/libensemble/tests/unit_tests/test_ensemble.py index 1e3c1803fa..0e5de32239 100644 --- a/libensemble/tests/unit_tests/test_ensemble.py +++ b/libensemble/tests/unit_tests/test_ensemble.py @@ -22,14 +22,14 @@ def test_ensemble_parse_args_false(): from libensemble.specs import LibeSpecs # Ensemble(parse_args=False) by default, so these specs won't be overwritten: - e = Ensemble(libE_specs={"comms": "local", "nworkers": 4}) + e = Ensemble(libE_specs=LibeSpecs(comms="local", nworkers=4)) assert hasattr(e, "nworkers"), "nworkers should've passed from libE_specs to Ensemble class" - assert isinstance(e.libE_specs, LibeSpecs), "libE_specs should've been cast to class" + assert isinstance(e.libE_specs, LibeSpecs), "libE_specs should be a LibeSpecs instance" - # test pass attribute as dict - e = Ensemble(libE_specs={"comms": "local", "nworkers": 4}) + # test passing a second instance + e = Ensemble(libE_specs=LibeSpecs(comms="local", nworkers=4)) assert hasattr(e, "nworkers"), "nworkers should've passed from libE_specs to Ensemble class" - assert isinstance(e.libE_specs, LibeSpecs), "libE_specs should've been cast to class" + assert isinstance(e.libE_specs, LibeSpecs), "libE_specs should be a LibeSpecs instance" # test that adjusting 
Ensemble.nworkers also changes libE_specs e.nworkers = 8 From 89de7f1e2f3878b8922389958df4f9e7c6e2a673 Mon Sep 17 00:00:00 2001 From: jlnav Date: Wed, 29 Apr 2026 14:24:29 -0500 Subject: [PATCH 28/34] I guess sections cant be within tabs, unfortunately? remove zero_resource_workers.rst. other docs refactors --- .../generate-scripts/references/aposmm.md | 4 -- docs/examples/gest_api/aposmm.rst | 2 +- docs/executor/ex_index.rst | 9 +-- docs/function_guides/function_guide_index.rst | 11 +-- docs/function_guides/generator.rst | 18 ++--- .../zero_resource_workers.rst | 69 ------------------- docs/tutorials/executor_forces_tutorial.rst | 23 +------ libensemble/executors/executor.py | 2 - libensemble/tools/live_data/live_data.py | 4 +- 9 files changed, 20 insertions(+), 122 deletions(-) delete mode 100644 docs/resource_manager/zero_resource_workers.rst diff --git a/.claude/skills/generate-scripts/references/aposmm.md b/.claude/skills/generate-scripts/references/aposmm.md index 892007b87a..bac7c2d8c3 100644 --- a/.claude/skills/generate-scripts/references/aposmm.md +++ b/.claude/skills/generate-scripts/references/aposmm.md @@ -62,10 +62,6 @@ When using a SciPy method, must also supply `opt_return_codes` — e.g. [0] for | `lhs_divisions` | int | Latin hypercube partitions (0 or 1 = uniform) | | `rk_const` | float | Multiplier for r_k value | -## Worker Configuration - -With `gen_on_manager=True`, the persistent generator runs on the manager process and all `nworkers` are available for simulations. - ## Local Optimizer Methods ### SciPy (no extra install) diff --git a/docs/examples/gest_api/aposmm.rst b/docs/examples/gest_api/aposmm.rst index a472ced11d..a047f72331 100644 --- a/docs/examples/gest_api/aposmm.rst +++ b/docs/examples/gest_api/aposmm.rst @@ -15,7 +15,7 @@ APOSMM .. 
literalinclude:: ../../../libensemble/tests/regression_tests/test_asktell_aposmm_nlopt.py :linenos: - :start-at: workflow.libE_specs.gen_on_manager = True + :start-at: workflow = Ensemble(parse_args=True) :end-before: # Perform the run .. tab-item:: APOSMM standalone diff --git a/docs/executor/ex_index.rst b/docs/executor/ex_index.rst index 0a05698ba5..1213e086fe 100644 --- a/docs/executor/ex_index.rst +++ b/docs/executor/ex_index.rst @@ -43,8 +43,7 @@ portable interface for running and managing user applications. ``app_name`` from registration in the calling script alongside other optional parameters described in the API. - Basic usage - ----------- + **Basic usage** To set up an MPI executor, register an MPI application, and add to the ensemble object. @@ -86,8 +85,7 @@ portable interface for running and managing user applications. of how common options such as ``libE_specs["dedicated_mode"]`` affect the run configuration on clusters and supercomputers. - Advanced Features - ----------------- + **Advanced Features** **Example of polling output and killing application:** @@ -237,8 +235,7 @@ portable interface for running and managing user applications. :inherited-members: :exclude-members: serial_setup, sim_default_app, gen_default_app, get_app, default_app, set_resources, get_task, set_workerID, set_worker_info, new_tasks_timing, add_platform_info, set_gen_procs_gpus, kill, poll - Class-specific Attributes - ------------------------- + **Class-specific Attributes** Class-specific attributes can be set directly to alter the behavior of the MPI Executor. 
However, they should be used with caution, because they may not diff --git a/docs/function_guides/function_guide_index.rst b/docs/function_guides/function_guide_index.rst index 2423849de7..916a6fdd50 100644 --- a/docs/function_guides/function_guide_index.rst +++ b/docs/function_guides/function_guide_index.rst @@ -1,12 +1,13 @@ -====================== -Writing User Functions -====================== +===================== +Writing Gens and Sims +===================== -These guides describe common development patterns and optional components: +These guides describe common development patterns and optional components +for users writing generators and simulators for libEnsemble. .. toctree:: :maxdepth: 2 - :caption: Writing User Functions + :caption: Writing Gens and Sims generator simulator diff --git a/docs/function_guides/generator.rst b/docs/function_guides/generator.rst index c2c8fbbeb4..ab87b4480d 100644 --- a/docs/function_guides/generator.rst +++ b/docs/function_guides/generator.rst @@ -134,8 +134,7 @@ Writing a Generator .. _persistent-gens: - Persistent Generators - --------------------- + **Persistent Generators** While non-persistent generators return after completing their calculation, persistent generators do the following in a loop: @@ -216,8 +215,7 @@ Writing a Generator .. _gen_active_recv: - Active receive mode - ------------------- + **Active receive mode** By default, a persistent worker is expected to receive and send data in a *ping pong* fashion. Alternatively, @@ -228,8 +226,7 @@ Writing a Generator Ensure there are no communication deadlocks in this mode. In manager-worker message exchanges, only the worker-side receive is blocking by default (a non-blocking option is available). 
- Cancelling Simulations - ---------------------- + **Cancelling Simulations** Previously submitted simulations can be cancelled by sending a message to the manager: @@ -243,8 +240,7 @@ Writing a Generator The :doc:`Borehole Calibration tutorial<../tutorials/calib_cancel_tutorial>` gives an example of the capability to cancel pending simulations. - Modification of existing points - ------------------------------- + **Modification of existing points** To change existing fields of the History array, create a NumPy structured array where the ``dtype`` contains the ``sim_id`` and the fields to be modified. Send this array with ``keep_state=True`` to the manager. @@ -261,16 +257,14 @@ Writing a Generator H_o["cancel_requested"] = True ps.send(H_o, keep_state=True) - Generator initiated shutdown - ---------------------------- + **Generator initiated shutdown** If using a supporting allocation function, the generator can prompt the ensemble to shutdown by simply exiting the function (e.g., on a test for a converged value). For example, the allocation function :ref:`start_only_persistent` closes down the ensemble as soon as a persistent generator returns. The usual return values should be given. - Examples - -------- + **Examples** Examples of non-persistent and persistent generator functions can be found :doc:`here<../examples/gen_funcs>`. diff --git a/docs/resource_manager/zero_resource_workers.rst b/docs/resource_manager/zero_resource_workers.rst deleted file mode 100644 index f60d854336..0000000000 --- a/docs/resource_manager/zero_resource_workers.rst +++ /dev/null @@ -1,69 +0,0 @@ -.. _zero_resource_workers: - -Zero-resource workers -~~~~~~~~~~~~~~~~~~~~~ - -Users with persistent ``gen_f`` functions may notice that the persistent workers -are still automatically assigned resources. This can be wasteful if those workers -only run ``gen_f`` functions in-place (i.e., they do not use the Executor -to submit applications to allocated nodes). 
Suppose the user is using the -:meth:`parse_args()` function and runs:: - - python run_ensemble_persistent_gen.py --nworkers 3 - -If three nodes are available in the node allocation, the result may look like the -following. - - .. image:: ../images/persis_wasted_node.png - :alt: persis_wasted_node - :scale: 40 - :align: center - -To avoid the the wasted node above, add an extra worker:: - - python run_ensemble_persistent_gen.py --nworkers 4 - -and in the calling script (*run_ensemble_persistent_gen.py*), explicitly set the -number of resource sets to the number of workers that will be running simulations. - -.. code-block:: python - - nworkers, is_manager, libE_specs, _ = parse_args() - libE_specs["num_resource_sets"] = nworkers - 1 - -When the ``num_resource_sets`` option is used, libEnsemble will use the dynamic -resource scheduler, and any worker may assign work to any node. This works well -for most users. - - .. image:: ../images/persis_add_worker.png - :alt: persis_add_worker - :scale: 40 - :align: center - -**Optional**: An alternative way to express the above would be to use the command -line:: - - python run_ensemble_persistent_gen.py --comms local --nsim_workers 3 - -This would automatically set the ``num_resource_sets`` option and add a single -worker for the persistent generator - a common use-case. - -In general, the number of resource sets should be set to enable the maximum -concurrency desired by the ensemble, taking into account generators and simulators. - -Users can set generator resources using the *libE_specs* options -``gen_num_procs`` and/or ``gen_num_gpus``, which take integer values. -If only ``gen_num_gpus`` is set, then the number of processors is set to match. - -To vary generator resources, ``persis_info`` settings can be used in allocation -functions before calling the ``gen_work`` support function. This takes the -same options (``gen_num_procs`` and ``gen_num_gpus``). 
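The bookkeeping described on this (deleted) page can be sketched in a few lines. The helper names below are invented for illustration — libEnsemble's real dynamic resource scheduler is more involved:

```python
def num_resource_sets(nworkers, persistent_gens=1):
    """Resource sets cover only the simulation workers, as in the
    libE_specs["num_resource_sets"] = nworkers - 1 line above."""
    return nworkers - persistent_gens


def sets_per_node(num_nodes, num_sets):
    """Divide resource sets over nodes without splitting a set across nodes;
    an uneven split gives some nodes one extra set."""
    base, extra = divmod(num_sets, num_nodes)
    return [base + 1 if i < extra else base for i in range(num_nodes)]


print(num_resource_sets(4))  # 3
print(sets_per_node(2, 5))   # [3, 2]
```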
- -Alternatively, the setting ``persis_info["gen_resources"]`` can also be set to -a number of resource sets. - -The available nodes are always divided by the number of resource sets, and there -may be multiple nodes or a partition of a node in each resource set. If the split -is uneven, resource sets are not split between nodes. For example, if there are -two nodes and five resource sets, one node will have three resource sets, and -the other will have two. diff --git a/docs/tutorials/executor_forces_tutorial.rst b/docs/tutorials/executor_forces_tutorial.rst index 9fcb1ae743..33142fb601 100644 --- a/docs/tutorials/executor_forces_tutorial.rst +++ b/docs/tutorials/executor_forces_tutorial.rst @@ -82,32 +82,15 @@ expect, and also to parameterize user functions: :end-at: gen_specs_end_tag :lineno-start: 37 -Next, configure an allocation function, which starts the one persistent -generator and farms out the simulations. We also tell it to wait for all -simulations to return their results, before generating more parameters. - -.. literalinclude:: ../../libensemble/tests/functionality_tests/test_executor_forces_tutorial.py - :language: python - :linenos: - :start-at: ensemble.alloc_specs = AllocSpecs - :end-at: ) - :lineno-start: 55 - -Now we set :ref:`exit_criteria` to -exit after running eight simulations. - -We also give each worker a seeded random stream, via the -:ref:`persis_info` option. -These can be used for random number generation if required. - -Finally we :doc:`run<../libe_module>` the ensemble. +Next, we set :ref:`exit_criteria` to +exit after running eight simulations, and finally we :doc:`run<../libe_module>` the ensemble. .. 
literalinclude:: ../../libensemble/tests/functionality_tests/test_executor_forces_tutorial.py :language: python :linenos: :start-at: Instruct libEnsemble :end-at: ensemble.run() - :lineno-start: 62 + :lineno-start: 55 Exercise ^^^^^^^^ diff --git a/libensemble/executors/executor.py b/libensemble/executors/executor.py index fbb7cc0841..369308ada7 100644 --- a/libensemble/executors/executor.py +++ b/libensemble/executors/executor.py @@ -545,8 +545,6 @@ def register_app( def manager_poll(self) -> int | None: """ - .. _manager_poll_label: - Polls for a manager signal The executor manager_signal attribute will be updated. diff --git a/libensemble/tools/live_data/live_data.py b/libensemble/tools/live_data/live_data.py index 88d1cebcb2..7d50d75a8b 100644 --- a/libensemble/tools/live_data/live_data.py +++ b/libensemble/tools/live_data/live_data.py @@ -1,8 +1,6 @@ from abc import ABC, abstractmethod -from typing import TYPE_CHECKING -if TYPE_CHECKING: - import numpy.typing as npt +import numpy.typing as npt class LiveData(ABC): From 3dff0bfa7f04a38cf3126b9020973645aec37010 Mon Sep 17 00:00:00 2001 From: jlnav Date: Wed, 29 Apr 2026 15:28:15 -0500 Subject: [PATCH 29/34] refactors of generator and simulator guides using pytorch tutorials as a reference. 
tabs link to separate pages, in-page navigation should work --- docs/function_guides/generator.rst | 266 +----------------- docs/function_guides/generator_legacy.rst | 201 +++++++++++++ .../generator_standardized.rst | 60 ++++ docs/function_guides/simulator.rst | 110 +------- docs/function_guides/simulator_legacy.rst | 58 ++++ .../simulator_standardized.rst | 43 +++ 6 files changed, 386 insertions(+), 352 deletions(-) create mode 100644 docs/function_guides/generator_legacy.rst create mode 100644 docs/function_guides/generator_standardized.rst create mode 100644 docs/function_guides/simulator_legacy.rst create mode 100644 docs/function_guides/simulator_standardized.rst diff --git a/docs/function_guides/generator.rst b/docs/function_guides/generator.rst index ab87b4480d..c560ce3934 100644 --- a/docs/function_guides/generator.rst +++ b/docs/function_guides/generator.rst @@ -3,6 +3,8 @@ Generators ========== +**Introduction** \|\| `Standardized Generator (gest-api) `__ \|\| `Legacy Generator Function `__ + Writing a Generator ------------------- @@ -10,261 +12,15 @@ Writing a Generator The `gest-api` generator interface is the recommended approach for new libEnsemble projects. The "Legacy Generator Function" interface is supported for backward compatibility but may be deprecated in a future release. -.. tab-set:: - - .. tab-item:: Standardized Generator (gest-api) - - Standardized generators are classes that inherit from ``gest_api.Generator``. - They adhere to the ``gest-api`` standard and are parameterized by a ``VOCS`` - object defining the problem's variables and objectives. - - A basic generator implements the ``suggest()`` and ``ingest()`` methods, which - operate on lists of dictionaries: - - .. 
code-block:: python - :linenos: - - import numpy as np - from gest_api import Generator - from gest_api.vocs import VOCS - - - class UniformSample(Generator): - """Samples over the domain specified in the VOCS.""" - - def __init__(self, vocs: VOCS): - self.vocs = vocs - self.rng = np.random.default_rng(1) - super().__init__(vocs) - - def _validate_vocs(self, vocs): - assert len(self.vocs.variable_names), "VOCS must contain variables." - - def suggest(self, n_trials): - output = [] - for _ in range(n_trials): - trial = {} - for key in self.vocs.variables: - trial[key] = self.rng.uniform(self.vocs.variables[key].domain[0], self.vocs.variables[key].domain[1]) - output.append(trial) - return output - - def ingest(self, calc_in): - pass # random sample so nothing to ingest - - libEnsemble's handling of standardized generators is specified using ``GenSpecs``: - - .. code-block:: python - - gen_specs = GenSpecs( - generator=UniformSample(vocs), - inputs=["sim_id"], - persis_in=["x", "f"], - outputs=[("x", float, 2)], - vocs=vocs, - user={"batch_size": 128}, - ) - - .. note:: - Ensure that ``gen_specs.inputs`` or ``gen_specs.persis_in`` requests at least one field - (like ``"sim_id"`` or ``"f"``) to be sent back, even if the generator does not - process them. - - .. tab-item:: Legacy Generator Function - - .. code-block:: python - - def my_generator(Input, persis_info, gen_specs, libE_info): - batch_size = gen_specs["user"]["batch_size"] - - Output = np.zeros(batch_size, gen_specs["out"]) - # ... - Output["x"], persis_info = generate_next_simulation_inputs(Input["f"], persis_info) - - return Output, persis_info - - Most ``gen_f`` function definitions written by users resemble:: - - def my_generator(Input, persis_info, gen_specs, libE_info): - - where: - - * ``Input`` is a selection of the :ref:`History array`, a NumPy structured array. - * :ref:`persis_info` is a dictionary containing state information. - * :ref:`gen_specs` is a dictionary of generator parameters. 
- * ``libE_info`` is a dictionary containing miscellaneous entries. - - Valid generator functions can accept a subset of the above parameters. So a very simple generator can start:: - - def my_generator(Input): - - If ``gen_specs`` was initially defined: - - .. code-block:: python - - gen_specs = GenSpecs( - gen_f=my_generator, - inputs=["f"], - outputs=["x", float, (1,)], - user={"batch_size": 128}, - ) - - Then user parameters and a *local* array of outputs may be obtained/initialized like:: - - batch_size = gen_specs["user"]["batch_size"] - Output = np.zeros(batch_size, dtype=gen_specs["out"]) - - This array should be populated by whatever values are generated within - the function:: - - Output["x"], persis_info = generate_next_simulation_inputs(Input["f"], persis_info) - - Then return the array and ``persis_info`` to libEnsemble:: - - return Output, persis_info - - Between the ``Output`` definition and the ``return``, any computation can be performed. - Users can try an :doc:`executor<../executor/ex_index>` to submit applications to parallel - resources, or plug in components from other libraries to serve their needs. - - .. note:: - - State ``gen_f`` information like checkpointing should be - appended to ``persis_info``. - - .. _persistent-gens: - - **Persistent Generators** - - While non-persistent generators return after completing their calculation, persistent - generators do the following in a loop: - - 1. Receive simulation results and metadata; exit if metadata instructs. - 2. Perform analysis. - 3. Send subsequent simulation parameters. - - Persistent generators don't need to be re-initialized on each call, but are typically - more complicated. The persistent :doc:`APOSMM<../examples/aposmm>` - optimization generator function included with libEnsemble maintains - local optimization subprocesses based on results from complete simulations. - - Use ``GenSpecs.persis_in`` to specify fields to send back to the generator throughout the run. 
- ``GenSpecs.inputs`` only describes the input fields when the function is **first called**. - - Functions for a persistent generator to communicate directly with the manager - are available in the :ref:`libensemble.tools.persistent_support` class. - - Sending/receiving data is supported by the :ref:`PersistentSupport` class:: - - from libensemble.tools import PersistentSupport - from libensemble.message_numbers import STOP_TAG, PERSIS_STOP, EVAL_GEN_TAG, FINISHED_PERSISTENT_GEN_TAG - - my_support = PersistentSupport(libE_info, EVAL_GEN_TAG) - - Implementing functions from the above class is relatively simple: - - .. tab-set:: - - .. tab-item:: send - - .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport - .. autofunction:: send - - This function call typically resembles:: - - my_support.send(local_H_out[selected_IDs]) - - Note that this function has no return. - - .. tab-item:: recv - - .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport - .. autofunction:: recv - - This function call typically resembles:: - - tag, Work, calc_in = my_support.recv() - - if tag in [STOP_TAG, PERSIS_STOP]: - cleanup() - break - - The logic following the function call is typically used to break the persistent - generator's main loop and return. - - .. tab-item:: send_recv - - .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport - .. autofunction:: send_recv - - This function performs both of the previous functions in a single statement. Its - usage typically resembles:: - - tag, Work, calc_in = my_support.send_recv(local_H_out[selected_IDs]) - if tag in [STOP_TAG, PERSIS_STOP]: - cleanup() - break - - Once the persistent generator's loop has been broken because of - the tag from the manager, it should return with an additional tag:: - - return local_H_out, persis_info, FINISHED_PERSISTENT_GEN_TAG - - See :ref:`calc_status` for more information about - the message tags. - - .. 
_gen_active_recv: - - **Active receive mode** - - By default, a persistent worker is expected to - receive and send data in a *ping pong* fashion. Alternatively, - a worker can be initiated in *active receive* mode by the allocation - function (see :ref:`start_only_persistent`). - The persistent worker can then send and receive from the manager at any time. - - Ensure there are no communication deadlocks in this mode. In manager-worker message exchanges, only the worker-side - receive is blocking by default (a non-blocking option is available). - - **Cancelling Simulations** - - Previously submitted simulations can be cancelled by sending a message to the manager: - - .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport - .. autofunction:: request_cancel_sim_ids - - - If a generated point is cancelled by the generator **before sending** to another worker for simulation, then it won't be sent. - - If that point has **already been evaluated** by a simulation, the ``cancel_requested`` field will remain ``True``. - - If that point is **currently being evaluated**, a kill signal will be sent to the corresponding worker; it must be manually processed in the simulation function. - - The :doc:`Borehole Calibration tutorial<../tutorials/calib_cancel_tutorial>` gives an example - of the capability to cancel pending simulations. - - **Modification of existing points** - - To change existing fields of the History array, create a NumPy structured array where the ``dtype`` contains - the ``sim_id`` and the fields to be modified. Send this array with ``keep_state=True`` to the manager. - This will overwrite the manager's History array. - - For example, the cancellation function ``request_cancel_sim_ids`` could be replicated by - the following (where ``sim_ids_to_cancel`` is a list of integers): - - .. code-block:: python - - # Send only these fields to existing H rows and libEnsemble will slot in the change. 
- H_o = np.zeros(len(sim_ids_to_cancel), dtype=[("sim_id", int), ("cancel_requested", bool)]) - H_o["sim_id"] = sim_ids_to_cancel - H_o["cancel_requested"] = True - ps.send(H_o, keep_state=True) - - **Generator initiated shutdown** +Tutorial sections +----------------- - If using a supporting allocation function, the generator can prompt the ensemble to shutdown - by simply exiting the function (e.g., on a test for a converged value). For example, the - allocation function :ref:`start_only_persistent` closes down - the ensemble as soon as a persistent generator returns. The usual return values should be given. +1. Introduction (this page) +2. :doc:`Standardized Generator (gest-api) ` +3. :doc:`Legacy Generator Function ` - **Examples** +.. toctree:: + :hidden: - Examples of non-persistent and persistent generator functions - can be found :doc:`here<../examples/gen_funcs>`. + generator_standardized + generator_legacy diff --git a/docs/function_guides/generator_legacy.rst b/docs/function_guides/generator_legacy.rst new file mode 100644 index 0000000000..eac9910abe --- /dev/null +++ b/docs/function_guides/generator_legacy.rst @@ -0,0 +1,201 @@ +Legacy Generator Function +========================= + +**Introduction** \|\| `Standardized Generator (gest-api) `__ \|\| **Legacy Generator Function** + +.. code-block:: python + + def my_generator(Input, persis_info, gen_specs, libE_info): + batch_size = gen_specs["user"]["batch_size"] + + Output = np.zeros(batch_size, gen_specs["out"]) + # ... + Output["x"], persis_info = generate_next_simulation_inputs(Input["f"], persis_info) + + return Output, persis_info + +Most ``gen_f`` function definitions written by users resemble:: + + def my_generator(Input, persis_info, gen_specs, libE_info): + +where: + + * ``Input`` is a selection of the :ref:`History array`, a NumPy structured array. + * :ref:`persis_info` is a dictionary containing state information. + * :ref:`gen_specs` is a dictionary of generator parameters. 
+ * ``libE_info`` is a dictionary containing miscellaneous entries.
+
+Valid generator functions can accept a subset of the above parameters. So a very simple generator can start::
+
+    def my_generator(Input):
+
+If ``gen_specs`` was initially defined:
+
+.. code-block:: python
+
+    gen_specs = GenSpecs(
+        gen_f=my_generator,
+        inputs=["f"],
+        outputs=[("x", float, (1,))],
+        user={"batch_size": 128},
+    )
+
+Then user parameters and a *local* array of outputs may be obtained/initialized like::
+
+    batch_size = gen_specs["user"]["batch_size"]
+    Output = np.zeros(batch_size, dtype=gen_specs["out"])
+
+This array should be populated by whatever values are generated within
+the function::
+
+    Output["x"], persis_info = generate_next_simulation_inputs(Input["f"], persis_info)
+
+Then return the array and ``persis_info`` to libEnsemble::
+
+    return Output, persis_info
+
+Between the ``Output`` definition and the ``return``, any computation can be performed.
+Users can try an :doc:`executor<../executor/ex_index>` to submit applications to parallel
+resources, or plug in components from other libraries to serve their needs.
+
+.. note::
+
+    State ``gen_f`` information like checkpointing should be
+    appended to ``persis_info``.
+
+.. _persistent-gens:
+
+**Persistent Generators**
+
+While non-persistent generators return after completing their calculation, persistent
+generators do the following in a loop:
+
+ 1. Receive simulation results and metadata; exit if metadata instructs.
+ 2. Perform analysis.
+ 3. Send subsequent simulation parameters.
+
+Persistent generators don't need to be re-initialized on each call, but are typically
+more complicated. The persistent :doc:`APOSMM<../examples/aposmm>`
+optimization generator function included with libEnsemble maintains
+local optimization subprocesses based on results from complete simulations.
+
+Use ``GenSpecs.persis_in`` to specify fields to send back to the generator throughout the run.
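As a runnable end-to-end sketch of the legacy pattern above — the uniform-sampling body and the seeded ``rng`` entry in ``persis_info`` are invented for illustration:

```python
import numpy as np


def my_generator(Input, persis_info, gen_specs, libE_info=None):
    batch_size = gen_specs["user"]["batch_size"]
    Output = np.zeros(batch_size, dtype=gen_specs["out"])
    # Populate the local output array (here: uniform random sample points)
    Output["x"] = persis_info["rng"].uniform(0, 1, size=(batch_size, 1))
    return Output, persis_info


gen_specs = {"out": [("x", float, (1,))], "user": {"batch_size": 4}}
persis_info = {"rng": np.random.default_rng(0)}

Output, persis_info = my_generator(None, persis_info, gen_specs)
print(Output["x"].shape)  # (4, 1)
```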
+``GenSpecs.inputs`` only describes the input fields when the function is **first called**. + +Functions for a persistent generator to communicate directly with the manager +are available in the :ref:`libensemble.tools.persistent_support` class. + +Sending/receiving data is supported by the :ref:`PersistentSupport` class:: + + from libensemble.tools import PersistentSupport + from libensemble.message_numbers import STOP_TAG, PERSIS_STOP, EVAL_GEN_TAG, FINISHED_PERSISTENT_GEN_TAG + + my_support = PersistentSupport(libE_info, EVAL_GEN_TAG) + +Implementing functions from the above class is relatively simple: + +.. tab-set:: + + .. tab-item:: send + + .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport + .. autofunction:: send + + This function call typically resembles:: + + my_support.send(local_H_out[selected_IDs]) + + Note that this function has no return. + + .. tab-item:: recv + + .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport + .. autofunction:: recv + + This function call typically resembles:: + + tag, Work, calc_in = my_support.recv() + + if tag in [STOP_TAG, PERSIS_STOP]: + cleanup() + break + + The logic following the function call is typically used to break the persistent + generator's main loop and return. + + .. tab-item:: send_recv + + .. currentmodule:: libensemble.tools.persistent_support.PersistentSupport + .. autofunction:: send_recv + + This function performs both of the previous functions in a single statement. Its + usage typically resembles:: + + tag, Work, calc_in = my_support.send_recv(local_H_out[selected_IDs]) + if tag in [STOP_TAG, PERSIS_STOP]: + cleanup() + break + + Once the persistent generator's loop has been broken because of + the tag from the manager, it should return with an additional tag:: + + return local_H_out, persis_info, FINISHED_PERSISTENT_GEN_TAG + +See :ref:`calc_status` for more information about +the message tags. + +.. 
_gen_active_recv: + +**Active receive mode** + +By default, a persistent worker is expected to +receive and send data in a *ping pong* fashion. Alternatively, +a worker can be initiated in *active receive* mode by the allocation +function (see :ref:`start_only_persistent`). +The persistent worker can then send and receive from the manager at any time. + +Ensure there are no communication deadlocks in this mode. In manager-worker message exchanges, only the worker-side +receive is blocking by default (a non-blocking option is available). + +**Cancelling Simulations** + +Previously submitted simulations can be cancelled by sending a message to the manager: + +.. currentmodule:: libensemble.tools.persistent_support.PersistentSupport +.. autofunction:: request_cancel_sim_ids + +- If a generated point is cancelled by the generator **before sending** to another worker for simulation, then it won't be sent. +- If that point has **already been evaluated** by a simulation, the ``cancel_requested`` field will remain ``True``. +- If that point is **currently being evaluated**, a kill signal will be sent to the corresponding worker; it must be manually processed in the simulation function. + +The :doc:`Borehole Calibration tutorial<../tutorials/calib_cancel_tutorial>` gives an example +of the capability to cancel pending simulations. + +**Modification of existing points** + +To change existing fields of the History array, create a NumPy structured array where the ``dtype`` contains +the ``sim_id`` and the fields to be modified. Send this array with ``keep_state=True`` to the manager. +This will overwrite the manager's History array. + +For example, the cancellation function ``request_cancel_sim_ids`` could be replicated by +the following (where ``sim_ids_to_cancel`` is a list of integers): + +.. code-block:: python + + # Send only these fields to existing H rows and libEnsemble will slot in the change. 
+ H_o = np.zeros(len(sim_ids_to_cancel), dtype=[("sim_id", int), ("cancel_requested", bool)]) + H_o["sim_id"] = sim_ids_to_cancel + H_o["cancel_requested"] = True + ps.send(H_o, keep_state=True) + +**Generator initiated shutdown** + +If using a supporting allocation function, the generator can prompt the ensemble to shutdown +by simply exiting the function (e.g., on a test for a converged value). For example, the +allocation function :ref:`start_only_persistent` closes down +the ensemble as soon as a persistent generator returns. The usual return values should be given. + +**Examples** + +Examples of non-persistent and persistent generator functions +can be found :doc:`here<../examples/gen_funcs>`. diff --git a/docs/function_guides/generator_standardized.rst b/docs/function_guides/generator_standardized.rst new file mode 100644 index 0000000000..d09e01a842 --- /dev/null +++ b/docs/function_guides/generator_standardized.rst @@ -0,0 +1,60 @@ +Standardized Generator (gest-api) +================================= + +**Introduction** \|\| **Standardized Generator (gest-api)** \|\| `Legacy Generator Function `__ + +Standardized generators are classes that inherit from ``gest_api.Generator``. +They adhere to the ``gest-api`` standard and are parameterized by a ``VOCS`` +object defining the problem's variables and objectives. + +A basic generator implements the ``suggest()`` and ``ingest()`` methods, which +operate on lists of dictionaries: + +.. code-block:: python + :linenos: + + import numpy as np + from gest_api import Generator + from gest_api.vocs import VOCS + + + class UniformSample(Generator): + """Samples over the domain specified in the VOCS.""" + + def __init__(self, vocs: VOCS): + self.vocs = vocs + self.rng = np.random.default_rng(1) + super().__init__(vocs) + + def _validate_vocs(self, vocs): + assert len(self.vocs.variable_names), "VOCS must contain variables." 
+ + def suggest(self, n_trials): + output = [] + for _ in range(n_trials): + trial = {} + for key in self.vocs.variables: + trial[key] = self.rng.uniform(self.vocs.variables[key].domain[0], self.vocs.variables[key].domain[1]) + output.append(trial) + return output + + def ingest(self, calc_in): + pass # random sample so nothing to ingest + +libEnsemble's handling of standardized generators is specified using ``GenSpecs``: + +.. code-block:: python + + gen_specs = GenSpecs( + generator=UniformSample(vocs), + inputs=["sim_id"], + persis_in=["x", "f"], + outputs=[("x", float, 2)], + vocs=vocs, + user={"batch_size": 128}, + ) + +.. note:: + Ensure that ``gen_specs.inputs`` or ``gen_specs.persis_in`` requests at least one field + (like ``"sim_id"`` or ``"f"``) to be sent back, even if the generator does not + process them. diff --git a/docs/function_guides/simulator.rst b/docs/function_guides/simulator.rst index 40374c55c8..5d69a4f79b 100644 --- a/docs/function_guides/simulator.rst +++ b/docs/function_guides/simulator.rst @@ -3,6 +3,8 @@ Simulator Functions =================== +**Introduction** \|\| `Standardized Simulator (gest-api) `__ \|\| `Legacy Simulator Function `__ + Simulator and :ref:`Generator functions` have relatively similar interfaces. Writing a Simulator @@ -12,104 +14,12 @@ Writing a Simulator The `gest-api` simulator interface is the recommended approach for new libEnsemble projects. The "Legacy Simulator Function" interface is supported for backward compatibility but may be deprecated in a future release. -.. tab-set:: - - .. tab-item:: Standardized Simulator (gest-api) - - Standardized simulators are plain callables — no base class required — with the signature:: - - def my_simulation(input_dict: dict, **kwargs) -> dict: - - They receive a single point as a Python dictionary (keyed by VOCS variable and constant - names) and return a dictionary of outputs (keyed by VOCS objective, observable, and - constraint names). - - .. 
code-block:: python - - def my_simulation(input_dict: dict, **kwargs) -> dict: - x1 = input_dict["x1"] - x2 = input_dict["x2"] - f = (x1 - 1) ** 2 + (x2 - 2) ** 2 - return {"f": f} - - Configure it with ``SimSpecs`` using a ``VOCS`` object. ``inputs`` and ``outputs`` - are derived automatically from the VOCS when not set explicitly: - - .. code-block:: python - - from gest_api.vocs import VOCS - from libensemble.specs import SimSpecs - - vocs = VOCS( - variables={"x1": [0, 1.0], "x2": [0, 10.0]}, - objectives={"f": "MINIMIZE"}, - ) - - sim_specs = SimSpecs( - simulator=my_simulation, - vocs=vocs, - ) - - If ``libE_info`` is needed (e.g., to access the :doc:`executor<../executor/ex_index>`), - declare it as a keyword argument and libEnsemble will pass it automatically:: - - def my_simulation(input_dict: dict, libE_info=None, **kwargs) -> dict: - - .. tab-item:: Legacy Simulator Function - - .. code-block:: python - - def my_simulation(Input, persis_info, sim_specs, libE_info): - batch_size = sim_specs["user"]["batch_size"] - - Output = np.zeros(batch_size, sim_specs["out"]) - # ... - Output["f"], persis_info = do_a_simulation(Input["x"], persis_info) +Tutorial sections +----------------- - return Output, persis_info - - Most ``sim_f`` function definitions written by users resemble:: - - def my_simulation(Input, persis_info, sim_specs, libE_info): - - where: - - * ``Input`` is a selection of the :ref:`History array`, a NumPy structured array. - * :ref:`persis_info` is a dictionary containing state information. - * :ref:`sim_specs` is a dictionary of simulation parameters. - * ``libE_info`` is a dictionary containing libEnsemble-specific entries. - - Valid simulator functions can accept a subset of the above parameters. So a very simple simulator function can start:: - - def my_simulation(Input): - - If ``sim_specs`` was initially defined: - - .. 
code-block:: python - - sim_specs = SimSpecs( - sim_f=my_simulation, - inputs=["x"], - outputs=["f", float, (1,)], - user={"batch_size": 128}, - ) - - Then user parameters and a *local* array of outputs may be obtained/initialized like:: - - batch_size = sim_specs["user"]["batch_size"] - Output = np.zeros(batch_size, dtype=sim_specs["out"]) - - This array should be populated with output values from the simulation:: - - Output["f"], persis_info = do_a_simulation(Input["x"], persis_info) - - Then return the array and ``persis_info`` to libEnsemble:: - - return Output, persis_info - - Between the ``Output`` definition and the ``return``, any computation can be performed. - Users can try an :doc:`executor<../executor/ex_index>` to submit applications to parallel - resources, or plug in components from other libraries to serve their needs. +1. Introduction (this page) +2. :doc:`Standardized Simulator (gest-api) ` +3. :doc:`Legacy Simulator Function ` Executor -------- @@ -135,3 +45,9 @@ function returns. An example routine using a persistent simulator can be found in test_persistent_sim_uniform_sampling_. .. _test_persistent_sim_uniform_sampling: https://github.com/Libensemble/libensemble/blob/develop/libensemble/tests/functionality_tests/test_persistent_sim_uniform_sampling.py + +.. toctree:: + :hidden: + + simulator_standardized + simulator_legacy diff --git a/docs/function_guides/simulator_legacy.rst b/docs/function_guides/simulator_legacy.rst new file mode 100644 index 0000000000..01927401b2 --- /dev/null +++ b/docs/function_guides/simulator_legacy.rst @@ -0,0 +1,58 @@ +Legacy Simulator Function +========================= + +**Introduction** \|\| `Standardized Simulator (gest-api) `__ \|\| **Legacy Simulator Function** + +.. code-block:: python + + def my_simulation(Input, persis_info, sim_specs, libE_info): + batch_size = sim_specs["user"]["batch_size"] + + Output = np.zeros(batch_size, sim_specs["out"]) + # ... 
+        Output["f"], persis_info = do_a_simulation(Input["x"], persis_info)
+
+        return Output, persis_info
+
+Most ``sim_f`` function definitions written by users resemble::
+
+    def my_simulation(Input, persis_info, sim_specs, libE_info):
+
+where:
+
+ * ``Input`` is a selection of the :ref:`History array`, a NumPy structured array.
+ * :ref:`persis_info` is a dictionary containing state information.
+ * :ref:`sim_specs` is a dictionary of simulation parameters.
+ * ``libE_info`` is a dictionary containing libEnsemble-specific entries.
+
+Valid simulator functions can accept a subset of the above parameters. So a very simple simulator function can start::
+
+    def my_simulation(Input):
+
+If ``sim_specs`` was initially defined:
+
+.. code-block:: python
+
+    sim_specs = SimSpecs(
+        sim_f=my_simulation,
+        inputs=["x"],
+        outputs=[("f", float, (1,))],
+        user={"batch_size": 128},
+    )
+
+Then user parameters and a *local* array of outputs may be obtained/initialized like::
+
+    batch_size = sim_specs["user"]["batch_size"]
+    Output = np.zeros(batch_size, dtype=sim_specs["out"])
+
+This array should be populated with output values from the simulation::
+
+    Output["f"], persis_info = do_a_simulation(Input["x"], persis_info)
+
+Then return the array and ``persis_info`` to libEnsemble::
+
+    return Output, persis_info
+
+Between the ``Output`` definition and the ``return``, any computation can be performed.
+Users can try an :doc:`executor<../executor/ex_index>` to submit applications to parallel
+resources, or plug in components from other libraries to serve their needs.
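A minimal runnable version of this legacy simulator pattern — the sum-of-squares objective and the two-point input array are invented for illustration:

```python
import numpy as np


def my_simulation(Input, persis_info, sim_specs, libE_info=None):
    batch_size = len(Input)
    Output = np.zeros(batch_size, dtype=sim_specs["out"])
    # Populate the output array with the simulated objective values
    Output["f"] = np.sum(Input["x"] ** 2, axis=1)
    return Output, persis_info


sim_specs = {"out": [("f", float)]}
Input = np.zeros(2, dtype=[("x", float, (2,))])
Input["x"] = [[1.0, 2.0], [3.0, 4.0]]

Output, _ = my_simulation(Input, {}, sim_specs)
print(Output["f"])  # [ 5. 25.]
```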
diff --git a/docs/function_guides/simulator_standardized.rst b/docs/function_guides/simulator_standardized.rst
new file mode 100644
index 0000000000..6561ea16cb
--- /dev/null
+++ b/docs/function_guides/simulator_standardized.rst
@@ -0,0 +1,43 @@
+Standardized Simulator (gest-api)
+=================================
+
+**Introduction** \|\| **Standardized Simulator (gest-api)** \|\| `Legacy Simulator Function <simulator_legacy.html>`__
+
+Standardized simulators are plain callables — no base class required — with the signature::
+
+    def my_simulation(input_dict: dict, **kwargs) -> dict:
+
+They receive a single point as a Python dictionary (keyed by VOCS variable and constant
+names) and return a dictionary of outputs (keyed by VOCS objective, observable, and
+constraint names).
+
+.. code-block:: python
+
+    def my_simulation(input_dict: dict, **kwargs) -> dict:
+        x1 = input_dict["x1"]
+        x2 = input_dict["x2"]
+        f = (x1 - 1) ** 2 + (x2 - 2) ** 2
+        return {"f": f}
+
+Configure it with ``SimSpecs`` using a ``VOCS`` object. ``inputs`` and ``outputs``
+are derived automatically from the VOCS when not set explicitly:
+
+.. 
code-block:: python + + from gest_api.vocs import VOCS + from libensemble.specs import SimSpecs + + vocs = VOCS( + variables={"x1": [0, 1.0], "x2": [0, 10.0]}, + objectives={"f": "MINIMIZE"}, + ) + + sim_specs = SimSpecs( + simulator=my_simulation, + vocs=vocs, + ) + +If ``libE_info`` is needed (e.g., to access the :doc:`executor<../executor/ex_index>`), +declare it as a keyword argument and libEnsemble will pass it automatically:: + + def my_simulation(input_dict: dict, libE_info=None, **kwargs) -> dict: From 699e06fd2346b2ba69ede7e0d4ec7d33bd84c7f7 Mon Sep 17 00:00:00 2001 From: jlnav Date: Thu, 30 Apr 2026 12:53:32 -0500 Subject: [PATCH 30/34] move alloc_f data into one page in Additional References --- docs/examples/alloc_funcs.rst | 100 --------------------- docs/examples/examples_index.rst | 1 - docs/examples/persistent_sampling.rst | 3 + docs/function_guides/allocator.rst | 124 +++++++++++++++++++++++--- docs/index.rst | 1 - 5 files changed, 114 insertions(+), 115 deletions(-) delete mode 100644 docs/examples/alloc_funcs.rst diff --git a/docs/examples/alloc_funcs.rst b/docs/examples/alloc_funcs.rst deleted file mode 100644 index 8c50a9153d..0000000000 --- a/docs/examples/alloc_funcs.rst +++ /dev/null @@ -1,100 +0,0 @@ -.. _examples-alloc: - -Allocation Functions -==================== - -Below are example allocation functions available in libEnsemble. - -Many users use these unmodified. - -.. IMPORTANT:: - The default allocation function changed in libEnsemble v2.0 from ``give_sim_work_first`` to ``start_only_persistent``. - -.. note:: - - The most commonly used allocation function for non-persistent generators is :ref:`give_sim_work_first`. - -.. role:: underline - :class: underline - -.. _start_only_persistent_label: - -start_only_persistent ---------------------- -.. automodule:: start_only_persistent - :members: - :undoc-members: - -.. dropdown:: :underline:`start_only_persistent.py` - - .. 
literalinclude:: ../../libensemble/alloc_funcs/start_only_persistent.py - :language: python - :linenos: - -.. _gswf_label: - -give_sim_work_first -------------------- -.. automodule:: give_sim_work_first - :members: - :undoc-members: - -.. dropdown:: :underline:`give_sim_work_first.py` - - .. literalinclude:: ../../libensemble/alloc_funcs/give_sim_work_first.py - :language: python - :linenos: - -fast_alloc ----------- -.. automodule:: fast_alloc - :members: - :undoc-members: - -.. dropdown:: :underline:`fast_alloc.py` - - .. literalinclude:: ../../libensemble/alloc_funcs/fast_alloc.py - :language: python - :linenos: - -start_persistent_local_opt_gens -------------------------------- -.. automodule:: start_persistent_local_opt_gens - :members: - :undoc-members: - -fast_alloc_and_pausing ----------------------- -.. automodule:: fast_alloc_and_pausing - :members: - :undoc-members: - -only_one_gen_alloc ------------------- -.. automodule:: only_one_gen_alloc - :members: - :undoc-members: - -start_fd_persistent -------------------- -.. automodule:: start_fd_persistent - :members: - :undoc-members: - -persistent_aposmm_alloc ------------------------ -.. automodule:: persistent_aposmm_alloc - :members: - :undoc-members: - -give_pregenerated_work ----------------------- -.. automodule:: give_pregenerated_work - :members: - :undoc-members: - -inverse_bayes_allocf --------------------- -.. automodule:: inverse_bayes_allocf - :members: - :undoc-members: diff --git a/docs/examples/examples_index.rst b/docs/examples/examples_index.rst index 5fa59a9d76..c1d6abfb28 100644 --- a/docs/examples/examples_index.rst +++ b/docs/examples/examples_index.rst @@ -12,7 +12,6 @@ The examples come from the libEnsemble repository and the `libEnsemble Community gen_funcs sim_funcs - alloc_funcs calling_scripts .. 
_libEnsemble Community Repository: https://github.com/Libensemble/libe-community-examples diff --git a/docs/examples/persistent_sampling.rst b/docs/examples/persistent_sampling.rst index 7f778a8e8c..cf33eaa554 100644 --- a/docs/examples/persistent_sampling.rst +++ b/docs/examples/persistent_sampling.rst @@ -1,6 +1,9 @@ persistent_sampling ------------------- +.. role:: underline + :class: underline + .. automodule:: persistent_sampling :members: :undoc-members: diff --git a/docs/function_guides/allocator.rst b/docs/function_guides/allocator.rst index 0620105825..ec1189e84a 100644 --- a/docs/function_guides/allocator.rst +++ b/docs/function_guides/allocator.rst @@ -4,23 +4,21 @@ Allocation Functions ==================== Although the included allocation functions are sufficient for -most users, those who want to fine-tune how data or resources are allocated to their generator or simulator can write their own. +most users, those who want to fine-tune how data or resources +may be allocated to their generator or simulator can write their own. -The ``alloc_f`` is unique since it is called by libEnsemble's manager instead of a worker. +We encourage experimenting with: -For allocation functions, as with the other user functions, the level of complexity can -vary widely. We encourage experimenting with: - - 1. Prioritization of simulations - 2. Sending results immediately or in batch - 3. Assigning varying resources to evaluations +1. Prioritization of simulations +2. Sending results immediately or in batch +3. Assigning varying resources to evaluations .. dropdown:: Example .. 
literalinclude:: ../../libensemble/alloc_funcs/fast_alloc.py
        :caption: libensemble.alloc_funcs.fast_alloc.give_sim_work_first
 
-Most ``alloc_f`` function definitions written by users resemble::
+The ``alloc_f`` function definition resembles::
 
     def my_allocator(W, H, sim_specs, gen_specs, alloc_specs, persis_info, libE_info):
 
@@ -35,14 +33,14 @@ Most users first check that it is appropriate to allocate work::
 
     if libE_info["sim_max_given"] or not libE_info["any_idle_workers"]:
         return {}, persis_info
 
-If the allocation is to continue, a support class is instantiated and a
-:ref:`Work dictionary` is initialized::
+If the allocation is to continue, instantiate a support class to assist with the
+:ref:`Work dictionary` construction::
 
     manage_resources = "resource_sets" in H.dtype.names or libE_info["use_resource_sets"]
     support = AllocSupport(W, manage_resources, persis_info, libE_info)
     Work = {}
 
-This Work dictionary is populated with integer keys ``wid`` for each worker and
+The Work dictionary is populated with integer keys ``wid`` for each worker and
 dictionary values to give to those workers:
 
 .. dropdown:: Example ``Work``
@@ -126,10 +124,110 @@ or mark points for cancellation.
 
 The remaining values above are useful for efficient filtering of H values
 (e.g., ``sim_ended_count`` saves filtering by an entire column of H.)
 
-Descriptions of included allocation functions can be found :doc:`here<../examples/alloc_funcs>`.
 
 The default allocation function is ``start_only_persistent``. During its worker ID loop, it checks if
 there's unallocated work and assigns simulations for that work. Otherwise, it initializes
 generators for up to ``"num_active_gens"`` instances. Other settings like ``batch_mode``
 are also supported. See :ref:`here<start_only_persistent_label>` for more information.
+
+.. _examples-alloc:
+
+Examples
+========
+
+Below are example allocation functions available in libEnsemble.
+
+Many users use these unmodified.
+
+.. 
IMPORTANT::
+    The default allocation function changed in libEnsemble v2.0 from ``give_sim_work_first`` to ``start_only_persistent``.
+
+.. note::
+
+    The most commonly used allocation function for non-persistent generators is :ref:`give_sim_work_first<gswf_label>`.
+
+.. role:: underline
+   :class: underline
+
+.. _start_only_persistent_label:
+
+start_only_persistent
+---------------------
+.. automodule:: start_only_persistent
+   :members:
+   :undoc-members:
+
+.. dropdown:: :underline:`start_only_persistent.py`
+
+    .. literalinclude:: ../../libensemble/alloc_funcs/start_only_persistent.py
+       :language: python
+       :linenos:
+
+.. _gswf_label:
+
+give_sim_work_first
+-------------------
+.. automodule:: give_sim_work_first
+   :members:
+   :undoc-members:
+
+.. dropdown:: :underline:`give_sim_work_first.py`
+
+    .. literalinclude:: ../../libensemble/alloc_funcs/give_sim_work_first.py
+       :language: python
+       :linenos:
+
+fast_alloc
+----------
+.. automodule:: fast_alloc
+   :members:
+   :undoc-members:
+
+.. dropdown:: :underline:`fast_alloc.py`
+
+    .. literalinclude:: ../../libensemble/alloc_funcs/fast_alloc.py
+       :language: python
+       :linenos:
+
+start_persistent_local_opt_gens
+-------------------------------
+.. automodule:: start_persistent_local_opt_gens
+   :members:
+   :undoc-members:
+
+fast_alloc_and_pausing
+----------------------
+.. automodule:: fast_alloc_and_pausing
+   :members:
+   :undoc-members:
+
+only_one_gen_alloc
+------------------
+.. automodule:: only_one_gen_alloc
+   :members:
+   :undoc-members:
+
+start_fd_persistent
+-------------------
+.. automodule:: start_fd_persistent
+   :members:
+   :undoc-members:
+
+persistent_aposmm_alloc
+-----------------------
+.. automodule:: persistent_aposmm_alloc
+   :members:
+   :undoc-members:
+
+give_pregenerated_work
+----------------------
+.. automodule:: give_pregenerated_work
+   :members:
+   :undoc-members:
+
+inverse_bayes_allocf
+--------------------
+.. 
automodule:: inverse_bayes_allocf + :members: + :undoc-members: diff --git a/docs/index.rst b/docs/index.rst index 9125815e2e..e222e41e3a 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -35,7 +35,6 @@ examples/gest_api examples/gen_funcs examples/sim_funcs - examples/alloc_funcs examples/calling_scripts Submission Scripts From bb81e0367e8c4ab7a4164c598be8e079d41c55eb Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 1 May 2026 08:43:47 -0500 Subject: [PATCH 31/34] remove xSDK_policy_compatibility. They have our older compatibility.md file already, and xSDK has been absorbed into e4s --- docs/xSDK_policy_compatibility.md | 82 ------------------------------- 1 file changed, 82 deletions(-) delete mode 100644 docs/xSDK_policy_compatibility.md diff --git a/docs/xSDK_policy_compatibility.md b/docs/xSDK_policy_compatibility.md deleted file mode 100644 index 0dc83520d0..0000000000 --- a/docs/xSDK_policy_compatibility.md +++ /dev/null @@ -1,82 +0,0 @@ -# xSDK Community Policy Compatibility for libEnsemble - -This document summarizes the efforts of libEnsemble -to achieve compatibility with the xSDK community policies. - -**Website:** https://github.com/Libensemble/libensemble - -### Mandatory Policies - -[General libEnsemble Note](#liben-note) - -| Policy |Support| Notes | -|------------------------|-------|-------------------------| -|**M1.** Support xSDK community GNU Autoconf or CMake options. |N/A| libEnsemble is a Python package and provides a `setup.py` file for installation. This is compatible with Python's built-in installation feature (`python setup.py install`) and with the ubiquitous `pip` installer. libEnsemble is also in the Spack repository and can be installed with `spack install py-libensemble`. GNU Autoconf or CMake are unsuitable for a Python package. -|**M2.** Provide a comprehensive test suite for correctness of installation verification. 
|Full| libEnsemble has a test suite that includes both unit tests and regression tests that are run on every push to GitHub via [Travis CI](https://travis-ci.org/Libensemble/libensemble). In addition to this test suite, further scaling tests are manually run on HPC platforms including Cori, Theta, and Summit. -|**M3.** Employ user-provided MPI communicator (no MPI_COMM_WORLD). |Full|libEnsemble takes an MPI communicator as an option; if libEnsemble is configured for MPI mode, this provided communicator will be employed. If no communicator is given, a duplicate of MPI_COMM_WORLD is taken as a default. | -|**M4.** Give best effort at portability to key architectures (standard Linux distributions, GNU, Clang, vendor compilers, and target machines at ALCF, NERSC, OLCF). |Full| libEnsemble is tested regularly, including prior to every release, on ALCF (Theta), OLCF (Summit) and NERSC (Cori) platforms. [M4 details](#m4-details)| -|**M5.** Provide a documented, reliable way to contact the development team. |Full| The libEnsemble team can be contacted through: 1) The public [issues page on GitHub](https://github.com/Libensemble/libensemble/issues). 2) [Slack](https://libensemble.slack.com). 3) The public email list libensemble@mcs.anl.gov. | -|**M6.** Respect system resources and settings made by other previously called packages (e.g., signal handling). |Full| libEnsemble does not modify system resources or settings. | -|**M7.** Come with an open source (BSD style) license. |Full| libEnsemble uses a 3-clause BSD license stated in the `LICENSE` file in the top level of the GitHub repository. | -|**M8.** Provide a runtime API to return the current version number of the software. |Full| The version can be returned within Python via: `libensemble.__version__`| -|**M9.** Use a limited and well-defined symbol, macro, library, and include file name space. |Full| All libEnsemble symbols (e.g., functions, variables, modules, packages) begin with the prefix `libensemble.`. 
This prevents any namespace conflicts.| -|**M10.** Provide an xSDK team accessible repository (not necessarily publicly available). |Full| The libEnsemble repository is public and can be found at https://github.com/Libensemble/libensemble. Gitflow is used, along with pull requests, whereby only those with administrator privileges can accept pull requests into the master or develop branches. The workflow guidelines are provided in a `CONTRIBUTING.rst` file at the top level of the repository and a release process is given in the documentation. | -|**M11.** Have no hardwired print or IO statements that cannot be turned off. |Full| All output from the libEnsemble core package, except for the raising of exceptions, is routed through a libEnsemble logger, which is isolated from the Python root logger. Log messages of type `MANAGER_WARNING` or above are duplicated to standard error by default to ensure they are not missed. This can be turned off through the API. The API also allows the user to change the logging verbosity level and the name of the log file. This would allow a user, for example, to append logging to an existing log file, or to keep it separate. libEnsemble contains no interactive input. libEnsemble creates the files `ensemble.log` and `libE_stats.txt`, but the creation of these files can be preempted. [M11 details](#m11-details)| -|**M12.** For external dependencies, allow installing, building, and linking against an outside copy of external software. |Full| libEnsemble does not contain any other package's source code within. Note that Python packages are imported using the conventional `sys.path` system. Alternative instances of a package can be used by, for example, including in the `PYTHONPATH` environment variable.| -|**M13.** Install headers and libraries under \/include and \/lib. |Full| The standard Python installation is used for Python dependencies. 
This installs external Python packages under `/lib/python/site-packages/` When installed through Spack, the `` is specific to each Python package. This is added to `PYTHONPATH` when the Spack module for that library is loaded.| -|**M14.** Be buildable using 64 bit pointers. 32 bit is optional. |Full| There is no explicit use of pointers in libEnsemble, as Python handles pointers internally and depends on the install of Python (e.g., CPython), which will generally be 64-bit on supported systems. | -|**M15.** All xSDK compatibility changes should be sustainable. |Full| The xSDK-compatible package is in the standard release path. All the changes here should be sustainable. | -|**M16.** The package must support production-quality installation compatible with the xSDK install tool and xSDK metapackage. |Full|libEnsemble configure and install has full support from Spack. | - -M4 details : libEnsemble is a Python code and so does -not directly use compilers. It does, however, use NumPy, SciPy and mpi4py which -use compiled extensions. The current CI tests of libEnsemble use the standard -CPython compatible builds of these extensions (which are built using the GNU -compilers). libEnsemble is also regularly tested using the Intel distribution -for Python. - -libEnsemble is supported on Linux platforms and macOS. Windows platforms are -currently not supported. - -M11 details : Note: The sub-packages in the libensemble -directory structure such as `sim_specs` and `gen_specs` may contain print -statements. These are considered examples for users, rather than core -libEnsemble packages. - -A special exception exists in the `node_resources.py` module; part of -libEnsemble's resource detection infrastructure. The routine -`_print_local_cpu_resources()` can be launched by libEnsemble to probe -resources on a target node, and the output of this independent program is -captured by libEnsemble. 
- -### Recommended Policies - -| Policy |Support| Notes | -|------------------------|-------|-------------------------| -|**R1.** Have a public repository. |Full| Yes (see M10 above). | -|**R2.** Possible to run test suite under valgrind in order to test for memory corruption issues. |Full| It is possible to run the test suite under Valgrind. While libEnsemble is Python code, this may be useful for compiled extensions that are imported. PYTHONMALLOC=malloc must be set on the run line. CPython also provides a suppression file.| -|**R3.** Adopt and document consistent system for error conditions/exceptions. |Full| libEnsemble defines and raises exceptions according to module. All exceptions on workers are passed to the manager for processing. Warnings are handled by the logger. [R3 details](#r3-details)|| -|**R4.** Free all system resources acquired as soon as they are no longer needed. |Full| Python has built-in garbage collection that frees memory when it becomes unreferenced. When opening files, wherever possible, `with` expressions or `try/finally` blocks are used to ensure file handles are closed, even in the case of an error.| -|**R5.** Provide a mechanism to export ordered list of library dependencies. |Full| The dependencies for libEnsemble are given in `setup.py` and when pip install or pip setup.py egg_info are run, a file is created `libensemble.egg-info/requires.txt` containing the list of required and optional dependencies. If installing through pip, these will automatically be installed if they do not exist (`pip install libensemble` installs req. dependencies, while `pip install libensemble[extras]` installs both required and optional dependencies.| -|**R6.** Document versions of packages that it works with or depends upon, preferably in machine-readable form. |Full| Dependencies are given in the documentation. In some cases, this includes a lower bound on the version number. 
These dependencies are also specified in the Spack package, and automatically resolved during installation.| -|**R7.** Have README, SUPPORT, LICENSE, and CHANGELOG files in top directory. |Full| These files are present in the top directory.| - -R3 details : libEnsemble catches all exceptions -(explicitly raised and unexpected) from the manager and worker processes at the -libEnsemble level, resulting in libEnsemble dumping the key ensemble state to -files. In `mpi4py` mode, the default is to then call MPI_ABORT to prevent a -hang. However, this can be turned off (via the `libE_specs` argument). In the -case it is turned off, or if other communication modes are used, the exception -is then raised. The user can in turn catch these exceptions from their calling -script. - -libEnsemble Note : The nature of libEnsemble's -interoperability with other libraries is different from typical xSDK libraries. -libEnsemble is a Python code and interaction with other libraries may take -several forms. These include: libEnsemble calling other libraries through -Python bindings, libEnsemble launching applications (possibly providing a -sub-communicator), libEnsemble being called from a Python level infrastructure, -libEnsemble being launched as part of a campaign level workflow, or libEnsemble -potentially being activated via a system call or embedded interpreter; a more -unconventional approach. This is, therefore, a good opportunity to consider -interoperability from a Python and broader workflow perspective. 
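As an aside on the M8 row in the removed table above (runtime version query via ``libensemble.__version__``): the same check can be done package-agnostically with only the Python standard library. The helper name below is ours, and libEnsemble need not be installed for the fallback branch to work:

```python
from importlib.metadata import PackageNotFoundError, version


def runtime_version(pkg: str):
    """Return the installed version string for pkg, or None if it is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None


# Equivalent to libensemble.__version__ when libEnsemble is installed
print(runtime_version("libensemble"))
# A name that is certainly not an installed distribution yields None
print(runtime_version("definitely-not-a-real-package"))
```

This avoids importing the package just to read its version, which can matter in lightweight tooling or install checks.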
From 1d8d60dbb8401a23ea3d44a326563faef106ace8 Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 1 May 2026 08:46:17 -0500 Subject: [PATCH 32/34] remove posters.rst - this content is/will-be very out-of-date --- docs/index.rst | 1 - docs/posters.rst | 23 ----------------------- 2 files changed, 24 deletions(-) delete mode 100644 docs/posters.rst diff --git a/docs/index.rst b/docs/index.rst index e222e41e3a..98b6448a5d 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -49,7 +49,6 @@ known_issues release_notes contributing - posters .. toctree:: :maxdepth: 1 diff --git a/docs/posters.rst b/docs/posters.rst deleted file mode 100644 index 78c9af9117..0000000000 --- a/docs/posters.rst +++ /dev/null @@ -1,23 +0,0 @@ -Posters and Presentations -========================= - -Exascale Computing Project 2023 -------------------------------- - -.. raw:: html - - - -SciPy 2020 ----------- - -.. raw:: html - - - -CSE 2019 --------- - -.. raw:: html - - From 76df2dc1c964557387d767ffd21e66deda86056b Mon Sep 17 00:00:00 2001 From: jlnav Date: Fri, 1 May 2026 08:57:07 -0500 Subject: [PATCH 33/34] remove more Summit/Sierra code/docs on this branch too --- docs/advanced_installation.rst | 6 +----- docs/known_issues.rst | 2 -- docs/nitpicky | 1 - docs/platforms/platforms_index.rst | 3 +-- docs/running_libE.rst | 4 ++-- libensemble/resources/platforms.py | 12 ------------ .../functionality_tests/test_mpi_gpu_settings.py | 8 ++++---- .../scaling_tests/forces/forces_app/build_forces.sh | 7 ------- 8 files changed, 8 insertions(+), 35 deletions(-) diff --git a/docs/advanced_installation.rst b/docs/advanced_installation.rst index f638dd769c..962e0fad4b 100644 --- a/docs/advanced_installation.rst +++ b/docs/advanced_installation.rst @@ -1,7 +1,7 @@ Advanced Installation ===================== -libEnsemble can be installed from ``pip``, ``uv``, ``Conda``, or ``Spack``. +libEnsemble can be installed from ``pip``, ``uv``, ``pixi``, ``Conda``, or ``Spack``. 
libEnsemble requires the following dependencies, which are typically automatically installed alongside libEnsemble: @@ -44,10 +44,6 @@ Further recommendations for selected HPC systems are given in the MPICC=mpiicc pip install mpi4py --no-binary mpi4py - On Summit, the following line is recommended (with gcc compilers):: - - CC=mpicc MPICC=mpicc pip install mpi4py --no-binary mpi4py - .. tab-item:: uv To install the latest PyPI_ release via uv_:: diff --git a/docs/known_issues.rst b/docs/known_issues.rst index 89c596aae5..a68f1bcf47 100644 --- a/docs/known_issues.rst +++ b/docs/known_issues.rst @@ -19,8 +19,6 @@ may occur when using libEnsemble. * Local comms mode (multiprocessing) may fail if MPI is initialized before forking processors. This is thought to be responsible for issues combining multiprocessing with PETSc on some platforms. -* Remote detection of logical cores via ``LSB_HOSTS`` (e.g., Summit) returns the - number of physical cores as SMT info not available. * TCP mode does not support (1) more than one libEnsemble call in a given script or (2) the auto-resources option to the Executor. diff --git a/docs/nitpicky b/docs/nitpicky index 66bac90c00..5f46003f50 100644 --- a/docs/nitpicky +++ b/docs/nitpicky @@ -47,7 +47,6 @@ py:class libensemble.resources.platforms.Perlmutter py:class libensemble.resources.platforms.PerlmutterCPU py:class libensemble.resources.platforms.PerlmutterGPU py:class libensemble.resources.platforms.Polaris -py:class libensemble.resources.platforms.Summit py:class libensemble.resources.rset_resources.RSetResources py:class libensemble.resources.env_resources.EnvResources py:class libensemble.resources.resources.Resources diff --git a/docs/platforms/platforms_index.rst b/docs/platforms/platforms_index.rst index 5d1cf3e128..d1de30c45d 100644 --- a/docs/platforms/platforms_index.rst +++ b/docs/platforms/platforms_index.rst @@ -146,8 +146,7 @@ Systems with Launch/MOM Nodes Some large systems have a 3-tier node setup. 
That is, they have a separate set of launch nodes (known as MOM nodes on Cray Systems). User batch jobs or interactive sessions run on a launch node. Most such systems supply a special MPI runner that has some application-level scheduling -capability (e.g., ``aprun``, ``jsrun``). MPI applications can only be submitted from these nodes. Examples -of these systems include Summit and Sierra. +capability (e.g., ``aprun``, ``jsrun``). MPI applications can only be submitted from these nodes. There are two ways of running libEnsemble on these kinds of systems. The first, and simplest, is to run libEnsemble on the launch nodes. This is often sufficient if the worker's simulation diff --git a/docs/running_libE.rst b/docs/running_libE.rst index c78dcdaabb..aaed63342f 100644 --- a/docs/running_libE.rst +++ b/docs/running_libE.rst @@ -32,7 +32,7 @@ Running libEnsemble set ``libE_specs["dedicated_mode"] = True``. This mode can also be used to run on a **launch** node of a three-tier - system (e.g., Summit), ensuring the whole compute-node allocation is available for + system, ensuring the whole compute-node allocation is available for launching apps. Make sure there are no imports of ``mpi4py`` in your Python scripts. Note that on macOS and Windows, the default multiprocessing method is ``"spawn"`` @@ -69,7 +69,7 @@ Running libEnsemble This nesting does work with MPICH_ and its derivative MPI implementations. It is also unsuitable to use this mode when running on the **launch** nodes of - three-tier systems (e.g., Summit). In that case ``local`` mode is recommended. + three-tier systems. In that case ``local`` mode is recommended. .. 
tab-item:: TCP Comms diff --git a/libensemble/resources/platforms.py b/libensemble/resources/platforms.py index 44b2e76b28..69c36242fc 100644 --- a/libensemble/resources/platforms.py +++ b/libensemble/resources/platforms.py @@ -230,16 +230,6 @@ class Polaris(Platform): scheduler_match_slots: bool = True -class Summit(Platform): - mpi_runner: str = "jsrun" - cores_per_node: int = 42 - logical_cores_per_node: int = 168 - gpus_per_node: int = 6 - gpu_setting_type: str = "option_gpus_per_task" - gpu_setting_name: str = "-g" - scheduler_match_slots: bool = False - - class Known_platforms(BaseModel): """A list of platforms with known configurations. @@ -287,7 +277,6 @@ class Known_platforms(BaseModel): perlmutter_c: PerlmutterCPU = PerlmutterCPU() perlmutter_g: PerlmutterGPU = PerlmutterGPU() polaris: Polaris = Polaris() - summit: Summit = Summit() # Dictionary of known systems (or system partitions) detectable by domain name @@ -295,7 +284,6 @@ class Known_platforms(BaseModel): "frontier.olcf.ornl.gov": "frontier", "hostmgmt.cm.aurora.alcf.anl.gov": "aurora", "hsn.cm.polaris.alcf.anl.gov": "polaris", - "summit.olcf.ornl.gov": "summit", # Need to detect gpu count } diff --git a/libensemble/tests/functionality_tests/test_mpi_gpu_settings.py b/libensemble/tests/functionality_tests/test_mpi_gpu_settings.py index f307ca01be..d83e7a13cd 100644 --- a/libensemble/tests/functionality_tests/test_mpi_gpu_settings.py +++ b/libensemble/tests/functionality_tests/test_mpi_gpu_settings.py @@ -52,7 +52,7 @@ # Import libEnsemble items for this test from libensemble.libE import libE -from libensemble.resources.platforms import Aurora, Frontier, PerlmutterGPU, Platform, Polaris, Summit +from libensemble.resources.platforms import Aurora, Frontier, PerlmutterGPU, Platform, Polaris from libensemble.sim_funcs import six_hump_camel from libensemble.sim_funcs.var_resources import gpu_variable_resources as sim_f from libensemble.tools import parse_args @@ -190,7 +190,7 @@ del 
libE_specs["platform_specs"] # Fourth set - use platform setting ------------------------------------------------------------ - for platform in ["summit", "frontier", "perlmutter_g", "polaris", "aurora"]: + for platform in ["frontier", "perlmutter_g", "polaris", "aurora"]: print(f"\nRunning GPU setting checks (via known platform) for {platform} ------------------- ") libE_specs["platform"] = platform @@ -206,7 +206,7 @@ del libE_specs["platform"] # Fifth set - use platform environment setting ----------------------------------------------- - for platform in ["summit", "frontier", "perlmutter_g", "polaris", "aurora"]: + for platform in ["frontier", "perlmutter_g", "polaris", "aurora"]: print(f"\nRunning GPU setting checks (via known platform env. variable) for {platform} ----- ") os.environ["LIBE_PLATFORM"] = platform @@ -222,7 +222,7 @@ del os.environ["LIBE_PLATFORM"] # Sixth set - use platform_specs with known systems ------------------------------------------- - for platform in [Summit, Frontier, PerlmutterGPU, Polaris, Aurora]: + for platform in [Frontier, PerlmutterGPU, Polaris, Aurora]: print(f"\nRunning GPU setting checks (via known platform - platform_specs) for {platform} ------------------- ") libE_specs["platform_specs"] = platform() diff --git a/libensemble/tests/scaling_tests/forces/forces_app/build_forces.sh b/libensemble/tests/scaling_tests/forces/forces_app/build_forces.sh index 8dd599ffed..972695850f 100755 --- a/libensemble/tests/scaling_tests/forces/forces_app/build_forces.sh +++ b/libensemble/tests/scaling_tests/forces/forces_app/build_forces.sh @@ -49,10 +49,3 @@ fi # Nvidia (nvc) compiler with mpicc and on Cray system with target (Perlmutter) # mpicc -DGPU -O3 -fopenmp -mp=gpu -o forces.x forces.c # cc -DGPU -Wl,-znoexecstack -O3 -fopenmp -mp=gpu -target-accel=nvidia80 -o forces.x forces.c - -# xl (plain and using mpicc on Summit) -# xlc_r -DGPU -O3 -qsmp=omp -qoffload -o forces.x forces.c -# mpicc -DGPU -O3 -qsmp=omp -qoffload -o forces.x 
forces.c - -# Summit with gcc (Need up to offload capable gcc: module load gcc/12.1.0) - slower than xlc -# mpicc -DGPU -Ofast -fopenmp -Wl,-rpath=/sw/summit/gcc/12.1.0-0/lib64 -lm -foffload=nvptx-none forces.c -o forces.x From d83fc900948c2ca71020d205a2c78345792a5893 Mon Sep 17 00:00:00 2001 From: jlnav Date: Wed, 6 May 2026 15:47:04 -0500 Subject: [PATCH 34/34] large refactoring, replacing many uses of tabs throughout with tabbed-links to separate pages, instead --- docs/advanced_installation.rst | 205 ---------- .../advanced_installation.rst | 41 ++ .../advanced_installation_conda.rst | 49 +++ .../advanced_installation_pip.rst | 29 ++ .../advanced_installation_pixi.rst | 20 + .../advanced_installation_spack.rst | 77 ++++ .../advanced_installation_uv.rst | 11 + docs/data_structures/data_structures.rst | 2 +- docs/data_structures/gen_specs.rst | 82 ++-- docs/data_structures/libE_specs.rst | 356 ------------------ .../data_structures/libE_specs/libE_specs.rst | 108 ++++++ .../libE_specs/libE_specs_directories.rst | 99 +++++ .../libE_specs/libE_specs_general.rst | 40 ++ .../libE_specs/libE_specs_history.rst | 27 ++ .../libE_specs/libE_specs_profiling.rst | 17 + .../libE_specs/libE_specs_resources.rst | 60 +++ .../libE_specs/libE_specs_tcp.rst | 24 ++ docs/data_structures/persis_info.rst | 74 ++-- docs/data_structures/platform_specs.rst | 50 +-- docs/data_structures/sim_specs.rst | 74 ++-- docs/examples/calling_scripts.rst | 2 +- docs/examples/gest_api/aposmm.rst | 26 +- docs/examples/sim_funcs.rst | 1 + docs/executor/ex_base.rst | 62 +++ docs/executor/ex_index.rst | 257 +------------ docs/executor/ex_mpi.rst | 34 ++ docs/executor/ex_overview.rst | 159 ++++++++ docs/function_guides/calc_status.rst | 156 ++++---- docs/function_guides/generator_legacy.rst | 63 ++-- .../generator_standardized.rst | 2 +- docs/function_guides/simulator_legacy.rst | 2 +- .../simulator_standardized.rst | 2 +- docs/index.rst | 5 +- docs/introduction.rst | 2 +- docs/latex_index.rst | 2 +- 
docs/platforms/aurora.rst | 2 +- docs/platforms/bebop.rst | 2 +- docs/platforms/frontier.rst | 2 +- docs/platforms/improv.rst | 2 +- docs/platforms/perlmutter.rst | 2 +- docs/platforms/polaris.rst | 2 +- docs/running_libE.rst | 117 +++--- docs/tutorials/local_sine_tutorial.rst | 275 -------------- .../local_sine_tutorial.rst | 28 ++ .../local_sine_tutorial_1.rst | 28 ++ .../local_sine_tutorial_2.rst | 30 ++ .../local_sine_tutorial_3.rst | 20 + .../local_sine_tutorial_4.rst | 121 ++++++ .../local_sine_tutorial_5.rst | 71 ++++ docs/tutorials/tutorials.rst | 2 +- docs/utilities.rst | 64 ++-- docs/welcome.rst | 2 +- libensemble/libE.py | 2 +- libensemble/tools/parse_args.py | 2 +- 54 files changed, 1541 insertions(+), 1453 deletions(-) delete mode 100644 docs/advanced_installation.rst create mode 100644 docs/advanced_installation/advanced_installation.rst create mode 100644 docs/advanced_installation/advanced_installation_conda.rst create mode 100644 docs/advanced_installation/advanced_installation_pip.rst create mode 100644 docs/advanced_installation/advanced_installation_pixi.rst create mode 100644 docs/advanced_installation/advanced_installation_spack.rst create mode 100644 docs/advanced_installation/advanced_installation_uv.rst delete mode 100644 docs/data_structures/libE_specs.rst create mode 100644 docs/data_structures/libE_specs/libE_specs.rst create mode 100644 docs/data_structures/libE_specs/libE_specs_directories.rst create mode 100644 docs/data_structures/libE_specs/libE_specs_general.rst create mode 100644 docs/data_structures/libE_specs/libE_specs_history.rst create mode 100644 docs/data_structures/libE_specs/libE_specs_profiling.rst create mode 100644 docs/data_structures/libE_specs/libE_specs_resources.rst create mode 100644 docs/data_structures/libE_specs/libE_specs_tcp.rst create mode 100644 docs/executor/ex_base.rst create mode 100644 docs/executor/ex_mpi.rst create mode 100644 docs/executor/ex_overview.rst delete mode 100644 
docs/tutorials/local_sine_tutorial.rst create mode 100644 docs/tutorials/local_sine_tutorial/local_sine_tutorial.rst create mode 100644 docs/tutorials/local_sine_tutorial/local_sine_tutorial_1.rst create mode 100644 docs/tutorials/local_sine_tutorial/local_sine_tutorial_2.rst create mode 100644 docs/tutorials/local_sine_tutorial/local_sine_tutorial_3.rst create mode 100644 docs/tutorials/local_sine_tutorial/local_sine_tutorial_4.rst create mode 100644 docs/tutorials/local_sine_tutorial/local_sine_tutorial_5.rst diff --git a/docs/advanced_installation.rst b/docs/advanced_installation.rst deleted file mode 100644 index 962e0fad4b..0000000000 --- a/docs/advanced_installation.rst +++ /dev/null @@ -1,205 +0,0 @@ -Advanced Installation -===================== - -libEnsemble can be installed from ``pip``, ``uv``, ``pixi``, ``Conda``, or ``Spack``. - -libEnsemble requires the following dependencies, which are typically -automatically installed alongside libEnsemble: - -* Python_ ``>= 3.11`` -* NumPy_ ``>= 1.21`` -* psutil_ ``>= 5.9.4`` -* `pydantic`_ ``>= 2`` -* gest-api_ ``>= 0.1,<0.2`` - -We recommend installing in a virtual environment from ``uv``, ``conda`` or another source. - -Further recommendations for selected HPC systems are given in the -:ref:`HPC platform guides`. - -.. tab-set:: - - .. tab-item:: pip - - To install the latest PyPI_ release:: - - pip install libensemble - - To pip install libEnsemble from the latest develop branch:: - - python -m pip install --upgrade git+https://github.com/Libensemble/libensemble.git@develop - - **Installing with mpi4py** - - If you wish to use ``mpi4py`` with libEnsemble (choosing MPI out of the three - :doc:`communications options`), then this should - be installed to work with the existing MPI on your system. For example, - the following line:: - - pip install mpi4py - - will use the ``mpicc`` compiler wrapper on your PATH to identify the MPI library. - To specify a different compiler wrapper, add the ``MPICC`` option. 
- You also may wish to avoid existing binary builds; for example,:: - - MPICC=mpiicc pip install mpi4py --no-binary mpi4py - - .. tab-item:: uv - - To install the latest PyPI_ release via uv_:: - - uv pip install libensemble - - .. tab-item:: pixi - - Add to your pixi_ environment:: - - pixi add libensemble - - libEnsemble is also distributed with locked pixi environments for different versions of Python - and various dependency sets, primarily for testing but also useful for guaranteed working environments. - See a list with:: - - pixi workspace environment list - - and activate with:: - - pixi shell -e - - .. tab-item:: conda - - Install libEnsemble with Conda_ from the conda-forge channel:: - - conda config --add channels conda-forge - conda install -c conda-forge libensemble - - This package comes with some useful optional dependencies, including - optimizers and will install quickly as ready binary packages. - - **Installing with mpi4py with Conda** - - If you wish to use ``mpi4py`` with libEnsemble (choosing MPI out of the three - :doc:`communications options`), you can use the - following. - - .. note:: - For clusters and HPC systems, always install ``mpi4py`` to use the - system MPI library (see pip instructions above). - - For a standalone build that comes with an MPI implementation, you can install - libEnsemble using one of the following variants. - - To install libEnsemble with MPICH_:: - - conda install -c conda-forge libensemble=*=mpi_mpich* - - To install libEnsemble with `Open MPI`_:: - - conda install -c conda-forge libensemble=*=mpi_openmpi* - - The asterisks will pick up the latest version and build. - - .. note:: - This syntax may not work without adjustments on macOS or any non-bash - shell. In these cases, try:: - - conda install -c conda-forge libensemble='*'=mpi_mpich'*' - - For a complete list of builds for libEnsemble on Conda:: - - conda search libensemble --channel conda-forge - - .. 
tab-item:: Spack - - Install libEnsemble using the Spack_ distribution:: - - spack install py-libensemble - - The above command will install the latest release of libEnsemble with - the required dependencies only. Other optional - dependencies can be specified through variants. The following - line installs libEnsemble version 1.5.0 with some common variants - (e.g., using :doc:`APOSMM<../examples/aposmm>`): - - .. code-block:: bash - - spack install py-libensemble @1.5.0 +mpi +scipy +mpmath +petsc4py +nlopt - - The list of variants can be found by running:: - - spack info py-libensemble - - On some platforms you may wish to run libEnsemble without ``mpi4py``, - using a serial PETSc build. This is often preferable if running on - the launch nodes of a three-tier system:: - - spack install py-libensemble +scipy +mpmath +petsc4py ^py-petsc4py~mpi ^petsc~mpi~hdf5~hypre~superlu-dist - - The installation will create modules for libEnsemble and the dependent - packages. These can be loaded by running:: - - spack load -r py-libensemble - - Any Python packages will be added to the PYTHONPATH when the modules are loaded. If you do not have - modules on your system you may need to install ``lmod`` (also available in Spack):: - - spack install lmod - . $(spack location -i lmod)/lmod/lmod/init/bash - spack load lmod - - Alternatively, Spack could be used to build the serial ``petsc4py``, and Conda could use this by loading - the ``py-petsc4py`` module thus created. - - **Hint**: When combining Spack and Conda, you can access your Conda Python and packages in your - ``~/.spack/packages.yaml`` while your Conda environment is activated, using ``CONDA_PREFIX`` - For example, if you have an activated Conda environment with Python 3.11 and SciPy installed: - - .. 
code-block:: yaml - - packages: - python: - externals: - - spec: "python" - prefix: $CONDA_PREFIX - buildable: False - py-numpy: - externals: - - spec: "py-numpy" - prefix: $CONDA_PREFIX/lib/python3.11/site-packages/numpy - buildable: False - py-scipy: - externals: - - spec: "py-scipy" - prefix: $CONDA_PREFIX/lib/python3.11/site-packages/scipy - buildable: True - - For more information on Spack builds and any particular considerations - for specific systems, see the spack_libe_ repository. In particular, this - includes some example ``packages.yaml`` files (which go in ``~/.spack/``). - These files are used to specify dependencies that Spack must obtain from - the given system (rather than building from scratch). This may include - ``Python`` and the packages distributed with it (e.g., ``numpy``), and will - often include the system MPI library. - -Globus Compute --------------- - -`Globus Compute`_ may be installed optionally to submit simulation function instances to remote Globus Compute endpoints. - -.. _conda-forge: https://conda-forge.org/ -.. _Conda: https://docs.conda.io/en/latest/ -.. _gest-api: https://github.com/campa-consortium/gest-api -.. _GitHub: https://github.com/Libensemble/libensemble -.. _Globus Compute: https://www.globus.org/compute -.. _MPICH: https://www.mpich.org/ -.. _NumPy: http://www.numpy.org -.. _Open MPI: https://www.open-mpi.org/ -.. _psutil: https://pypi.org/project/psutil/ -.. _pixi: https://pixi.prefix.dev/latest/ -.. _pydantic: https://docs.pydantic.dev/1.10/ -.. _PyPI: https://pypi.org -.. _Python: http://www.python.org -.. _Spack: https://spack.readthedocs.io/en/latest -.. _spack_libe: https://github.com/Libensemble/spack_libe -.. _tqdm: https://tqdm.github.io/ -.. 
_uv: https://docs.astral.sh/uv/ diff --git a/docs/advanced_installation/advanced_installation.rst b/docs/advanced_installation/advanced_installation.rst new file mode 100644 index 0000000000..fc2e8546a4 --- /dev/null +++ b/docs/advanced_installation/advanced_installation.rst @@ -0,0 +1,41 @@ +Advanced Installation +===================== + +`pip `__ \|\| `uv `__ \|\| `pixi `__ \|\| `conda `__ \|\| `Spack `__ + +libEnsemble can be installed from ``pip``, ``uv``, ``pixi``, ``Conda``, or ``Spack``. + +libEnsemble requires the following dependencies, which are typically +automatically installed alongside libEnsemble: + +* Python_ ``>= 3.11`` +* NumPy_ ``>= 1.21`` +* psutil_ ``>= 5.9.4`` +* `pydantic`_ ``>= 2`` +* gest-api_ ``>= 0.1,<0.2`` + +We recommend installing in a virtual environment from ``uv``, ``conda`` or another source. + +Further recommendations for selected HPC systems are given in the +:ref:`HPC platform guides`. + +.. toctree:: + :hidden: + + advanced_installation_pip + advanced_installation_uv + advanced_installation_pixi + advanced_installation_conda + advanced_installation_spack + +Globus Compute +-------------- + +`Globus Compute`_ may be installed optionally to submit simulation function instances to remote Globus Compute endpoints. + +.. _Globus Compute: https://www.globus.org/compute +.. _Python: http://www.python.org +.. _NumPy: http://www.numpy.org +.. _psutil: https://pypi.org/project/psutil/ +.. _pydantic: https://docs.pydantic.dev/1.10/ +.. 
_gest-api: https://github.com/campa-consortium/gest-api
diff --git a/docs/advanced_installation/advanced_installation_conda.rst b/docs/advanced_installation/advanced_installation_conda.rst
new file mode 100644
index 0000000000..c34ce25b1a
--- /dev/null
+++ b/docs/advanced_installation/advanced_installation_conda.rst
@@ -0,0 +1,49 @@
+conda
+=====
+
+`Advanced Installation `__ \|\| `pip `__ \|\| `uv `__ \|\| `pixi `__ \|\| **conda** \|\| `Spack `__
+
+Install libEnsemble with Conda_ from the conda-forge channel::
+
+    conda config --add channels conda-forge
+    conda install -c conda-forge libensemble
+
+This package comes with some useful optional dependencies, including
+optimizers, and will install quickly as ready binary packages.
+
+**Installing mpi4py with Conda**
+
+If you wish to use ``mpi4py`` with libEnsemble (choosing MPI out of the three
+:doc:`communications options<../running_libE>`), you can use the
+following.
+
+.. note::
+    For clusters and HPC systems, always install ``mpi4py`` to use the
+    system MPI library (see the pip instructions).
+
+For a standalone build that comes with an MPI implementation, you can install
+libEnsemble using one of the following variants.
+
+To install libEnsemble with MPICH_::
+
+    conda install -c conda-forge libensemble=*=mpi_mpich*
+
+To install libEnsemble with `Open MPI`_::
+
+    conda install -c conda-forge libensemble=*=mpi_openmpi*
+
+The asterisks will pick up the latest version and build.
+
+.. note::
+    This syntax may not work without adjustments on macOS or any non-bash
+    shell. In these cases, try::
+
+        conda install -c conda-forge libensemble='*'=mpi_mpich'*'
+
+For a complete list of builds for libEnsemble on Conda::
+
+    conda search libensemble --channel conda-forge
+
+.. _Conda: https://docs.conda.io/en/latest/
+.. _MPICH: https://www.mpich.org/
+.. 
_Open MPI: https://www.open-mpi.org/
diff --git a/docs/advanced_installation/advanced_installation_pip.rst b/docs/advanced_installation/advanced_installation_pip.rst
new file mode 100644
index 0000000000..9416765b1c
--- /dev/null
+++ b/docs/advanced_installation/advanced_installation_pip.rst
@@ -0,0 +1,29 @@
+pip
+===
+
+`Advanced Installation `__ \|\| **pip** \|\| `uv `__ \|\| `pixi `__ \|\| `conda `__ \|\| `Spack `__
+
+To install the latest PyPI_ release::
+
+    pip install libensemble
+
+To pip install libEnsemble from the latest develop branch::
+
+    python -m pip install --upgrade git+https://github.com/Libensemble/libensemble.git@develop
+
+**Installing with mpi4py**
+
+If you wish to use ``mpi4py`` with libEnsemble (choosing MPI out of the three
+:doc:`communications options<../running_libE>`), then this should
+be installed to work with the existing MPI on your system. For example,
+the following line::
+
+    pip install mpi4py
+
+will use the ``mpicc`` compiler wrapper on your PATH to identify the MPI library.
+To specify a different compiler wrapper, set the ``MPICC`` environment variable.
+You also may wish to avoid existing binary builds; for example,::
+
+    MPICC=mpiicc pip install mpi4py --no-binary mpi4py
+
+.. _PyPI: https://pypi.org
diff --git a/docs/advanced_installation/advanced_installation_pixi.rst b/docs/advanced_installation/advanced_installation_pixi.rst
new file mode 100644
index 0000000000..8227fcbd87
--- /dev/null
+++ b/docs/advanced_installation/advanced_installation_pixi.rst
@@ -0,0 +1,20 @@
+pixi
+====
+
+`Advanced Installation `__ \|\| `pip `__ \|\| `uv `__ \|\| **pixi** \|\| `conda `__ \|\| `Spack `__
+
+Add to your pixi_ environment::
+
+    pixi add libensemble
+
+libEnsemble is also distributed with locked pixi environments for different versions of Python
+and various dependency sets, primarily for testing but also useful for guaranteed working environments.
+See a list with:: + + pixi workspace environment list + +and activate with:: + + pixi shell -e + +.. _pixi: https://pixi.prefix.dev/latest/ diff --git a/docs/advanced_installation/advanced_installation_spack.rst b/docs/advanced_installation/advanced_installation_spack.rst new file mode 100644 index 0000000000..3e9b1132e3 --- /dev/null +++ b/docs/advanced_installation/advanced_installation_spack.rst @@ -0,0 +1,77 @@ +Spack +===== + +`Advanced Installation `__ \|\| `pip `__ \|\| `uv `__ \|\| `pixi `__ \|\| `conda `__ \|\| **Spack** + +Install libEnsemble using the Spack_ distribution:: + + spack install py-libensemble + +The above command will install the latest release of libEnsemble with +the required dependencies only. Other optional +dependencies can be specified through variants. The following +line installs libEnsemble version 1.5.0 with some common variants +(e.g., using :doc:`APOSMM<../examples/gest_api/aposmm>`): + +.. code-block:: bash + + spack install py-libensemble @1.5.0 +mpi +scipy +mpmath +petsc4py +nlopt + +The list of variants can be found by running:: + + spack info py-libensemble + +On some platforms you may wish to run libEnsemble without ``mpi4py``, +using a serial PETSc build. This is often preferable if running on +the launch nodes of a three-tier system:: + + spack install py-libensemble +scipy +mpmath +petsc4py ^py-petsc4py~mpi ^petsc~mpi~hdf5~hypre~superlu-dist + +The installation will create modules for libEnsemble and the dependent +packages. These can be loaded by running:: + + spack load -r py-libensemble + +Any Python packages will be added to the PYTHONPATH when the modules are loaded. If you do not have +modules on your system you may need to install ``lmod`` (also available in Spack):: + + spack install lmod + . $(spack location -i lmod)/lmod/lmod/init/bash + spack load lmod + +Alternatively, Spack could be used to build the serial ``petsc4py``, and Conda could use this by loading +the ``py-petsc4py`` module thus created. 
+
+**Hint**: When combining Spack and Conda, you can access your Conda Python and packages in your
+``~/.spack/packages.yaml`` while your Conda environment is activated, using ``CONDA_PREFIX``.
+For example, if you have an activated Conda environment with Python 3.11 and SciPy installed:
+
+.. code-block:: yaml
+
+    packages:
+      python:
+        externals:
+        - spec: "python"
+          prefix: $CONDA_PREFIX
+          buildable: False
+      py-numpy:
+        externals:
+        - spec: "py-numpy"
+          prefix: $CONDA_PREFIX/lib/python3.11/site-packages/numpy
+          buildable: False
+      py-scipy:
+        externals:
+        - spec: "py-scipy"
+          prefix: $CONDA_PREFIX/lib/python3.11/site-packages/scipy
+          buildable: True
+
+For more information on Spack builds and any particular considerations
+for specific systems, see the spack_libe_ repository. In particular, this
+includes some example ``packages.yaml`` files (which go in ``~/.spack/``).
+These files are used to specify dependencies that Spack must obtain from
+the given system (rather than building from scratch). This may include
+``Python`` and the packages distributed with it (e.g., ``numpy``), and will
+often include the system MPI library.
+
+.. _Spack: https://spack.readthedocs.io/en/latest
+.. _spack_libe: https://github.com/Libensemble/spack_libe
diff --git a/docs/advanced_installation/advanced_installation_uv.rst b/docs/advanced_installation/advanced_installation_uv.rst
new file mode 100644
index 0000000000..b10b64bfa5
--- /dev/null
+++ b/docs/advanced_installation/advanced_installation_uv.rst
@@ -0,0 +1,11 @@
+uv
+==
+
+`Advanced Installation `__ \|\| `pip `__ \|\| **uv** \|\| `pixi `__ \|\| `conda `__ \|\| `Spack `__
+
+To install the latest PyPI_ release via uv_::
+
+    uv pip install libensemble
+
+.. _PyPI: https://pypi.org
+.. 
_uv: https://docs.astral.sh/uv/ diff --git a/docs/data_structures/data_structures.rst b/docs/data_structures/data_structures.rst index 423010feb4..a5a71862e8 100644 --- a/docs/data_structures/data_structures.rst +++ b/docs/data_structures/data_structures.rst @@ -8,7 +8,7 @@ See :ref:`here` for instruction on constructing a complete workflow :maxdepth: 2 :caption: libEnsemble Specifications: - libE_specs + libE_specs/libE_specs gen_specs sim_specs exit_criteria diff --git a/docs/data_structures/gen_specs.rst b/docs/data_structures/gen_specs.rst index 4552cf8863..e95950731e 100644 --- a/docs/data_structures/gen_specs.rst +++ b/docs/data_structures/gen_specs.rst @@ -5,47 +5,47 @@ Generator Specs Used to specify the generator, its inputs and outputs, and user data. -.. tab-set:: - - .. tab-item:: Standardized (gest-api) - - .. code-block:: python - :linenos: - - from libensemble import GenSpecs - from libensemble.gen_classes import UniformSample - from gest_api.vocs import VOCS - - vocs = VOCS( - variables={"x": [-3.0, 3.0]}, - objectives={"y": "MINIMIZE"}, - ) - - gen_specs = GenSpecs( - generator=UniformSample(vocs), - vocs=vocs, - ) - ... - - .. tab-item:: Classic (gen_f) - - .. code-block:: python - :linenos: - - import numpy as np - from libensemble import GenSpecs - from generator import gen_random_sample - - gen_specs = GenSpecs( - gen_f=gen_random_sample, - outputs=[("x", float, (1,))], - user={ - "lower": np.array([-3]), - "upper": np.array([3]), - "gen_batch_size": 5, - }, - ) - ... +Standardized (gest-api) +----------------------- + +.. code-block:: python + :linenos: + + from libensemble import GenSpecs + from libensemble.gen_classes import UniformSample + from gest_api.vocs import VOCS + + vocs = VOCS( + variables={"x": [-3.0, 3.0]}, + objectives={"y": "MINIMIZE"}, + ) + + gen_specs = GenSpecs( + generator=UniformSample(vocs), + vocs=vocs, + ) + ... + +Classic (gen_f) +--------------- + +.. 
code-block:: python + :linenos: + + import numpy as np + from libensemble import GenSpecs + from generator import gen_random_sample + + gen_specs = GenSpecs( + gen_f=gen_random_sample, + outputs=[("x", float, (1,))], + user={ + "lower": np.array([-3]), + "upper": np.array([3]), + "gen_batch_size": 5, + }, + ) + ... .. autopydantic_model:: libensemble.specs.GenSpecs :model-show-json: False diff --git a/docs/data_structures/libE_specs.rst b/docs/data_structures/libE_specs.rst deleted file mode 100644 index 9e6009b567..0000000000 --- a/docs/data_structures/libE_specs.rst +++ /dev/null @@ -1,356 +0,0 @@ -.. _datastruct-libe-specs: - -LibE Specs -========== - -libEnsemble is primarily customized by setting options within a ``LibeSpecs`` instance. - -.. code-block:: python - - from libensemble.specs import LibeSpecs - - specs = LibeSpecs(save_every_k_gens=100, sim_dirs_make=True, nworkers=4) - -.. dropdown:: Settings by Category - :open: - - .. tab-set:: - - .. tab-item:: General - - **comms** [str] = ``"mpi"``: - Manager/Worker communications mode: ``'mpi'``, ``'local'``, ``'threads'``, or ``'tcp'``. - If ``nworkers`` is specified, then ``local`` comms will be used unless a - parallel MPI environment is detected. - - **nworkers** [int]: - Number of worker processes in ``"local"``, ``"threads"``, or ``"tcp"``. - - **gen_on_worker** [bool] = False - Instructs Worker process to run generator instead of Manager. - - **mpi_comm** [MPI communicator] = ``MPI.COMM_WORLD``: - libEnsemble MPI communicator. - - **dry_run** [bool] = ``False``: - Whether libEnsemble should immediately exit after validating all inputs. - - **abort_on_exception** [bool] = ``True``: - In MPI mode, whether to call ``MPI_ABORT`` on an exception. - If ``False``, an exception will be raised by the manager. - - **worker_timeout** [int] = ``1``: - On libEnsemble shutdown, number of seconds after which workers considered timed out, - then terminated. 
- - **kill_canceled_sims** [bool] = ``False``: - Try to kill sims with ``cancel_requested`` set to ``True``. - If ``False``, the manager avoids this moderate overhead. - - **disable_log_files** [bool] = ``False``: - Disable ``ensemble.log`` and ``libE_stats.txt`` log files. - - **gen_workers** [list of ints]: - List of workers that should run only generators. All other workers will run - only simulator functions. - - .. tab-item:: Directories - - .. tab-set:: - - .. tab-item:: General - - **use_workflow_dir** [bool] = ``False``: - Whether to place *all* log files, dumped arrays, and default ensemble-directories in a - separate ``workflow`` directory. Each run is suffixed with a hash. - If copying back an ensemble directory from another location, the copy is placed here. - - **workflow_dir_path** [str]: - Optional path to the workflow directory. - - **ensemble_dir_path** [str] = ``"./ensemble"``: - Path to main ensemble directory. Can serve - as single working directory for workers, or contain calculation directories. - - .. code-block:: python - - LibeSpecs.ensemble_dir_path = "/scratch/my_ensemble" - - **ensemble_copy_back** [bool] = ``False``: - Whether to copy back contents of ``ensemble_dir_path`` to launch - location. Useful if ``ensemble_dir_path`` is located on node-local storage. - - **reuse_output_dir** [bool] = ``False``: - Whether to allow overwrites and access to previous ensemble and workflow directories in subsequent runs. - ``False`` by default to protect results. - - **calc_dir_id_width** [int] = ``4``: - The width of the numerical ID component of a calculation directory name. Leading - zeros are padded to the sim/gen ID. - - **use_worker_dirs** [bool] = ``False``: - Whether to organize calculation directories under worker-specific directories: - - .. tab-set:: - - .. tab-item:: False - - .. code-block:: - - - /ensemble_dir - - /sim0000 - - /gen0001 - - /sim0001 - ... - - .. tab-item:: True - - .. 
code-block:: - - - /ensemble_dir - - /worker1 - - /sim0000 - - /gen0001 - - /sim0004 - ... - - /worker2 - ... - - .. tab-item:: Sims - - **sim_dirs_make** [bool] = ``False``: - Whether to make calculation directories for each simulation function call. - - **sim_dir_copy_files** [list]: - Paths to files or directories to copy into each sim directory, or ensemble directory. - List of strings or ``pathlib.Path`` objects. - - **sim_dir_symlink_files** [list]: - Paths to files or directories to symlink into each sim directory, or ensemble directory. - List of strings or ``pathlib.Path`` objects. - - **sim_input_dir** [str]: - Copy this directory's contents into the working directory upon calling the simulation function. - Forms the base of a simulation directory. - - .. tab-item:: Gens - - **gen_dirs_make** [bool] = ``False``: - Whether to make generator-specific calculation directories for each generator function call. - *Each persistent generator creates a single directory*. - - **gen_dir_copy_files** [list]: - Paths to copy into the working directory upon calling the generator function. - List of strings or ``pathlib.Path`` objects - - **gen_dir_symlink_files** [list]: - Paths to files or directories to symlink into each gen directory. - List of strings or ``pathlib.Path`` objects - - **gen_input_dir** [str]: - Copy this directory's contents into the working directory upon calling the generator function. - Forms the base of a generator directory. - - .. tab-item:: Profiling - - **profile** [bool] = ``False``: - Profile manager and worker logic using ``cProfile``. - - **safe_mode** [bool] = ``False``: - Prevents user functions from overwriting protected History fields, but requires moderate overhead. - - **stats_fmt** [dict]: - A dictionary of options for formatting ``"libE_stats.txt"``. - See "Formatting Options for libE_stats.txt". - - **live_data** [LiveData] = None: - Add a live data capture object (e.g., for plotting). - - .. 
tab-item:: TCP - - **workers** [list]: - TCP Only: A list of worker hostnames. - - **ip** [str]: - TCP Only: IP address for Manager's system. - - **port** [int]: - TCP Only: Port number for Manager's system. - - **authkey** [str]: - TCP Only: Authkey for Manager's system. - - **workerID** [int]: - TCP Only: Worker ID number assigned to the new process. - - **worker_cmd** [list]: - TCP Only: Split string corresponding to worker/client Python process invocation. Contains - a local Python path, calling script, and manager/server format-fields for ``manager_ip``, - ``manager_port``, ``authkey``, and ``workerID``. ``nworkers`` is specified normally. - - .. tab-item:: History - - **save_every_k_sims** [int]: - Save history array to file after every k simulated points. - - **save_every_k_gens** [int]: - Save history array to file after every k generated points. - - **save_H_and_persis_on_abort** [bool] = ``True``: - Save states of ``H`` and ``persis_info`` to file on aborting after an exception. - - **save_H_on_completion** [bool] = ``False``: - Save state of ``H`` to file upon completing a workflow. Also enabled when either ``save_every_k_sims`` - or ``save_every_k_gens`` is set. - - **save_H_with_date** [bool] = ``False``: - ``H`` filename contains date and timestamp. - - **H_file_prefix** [str] = ``"libE_history"``: - Prefix for ``H`` filename. - - **final_gen_send** [bool] = ``False``: - Send final simulation results to persistent generators before shutdown. - The results will be sent along with the ``PERSIS_STOP`` tag. - - .. tab-item:: Resources - - **disable_resource_manager** [bool] = ``False``: - Disable the built-in resource manager, including automatic resource detection - and/or assignment of resources to workers. ``"resource_info"`` will be ignored. - - **platform** [str]: - Name of a :ref:`known platform`, e.g., ``LibeSpecs.platform = "perlmutter_g"`` - Alternatively set the ``LIBE_PLATFORM`` environment variable. 
- - **platform_specs** [Platform|dict]: - A ``Platform`` object (or dictionary) specifying :ref:`settings for a platform.`. - Fields not provided will be auto-detected. Can be set to a :ref:`known platform object`. - - **num_resource_sets** [int]: - The total number of resource sets into which resources will be divided. - By default resources will be divided by workers (excluding - ``zero_resource_workers``). - - **gen_num_procs** [int] = ``0``: - The default number of processors (MPI ranks) required by generators. Unless - overridden by equivalent ``persis_info`` settings, generators will be allocated - this many processors for applications launched via the MPIExecutor. - - **gen_num_gpus** [int] = ``0``: - The default number of GPUs required by generators. Unless overridden by - the equivalent ``persis_info`` settings, generators will be allocated this - many GPUs. - - **gpus_per_group** [int]: - Number of GPUs for each group in the scheduler. This can be used when - running on nodes with different numbers of GPUs. In effect a - block of this many GPUs will be treated as a virtual node. - By default the GPUs on each node are treated as a group. - - **use_tiles_as_gpus** [bool] = ``False``: - If ``True`` then treat a GPU tile as one GPU when GPU tiles - are provided in ``platform_specs`` or auto-detected. - - **enforce_worker_core_bounds** [bool] = ``False``: - Permit submission of tasks with a - higher processor count than the CPUs available to the worker. - Larger node counts are not allowed. Ignored when - ``disable_resource_manager`` is set. - - **dedicated_mode** [bool] = ``False``: - Instructs libEnsemble’s MPI executor not to run applications on nodes where - libEnsemble processes (manager and workers) are running. - - **resource_info** [dict]: - Provide resource information that will override automatically detected resources. - The allowable fields are given below in "Overriding Resource Auto-Detection" - Ignored if ``disable_resource_manager`` is set. 
- - **scheduler_opts** [dict]: - Options for the resource scheduler. - See "Scheduler Options" for more options. - -.. dropdown:: Complete Class API - - .. autopydantic_model:: libensemble.specs.LibeSpecs - :model-show-json: False - :model-show-config-member: False - :model-show-config-summary: False - :model-show-validator-members: False - :model-show-validator-summary: False - :field-list-validators: False - :model-show-field-summary: False - -Scheduler Options ------------------ - -See options for :ref:`built-in scheduler`. - -.. _resource_info: - -Overriding Resource Auto-Detection ----------------------------------- - -Note that ``"cores_on_node"`` and ``"gpus_on_node"`` are supported for backward -compatibility, but use of :ref:`Platform specification` is -recommended for these settings. - -.. dropdown:: Resource Info Fields - - The allowable ``libE_specs["resource_info"]`` fields are:: - - "cores_on_node" [tuple (int, int)]: - Tuple (physical cores, logical cores) on nodes. - - "gpus_on_node" [int]: - Number of GPUs on each node. - - "node_file" [str]: - Name of file containing a node-list. Default is "node_list". - - "nodelist_env_slurm" [str]: - The environment variable giving a node list in Slurm format - (Default: Uses ``SLURM_NODELIST``). Queried only if - a ``node_list`` file is not provided and the resource manager is - enabled. - - "nodelist_env_cobalt" [str]: - The environment variable giving a node list in Cobalt format - (Default: Uses ``COBALT_PARTNAME``) Queried only - if a ``node_list`` file is not provided and the resource manager - is enabled. - - "nodelist_env_lsf" [str]: - The environment variable giving a node list in LSF format - (Default: Uses ``LSB_HOSTS``) Queried only - if a ``node_list`` file is not provided and the resource manager - is enabled. 
- - "nodelist_env_lsf_shortform" [str]: - The environment variable giving a node list in LSF short-form - format (Default: Uses ``LSB_MCPU_HOSTS``) Queried only - if a ``node_list`` file is not provided and the resource manager is - enabled. - - For example:: - - customizer = {cores_on_node": (16, 64), - "node_file": "libe_nodes"} - - libE_specs["resource_info"] = customizer - -Formatting Options for libE_stats File --------------------------------------- - -The allowable ``libE_specs["stats_fmt"]`` fields are:: - - "task_timing" [bool] = ``False``: - Outputs elapsed time for each task launched by the executor. - - "task_datetime" [bool] = ``False``: - Outputs the elapsed time and start and end time for each task launched by the executor. - Can be used with the ``"plot_libe_tasks_util_v_time.py"`` to give task utilization plots. - - "show_resource_sets" [bool] = ``False``: - Shows the resource set IDs assigned to each worker for each call of the user function. diff --git a/docs/data_structures/libE_specs/libE_specs.rst b/docs/data_structures/libE_specs/libE_specs.rst new file mode 100644 index 0000000000..a219109851 --- /dev/null +++ b/docs/data_structures/libE_specs/libE_specs.rst @@ -0,0 +1,108 @@ +.. _datastruct-libe-specs: + +**Introduction** \|\| `General `__ \|\| `Directories `__ \|\| `Profiling `__ \|\| `TCP `__ \|\| `History `__ \|\| `Resources `__ + +LibE Specs +========== + +libEnsemble is primarily customized by setting options within a ``LibeSpecs`` instance. + +.. code-block:: python + + from libensemble.specs import LibeSpecs + + specs = LibeSpecs(save_every_k_gens=100, sim_dirs_make=True, nworkers=4) + +.. toctree:: + :hidden: + + libE_specs_general + libE_specs_directories + libE_specs_profiling + libE_specs_tcp + libE_specs_history + libE_specs_resources + +.. dropdown:: Complete Class API + + .. 
autopydantic_model:: libensemble.specs.LibeSpecs + :model-show-json: False + :model-show-config-member: False + :model-show-config-summary: False + :model-show-validator-members: False + :model-show-validator-summary: False + :field-list-validators: False + :model-show-field-summary: False + +Scheduler Options +----------------- + +See options for :ref:`built-in scheduler`. + +.. _resource_info: + +Overriding Resource Auto-Detection +---------------------------------- + +Note that ``"cores_on_node"`` and ``"gpus_on_node"`` are supported for backward +compatibility, but use of :ref:`Platform specification` is +recommended for these settings. + +.. dropdown:: Resource Info Fields + + The allowable ``libE_specs["resource_info"]`` fields are:: + + "cores_on_node" [tuple (int, int)]: + Tuple (physical cores, logical cores) on nodes. + + "gpus_on_node" [int]: + Number of GPUs on each node. + + "node_file" [str]: + Name of file containing a node-list. Default is "node_list". + + "nodelist_env_slurm" [str]: + The environment variable giving a node list in Slurm format + (Default: Uses ``SLURM_NODELIST``). Queried only if + a ``node_list`` file is not provided and the resource manager is + enabled. + + "nodelist_env_cobalt" [str]: + The environment variable giving a node list in Cobalt format + (Default: Uses ``COBALT_PARTNAME``) Queried only + if a ``node_list`` file is not provided and the resource manager + is enabled. + + "nodelist_env_lsf" [str]: + The environment variable giving a node list in LSF format + (Default: Uses ``LSB_HOSTS``) Queried only + if a ``node_list`` file is not provided and the resource manager + is enabled. + + "nodelist_env_lsf_shortform" [str]: + The environment variable giving a node list in LSF short-form + format (Default: Uses ``LSB_MCPU_HOSTS``) Queried only + if a ``node_list`` file is not provided and the resource manager is + enabled. 
+
+   For example::
+
+       customizer = {"cores_on_node": (16, 64),
+                     "node_file": "libe_nodes"}
+
+       libE_specs["resource_info"] = customizer
+
+Formatting Options for libE_stats File
+--------------------------------------
+
+The allowable ``libE_specs["stats_fmt"]`` fields are::
+
+    "task_timing" [bool] = ``False``:
+        Outputs elapsed time for each task launched by the executor.
+
+    "task_datetime" [bool] = ``False``:
+        Outputs the elapsed time and start and end time for each task launched by the executor.
+        Can be used with the ``plot_libe_tasks_util_v_time.py`` script to give task utilization plots.
+
+    "show_resource_sets" [bool] = ``False``:
+        Shows the resource set IDs assigned to each worker for each call of the user function.
diff --git a/docs/data_structures/libE_specs/libE_specs_directories.rst b/docs/data_structures/libE_specs/libE_specs_directories.rst
new file mode 100644
index 0000000000..76c848da05
--- /dev/null
+++ b/docs/data_structures/libE_specs/libE_specs_directories.rst
@@ -0,0 +1,99 @@
+Directories
+===========
+
+`Introduction `__ \|\| `General `__ \|\| **Directories** \|\| `Profiling `__ \|\| `TCP `__ \|\| `History `__ \|\| `Resources `__
+
+.. tab-set::
+
+    .. tab-item:: General
+
+        **use_workflow_dir** [bool] = ``False``:
+            Whether to place *all* log files, dumped arrays, and default ensemble-directories in a
+            separate ``workflow`` directory. Each run is suffixed with a hash.
+            If copying back an ensemble directory from another location, the copy is placed here.
+
+        **workflow_dir_path** [str]:
+            Optional path to the workflow directory.
+
+        **ensemble_dir_path** [str] = ``"./ensemble"``:
+            Path to the main ensemble directory. Can serve
+            as a single working directory for workers, or contain calculation directories.
+
+            .. code-block:: python
+
+                LibeSpecs.ensemble_dir_path = "/scratch/my_ensemble"
+
+        **ensemble_copy_back** [bool] = ``False``:
+            Whether to copy back contents of ``ensemble_dir_path`` to launch
Useful if ``ensemble_dir_path`` is located on node-local storage. + + **reuse_output_dir** [bool] = ``False``: + Whether to allow overwrites and access to previous ensemble and workflow directories in subsequent runs. + ``False`` by default to protect results. + + **calc_dir_id_width** [int] = ``4``: + The width of the numerical ID component of a calculation directory name. Leading + zeros are padded to the sim/gen ID. + + **use_worker_dirs** [bool] = ``False``: + Whether to organize calculation directories under worker-specific directories: + + .. tab-set:: + + .. tab-item:: False + + .. code-block:: + + - /ensemble_dir + - /sim0000 + - /gen0001 + - /sim0001 + ... + + .. tab-item:: True + + .. code-block:: + + - /ensemble_dir + - /worker1 + - /sim0000 + - /gen0001 + - /sim0004 + ... + - /worker2 + ... + + .. tab-item:: Sims + + **sim_dirs_make** [bool] = ``False``: + Whether to make calculation directories for each simulation function call. + + **sim_dir_copy_files** [list]: + Paths to files or directories to copy into each sim directory, or ensemble directory. + List of strings or ``pathlib.Path`` objects. + + **sim_dir_symlink_files** [list]: + Paths to files or directories to symlink into each sim directory, or ensemble directory. + List of strings or ``pathlib.Path`` objects. + + **sim_input_dir** [str]: + Copy this directory's contents into the working directory upon calling the simulation function. + Forms the base of a simulation directory. + + .. tab-item:: Gens + + **gen_dirs_make** [bool] = ``False``: + Whether to make generator-specific calculation directories for each generator function call. + *Each persistent generator creates a single directory*. + + **gen_dir_copy_files** [list]: + Paths to copy into the working directory upon calling the generator function. + List of strings or ``pathlib.Path`` objects + + **gen_dir_symlink_files** [list]: + Paths to files or directories to symlink into each gen directory. 
List of strings or ``pathlib.Path`` objects.
+
+        **gen_input_dir** [str]:
+            Copy this directory's contents into the working directory upon calling the generator function.
+            Forms the base of a generator directory.
diff --git a/docs/data_structures/libE_specs/libE_specs_general.rst b/docs/data_structures/libE_specs/libE_specs_general.rst
new file mode 100644
index 0000000000..f7f07f75fa
--- /dev/null
+++ b/docs/data_structures/libE_specs/libE_specs_general.rst
@@ -0,0 +1,40 @@
+General
+=======
+
+`Introduction `__ \|\| **General** \|\| `Directories `__ \|\| `Profiling `__ \|\| `TCP `__ \|\| `History `__ \|\| `Resources `__
+
+**comms** [str] = ``"mpi"``:
+    Manager/Worker communications mode: ``'mpi'``, ``'local'``, ``'threads'``, or ``'tcp'``.
+    If ``nworkers`` is specified, then ``local`` comms will be used unless a
+    parallel MPI environment is detected.
+
+**nworkers** [int]:
+    Number of worker processes in ``"local"``, ``"threads"``, or ``"tcp"``.
+
+**gen_on_worker** [bool] = ``False``:
+    Instructs a worker process to run the generator instead of the manager.
+
+**mpi_comm** [MPI communicator] = ``MPI.COMM_WORLD``:
+    libEnsemble MPI communicator.
+
+**dry_run** [bool] = ``False``:
+    Whether libEnsemble should immediately exit after validating all inputs.
+
+**abort_on_exception** [bool] = ``True``:
+    In MPI mode, whether to call ``MPI_ABORT`` on an exception.
+    If ``False``, an exception will be raised by the manager.
+
+**worker_timeout** [int] = ``1``:
+    On libEnsemble shutdown, the number of seconds after which workers are
+    considered timed out and then terminated.
+
+**kill_canceled_sims** [bool] = ``False``:
+    Try to kill sims with ``cancel_requested`` set to ``True``.
+    If ``False``, the manager avoids this moderate overhead.
+
+**disable_log_files** [bool] = ``False``:
+    Disable the ``ensemble.log`` and ``libE_stats.txt`` log files.
+
+**gen_workers** [list of ints]:
+    List of workers that should run only generators. All other workers will run
+    only simulator functions.
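The general options above map directly onto the dict-style ``libE_specs`` interface used elsewhere in these docs. A minimal sketch follows; the particular values chosen here are illustrative, not defaults:

```python
# Illustrative sketch: a few of the general options collected in a plain
# dict, as accepted by the dict-style libE_specs interface.
libE_specs = {
    "comms": "local",            # manager/worker communication mode
    "nworkers": 4,               # worker processes for "local" comms
    "kill_canceled_sims": True,  # try to kill sims flagged cancel_requested
    "disable_log_files": True,   # skip ensemble.log and libE_stats.txt
}
```

Equivalent settings can instead be passed as keyword arguments to ``LibeSpecs``, as shown in the introduction.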
diff --git a/docs/data_structures/libE_specs/libE_specs_history.rst b/docs/data_structures/libE_specs/libE_specs_history.rst
new file mode 100644
index 0000000000..55e9089696
--- /dev/null
+++ b/docs/data_structures/libE_specs/libE_specs_history.rst
@@ -0,0 +1,27 @@
+History
+=======
+
+`Introduction `__ \|\| `General `__ \|\| `Directories `__ \|\| `Profiling `__ \|\| `TCP `__ \|\| **History** \|\| `Resources `__
+
+**save_every_k_sims** [int]:
+    Save the history array to file after every k simulated points.
+
+**save_every_k_gens** [int]:
+    Save the history array to file after every k generated points.
+
+**save_H_and_persis_on_abort** [bool] = ``True``:
+    Save the states of ``H`` and ``persis_info`` to file when aborting after an exception.
+
+**save_H_on_completion** [bool] = ``False``:
+    Save the state of ``H`` to file upon completing a workflow. Also enabled when either ``save_every_k_sims``
+    or ``save_every_k_gens`` is set.
+
+**save_H_with_date** [bool] = ``False``:
+    Whether the ``H`` filename contains a date and timestamp.
+
+**H_file_prefix** [str] = ``"libE_history"``:
+    Prefix for the ``H`` filename.
+
+**final_gen_send** [bool] = ``False``:
+    Send final simulation results to persistent generators before shutdown.
+    The results will be sent along with the ``PERSIS_STOP`` tag.
diff --git a/docs/data_structures/libE_specs/libE_specs_profiling.rst b/docs/data_structures/libE_specs/libE_specs_profiling.rst
new file mode 100644
index 0000000000..6a855c8ce6
--- /dev/null
+++ b/docs/data_structures/libE_specs/libE_specs_profiling.rst
@@ -0,0 +1,17 @@
+Profiling
+=========
+
+`Introduction `__ \|\| `General `__ \|\| `Directories `__ \|\| **Profiling** \|\| `TCP `__ \|\| `History `__ \|\| `Resources `__
+
+**profile** [bool] = ``False``:
+    Profile manager and worker logic using ``cProfile``.
+
+**safe_mode** [bool] = ``False``:
+    Prevents user functions from overwriting protected History fields, but incurs moderate overhead.
+
+**stats_fmt** [dict]:
+    A dictionary of options for formatting ``"libE_stats.txt"``.
+    See "Formatting Options for libE_stats.txt".
+
+**live_data** [LiveData] = None:
+    Add a live data capture object (e.g., for plotting).
diff --git a/docs/data_structures/libE_specs/libE_specs_resources.rst b/docs/data_structures/libE_specs/libE_specs_resources.rst
new file mode 100644
index 0000000000..6b6118d663
--- /dev/null
+++ b/docs/data_structures/libE_specs/libE_specs_resources.rst
@@ -0,0 +1,60 @@
+Resources
+=========
+
+`Introduction `__ \|\| `General `__ \|\| `Directories `__ \|\| `Profiling `__ \|\| `TCP `__ \|\| `History `__ \|\| **Resources**
+
+**disable_resource_manager** [bool] = ``False``:
+    Disable the built-in resource manager, including automatic resource detection
+    and/or assignment of resources to workers. ``"resource_info"`` will be ignored.
+
+**platform** [str]:
+    Name of a :ref:`known platform`, e.g., ``LibeSpecs.platform = "perlmutter_g"``.
+    Alternatively, set the ``LIBE_PLATFORM`` environment variable.
+
+**platform_specs** [Platform|dict]:
+    A ``Platform`` object (or dictionary) specifying :ref:`settings for a platform`.
+    Fields not provided will be auto-detected. Can be set to a :ref:`known platform object`.
+
+**num_resource_sets** [int]:
+    The total number of resource sets into which resources will be divided.
+    By default resources will be divided by workers (excluding
+    ``zero_resource_workers``).
+
+**gen_num_procs** [int] = ``0``:
+    The default number of processors (MPI ranks) required by generators. Unless
+    overridden by equivalent ``persis_info`` settings, generators will be allocated
+    this many processors for applications launched via the MPIExecutor.
+
+**gen_num_gpus** [int] = ``0``:
+    The default number of GPUs required by generators. Unless overridden by
+    the equivalent ``persis_info`` settings, generators will be allocated this
+    many GPUs.
+
+**gpus_per_group** [int]:
+    Number of GPUs for each group in the scheduler.
This can be used when + running on nodes with different numbers of GPUs. In effect a + block of this many GPUs will be treated as a virtual node. + By default the GPUs on each node are treated as a group. + +**use_tiles_as_gpus** [bool] = ``False``: + If ``True`` then treat a GPU tile as one GPU when GPU tiles + are provided in ``platform_specs`` or auto-detected. + +**enforce_worker_core_bounds** [bool] = ``False``: + Permit submission of tasks with a + higher processor count than the CPUs available to the worker. + Larger node counts are not allowed. Ignored when + ``disable_resource_manager`` is set. + +**dedicated_mode** [bool] = ``False``: + Instructs libEnsemble’s MPI executor not to run applications on nodes where + libEnsemble processes (manager and workers) are running. + +**resource_info** [dict]: + Provide resource information that will override automatically detected resources. + The allowable fields are given below in "Overriding Resource Auto-Detection" + Ignored if ``disable_resource_manager`` is set. + +**scheduler_opts** [dict]: + Options for the resource scheduler. + See "Scheduler Options" for more options. diff --git a/docs/data_structures/libE_specs/libE_specs_tcp.rst b/docs/data_structures/libE_specs/libE_specs_tcp.rst new file mode 100644 index 0000000000..d0d2a05655 --- /dev/null +++ b/docs/data_structures/libE_specs/libE_specs_tcp.rst @@ -0,0 +1,24 @@ +TCP +=== + +`Introduction `__ \|\| `General `__ \|\| `Directories `__ \|\| `Profiling `__ \|\| **TCP** \|\| `History `__ \|\| `Resources `__ + +**workers** [list]: + TCP Only: A list of worker hostnames. + +**ip** [str]: + TCP Only: IP address for Manager's system. + +**port** [int]: + TCP Only: Port number for Manager's system. + +**authkey** [str]: + TCP Only: Authkey for Manager's system. + +**workerID** [int]: + TCP Only: Worker ID number assigned to the new process. + +**worker_cmd** [list]: + TCP Only: Split string corresponding to worker/client Python process invocation. 
Contains + a local Python path, calling script, and manager/server format-fields for ``manager_ip``, + ``manager_port``, ``authkey``, and ``workerID``. ``nworkers`` is specified normally. diff --git a/docs/data_structures/persis_info.rst b/docs/data_structures/persis_info.rst index 8d48474cb5..6e620f886c 100644 --- a/docs/data_structures/persis_info.rst +++ b/docs/data_structures/persis_info.rst @@ -21,42 +21,44 @@ between ensemble invocations, or in the allocation function. Examples: -.. tab-set:: - - .. tab-item:: RNG or reusable structures - - .. literalinclude:: ../../libensemble/gen_funcs/sampling.py - :linenos: - :start-at: def uniform_random_sample(_, persis_info, gen_specs, libE_info): - :end-before: def uniform_random_sample_with_variable_resources(_, persis_info, gen_specs, libE_info): - :emphasize-lines: 10 - :caption: libensemble/libensemble/gen_funcs/sampling.py - - .. tab-item:: Incrementing indexes or process counts - - .. literalinclude:: ../../libensemble/alloc_funcs/fast_alloc.py - :linenos: - :start-at: for wid in support.avail_worker_ids(gen_workers=False): - :end-before: # Give gen work if possible - :caption: libensemble/alloc_funcs/fast_alloc.py - - .. tab-item:: Tracking running generators - - .. literalinclude:: ../../libensemble/alloc_funcs/start_only_persistent.py - :linenos: - :start-at: avail_workers = support.avail_worker_ids(persistent=False, gen_workers=True) - :end-before: return Work, persis_info, 0 - :emphasize-lines: 18 - :caption: libensemble/alloc_funcs/start_only_persistent.py - - .. tab-item:: Allocation function triggers shutdown - - .. literalinclude:: ../../libensemble/alloc_funcs/start_only_persistent.py - :linenos: - :start-at: if gen_count < persis_info.get("num_gens_started", 0): - :end-before: # Give evaluated results back to a running persistent gen - :emphasize-lines: 1 - :caption: libensemble/alloc_funcs/start_only_persistent.py +RNG or reusable structures +-------------------------- + +.. 
literalinclude:: ../../libensemble/gen_funcs/sampling.py + :linenos: + :start-at: def uniform_random_sample(_, persis_info, gen_specs, libE_info): + :end-before: def uniform_random_sample_with_variable_resources(_, persis_info, gen_specs, libE_info): + :emphasize-lines: 10 + :caption: libensemble/libensemble/gen_funcs/sampling.py + +Incrementing indexes or process counts +-------------------------------------- + +.. literalinclude:: ../../libensemble/alloc_funcs/fast_alloc.py + :linenos: + :start-at: for wid in support.avail_worker_ids(gen_workers=False): + :end-before: # Give gen work if possible + :caption: libensemble/alloc_funcs/fast_alloc.py + +Tracking running generators +--------------------------- + +.. literalinclude:: ../../libensemble/alloc_funcs/start_only_persistent.py + :linenos: + :start-at: avail_workers = support.avail_worker_ids(persistent=False, gen_workers=True) + :end-before: return Work, persis_info, 0 + :emphasize-lines: 18 + :caption: libensemble/alloc_funcs/start_only_persistent.py + +Allocation function triggers shutdown +------------------------------------- + +.. literalinclude:: ../../libensemble/alloc_funcs/start_only_persistent.py + :linenos: + :start-at: if gen_count < persis_info.get("num_gens_started", 0): + :end-before: # Give evaluated results back to a running persistent gen + :emphasize-lines: 1 + :caption: libensemble/alloc_funcs/start_only_persistent.py .. - Random number generators or other structures for use on consecutive calls .. - Incrementing array row indexes or process counts diff --git a/docs/data_structures/platform_specs.rst b/docs/data_structures/platform_specs.rst index 35198535f1..bfc4104059 100644 --- a/docs/data_structures/platform_specs.rst +++ b/docs/data_structures/platform_specs.rst @@ -15,37 +15,37 @@ A ``Platform`` object or dictionary specifying settings for a platform. To define a platform (in calling script): -.. tab-set:: +Platform Object +^^^^^^^^^^^^^^^ - .. tab-item:: Platform Object - - .. 
code-block:: python +.. code-block:: python - from libensemble.resources.platforms import Platform + from libensemble.resources.platforms import Platform - libE_specs["platform_specs"] = Platform( - mpi_runner="srun", - cores_per_node=64, - logical_cores_per_node=128, - gpus_per_node=8, - gpu_setting_type="runner_default", - gpu_env_fallback="ROCR_VISIBLE_DEVICES", - scheduler_match_slots=False, - ) + libE_specs["platform_specs"] = Platform( + mpi_runner="srun", + cores_per_node=64, + logical_cores_per_node=128, + gpus_per_node=8, + gpu_setting_type="runner_default", + gpu_env_fallback="ROCR_VISIBLE_DEVICES", + scheduler_match_slots=False, + ) - .. tab-item:: Dictionary +Dictionary +^^^^^^^^^^ - .. code-block:: python +.. code-block:: python - libE_specs["platform_specs"] = { - "mpi_runner": "srun", - "cores_per_node": 64, - "logical_cores_per_node": 128, - "gpus_per_node": 8, - "gpu_setting_type": "runner_default", - "gpu_env_fallback": "ROCR_VISIBLE_DEVICES", - "scheduler_match_slots": False, - } + libE_specs["platform_specs"] = { + "mpi_runner": "srun", + "cores_per_node": 64, + "logical_cores_per_node": 128, + "gpus_per_node": 8, + "gpu_setting_type": "runner_default", + "gpu_env_fallback": "ROCR_VISIBLE_DEVICES", + "scheduler_match_slots": False, + } The list of platform fields is given below. Any fields not given will be auto-detected by libEnsemble. diff --git a/docs/data_structures/sim_specs.rst b/docs/data_structures/sim_specs.rst index 45740075bc..0c937c5e82 100644 --- a/docs/data_structures/sim_specs.rst +++ b/docs/data_structures/sim_specs.rst @@ -5,43 +5,43 @@ Simulation Specs Used to specify the simulation function, its inputs and outputs, and user data. -.. tab-set:: - - .. tab-item:: Standardized (gest-api) - - .. 
code-block:: python - :linenos: - - from libensemble import SimSpecs - from gest_api.vocs import VOCS - from my_package import my_sim_callable - - vocs = VOCS( - variables={"x": [-3.0, 3.0]}, - objectives={"y": "MINIMIZE"}, - ) - - sim_specs = SimSpecs( - simulator=my_sim_callable, - vocs=vocs, - ) - ... - - .. tab-item:: Classic (sim_f) - - .. code-block:: python - :linenos: - - from libensemble import SimSpecs - from simulator import sim_find_sine - - sim_specs = SimSpecs( - sim_f=sim_find_sine, - inputs=["x"], - outputs=[("y", float)], - user={"batch": 1234}, - ) - ... +Standardized (gest-api) +----------------------- + +.. code-block:: python + :linenos: + + from libensemble import SimSpecs + from gest_api.vocs import VOCS + from my_package import my_sim_callable + + vocs = VOCS( + variables={"x": [-3.0, 3.0]}, + objectives={"y": "MINIMIZE"}, + ) + + sim_specs = SimSpecs( + simulator=my_sim_callable, + vocs=vocs, + ) + ... + +Classic (sim_f) +--------------- + +.. code-block:: python + :linenos: + + from libensemble import SimSpecs + from simulator import sim_find_sine + + sim_specs = SimSpecs( + sim_f=sim_find_sine, + inputs=["x"], + outputs=[("y", float)], + user={"batch": 1234}, + ) + ... .. autopydantic_model:: libensemble.specs.SimSpecs :model-show-json: False diff --git a/docs/examples/calling_scripts.rst b/docs/examples/calling_scripts.rst index 9a9ed0b1dd..394f3946c9 100644 --- a/docs/examples/calling_scripts.rst +++ b/docs/examples/calling_scripts.rst @@ -6,7 +6,7 @@ Many other examples of top-level scripts can be found in libEnsemble's `regressi Local Sine Tutorial ------------------- -This example is from the Local Sine :doc:`Tutorial<../tutorials/local_sine_tutorial>`, +This example is from the Local Sine :doc:`Tutorial<../tutorials/local_sine_tutorial/local_sine_tutorial>`, meant to run with Python's multiprocessing as the primary ``comms`` method. .. 
literalinclude:: ../../examples/tutorials/simple_sine/test_local_sine_tutorial.py diff --git a/docs/examples/gest_api/aposmm.rst b/docs/examples/gest_api/aposmm.rst index a047f72331..dbbd4f7ad1 100644 --- a/docs/examples/gest_api/aposmm.rst +++ b/docs/examples/gest_api/aposmm.rst @@ -7,20 +7,18 @@ APOSMM :show-inheritance: -.. seealso:: +APOSMM with libEnsemble +^^^^^^^^^^^^^^^^^^^^^^^ - .. tab-set:: +.. literalinclude:: ../../../libensemble/tests/regression_tests/test_asktell_aposmm_nlopt.py + :linenos: + :start-at: workflow = Ensemble(parse_args=True) + :end-before: # Perform the run - .. tab-item:: APOSMM with libEnsemble +APOSMM standalone +^^^^^^^^^^^^^^^^^ - .. literalinclude:: ../../../libensemble/tests/regression_tests/test_asktell_aposmm_nlopt.py - :linenos: - :start-at: workflow = Ensemble(parse_args=True) - :end-before: # Perform the run - - .. tab-item:: APOSMM standalone - - .. literalinclude:: ../../../libensemble/tests/unit_tests/test_persistent_aposmm.py - :linenos: - :start-at: def test_asktell_ingest_first(): - :end-before: assert persis_info.get("run_order"), "Standalone persistent_aposmm didn't do any localopt runs" +.. literalinclude:: ../../../libensemble/tests/unit_tests/test_persistent_aposmm.py + :linenos: + :start-at: def test_asktell_ingest_first(): + :end-before: assert persis_info.get("run_order"), "Standalone persistent_aposmm didn't do any localopt runs" diff --git a/docs/examples/sim_funcs.rst b/docs/examples/sim_funcs.rst index 0e018db472..37fb6ecf14 100644 --- a/docs/examples/sim_funcs.rst +++ b/docs/examples/sim_funcs.rst @@ -60,5 +60,6 @@ Special simulation functions :maxdepth: 1 sim_funcs/mock_sim + sim_funcs/surmise_test_function .. 
_build_forces.sh: https://github.com/Libensemble/libensemble/blob/main/libensemble/tests/scaling_tests/forces/forces_app/build_forces.sh
diff --git a/docs/executor/ex_base.rst b/docs/executor/ex_base.rst
new file mode 100644
index 0000000000..1a4d3cf31d
--- /dev/null
+++ b/docs/executor/ex_base.rst
@@ -0,0 +1,62 @@
+Base Executor
+=============
+
+`Overview `__ \|\| **Base Executor** \|\| `MPI Executor `__
+
+.. automodule:: executor
+    :no-undoc-members:
+
+The Base Executor is only for running local serial-launched applications.
+To run MPI applications and use detected resources, see the `MPI Executor `__ page.
+
+.. tab-set::
+
+    .. tab-item:: Base Executor
+
+        .. autoclass:: libensemble.executors.executor.Executor
+            :members:
+            :exclude-members: serial_setup, sim_default_app, gen_default_app, get_app, default_app, set_resources, get_task, set_workerID, set_worker_info, new_tasks_timing, add_platform_info, set_gen_procs_gpus, kill, poll
+
+            .. automethod:: __init__
+
+    .. tab-item:: Task
+
+        .. _task_tag:
+
+        Tasks are created and returned by the Executor's ``submit()``. Tasks
+        can be polled, killed, and waited on with the respective ``poll``, ``kill``, and ``wait`` functions.
+        Task information can be queried through instance attributes and query functions.
+
+        .. autoclass:: libensemble.executors.executor.Task
+            :members:
+            :exclude-members: calc_task_timing, check_poll
+
+    .. tab-item:: Task Attributes
+
+        .. note::
+            These should not be set directly. Tasks are launched by the Executor,
+            and task information can be queried through the task attributes
+            below and the query functions.
+
+        :task.state: (string) The task status. One of
+            ("UNKNOWN"|"CREATED"|"WAITING"|"RUNNING"|"FINISHED"|"USER_KILLED"|"FAILED"|"FAILED_TO_START")
+
+        :task.process: (process obj) The process object used by the underlying process
+            manager (e.g., return value of subprocess.Popen).
+        :task.errcode: (int) The error code (or return code) used by the underlying process manager.
+ :task.finished: (boolean) True means task has finished running - not whether it was successful. + :task.success: (boolean) Did task complete successfully (e.g., the return code is zero)? + :task.runtime: (int) Time in seconds that task has been running. + :task.submit_time: (int) Time since epoch that task was submitted. + :task.total_time: (int) Total time from task submission to completion (only available when task is finished). + + Run configuration attributes - some will be autogenerated: + + :task.workdir: (string) Work directory for the task + :task.name: (string) Name of task - autogenerated + :task.app: (app obj) Use application/executable, registered using exctr.register_app + :task.app_args: (string) Application arguments as a string + :task.stdout: (string) Name of file where the standard output of the task is written (in task.workdir) + :task.stderr: (string) Name of file where the standard error of the task is written (in task.workdir) + :task.dry_run: (boolean) True if task corresponds to dry run (no actual submission) + :task.runline: (string) Complete, parameterized command to be subprocessed to launch app diff --git a/docs/executor/ex_index.rst b/docs/executor/ex_index.rst index 1213e086fe..a4f33cb39a 100644 --- a/docs/executor/ex_index.rst +++ b/docs/executor/ex_index.rst @@ -1,258 +1,21 @@ .. _executor_index: +**Overview** \|\| `Base Executor `__ \|\| `MPI Executor `__ + Executors ========= libEnsemble's Executors can be used within user functions to provide a simple, portable interface for running and managing user applications. -.. tab-set:: - - .. tab-item:: Overview - - The **Executor** provides a portable interface for running applications on any system and - any number of compute resources. - - .. dropdown:: Detailed description - - An **Executor** interface is provided by libEnsemble to remove the burden - of system interaction from the user and improve workflow portability. 
Users - first register their applications to Executor instances, which then return - corresponding ``Task`` objects upon submission within user functions. - - **Task** attributes and retrieval functions can be queried to determine - the status of running application instances. Functions are also provided - to access and interrogate files in the task's working directory. - - libEnsemble's Executors and Tasks contain many familiar features and methods - to Python's native `concurrent futures`_ interface. Executors feature the - ``submit()`` function for launching apps (detailed below), but currently do - not support ``map()`` or ``shutdown()``. Tasks are much like ``futures``. - They feature the ``cancel()``, ``cancelled()``, ``running()``, ``done()``, - ``result()``, and ``exception()`` functions from the standard. - - The main ``Executor`` class can subprocess serial applications in place, - while the ``MPIExecutor`` is used for running MPI applications. - - Typically, users choose and parameterize their ``Executor`` objects in their - calling scripts, where each executable generator or simulation application is - registered to it. Once in the user-side worker code (sim/gen func), the Executor - can be retrieved without any need to specify the type. - - Once the Executor is retrieved, tasks can be submitted by specifying the - ``app_name`` from registration in the calling script alongside other optional - parameters described in the API. - - **Basic usage** - - To set up an MPI executor, register an MPI application, and add - to the ensemble object. - - .. 
code-block:: python - - from libensemble import Ensemble - from libensemble.executors import MPIExecutor - - exctr = MPIExecutor() - exctr.register_app(full_path="/path/to/my/exe", app_name="sim1") - ensemble = Ensemble(executor=exctr) - - **In user simulation function**:: - - def sim_func(H, persis_info, sim_specs, libE_info): - - input_param = str(int(H["x"][0][0])) - exctr = libE_info["executor"] - - task = exctr.submit( - app_name="sim1", - num_procs=8, - app_args=input_param, - stdout="out.txt", - stderr="err.txt", - ) - - # Wait for task to complete - task.wait() - - Example use-cases: - - * :doc:`Electrostatic Forces example <../tutorials/executor_forces_tutorial>`: Launches the ``forces.x`` MPI application. - - * :doc:`Forces example with GPUs <../tutorials/forces_gpu_tutorial>`: Auto-assigns GPUs via executor. - - See :doc:`Running on HPC Systems<../platforms/platforms_index>` for illustrations - of how common options such as ``libE_specs["dedicated_mode"]`` affect the - run configuration on clusters and supercomputers. - - **Advanced Features** - - **Example of polling output and killing application:** - - In simulation function (sim_f). - - .. 
code-block:: python - - import time - - - def sim_func(H, persis_info, sim_specs, libE_info): - input_param = str(int(H["x"][0][0])) - exctr = libE_info["executor"] - - task = exctr.submit( - app_name="sim1", - num_procs=8, - app_args=input_param, - stdout="out.txt", - stderr="err.txt", - ) - - timeout_sec = 600 - poll_delay_sec = 1 - - while not task.finished: - # Has manager sent a finish signal - if exctr.manager_kill_received(): - task.kill() - my_cleanup() - - # Check output file for error and kill task - elif task.stdout_exists(): - if "Error" in task.read_stdout(): - task.kill() - - elif task.runtime > timeout_sec: - task.kill() # Timeout - - else: - time.sleep(poll_delay_sec) - task.poll() - - print(task.state) # state may be finished/failed/killed - - Users who wish to poll only for manager kill signals and timeouts don't necessarily - need to construct a polling loop like above, but can instead use the ``Executor`` - built-in ``polling_loop()`` method. An alternative to the above simulation function - may resemble: - - .. code-block:: python - - def sim_func(H, persis_info, sim_specs, libE_info): - input_param = str(int(H["x"][0][0])) - exctr = libE_info["executor"] - - task = exctr.submit( - app_name="sim1", - num_procs=8, - app_args=input_param, - stdout="out.txt", - stderr="err.txt", - ) - - timeout_sec = 600 - poll_delay_sec = 1 - - exctr.polling_loop(task, timeout=timeout_sec, delay=poll_delay_sec) - - print(task.state) # state may be finished/failed/killed - - The ``MPIExecutor`` autodetects system criteria such as the appropriate MPI launcher - and mechanisms to poll and kill tasks. It also has access to the resource manager, - which partitions resources among workers, ensuring that runs utilize different - resources (e.g., nodes). Furthermore, the ``MPIExecutor`` offers resilience via the - feature of re-launching tasks that fail to start because of system factors. - - .. 
_concurrent futures: https://docs.python.org/library/concurrent.futures.html - - .. tab-item:: Base Executor - - .. automodule:: executor - :no-undoc-members: - - Only for running local serial-launched applications. - To run MPI applications and use detected resources, use the `MPI Executor` tab. - - .. tab-set:: - - .. tab-item:: Base Executor - - .. autoclass:: libensemble.executors.executor.Executor - :members: - :exclude-members: serial_setup, sim_default_app, gen_default_app, get_app, default_app, set_resources, get_task, set_workerID, set_worker_info, new_tasks_timing, add_platform_info, set_gen_procs_gpus, kill, poll - - .. automethod:: __init__ - - .. tab-item:: Task - - .. _task_tag: - - Tasks are created and returned by the Executor's ``submit()``. Tasks - can be polled, killed, and waited on with the respective ``poll``, ``kill``, and ``wait`` functions. - Task information can be queried through instance attributes and query functions. - - .. autoclass:: libensemble.executors.executor.Task - :members: - :exclude-members: calc_task_timing, check_poll - - .. tab-item:: Task Attributes - - .. note:: - These should not be set directly. Tasks are launched by the Executor, - and task information can be queried through the task attributes - below and the query functions. - - :task.state: (string) The task status. One of - ("UNKNOWN"|"CREATED"|"WAITING"|"RUNNING"|"FINISHED"|"USER_KILLED"|"FAILED"|"FAILED_TO_START") - - :task.process: (process obj) The process object used by the underlying process - manager (e.g., return value of subprocess.Popen). - :task.errcode: (int) The error code (or return code) used by the underlying process manager. - :task.finished: (boolean) True means task has finished running - not whether it was successful. - :task.success: (boolean) Did task complete successfully (e.g., the return code is zero)? - :task.runtime: (int) Time in seconds that task has been running. - :task.submit_time: (int) Time since epoch that task was submitted. 
- :task.total_time: (int) Total time from task submission to completion (only available when task is finished). - - Run configuration attributes - some will be autogenerated: - - :task.workdir: (string) Work directory for the task - :task.name: (string) Name of task - autogenerated - :task.app: (app obj) Use application/executable, registered using exctr.register_app - :task.app_args: (string) Application arguments as a string - :task.stdout: (string) Name of file where the standard output of the task is written (in task.workdir) - :task.stderr: (string) Name of file where the standard error of the task is written (in task.workdir) - :task.dry_run: (boolean) True if task corresponds to dry run (no actual submission) - :task.runline: (string) Complete, parameterized command to be subprocessed to launch app - - .. tab-item:: MPI Executor - - .. automodule:: mpi_executor - :no-undoc-members: - - .. autoclass:: libensemble.executors.mpi_executor.MPIExecutor - :show-inheritance: - :inherited-members: - :exclude-members: serial_setup, sim_default_app, gen_default_app, get_app, default_app, set_resources, get_task, set_workerID, set_worker_info, new_tasks_timing, add_platform_info, set_gen_procs_gpus, kill, poll - - **Class-specific Attributes** - - Class-specific attributes can be set directly to alter the behavior of the MPI - Executor. However, they should be used with caution, because they may not - be implemented in other executors. - - :max_submit_attempts: (int) Maximum number of launch attempts for a given - task. *Default: 5*. - :fail_time: (int or float) *Only if wait_on_start is set.* Maximum run time to failure in - seconds that results in relaunch. *Default: 2*. - :retry_delay_incr: (int or float) Delay increment between launch attempts in seconds. - *Default: 5*. (i.e., First retry after 5 seconds, then 10 seconds, then 15, etc...) +.. toctree:: + :hidden: - Example. 
To increase resilience against submission failures:: + ex_overview + ex_base + ex_mpi - taskctrl = MPIExecutor() - taskctrl.max_launch_attempts = 8 - taskctrl.fail_time = 5 - taskctrl.retry_delay_incr = 10 +The **Executor** provides a portable interface for running applications on any system and +any number of compute resources. - .. _customizer: +Please select from the sections above or the sidebar navigation to read more. diff --git a/docs/executor/ex_mpi.rst b/docs/executor/ex_mpi.rst new file mode 100644 index 0000000000..59a36f9e52 --- /dev/null +++ b/docs/executor/ex_mpi.rst @@ -0,0 +1,34 @@ +MPI Executor +============ + +`Overview `__ \|\| `Base Executor `__ \|\| **MPI Executor** + +.. automodule:: mpi_executor + :no-undoc-members: + +.. autoclass:: libensemble.executors.mpi_executor.MPIExecutor + :show-inheritance: + :inherited-members: + :exclude-members: serial_setup, sim_default_app, gen_default_app, get_app, default_app, set_resources, get_task, set_workerID, set_worker_info, new_tasks_timing, add_platform_info, set_gen_procs_gpus, kill, poll + +**Class-specific Attributes** + +Class-specific attributes can be set directly to alter the behavior of the MPI +Executor. However, they should be used with caution, because they may not +be implemented in other executors. + +:max_submit_attempts: (int) Maximum number of launch attempts for a given + task. *Default: 5*. +:fail_time: (int or float) *Only if wait_on_start is set.* Maximum run time to failure in + seconds that results in relaunch. *Default: 2*. +:retry_delay_incr: (int or float) Delay increment between launch attempts in seconds. + *Default: 5*. (i.e., First retry after 5 seconds, then 10 seconds, then 15, etc...) + +Example. To increase resilience against submission failures:: + + taskctrl = MPIExecutor() + taskctrl.max_launch_attempts = 8 + taskctrl.fail_time = 5 + taskctrl.retry_delay_incr = 10 + +.. 
_customizer:
diff --git a/docs/executor/ex_overview.rst b/docs/executor/ex_overview.rst
new file mode 100644
index 0000000000..f53510b733
--- /dev/null
+++ b/docs/executor/ex_overview.rst
@@ -0,0 +1,159 @@
+Overview
+========
+
+**Overview** \|\| `Base Executor `__ \|\| `MPI Executor `__
+
+The **Executor** provides a portable interface for running applications on any system and
+any number of compute resources.
+
+.. dropdown:: Detailed description
+
+    An **Executor** interface is provided by libEnsemble to remove the burden
+    of system interaction from the user and improve workflow portability. Users
+    first register their applications with Executor instances, which then return
+    corresponding ``Task`` objects upon submission within user functions.
+
+    **Task** attributes and retrieval functions can be queried to determine
+    the status of running application instances. Functions are also provided
+    to access and interrogate files in the task's working directory.
+
+    libEnsemble's Executors and Tasks contain many features and methods
+    familiar from Python's native `concurrent futures`_ interface. Executors feature the
+    ``submit()`` function for launching apps (detailed below), but currently do
+    not support ``map()`` or ``shutdown()``. Tasks are much like ``Future`` objects.
+    They feature the ``cancel()``, ``cancelled()``, ``running()``, ``done()``,
+    ``result()``, and ``exception()`` functions from the standard.
+
+    The main ``Executor`` class can subprocess serial applications in place,
+    while the ``MPIExecutor`` is used for running MPI applications.
+
+    Typically, users choose and parameterize their ``Executor`` objects in their
+    calling scripts, where each executable generator or simulation application is
+    registered with them. Once in the user-side worker code (sim/gen func), the Executor
+    can be retrieved without any need to specify the type.
+
+    Once the Executor is retrieved, tasks can be submitted by specifying the
+    ``app_name`` from registration in the calling script alongside other optional
+    parameters described in the API.
+
+**Basic usage**
+
+To set up an MPI executor, register an MPI application, and add it
+to the ensemble object:
+
+.. code-block:: python
+
+    from libensemble import Ensemble
+    from libensemble.executors import MPIExecutor
+
+    exctr = MPIExecutor()
+    exctr.register_app(full_path="/path/to/my/exe", app_name="sim1")
+    ensemble = Ensemble(executor=exctr)
+
+**In user simulation function**::
+
+    def sim_func(H, persis_info, sim_specs, libE_info):
+
+        input_param = str(int(H["x"][0][0]))
+        exctr = libE_info["executor"]
+
+        task = exctr.submit(
+            app_name="sim1",
+            num_procs=8,
+            app_args=input_param,
+            stdout="out.txt",
+            stderr="err.txt",
+        )
+
+        # Wait for task to complete
+        task.wait()
+
+Example use-cases:
+
+* :doc:`Electrostatic Forces example <../tutorials/executor_forces_tutorial>`: Launches the ``forces.x`` MPI application.
+
+* :doc:`Forces example with GPUs <../tutorials/forces_gpu_tutorial>`: Auto-assigns GPUs via the executor.
+
+See :doc:`Running on HPC Systems<../platforms/platforms_index>` for illustrations
+of how common options such as ``libE_specs["dedicated_mode"]`` affect the
+run configuration on clusters and supercomputers.
+
+**Advanced Features**
+
+**Example of polling output and killing application:**
+
+In the simulation function (sim_f):
+
+..
code-block:: python
+
+    import time
+
+
+    def sim_func(H, persis_info, sim_specs, libE_info):
+        input_param = str(int(H["x"][0][0]))
+        exctr = libE_info["executor"]
+
+        task = exctr.submit(
+            app_name="sim1",
+            num_procs=8,
+            app_args=input_param,
+            stdout="out.txt",
+            stderr="err.txt",
+        )
+
+        timeout_sec = 600
+        poll_delay_sec = 1
+
+        while not task.finished:
+            # Has the manager sent a finish signal?
+            if exctr.manager_kill_received():
+                task.kill()
+                my_cleanup()  # user-defined cleanup
+
+            # Check output file for error and kill task
+            elif task.stdout_exists():
+                if "Error" in task.read_stdout():
+                    task.kill()
+
+            elif task.runtime > timeout_sec:
+                task.kill()  # Timeout
+
+            else:
+                time.sleep(poll_delay_sec)
+                task.poll()
+
+        print(task.state)  # state may be finished/failed/killed
+
+Users who wish to poll only for manager kill signals and timeouts don't necessarily
+need to construct a polling loop like the one above, but can instead use the ``Executor``
+built-in ``polling_loop()`` method. An alternative to the above simulation function
+may resemble:
+
+.. code-block:: python
+
+    def sim_func(H, persis_info, sim_specs, libE_info):
+        input_param = str(int(H["x"][0][0]))
+        exctr = libE_info["executor"]
+
+        task = exctr.submit(
+            app_name="sim1",
+            num_procs=8,
+            app_args=input_param,
+            stdout="out.txt",
+            stderr="err.txt",
+        )
+
+        timeout_sec = 600
+        poll_delay_sec = 1
+
+        exctr.polling_loop(task, timeout=timeout_sec, delay=poll_delay_sec)
+
+        print(task.state)  # state may be finished/failed/killed
+
+The ``MPIExecutor`` autodetects system criteria such as the appropriate MPI launcher
+and mechanisms to poll and kill tasks. It also has access to the resource manager,
+which partitions resources among workers, ensuring that runs utilize different
+resources (e.g., nodes). Furthermore, the ``MPIExecutor`` offers resilience by
+relaunching tasks that fail to start because of system factors.
+
+..
_concurrent futures: https://docs.python.org/library/concurrent.futures.html diff --git a/docs/function_guides/calc_status.rst b/docs/function_guides/calc_status.rst index fc1038a36f..93384bc2ae 100644 --- a/docs/function_guides/calc_status.rst +++ b/docs/function_guides/calc_status.rst @@ -19,81 +19,81 @@ user-specified string. They are the third optional return value from a user func Built-in codes are available in the ``libensemble.message_numbers`` module, but users are also free to return any custom string. -.. tab-set:: - - .. tab-item:: calc_status with :ref:`Executor` - - .. code-block:: python - :linenos: - :emphasize-lines: 4,16,19,22,30 - - from libensemble.message_numbers import WORKER_DONE, WORKER_KILL, TASK_FAILED - - task = exctr.submit(calc_type="sim", num_procs=cores, wait_on_start=True) - calc_status = UNSET_TAG - poll_interval = 1 # secs - while not task.finished: - if task.runtime > time_limit: - task.kill() # Timeout - else: - time.sleep(poll_interval) - task.poll() - - if task.finished: - if task.state == "FINISHED": - print("Task {} completed".format(task.name)) - calc_status = WORKER_DONE - elif task.state == "FAILED": - print("Warning: Task {} failed: Error code {}".format(task.name, task.errcode)) - calc_status = TASK_FAILED - elif task.state == "USER_KILLED": - print("Warning: Task {} has been killed".format(task.name)) - calc_status = WORKER_KILL - else: - print("Warning: Task {} in unknown state {}. Error code {}".format(task.name, task.state, task.errcode)) - - outspecs = sim_specs["out"] - output = np.zeros(1, dtype=outspecs) - output["energy"][0] = final_energy - - return output, persis_info, calc_status - - .. tab-item:: Custom calc_status - - .. 
code-block:: python - :linenos: - - from libensemble.message_numbers import WORKER_DONE, TASK_FAILED - - task = exctr.submit(calc_type="sim", num_procs=cores, wait_on_start=True) - - task.wait(timeout=60) - - file_output = read_task_output(task) - if task.errcode == 0: - if "fail" in file_output: - calc_status = "Task failed successfully?" - else: - calc_status = WORKER_DONE - else: - calc_status = TASK_FAILED - - outspecs = sim_specs["out"] - output = np.zeros(1, dtype=outspecs) - output["energy"][0] = final_energy - - return output, persis_info, calc_status - -.. tab-set:: - - .. tab-item:: Available values - - .. literalinclude:: ../../libensemble/message_numbers.py - :start-after: first_calc_status_rst_tag - :end-before: last_calc_status_rst_tag - - .. tab-item:: Corresponding messages - - .. literalinclude:: ../../libensemble/message_numbers.py - :start-at: calc_status_strings - :end-before: last_calc_status_string_rst_tag +calc_status with Executor +--------------------------- + +.. code-block:: python + :linenos: + :emphasize-lines: 4,16,19,22,30 + + from libensemble.message_numbers import WORKER_DONE, WORKER_KILL, TASK_FAILED + + task = exctr.submit(calc_type="sim", num_procs=cores, wait_on_start=True) + calc_status = UNSET_TAG + poll_interval = 1 # secs + while not task.finished: + if task.runtime > time_limit: + task.kill() # Timeout + else: + time.sleep(poll_interval) + task.poll() + + if task.finished: + if task.state == "FINISHED": + print("Task {} completed".format(task.name)) + calc_status = WORKER_DONE + elif task.state == "FAILED": + print("Warning: Task {} failed: Error code {}".format(task.name, task.errcode)) + calc_status = TASK_FAILED + elif task.state == "USER_KILLED": + print("Warning: Task {} has been killed".format(task.name)) + calc_status = WORKER_KILL + else: + print("Warning: Task {} in unknown state {}. 
Error code {}".format(task.name, task.state, task.errcode)) + + outspecs = sim_specs["out"] + output = np.zeros(1, dtype=outspecs) + output["energy"][0] = final_energy + + return output, persis_info, calc_status + +Custom calc_status +------------------ + +.. code-block:: python + :linenos: + + from libensemble.message_numbers import WORKER_DONE, TASK_FAILED + + task = exctr.submit(calc_type="sim", num_procs=cores, wait_on_start=True) + + task.wait(timeout=60) + + file_output = read_task_output(task) + if task.errcode == 0: + if "fail" in file_output: + calc_status = "Task failed successfully?" + else: + calc_status = WORKER_DONE + else: + calc_status = TASK_FAILED + + outspecs = sim_specs["out"] + output = np.zeros(1, dtype=outspecs) + output["energy"][0] = final_energy + + return output, persis_info, calc_status + +Available values +---------------- + +.. literalinclude:: ../../libensemble/message_numbers.py + :start-after: first_calc_status_rst_tag + :end-before: last_calc_status_rst_tag + +Corresponding messages +---------------------- + +.. literalinclude:: ../../libensemble/message_numbers.py + :start-at: calc_status_strings + :end-before: last_calc_status_string_rst_tag diff --git a/docs/function_guides/generator_legacy.rst b/docs/function_guides/generator_legacy.rst index eac9910abe..c8c155a363 100644 --- a/docs/function_guides/generator_legacy.rst +++ b/docs/function_guides/generator_legacy.rst @@ -1,7 +1,7 @@ Legacy Generator Function ========================= -**Introduction** \|\| `Standardized Generator (gest-api) `__ \|\| **Legacy Generator Function** +`Introduction `__ \|\| `Standardized Generator (gest-api) `__ \|\| **Legacy Generator Function** .. code-block:: python @@ -94,52 +94,53 @@ Sending/receiving data is supported by the :ref:`PersistentSupport` for more information about the message tags. 
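The persistent send/receive pattern that ``PersistentSupport`` enables can be sketched in plain Python: a generator loops, sending batches of points and blocking for results, until the manager returns a stop tag. Everything below is an illustrative stand-in; ``DemoSupport`` and the tag constants are hypothetical stubs, not libEnsemble's actual ``PersistentSupport`` class or ``message_numbers`` values.

```python
import random

# Hypothetical stand-in tag values (libEnsemble defines its own in message_numbers)
EVAL_GEN_TAG = 1
PERSIS_STOP = 2


class DemoSupport:
    """Stub for PersistentSupport: echoes points back, signals a stop after 3 batches."""

    def __init__(self):
        self.batches = 0

    def send_recv(self, points):
        # The real object sends points to the manager and blocks for evaluated results
        self.batches += 1
        tag = PERSIS_STOP if self.batches >= 3 else EVAL_GEN_TAG
        return tag, {}, points  # (tag, Work, calc_in)


def persistent_gen(ps, batch_size=4):
    """Generate random batches until a stop tag is received; return batches sent."""
    tag, sent = EVAL_GEN_TAG, 0
    while tag != PERSIS_STOP:
        batch = [{"x": random.uniform(-1.0, 1.0)} for _ in range(batch_size)]
        tag, _, _ = ps.send_recv(batch)
        sent += 1
    return sent


print(persistent_gen(DemoSupport()))  # 3
```

In a real persistent generator the loop body would build a structured array matching ``gen_specs["out"]`` and the final return would carry ``FINISHED_PERSISTENT_GEN_TAG``; this sketch only shows the control flow around the send/receive calls.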
diff --git a/docs/function_guides/generator_standardized.rst b/docs/function_guides/generator_standardized.rst index d09e01a842..d02c0619f7 100644 --- a/docs/function_guides/generator_standardized.rst +++ b/docs/function_guides/generator_standardized.rst @@ -1,7 +1,7 @@ Standardized Generator (gest-api) ================================= -**Introduction** \|\| **Standardized Generator (gest-api)** \|\| `Legacy Generator Function `__ +`Introduction `__ \|\| **Standardized Generator (gest-api)** \|\| `Legacy Generator Function `__ Standardized generators are classes that inherit from ``gest_api.Generator``. They adhere to the ``gest-api`` standard and are parameterized by a ``VOCS`` diff --git a/docs/function_guides/simulator_legacy.rst b/docs/function_guides/simulator_legacy.rst index 01927401b2..3f65096abc 100644 --- a/docs/function_guides/simulator_legacy.rst +++ b/docs/function_guides/simulator_legacy.rst @@ -1,7 +1,7 @@ Legacy Simulator Function ========================= -**Introduction** \|\| `Standardized Simulator (gest-api) `__ \|\| **Legacy Simulator Function** +`Introduction `__ \|\| `Standardized Simulator (gest-api) `__ \|\| **Legacy Simulator Function** .. 
code-block:: python diff --git a/docs/function_guides/simulator_standardized.rst b/docs/function_guides/simulator_standardized.rst index 6561ea16cb..27b72deb5f 100644 --- a/docs/function_guides/simulator_standardized.rst +++ b/docs/function_guides/simulator_standardized.rst @@ -1,7 +1,7 @@ Standardized Simulator (gest-api) ================================= -**Introduction** \|\| **Standardized Simulator (gest-api)** \|\| `Legacy Simulator Function `__ +`Introduction `__ \|\| **Standardized Simulator (gest-api)** \|\| `Legacy Simulator Function `__ Standardized simulators are plain callables — no base class required — with the signature:: diff --git a/docs/index.rst b/docs/index.rst index 98b6448a5d..49a06cdc6c 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -10,7 +10,7 @@ :caption: User Guide: Quickstart - advanced_installation + advanced_installation/advanced_installation overview_usecases programming_libE running_libE @@ -20,7 +20,7 @@ :maxdepth: 1 :caption: Tutorials: - tutorials/local_sine_tutorial + tutorials/local_sine_tutorial/local_sine_tutorial tutorials/executor_forces_tutorial tutorials/forces_gpu_tutorial tutorials/gpcam_tutorial @@ -56,3 +56,4 @@ dev_guide/release_management/release_index dev_guide/dev_API/developer_API + bibliography diff --git a/docs/introduction.rst b/docs/introduction.rst index 4b36943398..87ccac72f6 100644 --- a/docs/introduction.rst +++ b/docs/introduction.rst @@ -1,7 +1,7 @@ .. include:: ../README.rst :start-after: after_badges_rst_tag -See the :doc:`tutorial` for a step-by-step beginners guide. +See the :doc:`tutorial` for a step-by-step beginners guide. See the `user guide`_ for more information. diff --git a/docs/latex_index.rst b/docs/latex_index.rst index 556a421a24..e2fd0ffb90 100644 --- a/docs/latex_index.rst +++ b/docs/latex_index.rst @@ -34,7 +34,7 @@ other libEnsemble information. .. 
toctree:: :maxdepth: 3 - advanced_installation + advanced_installation/advanced_installation tutorials/tutorials FAQ known_issues diff --git a/docs/platforms/aurora.rst b/docs/platforms/aurora.rst index 4865ba0c18..c29ed0bc08 100644 --- a/docs/platforms/aurora.rst +++ b/docs/platforms/aurora.rst @@ -27,7 +27,7 @@ To obtain libEnsemble:: pip install libensemble -See :doc:`here<../advanced_installation>` for more information on advanced +See :doc:`here<../advanced_installation/advanced_installation>` for more information on advanced options for installing libEnsemble, including using Spack. Example diff --git a/docs/platforms/bebop.rst b/docs/platforms/bebop.rst index 61403eb973..2682a54863 100644 --- a/docs/platforms/bebop.rst +++ b/docs/platforms/bebop.rst @@ -46,7 +46,7 @@ To install via ``conda``: conda config --add channels conda-forge conda install -c conda-forge libensemble -See :doc:`here<../advanced_installation>` for more information on advanced options +See :doc:`here<../advanced_installation/advanced_installation>` for more information on advanced options for installing libEnsemble. Job Submission diff --git a/docs/platforms/frontier.rst b/docs/platforms/frontier.rst index 4fdc7a0b36..a57ffadd97 100644 --- a/docs/platforms/frontier.rst +++ b/docs/platforms/frontier.rst @@ -33,7 +33,7 @@ libEnsemble can be installed via pip:: pip install libensemble -See :doc:`advanced installation<../advanced_installation>` for other installation options. +See :doc:`advanced installation<../advanced_installation/advanced_installation>` for other installation options. 
Example ------- diff --git a/docs/platforms/improv.rst b/docs/platforms/improv.rst index bdb2269a85..dfe40da138 100644 --- a/docs/platforms/improv.rst +++ b/docs/platforms/improv.rst @@ -15,7 +15,7 @@ To create a conda environment and install libEnsemble:: conda activate improv_libe_env pip install libensemble -See :doc:`here<../advanced_installation>` for more information on advanced +See :doc:`here<../advanced_installation/advanced_installation>` for more information on advanced options for installing libEnsemble, including using Spack. Job Submission diff --git a/docs/platforms/perlmutter.rst b/docs/platforms/perlmutter.rst index 755e5bb7eb..a1c79703f8 100644 --- a/docs/platforms/perlmutter.rst +++ b/docs/platforms/perlmutter.rst @@ -50,7 +50,7 @@ by one of the following ways. conda config --add channels conda-forge conda install -c conda-forge libensemble -See :doc:`advanced installation<../advanced_installation>` for other installation options. +See :doc:`advanced installation<../advanced_installation/advanced_installation>` for other installation options. Job Submission -------------- diff --git a/docs/platforms/polaris.rst b/docs/platforms/polaris.rst index 5fdf82aaae..21518ccf42 100644 --- a/docs/platforms/polaris.rst +++ b/docs/platforms/polaris.rst @@ -36,7 +36,7 @@ environment (if you need ``conda install``). More details at `Python for Polaris pip install libensemble -See :doc:`here<../advanced_installation>` for more information on advanced options +See :doc:`here<../advanced_installation/advanced_installation>` for more information on advanced options for installing libEnsemble, including using Spack. Job Submission diff --git a/docs/running_libE.rst b/docs/running_libE.rst index aaed63342f..6329e13e27 100644 --- a/docs/running_libE.rst +++ b/docs/running_libE.rst @@ -8,91 +8,92 @@ Running libEnsemble :doc:`MPI Executor`. The communication modes described here only refer to how the libEnsemble manager and workers communicate. -.. 
tab-set:: +Local Comms +----------- - .. tab-item:: Local Comms +Uses Python's built-in multiprocessing_ module. +The ``comms`` type ``local`` and number of workers ``nworkers`` for running simulators +may be provided in :ref:`libE_specs`. - Uses Python's built-in multiprocessing_ module. - The ``comms`` type ``local`` and number of workers ``nworkers`` for running simulators - may be provided in :ref:`libE_specs`. +Run: - Run: + python myscript.py - python myscript.py +Or, if the script uses the :meth:`parse_args` function +or an :class:`Ensemble` object with ``Ensemble(parse_args=True)``, +this can be specified on the command line: - Or, if the script uses the :meth:`parse_args` function - or an :class:`Ensemble` object with ``Ensemble(parse_args=True)``, - this can be specified on the command line: + python myscript.py -n N - python myscript.py -n N +libEnsemble will run on **one node** in this scenario. To +:doc:`disallow this node` +from app-launches (if running libEnsemble on a compute node), +set ``libE_specs["dedicated_mode"] = True``. - libEnsemble will run on **one node** in this scenario. To - :doc:`disallow this node` - from app-launches (if running libEnsemble on a compute node), - set ``libE_specs["dedicated_mode"] = True``. +This mode can also be used to run on a **launch** node of a three-tier +system, ensuring the whole compute-node allocation is available for +launching apps. Make sure there are no imports of ``mpi4py`` in your Python scripts. - This mode can also be used to run on a **launch** node of a three-tier - system, ensuring the whole compute-node allocation is available for - launching apps. Make sure there are no imports of ``mpi4py`` in your Python scripts. +Note that on macOS and Windows, the default multiprocessing method is ``"spawn"`` +instead of ``"fork"``; to resolve many related issues, we recommend placing +calling script code in an ``if __name__ == "__main__":`` block. 
- Note that on macOS and Windows, the default multiprocessing method is ``"spawn"`` - instead of ``"fork"``; to resolve many related issues, we recommend placing - calling script code in an ``if __name__ == "__main__":`` block. +**Limitations of local mode** - **Limitations of local mode** +- Workers cannot be :doc:`distributed` across nodes. +- In some scenarios, any import of ``mpi4py`` will cause this to break. +- Does not have the potential scaling of MPI mode, but is sufficient for most users. - - Workers cannot be :doc:`distributed` across nodes. - - In some scenarios, any import of ``mpi4py`` will cause this to break. - - Does not have the potential scaling of MPI mode, but is sufficient for most users. +MPI Comms +--------- - .. tab-item:: MPI Comms +This option uses mpi4py_ for the Manager/Worker communication. It is used automatically if +you run your libEnsemble calling script with an MPI runner such as:: - This option uses mpi4py_ for the Manager/Worker communication. It is used automatically if - you run your libEnsemble calling script with an MPI runner such as:: + mpirun -np N python myscript.py - mpirun -np N python myscript.py +where ``N`` is the number of processes. This will launch one manager and +``N-1`` simulator workers. - where ``N`` is the number of processes. This will launch one manager and - ``N-1`` simulator workers. +This option requires ``mpi4py`` to be installed to interface with the MPI on your system. +It works on a standalone system, and with both +:doc:`central and distributed modes` of running libEnsemble on +multi-node systems. - This option requires ``mpi4py`` to be installed to interface with the MPI on your system. - It works on a standalone system, and with both - :doc:`central and distributed modes` of running libEnsemble on - multi-node systems. +It also potentially scales the best when running with many workers on HPC systems. - It also potentially scales the best when running with many workers on HPC systems. 
+**Limitations of MPI mode** - **Limitations of MPI mode** +If launching MPI applications from workers, then MPI is nested. **This is not +supported with Open MPI**. This can be overcome by using a proxy launcher. +This nesting does work with MPICH_ and its derivative MPI implementations. - If launching MPI applications from workers, then MPI is nested. **This is not - supported with Open MPI**. This can be overcome by using a proxy launcher. - This nesting does work with MPICH_ and its derivative MPI implementations. +It is also unsuitable to use this mode when running on the **launch** nodes of +three-tier systems. In that case ``local`` mode is recommended. - It is also unsuitable to use this mode when running on the **launch** nodes of - three-tier systems. In that case ``local`` mode is recommended. +TCP Comms +--------- - .. tab-item:: TCP Comms +Run the Manager on one system and launch workers to remote +systems or nodes over TCP. Configure through +:class:`libE_specs`, or on the command line +if using an :class:`Ensemble` object with +``Ensemble(parse_args=True)``, - Run the Manager on one system and launch workers to remote - systems or nodes over TCP. Configure through - :class:`libE_specs`, or on the command line - if using an :class:`Ensemble` object with - ``Ensemble(parse_args=True)``, +**Reverse-ssh interface** - **Reverse-ssh interface** +Set ``comms`` to ``ssh`` to launch workers on remote ssh-accessible systems. This +co-locates workers, functions, and any applications. User +functions can also be persistent, unlike when launching remote functions via +:ref:`Globus Compute`. - Set ``comms`` to ``ssh`` to launch workers on remote ssh-accessible systems. This - co-locates workers, functions, and any applications. User - functions can also be persistent, unlike when launching remote functions via - :ref:`Globus Compute`. +The remote working directory and Python need to be specified. 
This may resemble:: - The remote working directory and Python need to be specified. This may resemble:: + python myscript.py --comms ssh --workers machine1 machine2 --worker_pwd /home/workers --worker_python /home/.conda/.../python - python myscript.py --comms ssh --workers machine1 machine2 --worker_pwd /home/workers --worker_python /home/.conda/.../python +**Limitations of TCP mode** - **Limitations of TCP mode** - - - There cannot be two calls to ``Ensemble.run()`` or ``libE()`` in the same script. +- There cannot be two calls to ``Ensemble.run()`` or ``libE()`` in the same script. Further Command Line Options ---------------------------- diff --git a/docs/tutorials/local_sine_tutorial.rst b/docs/tutorials/local_sine_tutorial.rst deleted file mode 100644 index 56943a7cc3..0000000000 --- a/docs/tutorials/local_sine_tutorial.rst +++ /dev/null @@ -1,275 +0,0 @@ -=================== -Simple Introduction -=================== - -This tutorial demonstrates the capability to perform ensembles of -calculations in parallel using :doc:`libEnsemble<../introduction>`. - -We recommend reading this brief :doc:`Overview<../overview_usecases>`. - -|Open in Colab| - -For this tutorial, our generator will produce uniform randomly sampled -values, and our simulator will calculate the sine of each. By default we don't -need to write a new allocation function. - -.. tab-set:: - - .. tab-item:: 1. Getting started - - libEnsemble is written entirely in Python_. Let's make sure - the correct version is installed. - - .. code-block:: bash - - python --version # This should be >= 3.11 - - .. _Python: https://www.python.org/ - - For this tutorial, you need NumPy_ and (optionally) - Matplotlib_ to visualize your results. Install libEnsemble and these other - libraries with - - .. code-block:: bash - - pip install libensemble - pip install matplotlib # Optional - - If your system doesn't allow you to perform these installations, try adding - ``--user`` to the end of each command. - - .. 
tab-item:: 2. Generator - - Let's begin the coding portion of this tutorial by writing our generator. - - An available libEnsemble worker will call this generator's ``.suggest()`` method to obtain - new values to evaluate. - - For now, create a new Python file named ``sine_gen.py``. Write the following: - - .. literalinclude:: ../../libensemble/tests/functionality_tests/sine_gen_std.py - :language: python - :linenos: - :caption: examples/tutorials/simple_sine/sine_gen_std.py - - libEnsemble accepts generators that implement the gest-api_ interface. These generators - accept a ``gest_api.VOCS`` object for configuration, and contain a ``.suggest(num_points)`` - method that returns ``num_points`` points. Points consist of a list of dictionaries - with keys that match the variable names from the ``gest_api.VOCS`` object. - - Our generator's ``suggest()`` method creates ``num_points`` dictionaries. For each key in - the generator's ``self.variables``, it creates a random number uniformly distributed - between the corresponding ``lower`` and ``upper`` bounds of its domain. - - Our generator must implement a ``_validate_vocs()`` method. Here, we implement a simple - check that ensures the ``VOCS`` object has at least one variable. - - .. tab-item:: 3. Simulator - - Next, we'll write our simulator function or :ref:`sim_f`. Simulator - functions perform calculations based on values from the generator. - :ref:`sim_specs` is a dictionary containing user-defined fields - and parameters. - - Create a new Python file named ``sine_sim.py``. Write the following: - - .. literalinclude:: ../../libensemble/tests/functionality_tests/sine_sim.py - :language: python - :linenos: - :caption: examples/tutorials/simple_sine/sine_sim.py - - Our simulator function is called by a worker for every work item produced by - the generator. This function calculates the sine of the passed value, - and then returns it so the worker can store the result. - - .. tab-item:: 4. 
Script - - Now lets write the script that configures our generator and simulator - functions and starts libEnsemble. - - Create an empty Python file named ``calling.py``. - In this file, we'll start by importing NumPy, libEnsemble's setup classes, the generator, - and simulator function. - - In a class called :ref:`LibeSpecs` we'll - specify the number of workers and the manager/worker intercommunication method. - ``"local"``, refers to Python's multiprocessing. - - .. literalinclude:: ../../libensemble/tests/functionality_tests/test_local_sine_tutorial.py - :language: python - :linenos: - :end-at: libE_specs = LibeSpecs - - We configure the settings and specifications for our ``sim_f`` and ``gen_f`` - functions in the :ref:`GenSpecs` and - :ref:`SimSpecs` classes, which we saw previously - being passed to our functions *as dictionaries*. - These classes also describe to libEnsemble what inputs and outputs from those - functions to expect. - - .. literalinclude:: ../../libensemble/tests/functionality_tests/test_local_sine_tutorial.py - :language: python - :linenos: - :lineno-start: 10 - :start-at: gen_specs = GenSpecs - :end-at: sim_specs_end_tag - - We then specify the circumstances where - libEnsemble should stop execution in :ref:`ExitCriteria`. - - .. literalinclude:: ../../libensemble/tests/functionality_tests/test_local_sine_tutorial.py - :language: python - :linenos: - :lineno-start: 26 - :start-at: exit_criteria = ExitCriteria - :end-at: exit_criteria = ExitCriteria - - Now we're ready to write our libEnsemble :doc:`libE<../programming_libE>` - function call. :ref:`ensemble.H` is the final version of - the history array. ``ensemble.flag`` should be zero if no errors occur. - - .. literalinclude:: ../../libensemble/tests/functionality_tests/test_local_sine_tutorial.py - :language: python - :linenos: - :lineno-start: 28 - :start-at: ensemble = Ensemble - :end-at: print(history) - - That's it! Now that these files are complete, we can run our simulation. 
- - .. code-block:: bash - - python calling.py - - If everything ran perfectly and you included the above print statements, you - should get something similar to the following output (although the - columns might be rearranged). - - .. code-block:: - - ["y", "sim_started_time", "gen_worker", "sim_worker", "sim_started", "sim_ended", "x", "allocated", "sim_id", "gen_ended_time"] - [(-0.37466051, 1.559+09, 2, 2, True, True, [-0.38403059], True, 0, 1.559+09) - (-0.29279634, 1.559+09, 2, 3, True, True, [-2.84444261], True, 1, 1.559+09) - ( 0.29358492, 1.559+09, 2, 4, True, True, [ 0.29797487], True, 2, 1.559+09) - (-0.3783986, 1.559+09, 2, 1, True, True, [-0.38806564], True, 3, 1.559+09) - (-0.45982062, 1.559+09, 2, 2, True, True, [-0.47779319], True, 4, 1.559+09) - ... - - In this arrangement, our output values are listed on the far left with the - generated values being the fourth column from the right. - - Two additional log files should also have been created. - ``ensemble.log`` contains debugging or informational logging output from - libEnsemble, while ``libE_stats.txt`` contains a quick summary of all - calculations performed. - - Here is graphed output using ``Matplotlib``, with entries colored by which - worker performed the simulation: - - .. image:: ../images/sinex.png - :alt: sine - :align: center - - If you installed Matplotlib earlier and want to verify your results through - plotting, copy and paste the following code into the bottom of your calling - script and run ``python calling.py`` again. - - .. literalinclude:: ../../libensemble/tests/functionality_tests/test_local_sine_tutorial.py - :language: python - :linenos: - :lineno-start: 37 - :start-at: import matplotlib - :end-at: plt.savefig("tutorial_sines.png") - - Each of these example files can be found in the repository in `examples/tutorials/simple_sine`_. - - **Exercise** - - Write a Calling Script with the following specifications: - - 1. 
Set the generator function's lower and upper bounds to -6 and 6, respectively - 2. Increase the generator batch size to 10 - 3. Set libEnsemble to stop execution after 160 *generations* using the ``gen_max`` option - 4. Print an error message if any errors occurred while libEnsemble was running - - .. dropdown:: **Click Here for Solution** - - .. literalinclude:: ../../libensemble/tests/functionality_tests/test_local_sine_tutorial_2.py - :language: python - :linenos: - :emphasize-lines: 15,16,17,27,33,34 - - .. tab-item:: 5. Next steps - - **libEnsemble with MPI** - - MPI_ is a standard interface for parallel computing, implemented in libraries - such as MPICH_ and used at extreme scales. MPI potentially allows libEnsemble's - processes to be distributed over multiple nodes and works in some - circumstances where Python's multiprocessing does not. In this section, we'll - explore modifying the above code to use MPI instead of multiprocessing. - - We recommend the MPI distribution MPICH_ for this tutorial, which can be found - for a variety of systems here_. You also need mpi4py_, which can be installed - with ``pip install mpi4py``. If you'd like to use a specific version or - distribution of MPI instead of MPICH, configure mpi4py with that MPI at - installation with ``MPICC=<path/to/mpicc> pip install mpi4py``. If this - doesn't work, try appending ``--user`` to the end of the command. See the - mpi4py_ docs for more information. - - Verify that MPI has been installed correctly with ``mpirun --version``. - - **Modifying the script** - - Only a few changes are necessary to make our code MPI-compatible. For starters, - comment out the ``libE_specs`` definition: - - .. literalinclude:: ../../libensemble/tests/functionality_tests/test_local_sine_tutorial_3.py - :language: python - :start-at: # libE_specs = LibeSpecs - :end-at: # libE_specs = LibeSpecs - - We'll be parameterizing our MPI runtime with a ``parse_args=True`` argument to - the ``Ensemble`` class instead of ``libE_specs``. 
We'll also use an ``ensemble.is_manager`` - attribute so only the first MPI rank runs the data-processing code. - - The bottom of your calling script should now resemble: - - .. literalinclude:: ../../libensemble/tests/functionality_tests/test_local_sine_tutorial_3.py - :linenos: - :lineno-start: 28 - :language: python - :start-at: # replace libE_specs - - With these changes in place, our libEnsemble code can be run with MPI by - - .. code-block:: bash - - mpirun -n 5 python calling.py - - where ``-n 5`` tells ``mpirun`` to produce five processes: one runs the - libEnsemble manager while the other four run libEnsemble workers. - - This tutorial is only a tiny demonstration of the parallelism capabilities of - libEnsemble. libEnsemble has been developed primarily to support research on - high-performance computers, with potentially hundreds of workers performing - calculations simultaneously. Please read our - :doc:`platform guides <../platforms/platforms_index>` for introductions to using - libEnsemble on many such machines. - - libEnsemble's Executors can launch non-Python user applications and simulations across - allocated compute resources. Try out this feature with a more complicated - libEnsemble use case within our - :doc:`Electrostatic Forces tutorial <./executor_forces_tutorial>`. - -.. _gest-api: https://github.com/campa-consortium/gest-api -.. _Matplotlib: https://matplotlib.org/ -.. _MPI: https://en.wikipedia.org/wiki/Message_Passing_Interface -.. _MPICH: https://www.mpich.org/ -.. _mpi4py: https://mpi4py.readthedocs.io/en/stable/install.html -.. _NumPy: https://www.numpy.org/ -.. _here: https://www.mpich.org/downloads/ -.. _examples/tutorials/simple_sine: https://github.com/Libensemble/libensemble/tree/develop/examples/tutorials/simple_sine -.. 
|Open in Colab| image:: https://colab.research.google.com/assets/colab-badge.svg - :target: http://colab.research.google.com/github/Libensemble/libensemble/blob/develop/examples/tutorials/simple_sine/sine_tutorial_notebook.ipynb diff --git a/docs/tutorials/local_sine_tutorial/local_sine_tutorial.rst b/docs/tutorials/local_sine_tutorial/local_sine_tutorial.rst new file mode 100644 index 0000000000..d5e587a0f0 --- /dev/null +++ b/docs/tutorials/local_sine_tutorial/local_sine_tutorial.rst @@ -0,0 +1,28 @@ +=================== +Simple Introduction +=================== + +**Introduction** \|\| `1. Getting started `__ \|\| `2. Generator `__ \|\| `3. Simulator `__ \|\| `4. Script `__ \|\| `5. Next steps `__ + +This tutorial demonstrates the capability to perform ensembles of +calculations in parallel using :doc:`libEnsemble<../../introduction>`. + +We recommend reading this brief :doc:`Overview<../../overview_usecases>`. + +|Open in Colab| + +For this tutorial, our generator will produce uniform randomly sampled +values, and our simulator will calculate the sine of each. By default we don't +need to write a new allocation function. + +.. toctree:: + :hidden: + + local_sine_tutorial_1 + local_sine_tutorial_2 + local_sine_tutorial_3 + local_sine_tutorial_4 + local_sine_tutorial_5 + +.. |Open in Colab| image:: https://colab.research.google.com/assets/colab-badge.svg + :target: http://colab.research.google.com/github/Libensemble/libensemble/blob/develop/examples/tutorials/simple_sine/sine_tutorial_notebook.ipynb diff --git a/docs/tutorials/local_sine_tutorial/local_sine_tutorial_1.rst b/docs/tutorials/local_sine_tutorial/local_sine_tutorial_1.rst new file mode 100644 index 0000000000..5c5db2ec82 --- /dev/null +++ b/docs/tutorials/local_sine_tutorial/local_sine_tutorial_1.rst @@ -0,0 +1,28 @@ +1. Getting started +================== + +`Introduction `__ \|\| **1. Getting started** \|\| `2. Generator `__ \|\| `3. Simulator `__ \|\| `4. Script `__ \|\| `5. 
Next steps `__ + +libEnsemble is written entirely in Python_. Let's make sure +the correct version is installed. + +.. code-block:: bash + + python --version # This should be >= 3.11 + +.. _Python: https://www.python.org/ + +For this tutorial, you need NumPy_ and (optionally) +Matplotlib_ to visualize your results. Install libEnsemble and these other +libraries with + +.. code-block:: bash + + pip install libensemble + pip install matplotlib # Optional + +If your system doesn't allow you to perform these installations, try adding +``--user`` to the end of each command. + +.. _Matplotlib: https://matplotlib.org/ +.. _NumPy: https://www.numpy.org/ diff --git a/docs/tutorials/local_sine_tutorial/local_sine_tutorial_2.rst b/docs/tutorials/local_sine_tutorial/local_sine_tutorial_2.rst new file mode 100644 index 0000000000..024bb52d14 --- /dev/null +++ b/docs/tutorials/local_sine_tutorial/local_sine_tutorial_2.rst @@ -0,0 +1,30 @@ +2. Generator +============ + +`Introduction `__ \|\| `1. Getting started `__ \|\| **2. Generator** \|\| `3. Simulator `__ \|\| `4. Script `__ \|\| `5. Next steps `__ + +Let's begin the coding portion of this tutorial by writing our generator. + +An available libEnsemble worker will call this generator's ``.suggest()`` method to obtain +new values to evaluate. + +For now, create a new Python file named ``sine_gen.py``. Write the following: + +.. literalinclude:: ../../../libensemble/tests/functionality_tests/sine_gen_std.py + :language: python + :linenos: + :caption: examples/tutorials/simple_sine/sine_gen_std.py + +libEnsemble accepts generators that implement the gest-api_ interface. These generators +accept a ``gest_api.VOCS`` object for configuration, and contain a ``.suggest(num_points)`` +method that returns ``num_points`` points. Points consist of a list of dictionaries +with keys that match the variable names from the ``gest_api.VOCS`` object. + +Our generator's ``suggest()`` method creates ``num_points`` dictionaries. 
For each key in +the generator's ``self.variables``, it creates a random number uniformly distributed +between the corresponding ``lower`` and ``upper`` bounds of its domain. + +Our generator must implement a ``_validate_vocs()`` method. Here, we implement a simple +check that ensures the ``VOCS`` object has at least one variable. + +.. _gest-api: https://github.com/campa-consortium/gest-api diff --git a/docs/tutorials/local_sine_tutorial/local_sine_tutorial_3.rst b/docs/tutorials/local_sine_tutorial/local_sine_tutorial_3.rst new file mode 100644 index 0000000000..05836abf32 --- /dev/null +++ b/docs/tutorials/local_sine_tutorial/local_sine_tutorial_3.rst @@ -0,0 +1,20 @@ +3. Simulator +============ + +`Introduction `__ \|\| `1. Getting started `__ \|\| `2. Generator `__ \|\| **3. Simulator** \|\| `4. Script `__ \|\| `5. Next steps `__ + +Next, we'll write our simulator function or :ref:`sim_f`. Simulator +functions perform calculations based on values from the generator. +:ref:`sim_specs` is a dictionary containing user-defined fields +and parameters. + +Create a new Python file named ``sine_sim.py``. Write the following: + +.. literalinclude:: ../../../libensemble/tests/functionality_tests/sine_sim.py + :language: python + :linenos: + :caption: examples/tutorials/simple_sine/sine_sim.py + +Our simulator function is called by a worker for every work item produced by +the generator. This function calculates the sine of the passed value, +and then returns it so the worker can store the result. diff --git a/docs/tutorials/local_sine_tutorial/local_sine_tutorial_4.rst b/docs/tutorials/local_sine_tutorial/local_sine_tutorial_4.rst new file mode 100644 index 0000000000..92a5c8536b --- /dev/null +++ b/docs/tutorials/local_sine_tutorial/local_sine_tutorial_4.rst @@ -0,0 +1,121 @@ +4. Script +========= + +`Introduction `__ \|\| `1. Getting started `__ \|\| `2. Generator `__ \|\| `3. Simulator `__ \|\| **4. Script** \|\| `5. 
Next steps `__ + +Now let's write the script that configures our generator and simulator +functions and starts libEnsemble. + +Create an empty Python file named ``calling.py``. +In this file, we'll start by importing NumPy, libEnsemble's setup classes, the generator, +and simulator function. + +In a class called :ref:`LibeSpecs` we'll +specify the number of workers and the manager/worker intercommunication method. +``"local"`` refers to Python's multiprocessing. + +.. literalinclude:: ../../../libensemble/tests/functionality_tests/test_local_sine_tutorial.py + :language: python + :linenos: + :end-at: libE_specs = LibeSpecs + +We configure the settings and specifications for our ``sim_f`` and ``gen_f`` +functions in the :ref:`GenSpecs` and +:ref:`SimSpecs` classes, which we saw previously +being passed to our functions *as dictionaries*. +These classes also describe to libEnsemble what inputs and outputs from those +functions to expect. + +.. literalinclude:: ../../../libensemble/tests/functionality_tests/test_local_sine_tutorial.py + :language: python + :linenos: + :lineno-start: 10 + :start-at: gen_specs = GenSpecs + :end-at: sim_specs_end_tag + +We then specify the circumstances where +libEnsemble should stop execution in :ref:`ExitCriteria`. + +.. literalinclude:: ../../../libensemble/tests/functionality_tests/test_local_sine_tutorial.py + :language: python + :linenos: + :lineno-start: 26 + :start-at: exit_criteria = ExitCriteria + :end-at: exit_criteria = ExitCriteria + +Now we're ready to write our libEnsemble :doc:`libE<../../programming_libE>` +function call. :ref:`ensemble.H` is the final version of +the history array. ``ensemble.flag`` should be zero if no errors occur. + +.. literalinclude:: ../../../libensemble/tests/functionality_tests/test_local_sine_tutorial.py + :language: python + :linenos: + :lineno-start: 28 + :start-at: ensemble = Ensemble + :end-at: print(history) + +That's it! Now that these files are complete, we can run our simulation. + +.. 
code-block:: bash + + python calling.py + +If everything ran perfectly and you included the above print statements, you +should get something similar to the following output (although the +columns might be rearranged). + +.. code-block:: + + ["y", "sim_started_time", "gen_worker", "sim_worker", "sim_started", "sim_ended", "x", "allocated", "sim_id", "gen_ended_time"] + [(-0.37466051, 1.559+09, 2, 2, True, True, [-0.38403059], True, 0, 1.559+09) + (-0.29279634, 1.559+09, 2, 3, True, True, [-2.84444261], True, 1, 1.559+09) + ( 0.29358492, 1.559+09, 2, 4, True, True, [ 0.29797487], True, 2, 1.559+09) + (-0.3783986, 1.559+09, 2, 1, True, True, [-0.38806564], True, 3, 1.559+09) + (-0.45982062, 1.559+09, 2, 2, True, True, [-0.47779319], True, 4, 1.559+09) + ... + +In this arrangement, our output values are listed on the far left with the +generated values being the fourth column from the right. + +Two additional log files should also have been created. +``ensemble.log`` contains debugging or informational logging output from +libEnsemble, while ``libE_stats.txt`` contains a quick summary of all +calculations performed. + +Here is graphed output using ``Matplotlib``, with entries colored by which +worker performed the simulation: + +.. image:: ../../images/sinex.png + :alt: sine + :align: center + +If you installed Matplotlib earlier and want to verify your results through +plotting, copy and paste the following code into the bottom of your calling +script and run ``python calling.py`` again. + +.. literalinclude:: ../../../libensemble/tests/functionality_tests/test_local_sine_tutorial.py + :language: python + :linenos: + :lineno-start: 37 + :start-at: import matplotlib + :end-at: plt.savefig("tutorial_sines.png") + +Each of these example files can be found in the repository in `examples/tutorials/simple_sine`_. + +**Exercise** + +Write a Calling Script with the following specifications: + +1. 
Set the generator function's lower and upper bounds to -6 and 6, respectively +2. Increase the generator batch size to 10 +3. Set libEnsemble to stop execution after 160 *generations* using the ``gen_max`` option +4. Print an error message if any errors occurred while libEnsemble was running + +.. dropdown:: **Click Here for Solution** + + .. literalinclude:: ../../../libensemble/tests/functionality_tests/test_local_sine_tutorial_2.py + :language: python + :linenos: + :emphasize-lines: 15,16,17,27,33,34 + +.. _examples/tutorials/simple_sine: https://github.com/Libensemble/libensemble/tree/develop/examples/tutorials/simple_sine diff --git a/docs/tutorials/local_sine_tutorial/local_sine_tutorial_5.rst b/docs/tutorials/local_sine_tutorial/local_sine_tutorial_5.rst new file mode 100644 index 0000000000..5c67c73df7 --- /dev/null +++ b/docs/tutorials/local_sine_tutorial/local_sine_tutorial_5.rst @@ -0,0 +1,71 @@ +5. Next steps +============= + +`Introduction `__ \|\| `1. Getting started `__ \|\| `2. Generator `__ \|\| `3. Simulator `__ \|\| `4. Script `__ \|\| **5. Next steps** + +**libEnsemble with MPI** + +MPI_ is a standard interface for parallel computing, implemented in libraries +such as MPICH_ and used at extreme scales. MPI potentially allows libEnsemble's +processes to be distributed over multiple nodes and works in some +circumstances where Python's multiprocessing does not. In this section, we'll +explore modifying the above code to use MPI instead of multiprocessing. + +We recommend the MPI distribution MPICH_ for this tutorial, which can be found +for a variety of systems here_. You also need mpi4py_, which can be installed +with ``pip install mpi4py``. If you'd like to use a specific version or +distribution of MPI instead of MPICH, configure mpi4py with that MPI at +installation with ``MPICC=<path/to/mpicc> pip install mpi4py``. If this +doesn't work, try appending ``--user`` to the end of the command. See the +mpi4py_ docs for more information. 
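To picture what an MPI launch of libEnsemble produces, here is a plain-Python sketch of the manager/worker role split across ranks. This is illustrative only: ``num_procs`` and ``roles`` are hypothetical names, and in a real run each process would obtain its own rank from mpi4py (e.g., ``comm.Get_rank()``) rather than enumerating them.

```python
# Sketch: how roles are distributed across MPI ranks.
# Launching five processes gives ranks 0-4; rank 0 hosts the
# libEnsemble manager and every other rank hosts a worker.
num_procs = 5  # corresponds to an `mpirun -n 5` launch
roles = {rank: ("manager" if rank == 0 else "worker") for rank in range(num_procs)}

for rank, role in sorted(roles.items()):
    print(f"rank {rank}: {role}")
```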
+ +Verify that MPI has been installed correctly with ``mpirun --version``. + +**Modifying the script** + +Only a few changes are necessary to make our code MPI-compatible. For starters, +comment out the ``libE_specs`` definition: + +.. literalinclude:: ../../../libensemble/tests/functionality_tests/test_local_sine_tutorial_3.py + :language: python + :start-at: # libE_specs = LibeSpecs + :end-at: # libE_specs = LibeSpecs + +We'll be parameterizing our MPI runtime with a ``parse_args=True`` argument to +the ``Ensemble`` class instead of ``libE_specs``. We'll also use an ``ensemble.is_manager`` +attribute so only the first MPI rank runs the data-processing code. + +The bottom of your calling script should now resemble: + +.. literalinclude:: ../../../libensemble/tests/functionality_tests/test_local_sine_tutorial_3.py + :linenos: + :lineno-start: 28 + :language: python + :start-at: # replace libE_specs + +With these changes in place, our libEnsemble code can be run with MPI by + +.. code-block:: bash + + mpirun -n 5 python calling.py + +where ``-n 5`` tells ``mpirun`` to produce five processes: one runs the +libEnsemble manager while the other four run libEnsemble workers. + +This tutorial is only a tiny demonstration of the parallelism capabilities of +libEnsemble. libEnsemble has been developed primarily to support research on +high-performance computers, with potentially hundreds of workers performing +calculations simultaneously. Please read our +:doc:`platform guides <../../platforms/platforms_index>` for introductions to using +libEnsemble on many such machines. + +libEnsemble's Executors can launch non-Python user applications and simulations across +allocated compute resources. Try out this feature with a more complicated +libEnsemble use case within our +:doc:`Electrostatic Forces tutorial <../executor_forces_tutorial>`. + +.. _MPI: https://en.wikipedia.org/wiki/Message_Passing_Interface +.. 
_MPICH: https://www.mpich.org/ +.. _here: https://www.mpich.org/downloads/ +.. _mpi4py: https://mpi4py.readthedocs.io/en/stable/install.html diff --git a/docs/tutorials/tutorials.rst b/docs/tutorials/tutorials.rst index cee04fe523..1ea0edc10e 100644 --- a/docs/tutorials/tutorials.rst +++ b/docs/tutorials/tutorials.rst @@ -3,7 +3,7 @@ Tutorials .. toctree:: - local_sine_tutorial + local_sine_tutorial/local_sine_tutorial executor_forces_tutorial forces_gpu_tutorial gpcam_tutorial diff --git a/docs/utilities.rst b/docs/utilities.rst index 3c75dc9703..dbdc2dcb22 100644 --- a/docs/utilities.rst +++ b/docs/utilities.rst @@ -1,47 +1,49 @@ Convenience Tools and Functions =============================== -.. tab-set:: +Setup Helpers +------------- - .. tab-item:: Setup Helpers +.. automodule:: tools + :members: + :no-undoc-members: - .. automodule:: tools - :members: - :no-undoc-members: +Persistent Helpers +------------------ - .. tab-item:: Persistent Helpers +.. _p_gen_routines: - .. _p_gen_routines: +These routines are commonly used within persistent generator functions +such as ``persistent_aposmm`` in ``libensemble/gen_funcs/`` for intermediate +communication with the manager. Persistent simulator functions are also supported. - These routines are commonly used within persistent generator functions - such as ``persistent_aposmm`` in ``libensemble/gen_funcs/`` for intermediate - communication with the manager. Persistent simulator functions are also supported. +.. automodule:: persistent_support + :members: + :no-undoc-members: - .. automodule:: persistent_support - :members: - :no-undoc-members: +Allocation Helpers +------------------ - .. tab-item:: Allocation Helpers +These routines are used within custom allocation functions to help prepare ``Work`` +structures for workers. See the routines within ``libensemble/alloc_funcs/`` for +examples. - These routines are used within custom allocation functions to help prepare ``Work`` - structures for workers. 
See the routines within ``libensemble/alloc_funcs/`` for - examples. +.. automodule:: alloc_support + :members: + :no-undoc-members: - .. automodule:: alloc_support - :members: - :no-undoc-members: +Live Data +--------- - .. tab-item:: Live Data +These classes provide a means to capture and display data during a workflow run. +Users may provide an initialized object via ``libE_specs["live_data"]``. For example:: - These classes provide a means to capture and display data during a workflow run. - Users may provide an initialized object via ``libE_specs["live_data"]``. For example:: + from libensemble.tools.live_data.plot2n import Plot2N + libE_specs["live_data"] = Plot2N(plot_type='2d') - from libensemble.tools.live_data.plot2n import Plot2N - libE_specs["live_data"] = Plot2N(plot_type='2d') +.. automodule:: libensemble.tools.live_data.live_data + :members: - .. automodule:: libensemble.tools.live_data.live_data - :members: - - .. automodule:: plot2n - :members: Plot2N - :show-inheritance: +.. automodule:: plot2n + :members: Plot2N + :show-inheritance: diff --git a/docs/welcome.rst b/docs/welcome.rst index 9498fab1ac..01fdef4425 100644 --- a/docs/welcome.rst +++ b/docs/welcome.rst @@ -40,7 +40,7 @@ libEnsemble A complete toolkit for dynamic ensembles of calculations - New to libEnsemble? :doc:`Start here`. - - Try out libEnsemble with a :doc:`tutorial`. + - Try out libEnsemble with a :doc:`tutorial`. - Go in depth by reading the :doc:`full overview`. - See the :doc:`FAQ` for common questions and answers, errors, and resolutions. - Check us out on `GitHub`_. 
diff --git a/libensemble/libE.py b/libensemble/libE.py index 9af1d52405..219e2cd8c4 100644 --- a/libensemble/libE.py +++ b/libensemble/libE.py @@ -189,7 +189,7 @@ def libE( libE_specs: :obj:`dict` or :class:`LibeSpecs`, Optional Specifications for libEnsemble - :doc:`(example)` + :doc:`(example)` H0: `NumPy structured array `_, Optional diff --git a/libensemble/tools/parse_args.py b/libensemble/tools/parse_args.py index ca1ce53a49..9f52129d9b 100644 --- a/libensemble/tools/parse_args.py +++ b/libensemble/tools/parse_args.py @@ -226,7 +226,7 @@ def parse_args(): libE_specs: :obj:`dict` Settings and specifications for libEnsemble - :doc:`(example)` + :doc:`(example)` """ args, misc_args = parser.parse_known_args(sys.argv[1:])
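The ``parse_args`` docstring above describes handing back ``libE_specs`` settings plus leftover arguments (``misc_args``). The underlying parse-known-args pattern can be sketched with plain ``argparse``; note this is a simplified illustration, not libEnsemble's actual implementation, and the dict assembly here is reduced to two fields for clarity.

```python
import argparse

# Sketch of the parse-known-args pattern behind parse_args():
# recognized options become libEnsemble settings, while anything
# unrecognized is passed back to the caller untouched.
parser = argparse.ArgumentParser()
parser.add_argument("--comms", default="local")         # communication method
parser.add_argument("--nworkers", type=int, default=4)  # number of workers

args, misc_args = parser.parse_known_args(["--nworkers", "5", "--my-extra-flag"])
libE_specs = {"comms": args.comms, "nworkers": args.nworkers}

print(libE_specs)  # {'comms': 'local', 'nworkers': 5}
print(misc_args)   # ['--my-extra-flag']
```

Unrecognized flags flowing through to ``misc_args`` is what lets a calling script accept its own options alongside libEnsemble's.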