Peak load management heuristic control #641

Draft

jaredthomas68 wants to merge 22 commits into NatLabRockies:develop from jaredthomas68:peakload
Conversation

@jaredthomas68 (Collaborator)

Peak load management heuristic control

This PR adds peak load management heuristic control to H2I. It does not perform demand dispatch; instead, it dispatches based on peaks in the provided load and rules defined by the user.
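To make the idea concrete, here is a minimal, hypothetical sketch of peak-based dispatch rules: flag the highest-demand time step of each day as that day's peak. The function name, inputs, and daily-peak rule are illustrative assumptions, not the PR's actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical sketch (not the H2Integrate implementation): flag the
# highest-demand time step of each day as that day's peak.
def flag_daily_peaks(times, demand):
    best = {}  # day -> index of that day's maximum demand
    for i, (t, d) in enumerate(zip(times, demand)):
        day = t.date()
        if day not in best or d > demand[best[day]]:
            best[day] = i
    flags = [False] * len(demand)
    for i in best.values():
        flags[i] = True
    return flags

t0 = datetime(2000, 1, 1)
times = [t0 + timedelta(hours=h) for h in range(48)]
demand = [10.0] * 48
demand[7] = 50.0   # day-1 peak
demand[30] = 60.0  # day-2 peak
flags = flag_daily_peaks(times, demand)
```

A controller could then discharge storage around the flagged time steps according to user-defined rules (advance/delay windows, event limits, etc.).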

Section 1: Type of Contribution

  • Feature Enhancement
    • Framework
    • New Model
    • Updated Model
    • Tools/Utilities
    • Other (please describe):
  • Bug Fix
  • Documentation Update
  • CI Changes
  • Other (please describe):

Section 2: Draft PR Checklist

  • Open draft PR
  • Describe the feature that will be added
  • Fill out TODO list steps
  • Describe requested feedback from reviewers on draft PR
  • Complete Section 7: New Model Checklist (if applicable)

TODO:

  • Step 1
  • Step 2

Type of Reviewer Feedback Requested (on Draft PR)

I am primarily looking for high-level structural and implementation feedback at this point.
Structural feedback:

Implementation feedback:

Other feedback:

Section 3: General PR Checklist

  • PR description thoroughly describes the new feature, bug fix, etc.
  • Added tests for new functionality or bug fixes
  • Tests pass (If not, and this is expected, please elaborate in Section 6: Test Results)
  • Documentation
    • Docstrings are up-to-date
    • Related docs/ files are up-to-date, or added when necessary
    • Documentation has been rebuilt successfully
    • Examples have been updated (if applicable)
  • CHANGELOG.md
    • At least one complete sentence has been provided to describe the changes made in this PR
    • After the above, a hyperlink has been provided to the PR using the following format:
      "A complete thought. [PR XYZ](https://github.com/NatLabRockies/H2Integrate/pull/XYZ)", where
      XYZ should be replaced with the actual number.

Section 3: Related Issues

Section 4: Impacted Areas of the Software

Section 4.1: New Files

  • path/to/file.extension
    • method1: What and why something was changed in one sentence or less.

Section 4.2: Modified Files

  • path/to/file.extension
    • method1: What and why something was changed in one sentence or less.

Section 5: Additional Supporting Information

Section 6: Test Results, if applicable

Section 7 (Optional): New Model Checklist

  • Model Structure:
    • Follows established naming conventions outlined in docs/developer_guide/coding_guidelines.md
    • Used attrs class to define the Config to load in attributes for the model
      • If applicable: inherit from BaseConfig or CostModelBaseConfig
    • Added: initialize() method, setup() method, compute() method
      • If applicable: inherit from CostModelBaseClass
  • Integration: Model has been properly integrated into H2Integrate
    • Added to supported_models.py
    • If a new commodity_type is added, update create_financial_model in h2integrate_model.py
  • Tests: Unit tests have been added for the new model
    • Pytest-style unit tests
    • Unit tests are in a "test" folder within the folder a new model was added to
    • If applicable add integration tests
  • Example: If applicable, a working example demonstrating the new model has been created
    • Input file comments
    • Run file comments
    • Example has been tested and runs successfully in test_all_examples.py
  • Documentation:
    • Write docstrings using the Google style
    • Model added to the main models list in docs/user_guide/model_overview.md
      • Model documentation page added to the appropriate docs/ section
      • <model_name>.md is added to the _toc.yml

@elenya-grant elenya-grant self-requested a review March 31, 2026 19:43
@elenya-grant (Collaborator) left a comment:

just left some initial comments/questions - haven't done a deep dive yet (so some of my questions/comments may be silly or I'll be able to answer during a deep-dive) but plan to do a deeper review by Thursday morning. I only looked at the changes and additions to the control classes but will review the tests in the second review I do.

Overall looks like a great start - most of my comments were small or were questions!

# determine demand_profile peaks using defaults of daily peaks inside peak_range
# for the full simulation but respecting the peak range specified in the config
self.secondary_peaks_df = self.get_peaks(
demand_profile=self.condig.demand_profile,
Collaborator:

should this be inputs[f"{self.config.commodity}_demand"] instead of the demand from the config?

Collaborator (Author):

Some of the reasoning for this is in my comment here: #641 (comment). I guess I can split up demand and time stamp as separate inputs so we can use the input like the other controllers.

Collaborator (Author):

I have split up demand and date_time

@jaredthomas68 jaredthomas68 requested a review from vijay092 April 2, 2026 16:11
@elenya-grant (Collaborator) left a comment:

Howdy! I gave this a deeper look! I think I'm a little confused about how this method works (I haven't tried to understand it in depth yet) - so most of my comments are nitpicks or questions. My only blocking comment is about the error being removed from load_plant_yaml - I don't think that error message should be removed at this time.

I think a visual (or two) would be nice to explain some of the inputs to the controller - a doc page with some visuals and an explanation of the inputs would be super helpful in making it easier for users to understand how to change the control input parameters for their use case.

dt_seconds = int(simulation_cfg["dt"])

# Optional start_time in config; default to a fixed reference timestamp.
start_time = simulation_cfg.get("start_time", "2000-01-01 00:00:00")
Collaborator:

the start-time format in the plant config is defined as mm/dd/yyyy HH:MM:SS or mm/dd HH:MM:SS and defaults to 01/01 00:30:00 (it doesn't include a year because it was initially going to be used with resource data, and the year may change based on the resource year). The format here does not match - do you think we could make the format consistent, mm/dd/yyyy instead of yyyy-mm-dd?

I made a similar function when I was starting on the resource models (it never made it in), but it handles whether a year was added or not:

from datetime import datetime, timezone, timedelta

def make_time_profile(
    start_time: str,
    dt: float | int,
    n_timesteps: int,
    time_zone: int | float,
    start_year: int | None = None,
):
    """Generate a time-series profile for a given start time, time step interval, and
    number of timesteps, with a timezone signature.

    Args:
        start_time (str): simulation start time formatted as 'mm/dd/yyyy HH:MM:SS' or
            'mm/dd HH:MM:SS'
        dt (float | int): time step interval in seconds.
        n_timesteps (int): number of timesteps in a simulation.
        time_zone (int | float): timezone offset from UTC in hours.
        start_year (int | None, optional): year to use for start-time. if start-time
            is formatted as 'mm/dd/yyyy HH:MM:SS' then will overwrite original year.
            If None, the year will default to 1900 if start-time is formatted as 'mm/dd HH:MM:SS'.
            Defaults to None.

    Returns:
        list[datetime]: list of datetime objects that represents the time profile
    """

    tz_utc_offset = timedelta(hours=time_zone)
    tz = timezone(offset=tz_utc_offset)
    tz_str = str(tz).replace("UTC", "").replace(":", "")
    if tz_str == "":
        tz_str = "+0000"
    # timezone formatted as ±HHMM[SS[.ffffff]]
    start_time_w_tz = f"{start_time} ({tz_str})"
    if len(start_time.split("/")) == 3:
        if start_year is not None:
            start_time_month_day_year, start_time_time = start_time.split(" ")
            start_time_month_day = "/".join(i for i in start_time_month_day_year.split("/")[:-1])
            start_time_w_tz = f"{start_time_month_day}/{start_year} {start_time_time} ({tz_str})"

        t = datetime.strptime(start_time_w_tz, "%m/%d/%Y %H:%M:%S (%z)")
    elif len(start_time.split("/")) == 2:
        if start_year is not None:
            start_time_month_day, start_time_time = start_time.split(" ")
            start_time_w_tz = f"{start_time_month_day}/{start_year} {start_time_time} ({tz_str})"
            t = datetime.strptime(start_time_w_tz, "%m/%d/%Y %H:%M:%S (%z)")
        else:
            # NOTE: year will default to 1900
            t = datetime.strptime(start_time_w_tz, "%m/%d %H:%M:%S (%z)")
    time_profile = [None] * n_timesteps
    time_step = timedelta(seconds=dt)
    for i in range(n_timesteps):
        time_profile[i] = t
        t += time_step
    return time_profile
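A quick usage sketch of the function above. To keep this snippet self-contained, it includes a condensed copy of make_time_profile (docstring omitted, same logic); the example inputs (UTC-7, hourly steps, start_year=2024) are made up for illustration.

```python
from datetime import datetime, timezone, timedelta

# Condensed copy of make_time_profile from the comment above, so this
# sketch runs standalone; see the full version for the docstring.
def make_time_profile(start_time, dt, n_timesteps, time_zone, start_year=None):
    tz_str = str(timezone(timedelta(hours=time_zone))).replace("UTC", "").replace(":", "")
    if tz_str == "":
        tz_str = "+0000"
    start_time_w_tz = f"{start_time} ({tz_str})"
    if len(start_time.split("/")) == 3:
        if start_year is not None:
            md_y, hms = start_time.split(" ")
            md = "/".join(md_y.split("/")[:-1])
            start_time_w_tz = f"{md}/{start_year} {hms} ({tz_str})"
        t = datetime.strptime(start_time_w_tz, "%m/%d/%Y %H:%M:%S (%z)")
    else:
        if start_year is not None:
            md, hms = start_time.split(" ")
            t = datetime.strptime(f"{md}/{start_year} {hms} ({tz_str})", "%m/%d/%Y %H:%M:%S (%z)")
        else:
            # NOTE: year will default to 1900
            t = datetime.strptime(start_time_w_tz, "%m/%d %H:%M:%S (%z)")
    step = timedelta(seconds=dt)
    return [t + i * step for i in range(n_timesteps)]

# Usage: hourly profile starting at the plant-config default 01/01 00:30:00,
# with a year supplied and a UTC-7 offset.
profile = make_time_profile("01/01 00:30:00", 3600, 4, -7, start_year=2024)
```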

dt_seconds = int(simulation_cfg["dt"])

# Optional start_time in config; default to a fixed reference timestamp.
start_time = simulation_cfg.get("start_time", "2000-01-01 00:00:00")
Collaborator:

if the plant config is loaded using load_plant_yaml(), then the start time should always be included. In other words, I don't think we should have default values in both the modeling schema and this function. But I'm happy to see a function like this get in!

@@ -0,0 +1,11 @@
name: plant_config
description: Demonstrates multivariable streams with a gas combiner
Collaborator:

update description in plant config

commodity: electricity
commodity_rate_units: kW
max_charge_rate: 2500.0 # kW/time step, 1, 2.5, or 5 MW
max_capacity: 10000.0 # kWh, 80 MWh
Collaborator:

comment for max_capacity is wrong, should say 10 MWh

max_supervisor_events: (int | None, optional): The maximum number of discharge events
allowed for the supervisor in the period specified in max_supervisor_event_period,
or across all time steps if max_supervisor_event_period is None.

Collaborator:

could you add in the other attributes to the doc string? Like peak_range, advance_discharge_period, delay_charge_period, allow_charge_in_peak_range, and min_peak_proximity?

},
)

def __attrs_post_init__(self):
Collaborator:

should the dictionary inputs be checked in the __attrs_post_init__ method to verify that they have the right keys?
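One way the suggested key check could look, shown here with the stdlib dataclasses analogue (__post_init__); with attrs the same check would live in __attrs_post_init__. The field name peak_range and the required keys are illustrative, not the PR's actual API.

```python
from dataclasses import dataclass

# Illustrative sketch: validate dictionary inputs after construction.
@dataclass
class PeakConfig:
    peak_range: dict

    def __post_init__(self):
        # With attrs, this body would go in __attrs_post_init__ instead.
        required = {"start", "end"}
        missing = required - set(self.peak_range)
        if missing:
            raise ValueError(f"peak_range missing keys: {sorted(missing)}")

ok = PeakConfig(peak_range={"start": "17:00", "end": "21:00"})
try:
    PeakConfig(peak_range={"start": "17:00"})  # missing "end"
    valid = True
except ValueError:
    valid = False
```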


self.get_allowed_discharge()

@staticmethod
Collaborator:

why is this a staticmethod rather than just a normal method? (same with _normalize_peak_range?)

# Dispatch strategy outline:
# - Discharge: Starting when time_to_peak <= advance_discharge_period
# * Discharge at max rate (or less to reach targets)
# * Stop discharging only when SOC reaches min_soc
Collaborator:

could these inline comments get moved closer to where that logic is represented in the code?


This method applies an open-loop storage control strategy to balance the
commodity demand and input flow. When input exceeds demand, excess commodity
is used to charge storage (subject to rate, efficiency, and SOC limits). When
Collaborator:

The description of this compute method makes it seem really similar to the DemandOpenLoopStorageController and the HeuristicLoadFollowingControl - could you update the docstring to explain the peak-shaving novelty of this?

@vijay092 (Collaborator) commented Apr 2, 2026

Very exciting work @jaredthomas68! Thanks for putting this together in such a short time! I did a full pass through h2integrate/control/control_strategies/storage/plm_openloop_storage_controller.py and left comments wherever I spotted small issues. Feel free to address them as you see fit.

dispatch_priority_demand_profile: str = field(
validator=contains(["demand_profile", "demand_profile_supervisor"]),
)
max_supervisor_events: int | None = (field(default=None),)
Collaborator:

Is this supposed to be a tuple?
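The trailing comma is the likely culprit: in Python, a trailing comma inside parentheses creates a one-element tuple, so the attribute default becomes `(field(default=None),)` rather than an attrs field. A minimal illustration of the pitfall:

```python
# A trailing comma inside parentheses makes a 1-element tuple;
# without it, the parentheses are just grouping.
a = (42,)   # -> tuple of one element
b = (42)    # -> plain int

# The likely intended declaration in the PR (illustrative, drop the comma):
# max_supervisor_events: int | None = field(default=None)
```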

charge_efficiency: float | None = field(default=None, validator=range_val_or_none(0, 1))
discharge_efficiency: float | None = field(default=None, validator=range_val_or_none(0, 1))
round_trip_efficiency: float | None = field(default=None, validator=range_val_or_none(0, 1))
demand_profile_supervisor: int | float | list | None = field()
Collaborator:

default = None


self.max_discharge_rate = self.max_charge_rate

# make sure peak_range is in correct format because yaml
Collaborator:

Same problem for advance_discharge_period, right?

)

# Store simulation parameters for later use
self.dt = self.options["plant_config"]["plant"]["simulation"]["dt"]
Collaborator:

Never used.


# Store simulation parameters for later use
self.dt = self.options["plant_config"]["plant"]["simulation"]["dt"]
self.time_index = build_time_series_from_plant_config(self.options["plant_config"])
Collaborator:

I think it is worth adding a length check against self.n_timesteps somewhere.
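A sketch of the suggested length check; the names time_index and n_timesteps follow the snippet above, but the check itself is illustrative, not code from the PR.

```python
# Illustrative guard: fail fast if the built time index does not match
# the configured number of simulation timesteps.
def check_time_index(time_index, n_timesteps):
    if len(time_index) != n_timesteps:
        raise ValueError(
            f"time index has {len(time_index)} entries, expected {n_timesteps}"
        )

check_time_index([0, 1, 2], 3)  # matching lengths: passes silently
try:
    check_time_index([0, 1], 3)  # mismatch: should raise
    raised = False
except ValueError:
    raised = True
```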

day_df = supervisory_peaks_df[
supervisory_peaks_df["date_time"].dt.floor("D") == day
]
# If supervisor has peaks on the day, use supervisor's flags for all rows that day
Collaborator:

Good to add a check for when supervisor is None.

next_peak_time - self.peaks_df.loc[idx, "date_time"]
)

def get_allowed_discharge(self):
Collaborator:

Method name is misleading. It actually computes "allow_charge"?

soc_array[i] = deepcopy(soc)

# stay in discharge mode until the battery is fully discharged
if soc <= soc_min:
@vijay092 (Collaborator) Apr 2, 2026:

Note for the future: discharging is only set to False when soc <= soc_min. If the battery doesn't fully drain during the event duration, discharging will stay True.

# start discharging when we approach a peak and have some charge
if time_to_peak <= advance_discharge_period and soc > soc_min:
discharging = True

Collaborator:

Suggest adding charging = False here in case charging hasn't been set to False in the previous timestep.
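A sketch of the suggested guard, making charge and discharge modes mutually exclusive when a discharge event begins. Variable names follow the quoted snippet; the function wrapper and threshold values are illustrative.

```python
# Illustrative mode-transition logic: entering discharge mode explicitly
# clears charge mode, in case charging was still True from a prior step.
def update_mode(time_to_peak, soc, soc_min, advance_discharge_period,
                charging, discharging):
    # start discharging when we approach a peak and have some charge
    if time_to_peak <= advance_discharge_period and soc > soc_min:
        discharging = True
        charging = False  # suggested fix: leave charge mode explicitly
    return charging, discharging

charging, discharging = update_mode(
    time_to_peak=1, soc=0.5, soc_min=0.1,
    advance_discharge_period=2, charging=True, discharging=False,
)
```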

# Note: discharge_needed is internal (storage view), max_discharge_rate is external
discharge_needed = max_discharge_rate / discharge_eff
discharge = min(
discharge_needed, available_discharge, max_discharge_rate / discharge_eff
Collaborator:

The first and third terms are the same.
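Since discharge_needed is defined as max_discharge_rate / discharge_eff, the three-way min reduces to a two-way min over the remaining distinct terms (illustrative values below):

```python
# With discharge_needed == max_discharge_rate / discharge_eff, the
# three-way min collapses to a two-way min.
max_discharge_rate = 100.0
discharge_eff = 0.9
available_discharge = 80.0

discharge_needed = max_discharge_rate / discharge_eff
original = min(discharge_needed, available_discharge, max_discharge_rate / discharge_eff)
simplified = min(available_discharge, max_discharge_rate / discharge_eff)
```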
