
Add environment variables/secrets, repository and org dependabot secrets#614

Open
WolfgangFischerEtas wants to merge 6 commits into eclipse-csi:main from etas-contrib:feature/pr_environment

Conversation

@WolfgangFischerEtas

Overview

This pull request extends Otterdog’s model hierarchy and secret management capabilities. It introduces a new environment‑level layer for variables and secrets, adds Dependabot secrets at both repository and organization level, and updates the documentation accordingly.
New Features

  • Environment‑level variables and secrets
    Environments now support their own variables and secrets. This introduces a new hierarchical layer, forming a structured chain of Repository → Environment → Variables/Secrets.

  • Dependabot secrets at organization level
    Organizations can now define and manage Dependabot secrets centrally, including support for redacted values and credential‑provider behavior.

  • Dependabot secrets at repository level
    Repositories now support Dependabot secrets in the same way as other secret types, including import, export, and validation.

Model Changes

Introduction of parent_object
The new environment layer requires each model object to know its position within the hierarchy. To support this, an internal field was added:

    parent_object: ModelObject | None = dataclasses.field(
        default=None,
        kw_only=True,
        repr=False,
        compare=False,
        metadata={"model_only": True},
    )

This field links each object to its parent in the model tree, enabling consistent validation, navigation, and API construction. It is internal only: it is not serialized, not compared, and not part of the external configuration. The model_only: True metadata ensures it remains strictly an internal modeling detail.

Note on the current approach

The introduction of the new hierarchy also exposed a structural inconsistency in ModelObject. The class now contains both a parent_object field and methods that still accept a parent_object parameter, for example:

    def get_model_header(self, parent_object: ModelObject | None = None) -> str:
        header = f"[bold]{self.model_object_name}[/]"

This hybrid approach is not ideal. With the new internal parent_object field (created via dataclasses.field and marked as model_only: True), methods like get_model_header should eventually rely on the stored parent reference instead of receiving it as a parameter. This cleanup is out of scope for this PR but should be addressed in a follow‑up to ensure a consistent and unified model design.

@kairoaraujo kairoaraujo self-requested a review March 12, 2026 14:24
@AlexanderLanin
Contributor

This closes #537

Contributor

@kairoaraujo kairoaraujo left a comment


Hi @WolfgangFischerEtas, thank you for the PR.

Looking at the code, I'm just curious whether we should worry that this PR adds 2 new API calls per environment (one for secrets, one for variables). That works out to 1 call per repo plus 2 calls × the number of environments.

Another performance suggestion: can we use asyncio.gather() for parallel repo processing?

@WolfgangFischerEtas
Author

I would not worry too much about these 2 additional calls; they are both encapsulated in

            if jsonnet_config.default_env_secret_config is not None:
                # get secrets of the repo environment
                secrets = await rest_api.env.get_secrets(github_id, repo.name, environment.name)

so if you're not interested in these secrets and variables, just remove the default....config from the otterdog_defaults.libjsonnet and the additional calls are not made.

I haven't thought about the asyncio.gather function. But if we make more calls in parallel, won't we run into the next problem with the GitHub rate limits?
I saw this comment in github_organization.py:
# limit the number of repos that are processed concurrently to avoid hitting secondary rate limits
and I'm more concerned about the increasing number of concurrent requests than about a slightly longer processing time.
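Both concerns can coexist: a generic sketch (not Otterdog code; the repo names and fetch function are made up) of `asyncio.gather()` bounded by a semaphore, so repos are processed in parallel without exceeding a fixed number of in-flight requests:

```python
import asyncio


async def fetch_repo(name: str) -> str:
    # Placeholder for a real REST call; just simulates I/O latency.
    await asyncio.sleep(0.01)
    return f"processed {name}"


async def process_repos(names: list[str], max_concurrent: int = 5) -> list[str]:
    # The semaphore caps how many coroutines run at once, so gather()
    # still parallelizes without flooding the API and tripping
    # secondary rate limits.
    sem = asyncio.Semaphore(max_concurrent)

    async def bounded(name: str) -> str:
        async with sem:
            return await fetch_repo(name)

    return await asyncio.gather(*(bounded(n) for n in names))


results = asyncio.run(process_repos([f"repo-{i}" for i in range(10)]))
print(results[0])  # processed repo-0
```

`gather()` preserves input order, so the results line up with the repo list even though completion order varies.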

@WolfgangFischerEtas
Author

@kairoaraujo:
I’ve gone through all review comments and addressed everything that was raised. Please feel free to take another look whenever you have time. Let me know if anything still needs adjustment.
