Synchronise master with upstream #868
Merged
Alex-Welsh merged 21 commits into stackhpc/master on Apr 27, 2026
Conversation
Change-Id: I305bb756eb2fd8f582bbef5b2674f8b09b141ab8
Add authentication settings for Neutron and replace deprecated configuration. See change I686cfdef78de927fa4bc1921c15e8d5853fd2ef9 for more details [1].

[1] https://review.opendev.org/c/openstack/octavia/+/866327

Change-Id: I212dbfc7d555c731f629c1f73ae518351dd713f6
Signed-off-by: Pierre Riteau <pierre@stackhpc.com>
The configuration for the ProxySQL and Valkey exporters was incorrectly nested inside the 'enable_prometheus_alertmanager' block. This caused the scrape jobs to be missing from prometheus.yml when Alertmanager was disabled, even if the exporters themselves were enabled. This commit moves these exporter blocks outside the Alertmanager conditional to ensure metrics are collected independently.

Closes-Bug: #2148279
Change-Id: Ib4ac95cfce2272529cdc8b98c40dc7f21c50e27f
Signed-off-by: Piotr Milewski <vurmil@gmail.com>
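Illustratively, the shape of the bug and the fix in the Prometheus configuration template looks like this. This is a minimal sketch, not the actual kolla-ansible template: the variable names follow the project's `enable_prometheus_*` convention, and the job bodies are elided.

```jinja
{# Before: exporter jobs rendered only when Alertmanager is enabled #}
{% if enable_prometheus_alertmanager | bool %}
  # ... alertmanager scrape job ...
  {% if enable_prometheus_proxysql_exporter | bool %}
  # ... proxysql scrape job (never rendered if Alertmanager is off) ...
  {% endif %}
{% endif %}

{# After: each exporter is guarded only by its own flag #}
{% if enable_prometheus_proxysql_exporter | bool %}
  # ... proxysql scrape job ...
{% endif %}
```

With the guards untangled, disabling Alertmanager no longer silently drops the exporter scrape jobs from prometheus.yml.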
Kolla-Ansible has long defaulted to deploying cinder-volume/backup containers on the storage group, which suits Cinder LVM deployments. The problem is that the majority of deployments do not use the Cinder LVM backend: they mainly use Ceph (based on the OpenStack User Survey) and probably some other proprietary vendor backends, which are API driven. Let's change the default to reflect reality:

- If a user has Cinder LVM enabled, we deploy it as before.
- If a user has both Cinder LVM and another backend enabled, we deploy cinder-volume and cinder-backup in both places (cinder group and storage group).
- If a user doesn't have Cinder LVM enabled, we deploy only in the cinder group.

Change-Id: Ia4a874de8f1b8eab8ff688eab4c6af013449c18c
Signed-off-by: Michal Nasiadka <mnasiadka@gmail.com>
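The three cases above can be sketched as globals.yml scenarios. This is illustrative only: the flag names follow kolla-ansible's `enable_cinder_backend_*` / `cinder_backend_ceph` convention, and the resulting placement is described in comments rather than taken from the role code.

```yaml
# Scenario 1: Ceph only (the common case)
#   enable_cinder_backend_lvm: "no"
#   cinder_backend_ceph: "yes"
# -> cinder-volume/cinder-backup deployed only on hosts in the [cinder] group

# Scenario 2: LVM plus Ceph
#   enable_cinder_backend_lvm: "yes"
#   cinder_backend_ceph: "yes"
# -> deployed on hosts in both the [cinder] and [storage] groups

# Scenario 3: LVM only
#   enable_cinder_backend_lvm: "yes"
#   cinder_backend_ceph: "no"
# -> deployed on [storage] hosts, as before
```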
When monitoring is deployed on hosts outside of the control group, it will not pass the service_enabled_and_mapped_to_host filter and the bootstrapping step will not be executed, resulting in the lack of a prometheus user in MariaDB.

Closes-Bug: #2148551
Change-Id: I520131aa976609dd2c1f5b95e4e5fc5fb8fc6e57
Signed-off-by: Grzegorz Bialas <grzegorz@stackhpc.com>
Since we use the same globals before and after the upgrade, we must not enable vpnaas before the upgrade, because there is no additional logic for configuring neutron-ovn-vpn-agent in the previous release. Rename the role under /roles to fit our naming and ensure no clashes with other role names.

Change-Id: I3ccef6b1e75224a60036e9bf49500804ee76eba6
Signed-off-by: Michal Nasiadka <mnasiadka@gmail.com>
We enabled Prometheus for testing of the network exporter.

Change-Id: I382ab5394baef107cfe62f6e1416561b269cf58c
Signed-off-by: Michal Nasiadka <mnasiadka@gmail.com>
j2lint 1.2.0 requires rich<15 and upper-constraints bumped it to 15.0.0, so things are failing right now. However, we don't need upper-constraints for the linters job.

Change-Id: Ie14c11e299da1e505d379ad35928cc62929a678f
Signed-off-by: Michal Nasiadka <mnasiadka@gmail.com>
Allow services to set an explicit uWSGI thread count alongside the default `enable-threads` setting [1]. Horizon now passes the existing `horizon_wsgi_threads` to uWSGI.

[1] https://uwsgi-docs.readthedocs.io/en/latest/Options.html#threads

Change-Id: I0e33f2331ba10645f5947572c3d443647dd0c3a1
Signed-off-by: Bartosz Bezak <bartosz@stackhpc.com>
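As an example, an operator could pin the Horizon uWSGI thread count via globals.yml. This is a sketch under the assumption that `horizon_wsgi_threads` is operator-overridable there; the rendered ini lines are illustrative of uWSGI's documented `enable-threads`/`threads` options, not copied from the role's template.

```yaml
# globals.yml (illustrative)
horizon_wsgi_threads: 8
# Expected effect in the generated uWSGI config, roughly:
#   enable-threads = true
#   threads = 8
```

Per the uWSGI documentation linked above, setting `threads` implicitly enables the threading support that `enable-threads` provides on its own.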
The template is no longer used, as the Barbican API already relies on the shared service-uwsgi-config role. This removes an obsolete and unused file.

Change-Id: Ifc8f652843c85b95fa1f5d12a41db21100994ea5
Signed-off-by: Michal Arbet <michal.arbet@ultimum.io>
The migration to uWSGI has been successfully completed. All API services now run exclusively under uWSGI. The *_wsgi_provider variables have been removed from all affected roles, along with leftover Apache wsgi config templates and fluentd Apache log handling. Rework the Apache fluentd input configuration so it doesn't glob the whole of /var/log/kolla/*/* and only includes the required services when enabled.

Change-Id: I1475e629d033163a50a5c681c8fe2557a7b10bae
Signed-off-by: Michal Nasiadka <mnasiadka@gmail.com>
Signed-off-by: Bartosz Bezak <bartosz@stackhpc.com>
Adds support for deploying multiple instances of the Nova Compute Ironic service on the same host. This is useful in large baremetal deployments, where the sharding and/or conductor group feature is used to scale out the service. A further patch to support deploying multiple Ironic conductor instances will follow.

Co-Authored-By: Bartosz Bezak <bartosz@stackhpc.com>
Co-Authored-By: Bertrand Lanson <bertrand.lanson@protonmail.com>
Change-Id: Ibbddfd87e831a3775c3ed66c80b18e5070cedc90
Signed-off-by: Doug Szumski <doug@stackhpc.com>
Alex-Welsh approved these changes on Apr 27, 2026
This PR contains a snapshot of the upstream master branch.