
add check-alpn to server config for haproxy_route relations #386

@alexdlukens

Description

Bug Description

When upgrading from HAProxy rev241 to rev330, IS noticed health check failures.
Discussed on Mattermost here: https://chat.canonical.com/canonical/pl/mnw9t43fcibz7ypgcriw8ibpue

The backend (a Vault deployment) printed errors about bogus requests:

2026-03-11T21:22:13Z vault.vaultd[12960]: 2026-03-11T21:22:13.789Z [INFO]  http2: server: error reading preface from client <haproxy-ip>:39012: bogus greeting "GET /sys/health HTTP/1.0"

and on the HAProxy side we saw:

Mar 11 21:19:53 ingress-ps7-is-admin-canonical-com-01 haproxy[3318914]: Server prod-canonical-vault-ps7-ingress-configurator/prod-canonical-vault-ps7-ingress-configurator_8200_0 is DOWN, reason: Layer7 timeout, check duration: 59999ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.

I found a similar bug report against Octavia that references this behavior in HAProxy, where health checks fail to negotiate SSL: https://bugs.launchpad.net/octavia/+bug/2043812.

Their remedy is to add check-alpn to each backend server line.
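As a minimal sketch of what that remedy looks like on a rendered server line (backend name, server name, address, and CA path are placeholders, not taken from the actual config):

```
backend vault_backend
    option httpchk GET /sys/health
    # Without check-alpn, the health check inherits "alpn h2,http/1.1" from
    # the server line and the HTTP/1.0 check request can be sent over a
    # connection the backend negotiated as h2, producing the "bogus greeting"
    # errors above. check-alpn pins the ALPN protocols used for the check.
    server vault_0 10.0.0.10:8200 check inter 60s rise 2 fall 1 ssl ca-file /path/to/ca.pem alpn h2,http/1.1 check-alpn h2,http/1.1
```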

When comparing rev241 to rev330, I noticed a difference in the server block in haproxy_route.cfg.j2: the addition of alpn h2,http/1.1 at the end of the bind and server lines.

$ git diff rev241:templates/haproxy_route.cfg.j2 -- haproxy-operator/templates/haproxy_route.cfg.j2 | grep alpn -B1
-    bind [::]:443 v4v6 ssl crt {{ haproxy_crt_dir }}
+    bind [::]:443 v4v6 ssl crt {{ haproxy_crt_dir }} alpn h2,http/1.1
--
-    server {{ server.server_name }} {{ server.address }}:{{ server.port }}{% if server.check %} check inter {{ server.check.interval }}s rise {{ server.check.rise }} fall {{ server.check.fall }}{% endif +%}{% if server.protocol == "https" %} ssl ca-file {{ haproxy_cas_file }}{% endif +%}
+    server {{ server.server_name }} {{ server.address }}:{{ server.port }}{% if server.check %} check inter {{ server.check.interval }}s rise {{ server.check.rise }} fall {{ server.check.fall }}{% endif +%}{% if server.protocol == "https" %} ssl ca-file {{ haproxy_cas_file }} alpn h2,http/1.1{% endif +%}

I think that adding check-alpn h2,http/1.1 to the server line will remedy this behavior.
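Applied to the rev330 template above, the proposed change would look something like this (a sketch, not a tested patch):

```
    server {{ server.server_name }} {{ server.address }}:{{ server.port }}{% if server.check %} check inter {{ server.check.interval }}s rise {{ server.check.rise }} fall {{ server.check.fall }}{% endif +%}{% if server.protocol == "https" %} ssl ca-file {{ haproxy_cas_file }} alpn h2,http/1.1 check-alpn h2,http/1.1{% endif +%}
```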

Impact

Medium (functionality degraded, workaround exists)

Impact Rationale

We have reverted to rev241 and the deployment is working properly again.

To Reproduce

  • Deploy HAProxy rev330, ingress-configurator, and a backing application with SSL enabled.
  • Configure SSL from HAProxy to the backing application.
  • Configure health checks on the ingress configurator, e.g.:
    • timeout-server=60
    • health-check-path=/hello/world
    • health-check-rise=2
    • health-check-interval=60
    • health-check-fall=1
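For reference, those health check options could be set on the ingress-configurator charm with juju config. The application name is a placeholder, and the option names are taken from the list above (I have not verified them against the charm's config schema):

```shell
juju config ingress-configurator \
    timeout-server=60 \
    health-check-path=/hello/world \
    health-check-rise=2 \
    health-check-interval=60 \
    health-check-fall=1
```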

I confirmed the health checks were configured by SSHing into the HAProxy unit and checking the backend definition for the haproxy_route relation: if I saw option httpchk GET /hello/world, the health checks were configured correctly.

Environment

This occurred on PS7 ingress environments. This is one of the few deployments with backend health checks configured with SSL, IIUC.

Relevant log output

From reproduced env (different from initial report)

Traefik k8s pod logs (backing application):

ubuntu@juju-d1e12d-k8s-jenkins-is-test-gh-11:~$ sudo k8s kubectl logs -n k8s-jenkins-is-test-gh-server pod/traefik-k8s-0 -c traefik --tail=1000 | grep bogus
2026-03-11T22:34:56.738Z [traefik] time="2026-03-11T22:34:56Z" level=debug msg="http2: server: error reading preface from client <redacted>:43469: bogus greeting \"GET /k8s-jenkins-is-test\""
2026-03-11T22:34:57.433Z [traefik] time="2026-03-11T22:34:57Z" level=debug msg="http2: server: error reading preface from client <redacted>:61035: bogus greeting \"GET /k8s-jenkins-is-test\""
2026-03-11T22:36:55.609Z [traefik] time="2026-03-11T22:36:55Z" level=debug msg="http2: server: error reading preface from client <redacted>:51873: bogus greeting \"GET /k8s-jenkins-is-test\""
... many more times

From haproxy:

Mar 11 22:35:50 ingress-ps7-is-jenkins-canonical-com-01 haproxy[1043918]: [WARNING]  (1043918) : Server k8s-jenkins-is-test-gh-server-ingress-configurator/k8s-jenkins-is-test-gh-server-ingress-configurator_443_0 is DOWN, reason: Layer7 timeout, check duration: 59999ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar 11 22:35:50 ingress-ps7-is-jenkins-canonical-com-01 haproxy[1043918]: [ALERT]    (1043918) : backend 'k8s-jenkins-is-test-gh-server-ingress-configurator' has no server available!
Mar 11 22:35:50 ingress-ps7-is-jenkins-canonical-com-01 haproxy[1043918]: Server k8s-jenkins-is-test-gh-server-ingress-configurator/k8s-jenkins-is-test-gh-server-ingress-configurator_443_0 is DOWN, reason: Layer7 timeout, check duration: 59999ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar 11 22:35:50 ingress-ps7-is-jenkins-canonical-com-01 haproxy[1043918]: backend k8s-jenkins-is-test-gh-server-ingress-configurator has no server available!
Mar 11 22:35:50 ingress-ps7-is-jenkins-canonical-com-01 haproxy[3014996]: [NOTICE]   (3014996) : haproxy version is 2.8.5-1ubuntu3.4
Mar 11 22:35:50 ingress-ps7-is-jenkins-canonical-com-01 haproxy[3014996]: [NOTICE]   (3014996) : path to executable is /usr/sbin/haproxy
Mar 11 22:35:50 ingress-ps7-is-jenkins-canonical-com-01 haproxy[3014996]: [WARNING]  (3014996) : Former worker (1043918) exited with code 0 (Exit)
Mar 11 22:35:56 ingress-ps7-is-jenkins-canonical-com-01 haproxy[1045188]: [WARNING]  (1045188) : Server k8s-jenkins-is-test-gh-server-ingress-configurator/k8s-jenkins-is-test-gh-server-ingress-configurator_443_0 is DOWN, reason: Layer7 timeout, check duration: 59999ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar 11 22:35:56 ingress-ps7-is-jenkins-canonical-com-01 haproxy[1045188]: Server k8s-jenkins-is-test-gh-server-ingress-configurator/k8s-jenkins-is-test-gh-server-ingress-configurator_443_0 is DOWN, reason: Layer7 timeout, check duration: 59999ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar 11 22:35:56 ingress-ps7-is-jenkins-canonical-com-01 haproxy[1045188]: [ALERT]    (1045188) : backend 'k8s-jenkins-is-test-gh-server-ingress-configurator' has no server available!
Mar 11 22:35:56 ingress-ps7-is-jenkins-canonical-com-01 haproxy[1045188]: backend k8s-jenkins-is-test-gh-server-ingress-configurator has no server available!

Additional context

No response
