
Raise default HTTP/2 receive windows and batch HTTP/2 receive-window refills#481

Open
ericmj wants to merge 5 commits into main from ericmj/http2-larger-default-windows

Conversation


@ericmj ericmj commented Apr 13, 2026

Builds on top of #480; merge that one first, then rebase this branch on top so the commits land in the correct order.

Raise default HTTP/2 receive windows

Default connection receive window is now 16 MB (was 65_535, the RFC
7540 §6.9.2 initial value), sent via a WINDOW_UPDATE on stream 0 as part of
the connection preface. Default stream receive window is now 4 MB (was
65_535), advertised via SETTINGS_INITIAL_WINDOW_SIZE in the same
preface. Both are settable, via the new `:connection_window_size` option
and the existing `:client_settings` option respectively.

Window size / RTT sets a hard cap on per-stream throughput. At the
previous 65_535-byte stream window:

  Path (typical RTT)       | 65 KB    | 4 MB     | 16 MB
  -------------------------|----------|----------|----------
  LAN (1 ms)               | 62 MB/s  | 4 GB/s   | 16 GB/s
  Region (20 ms)           | 3.1 MB/s | 200 MB/s | 800 MB/s
  Cross-country (70 ms)    | 0.9 MB/s | 57 MB/s  | 229 MB/s
  Transatlantic (100 ms)   | 0.6 MB/s | 40 MB/s  | 160 MB/s
  Transpacific (130 ms)    | 0.5 MB/s | 31 MB/s  | 123 MB/s
  Antipodal (230 ms)       | 0.3 MB/s | 17 MB/s  | 70 MB/s

Any caller talking to a server more than a few milliseconds away was
bottlenecked well below their link bandwidth without knowing why. 4 MB
per stream saturates gigabit anywhere on earth; 16 MB at the connection
level lets four streams run in parallel at full rate before the shared
connection window becomes the bottleneck.
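The table is just division (a sketch of the arithmetic; MB here means 10^6 bytes):

```elixir
# Per-stream throughput ceiling imposed by flow control: at most `window`
# bytes can be in flight per round trip, so throughput <= window / RTT.
ceiling_mb_per_s = fn window_bytes, rtt_seconds ->
  window_bytes / rtt_seconds / 1_000_000
end

ceiling_mb_per_s.(65_535, 0.100)    # old 65 KB window, 100 ms RTT: ~0.66 MB/s
ceiling_mb_per_s.(4_000_000, 0.100) # new 4 MB stream window, same path: ~40 MB/s
```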

For comparison, Go's net/http2 uses 1 GB / 4 MB (conn/stream) and gun
uses 8 MB / 8 MB. 16 MB / 4 MB is roughly in the same family, with the
ratio chosen so conn is not the bottleneck for typical parallel use.

Callers who want the old behavior can pass `connection_window_size: 65_535` and `client_settings: [initial_window_size: 65_535]` to
`connect/4`.
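For example (a sketch, not code from the PR; the host is a placeholder and `:protocols` just forces HTTP/2):

```elixir
# Restore the pre-change 65_535-byte receive windows at both levels.
{:ok, conn} =
  Mint.HTTP.connect(:https, "example.com", 443,
    protocols: [:http2],
    connection_window_size: 65_535,
    client_settings: [initial_window_size: 65_535]
  )
```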

Batch HTTP/2 receive-window refills

Previously `refill_client_windows/3` sent a WINDOW_UPDATE on both the
connection and the stream after every DATA frame, with the increment
set to the frame's byte size. That kept the advertised window pinned
at its peak but tied outbound WINDOW_UPDATE traffic one-to-one with
inbound DATA frames.

An adversarial server can exploit that ratio. By sending many small
DATA frames — in the limit, one byte of body per frame — it can force
the client to emit one 13-byte WINDOW_UPDATE per frame. At high frame
rates that's a small but real client-side amplification: a flood of
outbound control frames driven entirely by the peer.
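The byte math behind that amplification (a sketch; the 9-byte frame header and 4-byte WINDOW_UPDATE payload are per RFC 7540, and this assumes the old behavior of refilling both the stream and the connection window on every DATA frame):

```elixir
data_frame_in = 9 + 1            # DATA frame carrying a 1-byte body: 10 bytes received
window_update = 9 + 4            # one WINDOW_UPDATE frame: 13 bytes
refills_out = 2 * window_update  # stream-level + connection-level refill: 26 bytes sent

refills_out / data_frame_in      # 2.6 bytes of client output per byte received
```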

This change gates refills on a threshold. The client tracks the
current remaining window for the connection and each stream and only
sends a WINDOW_UPDATE once the remaining window drops to
`:receive_window_update_threshold` bytes. The update then tops the
window straight back up to its configured peak. One frame per
`receive_window_size - receive_window_update_threshold` bytes
consumed, not per DATA frame. The default threshold is 160_000 bytes,
matching gun's `connection_window_update_threshold` — roughly 10× the
default 16 KB max frame size, leaving the server a safety margin
before the window would starve it.

Behaviour-wise:

  • With the new 4 MB / 16 MB default windows, the client sends
    roughly one stream-level WINDOW_UPDATE per ~3.84 MB consumed
    (previously ~250 per 4 MB), and one connection-level update per
    ~15.84 MB (previously ~1000 per 16 MB).
  • Callers that explicitly set the stream or connection window at or
    below the 160_000-byte threshold get the old behaviour — one
    refill per frame — because remaining is always ≤ threshold.
  • The threshold is shared between the connection and per-stream
    windows; there is no way to tune them independently today.
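The refill cadence in the first bullet is just peak minus threshold (a sketch using this PR's defaults):

```elixir
# One WINDOW_UPDATE per (peak window - threshold) bytes consumed.
bytes_per_refill = fn peak, threshold -> peak - threshold end

bytes_per_refill.(4_000_000, 160_000)   # stream level: 3_840_000 (~3.84 MB)
bytes_per_refill.(16_000_000, 160_000)  # connection level: 15_840_000 (~15.84 MB)
```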

The threshold is tunable via the new `:receive_window_update_threshold`
option to `Mint.HTTP.connect/4`.

Throughput trade-off from batching

Batching trades a small slice of the throughput ceiling for bounded
amplification and reduced ack traffic. With per-frame refills the effective window
stays at its peak continuously, so throughput ≈ window / RTT. With
batching, the server can send at most (window - threshold) before
pausing until the next refill arrives an RTT later, so the steady-state
ceiling drops to (window - threshold) / RTT — about 96 % of the
per-frame ceiling at the default 4 MB / 160 KB combination. In a local
benchmark over a 100 ms-RTT delay proxy, a single stream reached
~24 MB/s against a theoretical 40 MB/s per-frame ceiling; most of the
gap is per-frame CPU overhead in the rig, ~4 % is the batching itself.
The amplification fix is worth it.
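The 96 % figure, worked out (a sketch; 100 ms RTT, this PR's default 4 MB window and 160 KB threshold):

```elixir
rtt = 0.100
per_frame_ceiling = 4_000_000 / rtt            # window always at peak: ~40 MB/s
batched_ceiling = (4_000_000 - 160_000) / rtt  # refill lags by threshold: ~38.4 MB/s

batched_ceiling / per_frame_ceiling            # ~0.96
```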

Track advertised windows synchronously in initiate/5

New streams read their initial receive-window tracking from
`conn.client_settings`, which previously carried only library
defaults until the server ACKed our SETTINGS. When the user
advertised a smaller stream window than the 4 MB default, streams
opened before the ACK tracked 4 MB locally while the server
respected the advertised value — the remaining window never crossed
the refill threshold, stream-level WINDOW_UPDATEs never fired, and
the connection stalled. The fix mirrors the advertised settings into the
struct synchronously at connect time (the sender doesn't need the
ACK to know what it committed to). A regression test is included.

Closes #432.

ericmj added 5 commits April 12, 2026 15:47
Advertises a larger HTTP/2 receive window (connection-level or
per-stream) by sending a WINDOW_UPDATE frame. Needed because RFC 7540
makes the connection-level initial window tunable only via
WINDOW_UPDATE — not SETTINGS — leaving the spec default of 64 KB as
the only reachable value without an API like this.

In hex's `mix deps.get` — many parallel multi-MB tarball downloads
sharing one HTTP/2 connection — raising the connection window from
64 KB to 8 MB via this function drops the total time for 10 runs from
32.7s to 29.2s (10.8%), matching their HTTP/1 pool.

Deliberately asymmetric with `get_window_size/2` (which returns the
client *send* window). Docstrings on both carry warning callouts
spelling out send-vs-receive so callers don't assume they round-trip.

Target is `:connection` or `{:request, ref}`; grow-only (shrink attempts
return `{:error, conn, %HTTPError{reason: :window_size_too_small}}`);
`new_size` is validated against `1..2^31-1`. Tracks the advertised peak in
new `receive_window_size` fields on the connection and stream.

The connection and stream structs tracked a `window_size` field for
the client's outbound (send) window and a separately-named
`receive_window_size` field for the inbound window. Renaming the
former to `send_window_size` makes the pair symmetric and removes a
long-standing source of confusion about which direction a bare
`window_size` refers to.

Default connection receive window is now 16 MB (was 65_535), sent via
a WINDOW_UPDATE on stream 0 as part of the connection preface. Default
stream receive window is now 4 MB (was 65_535), advertised via
SETTINGS_INITIAL_WINDOW_SIZE in the same preface. Both settable via
the new `:connection_window_size` option and the existing
`:client_settings` option.

Window size / RTT sets a hard cap on per-stream throughput. At the
previous 65_535-byte stream window:

  Path (typical RTT)       | 65 KB    | 4 MB     | 16 MB
  -------------------------|----------|----------|----------
  LAN (1 ms)               | 62 MB/s  | 4 GB/s   | 16 GB/s
  Region (20 ms)           | 3.1 MB/s | 200 MB/s | 800 MB/s
  Cross-country (70 ms)    | 0.9 MB/s | 57 MB/s  | 229 MB/s
  Transatlantic (100 ms)   | 0.6 MB/s | 40 MB/s  | 160 MB/s
  Transpacific (130 ms)    | 0.5 MB/s | 31 MB/s  | 123 MB/s
  Antipodal (230 ms)       | 0.3 MB/s | 17 MB/s  | 70 MB/s

Any caller talking to a server more than a few milliseconds away was
bottlenecked well below their link bandwidth without knowing why. 4 MB
per stream saturates gigabit anywhere on earth; 16 MB at the connection
level lets four streams run in parallel at full rate before the shared
connection window becomes the bottleneck.

Callers who want the old behaviour can pass `connection_window_size:
65_535` and `client_settings: [initial_window_size: 65_535]` to
`connect/4`.

Previously `refill_client_windows/3` sent a WINDOW_UPDATE on both the
connection and the stream after every DATA frame, with the increment
set to the frame's byte size. That kept the advertised window pinned
at its peak but tied outbound WINDOW_UPDATE traffic one-to-one with
inbound DATA frames.

An adversarial server can exploit that ratio. By sending many small
DATA frames — in the limit, one byte of body per frame — it can force
the client to emit one 13-byte WINDOW_UPDATE per frame. At high frame
rates that's a small but real client-side amplification: a flood of
outbound control frames driven entirely by the peer.

This change gates refills on a threshold. The client tracks the
current remaining window for the connection and each stream and only
sends a WINDOW_UPDATE once that remaining drops to
`:receive_window_update_threshold` bytes. The update then tops the
window straight back up to its configured peak. One frame per
`receive_window_size - receive_window_update_threshold` bytes
consumed, not per DATA frame. The default threshold is 160_000 bytes
— roughly 10× the default 16 KB max frame size, leaving the server a
safety margin before the window would starve it.

Behaviour-wise:

  * With the 4 MB / 16 MB default windows, the client sends roughly
    one stream-level WINDOW_UPDATE per ~3.84 MB consumed (previously
    ~250 per 4 MB), and one connection-level update per ~15.84 MB
    (previously ~1000 per 16 MB).
  * Callers that explicitly set the stream or connection window down
    to the 65_535 spec minimum get the old behaviour — one refill per
    frame — because remaining is always below the default 160_000
    threshold.

The threshold is tunable via the new `:receive_window_update_threshold`
option to `Mint.HTTP.connect/4`.

Streams opened before the server's SETTINGS ACK arrived were reading
their initial receive window from `conn.client_settings`, which still
held library defaults at that point. If the user advertised a stream
window smaller than the default (e.g. `initial_window_size: 65_535`),
the stream struct tracked the 4 MB default locally while the server
respected the 65_535 we sent in SETTINGS. The client's remaining
window never dropped to the refill threshold, stream-level
WINDOW_UPDATE frames never fired, and the connection stalled once the
server exhausted its per-stream send window.

Mirror the advertised `client_settings_params` into
`conn.client_settings` during `initiate/5` — the sender already knows
what it committed to and doesn't need to wait for the ACK to act on
it. Add a regression test that opens a stream before the ACK round
trip and asserts the stream struct reflects the advertised value.

Also rename `receive_window` to `receive_window_remaining` so the
peak/remaining distinction is clear at the call site, and document
that `:receive_window_update_threshold` is shared between the
connection and per-stream windows (so windows at or below the
threshold refill on every DATA frame).
@coveralls

Coverage Report for CI Build 5

Coverage increased (+0.4%) to 88.143%

Details

  • Coverage increased (+0.4%) from the base build.
  • Patch coverage: 5 uncovered changes across 1 file (81 of 86 lines covered, 94.19%).
  • No coverage regressions found.

Uncovered Changes

  File              | Changed | Covered | %
  ------------------|---------|---------|--------
  lib/mint/http2.ex | 86      | 81      | 94.19%



Coverage Stats

Relevant Lines: 1535
Covered Lines: 1353
Line Coverage: 88.14%
Coverage Strength: 245.82 hits per line

💛 - Coveralls



Development

Successfully merging this pull request may close these issues.

Avoid unnecessary WINDOW_UPDATE frames in HTTP/2 client
