Allow decompression to continue after exceeding max_length #11966
Dreamsorcerer wants to merge 153 commits into master
Conversation
❌ 2 Tests Failed (2 ❄️ flaky).
Merging this PR will degrade performance by 43.97%.
for more information, see https://pre-commit.ci
…nto Dreamsorcerer-patch-5
I'm pulling out the `.read_chunk(decode=True)` support. If someone wants to implement that in a separate PR, the test is:
Is this ready to be merged? We're running into
tests/test_benchmarks_client.py::test_get_request_with_251308_compressed_chunked_payload is still failing on Windows, and I've not figured out why yet. If you want to help debug why it's failing, we'll be able to finish off this PR quickly.
Force-pushed: e3eeb81 → b420704
Force-pushed: a558cbc → 7f4e807
Looks like the fundamental issue is that data can still be sitting in the buffer, not yet read, when the connection closes; the reader gets killed at connection close, leaving it stuck.
This reverts commit f5aa703.
Architecture summary: once the output limit is reached, the decompressor can be called again with b"" to get more data. The decompression output has been reduced to 256KiB, matching the socket read limit. When no more output remains, it returns b"".
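The bounded-output pattern described above can be sketched with the stdlib `zlib` API (the PR's actual decompressor lives inside aiohttp's compression helpers, so the function name and loop here are illustrative, not the PR's code): `decompress(data, max_length)` caps each output chunk, and unconsumed input is retained on the object so a later call can resume where it left off.

```python
import zlib

def iter_decompress(payload: bytes, max_length: int = 256 * 1024):
    """Decompress payload, yielding at most max_length bytes per chunk.

    Hypothetical helper sketching bounded decompression; not aiohttp's
    implementation.
    """
    d = zlib.decompressobj()
    data = payload
    while data:
        chunk = d.decompress(data, max_length)
        if chunk:
            yield chunk
        # Input that could not be consumed without exceeding max_length
        # is retained here; feeding it back resumes decompression.
        data = d.unconsumed_tail
    tail = d.flush()
    if tail:
        yield tail
```

Because each chunk is capped, memory use stays bounded even for highly compressible payloads, which mirrors the PR's goal of matching the decompression output size to the 256KiB socket read limit.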