Parquet: prevent binary offset overflow by stopping batch early #9362
vigneshsiva11 wants to merge 1 commit into apache:main
Conversation
Pull request overview
This PR fixes a critical bug where reading Parquet files containing very large binary or string values could cause an offset overflow error or panic. The fix moves the overflow check to occur before buffer mutation, ensuring that the internal state remains consistent if an overflow would occur.
Changes:
- Modified the `try_push` method in `OffsetBuffer` to calculate and validate the next offset before mutating internal buffers
- The overflow detection now happens before calling `extend_from_slice` and `push`, preventing partial state corruption
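The check-before-mutate pattern described above can be sketched as follows. This is an illustrative simplification, not the real arrow-rs code: the `OffsetBuffer` struct and `try_push` signature here are assumptions that only mirror the names in the PR.

```rust
// Illustrative, simplified sketch of the check-before-mutate pattern;
// struct and method names mirror arrow-rs but this is not its real code.
#[derive(Debug)]
struct OverflowError;

struct OffsetBuffer {
    offsets: Vec<i32>, // offsets[i+1] - offsets[i] = length of value i
    values: Vec<u8>,   // concatenated value bytes
}

impl OffsetBuffer {
    fn new() -> Self {
        Self { offsets: vec![0], values: Vec::new() }
    }

    /// Append `data`, validating the next offset *before* touching either
    /// buffer, so a failed push leaves the state fully consistent.
    fn try_push(&mut self, data: &[u8]) -> Result<(), OverflowError> {
        let last = *self.offsets.last().expect("offsets is never empty");
        let len = i32::try_from(data.len()).map_err(|_| OverflowError)?;
        // Overflow check happens first: no extend_from_slice / push yet.
        let next = last.checked_add(len).ok_or(OverflowError)?;
        self.values.extend_from_slice(data);
        self.offsets.push(next);
        Ok(())
    }
}

fn main() {
    let mut buf = OffsetBuffer::new();
    assert!(buf.try_push(b"hello").is_ok());
    assert_eq!(buf.offsets, vec![0, 5]);

    // Simulate a nearly full buffer: 3 more bytes would overflow i32.
    let mut nearly_full = OffsetBuffer { offsets: vec![i32::MAX - 2], values: Vec::new() };
    assert!(nearly_full.try_push(b"abc").is_err());
    // The failed push left both buffers untouched.
    assert_eq!(nearly_full.offsets, vec![i32::MAX - 2]);
    assert!(nearly_full.values.is_empty());
    println!("ok");
}
```

Doing the `checked_add` before any mutation is what makes the error recoverable: a caller can catch the failure, emit the batch built so far, and retry the same value in a fresh batch.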
I am not sure this PR closes the issue -- maybe it changes the error message? Or does it prevent a panic?
This PR doesn't change the size of the batches emitted, I don't think 🤔
Which issue does this PR close?
Rationale for this change
When reading Parquet files containing very large binary or string values, the Arrow Parquet reader can attempt to construct a RecordBatch whose total value buffer exceeds the maximum representable offset size. This can lead to an overflow error or panic during decoding.
Instead of allowing the buffer to overflow and failing late, the reader should detect this condition early and stop decoding before the offset exceeds the representable limit. This behavior is consistent with other Arrow implementations (for example, PyArrow), which emit smaller batches when encountering very large row groups.
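The early-stop behavior above can be illustrated by computing where batch boundaries must fall so that each batch's cumulative value length fits in an `i32` offset. The helper below is hypothetical, written for illustration only; it is not the reader's actual code.

```rust
// Hypothetical helper illustrating the early-stop idea: given per-row
// value lengths, find the row indices where a new batch must start so
// that each batch's total value bytes fit within i32 offsets.
fn batch_boundaries(lens: &[usize]) -> Vec<usize> {
    let mut boundaries = Vec::new();
    let mut acc: i64 = 0; // running total of value bytes in the current batch
    for (i, &len) in lens.iter().enumerate() {
        if acc + len as i64 > i32::MAX as i64 {
            boundaries.push(i); // close the current batch before row i
            acc = 0;
        }
        acc += len as i64;
    }
    boundaries
}

fn main() {
    // Three ~1 GB values: the first two fit in one batch, the third does not,
    // so a new batch starts at row index 2.
    let lens = [1_000_000_000, 1_000_000_000, 1_000_000_000];
    assert_eq!(batch_boundaries(&lens), vec![2]);
    println!("ok");
}
```

Note that a single value larger than `i32::MAX` bytes cannot fit in any batch with 32-bit offsets, so an error path is still needed for that case regardless of where batches are split.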
What changes are included in this PR?
Are these changes tested?
Yes.
Note: Some Parquet and Arrow integration tests require external test data provided via git submodules (parquet-testing and testing). These submodules are not present in a minimal local checkout but are initialized in CI.
Are there any user-facing changes?
Yes: reads of very large binary or string values that previously failed with an offset overflow now stop the batch early instead. There are no breaking changes to public APIs.