feat(reader): cache Parquet metadata for when FileScanTasks read the same file #2100

mbutrovich wants to merge 2 commits into apache:main
Conversation
> I'm also considering caching results in redundant …

> So caching didn't help our test pipeline. It turns out the table I ran this test on has a ton of Parquet files, almost 1:1 …
Which issue does this PR close?

While running Spark/Iceberg with DataFusion Comet on a workload that generates ~80,000 `FileScanTask` objects passed into the `ArrowReader`, we see the majority of CPU time spent in `get_metadata` calls via `ArrowReader::create_parquet_record_batch_stream_builder`. This is a screenshot of the CPU-time flame graph from one of the executors in this Spark job:

I suspect the `ArrowReader` is processing `FileScanTask`s for the same Parquet data files and fetching the same metadata, burning CPU cycles on parsing and adding extra object store calls. The sketch below illustrates the suspected redundancy.
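A toy illustration of that hypothesis, under an assumed workload shape: the 1,000-file spread is made up for the example, and this `FileScanTask` is a bare stand-in with only the field that matters here, not iceberg-rust's actual struct.

```rust
use std::collections::HashSet;

/// Hypothetical stand-in for a scan task; only the field that matters here.
struct FileScanTask {
    data_file_path: String,
}

fn main() {
    // Assumed layout: ~80,000 tasks spread over 1,000 distinct data files,
    // e.g. one task per row group of the same file.
    let tasks: Vec<FileScanTask> = (0..80_000)
        .map(|i| FileScanTask {
            data_file_path: format!("s3://bucket/table/data/file-{:04}.parquet", i % 1_000),
        })
        .collect();

    let distinct: HashSet<&str> = tasks.iter().map(|t| t.data_file_path.as_str()).collect();

    // Without a metadata cache, the footer is fetched and parsed once per
    // task; with one, once per distinct (location, page-index) key.
    println!(
        "{} tasks over {} distinct files => {}x redundant metadata work",
        tasks.len(),
        distinct.len(),
        tasks.len() / distinct.len()
    );
}
```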
What changes are included in this PR?

- Added a `ParquetMetadataCache` modeled after `delete_filter.rs`'s behavior. The key is a composite of the file location and whether the page index was requested to be read, since serving a subsequent `true` request from an entry cached with `false` would yield improper results (see the sketch at the end of this description).
- `ArrowReader` now has a metadata cache.
- `BasicDeleteFileLoader` now has a metadata cache.

Are these changes tested?
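To make the composite-key design in the change list above concrete, here is a minimal sketch. It is illustrative only: `MetadataCache`, `get_or_load`, and the generic parameter `M` (standing in for something like `Arc<parquet::file::metadata::ParquetMetaData>`) are assumptions for the example, not the PR's actual API.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Illustrative metadata cache. `M` stands in for the parsed Parquet
/// metadata type the real cache would hold.
pub struct MetadataCache<M> {
    /// Key: (file location, page index requested?). Entries parsed with and
    /// without the page index must never alias, so the flag is part of the key.
    entries: RwLock<HashMap<(String, bool), Arc<M>>>,
}

impl<M> MetadataCache<M> {
    pub fn new() -> Self {
        Self {
            entries: RwLock::new(HashMap::new()),
        }
    }

    /// Return cached metadata for `location`, loading it on a miss.
    pub fn get_or_load<F>(&self, location: &str, with_page_index: bool, load: F) -> Arc<M>
    where
        F: FnOnce() -> M,
    {
        let key = (location.to_string(), with_page_index);
        if let Some(hit) = self.entries.read().unwrap().get(&key) {
            return Arc::clone(hit);
        }
        // Two tasks racing on the same key may both run `load`; the first
        // insert wins and later callers get that Arc. Fine for a sketch.
        let value = Arc::new(load());
        Arc::clone(self.entries.write().unwrap().entry(key).or_insert(value))
    }
}

fn main() {
    let cache: MetadataCache<String> = MetadataCache::new();

    // Same file, different page-index flags: two distinct entries, because an
    // entry cached with `false` must not satisfy a later `true` request.
    let a = cache.get_or_load("s3://bucket/t/file-0.parquet", false, || "no page index".into());
    let b = cache.get_or_load("s3://bucket/t/file-0.parquet", true, || "with page index".into());
    assert_ne!(*a, *b);

    // Same file, same flag: served from the cache, the loader never runs.
    let c = cache.get_or_load("s3://bucket/t/file-0.parquet", false, || unreachable!());
    assert_eq!(*a, *c);
}
```

Keying on the page-index flag means a file can occupy two cache slots, but it guarantees a request that needs the page index is never served metadata that was parsed without it.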