Posted to jira@arrow.apache.org by "Andy Grove (Jira)" <ji...@apache.org> on 2020/12/20 23:08:00 UTC

[jira] [Created] (ARROW-10995) [Rust] [DataFusion] Improve parallelism when reading Parquet files

Andy Grove created ARROW-10995:
----------------------------------

             Summary: [Rust] [DataFusion] Improve parallelism when reading Parquet files
                 Key: ARROW-10995
                 URL: https://issues.apache.org/jira/browse/ARROW-10995
             Project: Apache Arrow
          Issue Type: Improvement
          Components: Rust - DataFusion
            Reporter: Andy Grove


Currently the unit of parallelism is the number of Parquet files being read.

For example, if we run a query against a Parquet table that consists of 8 partitions, we will attempt to run 8 async tasks in parallel. However, if the table consists of a single Parquet file, we will only run 1 async task, so this approach does not scale well.

A better approach would be to have one parallel task per "chunk" (e.g., row group) in each Parquet file. This would involve an upfront step in the planner that scans the Parquet metadata to build a list of chunks and then splits them across the configured number of parallel tasks.
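The splitting step described above could be sketched as follows. This is a minimal illustration in plain Rust, not DataFusion code: `assign_row_groups` and its round-robin strategy are hypothetical names chosen for the example, and in practice the row-group count would come from the Parquet file metadata.

```rust
// Hypothetical sketch: given the number of row groups discovered from
// Parquet metadata and a configured target concurrency, assign
// row-group indices to tasks round-robin. The function name and the
// round-robin strategy are illustrative assumptions, not DataFusion APIs.
fn assign_row_groups(num_row_groups: usize, target_tasks: usize) -> Vec<Vec<usize>> {
    // Never spawn more tasks than there are row groups, and at least one.
    let tasks = target_tasks.min(num_row_groups).max(1);
    let mut assignments = vec![Vec::new(); tasks];
    for rg in 0..num_row_groups {
        assignments[rg % tasks].push(rg);
    }
    assignments
}

fn main() {
    // A single file with 8 row groups and 4 configured tasks now yields
    // 4 units of parallelism instead of 1 (one per file).
    let plan = assign_row_groups(8, 4);
    println!("{:?}", plan); // [[0, 4], [1, 5], [2, 6], [3, 7]]
}
```

With a scheme like this, a query over one large file with many row groups can still saturate all configured tasks, rather than being limited to a single task per file.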

--
This message was sent by Atlassian Jira
(v8.3.4#803005)