Posted to jira@arrow.apache.org by "Richard Shadrach (Jira)" <ji...@apache.org> on 2021/07/17 19:19:00 UTC

[jira] [Created] (ARROW-13369) performance of read_table using filters on a partitioned parquet file

Richard Shadrach created ARROW-13369:
----------------------------------------

             Summary: performance of read_table using filters on a partitioned parquet file
                 Key: ARROW-13369
                 URL: https://issues.apache.org/jira/browse/ARROW-13369
             Project: Apache Arrow
          Issue Type: Improvement
          Components: Python
    Affects Versions: 4.0.0
            Reporter: Richard Shadrach


Reading a single partition of a partitioned parquet dataset via {{filters}} is significantly slower than reading that partition's directory directly.
{code:python}
import pandas as pd

size = 100_000
df = pd.DataFrame({'a': [1, 2, 3] * size, 'b': [4, 5, 6] * size})
# Write a hive-partitioned dataset with one directory per value of 'a'.
df.to_parquet('test.parquet', partition_cols=['a'])

# Read one partition directly vs. selecting it through filters.
%timeit pd.read_parquet('test.parquet/a=1')
%timeit pd.read_parquet('test.parquet', filters=[('a', '=', 1)])
{code}
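Since pd.read_parquet with the pyarrow engine forwards the {{filters}} keyword to pyarrow.parquet.read_table, the same comparison can be made at the pyarrow level. A minimal sketch (assuming the default pyarrow engine and the paths from the snippet above):
{code:python}
import pyarrow.parquet as pq

# Same comparison without pandas' read_parquet wrapper: read the partition
# directory directly vs. selecting the partition through filters.
%timeit pq.read_table('test.parquet/a=1').to_pandas()
%timeit pq.read_table('test.parquet', filters=[('a', '=', 1)]).to_pandas()
{code}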
gives the timings
{code:python}
1.37 ms ± 46.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
2.41 ms ± 90.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
{code}
Likewise, changing {{size}} to 1_000_000 in the code above gives
{code:python}
4.94 ms ± 585 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
9.5 ms ± 140 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
{code}
Part of the docs for [read_table|https://arrow.apache.org/docs/python/generated/pyarrow.parquet.read_table.html] states:

> Partition keys embedded in a nested directory structure will be exploited to avoid loading files at all if they contain no matching rows.

From this, I expected the performance to be roughly the same: the filter selects exactly one partition, so only that partition's files should need to be loaded.
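If it helps narrow things down, the slowdown may sit in dataset discovery and filter evaluation rather than in reading extra data. Below is a minimal sketch of the filtered scan expressed through the pyarrow.dataset API (an assumption about where read_table's time goes, not a confirmed diagnosis):
{code:python}
import pyarrow.dataset as ds

# Discover the hive-partitioned directory, then scan only the matching
# partition. Discovery itself costs time even when the filter prunes
# every other partition.
dataset = ds.dataset('test.parquet', format='parquet', partitioning='hive')
table = dataset.to_table(filter=ds.field('a') == 1)
{code}
Timing ds.dataset(...) separately from to_table(...) should show whether discovery dominates.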



--
This message was sent by Atlassian Jira
(v8.3.4#803005)