Posted to dev@parquet.apache.org by "Quentin Francois (JIRA)" <ji...@apache.org> on 2015/08/24 15:23:46 UTC

[jira] [Commented] (PARQUET-344) Limit the number of rows per block and per split

    [ https://issues.apache.org/jira/browse/PARQUET-344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14709272#comment-14709272 ] 

Quentin Francois commented on PARQUET-344:
------------------------------------------

Any feedback on this? 

> Limit the number of rows per block and per split
> ------------------------------------------------
>
>                 Key: PARQUET-344
>                 URL: https://issues.apache.org/jira/browse/PARQUET-344
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-mr
>            Reporter: Quentin Francois
>
> We use Parquet to store raw metrics data and then query this data with Hadoop-Pig. 
> The issue is that we sometimes end up with small Parquet files (~80 MB) that contain more than 300,000,000 rows, usually because of a constant metric, which results in very good compression. Too good. As a result, a small number of maps each process up to 10x more rows than the other maps, and we lose the benefits of parallelization.
> I believe the fix has two components (a workaround sketch follows below):
> 1. Be able to limit the number of rows per Parquet block (in addition to the size limit).
> 2. Be able to limit the number of rows per split.
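
For illustration, beyond the ticket text: parquet-mr already exposes the byte-size limit through the parquet.block.size setting, but there is no row-count knob. Until component 1 lands, one user-side approximation is to rotate the writer after a fixed number of rows, which caps rows per output file (file granularity, not row-group granularity). The sketch below assumes parquet-avro; the RowLimitedWriter class, the part-file naming, and the threshold value are all hypothetical, not anything proposed in the ticket.

{code:java}
import java.io.Closeable;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;

// Hypothetical user-side workaround: cap the number of rows per output
// file by rotating the underlying ParquetWriter once a row-count
// threshold is reached.
public class RowLimitedWriter implements Closeable {
  private final Path baseDir;
  private final Schema schema;
  private final long maxRowsPerFile;  // assumed threshold, e.g. 10_000_000
  private ParquetWriter<GenericRecord> current;
  private long rowsInCurrent;
  private int fileIndex;

  public RowLimitedWriter(Path baseDir, Schema schema, long maxRowsPerFile) {
    this.baseDir = baseDir;
    this.schema = schema;
    this.maxRowsPerFile = maxRowsPerFile;
  }

  public void write(GenericRecord record) throws IOException {
    if (current == null || rowsInCurrent >= maxRowsPerFile) {
      rotate();
    }
    current.write(record);
    rowsInCurrent++;
  }

  // Close the current part file and open the next one.
  private void rotate() throws IOException {
    if (current != null) {
      current.close();
    }
    Path file = new Path(baseDir, String.format("part-%05d.parquet", fileIndex++));
    current = AvroParquetWriter.<GenericRecord>builder(file)
        .withSchema(schema)
        .build();
    rowsInCurrent = 0;
  }

  @Override
  public void close() throws IOException {
    if (current != null) {
      current.close();
    }
  }
}
{code}

Component 2 (a row cap per split) cannot be emulated from user code this way, since split generation happens inside ParquetInputFormat; that part would need a change in parquet-mr itself.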


