Posted to issues@drill.apache.org by "F Méthot (JIRA)" <ji...@apache.org> on 2017/03/24 13:43:41 UTC

[jira] [Created] (DRILL-5379) Set Hdfs Block Size based on Parquet Block Size

F Méthot created DRILL-5379:
-------------------------------

             Summary: Set Hdfs Block Size based on Parquet Block Size
                 Key: DRILL-5379
                 URL: https://issues.apache.org/jira/browse/DRILL-5379
             Project: Apache Drill
          Issue Type: Improvement
          Components: Storage - Parquet
    Affects Versions: 1.9.0
            Reporter: F Méthot
             Fix For: Future


It seems there is a way to force Drill to store a CTAS-generated Parquet file as a single block when using HDFS: the Java HDFS API allows a file to be created with an explicit block size, which could be set to the Parquet block size configured as a session or system option.

This is worth doing since it is ideal to have a single Parquet file per HDFS block: no row group then spans a block boundary, so a scan can read the whole file from one block.

Here is the HDFS API that allows this:
http://archive.cloudera.com/cdh4/cdh/4/hadoop/api/org/apache/hadoop/fs/FileSystem.html#create(org.apache.hadoop.fs.Path,%20boolean,%20int,%20short,%20long)

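For illustration, here is a minimal standalone sketch (hypothetical class name and path; 512 MB is just an example size) of how that create() overload pins the HDFS block size for a single file:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SingleBlockWriteSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/out.parquet"); // hypothetical path

        // Match the per-file HDFS block size to the Parquet block size
        // (512 MB here as an example) so the whole file fits in one block.
        long blockSize = 512L * 1024 * 1024;
        int bufferSize = conf.getInt("io.file.buffer.size", 4096);
        short replication = fs.getDefaultReplication(file);

        // create(Path, overwrite, bufferSize, replication, blockSize)
        FSDataOutputStream out =
            fs.create(file, true, bufferSize, replication, blockSize);
        out.close();
      }
    }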


Drill uses the Hadoop ParquetFileWriter (https://github.com/Parquet/parquet-mr/blob/master/parquet-hadoop/src/main/java/parquet/hadoop/ParquetFileWriter.java).
This is where the file creation occurs, so changing it directly might be tricky.

However, ParquetRecordWriter.java (https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetRecordWriter.java) in Drill creates the ParquetFileWriter with a Hadoop Configuration object.

Something to explore: could the block size be set as a property on the Configuration object before passing it to the ParquetFileWriter constructor?
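
For illustration, a minimal sketch of that idea (hypothetical helper, not Drill code; whether a value set this way actually reaches the HDFS client's create() call through ParquetFileWriter is exactly what would need to be verified):

    import org.apache.hadoop.conf.Configuration;

    public class BlockSizeConfSketch {
      public static Configuration withParquetBlockSize(Configuration conf,
                                                       long parquetBlockSize) {
        // "dfs.blocksize" ("dfs.block.size" on older Hadoop) is the HDFS
        // client's default block size for newly created files. Setting it
        // here only helps if the FileSystem.create() call inside
        // ParquetFileWriter picks up its default from this Configuration.
        conf.setLong("dfs.blocksize", parquetBlockSize);
        return conf;
      }

      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // 512 MB as an example; in Drill this would come from the
        // store.parquet.block-size option.
        withParquetBlockSize(conf, 512L * 1024 * 1024);
        System.out.println(conf.get("dfs.blocksize"));
      }
    }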

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)