Posted to issues@spark.apache.org by "liuxian (JIRA)" <ji...@apache.org> on 2018/09/06 10:38:00 UTC

[jira] [Created] (SPARK-25356) Add Parquet block size (row group size) option to SparkSQL configuration

liuxian created SPARK-25356:
-------------------------------

             Summary: Add Parquet block size (row group size) option to SparkSQL configuration
                 Key: SPARK-25356
                 URL: https://issues.apache.org/jira/browse/SPARK-25356
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 2.4.0
            Reporter: liuxian


I think we should be able to configure the Parquet block size (row group size) through Spark SQL when using the Parquet format.

For HDFS, `dfs.block.size` is configurable, and we sometimes want the Parquet block size to be consistent with it.

Also, when using the Parquet format, isn't it best for `spark.sql.files.maxPartitionBytes` to be kept consistent with the Parquet block size?
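
For illustration, here is a rough sketch of how the block size can currently be aligned by hand; `spark`, `df`, and the output path are assumed for the example, and the idea of this issue is to expose this as a proper SparkSQL configuration instead:

```scala
// Sketch: align the Parquet block size (row group size) with the HDFS block size.
// `spark` is an existing SparkSession and `df` an existing DataFrame; the path is illustrative.
val hadoopConf = spark.sparkContext.hadoopConfiguration

// Read the HDFS block size, falling back to 128 MB if it is not set.
val hdfsBlockSize = hadoopConf.getLong("dfs.blocksize", 128L * 1024 * 1024)

// "parquet.block.size" is the parquet-hadoop property for the row group size;
// setting it on the Hadoop configuration applies to subsequent Parquet writes.
hadoopConf.setLong("parquet.block.size", hdfsBlockSize)

df.write
  .option("parquet.block.size", hdfsBlockSize) // per-write override, merged into the Hadoop conf
  .parquet("/tmp/parquet_output")
```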



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
