Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2019/10/23 11:33:00 UTC

[jira] [Resolved] (SPARK-29542) [SQL][DOC] The descriptions of `spark.sql.files.*` are confusing.

     [ https://issues.apache.org/jira/browse/SPARK-29542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-29542.
----------------------------------
    Fix Version/s: 3.0.0
       Resolution: Fixed

Fixed in https://github.com/apache/spark/pull/26200

> [SQL][DOC] The descriptions of `spark.sql.files.*` are confusing.
> ------------------------------------------------------------------
>
>                 Key: SPARK-29542
>                 URL: https://issues.apache.org/jira/browse/SPARK-29542
>             Project: Spark
>          Issue Type: Documentation
>          Components: Documentation
>    Affects Versions: 2.4.4
>            Reporter: feiwang
>            Assignee: feiwang
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: screenshot-1.png
>
>
> Hi, the description of `spark.sql.files.maxPartitionBytes` is shown below.
> {code:java}
> The maximum number of bytes to pack into a single partition when reading files.
> {code}
> This wording suggests that Spark SQL guarantees each partition processes at most that many bytes.
> As shown in the attachment, the value of spark.sql.files.maxPartitionBytes is 128 MB.
> For stage 1, the input is 16.3 TB, but there are only 6400 tasks; if the limit were enforced here, roughly 16.3 TB / 128 MB, i.e. over 130,000 tasks, would be expected.
> I checked the code: the setting only takes effect for data source tables.
> So its description is misleading, and the same applies to all the other `spark.sql.files.*` descriptions.
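> For example, here is a minimal sketch of how to observe the difference (the path, database, and table names are illustrative, and the Hive table is assumed to be a SerDe table rather than a file-based data source table):
> {code:scala}
> import org.apache.spark.sql.SparkSession
>
> val spark = SparkSession.builder()
>   .appName("maxPartitionBytes-check")
>   .enableHiveSupport() // needed for the Hive table read below
>   .config("spark.sql.files.maxPartitionBytes", 134217728L) // 128 MB
>   .getOrCreate()
>
> // File-based data source scan: the limit applies, so the partition
> // count roughly tracks total input size / 128 MB.
> val parquetDf = spark.read.parquet("/warehouse/events_parquet")
> println(parquetDf.rdd.getNumPartitions)
>
> // Hive SerDe table scan: it does not go through the file-based read
> // path, so spark.sql.files.maxPartitionBytes has no effect on its splits.
> val hiveDf = spark.sql("SELECT * FROM hive_db.events")
> println(hiveDf.rdd.getNumPartitions)
> {code}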



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
