Posted to issues@spark.apache.org by "Yuming Wang (Jira)" <ji...@apache.org> on 2020/12/20 12:03:00 UTC

[jira] [Commented] (SPARK-33855) Add spark job maximum created files limit configuration

    [ https://issues.apache.org/jira/browse/SPARK-33855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17252408#comment-17252408 ] 

Yuming Wang commented on SPARK-33855:
-------------------------------------

Maybe you can repartition by the dynamic partition columns before inserting into the partitioned table: https://github.com/apache/spark/pull/28032
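
A minimal sketch of that approach, assuming a Hive-backed target table partitioned by a dt column (the table and column names here are illustrative, not from the issue):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Illustrative names: source table "events", target table "events_by_day"
// partitioned by dt.
val df = spark.table("events")

// Repartitioning by the dynamic partition column shuffles all rows with the
// same dt value into the same tasks, so each table partition receives one
// (or a few) output files instead of one file per task per partition value.
df.repartition(col("dt"))
  .write
  .mode("append")
  .insertInto("events_by_day")

The trade-off is skew: if one dt value dominates, all of its rows flow through a single task, so in practice you may want to repartition by the partition column plus an additional salting column.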

> Add spark job maximum created files limit configuration
> -------------------------------------------------------
>
>                 Key: SPARK-33855
>                 URL: https://issues.apache.org/jira/browse/SPARK-33855
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 2.4.3, 3.0.1
>            Reporter: Su Qilong
>            Priority: Major
>
> Add a configuration item like [hive.exec.max.created.files|https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.exec.max.created.files] to limit the maximum number of HDFS files created by a single Spark job.
>  
> This is useful when dynamic partition insertion is enabled, or for jobs that contain only one stage with very large parallelism.
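
For reference, Hive's corresponding guard is a session property (the value shown is Hive's documented default, per the link above); the request here is for a Spark-side equivalent:

SET hive.exec.max.created.files=100000;

When the limit is exceeded, Hive fails the query instead of writing an unbounded number of small files.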



