Posted to issues@spark.apache.org by "Apache Spark (Jira)" <ji...@apache.org> on 2021/12/14 04:19:00 UTC

[jira] [Commented] (SPARK-37217) The number of dynamic partitions should early check when writing to external tables

    [ https://issues.apache.org/jira/browse/SPARK-37217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17458891#comment-17458891 ] 

Apache Spark commented on SPARK-37217:
--------------------------------------

User 'cxzl25' has created a pull request for this issue:
https://github.com/apache/spark/pull/34889

> The number of dynamic partitions should early check when writing to external tables
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-37217
>                 URL: https://issues.apache.org/jira/browse/SPARK-37217
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.2.0
>            Reporter: dzcxzl
>            Assignee: dzcxzl
>            Priority: Trivial
>             Fix For: 3.3.0
>
>
> [SPARK-29295|https://issues.apache.org/jira/browse/SPARK-29295] introduced a mechanism whereby writes to external tables use dynamic partitioning, and the data in the target partitions is deleted first.
> Suppose 1001 partitions are written: the data of those 1001 partitions is deleted first, but because hive.exec.max.dynamic.partitions defaults to 1000, loadDynamicPartitions then fails, yet the data of the 1001 partitions has already been deleted.
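The fix described above amounts to validating the dynamic-partition count before any destructive step runs. A minimal sketch of such an early check, assuming a standalone helper (the object and method names here are hypothetical, not the actual code in the linked pull request; only the 1000 default and the config key come from Hive):

```scala
// Sketch of an "early check" for the number of dynamic partitions.
// Hive's hive.exec.max.dynamic.partitions defaults to 1000; the idea is
// to fail BEFORE deleting data in the target partitions, not after.
object DynamicPartitionCheck {
  // Hive default for hive.exec.max.dynamic.partitions
  val DefaultMaxDynamicPartitions: Int = 1000

  def checkNumDynamicPartitions(
      numPartitions: Int,
      maxPartitions: Int = DefaultMaxDynamicPartitions): Unit = {
    if (numPartitions > maxPartitions) {
      throw new IllegalArgumentException(
        s"Number of dynamic partitions created is $numPartitions, " +
        s"which is more than $maxPartitions. Consider raising " +
        s"hive.exec.max.dynamic.partitions to at least $numPartitions.")
    }
  }
}
```

Running this check up front means a write of 1001 partitions is rejected while the existing partition data is still intact.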



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org