Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2019/01/23 10:19:00 UTC
[jira] [Assigned] (SPARK-26672) SinglePartition may not satisfy HashClusteredDistribution/OrderedDistribution
[ https://issues.apache.org/jira/browse/SPARK-26672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-26672:
------------------------------------
Assignee: Apache Spark
> SinglePartition may not satisfy HashClusteredDistribution/OrderedDistribution
> ------------------------------------------------------------------------------
>
> Key: SPARK-26672
> URL: https://issues.apache.org/jira/browse/SPARK-26672
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.4.0
> Reporter: Wang, Gang
> Assignee: Apache Spark
> Priority: Major
>
> If we load data into a *bucketed table* TEST_TABLE whose bucket number is greater than 1, from another table SRC_TABLE (bucketed or not) with the SQL:
> insert overwrite table TEST_TABLE select * from SRC_TABLE limit 1000
> then the data inserted into TEST_TABLE will not be bucketed: after LimitExec the output partitioning is SinglePartition, which under the current logic satisfies HashClusteredDistribution, so no shuffle (Exchange) is added before the write.
>
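A minimal reproduction sketch of the scenario described above, assuming a working SparkSession and default catalog; the table names, column names, and bucket count are hypothetical and only illustrate the shape of the problem:

import org.apache.spark.sql.SparkSession

object Spark26672Repro {
  def main(args: Array[String]): Unit = {
    // Assumes a local or cluster SparkSession; names below are hypothetical.
    val spark = SparkSession.builder().appName("SPARK-26672 repro").getOrCreate()

    // A plain source table and a bucketed target table (8 buckets on `id`).
    spark.sql("CREATE TABLE SRC_TABLE (id INT, name STRING) USING parquet")
    spark.sql(
      "CREATE TABLE TEST_TABLE (id INT, name STRING) USING parquet " +
        "CLUSTERED BY (id) INTO 8 BUCKETS")

    // The LIMIT collapses the child plan to a single partition (SinglePartition).
    // Because SinglePartition is treated as satisfying HashClusteredDistribution,
    // no Exchange is inserted before the write, so the inserted rows do not end
    // up split across the 8 declared buckets.
    spark.sql("INSERT OVERWRITE TABLE TEST_TABLE SELECT * FROM SRC_TABLE LIMIT 1000")

    spark.stop()
  }
}

With a plan like this, the files written under TEST_TABLE's location would not follow the table's declared bucketing, which is the behavior the issue reports.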
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org