Posted to issues@spark.apache.org by "Zhenhua Wang (Jira)" <ji...@apache.org> on 2020/03/16 12:41:00 UTC
[jira] [Updated] (SPARK-31164) Inconsistent RDD and output partitioning for bucketed table when output doesn't contain all bucket columns
[ https://issues.apache.org/jira/browse/SPARK-31164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Zhenhua Wang updated SPARK-31164:
---------------------------------
Description: For a bucketed table, when deciding output partitioning, if the output doesn't contain all bucket columns, the result is `UnknownPartitioning`. But when generating the RDD, Spark currently uses `createBucketedReadRDD` without checking whether the output contains all bucket columns. So the RDD and its output partitioning are inconsistent. (was: For a bucketed table, when deciding output partitioning, if the output doesn't contain all bucket columns, the result is `UnknownPartitioning`. But when generating rdd, current Spark uses `createBucketedReadRDD` because it doesn't check if the output contains all bucket columns. So the rdd and it's output partitioning are inconsistent.)
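
A minimal repro sketch of the mismatch (illustrative only: the table name `t`, the column names, and the local SparkSession setup below are assumptions, not part of the report):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("SPARK-31164-repro")
      .getOrCreate()
    import spark.implicits._

    // Bucket the table by two columns, i and j.
    Seq((1, 10, "a"), (2, 20, "b")).toDF("i", "j", "k")
      .write.bucketBy(8, "i", "j").saveAsTable("t")

    // Project only one of the two bucket columns.
    val df = spark.table("t").select("i")

    // Reported partitioning is UnknownPartitioning, because the output
    // is missing bucket column j ...
    println(df.queryExecution.executedPlan.outputPartitioning)
    // ... yet the scan still goes through the bucketed read path, so the
    // underlying RDD has one partition per bucket (8 here).
    println(df.rdd.getNumPartitions)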
> Inconsistent RDD and output partitioning for bucketed table when output doesn't contain all bucket columns
> ----------------------------------------------------------------------------------------------------------
>
> Key: SPARK-31164
> URL: https://issues.apache.org/jira/browse/SPARK-31164
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.4.5, 3.0.0
> Reporter: Zhenhua Wang
> Priority: Major
>
> For a bucketed table, when deciding output partitioning, if the output doesn't contain all bucket columns, the result is `UnknownPartitioning`. But when generating the RDD, Spark currently uses `createBucketedReadRDD` without checking whether the output contains all bucket columns. So the RDD and its output partitioning are inconsistent.
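>
> A simplified paraphrase of the two code paths involved (reconstructed; the shapes below approximate `FileSourceScanExec` in Spark 2.4/3.0 and are not exact quotes from the source):
>
>   // Output partitioning: HashPartitioning is only reported when every
>   // bucket column resolves to an attribute in the scan's output.
>   val bucketColumns = spec.bucketColumnNames.flatMap(n => output.find(_.name == n))
>   if (bucketColumns.size == spec.bucketColumnNames.size) {
>     HashPartitioning(bucketColumns, spec.numBuckets)
>   } else {
>     UnknownPartitioning(0)
>   }
>
>   // RDD creation: the bucketed read path is chosen on the presence of a
>   // bucket spec alone; there is no matching check on the output columns,
>   // hence the inconsistency.
>   relation.bucketSpec match {
>     case Some(spec) => createBucketedReadRDD(spec, readFile, selectedPartitions, relation)
>     case None => createNonBucketedReadRDD(readFile, selectedPartitions, relation)
>   }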
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org