Posted to issues@spark.apache.org by "Herman van Hovell (JIRA)" <ji...@apache.org> on 2016/11/20 16:43:59 UTC

[jira] [Commented] (SPARK-18515) AlterTableDropPartitions fails for non-string columns

    [ https://issues.apache.org/jira/browse/SPARK-18515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15681435#comment-15681435 ] 

Herman van Hovell commented on SPARK-18515:
-------------------------------------------

[~dongjoon] I am reverting this from branch-2.1: https://github.com/apache/spark/commit/1126c3194ee1c79015cf1d3808bc963aa93dcadf

> AlterTableDropPartitions fails for non-string columns
> -----------------------------------------------------
>
>                 Key: SPARK-18515
>                 URL: https://issues.apache.org/jira/browse/SPARK-18515
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Herman van Hovell
>            Assignee: Dongjoon Hyun
>
> AlterTableDropPartitions fails with a scala.MatchError if you use non-string partitioning columns:
> {noformat}
> spark.sql("drop table if exists tbl_x")
> spark.sql("create table tbl_x (a int) partitioned by (p int)")
> spark.sql("alter table tbl_x add partition (p=10)")
> spark.sql("alter table tbl_x drop partition (p=10)")
> {noformat}
> This yields the following error:
> {noformat}
> scala.MatchError: (cast(p#8 as int) = 10) (of class org.apache.spark.sql.catalyst.expressions.EqualTo)
>   at org.apache.spark.sql.execution.command.AlterTableDropPartitionCommand$$anonfun$10$$anonfun$11.apply(ddl.scala:462)
>   at org.apache.spark.sql.execution.command.AlterTableDropPartitionCommand$$anonfun$10$$anonfun$11.apply(ddl.scala:462)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.immutable.List.map(List.scala:285)
>   at org.apache.spark.sql.execution.command.AlterTableDropPartitionCommand$$anonfun$10.apply(ddl.scala:462)
>   at org.apache.spark.sql.execution.command.AlterTableDropPartitionCommand$$anonfun$10.apply(ddl.scala:461)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.sql.execution.command.AlterTableDropPartitionCommand.run(ddl.scala:461)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
>   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
>   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
>   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:591)
>   ... 39 elided
> {noformat}
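
The MatchError above is the signature of a non-exhaustive pattern match: the drop-partition code matches predicates of the shape attribute = literal, but for a non-string partition column the analyzer wraps the attribute in a cast (here cast(p#8 as int)), a shape that match does not cover. Below is a minimal, self-contained Scala sketch of that failure mode; the mini-AST classes and the toPartitionSpec helper are hypothetical stand-ins for illustration, not the actual Catalyst expressions or the code at ddl.scala:462.

{noformat}
// Hypothetical mini-AST, not the real Catalyst classes.
sealed trait Expr
case class Attribute(name: String) extends Expr
case class Literal(value: Any) extends Expr
case class Cast(child: Expr, dataType: String) extends Expr
case class EqualTo(left: Expr, right: Expr) extends Expr

object DropPartitionSketch {
  // Mirrors the shape of the failing match: with the Cast case below
  // removed, the second call in main throws scala.MatchError at runtime,
  // just like the stack trace above.
  def toPartitionSpec(e: Expr): (String, Any) = e match {
    case EqualTo(Attribute(name), Literal(v)) => name -> v
    // Non-string partition columns arrive cast-wrapped after analysis,
    // e.g. EqualTo(Cast(Attribute("p"), "int"), Literal(10)).
    case EqualTo(Cast(Attribute(name), _), Literal(v)) => name -> v
  }

  def main(args: Array[String]): Unit = {
    // String column: matched by the first case.
    println(toPartitionSpec(EqualTo(Attribute("p"), Literal("10"))))
    // Int column: only matched because of the Cast case.
    println(toPartitionSpec(EqualTo(Cast(Attribute("p"), "int"), Literal(10))))
  }
}
{noformat}

Either covering the cast-wrapped shape or stripping casts before matching avoids the MatchError; the second case above is only a stand-in for whatever the eventual fix does, not the reverted patch itself.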


