Posted to issues@spark.apache.org by "Apache Spark (Jira)" <ji...@apache.org> on 2020/07/06 10:30:00 UTC

[jira] [Assigned] (SPARK-32192) Print column name when ClassCastException is thrown

     [ https://issues.apache.org/jira/browse/SPARK-32192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-32192:
------------------------------------

    Assignee:     (was: Apache Spark)

> Print column name when ClassCastException is thrown
> ---------------------------------------------------
>
>                 Key: SPARK-32192
>                 URL: https://issues.apache.org/jira/browse/SPARK-32192
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.4.3, 3.0.0
>            Reporter: xiepengjie
>            Priority: Minor
>             Fix For: 2.4.3
>
>
> We can reproduce the problem with the following demo:
> {code:java}
> drop table if exists cast_exception_test;
> create table cast_exception_test(c1 int, c2 string) partitioned by (dt string) stored as orc;
> insert into table cast_exception_test partition(dt='2020-04-08') values(1, 'jeff_1');
> {code}
> Querying this partition works fine in both Spark and Hive. But after we change the type of column c1 (the data already stored in the partition keeps its original int type), Hive still reads the partition correctly while Spark throws an exception:
> {code:java}
> alter table cast_exception_test change column c1 c1 string;
> -- Hive returns the correct result, but Spark throws a ClassCastException
> select * from cast_exception_test where dt='2020-04-08';
> {code}
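>
> For reference, the same failure can also be triggered through Spark's own API (a minimal sketch, assuming a spark-shell session with Hive support enabled, so the spark session object is already available):
> {code:java}
> // Reading the altered table through SparkSession triggers the same
> // ClassCastException as the SQL query above.
> spark.sql("select * from cast_exception_test where dt='2020-04-08'").show()
> {code}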
> The exception looks like this:
> {code:java}
> Caused by: java.lang.ClassCastException: org.apache.hadoop.io.IntWritable cannot be cast to org.apache.hadoop.io.Text
>     at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveWritableObject(WritableStringObjectInspector.java:41)
>     at org.apache.spark.sql.hive.HiveInspectors$$anonfun$unwrapperFor$23.apply(HiveInspectors.scala:547)
>     at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$21$$anonfun$apply$15.apply(TableReader.scala:515)
>     at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$21$$anonfun$apply$15.apply(TableReader.scala:515)
>     at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:532)
>     at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:522)
>     at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
>     at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
>     at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:256)
>     at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:838)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:838)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:326)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:290)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>     at org.apache.spark.scheduler.Task.run(Task.scala:121)
>     at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> {code}
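>
> A minimal sketch of the proposed improvement (an illustration only, not the actual patch): catch the ClassCastException raised while unwrapping a single field and rethrow it with the column name attached, so the error points at the offending column instead of only the Writable types. The names ColumnCastErrorDemo, unwrapField, and unwrapWithColumnName below are hypothetical, not Spark APIs; they only mimic the per-field unwrapping done in HadoopTableReader.fillObject:
> {code:java}
> object ColumnCastErrorDemo {
>   // Mimics a Hive ObjectInspector unwrapper that expects a string-typed value.
>   def unwrapField(raw: Any): String = raw.asInstanceOf[String]
>
>   // Proposed pattern: wrap the per-field unwrap so the rethrown exception
>   // names the column whose declared type no longer matches the stored data.
>   def unwrapWithColumnName(columnName: String, raw: Any): String =
>     try unwrapField(raw)
>     catch {
>       case e: ClassCastException =>
>         val wrapped = new ClassCastException(
>           s"Failed to read column '$columnName': ${e.getMessage}")
>         wrapped.initCause(e)
>         throw wrapped
>     }
>
>   def main(args: Array[String]): Unit = {
>     // The partition still stores c1 as an int, but the table schema now says string.
>     unwrapWithColumnName("c1", Integer.valueOf(1))
>   }
> }
> {code}
> With this wrapping applied in the per-field loop of HadoopTableReader.fillObject (TableReader.scala, where the stack trace above originates), the message would name column c1 directly.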



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org