Posted to issues@spark.apache.org by "Manjay Kumar (Jira)" <ji...@apache.org> on 2020/09/09 10:26:00 UTC

[jira] [Comment Edited] (SPARK-17477) SparkSQL cannot handle schema evolution from Int -> Long when parquet files have Int as its type while hive metastore has Long as its type

    [ https://issues.apache.org/jira/browse/SPARK-17477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192763#comment-17192763 ] 

Manjay Kumar edited comment on SPARK-17477 at 9/9/20, 10:25 AM:
----------------------------------------------------------------

I tried both options, but neither worked for me.

 

I am trying to parse the parquet file into a Dataset using a case class with Encoders.
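
A minimal sketch of that read path, assuming a hypothetical case class (the path is the one from the stack trace below):

{quote}
import org.apache.spark.sql.{Encoders, SparkSession}

// Hypothetical case class; the Long field mirrors the bigint type
// declared in the Hive metastore.
case class Record(id: Long, name: String)

val spark = SparkSession.builder()
  .appName("parquet-encoder-read")
  .enableHiveSupport()
  .getOrCreate()

// Files whose footers declare the column as INT32 cannot be decoded
// into the Long field and fail with the ParquetDecodingException below.
val ds = spark.read.parquet("hdfs://path/to/parquet")
  .as(Encoders.product[Record])
ds.show()
{quote}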


was (Author: manjay7869):
I tried both options, but neither worked for me.

 

I am trying to parse the parquet file into a dataset using a case class with encoders.

> SparkSQL cannot handle schema evolution from Int -> Long when parquet files have Int as its type while hive metastore has Long as its type
> ------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-17477
>                 URL: https://issues.apache.org/jira/browse/SPARK-17477
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Gang Wu
>            Priority: Major
>
> When using SparkSession to read a Hive table that is stored as parquet files, and a column's type has evolved from int to long, some old parquet files store the column as int while newer files store it as long. In the Hive metastore, the type is long (bigint).
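> A minimal sketch of how such a table can arise (table and column names are hypothetical; on older Spark versions the ALTER may need to be run from Hive itself):
> {quote}
> // Old files are written while the column is still int.
> spark.sql("CREATE TABLE events (user_id INT) STORED AS PARQUET")
> spark.sql("INSERT INTO events VALUES (1)")          // parquet footer: INT32
> // Widen the metastore type; existing files are not rewritten.
> spark.sql("ALTER TABLE events CHANGE user_id user_id BIGINT")
> spark.sql("INSERT INTO events VALUES (2147483648)") // parquet footer: INT64
> {quote}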
> Therefore when I use the following:
> {quote}
> sparkSession.sql("select * from table").show()
> {quote}
> I got the following exception:
> {quote}
> 16/08/29 17:50:20 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 3.0 (TID 91, XXX): org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block 0 in file hdfs://path/to/parquet/1-part-r-00000-d8e4f5aa-b6b9-4cad-8432-a7ae7a590a93.gz.parquet
>        	at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:228)
>        	at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201)
>        	at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:36)
>        	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>        	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
>        	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:128)
>        	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
>        	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>        	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>        	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
>        	at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
>        	at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
>        	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
>        	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
>        	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>        	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>        	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>        	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>        	at org.apache.spark.scheduler.Task.run(Task.scala:85)
>        	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>        	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>        	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>        	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassCastException: org.apache.spark.sql.catalyst.expressions.MutableLong cannot be cast to org.apache.spark.sql.catalyst.expressions.MutableInt
>        	at org.apache.spark.sql.catalyst.expressions.SpecificMutableRow.setInt(SpecificMutableRow.scala:246)
>        	at org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter$RowUpdater.setInt(ParquetRowConverter.scala:161)
>        	at org.apache.spark.sql.execution.datasources.parquet.ParquetPrimitiveConverter.addInt(ParquetRowConverter.scala:85)
>        	at org.apache.parquet.column.impl.ColumnReaderImpl$2$3.writeValue(ColumnReaderImpl.java:249)
>        	at org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:365)
>        	at org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:405)
>        	at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:209)
>        	... 22 more
> {quote}
> But this kind of schema evolution (int => long) is valid in both Hive and Presto.
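> A commonly suggested workaround, sketched under the assumption that the old (int) and new (long) files live under separate paths (paths and column name are hypothetical):
> {quote}
> import org.apache.spark.sql.functions.col
> // Read each generation with its on-disk schema, upcast the old int
> // column to long, then union into one DataFrame.
> val oldDf = spark.read.parquet("hdfs://path/to/parquet/old")
>   .withColumn("user_id", col("user_id").cast("bigint"))
> val newDf = spark.read.parquet("hdfs://path/to/parquet/new")
> val all = oldDf.union(newDf)
> {quote}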


