Posted to issues@spark.apache.org by "roncenzhao (JIRA)" <ji...@apache.org> on 2017/01/11 08:55:58 UTC

[jira] [Commented] (SPARK-19175) Column-changed ORC table encounters 'IndexOutOfBoundsException' when reading the old schema files

    [ https://issues.apache.org/jira/browse/SPARK-19175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817717#comment-15817717 ] 

roncenzhao commented on SPARK-19175:
------------------------------------

I have two doubts:
1. In the method `HiveTableScanExec.addColumnMetadataToConf(conf)`, we set `serdeConstants.LIST_COLUMN_TYPES` and `serdeConstants.LIST_COLUMNS` into the `hadoopConf`.
2. In the method `HadoopTableReader.initializeLocalJobConfFunc(path, tableDesc)`, we set the table's properties, which include `serdeConstants.LIST_COLUMN_TYPES` and `serdeConstants.LIST_COLUMNS`, into the jobConf. (A sketch of both steps follows below.)
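A minimal sketch of what I mean, assuming simplified signatures and made-up column names (the real Spark methods take more parameters, and the separator used for the type list here is only illustrative):

```
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hive.serde.serdeConstants

// Illustrative only: both code paths stamp the table's CURRENT schema
// into the configuration that the ORC record reader later reads back.
def addColumnMetadataToConf(hadoopConf: Configuration): Unit = {
  val columnNames = Seq("c1", "c2")          // hypothetical surviving columns
  val columnTypes = Seq("bigint", "string")  // hypothetical matching types
  hadoopConf.set(serdeConstants.LIST_COLUMNS, columnNames.mkString(","))
  hadoopConf.set(serdeConstants.LIST_COLUMN_TYPES, columnTypes.mkString(","))
}
```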

I think these two points cause this problem. When I remove these two methods, the SQL runs successfully. (See the sketch of the failure mode just below.)
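My reading of the stack trace, which is an assumption on my part rather than something I have traced through the Hive source: after columns are dropped, the conf carries fewer column types than the old ORC files actually contain, so `RecordReaderFactory.getSchemaOnRead` ends up calling `subList` past the end of the type list. A toy sketch of that failure mode, with made-up column counts:

```
import java.util.ArrayList

// Made-up numbers: the conf now describes only the 60 surviving columns...
val schemaOnRead = new ArrayList[String]()
(1 to 60).foreach(i => schemaOnRead.add(s"col$i"))

// ...but the old file was written with 65 columns, so slicing the
// read-side type list by the file's column count overflows:
val fileColumnCount = 65
schemaOnRead.subList(0, fileColumnCount)
// throws java.lang.IndexOutOfBoundsException: toIndex = 65
```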

I don't know why we must set `serdeConstants.LIST_COLUMN_TYPES` and `serdeConstants.LIST_COLUMNS` into the jobConf.

Would anyone please tell me the reason? Thanks ~~~

> Column-changed ORC table encounters 'IndexOutOfBoundsException' when reading the old schema files
> -------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-19175
>                 URL: https://issues.apache.org/jira/browse/SPARK-19175
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.2, 2.1.0
>         Environment: spark2.0.2
>            Reporter: roncenzhao
>
> We have an ORC table called orc_test_tbl and have inserted some data into it.
> After that, we changed the table schema by dropping some columns.
> When reading the old schema files, we get the following exception.
> Hive, however, can read them successfully.
> ```
> java.lang.IndexOutOfBoundsException: toIndex = 65
>         at java.util.ArrayList.subListRangeCheck(ArrayList.java:962)
>         at java.util.ArrayList.subList(ArrayList.java:954)
>         at org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.getSchemaOnRead(RecordReaderFactory.java:161)
>         at org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.createTreeReader(RecordReaderFactory.java:66)
>         at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:202)
>         at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:539)
>         at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:183)
>         at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$OriginalReaderPair.<init>(OrcRawRecordMerger.java:226)
>         at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:437)
>         at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1215)
>         at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1113)
>         at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:245)
>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>         at org.apache.spark.scheduler.Task.run(Task.scala:86)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> ```
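> A minimal repro sketch along these lines (hypothetical table, column names, and DDL; on these Spark versions the ALTER may need to be run through the Hive CLI instead):
> ```
> spark.sql("CREATE TABLE orc_test_tbl (c1 BIGINT, c2 STRING, c3 STRING) STORED AS ORC")
> spark.sql("INSERT INTO orc_test_tbl VALUES (1, 'a', 'b')")
> // Narrow the schema; the already-written data files still hold all three columns.
> spark.sql("ALTER TABLE orc_test_tbl REPLACE COLUMNS (c1 BIGINT, c2 STRING)")
> // Reading the old files through the new, narrower schema now fails:
> spark.sql("SELECT * FROM orc_test_tbl").show()
> ```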


