Posted to issues@spark.apache.org by "Nilesh Barge (JIRA)" <ji...@apache.org> on 2014/10/16 21:50:33 UTC

[jira] [Created] (SPARK-3978) Schema change on Spark-Hive (Parquet file format) table not working

Nilesh Barge created SPARK-3978:
-----------------------------------

             Summary: Schema change on Spark-Hive (Parquet file format) table not working
                 Key: SPARK-3978
                 URL: https://issues.apache.org/jira/browse/SPARK-3978
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 1.1.0
            Reporter: Nilesh Barge


On the following releases: 
Spark 1.1.0 (built using sbt/sbt -Dhadoop.version=2.2.0 -Phive assembly), Apache HDFS 2.2 

A Spark job can create Hive tables stored as Parquet, insert data into them, and read them back using HiveContext. 
But after the schema is changed, the Spark job can no longer read the existing data and throws the exception below. The failing index (2) corresponds to the newly added third column, which suggests the reader builds its ObjectInspector from the new three-column table schema while the existing Parquet files still contain only the original two fields: 
java.lang.ArrayIndexOutOfBoundsException: 2 
        at org.apache.hadoop.hive.ql.io.parquet.serde.ArrayWritableObjectInspector.getStructFieldData(ArrayWritableObjectInspector.java:127) 
        at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$1.apply(TableReader.scala:284) 
        at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$1.apply(TableReader.scala:278) 
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
        at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48) 
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103) 
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47) 
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273) 
        at scala.collection.AbstractIterator.to(Iterator.scala:1157) 
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265) 
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157) 
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252) 
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157) 
        at org.apache.spark.rdd.RDD$$anonfun$16.apply(RDD.scala:774) 
        at org.apache.spark.rdd.RDD$$anonfun$16.apply(RDD.scala:774) 
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121) 
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121) 
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62) 
        at org.apache.spark.scheduler.Task.run(Task.scala:54) 
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177) 
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
        at java.lang.Thread.run(Thread.java:744)


Code snippet (abridged): 

hiveContext.sql("CREATE EXTERNAL TABLE IF NOT EXISTS people_table (name String, age INT) ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe' STORED AS INPUTFORMAT 'parquet.hive.DeprecatedParquetInputFormat' OUTPUTFORMAT 'parquet.hive.DeprecatedParquetOutputFormat'"); 
hiveContext.sql("INSERT INTO TABLE people_table SELECT name, age FROM temp_table_people1"); 
hiveContext.sql("SELECT * FROM people_table"); //Here, data read was successful.  
hiveContext.sql("ALTER TABLE people_table ADD COLUMNS (gender STRING)"); 
hiveContext.sql("SELECT * FROM people_table"); //Not able to read existing data and ArrayIndexOutOfBoundsException is thrown.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org