Posted to user@spark.apache.org by "barge.nilesh" <ba...@gmail.com> on 2014/09/29 22:43:37 UTC

Schema change on Spark Hive (Parquet file format) table not working

I am using the following releases:
Spark 1.1 (built using sbt/sbt -Dhadoop.version=2.2.0 -Phive assembly),
Apache HDFS 2.2

My job can create Hive tables in Parquet format, add data to them, and read
that data back using HiveContext.
But after changing the schema (adding a column), the job can no longer read
the existing data and throws the following exception:
java.lang.ArrayIndexOutOfBoundsException: 2
    at org.apache.hadoop.hive.ql.io.parquet.serde.ArrayWritableObjectInspector.getStructFieldData(ArrayWritableObjectInspector.java:127)
    at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$1.apply(TableReader.scala:284)
    at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$1.apply(TableReader.scala:278)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$16.apply(RDD.scala:774)
    at org.apache.spark.rdd.RDD$$anonfun$16.apply(RDD.scala:774)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
    at org.apache.spark.scheduler.Task.run(Task.scala:54)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)


Please find the code snippet below:

    // Imports needed (Spark 1.1 Java API):
    //   org.apache.spark.SparkConf
    //   org.apache.spark.api.java.JavaRDD
    //   org.apache.spark.api.java.JavaSparkContext
    //   org.apache.spark.api.java.function.Function
    //   org.apache.spark.sql.api.java.JavaSchemaRDD
    //   org.apache.spark.sql.api.java.Row
    //   org.apache.spark.sql.api.java.StructType
    //   org.apache.spark.sql.hive.api.java.JavaHiveContext
    // plus java.util.ArrayList, java.util.List, and a logger.
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf()
                .setAppName("SchemaChangeTest")
                .set("spark.cores.max", "16")
                .set("spark.executor.memory", "8g");
        JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);
        JavaHiveContext hiveContext = new JavaHiveContext(sparkContext);

        List<String> people1List = new ArrayList<String>();
        people1List.add("Michael,30");
        people1List.add("William,31");
        JavaRDD<String> people1RDD = sparkContext.parallelize(people1List);

        // String-encoded schema #1
        String schema1String = "name STRING,age INT";
        // Generate the schema based on the schema string
        StructType people1Schema = getSchema(schema1String);
        // Convert records of the RDD (people) to Rows
        JavaRDD<Row> people1RowRDD = people1RDD.map(new Function<String, Row>() {
            public Row call(String record) throws Exception {
                String[] fields = record.split(",");
                return Row.create(fields[0], Integer.parseInt(fields[1].trim()));
            }
        });
        // Apply the schema & register as a temporary table
        hiveContext.applySchema(people1RowRDD, people1Schema)
                .registerTempTable("temp_table_people1");
        // Create the people table
        hiveContext.sql("CREATE EXTERNAL TABLE IF NOT EXISTS people_table "
                + "(name STRING, age INT) "
                + "ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe' "
                + "STORED AS "
                + "INPUTFORMAT 'parquet.hive.DeprecatedParquetInputFormat' "
                + "OUTPUTFORMAT 'parquet.hive.DeprecatedParquetOutputFormat'");
        // Add new data
        hiveContext.sql("INSERT INTO TABLE people_table "
                + "SELECT name, age FROM temp_table_people1");
        // Fetch rows and print
        JavaSchemaRDD people1TableRows =
                hiveContext.sql("SELECT * FROM people_table");
        logger.info(people1TableRows.collect());

        // Until this point everything is fine: the job creates the table,
        // adds data to it, and is able to read it back.

        //----------------------------------------------------
        //------------ Change Schema ------------------------
        //----------------------------------------------------
        hiveContext.sql("ALTER TABLE people_table ADD COLUMNS (gender STRING)");

        List<String> people2List = new ArrayList<String>();
        people2List.add("David,32,M");
        people2List.add("Lorena,33,F");
        JavaRDD<String> people2RDD = sparkContext.parallelize(people2List);

        // String-encoded schema #2
        String schema2String = "name STRING,age INT,gender STRING";
        // Generate the schema based on the schema string
        StructType people2Schema = getSchema(schema2String);
        // Convert records of the RDD (people) to Rows
        JavaRDD<Row> people2RowRDD = people2RDD.map(new Function<String, Row>() {
            public Row call(String record) throws Exception {
                String[] fields = record.split(",");
                return Row.create(fields[0], Integer.parseInt(fields[1].trim()),
                        fields[2].trim());
            }
        });
        // Apply the schema & register as a temporary table
        hiveContext.applySchema(people2RowRDD, people2Schema)
                .registerTempTable("temp_table_people2");
        // Add new data
        hiveContext.sql("INSERT INTO TABLE people_table "
                + "SELECT name, age, gender FROM temp_table_people2");

        // Fetch rows and print
        JavaSchemaRDD people2TableRows =
                hiveContext.sql("SELECT * FROM people_table");
        logger.info(people2TableRows.collect()); // Exception is thrown here
    }
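The getSchema(...) helper is not shown in the post; a minimal sketch of what
it could look like, following the programmatic-schema example in the Spark 1.1
SQL guide. The STRING/INT-only type mapping below is an assumption that covers
just the two schema strings used here:

    // Minimal sketch of the missing helper (assumed implementation).
    // Uses org.apache.spark.sql.api.java.DataType/StructField/StructType.
    public static StructType getSchema(String schemaString) {
        // Build a StructType from a comma-separated "name TYPE" list,
        // e.g. "name STRING,age INT". Only STRING and INT are handled.
        List<StructField> fields = new ArrayList<StructField>();
        for (String fieldSpec : schemaString.split(",")) {
            String[] parts = fieldSpec.trim().split("\\s+");
            DataType type = parts[1].equalsIgnoreCase("INT")
                    ? DataType.IntegerType
                    : DataType.StringType;
            fields.add(DataType.createStructField(parts[0], type, true));
        }
        return DataType.createStructType(fields);
    }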

Any pointers towards the root cause, a solution, or a workaround are
appreciated.





--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Schema-change-on-Spark-Hive-Parquet-file-format-table-not-working-tp15360.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.



Re: Schema change on Spark Hive (Parquet file format) table not working

Posted by sim <si...@swoop.com>.
Yes, I've found a number of problems with metadata management in Spark SQL. 

One core issue is SPARK-9764
<https://issues.apache.org/jira/browse/SPARK-9764>. Related issues are
SPARK-9342 <https://issues.apache.org/jira/browse/SPARK-9342>,
SPARK-9761 <https://issues.apache.org/jira/browse/SPARK-9761>, and
SPARK-9762 <https://issues.apache.org/jira/browse/SPARK-9762>.

I've also observed a case where, after an exception in ALTER TABLE, Spark
SQL thought a table had 0 rows while, in fact, all the data was still there.
I was not able to reproduce this one reliably, so I did not create a JIRA
issue for it.

Let's vote for these issues and get them resolved.



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Schema-change-on-Spark-Hive-Parquet-file-format-table-not-working-tp15360p24180.html



Re: Schema change on Spark Hive (Parquet file format) table not working

Posted by "barge.nilesh" <ba...@gmail.com>.
To find the root cause, I installed Hive 0.12 separately and ran the exact
same test through the Hive CLI, and it passed. So it looks like this is a
problem in Spark SQL.

Has anybody else faced this issue (Hive Parquet table schema change)?
Should I create a JIRA ticket for this?





--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Schema-change-on-Spark-Hive-Parquet-file-format-table-not-working-tp15360p15851.html



Re: Schema change on Spark Hive (Parquet file format) table not working

Posted by "barge.nilesh" <ba...@gmail.com>.
Code snippet, in short:

hiveContext.sql("*CREATE EXTERNAL TABLE IF NOT EXISTS people_table (name
String, age INT) ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'parquet.hive.DeprecatedParquetInputFormat'
OUTPUTFORMAT 'parquet.hive.DeprecatedParquetOutputFormat'*"); 
hiveContext.sql("*INSERT INTO TABLE people_table SELECT name, age FROM
temp_table_people1*"); 
hiveContext.sql("*SELECT * FROM people_table*"); ///Here, data read was
successful./ 
hiveContext.sql("*ALTER TABLE people_table ADD COLUMNS (gender STRING)*");
hiveContext.sql("*SELECT * FROM people_table*"); ///Not able to read
existing data and ArrayIndexOutOfBoundsException is thrown./
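
For what it's worth, the stack trace points at
ArrayWritableObjectInspector.getStructFieldData failing on field index 2,
i.e. the reader asking the old two-field rows for the newly added third
column. If that reading is correct, projecting only the original columns
might sidestep the error for the pre-existing data. A hedged, untested
sketch:

    // Untested workaround sketch (assumption based on the stack trace):
    // select only the columns that existed when the old Parquet files were
    // written, so the reader never requests field index 2 from a
    // two-element ArrayWritable.
    JavaSchemaRDD originalColumnsOnly =
            hiveContext.sql("SELECT name, age FROM people_table");
    logger.info(originalColumnsOnly.collect());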



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Schema-change-on-Spark-Hive-Parquet-file-format-table-not-working-tp15360p15415.html

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org