Posted to issues@hive.apache.org by "KaiXu (JIRA)" <ji...@apache.org> on 2018/02/08 01:14:00 UTC

[jira] [Commented] (HIVE-14171) Parquet: Simple vectorization throws NPEs

    [ https://issues.apache.org/jira/browse/HIVE-14171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356325#comment-16356325 ] 

KaiXu commented on HIVE-14171:
------------------------------

Found a similar issue with hive.vectorized.use.row.serde.deserialize=true on TPC-DS query12 using the Parquet file format (a sketch of the assumed settings follows the stack trace below):

Job aborted due to stage failure: Task 76 in stage 1.0 failed 4 times, most recent failure: Lost task 76.3 in stage 1.0 (TID 35, skl-slave2): java.io.IOException: java.io.IOException: java.lang.NullPointerException
         at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
         at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
         at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:231)
         at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:141)
         at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:254)
         at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:208)
         at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
         at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
         at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:30)
         at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:83)
         at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
         at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:200)
         at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
         at org.apache.spark.scheduler.Task.run(Task.scala:86)
         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
         at java.lang.Thread.run(Thread.java:748)
 Caused by: java.io.IOException: java.lang.NullPointerException
         at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
         at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
         at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:355)
         at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:157)
         at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:51)
         at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
         at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:228)
         ... 17 more
 Caused by: java.lang.NullPointerException
         at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:206)
         at org.apache.hadoop.hive.ql.io.parquet.VectorizedParquetInputFormat$VectorizedParquetRecordReader.next(VectorizedParquetInputFormat.java:118)
         at org.apache.hadoop.hive.ql.io.parquet.VectorizedParquetInputFormat$VectorizedParquetRecordReader.next(VectorizedParquetInputFormat.java:51)
         at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
         ... 21 more
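
A minimal sketch of the settings this scenario involves. The exact session commands are not shown in this comment, so the snippet below is a hypothetical outline, not the reporter's exact reproduction; the query simply reuses the table from the original issue description quoted below.

{code}
-- Hypothetical reproduction outline; the exact session used above is not shown.
-- Enable vectorization through the row-serde deserialization path.
set hive.vectorized.execution.enabled=true;
set hive.vectorized.use.row.serde.deserialize=true;

-- Any scan of a Parquet-backed table through this path can hit the NPE in
-- ParquetRecordReaderWrapper.next(); cd_parquet is the table created in the
-- original issue description quoted below.
select count(1) from cd_parquet where cd_gender = 'F';
{code}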

> Parquet: Simple vectorization throws NPEs
> -----------------------------------------
>
>                 Key: HIVE-14171
>                 URL: https://issues.apache.org/jira/browse/HIVE-14171
>             Project: Hive
>          Issue Type: Bug
>          Components: File Formats, Vectorization
>    Affects Versions: 2.2.0
>            Reporter: Gopal V
>            Priority: Major
>              Labels: Parquet
>
> {code}
>  create temporary table cd_parquet stored as parquet as select * from customer_demographics;
> select count(1) from cd_parquet where cd_gender = 'F';
> {code}
> {code}
> Caused by: java.lang.NullPointerException
> 	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:206)
> 	at org.apache.hadoop.hive.ql.io.parquet.VectorizedParquetInputFormat$VectorizedParquetRecordReader.next(VectorizedParquetInputFormat.java:118)
> 	at org.apache.hadoop.hive.ql.io.parquet.VectorizedParquetInputFormat$VectorizedParquetRecordReader.next(VectorizedParquetInputFormat.java:51)
> 	at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
> 	... 17 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)