Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/05/31 19:11:59 UTC

[GitHub] [spark] ajithme opened a new pull request #24762: [SPARK-27907][SQL] HiveUDAF with 0 rows throw NPE when try to serialize

URL: https://github.com/apache/spark/pull/24762
 
 
   ## What changes were proposed in this pull request?
   
   When a query returns zero rows, HiveUDAFFunction.serialize throws a NullPointerException. For example:
   
   ```
   create table abc(a int)
   insert into abc values (1)
   insert into abc values (2)
   select histogram_numeric(a,2) from abc where a=3 -- throws NPE
   ```
   
   ```
   Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 5, localhost, executor driver): java.lang.NullPointerException
   at org.apache.spark.sql.hive.HiveUDAFFunction.serialize(hiveUDFs.scala:477)
   at org.apache.spark.sql.hive.HiveUDAFFunction.serialize(hiveUDFs.scala:315)
   at org.apache.spark.sql.catalyst.expressions.aggregate.TypedImperativeAggregate.serializeAggregateBufferInPlace(interfaces.scala:570)
   at org.apache.spark.sql.execution.aggregate.AggregationIterator.$anonfun$generateResultProjection$6(AggregationIterator.scala:254)
   at org.apache.spark.sql.execution.aggregate.ObjectAggregationIterator.outputForEmptyGroupingKeyWithoutInput(ObjectAggregationIterator.scala:97)
   at org.apache.spark.sql.execution.aggregate.ObjectHashAggregateExec.$anonfun$doExecute$2(ObjectHashAggregateExec.scala:132)
   at org.apache.spark.sql.execution.aggregate.ObjectHashAggregateExec.$anonfun$doExecute$2$adapted(ObjectHashAggregateExec.scala:107)
   at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2(RDD.scala:839)
   at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndexInternal$2$adapted(RDD.scala:839)
   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:327)
   at org.apache.spark.rdd.RDD.iterator(RDD.scala:291)
   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:327)
   at org.apache.spark.rdd.RDD.iterator(RDD.scala:291)
   at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:94)
   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
   at org.apache.spark.scheduler.Task.run(Task.scala:122)
   at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:425)
   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1350)
   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:428)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:745)
   ```  
   
   Hence, add a null check to avoid the NPE.
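   
   As a rough illustration only (the actual change in hiveUDFs.scala may differ), the guard could look like the sketch below; serializeWithGuard, newEmptyBuffer and serializeNonNull are hypothetical names, not identifiers from the Spark code:
   
   ```scala
   // Guard sketch (not the exact Spark change): when no input rows were aggregated,
   // the aggregation buffer can still be null, so create an empty buffer before
   // serializing instead of letting serialization hit a NullPointerException.
   def serializeWithGuard[B >: Null](
       buffer: B,
       newEmptyBuffer: () => B,                  // e.g. the Hive evaluator's getNewAggregationBuffer
       serializeNonNull: B => Array[Byte]): Array[Byte] = {
     val nonNull = if (buffer == null) newEmptyBuffer() else buffer
     serializeNonNull(nonNull)
   }
   ```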
   
   ## How was this patch tested?
   
   Added a new unit test case.
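   
   For reference, a regression test along these lines might look like the sketch below (the suite placement and the expected NULL result for empty input are assumptions; sql, withTable, checkAnswer and Row are the usual Spark SQL test helpers):
   
   ```scala
   // Sketch of a regression test; intended to live in a Hive UDAF test suite that
   // mixes in the standard Spark SQL test utilities (QueryTest / SQLTestUtils).
   test("SPARK-27907: HiveUDAF with 0 rows should not throw NPE during serialize") {
     withTable("abc") {
       sql("create table abc(a int)")
       sql("insert into abc values (1)")
       sql("insert into abc values (2)")
       // The filter matches no rows, so the aggregation buffer is never updated.
       checkAnswer(
         sql("select histogram_numeric(a, 2) from abc where a = 3"),
         Row(null))
     }
   }
   ```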
