Posted to issues@hive.apache.org by "Sahil Takiar (JIRA)" <ji...@apache.org> on 2017/07/31 21:59:00 UTC

[jira] [Assigned] (HIVE-17219) NPE in SparkPartitionPruningSinkOperator#closeOp for query with partitioned join in subquery

     [ https://issues.apache.org/jira/browse/HIVE-17219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sahil Takiar reassigned HIVE-17219:
-----------------------------------


> NPE in SparkPartitionPruningSinkOperator#closeOp for query with partitioned join in subquery
> --------------------------------------------------------------------------------------------
>
>                 Key: HIVE-17219
>                 URL: https://issues.apache.org/jira/browse/HIVE-17219
>             Project: Hive
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0
>            Reporter: Sahil Takiar
>            Assignee: Sahil Takiar
>
> The following query throws an NPE in {{SparkPartitionPruningSinkOperator#closeOp}}: {{select * from partitioned_table1 where partitioned_table1.part_col in (select partitioned_table2.col from partitioned_table2 join partitioned_table3 on partitioned_table3.col = partitioned_table2.part_col)}}.
> The full stack trace is:
> {code}
> Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 22.0 failed 4 times, most recent failure: Lost task 1.3 in stage 22.0 (TID 37, 10.16.1.179): java.lang.IllegalStateException: Hit error while closing operators - failing tree: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
>         at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:194)
>         at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
>         at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:96)
>         at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
>         at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:147)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>         at org.apache.spark.scheduler.Task.run(Task.scala:85)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
>         at org.apache.hadoop.hive.ql.parse.spark.SparkPartitionPruningSinkOperator.closeOp(SparkPartitionPruningSinkOperator.java:95)
>         at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:709)
>         at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
>         at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
>         at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
>         at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
>         at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
>         at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
>         at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:171)
>         ... 11 more
> Caused by: java.lang.NullPointerException
>         at org.apache.hadoop.hive.ql.parse.spark.SparkPartitionPruningSinkOperator.flushToFile(SparkPartitionPruningSinkOperator.java:151)
>         at org.apache.hadoop.hive.ql.parse.spark.SparkPartitionPruningSinkOperator.closeOp(SparkPartitionPruningSinkOperator.java:93)
>         ... 19 more
> {code}
> The full setup is:
> {code}
> set hive.spark.dynamic.partition.pruning=true;
> create table partitioned_table1 (col int) partitioned by (part_col int);
> create table partitioned_table2 (col int) partitioned by (part_col int);
> create table partitioned_table3 (col int) partitioned by (part_col int);
> create table regular_table (col1 int, col2 int);
> insert into table regular_table values (0, 0), (1, 1), (2, 2);
> alter table partitioned_table1 add partition (part_col = 1);
> alter table partitioned_table1 add partition (part_col = 2);
> alter table partitioned_table1 add partition (part_col = 3);
> insert into table partitioned_table1 partition (part_col = 1) values (1), (2), (3);
> insert into table partitioned_table1 partition (part_col = 2) values (1), (2), (3);
> insert into table partitioned_table1 partition (part_col = 3) values (1), (2), (3);
> alter table partitioned_table2 add partition (part_col = 1);
> alter table partitioned_table2 add partition (part_col = 2);
> alter table partitioned_table2 add partition (part_col = 3);
> insert into table partitioned_table2 partition (part_col = 1) values (1), (2), (3);
> insert into table partitioned_table2 partition (part_col = 2) values (1), (2), (3);
> insert into table partitioned_table2 partition (part_col = 3) values (1), (2), (3);
> alter table partitioned_table3 add partition (part_col = 1);
> alter table partitioned_table3 add partition (part_col = 2);
> alter table partitioned_table3 add partition (part_col = 3);
> insert into table partitioned_table3 partition (part_col = 1) values (1), (2), (3);
> insert into table partitioned_table3 partition (part_col = 2) values (1), (2), (3);
> insert into table partitioned_table3 partition (part_col = 3) values (1), (2), (3);
> select * from partitioned_table1 where partitioned_table1.part_col in (select partitioned_table2.col from partitioned_table2 join partitioned_table3 on partitioned_table3.col = partitioned_table2.part_col);
> {code}
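> For diagnosis, running {{explain}} on the failing query in the same session shows whether the planner actually inserted a pruning sink. A hedged check (the exact operator label in the plan output may vary by Hive version):
> {code}
> -- Hedged diagnostic: with hive.spark.dynamic.partition.pruning=true, the plan
> -- should contain a Spark Partition Pruning Sink operator targeting
> -- partitioned_table1. Exact plan text may vary across Hive versions.
> explain
> select * from partitioned_table1
> where partitioned_table1.part_col in (
>   select partitioned_table2.col
>   from partitioned_table2
>   join partitioned_table3
>     on partitioned_table3.col = partitioned_table2.part_col);
> {code}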
> The NPE doesn't seem to occur when map-joins are enabled.
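> For completeness, automatic map-join conversion is controlled by {{hive.auto.convert.join}}; a sketch for checking both code paths (assuming otherwise-default configuration):
> {code}
> -- Sketch, assuming otherwise-default configuration: toggle automatic
> -- map-join conversion and re-run the query above to compare both paths.
> set hive.spark.dynamic.partition.pruning=true;
> set hive.auto.convert.join=false;  -- common join: hits the NPE in closeOp
> -- set hive.auto.convert.join=true; -- map-join: the NPE doesn't seem to occur
> select * from partitioned_table1
> where partitioned_table1.part_col in (
>   select partitioned_table2.col
>   from partitioned_table2
>   join partitioned_table3
>     on partitioned_table3.col = partitioned_table2.part_col);
> {code}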



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)