Posted to dev@hive.apache.org by "Xuefu Zhang (JIRA)" <ji...@apache.org> on 2014/12/11 16:44:13 UTC

[jira] [Resolved] (HIVE-8751) Null Pointer Exception when counter is used for stats during inserting overwrite partitioned tables [Spark Branch]

     [ https://issues.apache.org/jira/browse/HIVE-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xuefu Zhang resolved HIVE-8751.
-------------------------------
    Resolution: Cannot Reproduce

Cannot reproduce the problem on the latest branch; it must have been fixed already.

> Null Pointer Exception when counter is used for stats during inserting overwrite partitioned tables [Spark Branch]
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-8751
>                 URL: https://issues.apache.org/jira/browse/HIVE-8751
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Na Yang
>            Assignee: Na Yang
>
> The following queries cause a NullPointerException:
> {noformat}
> set hive.stats.dbclass=counter;
> set hive.stats.autogather=true;
> set hive.exec.dynamic.partition.mode=nonstrict;
> create table dummy (key string, value string) partitioned by (ds string, hr string);
> insert overwrite table dummy partition (ds='10',hr='11') select * from src;
> {noformat}
> Here is the stack trace:
> {noformat}
> 2014-11-05 15:30:42,621 ERROR [main] exec.Task (SparkTask.java:execute(116)) - Failed to execute spark task.
> java.lang.NullPointerException
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.getRequiredCounterPrefix(SparkTask.java:235)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:103)
> 	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
> 	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> 	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1644)
> 	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1404)
> 	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1216)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1043)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1033)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:247)
> 	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
> 	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
> 	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
> 	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
> 	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
> {noformat}
> The cause of this NullPointerException is that SparkTask tries to read the table's partition info in getRequiredCounterPrefix before any data have actually been inserted into the partitioned table, so the lookup returns null partition info.
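> A null check before deriving the counter prefixes would avoid the crash. The sketch below uses hypothetical names, not Hive's actual SparkTask code; it is a minimal, self-contained illustration of that guard:
> {noformat}
> // Hypothetical sketch; not the actual HIVE-8751 fix.
> import java.util.Collections;
> import java.util.List;
>
> public class CounterPrefixSketch {
>     // Stand-in for the partition lookup: the target partitions do not
>     // exist yet when the task plans its counters, so this returns null.
>     static List<String> lookupPartitionNames(String table) {
>         return null;
>     }
>
>     static List<String> requiredCounterPrefixes(String table) {
>         List<String> partitions = lookupPartitionNames(table);
>         if (partitions == null) {
>             // Guard: fall back to no prefixes instead of throwing an NPE.
>             return Collections.emptyList();
>         }
>         return partitions;
>     }
>
>     public static void main(String[] args) {
>         System.out.println(requiredCounterPrefixes("dummy")); // prints []
>     }
> }
> {noformat}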



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)