Posted to issues@hive.apache.org by "Sushanth Sowmyan (JIRA)" <ji...@apache.org> on 2016/01/21 23:48:39 UTC

[jira] [Commented] (HIVE-11470) NPE in DynamicPartFileRecordWriterContainer on null part-keys.

    [ https://issues.apache.org/jira/browse/HIVE-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15111541#comment-15111541 ] 

Sushanth Sowmyan commented on HIVE-11470:
-----------------------------------------

+1, LGTM. The reported test failures do not appear to be related to this patch.

> NPE in DynamicPartFileRecordWriterContainer on null part-keys.
> --------------------------------------------------------------
>
>                 Key: HIVE-11470
>                 URL: https://issues.apache.org/jira/browse/HIVE-11470
>             Project: Hive
>          Issue Type: Bug
>          Components: HCatalog
>    Affects Versions: 1.2.0
>            Reporter: Mithun Radhakrishnan
>            Assignee: Mithun Radhakrishnan
>         Attachments: HIVE-11470.1.patch, HIVE-11470.2.patch
>
>
> When partitioning data using {{HCatStorer}}, one sees the following NPE if a dynamic-partition key has a null value:
> {noformat}
> 2015-07-30 23:59:59,627 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.lang.NullPointerException
> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:473)
> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:436)
> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:416)
> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:256)
> at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
> at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.NullPointerException
> at org.apache.hive.hcatalog.mapreduce.DynamicPartitionFileRecordWriterContainer.getLocalFileWriter(DynamicPartitionFileRecordWriterContainer.java:141)
> at org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:110)
> at org.apache.hive.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:54)
> at org.apache.hive.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:309)
> at org.apache.hive.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:61)
> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
> at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:558)
> at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
> at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:471)
> ... 11 more
> {noformat}
> The reason is that {{DynamicPartitionFileRecordWriterContainer}} assumes that every dynamic-partition key has a non-null value when fetching a local file-writer instance:
> {code:title=DynamicPartitionFileRecordWriterContainer.java}
>   @Override
>   protected LocalFileWriter getLocalFileWriter(HCatRecord value) 
>     throws IOException, HCatException {
>     
>     OutputJobInfo localJobInfo = null;
>     // Calculate which writer to use from the remaining values - this needs to
>     // be done before we delete cols.
>     List<String> dynamicPartValues = new ArrayList<String>();
>     for (Integer colToAppend : dynamicPartCols) {
>       dynamicPartValues.add(value.get(colToAppend).toString()); // <-- YIKES! NPEs when the part-key value is null.
>     }
>     ...
>   }
> {code}
> The code must check for null and substitute {{"\_\_HIVE_DEFAULT_PARTITION\_\_"}} (the default value of {{hive.exec.default.partition.name}}), or equivalent.
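> A minimal sketch of such a null-check (hand-written here, not the contents of the attached patches; hard-coding {{"\_\_HIVE_DEFAULT_PARTITION\_\_"}} is an assumption, since the actual name is configurable via {{hive.exec.default.partition.name}}):
> {code:title=DynamicPartitionFileRecordWriterContainer.java}
>     List<String> dynamicPartValues = new ArrayList<String>();
>     for (Integer colToAppend : dynamicPartCols) {
>       Object partKeyValue = value.get(colToAppend);
>       // Map null part-keys to the default partition-name, instead of
>       // NPE-ing on null.toString().
>       dynamicPartValues.add(partKeyValue == null ?
>           "__HIVE_DEFAULT_PARTITION__" : partKeyValue.toString());
>     }
> {code}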


