Posted to issues@hive.apache.org by "Ziyang Zhao (JIRA)" <ji...@apache.org> on 2016/07/06 23:22:10 UTC

[jira] [Commented] (HIVE-13965) Empty resultset run into Exception when using Thrift Binary Serde

    [ https://issues.apache.org/jira/browse/HIVE-13965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365327#comment-15365327 ] 

Ziyang Zhao commented on HIVE-13965:
------------------------------------

org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constantPropagateForSubQuery
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_repair
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_table_nonprintable
org.apache.hadoop.hive.ql.metadata.TestHiveMetaStoreChecker.testPartitionsCheck
org.apache.hadoop.hive.ql.metadata.TestHiveMetaStoreChecker.testTableCheck
These nine also failed in previous precommit builds (builds 173-177).

org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority
This test case passed in my local VM.

> Empty resultset run into Exception when using Thrift Binary Serde
> -----------------------------------------------------------------
>
>                 Key: HIVE-13965
>                 URL: https://issues.apache.org/jira/browse/HIVE-13965
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2
>    Affects Versions: 2.1.0
>            Reporter: Ziyang Zhao
>            Assignee: Ziyang Zhao
>         Attachments: HIVE-13965.1.patch.txt
>
>
> This error can be reproduced by enabling the Thrift binary serde, using Beeline to connect to HiveServer2, and executing the following commands:
> >create table test3(num1 int);
> >create table test4(num1 int);
> >insert into test3 values(1);
> >insert into test4 values(2);
> >select * from test3 join test4 on test3.num1=test4.num1;
> The result set should be empty, but instead the query fails with an exception:
> Diagnostic Messages for this Task:
> Error: java.lang.RuntimeException: Hive Runtime Error while closing operators
>         at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:206)
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> Caused by: java.lang.NullPointerException
>         at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1029)
>         at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:641)
>         at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
>         at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
>         at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:655)
>         at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:195)
>         ... 8 more
> The error originates in FileSinkOperator.java.
> If the result set is empty, process() is never called, so the variable "fpaths" is never set. When execution reaches closeOp(), the following block runs:
> if (conf.isHiveServerQuery() && HiveConf.getBoolVar(hconf,
>     HiveConf.ConfVars.HIVE_SERVER2_THRIFT_RESULTSET_SERIALIZE_IN_TASKS) &&
>     serializer.getClass().getName().equalsIgnoreCase(ThriftJDBCBinarySerDe.class.getName())) {
>   try {
>     recordValue = serializer.serialize(null, inputObjInspectors[0]);
>     rowOutWriters = fpaths.outWriters;
>     rowOutWriters[0].write(recordValue);
>   } catch (SerDeException | IOException e) {
>     throw new HiveException(e);
>   }
> }
> Here fpaths is null, so the line "rowOutWriters = fpaths.outWriters;" throws the NullPointerException.
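To make the failure mode concrete, here is a minimal, self-contained Java sketch of the pattern described above: a field initialized lazily in process() is dereferenced unconditionally in closeOp(), so an empty result set (process() never called) triggers an NPE unless a null guard is added. All names here (MiniFileSink, fpaths, closeOp) are illustrative stand-ins, not Hive's actual implementation or the actual patch.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for FileSinkOperator's lazy-init pattern.
public class MiniFileSink {
    // Set lazily by process(); stays null when the result set is empty.
    private List<String> fpaths = null;
    private final List<String> written = new ArrayList<>();

    // Called once per row; first call initializes fpaths.
    void process(String row) {
        if (fpaths == null) {
            fpaths = new ArrayList<>();
            fpaths.add("output-writer-0");
        }
        written.add(row);
    }

    // Without the null check below, an empty result set would hit a
    // NullPointerException on fpaths.get(0), mirroring the bug.
    String closeOp() {
        if (fpaths == null) {
            // Empty result set: process() never ran, so skip the write.
            return "closed-empty";
        }
        return "closed-" + fpaths.get(0);
    }

    public static void main(String[] args) {
        MiniFileSink emptyQuery = new MiniFileSink();
        System.out.println(emptyQuery.closeOp()); // no rows: guard avoids NPE

        MiniFileSink nonEmpty = new MiniFileSink();
        nonEmpty.process("row1");
        System.out.println(nonEmpty.closeOp());
    }
}
```

The same shape of fix (check whether the lazily initialized state exists before writing the serialized result) is what the quoted closeOp() block needs for the empty-result case.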



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)