Posted to issues@hive.apache.org by "Andrew Olson (JIRA)" <ji...@apache.org> on 2018/06/08 19:33:00 UTC

[jira] [Resolved] (HIVE-11105) NegativeArraySizeException from org.apache.hadoop.io.BytesWritable.setCapacity during serialization phase

     [ https://issues.apache.org/jira/browse/HIVE-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Olson resolved HIVE-11105.
---------------------------------
    Resolution: Fixed

Resolving this: the Hadoop dependency was upgraded from 2.7.2 [1] in Hive 2.3.3 to 3.1.0 [2] in Hive 3.0.0, and the underlying overflow (HADOOP-11901) was fixed in Hadoop 2.8.0, so Hive 3.0.0 no longer pulls in the broken BytesWritable.

[1] https://github.com/apache/hive/blob/rel/release-2.3.3/pom.xml#L141
[2] https://github.com/apache/hive/blob/rel/release-3.0.0/pom.xml#L149
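
For reference, the class of fix that HADOOP-11901 represents is to perform the capacity arithmetic in long and clamp the result, instead of letting int math wrap. A minimal sketch under that assumption (illustrative names, not the actual Hadoop source):

{code}
// Sketch: compute the grown capacity in long, then clamp it to the
// largest length a Java array can have, so the result never goes negative.
public class SafeCapacitySketch {
  static int grow(int size) {
    return (int) Math.min((long) Integer.MAX_VALUE, size * 3L / 2L);
  }

  public static void main(String[] args) {
    System.out.println(grow(800_000_000)); // 1200000000, not a negative value
  }
}
{code}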

> NegativeArraySizeException from org.apache.hadoop.io.BytesWritable.setCapacity during serialization phase
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-11105
>                 URL: https://issues.apache.org/jira/browse/HIVE-11105
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Priyesh Raj
>            Priority: Major
>             Fix For: 3.0.0
>
>
> I am getting this exception while running a query on a very large data set. The failure surfaces in Hive, but my understanding is that the root cause is Hadoop's BytesWritable.setCapacity: the capacity is computed with int arithmetic, which cannot represent such a large size and wraps negative (see the sketch after the stack trace).
> Please look into it.
> {code}
> org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NegativeArraySizeException
> 	at org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1141)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:577)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
> 	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
> 	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:227)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
> 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
> 	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NegativeArraySizeException
> 	at org.apache.hadoop.hive.ql.exec.GroupByOperator.flush(GroupByOperator.java:1099)
> 	at org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1138)
> 	... 13 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NegativeArraySizeException
> 	at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:336)
> 	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793)
> 	at org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:1064)
> 	at org.apache.hadoop.hive.ql.exec.GroupByOperator.flush(GroupByOperator.java:1082)
> 	... 14 more
> Caused by: java.lang.NegativeArraySizeException
> 	at org.apache.hadoop.io.BytesWritable.setCapacity(BytesWritable.java:144)
> 	at org.apache.hadoop.io.BytesWritable.setSize(BytesWritable.java:123)
> 	at org.apache.hadoop.io.BytesWritable.set(BytesWritable.java:171)
> 	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:213)
> 	at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.makeValueWritable(ReduceSinkOperator.java:456)
> 	at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:316)
> {code}
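
To make the reported root cause concrete, here is a minimal, self-contained sketch (not the actual Hadoop source) of how an int-based capacity computation, assuming a pre-HADOOP-11901 style growth factor of size * 3 / 2, wraps negative and raises exactly this exception:

{code}
// The multiplication is evaluated in int, so for any size above
// Integer.MAX_VALUE / 3 (~716 million bytes) the result wraps negative.
public class NegativeCapacityDemo {
  public static void main(String[] args) {
    int size = 800_000_000;             // ~0.8 GB serialized record
    int capacity = size * 3 / 2;        // wraps to -947483648
    System.out.println(capacity);
    byte[] buffer = new byte[capacity]; // java.lang.NegativeArraySizeException
  }
}
{code}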


