Posted to mapreduce-issues@hadoop.apache.org by "Andrew Olson (JIRA)" <ji...@apache.org> on 2015/11/06 16:18:27 UTC

[jira] [Reopened] (MAPREDUCE-21) NegativeArraySizeException in reducer with new api

     [ https://issues.apache.org/jira/browse/MAPREDUCE-21?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Olson reopened MAPREDUCE-21:
-----------------------------------

We encountered the stack trace in this issue's description a few days ago. The SequenceFile "corruption" (unreadability) occurs because of an integer overflow [1] when the BytesWritable size is greater than Integer.MAX_VALUE / 3 (about 682 MB). See [2] for a Stack Overflow discussion of this.

[1] https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/BytesWritable.java#L123
[2] http://stackoverflow.com/questions/24127304/negativearraysizeexception-when-creating-a-sequencefile-with-large-1gb-bytesw
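To illustrate: the linked setCapacity/setSize path grows the backing array with an expression of the form size * 3 / 2 evaluated in 32-bit int arithmetic, so the intermediate product size * 3 overflows once size exceeds Integer.MAX_VALUE / 3, yielding a negative capacity that is then passed to new byte[...]. A minimal standalone sketch of that arithmetic (not the actual Hadoop class, just the same int expression):

```java
public class BytesWritableOverflowDemo {
    public static void main(String[] args) {
        // Smallest size that trips the overflow: just past Integer.MAX_VALUE / 3 (~682 MB).
        int size = Integer.MAX_VALUE / 3 + 1;

        // The capacity-growth expression from the linked code, in plain int math.
        // size * 3 wraps around to a negative int before the division by 2.
        int newCapacity = size * 3 / 2;

        System.out.println("size        = " + size);
        System.out.println("newCapacity = " + newCapacity); // negative

        // new byte[newCapacity] would throw NegativeArraySizeException here,
        // which is exactly the exception in the stack trace below.
        System.out.println("negative?   = " + (newCapacity < 0));
    }
}
```

Promoting the intermediate product to long (e.g. (int) Math.min(Integer.MAX_VALUE, (long) size * 3 / 2)) would avoid the wraparound, though any single array is still capped at Integer.MAX_VALUE bytes.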

> NegativeArraySizeException in reducer with new api
> --------------------------------------------------
>
>                 Key: MAPREDUCE-21
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-21
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: task
>            Reporter: Amareshwari Sriramadasu
>
> I observed one of the reducers failing with NegativeArraySizeException with new api.
> The exception trace:
> java.lang.NegativeArraySizeException
> 	at org.apache.hadoop.io.BytesWritable.setCapacity(BytesWritable.java:119)
> 	at org.apache.hadoop.io.BytesWritable.setSize(BytesWritable.java:98)
> 	at org.apache.hadoop.io.BytesWritable.readFields(BytesWritable.java:153)
> 	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
> 	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
> 	at org.apache.hadoop.mapreduce.ReduceContext.nextKeyValue(ReduceContext.java:142)
> 	at org.apache.hadoop.mapreduce.ReduceContext.nextKey(ReduceContext.java:121)
> 	at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:189)
> 	at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:542)
> 	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:409)
> 	at org.apache.hadoop.mapred.Child.main(Child.java:159)
> The corresponding line in ReduceContext is 
> {code}
> line#142    key = keyDeserializer.deserialize(key);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)