Posted to common-dev@hadoop.apache.org by "Joseph Smith (JIRA)" <ji...@apache.org> on 2018/06/08 19:34:00 UTC

[jira] [Created] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

Joseph Smith created HADOOP-15524:
-------------------------------------

             Summary: BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
                 Key: HADOOP-15524
                 URL: https://issues.apache.org/jira/browse/HADOOP-15524
             Project: Hadoop Common
          Issue Type: Bug
          Components: io
            Reporter: Joseph Smith


BytesWritable.setSize uses Integer.MAX_VALUE as the capacity when it reallocates the internal array. On my environment, this causes an OOME:
{code:java}
Exception in thread "main" java.lang.OutOfMemoryError: Requested array size exceeds VM limit
{code}
The capacity has to be capped at Integer.MAX_VALUE - 2 (the largest byte[] my VM will allocate) to prevent this error.

Tested on OSX and CentOS 7 using Java version 1.8.0_131.
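
For reference, a minimal way to reproduce it (the class name is just illustrative; this assumes the current setSize behaviour of capping the grown capacity at Integer.MAX_VALUE, which is what the versions I looked at do):
{code:java}
import org.apache.hadoop.io.BytesWritable;

public class BytesWritableOome {
  public static void main(String[] args) {
    BytesWritable bw = new BytesWritable();
    // setSize grows the capacity by ~1.5x but caps it at Integer.MAX_VALUE,
    // so this triggers new byte[Integer.MAX_VALUE] inside setCapacity, which
    // my VM rejects with "Requested array size exceeds VM limit".
    bw.setSize(Integer.MAX_VALUE);
  }
}
{code}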

I noticed that java.util.ArrayList contains the following:
{code:java}
/**
 * The maximum size of array to allocate.
 * Some VMs reserve some header words in an array.
 * Attempts to allocate larger arrays may result in
 * OutOfMemoryError: Requested array size exceeds VM limit
 */
private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
{code}

BytesWritable.setSize should use something similar to prevent an OOME from occurring.
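
Something along these lines would work (just a sketch of the idea, not a tested patch; it keeps the existing 1.5x growth and only changes the cap):
{code:java}
// Sketch only: clamp capacity growth to the same limit ArrayList uses,
// instead of letting it reach Integer.MAX_VALUE.
private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

public void setSize(int size) {
  if (size > getCapacity()) {
    // Grow by ~1.5x as today, using long math to avoid int overflow,
    // but never request more than MAX_ARRAY_SIZE for the backing byte[].
    long newSize = Math.min(MAX_ARRAY_SIZE, (3L * size) / 2L);
    setCapacity((int) newSize);
  }
  this.size = size;
}
{code}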




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-dev-help@hadoop.apache.org