Posted to mapreduce-dev@hadoop.apache.org by "Peter Bacsko (JIRA)" <ji...@apache.org> on 2018/02/13 12:14:00 UTC

[jira] [Created] (MAPREDUCE-7052) TestFixedLengthInputFormat#testFormatCompressedIn is flaky

Peter Bacsko created MAPREDUCE-7052:
---------------------------------------

             Summary: TestFixedLengthInputFormat#testFormatCompressedIn is flaky
                 Key: MAPREDUCE-7052
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7052
             Project: Hadoop Map/Reduce
          Issue Type: Bug
          Components: client, test
            Reporter: Peter Bacsko
            Assignee: Peter Bacsko


Sometimes the test case TestFixedLengthInputFormat#testFormatCompressedIn can fail with the following error:

{noformat}
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
	at org.apache.hadoop.mapred.TestFixedLengthInputFormat.runRandomTests(TestFixedLengthInputFormat.java:322)
	at org.apache.hadoop.mapred.TestFixedLengthInputFormat.testFormatCompressedIn(TestFixedLengthInputFormat.java:90)
{noformat}

*Root cause:* under special circumstances, the following line can return a huge number:

{noformat}
          // Test a split size that is less than record len
          numSplits = (int)(fileSize/Math.floor(recordLength/2));
{noformat}

For example, let {{seed}} be 2026428718. At iteration 19 this makes {{recordLength}} equal to 1, so the integer division {{recordLength/2}} evaluates to 0. {{Math.floor(0)}} returns 0.0, and dividing {{fileSize}} by 0.0 yields positive infinity. Casting that to {{int}} saturates to {{Integer.MAX_VALUE}}, so the test tries to create a huge {{InputSplit}} array and eventually hits the OOME.
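The arithmetic above can be reproduced in isolation. This is a minimal sketch (the {{fileSize}} value of 1024 is arbitrary, not taken from the test):

{noformat}
public class SplitOverflowDemo {
    public static void main(String[] args) {
        long fileSize = 1024;      // arbitrary positive file size
        int recordLength = 1;      // the problematic value from iteration 19

        // Integer division: 1 / 2 == 0, so Math.floor(0) == 0.0.
        // fileSize / 0.0 is positive infinity; the narrowing cast
        // to int saturates at Integer.MAX_VALUE (JLS 5.1.3).
        int numSplits = (int) (fileSize / Math.floor(recordLength / 2));

        System.out.println(numSplits); // prints 2147483647
    }
}
{noformat}

A simple guard such as {{Math.max(1, recordLength / 2)}} in the divisor would avoid the overflow; this is only a suggested direction, not a committed fix.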



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
