Posted to yarn-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2013/09/14 03:10:32 UTC

Failed: YARN-1130 PreCommit Build #1929

Jira: https://issues.apache.org/jira/browse/YARN-1130
Build: https://builds.apache.org/job/PreCommit-YARN-Build/1929/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7401 lines...]
                  org.apache.hadoop.mapreduce.TestMapReduceLazyOutput
                  org.apache.hadoop.mapreduce.v2.TestSpeculativeExecution
                  org.apache.hadoop.mapred.TestJobCleanup
                  org.apache.hadoop.mapreduce.v2.TestMRAMWithNonNormalizedCapabilities
                  org.apache.hadoop.mapred.TestReduceFetch
                  org.apache.hadoop.mapred.TestReduceFetchFromPartialMem
                  org.apache.hadoop.mapred.TestMerge
                  org.apache.hadoop.mapreduce.v2.TestMRJobs
                  org.apache.hadoop.mapreduce.TestMRJobClient
                  org.apache.hadoop.mapred.TestTaskCommit
                  org.apache.hadoop.mapreduce.TestChild
                  org.apache.hadoop.mapred.TestJobName
                  org.apache.hadoop.mapred.TestLazyOutput
                  org.apache.hadoop.mapreduce.security.TestBinaryTokenFile
                  org.apache.hadoop.mapreduce.v2.TestUberAM
                  org.apache.hadoop.mapred.TestMiniMRClientCluster
                  org.apache.hadoop.mapred.TestSpecialCharactersInOutputPath
                  org.apache.hadoop.mapreduce.v2.TestMRJobsWithHistoryService
                  org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter
                  org.apache.hadoop.ipc.TestSocketFactory
                  org.apache.hadoop.mapred.TestJobSysDirWithDFS

                                      The following test timeouts occurred in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common:

org.apache.hadoop.mapred.TestClusterMapReduceTestCase
org.apache.hadoop.conf.TestNoDefaultsJobConf

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1929//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1929//console

This message is automatically generated.


======================================================================
======================================================================
    Adding comment to Jira.
======================================================================
======================================================================


Comment added.
22fae3ba41ec4c0328572a5cf3971bf5766aa366 logged out


======================================================================
======================================================================
    Finished build.
======================================================================
======================================================================


Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
46 tests failed.
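
Most of the 46 failures below share two symptoms: "java.lang.OutOfMemoryError: unable to create new native thread" while MiniMRYarnCluster starts its services, and fork failures of the form 'Cannot run program "stat": ... error=11, Resource temporarily unavailable'. Both typically indicate thread/process exhaustion on the build slave (for example, the per-user process limit being hit) rather than a defect in the patch under test. The following is a minimal, hypothetical diagnostic sketch (not part of the Jenkins output; the class name SlaveThreadCheck is invented here) that prints the two numbers worth checking on such a slave:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.lang.management.ManagementFactory;

// Hypothetical helper: prints the JVM's live thread count and the
// per-user process limit, the two figures that matter when
// "unable to create new native thread" or errno 11 (EAGAIN) fork
// failures appear on a busy build slave.
public class SlaveThreadCheck {
    public static void main(String[] args) throws Exception {
        int liveThreads = ManagementFactory.getThreadMXBean().getThreadCount();
        System.out.println("Live JVM threads: " + liveThreads);

        // "ulimit" is a shell builtin, so it must be run through sh -c.
        Process p = new ProcessBuilder("sh", "-c", "ulimit -u").start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            System.out.println("Per-user process limit (ulimit -u): " + r.readLine());
        }
        p.waitFor();
    }
}
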
FAILED:  org.apache.hadoop.ipc.TestSocketFactory.testSocketFactory

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.ipc.TestSocketFactory.initAndStartMiniMRYarnCluster(TestSocketFactory.java:112)
	at org.apache.hadoop.ipc.TestSocketFactory.testSocketFactory(TestSocketFactory.java:85)


FAILED:  org.apache.hadoop.mapred.TestBlockLimits.testWithLimits

Error Message:
Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable

Stack Trace:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.ChecksumFileSystem.delete(ChecksumFileSystem.java:528)
	at org.apache.hadoop.mapred.TestBlockLimits.runCustomFormat(TestBlockLimits.java:67)
	at org.apache.hadoop.mapred.TestBlockLimits.testWithLimits(TestBlockLimits.java:53)


FAILED:  org.apache.hadoop.mapred.TestClusterMRNotification.testMR

Error Message:
Job failed!

Stack Trace:
java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:832)
	at org.apache.hadoop.mapred.NotificationTestCase.launchWordCount(NotificationTestCase.java:241)
	at org.apache.hadoop.mapred.NotificationTestCase.testMR(NotificationTestCase.java:156)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.mapred.TestJobCleanup.testDefaultCleanupAndAbort

Error Message:
Done file "/home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-0/_SUCCESS" missing for job job_1379116807019_0001

Stack Trace:
java.lang.AssertionError: Done file "/home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-0/_SUCCESS" missing for job job_1379116807019_0001
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.mapred.TestJobCleanup.testSuccessfulJob(TestJobCleanup.java:171)
	at org.apache.hadoop.mapred.TestJobCleanup.testDefaultCleanupAndAbort(TestJobCleanup.java:271)


FAILED:  org.apache.hadoop.mapred.TestJobCleanup.testCustomAbort

Error Message:
Done file "/home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-1/_SUCCESS" missing for job job_1379116807019_0002

Stack Trace:
java.lang.AssertionError: Done file "/home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-1/_SUCCESS" missing for job job_1379116807019_0002
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.mapred.TestJobCleanup.testSuccessfulJob(TestJobCleanup.java:171)
	at org.apache.hadoop.mapred.TestJobCleanup.testCustomAbort(TestJobCleanup.java:291)


FAILED:  org.apache.hadoop.mapred.TestJobCleanup.testCustomCleanup

Error Message:
No. of killed maps should be 1

Stack Trace:
java.lang.AssertionError: No. of killed maps should be 1
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.mapred.TestJobCleanup.testKilledJob(TestJobCleanup.java:247)
	at org.apache.hadoop.mapred.TestJobCleanup.testCustomCleanup(TestJobCleanup.java:324)


FAILED:  org.apache.hadoop.mapred.TestJobCounters.testHeapUsageCounter

Error Message:
Job job_1379116943238_0001 failed!

Stack Trace:
java.lang.AssertionError: Job job_1379116943238_0001 failed!
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.mapred.TestJobCounters.runHeapUsageTestJob(TestJobCounters.java:632)
	at org.apache.hadoop.mapred.TestJobCounters.testHeapUsageCounter(TestJobCounters.java:678)


FAILED:  org.apache.hadoop.mapred.TestJobName.testComplexName

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:180)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.mapred.TestJobName.testComplexNameWithRegex

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:180)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.mapred.TestJobSysDirWithDFS.testWithDFS

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:124)
	at org.apache.hadoop.mapred.TestJobSysDirWithDFS.testWithDFS(TestJobSysDirWithDFS.java:130)


FAILED:  org.apache.hadoop.mapred.TestLazyOutput.testLazyOutput

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.TestLazyOutput.testLazyOutput(TestLazyOutput.java:146)


FAILED:  org.apache.hadoop.mapred.TestMerge.testMerge

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:41)
	at org.apache.hadoop.mapred.TestMerge.testMerge(TestMerge.java:81)


FAILED:  org.apache.hadoop.mapred.TestMiniMRChildTask.org.apache.hadoop.mapred.TestMiniMRChildTask

Error Message:
Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable

Stack Trace:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
	at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:82)
	at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1834)
	at org.apache.hadoop.mapred.TestMiniMRChildTask.setup(TestMiniMRChildTask.java:320)


FAILED:  org.apache.hadoop.mapred.TestMiniMRClasspath.testClassPath

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.TestMiniMRClasspath.testClassPath(TestMiniMRClasspath.java:175)


FAILED:  org.apache.hadoop.mapred.TestMiniMRClasspath.testExternalWritable

Error Message:
java.io.IOException: NodeManager 2 failed to start

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.IOException: NodeManager 2 failed to start
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:347)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.TestMiniMRClasspath.testExternalWritable(TestMiniMRClasspath.java:207)


FAILED:  org.apache.hadoop.mapred.TestMiniMRClientCluster.testJob

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:92)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.junit.Assert.assertTrue(Assert.java:54)
	at org.apache.hadoop.mapred.TestMiniMRClientCluster.testJob(TestMiniMRClientCluster.java:162)


FAILED:  org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.testDistinctUsers

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.setUp(TestMiniMRWithDFSWithDistinctUsers.java:97)


FAILED:  org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.testMultipleSpills

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:483)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:111)
	at org.apache.hadoop.fs.FilterFs.getFileStatus(FilterFs.java:117)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createDir(DirectoryCollection.java:126)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createNonExistentDirs(DirectoryCollection.java:85)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.serviceInit(LocalDirsHandlerService.java:138)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.serviceInit(NodeHealthCheckerService.java:48)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:203)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:333)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.setUp(TestMiniMRWithDFSWithDistinctUsers.java:97)


FAILED:  org.apache.hadoop.mapred.TestNetworkedJob.testGetJobStatus

Error Message:
Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable

Stack Trace:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.ChecksumFileSystem.delete(ChecksumFileSystem.java:528)
	at org.apache.hadoop.mapred.TestNetworkedJob.testGetJobStatus(TestNetworkedJob.java:113)


FAILED:  org.apache.hadoop.mapred.TestNetworkedJob.testNetworkedJob

Error Message:
Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable

Stack Trace:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
	at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:82)
	at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1834)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:60)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:41)
	at org.apache.hadoop.mapred.TestNetworkedJob.testNetworkedJob(TestNetworkedJob.java:133)


FAILED:  org.apache.hadoop.mapred.TestNetworkedJob.testJobQueueClient

Error Message:
Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable

Stack Trace:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
	at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:82)
	at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1834)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:60)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:41)
	at org.apache.hadoop.mapred.TestNetworkedJob.testJobQueueClient(TestNetworkedJob.java:319)


FAILED:  org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.org.apache.hadoop.mapred.TestReduceFetchFromPartialMem

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.setUp(TestReduceFetchFromPartialMem.java:61)


FAILED:  org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.org.apache.hadoop.mapred.TestReduceFetchFromPartialMem

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.setUp(TestReduceFetchFromPartialMem.java:61)


FAILED:  org.apache.hadoop.mapred.TestSpecialCharactersInOutputPath.testJobWithDFS

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.TestSpecialCharactersInOutputPath.testJobWithDFS(TestSpecialCharactersInOutputPath.java:112)


FAILED:  org.apache.hadoop.mapred.TestTaskCommit.testTaskCommitLogFlush

Error Message:
File /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test/tmplog/syslog does not exist

Stack Trace:
java.io.FileNotFoundException: File /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test/tmplog/syslog does not exist
	at org.apache.hadoop.fs.Stat.parseExecResult(Stat.java:124)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:486)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
	at org.apache.hadoop.mapred.TestTaskCommit.testTaskCommitLogFlush(TestTaskCommit.java:358)


FAILED:  org.apache.hadoop.mapreduce.TestChild.testChild

Error Message:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [file:/tmp/hadoop-yarn/staging/history/done]

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [file:/tmp/hadoop-yarn/staging/history/done]
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:180)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.HadoopTestCase.setUp(HadoopTestCase.java:156)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [file:/tmp/hadoop-yarn/staging/history/done]
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:505)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:94)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:92)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:164)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.HadoopTestCase.setUp(HadoopTestCase.java:156)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.io.IOException: Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:111)
	at org.apache.hadoop.fs.FilterFs.getFileStatus(FilterFs.java:117)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102)
	at org.apache.hadoop.fs.FileContext$Util.exists(FileContext.java:1514)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:561)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:502)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:94)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:92)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:164)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.HadoopTestCase.setUp(HadoopTestCase.java:156)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.io.IOException: java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:111)
	at org.apache.hadoop.fs.FilterFs.getFileStatus(FilterFs.java:117)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102)
	at org.apache.hadoop.fs.FileContext$Util.exists(FileContext.java:1514)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:561)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:502)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:94)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:92)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:164)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.HadoopTestCase.setUp(HadoopTestCase.java:156)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.mapreduce.TestMRJobClient.testJobSubmissionSpecsAndFiles

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:180)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.mapreduce.TestMRJobClient.testJobClient

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:180)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.mapreduce.TestMapReduceLazyOutput.testLazyOutput

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapreduce.TestMapReduceLazyOutput.testLazyOutput(TestMapReduceLazyOutput.java:136)


FAILED:  org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter.testDefaultCleanupAndAbort

Error Message:
Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable

Stack Trace:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
	at org.apache.hadoop.mapreduce.MapReduceTestUtil.createJob(MapReduceTestUtil.java:359)
	at org.apache.hadoop.mapreduce.MapReduceTestUtil.createJob(MapReduceTestUtil.java:352)
	at org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter.testSuccessfulJob(TestJobOutputCommitter.java:146)
	at org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter.testDefaultCleanupAndAbort(TestJobOutputCommitter.java:224)


FAILED:  org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter.testCustomAbort

Error Message:
Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable

Stack Trace:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
	at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:82)
	at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1834)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:60)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.HadoopTestCase.setUp(HadoopTestCase.java:156)
	at org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter.setUp(TestJobOutputCommitter.java:59)


FAILED:  org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter.testCustomCleanup

Error Message:
Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable

Stack Trace:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
	at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:82)
	at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1834)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:60)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.HadoopTestCase.setUp(HadoopTestCase.java:156)
	at org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter.setUp(TestJobOutputCommitter.java:59)


FAILED:  org.apache.hadoop.mapreduce.security.TestBinaryTokenFile.org.apache.hadoop.mapreduce.security.TestBinaryTokenFile

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapreduce.security.TestBinaryTokenFile.setUp(TestBinaryTokenFile.java:208)


FAILED:  org.apache.hadoop.mapreduce.security.TestMRCredentials.org.apache.hadoop.mapreduce.security.TestMRCredentials

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:41)
	at org.apache.hadoop.mapreduce.security.TestMRCredentials.setUp(TestMRCredentials.java:64)


FAILED:  org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithClientCerts

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:41)
	at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.startCluster(TestEncryptedShuffle.java:104)
	at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithCerts(TestEncryptedShuffle.java:135)
	at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithClientCerts(TestEncryptedShuffle.java:164)


FAILED:  org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithoutClientCerts

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:483)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:111)
	at org.apache.hadoop.fs.FilterFs.getFileStatus(FilterFs.java:117)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createDir(DirectoryCollection.java:126)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createNonExistentDirs(DirectoryCollection.java:85)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.serviceInit(LocalDirsHandlerService.java:138)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.serviceInit(NodeHealthCheckerService.java:48)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:203)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:333)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:41)
	at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.startCluster(TestEncryptedShuffle.java:104)
	at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithCerts(TestEncryptedShuffle.java:135)
	at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithoutClientCerts(TestEncryptedShuffle.java:169)


FAILED:  org.apache.hadoop.mapreduce.v2.TestMRAMWithNonNormalizedCapabilities.testJobWithNonNormalizedCapabilities

Error Message:
Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable

Stack Trace:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
	at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:82)
	at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1834)
	at org.apache.hadoop.mapreduce.v2.TestMRAMWithNonNormalizedCapabilities.setup(TestMRAMWithNonNormalizedCapabilities.java:76)


FAILED:  org.apache.hadoop.mapreduce.v2.TestMRAppWithCombiner.org.apache.hadoop.mapreduce.v2.TestMRAppWithCombiner

Error Message:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [file:/tmp/hadoop-yarn/staging/history/done]

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [file:/tmp/hadoop-yarn/staging/history/done]
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:111)
	at org.apache.hadoop.fs.FilterFs.getFileStatus(FilterFs.java:117)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102)
	at org.apache.hadoop.fs.FileContext$Util.exists(FileContext.java:1514)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:561)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:502)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:94)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:92)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:164)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapreduce.v2.TestMRAppWithCombiner.setup(TestMRAppWithCombiner.java:80)


FAILED:  org.apache.hadoop.mapreduce.v2.TestMRJobs.org.apache.hadoop.mapreduce.v2.TestMRJobs

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapreduce.v2.TestMRJobs.setup(TestMRJobs.java:130)


FAILED:  org.apache.hadoop.mapreduce.v2.TestMRJobsWithHistoryService.testJobHistoryData

Error Message:
Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable

Stack Trace:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=11, Resource temporarily unavailable
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
	at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:82)
	at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1834)
	at org.apache.hadoop.mapreduce.v2.TestMRJobsWithHistoryService.setup(TestMRJobsWithHistoryService.java:97)


FAILED:  org.apache.hadoop.mapreduce.v2.TestMROldApiJobs.org.apache.hadoop.mapreduce.v2.TestMROldApiJobs

Error Message:
unable to create new native thread

Stack Trace:
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:483)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
	at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:82)
	at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1834)
	at org.apache.hadoop.mapreduce.v2.TestMROldApiJobs.setup(TestMROldApiJobs.java:86)


FAILED:  org.apache.hadoop.mapreduce.v2.TestMiniMRProxyUser.testValidProxyUser

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:483)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:111)
	at org.apache.hadoop.fs.FilterFs.getFileStatus(FilterFs.java:117)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createDir(DirectoryCollection.java:126)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createNonExistentDirs(DirectoryCollection.java:85)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.serviceInit(LocalDirsHandlerService.java:138)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.serviceInit(NodeHealthCheckerService.java:48)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:203)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:333)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapreduce.v2.TestMiniMRProxyUser.setUp(TestMiniMRProxyUser.java:85)


FAILED:  org.apache.hadoop.mapreduce.v2.TestNonExistentJob.testGetInvalidJob

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at java.lang.UNIXProcess$1.run(UNIXProcess.java:141)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:103)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:111)
	at org.apache.hadoop.fs.FilterFs.getFileStatus(FilterFs.java:117)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createDir(DirectoryCollection.java:126)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createNonExistentDirs(DirectoryCollection.java:85)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.serviceInit(LocalDirsHandlerService.java:138)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.serviceInit(NodeHealthCheckerService.java:48)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:203)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:333)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapreduce.v2.TestNonExistentJob.setUp(TestNonExistentJob.java:72)


FAILED:  org.apache.hadoop.mapreduce.v2.TestRMNMInfo.org.apache.hadoop.mapreduce.v2.TestRMNMInfo

Error Message:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [file:/tmp/hadoop-yarn/staging/history/done]

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [file:/tmp/hadoop-yarn/staging/history/done]
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:111)
	at org.apache.hadoop.fs.FilterFs.getFileStatus(FilterFs.java:117)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102)
	at org.apache.hadoop.fs.FileContext$Util.exists(FileContext.java:1514)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:561)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:502)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:94)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:92)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:164)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapreduce.v2.TestRMNMInfo.setup(TestRMNMInfo.java:80)


FAILED:  org.apache.hadoop.mapreduce.v2.TestSpeculativeExecution.org.apache.hadoop.mapreduce.v2.TestSpeculativeExecution

Error Message:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [file:/tmp/hadoop-yarn/staging/history/done]

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [file:/tmp/hadoop-yarn/staging/history/done]
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:447)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:111)
	at org.apache.hadoop.fs.FilterFs.getFileStatus(FilterFs.java:117)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102)
	at org.apache.hadoop.fs.FileContext$Util.exists(FileContext.java:1514)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:561)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:502)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:94)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:92)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:164)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapreduce.v2.TestSpeculativeExecution.setup(TestSpeculativeExecution.java:118)


FAILED:  org.apache.hadoop.mapreduce.v2.TestUberAM.org.apache.hadoop.mapreduce.v2.TestUberAM

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:483)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:111)
	at org.apache.hadoop.fs.FilterFs.getFileStatus(FilterFs.java:117)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createDir(DirectoryCollection.java:126)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createNonExistentDirs(DirectoryCollection.java:85)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.serviceInit(LocalDirsHandlerService.java:138)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.serviceInit(NodeHealthCheckerService.java:48)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:203)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:333)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapreduce.v2.TestMRJobs.setup(TestMRJobs.java:130)
	at org.apache.hadoop.mapreduce.v2.TestUberAM.setup(TestUberAM.java:45)



Failed: YARN-540 PreCommit Build #1930

Posted by Apache Jenkins Server <je...@builds.apache.org>.
Jira: https://issues.apache.org/jira/browse/YARN-540
Build: https://builds.apache.org/job/PreCommit-YARN-Build/1930/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4 lines...]
U         hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
U         hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestOutOfOrderWrite.java
U         hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
U         hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
U         hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
U         hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java
At revision 1523145
Reverting /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/nightly to depth infinity with ignoreExternals: false
Updating http://svn.apache.org/repos/asf/hadoop/nightly at revision '2013-09-13T23:43:04.213 +0000'
At revision 1523145
no change for http://svn.apache.org/repos/asf/hadoop/common/trunk since the previous build
no change for http://svn.apache.org/repos/asf/hadoop/nightly since the previous build
No emails were triggered.
FATAL: Unable to produce a script file
hudson.util.IOException2: Failed to create a temp file on /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build
	at hudson.FilePath.createTextTempFile(FilePath.java:1251)
	at hudson.tasks.CommandInterpreter.createScriptFile(CommandInterpreter.java:115)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:75)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:60)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:782)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:160)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:567)
	at hudson.model.Run.execute(Run.java:1603)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:246)
Caused by: hudson.util.IOException2: remote file operation failed: /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build at hudson.remoting.Channel@5c6e8a13:hadoop2
	at hudson.FilePath.act(FilePath.java:905)
	at hudson.FilePath.act(FilePath.java:882)
	at hudson.FilePath.createTextTempFile(FilePath.java:1225)
	... 12 more
Caused by: java.io.IOException: No space left on device
	at java.io.FileOutputStream.writeBytes(Native Method)
	at java.io.FileOutputStream.write(FileOutputStream.java:282)
	at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
	at sun.nio.cs.StreamEncoder.implClose(StreamEncoder.java:297)
	at sun.nio.cs.StreamEncoder.close(StreamEncoder.java:130)
	at java.io.OutputStreamWriter.close(OutputStreamWriter.java:216)
	at hudson.FilePath$15.invoke(FilePath.java:1244)
	at hudson.FilePath$15.invoke(FilePath.java:1225)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2423)
	at hudson.remoting.UserRequest.perform(UserRequest.java:118)
	at hudson.remoting.UserRequest.perform(UserRequest.java:48)
	at hudson.remoting.Request$2.run(Request.java:326)
	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Editable Email Notification is waiting for a checkpoint on PreCommit-YARN-Build #1929
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.