Posted to yarn-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2013/09/17 06:53:07 UTC

Failed: YARN-261 PreCommit Build #1944

Jira: https://issues.apache.org/jira/browse/YARN-261
Build: https://builds.apache.org/job/PreCommit-YARN-Build/1944/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3666 lines...]

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12603522/YARN-261.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 3 new or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 warning message.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

                  org.apache.hadoop.mapreduce.TestMRJobClient

                                      The following test timeouts occurred in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.mapreduce.v2.TestUberAM

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1944//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1944//console

This message is automatically generated.


======================================================================
======================================================================
    Adding comment to Jira.
======================================================================
======================================================================


Comment added.
e187b5f739305935cd778c50b2f0e8330c49808d logged out


======================================================================
======================================================================
    Finished build.
======================================================================
======================================================================


Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
FAILED:  org.apache.hadoop.mapreduce.TestMRJobClient.testJobClient

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
	at junit.framework.Assert.fail(Assert.java:50)
	at junit.framework.Assert.failNotEquals(Assert.java:287)
	at junit.framework.Assert.assertEquals(Assert.java:67)
	at junit.framework.Assert.assertEquals(Assert.java:199)
	at junit.framework.Assert.assertEquals(Assert.java:205)
	at org.apache.hadoop.mapreduce.TestMRJobClient.testJobList(TestMRJobClient.java:474)
	at org.apache.hadoop.mapreduce.TestMRJobClient.testJobClient(TestMRJobClient.java:112)



Success: YARN-1068 PreCommit Build #1948

Posted by Apache Jenkins Server <je...@builds.apache.org>.
Jira: https://issues.apache.org/jira/browse/YARN-1068
Build: https://builds.apache.org/job/PreCommit-YARN-Build/1948/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3208 lines...]

/bin/kill -9 18583 
kill: No such process
NOP




{color:green}+1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12603703/yarn-1068-3.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1948//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1948//console

This message is automatically generated.


======================================================================
======================================================================
    Adding comment to Jira.
======================================================================
======================================================================


Comment added.
32a5f4952198ca8b9dd1df6c17789edd005c45d1 logged out


======================================================================
======================================================================
    Finished build.
======================================================================
======================================================================


Archiving artifacts
Description set: YARN-1068
Recording test results
Email was triggered for: Success
Sending email for trigger: Success



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Failed: YARN-261 PreCommit Build #1947

Posted by Apache Jenkins Server <je...@builds.apache.org>.
Jira: https://issues.apache.org/jira/browse/YARN-261
Build: https://builds.apache.org/job/PreCommit-YARN-Build/1947/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4855 lines...]
{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12603696/YARN-261--n4.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 4 new or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

                  org.apache.hadoop.mapred.TestJobCounters
                  org.apache.hadoop.mapred.TestMiniMRClasspath
                  org.apache.hadoop.mapred.TestJobCleanup
                  org.apache.hadoop.mapred.TestClusterMapReduceTestCase

                                      The test build failed in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1947//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1947//console

This message is automatically generated.


======================================================================
======================================================================
    Adding comment to Jira.
======================================================================
======================================================================


Comment added.
85b5f557c9ebbdda6c39960165597e67291c2213 logged out


======================================================================
======================================================================
    Finished build.
======================================================================
======================================================================


Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
10 tests failed.
FAILED:  org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testMapReduce

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:180)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster$JobHistoryServerWrapper.serviceStart(MiniMRYarnCluster.java:165)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testMapReduceRestarting

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:351)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
	at org.jboss.netty.util.internal.DeadLockProofWorker.start(DeadLockProofWorker.java:38)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.<init>(AbstractNioSelector.java:95)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.<init>(AbstractNioWorker.java:51)
	at org.jboss.netty.channel.socket.nio.NioWorker.<init>(NioWorker.java:45)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:99)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:69)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:39)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:33)
	at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.<init>(NioServerSocketChannelFactory.java:149)
	at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.<init>(NioServerSocketChannelFactory.java:131)
	at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.<init>(NioServerSocketChannelFactory.java:115)
	at org.apache.hadoop.mapred.ShuffleHandler.serviceInit(ShuffleHandler.java:293)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:110)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:192)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:203)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:333)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testDFSRestart

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:351)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
	at org.jboss.netty.util.internal.DeadLockProofWorker.start(DeadLockProofWorker.java:38)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.<init>(AbstractNioSelector.java:95)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.<init>(AbstractNioWorker.java:51)
	at org.jboss.netty.channel.socket.nio.NioWorker.<init>(NioWorker.java:45)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:99)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:69)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:39)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:33)
	at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.<init>(NioServerSocketChannelFactory.java:149)
	at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.<init>(NioServerSocketChannelFactory.java:131)
	at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.<init>(NioServerSocketChannelFactory.java:115)
	at org.apache.hadoop.mapred.ShuffleHandler.serviceInit(ShuffleHandler.java:293)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:110)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:192)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:203)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:333)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase$ConfigurableMiniMRCluster.<init>(ClusterMapReduceTestCase.java:101)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:86)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testMRConfig

Error Message:
unable to create new native thread

Stack Trace:
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:980)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:954)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:146)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:844)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:268)
	at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:155)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:777)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:644)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
	at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
	at junit.framework.TestCase.runBare(TestCase.java:132)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


FAILED:  org.apache.hadoop.mapred.TestJobCleanup.testDefaultCleanupAndAbort

Error Message:
Done file "/home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-0/_SUCCESS" missing for job job_1379458425601_0001

Stack Trace:
java.lang.AssertionError: Done file "/home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-0/_SUCCESS" missing for job job_1379458425601_0001
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.mapred.TestJobCleanup.testSuccessfulJob(TestJobCleanup.java:171)
	at org.apache.hadoop.mapred.TestJobCleanup.testDefaultCleanupAndAbort(TestJobCleanup.java:271)


FAILED:  org.apache.hadoop.mapred.TestJobCleanup.testCustomAbort

Error Message:
Done file "/home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-1/_SUCCESS" missing for job job_1379458425601_0002

Stack Trace:
java.lang.AssertionError: Done file "/home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-1/_SUCCESS" missing for job job_1379458425601_0002
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.mapred.TestJobCleanup.testSuccessfulJob(TestJobCleanup.java:171)
	at org.apache.hadoop.mapred.TestJobCleanup.testCustomAbort(TestJobCleanup.java:291)


FAILED:  org.apache.hadoop.mapred.TestJobCleanup.testCustomCleanup

Error Message:
Done file "/home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-2/_custom_cleanup" missing for job job_1379458425601_0003

Stack Trace:
java.lang.AssertionError: Done file "/home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/test-dir/test-job-cleanup/output-2/_custom_cleanup" missing for job job_1379458425601_0003
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.mapred.TestJobCleanup.testSuccessfulJob(TestJobCleanup.java:171)
	at org.apache.hadoop.mapred.TestJobCleanup.testCustomCleanup(TestJobCleanup.java:314)
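
The three TestJobCleanup failures above all report a missing "done file": each test expects the output committer to have written a marker file (e.g. _SUCCESS, or a custom cleanup name) into the job output directory once the job finishes. The sketch below illustrates such a check under the assumption of a FileSystem/Path based helper; it is not the actual TestJobCleanup code, and the method and parameter names are invented.

    // Hedged sketch of a "done file" existence check (not the real TestJobCleanup code).
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import static org.junit.Assert.assertTrue;

    public class DoneFileCheckSketch {
      // outputDir, doneFileName, and jobId are illustrative parameters.
      public static void checkDoneFile(Path outputDir, String doneFileName,
          String jobId) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path doneFile = new Path(outputDir, doneFileName); // e.g. "_SUCCESS"
        assertTrue("Done file \"" + doneFile + "\" missing for job " + jobId,
            fs.exists(doneFile));
      }
    }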


FAILED:  org.apache.hadoop.mapred.TestJobCounters.testHeapUsageCounter

Error Message:
Job job_1379458591752_0001 failed!

Stack Trace:
java.lang.AssertionError: Job job_1379458591752_0001 failed!
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.mapred.TestJobCounters.runHeapUsageTestJob(TestJobCounters.java:632)
	at org.apache.hadoop.mapred.TestJobCounters.testHeapUsageCounter(TestJobCounters.java:678)


FAILED:  org.apache.hadoop.mapred.TestMiniMRClasspath.testClassPath

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
	at org.jboss.netty.util.internal.DeadLockProofWorker.start(DeadLockProofWorker.java:38)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.<init>(AbstractNioSelector.java:95)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.<init>(AbstractNioWorker.java:51)
	at org.jboss.netty.channel.socket.nio.NioWorker.<init>(NioWorker.java:45)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:99)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:69)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:39)
	at org.jboss.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:33)
	at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.<init>(NioServerSocketChannelFactory.java:149)
	at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.<init>(NioServerSocketChannelFactory.java:131)
	at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.<init>(NioServerSocketChannelFactory.java:115)
	at org.apache.hadoop.mapred.ShuffleHandler.serviceInit(ShuffleHandler.java:293)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:110)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:192)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:203)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:333)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.TestMiniMRClasspath.testClassPath(TestMiniMRClasspath.java:175)


FAILED:  org.apache.hadoop.mapred.TestMiniMRClasspath.testExternalWritable

Error Message:
java.lang.OutOfMemoryError: unable to create new native thread

Stack Trace:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:640)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:483)
	at org.apache.hadoop.util.Shell.run(Shell.java:417)
	at org.apache.hadoop.fs.Stat.getFileStatus(Stat.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem.getNativeFileLinkStatus(RawLocalFileSystem.java:808)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:740)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:525)
	at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:111)
	at org.apache.hadoop.fs.FilterFs.getFileStatus(FilterFs.java:117)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1106)
	at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1102)
	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1102)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createDir(DirectoryCollection.java:126)
	at org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.createNonExistentDirs(DirectoryCollection.java:85)
	at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.serviceInit(LocalDirsHandlerService.java:138)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.serviceInit(NodeHealthCheckerService.java:48)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:203)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:333)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapred.MiniMRClientClusterFactory.create(MiniMRClientClusterFactory.java:80)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:183)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:171)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:163)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:155)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:148)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:141)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:134)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:129)
	at org.apache.hadoop.mapred.TestMiniMRClasspath.testExternalWritable(TestMiniMRClasspath.java:207)



Failed: YARN-1203 PreCommit Build #1946

Posted by Apache Jenkins Server <je...@builds.apache.org>.
Jira: https://issues.apache.org/jira/browse/YARN-1203
Build: https://builds.apache.org/job/PreCommit-YARN-Build/1946/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4079 lines...]



{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12603689/YARN-1203.20131017.1.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The following test timeouts occurred in hadoop-common-project/hadoop-common hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

org.apache.hadoop.mapreduce.v2.app.TestRMContainerAllocator

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1946//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1946//console

This message is automatically generated.


======================================================================
======================================================================
    Adding comment to Jira.
======================================================================
======================================================================


Comment added.
5d205d7224b63e0483203ea1538d4d957a25800f logged out


======================================================================
======================================================================
    Finished build.
======================================================================
======================================================================


Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed

Failed: YARN-261 PreCommit Build #1945

Posted by Apache Jenkins Server <je...@builds.apache.org>.
Jira: https://issues.apache.org/jira/browse/YARN-261
Build: https://builds.apache.org/job/PreCommit-YARN-Build/1945/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3661 lines...]
{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12603535/YARN-261--n3.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 3 new or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

                  org.apache.hadoop.mapreduce.TestMRJobClient

                                      The following test timeouts occurred in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.mapreduce.v2.TestUberAM
org.apache.hadoop.mapred.TestNetworkedJob

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/1945//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1945//console

This message is automatically generated.


======================================================================
======================================================================
    Adding comment to Jira.
======================================================================
======================================================================


Comment added.
1f697aa3fdcd12f501851c1948df4a71be4699a9 logged out


======================================================================
======================================================================
    Finished build.
======================================================================
======================================================================


Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
FAILED:  org.apache.hadoop.mapreduce.TestMRJobClient.testJobClient

Error Message:
expected:<1> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<1> but was:<0>
	at junit.framework.Assert.fail(Assert.java:50)
	at junit.framework.Assert.failNotEquals(Assert.java:287)
	at junit.framework.Assert.assertEquals(Assert.java:67)
	at junit.framework.Assert.assertEquals(Assert.java:199)
	at junit.framework.Assert.assertEquals(Assert.java:205)
	at org.apache.hadoop.mapreduce.TestMRJobClient.testJobList(TestMRJobClient.java:474)
	at org.apache.hadoop.mapreduce.TestMRJobClient.testJobClient(TestMRJobClient.java:112)