Posted to mapreduce-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2011/11/03 01:36:47 UTC
Hadoop-Mapreduce-22-branch - Build # 87 - Failure
See https://builds.apache.org/job/Hadoop-Mapreduce-22-branch/87/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 500297 lines...]
[junit] 11/11/03 00:32:26 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:48938, storageID=DS-475485639-67.195.138.25-48938-1320280345227, infoPort=36358, ipcPort=59430):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data3/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data4/current/finalized'}
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 59430
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/03 00:32:26 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/03 00:32:26 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 42200
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 0 on 42200: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 2 on 42200: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 1 on 42200: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server listener on 42200
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server Responder
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 25
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/03 00:32:26 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:44434, storageID=DS-908436179-67.195.138.25-44434-1320280345099, infoPort=55557, ipcPort=42200):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 42200
[junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/03 00:32:26 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/03 00:32:26 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 11/11/03 00:32:26 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 11/11/03 00:32:26 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 5 2
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 58221
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 0 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 2 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 5 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 8 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 9 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 1 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server listener on 58221
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 4 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 7 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 6 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 3 on 58221: exiting
[junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server Responder
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.89 sec
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:817: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:796: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/build.xml:87: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/raid/build.xml:60: Tests failed!
Total time: 193 minutes 53 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-3139
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED: junit.framework.TestSuite.org.apache.hadoop.mapred.TestFairSchedulerSystem
Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
at org.apache.hadoop.mapred.TestFairSchedulerSystem.setUp(TestFairSchedulerSystem.java:74)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1055)
at org.apache.hadoop.ipc.Client.call(Client.java:1031)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy6.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:235)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:275)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:249)
at org.apache.hadoop.mapreduce.Cluster.createRPCProxy(Cluster.java:86)
at org.apache.hadoop.mapreduce.Cluster.createClient(Cluster.java:98)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:74)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:456)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:435)
at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:322)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:416)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:504)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:206)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1164)
at org.apache.hadoop.ipc.Client.call(Client.java:1008)
FAILED: org.apache.hadoop.raid.TestRaidNode.testPathFilter
Error Message:
Too many open files at sun.nio.ch.IOUtil.initPipe(Native Method) at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49) at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:407) at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:322) at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159) at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:132) at java.io.BufferedInputStream.fill(BufferedInputStream.java:218) at java.io.BufferedInputStream.read1(BufferedInputStream.java:258) at java.io.BufferedInputStream.read(BufferedInputStream.java:317) at java.io.DataInputStream.read(DataInputStream.java:132) at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:122) at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:297) at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:273) at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:225) at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:193) at org.apache.hadoop.hdfs.BlockReader.read(BlockReader.java:136) at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:466) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:517) at java.io.DataInputStream.read(DataInputStream.java:132) at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138) at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117) at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95) at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74) at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147) at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867) at 
org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333) at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
Stack Trace:
java.io.IOException: Too many open files
at sun.nio.ch.IOUtil.initPipe(Native Method)
at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:407)
at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:322)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:132)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at java.io.DataInputStream.read(DataInputStream.java:132)
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:122)
at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:297)
at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:273)
at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:225)
at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:193)
at org.apache.hadoop.hdfs.BlockReader.read(BlockReader.java:136)
at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:466)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:517)
at java.io.DataInputStream.read(DataInputStream.java:132)
at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138)
at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117)
at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95)
at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74)
at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147)
at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867)
at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1028)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy11.recoverFile(Unknown Source)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:84)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy11.recoverFile(Unknown Source)
at org.apache.hadoop.raid.RaidShell.recover(RaidShell.java:272)
at org.apache.hadoop.raid.TestRaidNode.simulateError(TestRaidNode.java:576)
at org.apache.hadoop.raid.TestRaidNode.doTestPathFilter(TestRaidNode.java:331)
at org.apache.hadoop.raid.TestRaidNode.testPathFilter(TestRaidNode.java:257)
FAILED: org.apache.hadoop.streaming.TestDumpTypedBytes.testDumping
Error Message:
port out of range:-1
Stack Trace:
java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:118)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:519)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:459)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:459)
at org.apache.hadoop.hdfs.server.namenode.NameNode.activate(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:387)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:576)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1538)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:445)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:378)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:259)
at org.apache.hadoop.streaming.TestDumpTypedBytes.testDumping(TestDumpTypedBytes.java:42)
FAILED: org.apache.hadoop.streaming.TestLoadTypedBytes.testLoading
Error Message:
port out of range:-1
Stack Trace:
java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:118)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:519)
at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:459)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:459)
at org.apache.hadoop.hdfs.server.namenode.NameNode.activate(NameNode.java:403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:387)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:576)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1538)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:445)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:378)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:259)
at org.apache.hadoop.streaming.TestLoadTypedBytes.testLoading(TestLoadTypedBytes.java:42)
Hadoop-Mapreduce-22-branch - Build # 92 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-22-branch/92/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 502062 lines...]
[junit] 11/11/25 01:14:44 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:36428, storageID=DS-1315548783-67.195.138.25-36428-1322183683592, infoPort=60248, ipcPort=54238):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data3/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data4/current/finalized'}
[junit] 11/11/25 01:14:44 INFO ipc.Server: Stopping server on 54238
[junit] 11/11/25 01:14:44 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/25 01:14:44 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/25 01:14:44 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/25 01:14:44 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/25 01:14:44 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
[junit] 11/11/25 01:14:44 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
[junit] 11/11/25 01:14:44 INFO ipc.Server: Stopping server on 57206
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 0 on 57206: exiting
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 1 on 57206: exiting
[junit] 11/11/25 01:14:44 INFO ipc.Server: Stopping IPC Server listener on 57206
[junit] 11/11/25 01:14:44 INFO ipc.Server: Stopping IPC Server Responder
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 2 on 57206: exiting
[junit] 11/11/25 01:14:44 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
[junit] 11/11/25 01:14:44 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
[junit] 11/11/25 01:14:44 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:45770, storageID=DS-240987502-67.195.138.25-45770-1322183683461, infoPort=47732, ipcPort=57206):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
[junit] 11/11/25 01:14:44 INFO ipc.Server: Stopping server on 57206
[junit] 11/11/25 01:14:44 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/25 01:14:44 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/25 01:14:44 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/25 01:14:44 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/25 01:14:44 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
[junit] 11/11/25 01:14:44 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 11/11/25 01:14:44 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 11/11/25 01:14:44 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 4 6
[junit] 11/11/25 01:14:44 INFO ipc.Server: Stopping server on 55273
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 0 on 55273: exiting
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 1 on 55273: exiting
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 3 on 55273: exiting
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 2 on 55273: exiting
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 6 on 55273: exiting
[junit] 11/11/25 01:14:44 INFO ipc.Server: Stopping IPC Server listener on 55273
[junit] 11/11/25 01:14:44 INFO ipc.Server: Stopping IPC Server Responder
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 4 on 55273: exiting
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 5 on 55273: exiting
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 8 on 55273: exiting
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 9 on 55273: exiting
[junit] 11/11/25 01:14:44 INFO ipc.Server: IPC Server handler 7 on 55273: exiting
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.882 sec
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:817: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:796: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/build.xml:87: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/raid/build.xml:60: Tests failed!
Total time: 156 minutes 31 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating HADOOP-7861
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.raid.TestRaidNode.testPathFilter
Error Message:
Could not obtain block: blk_2194475199335531650_1015 file=/destraid/user/dhruba/raidtest/file2 at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:559) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:382) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:514) at java.io.DataInputStream.read(DataInputStream.java:132) at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138) at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117) at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95) at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74) at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147) at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867) at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333) at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
Stack Trace:
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: blk_2194475199335531650_1015 file=/destraid/user/dhruba/raidtest/file2
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:559)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:382)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:514)
at java.io.DataInputStream.read(DataInputStream.java:132)
at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138)
at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117)
at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95)
at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74)
at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147)
at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867)
at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1028)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy11.recoverFile(Unknown Source)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:84)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy11.recoverFile(Unknown Source)
at org.apache.hadoop.raid.RaidShell.recover(RaidShell.java:272)
at org.apache.hadoop.raid.TestRaidNode.simulateError(TestRaidNode.java:576)
at org.apache.hadoop.raid.TestRaidNode.doTestPathFilter(TestRaidNode.java:331)
at org.apache.hadoop.raid.TestRaidNode.testPathFilter(TestRaidNode.java:257)
Hadoop-Mapreduce-22-branch - Build # 91 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-22-branch/91/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 501531 lines...]
[junit] 11/11/21 01:13:16 INFO ipc.Server: Stopping server on 58441
[junit] 11/11/21 01:13:16 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/21 01:13:16 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/21 01:13:16 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/21 01:13:16 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/21 01:13:16 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
[junit] 11/11/21 01:13:16 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
[junit] 11/11/21 01:13:16 INFO ipc.Server: Stopping server on 50689
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 0 on 50689: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 1 on 50689: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 2 on 50689: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: Stopping IPC Server listener on 50689
[junit] 11/11/21 01:13:16 INFO ipc.Server: Stopping IPC Server Responder
[junit] 11/11/21 01:13:16 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 25
[junit] 11/11/21 01:13:16 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/21 01:13:16 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
[junit] 11/11/21 01:13:16 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:46401, storageID=DS-1060439588-67.195.138.25-46401-1321837995601, infoPort=59192, ipcPort=50689):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
[junit] 11/11/21 01:13:16 INFO ipc.Server: Stopping server on 50689
[junit] 11/11/21 01:13:16 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/21 01:13:16 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/21 01:13:16 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/21 01:13:16 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/21 01:13:16 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
[junit] 11/11/21 01:13:16 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 11/11/21 01:13:16 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 11/11/21 01:13:16 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 2Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 4 5
[junit] 11/11/21 01:13:16 INFO ipc.Server: Stopping server on 42923
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 1 on 42923: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 3 on 42923: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 6 on 42923: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 7 on 42923: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 4 on 42923: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 0 on 42923: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 2 on 42923: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: Stopping IPC Server listener on 42923
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 9 on 42923: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 8 on 42923: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: IPC Server handler 5 on 42923: exiting
[junit] 11/11/21 01:13:16 INFO ipc.Server: Stopping IPC Server Responder
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 22.273 sec
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:817: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:796: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/build.xml:87: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/raid/build.xml:60: Tests failed!
Total time: 155 minutes 42 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-2059
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.raid.TestRaidNode.testPathFilter
Error Message:
Could not obtain block: blk_7798474642348103960_1015 file=/destraid/user/dhruba/raidtest/file2
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:559)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:382)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:514)
at java.io.DataInputStream.read(DataInputStream.java:132)
at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138)
at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117)
at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95)
at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74)
at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147)
at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867)
at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
Stack Trace:
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: blk_7798474642348103960_1015 file=/destraid/user/dhruba/raidtest/file2
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:559)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:382)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:514)
at java.io.DataInputStream.read(DataInputStream.java:132)
at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138)
at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117)
at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95)
at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74)
at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147)
at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867)
at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1028)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy11.recoverFile(Unknown Source)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:84)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy11.recoverFile(Unknown Source)
at org.apache.hadoop.raid.RaidShell.recover(RaidShell.java:272)
at org.apache.hadoop.raid.TestRaidNode.simulateError(TestRaidNode.java:576)
at org.apache.hadoop.raid.TestRaidNode.doTestPathFilter(TestRaidNode.java:331)
at org.apache.hadoop.raid.TestRaidNode.testPathFilter(TestRaidNode.java:257)
Hadoop-Mapreduce-22-branch - Build # 90 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-22-branch/90/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 501847 lines...]
[junit] 11/11/19 01:14:37 INFO ipc.Server: Stopping server on 47047
[junit] 11/11/19 01:14:37 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/19 01:14:37 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/19 01:14:37 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/19 01:14:37 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/19 01:14:37 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
[junit] 11/11/19 01:14:37 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
[junit] 11/11/19 01:14:37 INFO ipc.Server: Stopping server on 53841
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 0 on 53841: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 2 on 53841: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 1 on 53841: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: Stopping IPC Server listener on 53841
[junit] 11/11/19 01:14:37 INFO ipc.Server: Stopping IPC Server Responder
[junit] 11/11/19 01:14:37 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 25
[junit] 11/11/19 01:14:37 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/19 01:14:37 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
[junit] 11/11/19 01:14:37 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:44582, storageID=DS-326083123-67.195.138.25-44582-1321665276859, infoPort=55554, ipcPort=53841):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
[junit] 11/11/19 01:14:37 INFO ipc.Server: Stopping server on 53841
[junit] 11/11/19 01:14:37 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/19 01:14:37 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/19 01:14:37 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/19 01:14:37 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/19 01:14:37 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
[junit] 11/11/19 01:14:37 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 11/11/19 01:14:37 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 11/11/19 01:14:37 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 6 4
[junit] 11/11/19 01:14:37 INFO ipc.Server: Stopping server on 55903
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 1 on 55903: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 3 on 55903: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 2 on 55903: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 4 on 55903: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 5 on 55903: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 0 on 55903: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 6 on 55903: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 7 on 55903: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 8 on 55903: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: IPC Server handler 9 on 55903: exiting
[junit] 11/11/19 01:14:37 INFO ipc.Server: Stopping IPC Server listener on 55903
[junit] 11/11/19 01:14:37 INFO ipc.Server: Stopping IPC Server Responder
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.931 sec
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:817: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:796: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/build.xml:87: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/raid/build.xml:60: Tests failed!
Total time: 157 minutes 14 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-3429
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED: org.apache.hadoop.raid.TestRaidNode.testPathFilter
Error Message:
Could not obtain block: blk_1215122297271296869_1015 file=/destraid/user/dhruba/raidtest/file2
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:559)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:382)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:514)
at java.io.DataInputStream.read(DataInputStream.java:132)
at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138)
at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117)
at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95)
at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74)
at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147)
at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867)
at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
Stack Trace:
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: blk_1215122297271296869_1015 file=/destraid/user/dhruba/raidtest/file2
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:559)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:382)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:514)
at java.io.DataInputStream.read(DataInputStream.java:132)
at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138)
at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117)
at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95)
at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74)
at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147)
at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867)
at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1028)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy11.recoverFile(Unknown Source)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:84)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy11.recoverFile(Unknown Source)
at org.apache.hadoop.raid.RaidShell.recover(RaidShell.java:272)
at org.apache.hadoop.raid.TestRaidNode.simulateError(TestRaidNode.java:576)
at org.apache.hadoop.raid.TestRaidNode.doTestPathFilter(TestRaidNode.java:331)
at org.apache.hadoop.raid.TestRaidNode.testPathFilter(TestRaidNode.java:257)
Hadoop-Mapreduce-22-branch - Build # 89 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-22-branch/89/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 502546 lines...]
[junit] 11/11/17 13:45:39 INFO ipc.Server: Stopping server on 40390
[junit] 11/11/17 13:45:39 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/17 13:45:39 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/17 13:45:39 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/17 13:45:39 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/17 13:45:39 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
[junit] 11/11/17 13:45:39 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
[junit] 11/11/17 13:45:39 INFO ipc.Server: Stopping server on 33845
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 0 on 33845: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 1 on 33845: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 2 on 33845: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: Stopping IPC Server listener on 33845
[junit] 11/11/17 13:45:39 INFO ipc.Server: Stopping IPC Server Responder
[junit] 11/11/17 13:45:39 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 21
[junit] 11/11/17 13:45:39 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/17 13:45:39 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
[junit] 11/11/17 13:45:39 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:47419, storageID=DS-294177327-67.195.138.25-47419-1321537538031, infoPort=55223, ipcPort=33845):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
[junit] 11/11/17 13:45:39 INFO ipc.Server: Stopping server on 33845
[junit] 11/11/17 13:45:39 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
[junit] 11/11/17 13:45:39 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
[junit] 11/11/17 13:45:39 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
[junit] 11/11/17 13:45:39 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
[junit] 11/11/17 13:45:39 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
[junit] 11/11/17 13:45:39 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 11/11/17 13:45:39 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 11/11/17 13:45:39 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 1Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 12 5
[junit] 11/11/17 13:45:39 INFO ipc.Server: Stopping server on 49244
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 0 on 49244: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 1 on 49244: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 2 on 49244: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 5 on 49244: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 3 on 49244: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: Stopping IPC Server listener on 49244
[junit] 11/11/17 13:45:39 INFO ipc.Server: Stopping IPC Server Responder
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 8 on 49244: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 6 on 49244: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 9 on 49244: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 7 on 49244: exiting
[junit] 11/11/17 13:45:39 INFO ipc.Server: IPC Server handler 4 on 49244: exiting
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 20.783 sec
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:817: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:796: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/build.xml:87: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/raid/build.xml:60: Tests failed!
Total time: 188 minutes 8 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-3311
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED: org.apache.hadoop.mapred.TestCapacitySchedulerWithJobTracker.testJobTrackerIntegration
Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
FAILED: org.apache.hadoop.mapred.gridmix.TestGridmixSubmission.testSerialSubmit
Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
FAILED: org.apache.hadoop.mapred.gridmix.TestSleepJob.testMapTasksOnlySleepJobs
Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
FAILED: org.apache.hadoop.raid.TestRaidNode.testPathFilter
Error Message:
Could not obtain block: blk_458401384656090374_1015 file=/destraid/user/dhruba/raidtest/file2
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:559)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:382)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:514)
at java.io.DataInputStream.read(DataInputStream.java:132)
at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138)
at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117)
at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95)
at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74)
at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147)
at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867)
at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
Stack Trace:
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: blk_458401384656090374_1015 file=/destraid/user/dhruba/raidtest/file2
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:559)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:382)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:514)
at java.io.DataInputStream.read(DataInputStream.java:132)
at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138)
at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117)
at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95)
at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74)
at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147)
at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867)
at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1028)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy11.recoverFile(Unknown Source)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:84)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy11.recoverFile(Unknown Source)
at org.apache.hadoop.raid.RaidShell.recover(RaidShell.java:272)
at org.apache.hadoop.raid.TestRaidNode.simulateError(TestRaidNode.java:576)
at org.apache.hadoop.raid.TestRaidNode.doTestPathFilter(TestRaidNode.java:331)
at org.apache.hadoop.raid.TestRaidNode.testPathFilter(TestRaidNode.java:257)
Hadoop-Mapreduce-22-branch - Build # 88 - Still Failing
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See https://builds.apache.org/job/Hadoop-Mapreduce-22-branch/88/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 191653 lines...]
[junit] 0.95:96670
[junit] Failed Reduce CDF --------
[junit] 0: -9223372036854775808--9223372036854775807
[junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358,
[junit] ===============
[junit] 2011-11-06 13:09:44,214 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,215 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,215 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,215 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,216 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,216 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,216 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,216 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,217 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,217 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,217 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,217 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,218 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,218 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,218 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,218 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,219 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,219 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,219 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
[junit] 2011-11-06 13:09:44,220 WARN rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
[junit] generated failed map runtime distribution
[junit] 100000: 18592--18592
[junit] 0.1:18592
[junit] 0.5:18592
[junit] 0.9:18592
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 1.81 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.25 sec
[junit] Running org.apache.hadoop.util.TestRunJar
[junit] Creating file/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/data/out
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.211 sec
checkfailure:
[touch] Creating /home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/testsfailed
run-test-mapred-all-withtestcaseonly:
run-test-mapred:
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:813: Tests failed!
Total time: 150 minutes 55 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-1118
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
REGRESSION: org.apache.hadoop.tools.TestHadoopArchives.testPathWithSpaces
Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.
Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.