Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2016/03/08 23:55:23 UTC

Hadoop-Hdfs-trunk-Java8 - Build # 984 - Still Failing

See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/984/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 8977 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:07 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:50 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.097 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:54 h
[INFO] Finished at: 2016-03-08T22:54:13+00:00
[INFO] Final Memory: 70M/476M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
5 tests failed.
FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
org/apache/hadoop/fs/MD5MD5CRC32GzipFileChecksum

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/fs/MD5MD5CRC32GzipFileChecksum
	at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.apache.hadoop.hdfs.DFSClient.getFileChecksum(DFSClient.java:1720)
	at org.apache.hadoop.hdfs.DistributedFileSystem$30.doCall(DistributedFileSystem.java:1434)
	at org.apache.hadoop.hdfs.DistributedFileSystem$30.doCall(DistributedFileSystem.java:1431)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1442)
	at org.apache.hadoop.fs.shell.Display$Checksum.processPath(Display.java:197)
	at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:321)
	at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:293)
	at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:275)
	at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:259)
	at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
	at org.apache.hadoop.fs.shell.Command.run(Command.java:166)
	at org.apache.hadoop.fs.FsShell.run(FsShell.java:319)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
	at org.apache.hadoop.cli.util.FSCmdExecutor.execute(FSCmdExecutor.java:35)
	at org.apache.hadoop.cli.util.CommandExecutor.executeCommand(CommandExecutor.java:78)
	at org.apache.hadoop.cli.TestHDFSCLI.execute(TestHDFSCLI.java:100)


FAILED:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that failed

Stack Trace:
java.lang.AssertionError: One of the tests failed. See the Detailed results to identify the command that failed
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
	at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
	at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:87)


FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:45896,DS-54a18813-b120-4774-8e27-95f4151ceb84,DISK], DatanodeInfoWithStorage[127.0.0.1:36293,DS-31efbad7-412e-4059-bb8a-d90f003d2b63,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:45896,DS-54a18813-b120-4774-8e27-95f4151ceb84,DISK], DatanodeInfoWithStorage[127.0.0.1:36293,DS-31efbad7-412e-4059-bb8a-d90f003d2b63,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:45896,DS-54a18813-b120-4774-8e27-95f4151ceb84,DISK], DatanodeInfoWithStorage[127.0.0.1:36293,DS-31efbad7-412e-4059-bb8a-d90f003d2b63,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:45896,DS-54a18813-b120-4774-8e27-95f4151ceb84,DISK], DatanodeInfoWithStorage[127.0.0.1:36293,DS-31efbad7-412e-4059-bb8a-d90f003d2b63,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1166)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
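
For context: the error message itself quotes the client-side key that governs pipeline recovery. A minimal sketch of how a test client might relax the DEFAULT policy on a two- or three-node MiniDFSCluster, where no spare datanode exists to swap in; the property name and the NEVER value come from the Hadoop client configuration, everything else here is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;

    public class AppendClientConfig {
        public static Configuration relaxedPipelineConf() {
            Configuration conf = new HdfsConfiguration();
            // NEVER: keep writing on the surviving datanodes instead of
            // demanding a replacement. Reasonable for small test clusters;
            // not appropriate where production durability matters.
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
                     "NEVER");
            return conf;
        }
    }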


FAILED:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanodeSimple

Error Message:
test timed out after 100000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 100000 milliseconds
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:690)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:1098)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanodeSimple(TestBalancer.java:1040)
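
For context: the "100000 milliseconds" figure is a JUnit-level timeout on the test method; once the budget is exhausted, JUnit interrupts the thread sleeping inside Balancer.run() and reports the failure above. A hedged sketch of how such a limit is declared; the method body is illustrative, not the actual TestBalancer code:

    import org.junit.Test;

    public class BalancerTimeoutSketch {
        // JUnit enforces the timeout attribute per test method, which is
        // where the 100000 ms value in the report originates.
        @Test(timeout = 100000)
        public void testUnknownDatanodeSimple() throws Exception {
            // ... exercise the Balancer against a MiniDFSCluster ...
        }
    }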


FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
The stream is closed

Stack Trace:
java.io.IOException: The stream is closed
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
	at java.io.DataOutputStream.flush(DataOutputStream.java:123)
	at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
	at org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:877)
	at org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:726)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:721)
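
For context on the last failure: hflush() is the durability call that TestHFlush interrupts. A minimal sketch of ordinary hflush usage, assuming a reachable HDFS via the default Configuration; the path and payload are illustrative:

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HflushSketch {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // hflush() forces buffered bytes into the datanode pipeline so
            // new readers can see them; the file stays open for writing.
            try (FSDataOutputStream out = fs.create(new Path("/tmp/hflush-demo"))) {
                out.write("partial record".getBytes(StandardCharsets.UTF_8));
                out.hflush();
                // If the writer thread is interrupted around this point, the
                // DataStreamer may close its socket, and the final flush
                // during close() fails with "The stream is closed", as seen
                // in the stack trace above.
            }
        }
    }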