Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2016/02/23 00:42:20 UTC

Hadoop-Hdfs-trunk-Java8 - Build # 930 - Still Failing

See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/930/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5952 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:57 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:50 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.079 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:54 h
[INFO] Finished at: 2016-02-22T23:42:04+00:00
[INFO] Final Memory: 70M/481M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestDFSClientRetries.testClientDNProtocolTimeout

Error Message:
Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.NoClassDefFoundError: org/apache/hadoop/ipc/Client$Connection$2; Host Details : local host is: "asf901.gq1.ygridcore.net/67.195.81.145"; destination host is: "localhost":43274; 

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.NoClassDefFoundError: org/apache/hadoop/ipc/Client$Connection$2; Host Details : local host is: "asf901.gq1.ygridcore.net/67.195.81.145"; destination host is: "localhost":43274; 
	at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
	at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1510)
	at org.apache.hadoop.ipc.Client.call(Client.java:1425)
	at org.apache.hadoop.ipc.Client.call(Client.java:1386)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
	at com.sun.proxy.$Proxy26.getReplicaVisibleLength(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getReplicaVisibleLength(ClientDatanodeProtocolTranslatorPB.java:180)
	at org.apache.hadoop.hdfs.TestDFSClientRetries.testClientDNProtocolTimeout(TestDFSClientRetries.java:884)
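A NoClassDefFoundError for org/apache/hadoop/ipc/Client$Connection$2 means the forked test JVM had already loaded Client but could not find the class file for one of its anonymous inner classes at first use, which usually indicates the classpath changed underneath the running fork (consistent with the surefire fork error in the console above). A minimal, purely illustrative sketch of the failure mode, independent of Hadoop (class and file names are made up):

    // InnerClassDemo.java -- hypothetical demo, not from the Hadoop build.
    // Compile it, delete the anonymous inner class file, then run:
    //   javac InnerClassDemo.java
    //   rm 'InnerClassDemo$1.class'
    //   java InnerClassDemo
    public class InnerClassDemo {
        public static void main(String[] args) {
            // Instantiating the anonymous class forces the JVM to load
            // InnerClassDemo$1; with its .class file gone, this line throws
            // java.lang.NoClassDefFoundError: InnerClassDemo$1.
            Runnable r = new Runnable() {
                @Override
                public void run() {
                    System.out.println("ran");
                }
            };
            r.run();
        }
    }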


FAILED:  org.apache.hadoop.hdfs.TestDatanodeRegistration.testForcedRegistration

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:86)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertTrue(Assert.java:52)
	at org.apache.hadoop.hdfs.TestDatanodeRegistration.testForcedRegistration(TestDatanodeRegistration.java:365)
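The "null" error message is a JUnit artifact rather than information about the failure: Assert.assertTrue(condition) delegates to assertTrue(null, condition), which calls fail(null), so the thrown AssertionError carries a null message and the report renders it as "null" (matching the Assert.java:86/41/52 frames above). A small hypothetical test showing the difference (names and conditions are made up):

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class AssertMessageDemoTest {
        @Test
        public void withoutMessage() {
            // assertTrue(cond) -> assertTrue(null, cond) -> fail(null):
            // the report shows "Error Message: null", as above.
            assertTrue(1 > 2);
        }

        @Test
        public void withMessage() {
            // Supplying a message makes the report self-explanatory.
            assertTrue("datanode should have re-registered", 1 > 2);
        }
    }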


FAILED:  org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout

Error Message:
write timedout too late in 1297 ms.

Stack Trace:
java.io.IOException: write timedout too late in 1297 ms.
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.OutputStream.write(OutputStream.java:75)
	at org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1040)
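This is a timing-sensitive test: it writes through a socket stream configured with a short write timeout and then checks that the timeout fired within an expected window; here it fired at 1297 ms, later than the test tolerates, a typical flakiness pattern on a loaded CI host. A hedged, self-contained sketch of the underlying mechanism using Hadoop's timeout-aware stream (the SocketOutputStream class in the trace above); timeout and buffer sizes are illustrative, not the test's constants:

    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.net.SocketTimeoutException;
    import java.nio.channels.SocketChannel;
    import org.apache.hadoop.net.NetUtils;

    public class WriteTimeoutDemo {
        public static void main(String[] args) throws Exception {
            final int timeoutMs = 1000;  // illustrative write timeout
            try (ServerSocket server = new ServerSocket(0);
                 SocketChannel ch = SocketChannel.open()) {
                ch.connect(new InetSocketAddress("localhost", server.getLocalPort()));
                Socket peer = server.accept();  // accept but never read, so the
                                                // send/receive buffers fill up
                // NetUtils returns a timeout-enforcing SocketOutputStream for
                // channel-backed sockets.
                OutputStream out = NetUtils.getOutputStream(ch.socket(), timeoutMs);
                long start = System.nanoTime();
                try {
                    out.write(new byte[64 * 1024 * 1024]);  // big enough to block
                } catch (SocketTimeoutException expected) {
                    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    // On a loaded host the selector can wake well after
                    // timeoutMs; an assertion bounding elapsedMs is what
                    // produced the "timedout too late" failure above.
                    System.out.println("write timed out after " + elapsedMs + " ms");
                }
                peer.close();
            }
        }
    }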


FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:45120,DS-e6c634b4-838c-4c79-b115-46b5550a554f,DISK], DatanodeInfoWithStorage[127.0.0.1:50835,DS-8e7ab495-5814-4f63-9ac2-d8ca695a59e2,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:45120,DS-e6c634b4-838c-4c79-b115-46b5550a554f,DISK], DatanodeInfoWithStorage[127.0.0.1:50835,DS-8e7ab495-5814-4f63-9ac2-d8ca695a59e2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:45120,DS-e6c634b4-838c-4c79-b115-46b5550a554f,DISK], DatanodeInfoWithStorage[127.0.0.1:50835,DS-8e7ab495-5814-4f63-9ac2-d8ca695a59e2,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:45120,DS-e6c634b4-838c-4c79-b115-46b5550a554f,DISK], DatanodeInfoWithStorage[127.0.0.1:50835,DS-8e7ab495-5814-4f63-9ac2-d8ca695a59e2,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1166)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
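TestFileAppend.testMultipleAppends failed while rebuilding a write pipeline: with only the two datanodes listed, the DEFAULT replace-datanode-on-failure policy had no spare node to swap in. The error text itself names the client-side knob. A hedged sketch of how a client could set it (the NEVER/DEFAULT/ALWAYS policy values are standard; the best-effort flag is an assumption about the running version, and the path is made up):

    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendPolicyDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Property named in the error message; NEVER skips datanode
            // replacement, which suits very small (2-3 node) clusters.
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
                     "NEVER");
            // Alternative: keep DEFAULT but continue with the surviving
            // pipeline when no replacement exists (assumes the running
            // version supports the best-effort flag):
            // conf.setBoolean(
            //     "dfs.client.block.write.replace-datanode-on-failure.best-effort",
            //     true);

            FileSystem fs = FileSystem.get(conf);
            try (OutputStream out = fs.append(new Path("/tmp/append-demo.txt"))) {
                out.write("another line\n".getBytes("UTF-8"));
            }
        }
    }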