Posted to hdfs-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2013/11/22 14:22:32 UTC

Hadoop-Hdfs-trunk - Build # 1590 - Still Failing

See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1590/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 11786 lines...]
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:47:34.791s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [1.923s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:47:38.071s
[INFO] Finished at: Fri Nov 22 13:22:00 UTC 2013
[INFO] Final Memory: 36M/304M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-5285
Updating HADOOP-9114
Updating MAPREDUCE-5631
Updating HDFS-5543
Updating HADOOP-10103
Updating HDFS-5407
Updating YARN-1320
Updating HDFS-5473
Updating HDFS-5288
Updating HADOOP-10111
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
9 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testWaitForCachedReplicas

Error Message:
Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.

Stack Trace:
java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:267)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1764)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1679)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1191)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:666)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:335)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:317)
	at org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testWaitForCachedReplicas(TestCacheDirectives.java:710)
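
All nine failures in this report share one root cause: DataNode.startDataNode() throws a RuntimeException when dfs.datanode.max.locked.memory is configured above zero but the native Hadoop library (libhadoop) is not loadable on the build host, so mlock-backed caching cannot be used. As a minimal sketch of how a MiniDFSCluster-based test could avoid tripping that check (the class name, startCluster helper, and CACHE_CAPACITY value are hypothetical; DFSConfigKeys, NativeIO, and MiniDFSCluster are the classes already named in the trace and config key):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;
    import org.apache.hadoop.io.nativeio.NativeIO;

    public class LockedMemoryGuardSketch {
      // Hypothetical cache size, for illustration only.
      private static final long CACHE_CAPACITY = 64 * 1024;

      public static MiniDFSCluster startCluster() throws Exception {
        Configuration conf = new HdfsConfiguration();
        // DataNode.startDataNode() refuses to start when this key is > 0 and
        // native code is unavailable -- the RuntimeException shown above.
        // Only enable mlock-backed caching when libhadoop has actually loaded.
        if (NativeIO.isAvailable()) {
          conf.setLong(DFSConfigKeys.DFS_DATANODE_MAX_LOCKED_MEMORY_KEY,
              CACHE_CAPACITY);
        }
        return new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
      }
    }

An equivalent approach would be to skip these tests outright when the slave lacks libhadoop, e.g. with JUnit's Assume.assumeTrue(NativeIO.isAvailable()) in setUp, rather than silently dropping the locked-memory limit.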


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled

Error Message:
Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.

Stack Trace:
java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:267)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1764)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1679)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1191)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:666)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:335)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:317)
	at org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testAddingCacheDirectiveInfosWhenCachingIsDisabled(TestCacheDirectives.java:767)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testWaitForCachedReplicasInDirectory

Error Message:
Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.

Stack Trace:
java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:267)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1764)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1679)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1191)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:666)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:335)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:317)
	at org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testWaitForCachedReplicasInDirectory(TestCacheDirectives.java:813)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testReplicationFactor

Error Message:
Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.

Stack Trace:
java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:267)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1764)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1679)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1191)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:666)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:335)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:317)
	at org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives.testReplicationFactor(TestCacheDirectives.java:897)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.testCacheAndUncacheBlockSimple

Error Message:
Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.

Stack Trace:
java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:267)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1764)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1679)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1191)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:666)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:335)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:317)
	at org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.setUp(TestFsDatasetCache.java:113)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.testCacheAndUncacheBlockWithRetries

Error Message:
Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.

Stack Trace:
java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:267)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1764)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1679)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1191)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:666)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:335)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:317)
	at org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.setUp(TestFsDatasetCache.java:113)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.testUncacheUnknownBlock

Error Message:
Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.

Stack Trace:
java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:267)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1764)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1679)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1191)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:666)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:335)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:317)
	at org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.setUp(TestFsDatasetCache.java:113)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.testUncachingBlocksBeforeCachingFinishes

Error Message:
Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.

Stack Trace:
java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:267)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1764)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1679)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1191)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:666)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:335)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:317)
	at org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.setUp(TestFsDatasetCache.java:113)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.testFilesExceedMaxLockedMemory

Error Message:
Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.

Stack Trace:
java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) is greater than zero and native code is not available.
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:668)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:267)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1764)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1679)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1191)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:666)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:335)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:317)
	at org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.setUp(TestFsDatasetCache.java:113)