Posted to yarn-dev@hadoop.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2014/03/04 00:23:06 UTC
Failed: YARN-1729 PreCommit Build #3232
Jira: https://issues.apache.org/jira/browse/YARN-1729
Build: https://builds.apache.org/job/PreCommit-YARN-Build/3232/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2907 lines...]
kill: No such process
NOP
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12632368/YARN-1729.7.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified test files.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 javadoc. There were no new javadoc warning messages.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
-1 release audit. The applied patch generated 1 release audit warning.
+1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice.
+1 contrib tests. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-YARN-Build/3232//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-YARN-Build/3232//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3232//console
This message is automatically generated.
======================================================================
======================================================================
Adding comment to Jira.
======================================================================
======================================================================
Comment added.
fd40e19cd0a4371f14cf549a5ae5858b7e2299a9 logged out
======================================================================
======================================================================
Finished build.
======================================================================
======================================================================
Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed
Success: YARN-1408 PreCommit Build #3238
Posted by Apache Jenkins Server <je...@builds.apache.org>.
Jira: https://issues.apache.org/jira/browse/YARN-1408
Build: https://builds.apache.org/job/PreCommit-YARN-Build/3238/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3043 lines...]
/bin/kill -9 29279
kill: No such process
NOP
+1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12629000/Yarn-1408.4.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 1 new or modified test file.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
+1 javadoc. There were no new javadoc warning messages.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.
+1 contrib tests. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-YARN-Build/3238//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3238//console
This message is automatically generated.
======================================================================
======================================================================
Adding comment to Jira.
======================================================================
======================================================================
Comment added.
3b564bc430f02a0b1e56cf1a66357e93ffc43d62 logged out
======================================================================
======================================================================
Finished build.
======================================================================
======================================================================
Archiving artifacts
Description set: YARN-1408
Recording test results
Email was triggered for: Success
Sending email for trigger: Success
###################################################################################
############################## FAILED TESTS (if any) ##############################
All tests passed
Failed: YARN-986 PreCommit Build #3237
Posted by Apache Jenkins Server <je...@builds.apache.org>.
Jira: https://issues.apache.org/jira/browse/YARN-986
Build: https://builds.apache.org/job/PreCommit-YARN-Build/3237/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 4903 lines...]
org.apache.hadoop.mapreduce.TestMRJobClient
org.apache.hadoop.mapred.TestMerge
org.apache.hadoop.mapred.TestReduceFetch
org.apache.hadoop.mapred.TestLazyOutput
org.apache.hadoop.mapred.TestReduceFetchFromPartialMem
org.apache.hadoop.mapreduce.v2.TestMRJobs
org.apache.hadoop.mapred.TestMRCJCFileInputFormat
org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers
org.apache.hadoop.mapred.TestJobSysDirWithDFS
org.apache.hadoop.mapreduce.security.TestMRCredentials
org.apache.hadoop.mapreduce.TestMapReduceLazyOutput
org.apache.hadoop.mapreduce.lib.join.TestJoinProperties
org.apache.hadoop.ipc.TestMRCJCSocketFactory
org.apache.hadoop.mapred.TestMiniMRClasspath
org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle
org.apache.hadoop.conf.TestNoDefaultsJobConf
org.apache.hadoop.mapred.TestMiniMRChildTask
org.apache.hadoop.mapreduce.lib.input.TestDelegatingInputFormat
org.apache.hadoop.mapred.join.TestDatamerge
org.apache.hadoop.mapred.lib.TestDelegatingInputFormat
org.apache.hadoop.fs.TestFileSystem
org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge
org.apache.hadoop.mapreduce.security.TestBinaryTokenFile
org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore
+1 contrib tests. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-YARN-Build/3237//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-YARN-Build/3237//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3237//console
This message is automatically generated.
======================================================================
======================================================================
Adding comment to Jira.
======================================================================
======================================================================
Comment added.
9f5ff3894f93dd4d19b893ae286fa2e2bbfb81d6 logged out
======================================================================
======================================================================
Finished build.
======================================================================
======================================================================
Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
45 tests failed.
FAILED: org.apache.hadoop.conf.TestNoDefaultsJobConf.testNoDefaults
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.HadoopTestCase.setUp(HadoopTestCase.java:149)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
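[Editor's note: every failure in this run is the same environment error — java.lang.UnsupportedOperationException: libhadoop cannot be loaded — thrown when DomainSocketWatcher initializes native short-circuit-read support inside MiniDFSCluster. That typically indicates the forked surefire test JVM could not resolve libhadoop.so via java.library.path on the build host, not a defect in the patch under test. As a hedged, illustrative sketch only (the path below is a placeholder for wherever the native libraries were built, and this is not the project's canonical build configuration), a java.library.path can be supplied to the forked test JVM through the surefire plugin:]

```xml
<!-- Hypothetical pom.xml fragment: point the forked surefire JVM at the
     directory containing libhadoop.so. The path is a placeholder and must
     match where the native build actually placed the library. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <argLine>-Djava.library.path=/path/to/hadoop-common/target/native/lib</argLine>
  </configuration>
</plugin>
```

[Tests that hard-require the native library can alternatively skip themselves when it is absent, e.g. via JUnit's Assume combined with org.apache.hadoop.util.NativeCodeLoader.isNativeCodeLoaded().]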
FAILED: org.apache.hadoop.fs.TestDFSIO.org.apache.hadoop.fs.TestDFSIO
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.fs.TestDFSIO.beforeClass(TestDFSIO.java:205)
FAILED: org.apache.hadoop.fs.TestFileSystem.testFsCache
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:564)
at org.apache.hadoop.fs.TestFileSystem.runTestCache(TestFileSystem.java:526)
at org.apache.hadoop.fs.TestFileSystem.testFsCache(TestFileSystem.java:512)
FAILED: org.apache.hadoop.ipc.TestMRCJCSocketFactory.testSocketFactory
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.ipc.TestMRCJCSocketFactory.testSocketFactory(TestMRCJCSocketFactory.java:52)
FAILED: org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testMapReduce
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
FAILED: org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testMapReduceRestarting
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
FAILED: org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testDFSRestart
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
FAILED: org.apache.hadoop.mapred.TestClusterMapReduceTestCase.testMRConfig
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
FAILED: org.apache.hadoop.mapred.TestJobName.testComplexName
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
FAILED: org.apache.hadoop.mapred.TestJobName.testComplexNameWithRegex
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
FAILED: org.apache.hadoop.mapred.TestJobSysDirWithDFS.testWithDFS
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.TestJobSysDirWithDFS.testWithDFS(TestJobSysDirWithDFS.java:128)
FAILED: org.apache.hadoop.mapred.TestLazyOutput.testLazyOutput
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.TestLazyOutput.testLazyOutput(TestLazyOutput.java:144)
FAILED: org.apache.hadoop.mapred.TestMRCJCFileInputFormat.testLocality
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:530)
at org.apache.hadoop.mapred.TestMRCJCFileInputFormat.newDFSCluster(TestMRCJCFileInputFormat.java:47)
at org.apache.hadoop.mapred.TestMRCJCFileInputFormat.testLocality(TestMRCJCFileInputFormat.java:56)
FAILED: org.apache.hadoop.mapred.TestMRCJCFileInputFormat.testNumInputs
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:530)
at org.apache.hadoop.mapred.TestMRCJCFileInputFormat.newDFSCluster(TestMRCJCFileInputFormat.java:47)
at org.apache.hadoop.mapred.TestMRCJCFileInputFormat.testNumInputs(TestMRCJCFileInputFormat.java:115)
FAILED: org.apache.hadoop.mapred.TestMRCJCFileInputFormat.testMultiLevelInput
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:530)
at org.apache.hadoop.mapred.TestMRCJCFileInputFormat.testMultiLevelInput(TestMRCJCFileInputFormat.java:165)
FAILED: org.apache.hadoop.mapred.TestMerge.testMerge
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.TestMerge.testMerge(TestMerge.java:79)
FAILED: org.apache.hadoop.mapred.TestMiniMRChildTask.org.apache.hadoop.mapred.TestMiniMRChildTask
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.TestMiniMRChildTask.setup(TestMiniMRChildTask.java:326)
FAILED: org.apache.hadoop.mapred.TestMiniMRClasspath.testClassPath
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.TestMiniMRClasspath.testClassPath(TestMiniMRClasspath.java:172)
FAILED: org.apache.hadoop.mapred.TestMiniMRClasspath.testExternalWritable
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.TestMiniMRClasspath.testExternalWritable(TestMiniMRClasspath.java:204)
FAILED: org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.testDistinctUsers
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.setUp(TestMiniMRWithDFSWithDistinctUsers.java:78)
FAILED: org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.testMultipleSpills
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.setUp(TestMiniMRWithDFSWithDistinctUsers.java:78)
FAILED: org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.org.apache.hadoop.mapred.TestReduceFetchFromPartialMem
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.setUp(TestReduceFetchFromPartialMem.java:60)
FAILED: org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.org.apache.hadoop.mapred.TestReduceFetchFromPartialMem
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.TestReduceFetchFromPartialMem$1.setUp(TestReduceFetchFromPartialMem.java:60)
FAILED: org.apache.hadoop.mapred.TestSpecialCharactersInOutputPath.testJobWithDFS
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.TestSpecialCharactersInOutputPath.testJobWithDFS(TestSpecialCharactersInOutputPath.java:109)
FAILED: org.apache.hadoop.mapred.join.TestDatamerge$1.org.apache.hadoop.mapred.join.TestDatamerge
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapred.join.TestDatamerge$1.setUp(TestDatamerge.java:65)
FAILED: org.apache.hadoop.mapred.lib.TestDelegatingInputFormat.testSplitting
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:530)
at org.apache.hadoop.mapred.lib.TestDelegatingInputFormat.testSplitting(TestDelegatingInputFormat.java:42)
FAILED: org.apache.hadoop.mapreduce.TestMRJobClient.testJobSubmissionSpecsAndFiles
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
FAILED: org.apache.hadoop.mapreduce.TestMRJobClient.testJobClient
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.startCluster(ClusterMapReduceTestCase.java:81)
at org.apache.hadoop.mapred.ClusterMapReduceTestCase.setUp(ClusterMapReduceTestCase.java:56)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
FAILED: org.apache.hadoop.mapreduce.TestMapReduceLazyOutput.testLazyOutput
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapreduce.TestMapReduceLazyOutput.testLazyOutput(TestMapReduceLazyOutput.java:134)
FAILED: org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacement
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:530)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacement(TestCombineFileInputFormat.java:316)
FAILED: org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacementForCompressedFiles
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:530)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacementForCompressedFiles(TestCombineFileInputFormat.java:858)
FAILED: org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testMissingBlocks
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:530)
at org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testMissingBlocks(TestCombineFileInputFormat.java:1200)
FAILED: org.apache.hadoop.mapreduce.lib.input.TestDelegatingInputFormat.testSplitting
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:530)
at org.apache.hadoop.mapreduce.lib.input.TestDelegatingInputFormat.testSplitting(TestDelegatingInputFormat.java:40)
FAILED: org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge$1.org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapreduce.lib.join.TestJoinDatamerge$1.setUp(TestJoinDatamerge.java:48)
FAILED: org.apache.hadoop.mapreduce.lib.join.TestJoinProperties$1.org.apache.hadoop.mapreduce.lib.join.TestJoinProperties
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapreduce.lib.join.TestJoinProperties$1.setUp(TestJoinProperties.java:53)
FAILED: org.apache.hadoop.mapreduce.security.TestBinaryTokenFile.org.apache.hadoop.mapreduce.security.TestBinaryTokenFile
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.mapreduce.security.TestBinaryTokenFile.setUp(TestBinaryTokenFile.java:204)
FAILED: org.apache.hadoop.mapreduce.security.TestMRCredentials.org.apache.hadoop.mapreduce.security.TestMRCredentials
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapreduce.security.TestMRCredentials.setUp(TestMRCredentials.java:61)
FAILED: org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithClientCerts
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.startCluster(TestEncryptedShuffle.java:95)
at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithCerts(TestEncryptedShuffle.java:138)
at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithClientCerts(TestEncryptedShuffle.java:167)
FAILED: org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithoutClientCerts
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.startCluster(TestEncryptedShuffle.java:95)
at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithCerts(TestEncryptedShuffle.java:138)
at org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle.encryptedShuffleWithoutClientCerts(TestEncryptedShuffle.java:172)
FAILED: org.apache.hadoop.mapreduce.v2.TestMRJobs.org.apache.hadoop.mapreduce.v2.TestMRJobs
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.mapreduce.v2.TestMRJobs.setup(TestMRJobs.java:125)
FAILED: org.apache.hadoop.mapreduce.v2.TestMiniMRProxyUser.testValidProxyUser
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapreduce.v2.TestMiniMRProxyUser.setUp(TestMiniMRProxyUser.java:73)
FAILED: org.apache.hadoop.mapreduce.v2.TestNonExistentJob.testGetInvalidJob
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:638)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:507)
at org.apache.hadoop.mapreduce.v2.TestNonExistentJob.setUp(TestNonExistentJob.java:60)
FAILED: org.apache.hadoop.mapreduce.v2.TestUberAM.org.apache.hadoop.mapreduce.v2.TestUberAM
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.mapreduce.v2.TestMRJobs.setup(TestMRJobs.java:125)
at org.apache.hadoop.mapreduce.v2.TestUberAM.setup(TestUberAM.java:47)
FAILED: org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStore
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStore(TestFSRMStateStore.java:134)
FAILED: org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStoreClientRetry
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStoreClientRetry(TestFSRMStateStore.java:168)
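Every failure above shares the same root cause: `DomainSocketWatcher` throws `UnsupportedOperationException` because the native `libhadoop.so` is not on the JVM's `java.library.path` on the build slave. On a box with a Hadoop install, `hadoop checknative -a` reports whether the native library loads. As a minimal, hedged sketch of the same lookup the JVM performs (the helper name `find_native_lib` is hypothetical, not part of Hadoop):

```shell
# Hypothetical helper: scan a colon-separated path list for a native
# library, mimicking how the JVM resolves entries on java.library.path.
# Prints the first match and returns 0; returns 1 if nothing is found.
find_native_lib() {
  local name="$1" paths="$2"
  local IFS=':'
  for dir in $paths; do
    if [ -f "$dir/$name" ]; then
      echo "$dir/$name"
      return 0
    fi
  done
  return 1
}

# Example (paths are assumptions, adjust for the actual slave layout):
# find_native_lib libhadoop.so "$HADOOP_HOME/lib/native:/usr/lib"
```

If the lookup fails on the slave, the tests that construct a `MiniDFSCluster` with short-circuit reads enabled will keep failing regardless of the patch under test.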
Failed: YARN-1752 PreCommit Build #3236
Posted by Apache Jenkins Server <je...@builds.apache.org>.
Jira: https://issues.apache.org/jira/browse/YARN-1752
Build: https://builds.apache.org/job/PreCommit-YARN-Build/3236/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3187 lines...]
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12632436/YARN-1752.4.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 4 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:red}-1 release audit{color}. The applied patch generated 1 release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-YARN-Build/3236//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-YARN-Build/3236//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3236//console
This message is automatically generated.
======================================================================
======================================================================
Adding comment to Jira.
======================================================================
======================================================================
Comment added.
0d92c3ae4698527ed5a392400da68b24515ab00b logged out
======================================================================
======================================================================
Finished build.
======================================================================
======================================================================
Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED: org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStore
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStore(TestFSRMStateStore.java:134)
FAILED: org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStoreClientRetry
Error Message:
libhadoop cannot be loaded.
Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStoreClientRetry(TestFSRMStateStore.java:168)
Failed: YARN-1768 PreCommit Build #3235
Posted by Apache Jenkins Server <je...@builds.apache.org>.
Jira: https://issues.apache.org/jira/browse/YARN-1768
Build: https://builds.apache.org/job/PreCommit-YARN-Build/3235/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 101 lines...]
======================================================================
======================================================================
Pre-build trunk to verify trunk stability and javac warnings
======================================================================
======================================================================
Compiling /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk
/home/jenkins/tools/maven/latest/bin/mvn clean test -DskipTests -DHadoopPatchProcess -Ptest-patch > /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/patchprocess/trunkJavacWarnings.txt 2>&1
Trunk compilation is broken?
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12632431/YARN-1768.3.patch
against trunk revision .
{color:red}-1 patch{color}. Trunk compilation may be broken.
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3235//console
This message is automatically generated.
======================================================================
======================================================================
Adding comment to Jira.
======================================================================
======================================================================
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# An error report file with more information is saved as:
# /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hs_err_pid2169.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# An error report file with more information is saved as:
# /home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build/trunk/hs_err_pid2179.log
======================================================================
======================================================================
Finished build.
======================================================================
======================================================================
Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
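The "Cannot create GC thread. Out of system resources." aborts above are typically not heap exhaustion: the JVM needs one native thread per GC worker, and a low per-user process/thread limit or virtual-memory cap on a shared Jenkins slave blocks thread creation before the heap is ever touched. A minimal sketch for inspecting the usual suspects (the function name `print_jvm_limits` is illustrative, not an existing tool):

```shell
# Report the ulimits that most often explain "Cannot create GC thread"
# on a shared build host: per-user process/thread count and virtual
# memory. Values print as a number or the word "unlimited".
print_jvm_limits() {
  echo "max user processes: $(ulimit -u)"
  echo "max virtual memory (KB): $(ulimit -v)"
}

print_jvm_limits
```

When `ulimit -u` is low relative to the number of concurrent Maven/surefire JVMs on the slave, the `hs_err_pid*.log` files referenced above are the expected symptom.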
Failed: YARN-1774 PreCommit Build #3234
Posted by Apache Jenkins Server <je...@builds.apache.org>.
Jira: https://issues.apache.org/jira/browse/YARN-1774
Build: https://builds.apache.org/job/PreCommit-YARN-Build/3234/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3078 lines...]
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12632380/YARN-1774.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:red}-1 release audit{color}. The applied patch generated 1 release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore
The following test timeouts occurred in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-YARN-Build/3234//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-YARN-Build/3234//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3234//console
This message is automatically generated.
======================================================================
======================================================================
Adding comment to Jira.
======================================================================
======================================================================
Comment added.
5bea653020adac070f20d13803817ecfcf128072 logged out
======================================================================
======================================================================
Finished build.
======================================================================
======================================================================
Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStore

Error Message:
libhadoop cannot be loaded.

Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
    at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
    at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
    at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
    at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
    at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
    at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
    at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
    at org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStore(TestFSRMStateStore.java:134)
FAILED:  org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStoreClientRetry

Error Message:
libhadoop cannot be loaded.

Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
    at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
    at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
    at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
    at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
    at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
    at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
    at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
    at org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStoreClientRetry(TestFSRMStateStore.java:168)
======================================================================
Failed: YARN-1766 PreCommit Build #3233
Posted by Apache Jenkins Server <je...@builds.apache.org>.
Jira: https://issues.apache.org/jira/browse/YARN-1766
Build: https://builds.apache.org/job/PreCommit-YARN-Build/3233/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 3142 lines...]
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12632378/YARN-1766.2.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test file.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:red}-1 release audit{color}. The applied patch generated 1 release audit warning.
{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

    org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results: https://builds.apache.org/job/PreCommit-YARN-Build/3233//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-YARN-Build/3233//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3233//console
This message is automatically generated.
======================================================================
======================================================================
Adding comment to Jira.
======================================================================
======================================================================
Comment added.
e7d11a26027606edbc3c934ac293d197cb1d3ffe logged out
======================================================================
======================================================================
Finished build.
======================================================================
======================================================================
Build step 'Execute shell' marked build as failure
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStore

Error Message:
libhadoop cannot be loaded.

Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
    at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
    at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
    at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
    at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
    at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
    at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
    at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
    at org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStore(TestFSRMStateStore.java:134)
FAILED:  org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStoreClientRetry

Error Message:
libhadoop cannot be loaded.

Stack Trace:
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
    at org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
    at org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
    at org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
    at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
    at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:497)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:488)
    at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1967)
    at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1989)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1303)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:718)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
    at org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore.testFSRMStateStoreClientRetry(TestFSRMStateStore.java:168)