Posted to user@hadoop.apache.org by "Arthur.hk.chan@gmail.com" <ar...@gmail.com> on 2014/08/03 12:32:28 UTC

Compile Hadoop 2.4.1 (with Tests and Without Tests)

Hi,

I am trying to compile Hadoop 2.4.1. 

If I run "mvm clean install -DskipTests", the compilation is GOOD, 
However, if I run "mvn clean install”, i.e. didn’t skip the Tests, it returned “Failures” 	

Can anyone please advise what should be prepared before unit tests in compilation?  From the error log, e.g. I found it used 192.168.12.37, but this was not my local IPs, should I change some configure file? any ideas?
On the other hand, can I use the the compiled code from GOOD compilation and just ignore the failed tests?

Please advise!!

Regards
Arthur




Compilation results:
Running "mvn clean install -DskipTests", the compilation is GOOD:
=====
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main ................................ SUCCESS [1.756s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [0.586s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [1.282s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [0.257s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.136s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [1.189s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [0.837s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [0.835s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [0.614s]
[INFO] Apache Hadoop Common .............................. SUCCESS [9.020s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [9.341s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.013s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [1:11.329s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [1.943s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [8.236s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [0.181s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.014s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.045s]
[INFO] hadoop-yarn-api ................................... SUCCESS [3.080s]
[INFO] hadoop-yarn-common ................................ SUCCESS [3.995s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.036s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [0.406s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [7.874s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [0.185s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [2.766s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [0.975s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.260s]
[INFO] hadoop-yarn-client ................................ SUCCESS [0.401s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.012s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [0.194s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [0.157s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.028s]
[INFO] hadoop-yarn-project ............................... SUCCESS [0.030s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.027s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [1.384s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [1.167s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [0.151s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [0.692s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [0.521s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [9.581s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [0.105s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [0.288s]
[INFO] hadoop-mapreduce .................................. SUCCESS [0.031s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [2.485s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [14.204s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [0.147s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [0.283s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [0.266s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [0.109s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [0.173s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [0.013s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [0.292s]
[INFO] Apache Hadoop Client .............................. SUCCESS [0.093s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.052s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [1.123s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [0.109s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.012s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [0.038s]
[INFO] ------------------------------------------------------------------------




However, if I run "mvn clean install", i.e. with the tests, it returned "Failures":
====
Running org.apache.hadoop.fs.viewfs.TestChRootedFs
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.626 sec - in org.apache.hadoop.fs.viewfs.TestChRootedFs
Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
Tests run: 49, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.099 sec <<< FAILURE! - in org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem)  Time elapsed: 0.028 sec  <<< FAILURE!
java.lang.AssertionError: Should throw IOException
	at org.junit.Assert.fail(Assert.java:93)
	at org.apache.hadoop.fs.FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:289)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.282 sec - in org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.162 sec - in org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
Running org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.434 sec - in org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
Running org.apache.hadoop.fs.TestFileStatus
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec - in org.apache.hadoop.fs.TestFileStatus
Running org.apache.hadoop.fs.TestFileContextResolveAfs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec - in org.apache.hadoop.fs.TestFileContextResolveAfs
Running org.apache.hadoop.fs.TestGlobPattern
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.034 sec - in org.apache.hadoop.fs.TestGlobPattern
Running org.apache.hadoop.fs.s3native.TestJets3tNativeFileSystemStore
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.136 sec - in org.apache.hadoop.fs.s3native.TestJets3tNativeFileSystemStore
Running org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.498 sec - in org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
Running org.apache.hadoop.fs.TestPath
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.418 sec - in org.apache.hadoop.fs.TestPath
Running org.apache.hadoop.fs.TestTrash
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.588 sec - in org.apache.hadoop.fs.TestTrash
Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.37 sec - in org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
Running org.apache.hadoop.fs.TestFileContextDeleteOnExit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec - in org.apache.hadoop.fs.TestFileContextDeleteOnExit
Running org.apache.hadoop.fs.TestAfsCheckPath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec - in org.apache.hadoop.fs.TestAfsCheckPath
Running org.apache.hadoop.fs.TestLocalFileSystem
Tests run: 18, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 0.601 sec <<< FAILURE! - in org.apache.hadoop.fs.TestLocalFileSystem
testReportChecksumFailure(org.apache.hadoop.fs.TestLocalFileSystem)  Time elapsed: 0.074 sec  <<< FAILURE!
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:92)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.junit.Assert.assertTrue(Assert.java:54)
	at org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure(TestLocalFileSystem.java:356)

Running org.apache.hadoop.fs.permission.TestAcl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.039 sec - in org.apache.hadoop.fs.permission.TestAcl
Running org.apache.hadoop.fs.permission.TestFsPermission
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec - in org.apache.hadoop.fs.permission.TestFsPermission
Running org.apache.hadoop.fs.TestFileSystemCanonicalization
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec - in org.apache.hadoop.fs.TestFileSystemCanonicalization
Running org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
Tests run: 49, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.775 sec <<< FAILURE! - in org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem)  Time elapsed: 0.012 sec  <<< FAILURE!
java.lang.AssertionError: Should throw IOException
	at org.junit.Assert.fail(Assert.java:93)
	at org.apache.hadoop.fs.FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:289)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

Running org.apache.hadoop.fs.TestDFVariations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec - in org.apache.hadoop.fs.TestDFVariations
Running org.apache.hadoop.fs.TestDelegationTokenRenewer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.323 sec - in org.apache.hadoop.fs.TestDelegationTokenRenewer
Running org.apache.hadoop.fs.TestFileSystemInitialization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec - in org.apache.hadoop.fs.TestFileSystemInitialization
Running org.apache.hadoop.fs.TestGetFileBlockLocations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.331 sec - in org.apache.hadoop.fs.TestGetFileBlockLocations
Running org.apache.hadoop.fs.TestFileSystemCaching
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.463 sec - in org.apache.hadoop.fs.TestFileSystemCaching
Running org.apache.hadoop.fs.TestChecksumFileSystem
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.431 sec - in org.apache.hadoop.fs.TestChecksumFileSystem
Running org.apache.hadoop.fs.TestLocalFsFCStatistics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.291 sec - in org.apache.hadoop.fs.TestLocalFsFCStatistics
Running org.apache.hadoop.fs.TestLocalFileSystemPermission
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.285 sec - in org.apache.hadoop.fs.TestLocalFileSystemPermission
Running org.apache.hadoop.fs.TestFcLocalFsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.359 sec - in org.apache.hadoop.fs.TestFcLocalFsPermission
Running org.apache.hadoop.fs.TestDU
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.243 sec - in org.apache.hadoop.fs.TestDU
Running org.apache.hadoop.fs.s3.TestINode
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec - in org.apache.hadoop.fs.s3.TestINode
Running org.apache.hadoop.fs.s3.TestS3FileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.241 sec - in org.apache.hadoop.fs.s3.TestS3FileSystem
Running org.apache.hadoop.fs.s3.TestS3Credentials
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.061 sec - in org.apache.hadoop.fs.s3.TestS3Credentials
Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.531 sec - in org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
Running org.apache.hadoop.fs.TestFileSystemTokens
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.276 sec - in org.apache.hadoop.fs.TestFileSystemTokens
Running org.apache.hadoop.metrics.ganglia.TestGangliaContext
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.075 sec - in org.apache.hadoop.metrics.ganglia.TestGangliaContext
Running org.apache.hadoop.metrics.TestMetricsServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec - in org.apache.hadoop.metrics.TestMetricsServlet
Running org.apache.hadoop.metrics.spi.TestOutputRecord
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.016 sec - in org.apache.hadoop.metrics.spi.TestOutputRecord
Running org.apache.hadoop.io.TestVersionedWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 sec - in org.apache.hadoop.io.TestVersionedWritable
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec - in org.apache.hadoop.io.TestEnumSetWritable
Running org.apache.hadoop.io.TestUTF8
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.35 sec - in org.apache.hadoop.io.TestUTF8
Running org.apache.hadoop.io.TestGenericWritable
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.215 sec - in org.apache.hadoop.io.TestGenericWritable
Running org.apache.hadoop.io.TestBoundedByteArrayOutputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec - in org.apache.hadoop.io.TestBoundedByteArrayOutputStream
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.069 sec - in org.apache.hadoop.io.retry.TestRetryProxy
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.551 sec - in org.apache.hadoop.io.retry.TestFailoverProxy
Running org.apache.hadoop.io.TestArrayWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec - in org.apache.hadoop.io.TestArrayWritable
Running org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
Tests run: 13, Failures: 0, Errors: 0, Skipped: 13, Time elapsed: 0.086 sec - in org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
Running org.apache.hadoop.io.compress.TestCodec
Tests run: 24, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 62.132 sec - in org.apache.hadoop.io.compress.TestCodec
Running org.apache.hadoop.io.compress.lz4.TestLz4CompressorDecompressor
Tests run: 12, Failures: 0, Errors: 0, Skipped: 12, Time elapsed: 0.08 sec - in org.apache.hadoop.io.compress.lz4.TestLz4CompressorDecompressor
Running org.apache.hadoop.io.compress.TestCompressorDecompressor
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec - in org.apache.hadoop.io.compress.TestCompressorDecompressor
Running org.apache.hadoop.io.compress.TestCodecFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec - in org.apache.hadoop.io.compress.TestCodecFactory
Running org.apache.hadoop.io.compress.TestBlockDecompressorStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.032 sec - in org.apache.hadoop.io.compress.TestBlockDecompressorStream
Running org.apache.hadoop.io.compress.TestCodecPool
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in org.apache.hadoop.io.compress.TestCodecPool
Running org.apache.hadoop.io.compress.zlib.TestZlibCompressorDecompressor
Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.086 sec - in org.apache.hadoop.io.compress.zlib.TestZlibCompressorDecompressor
Running org.apache.hadoop.io.TestSecureIOUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.308 sec - in org.apache.hadoop.io.TestSecureIOUtils
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.033 sec - in org.apache.hadoop.io.TestBooleanWritable
Running org.apache.hadoop.io.TestMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec - in org.apache.hadoop.io.TestMapWritable
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec - in org.apache.hadoop.io.TestTextNonUTF8
Running org.apache.hadoop.io.TestWritableUtils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec - in org.apache.hadoop.io.TestWritableUtils
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec - in org.apache.hadoop.io.TestObjectWritableProtos
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.579 sec - in org.apache.hadoop.io.TestBloomMapFile
Running org.apache.hadoop.io.TestSortedMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec - in org.apache.hadoop.io.TestSortedMapWritable
Running org.apache.hadoop.io.TestDefaultStringifier
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.153 sec - in org.apache.hadoop.io.TestDefaultStringifier
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec - in org.apache.hadoop.io.TestWritableName
Running org.apache.hadoop.io.TestSetFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.067 sec - in org.apache.hadoop.io.TestSetFile
Running org.apache.hadoop.io.TestMD5Hash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec - in org.apache.hadoop.io.TestMD5Hash
Running org.apache.hadoop.io.TestSequenceFileSerialization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.297 sec - in org.apache.hadoop.io.TestSequenceFileSerialization
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.179 sec - in org.apache.hadoop.io.TestDataByteBuffers
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.398 sec - in org.apache.hadoop.io.TestSequenceFileSync
Running org.apache.hadoop.io.TestArrayFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.286 sec - in org.apache.hadoop.io.TestArrayFile
Running org.apache.hadoop.io.TestArrayPrimitiveWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.054 sec - in org.apache.hadoop.io.TestArrayPrimitiveWritable
Running org.apache.hadoop.io.nativeio.TestSharedFileDescriptorFactory
Tests run: 3, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 0.074 sec - in org.apache.hadoop.io.nativeio.TestSharedFileDescriptorFactory
Running org.apache.hadoop.io.nativeio.TestNativeIO
Tests run: 17, Failures: 0, Errors: 0, Skipped: 17, Time elapsed: 0.088 sec - in org.apache.hadoop.io.nativeio.TestNativeIO
Running org.apache.hadoop.io.TestText
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.327 sec - in org.apache.hadoop.io.TestText
Running org.apache.hadoop.io.file.tfile.TestTFileByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.159 sec - in org.apache.hadoop.io.file.tfile.TestTFileByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.347 sec - in org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileComparators
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec - in org.apache.hadoop.io.file.tfile.TestTFileComparators
Running org.apache.hadoop.io.file.tfile.TestTFileSplit
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.068 sec - in org.apache.hadoop.io.file.tfile.TestTFileSplit
Running org.apache.hadoop.io.file.tfile.TestTFileStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.834 sec - in org.apache.hadoop.io.file.tfile.TestTFileStreams
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.743 sec - in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec - in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec - in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
Running org.apache.hadoop.io.file.tfile.TestTFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.559 sec - in org.apache.hadoop.io.file.tfile.TestTFile
Running org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.063 sec - in org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
Running org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.185 sec - in org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileSeek
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.384 sec - in org.apache.hadoop.io.file.tfile.TestTFileSeek
Running org.apache.hadoop.io.file.tfile.TestVLong
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.78 sec - in org.apache.hadoop.io.file.tfile.TestVLong
Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec - in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.718 sec - in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileComparator2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.592 sec - in org.apache.hadoop.io.file.tfile.TestTFileComparator2
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.035 sec - in org.apache.hadoop.io.TestBytesWritable
Running org.apache.hadoop.io.TestWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 sec - in org.apache.hadoop.io.TestWritable
Running org.apache.hadoop.io.TestIOUtils
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.181 sec - in org.apache.hadoop.io.TestIOUtils
Running org.apache.hadoop.io.serializer.TestWritableSerialization
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec - in org.apache.hadoop.io.serializer.TestWritableSerialization
Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec - in org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Running org.apache.hadoop.io.serializer.TestSerializationFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.149 sec - in org.apache.hadoop.io.serializer.TestSerializationFactory
Running org.apache.hadoop.io.TestMapFile
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.652 sec - in org.apache.hadoop.io.TestMapFile
Running org.apache.hadoop.io.TestSequenceFile
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.265 sec - in org.apache.hadoop.io.TestSequenceFile
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.952 sec - in org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Running org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.031 sec - in org.apache.hadoop.security.ssl.TestSSLFactory
Running org.apache.hadoop.security.TestUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec - in org.apache.hadoop.security.TestUserFromEnv
Running org.apache.hadoop.security.TestJNIGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.072 sec - in org.apache.hadoop.security.TestJNIGroupsMapping
Running org.apache.hadoop.security.TestDoAsEffectiveUser
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.637 sec - in org.apache.hadoop.security.TestDoAsEffectiveUser
Running org.apache.hadoop.security.TestGroupFallback
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec - in org.apache.hadoop.security.TestGroupFallback
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.644 sec - in org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.214 sec - in org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.TestCredentials
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.239 sec - in org.apache.hadoop.security.TestCredentials
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec - in org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.037 sec - in org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.253 sec - in org.apache.hadoop.security.authorize.TestAccessControlList
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.275 sec - in org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec - in org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.23 sec - in org.apache.hadoop.security.token.TestToken
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.089 sec - in org.apache.hadoop.security.token.delegation.TestDelegationToken
Running org.apache.hadoop.security.TestSecurityUtil
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.381 sec - in org.apache.hadoop.security.TestSecurityUtil
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec - in org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.ipc.TestCallQueueManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.371 sec - in org.apache.hadoop.ipc.TestCallQueueManager
Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.393 sec - in org.apache.hadoop.ipc.TestMiniRPCBenchmark
Running org.apache.hadoop.ipc.TestServer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec - in org.apache.hadoop.ipc.TestServer
Running org.apache.hadoop.ipc.TestIdentityProviders
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.233 sec - in org.apache.hadoop.ipc.TestIdentityProviders
Running org.apache.hadoop.ipc.TestSaslRPC
Tests run: 85, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.071 sec - in org.apache.hadoop.ipc.TestSaslRPC
Running org.apache.hadoop.ipc.TestRetryCache
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.175 sec - in org.apache.hadoop.ipc.TestRetryCache
Running org.apache.hadoop.ipc.TestRPC
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.518 sec - in org.apache.hadoop.ipc.TestRPC
Running org.apache.hadoop.ipc.TestIPC
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.761 sec - in org.apache.hadoop.ipc.TestIPC
Running org.apache.hadoop.ipc.TestRetryCacheMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec - in org.apache.hadoop.ipc.TestRetryCacheMetrics
Running org.apache.hadoop.ipc.TestProtoBufRpc
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.53 sec - in org.apache.hadoop.ipc.TestProtoBufRpc
Running org.apache.hadoop.ipc.TestSocketFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.141 sec - in org.apache.hadoop.ipc.TestSocketFactory
Running org.apache.hadoop.ipc.TestMultipleProtocolServer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.413 sec - in org.apache.hadoop.ipc.TestMultipleProtocolServer
Running org.apache.hadoop.ipc.TestIPCServerResponder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.174 sec - in org.apache.hadoop.ipc.TestIPCServerResponder
Running org.apache.hadoop.ipc.TestRPCCallBenchmark
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.397 sec - in org.apache.hadoop.ipc.TestRPCCallBenchmark
Running org.apache.hadoop.ipc.TestRPCCompatibility
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.681 sec - in org.apache.hadoop.ipc.TestRPCCompatibility
Running org.apache.hadoop.util.TestLightWeightCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.24 sec - in org.apache.hadoop.util.TestLightWeightCache
Running org.apache.hadoop.util.TestShutdownThreadsHelper
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.07 sec - in org.apache.hadoop.util.TestShutdownThreadsHelper
Running org.apache.hadoop.util.TestVersionUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.036 sec - in org.apache.hadoop.util.TestVersionUtil
Running org.apache.hadoop.util.TestRunJar
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.056 sec - in org.apache.hadoop.util.TestRunJar
Running org.apache.hadoop.util.TestStringUtils
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec - in org.apache.hadoop.util.TestStringUtils
Running org.apache.hadoop.util.TestOptions
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.031 sec - in org.apache.hadoop.util.TestOptions
Running org.apache.hadoop.util.TestShell
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.152 sec - in org.apache.hadoop.util.TestShell
Running org.apache.hadoop.util.TestLineReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.037 sec - in org.apache.hadoop.util.TestLineReader
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.316 sec - in org.apache.hadoop.util.TestIndexedSort
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec - in org.apache.hadoop.util.TestIdentityHashStore
Running org.apache.hadoop.util.TestNativeLibraryChecker
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec - in org.apache.hadoop.util.TestNativeLibraryChecker
Running org.apache.hadoop.util.hash.TestHash
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec - in org.apache.hadoop.util.hash.TestHash
Running org.apache.hadoop.util.TestDataChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec - in org.apache.hadoop.util.TestDataChecksum
Running org.apache.hadoop.util.TestGenericsUtil
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.127 sec - in org.apache.hadoop.util.TestGenericsUtil
Running org.apache.hadoop.util.TestNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.068 sec - in org.apache.hadoop.util.TestNativeCodeLoader
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec - in org.apache.hadoop.util.TestProtoUtil
Running org.apache.hadoop.util.TestDiskChecker
Tests run: 14, Failures: 6, Errors: 0, Skipped: 0, Time elapsed: 0.515 sec <<< FAILURE! - in org.apache.hadoop.util.TestDiskChecker
testCheckDir_notReadable(org.apache.hadoop.util.TestDiskChecker)  Time elapsed: 0.022 sec  <<< FAILURE!
java.lang.AssertionError: checkDir success
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
	at org.apache.hadoop.util.TestDiskChecker.testCheckDir_notReadable(TestDiskChecker.java:101)

testCheckDir_notWritable(org.apache.hadoop.util.TestDiskChecker)  Time elapsed: 0.018 sec  <<< FAILURE!
java.lang.AssertionError: checkDir success
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
	at org.apache.hadoop.util.TestDiskChecker.testCheckDir_notWritable(TestDiskChecker.java:106)

testCheckDir_notListable(org.apache.hadoop.util.TestDiskChecker)  Time elapsed: 0.015 sec  <<< FAILURE!
java.lang.AssertionError: checkDir success
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
	at org.apache.hadoop.util.TestDiskChecker.testCheckDir_notListable(TestDiskChecker.java:111)

testCheckDir_notReadable_local(org.apache.hadoop.util.TestDiskChecker)  Time elapsed: 0.001 sec  <<< FAILURE!
java.lang.AssertionError: checkDir success
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
	at org.apache.hadoop.util.TestDiskChecker.testCheckDir_notReadable_local(TestDiskChecker.java:150)

testCheckDir_notWritable_local(org.apache.hadoop.util.TestDiskChecker)  Time elapsed: 0.002 sec  <<< FAILURE!
java.lang.AssertionError: checkDir success
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
	at org.apache.hadoop.util.TestDiskChecker.testCheckDir_notWritable_local(TestDiskChecker.java:155)

testCheckDir_notListable_local(org.apache.hadoop.util.TestDiskChecker)  Time elapsed: 0.002 sec  <<< FAILURE!
java.lang.AssertionError: checkDir success
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
	at org.apache.hadoop.util.TestDiskChecker.testCheckDir_notListable_local(TestDiskChecker.java:160)

Running org.apache.hadoop.util.TestWinUtils
Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.083 sec - in org.apache.hadoop.util.TestWinUtils
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec - in org.apache.hadoop.util.TestStringInterner
Running org.apache.hadoop.util.TestGSet
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.544 sec - in org.apache.hadoop.util.TestGSet
Running org.apache.hadoop.util.TestSignalLogger
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec - in org.apache.hadoop.util.TestSignalLogger
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec - in org.apache.hadoop.util.TestZKUtil
Running org.apache.hadoop.util.TestAsyncDiskService
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec - in org.apache.hadoop.util.TestAsyncDiskService
Running org.apache.hadoop.util.TestPureJavaCrc32
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.269 sec - in org.apache.hadoop.util.TestPureJavaCrc32
Running org.apache.hadoop.util.TestHostsFileReader
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.091 sec - in org.apache.hadoop.util.TestHostsFileReader
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec - in org.apache.hadoop.util.TestShutdownHookManager
Running org.apache.hadoop.util.TestReflectionUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec - in org.apache.hadoop.util.TestReflectionUtils
Running org.apache.hadoop.util.TestClassUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.033 sec - in org.apache.hadoop.util.TestClassUtil
Running org.apache.hadoop.util.TestJarFinder
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.302 sec - in org.apache.hadoop.util.TestJarFinder
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.404 sec - in org.apache.hadoop.util.TestGenericOptionsParser
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec - in org.apache.hadoop.util.TestLightWeightGSet
Running org.apache.hadoop.util.bloom.TestBloomFilters
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.264 sec - in org.apache.hadoop.util.bloom.TestBloomFilters

Results :

Failed tests: 
  TestZKFailoverController.testGracefulFailoverFailBecomingActive:484 Did not fail to graceful failover when target failed to become active!
  TestZKFailoverController.testGracefulFailoverFailBecomingStandby:518 expected:<1> but was:<0>
  TestZKFailoverController.testGracefulFailoverFailBecomingStandbyAndFailFence:540 Failover should have failed when old node wont fence
  TestTableMapping.testResolve:56 expected:</[rack1]> but was:</[default-rack]>
  TestTableMapping.testTableCaching:79 expected:</[rack1]> but was:</[default-rack]>
  TestTableMapping.testClearingCachedMappings:144 expected:</[rack1]> but was:</[default-rack]>
  TestNetUtils.testNormalizeHostName:619 expected:<[192.168.12.37]> but was:<[UnknownHost]>
  TestStaticMapping.testCachingRelaysResolveQueries:219->assertMapSize:94->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93 Expected two entries in the map Mapping: cached switch mapping relaying to static mapping with single switch = false
Map:
  192.168.12.37 -> /default-rack
Nodes: 1
Switches: 1
 expected:<2> but was:<1>
  TestStaticMapping.testCachingCachesNegativeEntries:236->assertMapSize:94->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93 Expected two entries in the map Mapping: cached switch mapping relaying to static mapping with single switch = false
Map:
  192.168.12.37 -> /default-rack
Nodes: 1
Switches: 1
 expected:<2> but was:<1>
  TestLocalDirAllocator.test0:141->validateTempDirCreation:110 Checking for build/test/temp/RELATIVE1 in build/test/temp/RELATIVE0/block9179437685378573554.tmp - FAILED!
  TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:110 Checking for build/test/temp/RELATIVE2 in build/test/temp/RELATIVE1/block7291734072352417917.tmp - FAILED!
  TestLocalDirAllocator.testRWBufferDirBecomesRO:220->validateTempDirCreation:110 Checking for build/test/temp/RELATIVE3 in build/test/temp/RELATIVE4/block4513557287751895920.tmp - FAILED!
  TestLocalDirAllocator.test0:141->validateTempDirCreation:110 Checking for /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1 in /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block8523050700077504235.tmp - FAILED!
  TestLocalDirAllocator.testROBufferDirAndRWBufferDir:164->validateTempDirCreation:110 Checking for /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2 in /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block200624031350129544.tmp - FAILED!
  TestLocalDirAllocator.testRWBufferDirBecomesRO:219->validateTempDirCreation:110 Checking for /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE3 in /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE4/block8868024598532665020.tmp - FAILED!
  TestLocalDirAllocator.test0:142->validateTempDirCreation:110 Checking for file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1 in /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block7318078621961387478.tmp - FAILED!
  TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:110 Checking for file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2 in /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block3298540567692029628.tmp - FAILED!
  TestLocalDirAllocator.testRWBufferDirBecomesRO:220->validateTempDirCreation:110 Checking for file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED3 in /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED4/block6014893019370084121.tmp - FAILED!
  TestFileUtil.testFailFullyDelete:411->validateAndSetWritablePermissions:385 The directory xSubDir *should* not have been deleted. expected:<true> but was:<false>
  TestFileUtil.testFailFullyDeleteContents:492->validateAndSetWritablePermissions:385 The directory xSubDir *should* not have been deleted. expected:<true> but was:<false>
  TestFileUtil.testGetDU:592 null
  TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:289 Should throw IOException
  TestLocalFileSystem.testReportChecksumFailure:356 null
  TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:289 Should throw IOException
  TestDiskChecker.testCheckDir_notReadable:101->_checkDirs:126 checkDir success
  TestDiskChecker.testCheckDir_notWritable:106->_checkDirs:126 checkDir success
  TestDiskChecker.testCheckDir_notListable:111->_checkDirs:126 checkDir success
  TestDiskChecker.testCheckDir_notReadable_local:150->_checkDirs:174 checkDir success
  TestDiskChecker.testCheckDir_notWritable_local:155->_checkDirs:174 checkDir success
  TestDiskChecker.testCheckDir_notListable_local:160->_checkDirs:174 checkDir success

Tests in error: 
  TestZKFailoverController.testGracefulFailover:444->Object.wait:-2 »  test time...

Tests run: 2285, Failures: 30, Errors: 1, Skipped: 104

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop Main ................................ SUCCESS [0.678s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [0.247s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [0.780s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [0.221s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.087s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [0.773s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [1:58.825s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [6:16.248s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [7.347s]
[INFO] Apache Hadoop Common .............................. FAILURE [11:49.512s]
[INFO] Apache Hadoop NFS ................................. SKIPPED
[INFO] Apache Hadoop Common Project ...................... SKIPPED
[INFO] Apache Hadoop HDFS ................................ SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] hadoop-yarn ....................................... SKIPPED
[INFO] hadoop-yarn-api ................................... SKIPPED
[INFO] hadoop-yarn-common ................................ SKIPPED
[INFO] hadoop-yarn-server ................................ SKIPPED
[INFO] hadoop-yarn-server-common ......................... SKIPPED
[INFO] hadoop-yarn-server-nodemanager .................... SKIPPED
[INFO] hadoop-yarn-server-web-proxy ...................... SKIPPED
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SKIPPED
[INFO] hadoop-yarn-server-resourcemanager ................ SKIPPED
[INFO] hadoop-yarn-server-tests .......................... SKIPPED
[INFO] hadoop-yarn-client ................................ SKIPPED
[INFO] hadoop-yarn-applications .......................... SKIPPED
[INFO] hadoop-yarn-applications-distributedshell ......... SKIPPED
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SKIPPED
[INFO] hadoop-yarn-site .................................. SKIPPED
[INFO] hadoop-yarn-project ............................... SKIPPED
[INFO] hadoop-mapreduce-client ........................... SKIPPED
[INFO] hadoop-mapreduce-client-core ...................... SKIPPED
[INFO] hadoop-mapreduce-client-common .................... SKIPPED
[INFO] hadoop-mapreduce-client-shuffle ................... SKIPPED
[INFO] hadoop-mapreduce-client-app ....................... SKIPPED
[INFO] hadoop-mapreduce-client-hs ........................ SKIPPED
[INFO] hadoop-mapreduce-client-jobclient ................. SKIPPED
[INFO] hadoop-mapreduce-client-hs-plugins ................ SKIPPED
[INFO] Apache Hadoop MapReduce Examples .................. SKIPPED
[INFO] hadoop-mapreduce .................................. SKIPPED
[INFO] Apache Hadoop MapReduce Streaming ................. SKIPPED
[INFO] Apache Hadoop Distributed Copy .................... SKIPPED
[INFO] Apache Hadoop Archives ............................ SKIPPED
[INFO] Apache Hadoop Rumen ............................... SKIPPED
[INFO] Apache Hadoop Gridmix ............................. SKIPPED
[INFO] Apache Hadoop Data Join ........................... SKIPPED
[INFO] Apache Hadoop Extras .............................. SKIPPED
[INFO] Apache Hadoop Pipes ............................... SKIPPED
[INFO] Apache Hadoop OpenStack support ................... SKIPPED
[INFO] Apache Hadoop Client .............................. SKIPPED
[INFO] Apache Hadoop Mini-Cluster ........................ SKIPPED
[INFO] Apache Hadoop Scheduler Load Simulator ............ SKIPPED
[INFO] Apache Hadoop Tools Dist .......................... SKIPPED
[INFO] Apache Hadoop Tools ............................... SKIPPED
[INFO] Apache Hadoop Distribution ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 20:15.984s
[INFO] Finished at: Sun Aug 03 18:00:44 HKT 2014
[INFO] Final Memory: 56M/900M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-common: There are test failures.
[ERROR] 
[ERROR] Please refer to /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-common
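
A practical way to iterate on these failures is to re-run the failing classes in isolation rather than the whole reactor, or to tell Maven not to abort on test failures. The commands below are only a sketch, assuming a standard Hadoop 2.4.1 source tree and the stock Surefire configuration; the module and test names are taken from the log above.

  # Re-run a single failing test class inside hadoop-common
  cd hadoop-common-project/hadoop-common
  mvn test -Dtest=TestDiskChecker
  mvn test -Dtest=TestLocalDirAllocator

  # Run the full build but keep going past test failures
  mvn clean install -Dmaven.test.failure.ignore=true

  # Build the binary distribution without running tests (the form described in BUILDING.txt)
  mvn package -Pdist,native -DskipTests -Dtar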


Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager and JobHistoryServer do not auto-failover to Standby Node

Posted by Akira AJISAKA <aj...@oss.nttdata.co.jp>.
You need additional settings to make ResourceManager auto-failover.

http://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html
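
As a rough sketch of what that page describes (the rm1/rm2 ids, hostnames and ZooKeeper quorum below are placeholders, not values from this thread), ResourceManager HA is enabled in yarn-site.xml along these lines:

   <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
   </property>
   <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-cluster</value>
   </property>
   <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
   </property>
   <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>nn-1.example.com</value>
   </property>
   <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>nn-2.example.com</value>
   </property>
   <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>zk1:2181,zk2:2181,zk3:2181</value>
   </property>

With HA enabled, each ResourceManager's state should be checkable with "yarn rmadmin -getServiceState rm1" (and rm2).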

JobHistoryServer does not have an automatic failover feature.

Regards,
Akira

(2014/08/05 20:15), Arthur.hk.chan@gmail.com wrote:
> Hi
>
> I have set up Hadoop 2.4.1 with HDFS High Availability using the
> Quorum Journal Manager.
>
> I am verifying automatic failover: I manually used the "kill -9" command
> to stop all running Hadoop services on the active node (NN-1). I can see
> that the standby node (NN-2) now becomes ACTIVE, which is good; however,
> the "ResourceManager" service cannot be found on NN-2. Please advise how
> to make ResourceManager and JobHistoryServer auto-failover, or am I
> missing some important setup? Am I missing some settings in
> hdfs-site.xml or core-site.xml?
>
> Please help!
>
> Regards
> Arthur
>
>
>
>
> BEFORE TESTING:
> NN-1:
> jps
> 9564 NameNode
> 10176 JobHistoryServer
> 21215 Jps
> 17636 QuorumPeerMain
> 20838 NodeManager
> 9678 DataNode
> 9933 JournalNode
> 10085 DFSZKFailoverController
> 20724 ResourceManager
>
> NN-2 (Standby Name node)
> jps
> 14064 Jps
> 32046 NameNode
> 13765 NodeManager
> 32126 DataNode
> 32271 DFSZKFailoverController
>
>
>
> AFTER
> NN-1
> jps
> 17636 QuorumPeerMain
> 21508 Jps
>
> NN-2
> jps
> 32046 NameNode
> 13765 NodeManager
> 32126 DataNode
> 32271 DFSZKFailoverController
> 14165 Jps
>
>
>


Re: Hadoop 2.4.1 How to clear usercache

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
I restarted the cluster, and the usercache was all cleared automatically. No longer an issue. Thanks.

 
On 20 Aug, 2014, at 7:05 pm, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:

> Hi, 
> 
>  I use Hadoop 2.4.1; in my cluster, Non DFS Used is 2.09 TB.
> 
> I found that these files are all under tmp/nm-local-dir/usercache.
> 
> Is there any Hadoop command to remove these unused user cache files under tmp/nm-local-dir/usercache?
> 
> Regards
> Arthur
> 
> 
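
For background: the NodeManager prunes nm-local-dir/usercache on its own once the total size of localized files exceeds a target, so the cache is normally trimmed without manual intervention. A hedged yarn-site.xml sketch of the two properties that control this (the values shown are the usual defaults, not taken from this thread):

   <property>
    <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
    <value>10240</value>
   </property>
   <property>
    <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
    <value>600000</value>
   </property>

Note that the cleanup only removes resources that are not in use by running containers.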



Hadoop 2.4.1 How to clear usercache

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi, 

I use Hadoop 2.4.1; in my cluster, Non DFS Used is 2.09 TB.

I found that these files are all under tmp/nm-local-dir/usercache.

Is there any Hadoop command to remove these unused user cache files under tmp/nm-local-dir/usercache?

Regards
Arthur



Re: Hadoop 2.4.1 Snappy Smoke Test failed

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Thanks for your reply.  However, I don't think it is a 32-bit version issue, because my Hadoop is 64-bit (I compiled it from source).  I suspect my way of installing Snappy is wrong.
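
In case it helps anyone hitting the same thing, a quick way to check whether the native Snappy library is actually visible to Hadoop would be something like the following (a rough sketch; the checknative subcommand should be available in a 2.4.1 build, and the native path is the one from my mapred-site.xml):

# Prints the native libraries Hadoop can load; the snappy line should
# report "true" together with the path to libsnappy.
hadoop checknative -a

# Confirm the directory referenced by -Djava.library.path really
# contains a Snappy library (yum usually installs it under /usr/lib64).
ls -l /usr/lib/hadoop/lib/native/ | grep -i snappy
ls -l /usr/lib64/ | grep -i libsnappy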

Arthur
On 19 Aug, 2014, at 11:53 pm, Andre Kelpe <ak...@concurrentinc.com> wrote:

> Could this be caused by the fact that hadoop no longer ships with 64bit libs? https://issues.apache.org/jira/browse/HADOOP-9911
> 
> - André
> 
> 
> On Tue, Aug 19, 2014 at 5:40 PM, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:
> Hi,
> 
> I am trying Snappy in Hadoop 2.4.1, here are my steps: 
> 
> (CentOS 64-bit)
> 1)
> yum install snappy snappy-devel
> 
> 2)
> added the following 
> (core-site.xml)
>    <property>
>     <name>io.compression.codecs</name>
>     <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
>    </property>
> 
> 3) 
> mapred-site.xml
>    <property>
>     <name>mapreduce.admin.map.child.java.opts</name>
>     <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
>     <final>true</final>
>    </property>
>    <property>
>     <name>mapreduce.admin.reduce.child.java.opts</name>
>     <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
>     <final>true</final>
>    </property>
> 
> 4) smoke test
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar  teragen 100000 /tmp/teragenout
> 
> I got the following warning, actually there is no any test file created in hdfs:
> 
> 14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.map.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the map JVM env using mapreduce.admin.user.env config settings.
> 14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.reduce.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the reduce JVM env using mapreduce.admin.user.env config settings.
> 
> Can anyone please advise how to install and enable SNAPPY in Hadoop 2.4.1? or what would be wrong? or is my new change in mapred-site.xml incorrect?
> 
> Regards
> Arthur
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> André Kelpe
> andre@concurrentinc.com
> http://concurrentinc.com


Re: Hadoop 2.4.1 Snappy Smoke Test failed

Posted by Andre Kelpe <ak...@concurrentinc.com>.
Could this be caused by the fact that Hadoop no longer ships with 64-bit
libs? https://issues.apache.org/jira/browse/HADOOP-9911
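
A quick way to check what the bundled libraries were built for (a sketch; HADOOP_HOME here is just a placeholder for wherever the 2.4.1 tarball was unpacked):

# "ELF 64-bit" means the native libraries match a 64-bit JVM;
# "ELF 32-bit" would confirm the problem described in HADOOP-9911.
file $HADOOP_HOME/lib/native/libhadoop.so*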

- André


On Tue, Aug 19, 2014 at 5:40 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> I am trying Snappy in Hadoop 2.4.1, here are my steps:
>
> (CentOS 64-bit)
> 1)
> yum install snappy snappy-devel
>
> 2)
> added the following
> (core-site.xml)
>    <property>
>     <name>io.compression.codecs</name>
>
> <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
>    </property>
>
> 3)
> mapred-site.xml
>    <property>
>     <name>mapreduce.admin.map.child.java.opts</name>
>     <value>-server -XX:NewRatio=8
> -Djava.library.path=/usr/lib/hadoop/lib/native/
> -Djava.net.preferIPv4Stack=true</value>
>     <final>true</final>
>    </property>
>    <property>
>     <name>mapreduce.admin.reduce.child.java.opts</name>
>     <value>-server -XX:NewRatio=8
> -Djava.library.path=/usr/lib/hadoop/lib/native/
> -Djava.net.preferIPv4Stack=true</value>
>     <final>true</final>
>    </property>
>
> 4) smoke test
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar  teragen
> 100000 /tmp/teragenout
>
> I got the following warning, actually there is no any test file created in
> hdfs:
>
> 14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in
> mapreduce.admin.map.child.java.opts can cause programs to no longer
> function if hadoop native libraries are used. These values should be set as
> part of the LD_LIBRARY_PATH in the map JVM env using
> mapreduce.admin.user.env config settings.
> 14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in
> mapreduce.admin.reduce.child.java.opts can cause programs to no longer
> function if hadoop native libraries are used. These values should be set as
> part of the LD_LIBRARY_PATH in the reduce JVM env using
> mapreduce.admin.user.env config settings.
>
> Can anyone please advise how to install and enable SNAPPY in Hadoop 2.4.1?
> or what would be wrong? or is my new change in mapred-site.xml incorrect?
>
> Regards
> Arthur
>
>
>
>
>
>


-- 
André Kelpe
andre@concurrentinc.com
http://concurrentinc.com


Hadoop 2.4.1 Snappy Smoke Test failed

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

I am trying Snappy in Hadoop 2.4.1, here are my steps: 

(CentOS 64-bit)
1)
yum install snappy snappy-devel

2)
added the following 
(core-site.xml)
   <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

3) 
mapred-site.xml
   <property>
    <name>mapreduce.admin.map.child.java.opts</name>
    <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
    <final>true</final>
   </property>
   <property>
    <name>mapreduce.admin.reduce.child.java.opts</name>
    <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
    <final>true</final>
   </property>

4) smoke test
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar  teragen 100000 /tmp/teragenout

I got the following warnings; actually, no test file was created in HDFS at all:

14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.map.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the map JVM env using mapreduce.admin.user.env config settings.
14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.reduce.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the reduce JVM env using mapreduce.admin.user.env config settings.

Can anyone please advise how to install and enable Snappy in Hadoop 2.4.1? What could be wrong? Is my new change in mapred-site.xml incorrect?
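
The two YARNRunner lines look like warnings rather than errors, so I also plan to check the following (a small sketch; the -D property names are the standard Hadoop 2.x ones, and the output paths are just examples):

# Did teragen write anything at all?
bin/hdfs dfs -ls /tmp/teragenout

# Force Snappy for the map output; if the native codec is missing this
# usually fails quickly complaining that the native snappy library is not available.
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar teragen \
  -Dmapreduce.map.output.compress=true \
  -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  100000 /tmp/teragenout-snappy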

Regards
Arthur






Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Hey, Arthur:

   Could you show me the error message for rm2, please?


Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 10:17 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> Thank you very much!
>
> At the moment, if I run ./sbin/start-yarn.sh on rm1, the STANDBY ResourceManager
> on rm2 is not started accordingly.  Please advise what could be wrong?
> Thanks
>
> Regards
> Arthur
>
>
>
>
> On 12 Aug, 2014, at 1:13 pm, Xuan Gong <xg...@hortonworks.com> wrote:
>
> Some questions:
> Q1)  I need to start YARN on EACH master separately; is this normal? Is there
> a way that I can just run ./sbin/start-yarn.sh on rm1 and get the
> STANDBY ResourceManager on rm2 started as well?
>
> No, need to start multiple RMs separately.
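
To be concrete, a minimal sketch of what starting them separately could look like, assuming the standard sbin scripts from the 2.4.1 tarball and the rm1/rm2 ids from the yarn-site.xml quoted below:

# On rm1:
./sbin/yarn-daemon.sh start resourcemanager

# On rm2 (the standby master), start its ResourceManager explicitly as well:
./sbin/yarn-daemon.sh start resourcemanager

# Then check which instance actually became active:
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2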
>
> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
> down in an auto-failover env? or how do you monitor the status of
> ACTIVE/STANDBY ResourceManager?
>
> Interesting question. But one of the design goals for auto-failover is that the
> downtime of the RM is invisible to end users. End users can submit
> applications normally even if a failover happens.
>
> We can monitor the status of RMs by using the command-line (you did
> previously) or from webUI/webService
> (rm_address:portnumber/cluster/cluster). We can get the current status from
> there.
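
A very small polling sketch along those lines, for the alerting question (entirely hypothetical: the mailx dependency, the address and the 60-second interval are assumptions, not anything shipped with Hadoop):

#!/bin/bash
# Poll both ResourceManagers and mail an alert when neither reports "active".
while true; do
  s1=$(yarn rmadmin -getServiceState rm1 2>/dev/null)
  s2=$(yarn rmadmin -getServiceState rm2 2>/dev/null)
  if [ "$s1" != "active" ] && [ "$s2" != "active" ]; then
    echo "rm1=$s1 rm2=$s2" | mailx -s "No active ResourceManager" admin@example.com
  fi
  sleep 60
done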
>
> Thanks
>
> Xuan Gong
>
>
> On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
>> Hi,
>>
>> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is
>> my yarn-site.xml.
>>
>> At the moment, the ResourceManager HA works if:
>>
>> 1) at rm1, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> active
>>
>> yarn rmadmin -getServiceState rm2
>> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/
>> 192.168.1.1:23142. Already tried 0 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000
>> MILLISECONDS)
>> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on
>> connection exception: java.net.ConnectException: Connection refused; For
>> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>>
>>
>> 2) at rm2, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> standby
>>
>>
>> Some questions:
>> Q1)  I need to start YARN on EACH master separately; is this normal? Is
>> there a way that I can just run ./sbin/start-yarn.sh on rm1 and get the
>> STANDBY ResourceManager on rm2 started as well?
>>
>> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
>> down in an auto-failover env? or how do you monitor the status of
>> ACTIVE/STANDBY ResourceManager?
>>
>>
>> Regards
>> Arthur
>>
>>
>> <?xml version="1.0"?>
>> <configuration>
>>
>> <!-- Site specific YARN configuration properties -->
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services</name>
>>       <value>mapreduce_shuffle</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.resourcemanager.address</name>
>>       <value>192.168.1.1:8032</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.resource-tracker.address</name>
>>        <value>192.168.1.1:8031</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.admin.address</name>
>>        <value>192.168.1.1:8033</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.scheduler.address</name>
>>        <value>192.168.1.1:8030</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.loacl-dirs</name>
>>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>>        <final>true</final>
>>    </property>
>>
>>    <property>
>>        <name>yarn.web-proxy.address</name>
>>        <value>192.168.1.1:8888</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>>    </property>
>>
>>
>>
>>
>>    <property>
>>       <name>yarn.nodemanager.resource.memory-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.minimum-allocation-mb</name>
>>       <value>9216</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.maximum-allocation-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>
>>
>>   <property>
>>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>>     <value>2000</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.cluster-id</name>
>>     <value>cluster_rm</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.rm-ids</name>
>>     <value>rm1,rm2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm1</name>
>>     <value>192.168.1.1</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm2</name>
>>     <value>192.168.1.2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.recovery.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.store.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>>   </property>
>>   <property>
>>       <name>yarn.resourcemanager.zk-address</name>
>>       <value>rm1:2181,m135:2181,m137:2181</value>
>>   </property>
>>   <property>
>>
>> <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>>     <value>5000</value>
>>   </property>
>>
>>   <!-- RM1 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm1</name>
>>     <value>192.168.1.1:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>>     <value>192.168.1.1:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>>     <value>192.168.1.1:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>>     <value>192.168.1.1:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>>     <value>192.168.1.1:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm1</name>
>>     <value>192.168.1.1:23142</value>
>>   </property>
>>
>>
>>   <!-- RM2 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm2</name>
>>     <value>192.168.1.2:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>>     <value>192.168.1.2:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>>     <value>192.168.1.2:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>>     <value>192.168.1.2:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>>     <value>192.168.1.2:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm2</name>
>>     <value>192.168.1.2:23142</value>
>>   </property>
>>
>>   <property>
>>     <name>yarn.nodemanager.remote-app-log-dir</name>
>>     <value>/edh/hadoop_logs/hadoop/</value>
>>   </property>
>>
>> </configuration>
>>
>>
>>
>> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
>>
>> Hey, Arthur:
>>
>>     Did you use single node cluster or multiple nodes cluster? Could you
>> share your configuration file (yarn-site.xml) ? This looks like a
>> configuration issue.
>>
>> Thanks
>>
>> Xuan Gong
>>
>>
>> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
>> arthur.hk.chan@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> If I have TWO nodes for ResourceManager HA, what should be the correct
>>> steps and commands to start and stop ResourceManager in a ResourceManager
>>> HA cluster ?
>>> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it
>>> seems that  ./sbin/start-yarn.sh can only start YARN in a node at a
>>> time.
>>>
>>> Regards
>>> Arthur
>>>
>>>
>>>
>>
>

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Hey, Arthur:

   Could you show me the error message for rm2. please ?


Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 10:17 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> Thank y very much!
>
> At the moment if I run ./sbin/start-yarn.sh in rm1, the standby STANDBY ResourceManager
> in rm2 is not started accordingly.  Please advise what would be wrong?
> Thanks
>
> Regards
> Arthur
>
>
>
>
> On 12 Aug, 2014, at 1:13 pm, Xuan Gong <xg...@hortonworks.com> wrote:
>
> Some questions:
> Q1)  I need start yarn in EACH master separately, is this normal? Is there
> a way that I just run ./sbin/start-yarn.sh in rm1 and get the
> STANDBY ResourceManager in rm2 started as well?
>
> No, need to start multiple RMs separately.
>
> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
> down in an auto-failover env? or how do you monitor the status of
> ACTIVE/STANDBY ResourceManager?
>
> Interesting question. But one of the design for auto-failover is that the
> down-time of RM is invisible to end users. The end users can submit
> applications normally even if the failover happens.
>
> We can monitor the status of RMs by using the command-line (you did
> previously) or from webUI/webService
> (rm_address:portnumber/cluster/cluster). We can get the current status from
> there.
>
> Thanks
>
> Xuan Gong
>
>
> On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
>> Hi,
>>
>> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is
>> my yarn-site.xml.
>>
>> At the moment, the ResourceManager HA works if:
>>
>> 1) at rm1, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> active
>>
>> yarn rmadmin -getServiceState rm2
>> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/
>> 192.168.1.1:23142. Already tried 0 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000
>> MILLISECONDS)
>> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on
>> connection exception: java.net.ConnectException: Connection refused; For
>> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>>
>>
>> 2) at rm2, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> standby
>>
>>
>> Some questions:
>> Q1)  I need start yarn in EACH master separately, is this normal? Is
>> there a way that I just run ./sbin/start-yarn.sh in rm1 and get the
>> STANDBY ResourceManager in rm2 started as well?
>>
>> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
>> down in an auto-failover env? or how do you monitor the status of
>> ACTIVE/STANDBY ResourceManager?
>>
>>
>> Regards
>> Arthur
>>
>>
>> <?xml version="1.0"?>
>> <configuration>
>>
>> <!-- Site specific YARN configuration properties -->
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services</name>
>>       <value>mapreduce_shuffle</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.resourcemanager.address</name>
>>       <value>192.168.1.1:8032</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.resource-tracker.address</name>
>>        <value>192.168.1.1:8031</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.admin.address</name>
>>        <value>192.168.1.1:8033</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.scheduler.address</name>
>>        <value>192.168.1.1:8030</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.loacl-dirs</name>
>>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>>        <final>true</final>
>>    </property>
>>
>>    <property>
>>        <name>yarn.web-proxy.address</name>
>>        <value>192.168.1.1:8888</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>>    </property>
>>
>>
>>
>>
>>    <property>
>>       <name>yarn.nodemanager.resource.memory-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.minimum-allocation-mb</name>
>>       <value>9216</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.maximum-allocation-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>
>>
>>   <property>
>>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>>     <value>2000</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.cluster-id</name>
>>     <value>cluster_rm</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.rm-ids</name>
>>     <value>rm1,rm2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm1</name>
>>     <value>192.168.1.1</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm2</name>
>>     <value>192.168.1.2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.recovery.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.store.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>>   </property>
>>   <property>
>>       <name>yarn.resourcemanager.zk-address</name>
>>       <value>rm1:2181,m135:2181,m137:2181</value>
>>   </property>
>>   <property>
>>
>> <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>>     <value>5000</value>
>>   </property>
>>
>>   <!-- RM1 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm1</name>
>>     <value>192.168.1.1:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>>     <value>192.168.1.1:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>>     <value>192.168.1.1:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>>     <value>192.168.1.1:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>>     <value>192.168.1.1:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm1</name>
>>     <value>192.168.1.1:23142</value>
>>   </property>
>>
>>
>>   <!-- RM2 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm2</name>
>>     <value>192.168.1.2:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>>     <value>192.168.1.2:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>>     <value>192.168.1.2:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>>     <value>192.168.1.2:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>>     <value>192.168.1.2:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm2</name>
>>     <value>192.168.1.2:23142</value>
>>   </property>
>>
>>   <property>
>>     <name>yarn.nodemanager.remote-app-log-dir</name>
>>     <value>/edh/hadoop_logs/hadoop/</value>
>>   </property>
>>
>> </configuration>
>>
>>
>>
>> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
>>
>> Hey, Arthur:
>>
>>     Did you use single node cluster or multiple nodes cluster? Could you
>> share your configuration file (yarn-site.xml) ? This looks like a
>> configuration issue.
>>
>> Thanks
>>
>> Xuan Gong
>>
>>
>> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
>> arthur.hk.chan@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> If I have TWO nodes for ResourceManager HA, what should be the correct
>>> steps and commands to start and stop ResourceManager in a ResourceManager
>>> HA cluster ?
>>> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it
>>> seems that  ./sbin/start-yarn.sh can only start YARN in a node at a
>>> time.
>>>
>>> Regards
>>> Arthur
>>>
>>>
>>>
>>
>
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity
> to which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
> and delete it from your system. Thank You.
>
>
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Hey, Arthur:

   Could you show me the error message for rm2. please ?


Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 10:17 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> Thank y very much!
>
> At the moment if I run ./sbin/start-yarn.sh in rm1, the standby STANDBY ResourceManager
> in rm2 is not started accordingly.  Please advise what would be wrong?
> Thanks
>
> Regards
> Arthur
>
>
>
>
> On 12 Aug, 2014, at 1:13 pm, Xuan Gong <xg...@hortonworks.com> wrote:
>
> Some questions:
> Q1)  I need start yarn in EACH master separately, is this normal? Is there
> a way that I just run ./sbin/start-yarn.sh in rm1 and get the
> STANDBY ResourceManager in rm2 started as well?
>
> No, need to start multiple RMs separately.
>
> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
> down in an auto-failover env? or how do you monitor the status of
> ACTIVE/STANDBY ResourceManager?
>
> Interesting question. But one of the design for auto-failover is that the
> down-time of RM is invisible to end users. The end users can submit
> applications normally even if the failover happens.
>
> We can monitor the status of RMs by using the command-line (you did
> previously) or from webUI/webService
> (rm_address:portnumber/cluster/cluster). We can get the current status from
> there.
>
> Thanks
>
> Xuan Gong
>
>
> On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
>> Hi,
>>
>> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is
>> my yarn-site.xml.
>>
>> At the moment, the ResourceManager HA works if:
>>
>> 1) at rm1, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> active
>>
>> yarn rmadmin -getServiceState rm2
>> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/
>> 192.168.1.1:23142. Already tried 0 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000
>> MILLISECONDS)
>> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on
>> connection exception: java.net.ConnectException: Connection refused; For
>> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>>
>>
>> 2) at rm2, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> standby
>>
>>
>> Some questions:
>> Q1)  I need start yarn in EACH master separately, is this normal? Is
>> there a way that I just run ./sbin/start-yarn.sh in rm1 and get the
>> STANDBY ResourceManager in rm2 started as well?
>>
>> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
>> down in an auto-failover env? or how do you monitor the status of
>> ACTIVE/STANDBY ResourceManager?
>>
>>
>> Regards
>> Arthur
>>
>>
>> <?xml version="1.0"?>
>> <configuration>
>>
>> <!-- Site specific YARN configuration properties -->
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services</name>
>>       <value>mapreduce_shuffle</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.resourcemanager.address</name>
>>       <value>192.168.1.1:8032</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.resource-tracker.address</name>
>>        <value>192.168.1.1:8031</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.admin.address</name>
>>        <value>192.168.1.1:8033</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.scheduler.address</name>
>>        <value>192.168.1.1:8030</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.loacl-dirs</name>
>>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>>        <final>true</final>
>>    </property>
>>
>>    <property>
>>        <name>yarn.web-proxy.address</name>
>>        <value>192.168.1.1:8888</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>>    </property>
>>
>>
>>
>>
>>    <property>
>>       <name>yarn.nodemanager.resource.memory-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.minimum-allocation-mb</name>
>>       <value>9216</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.maximum-allocation-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>
>>
>>   <property>
>>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>>     <value>2000</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.cluster-id</name>
>>     <value>cluster_rm</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.rm-ids</name>
>>     <value>rm1,rm2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm1</name>
>>     <value>192.168.1.1</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm2</name>
>>     <value>192.168.1.2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.recovery.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.store.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>>   </property>
>>   <property>
>>       <name>yarn.resourcemanager.zk-address</name>
>>       <value>rm1:2181,m135:2181,m137:2181</value>
>>   </property>
>>   <property>
>>
>> <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>>     <value>5000</value>
>>   </property>
>>
>>   <!-- RM1 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm1</name>
>>     <value>192.168.1.1:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>>     <value>192.168.1.1:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>>     <value>192.168.1.1:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>>     <value>192.168.1.1:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>>     <value>192.168.1.1:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm1</name>
>>     <value>192.168.1.1:23142</value>
>>   </property>
>>
>>
>>   <!-- RM2 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm2</name>
>>     <value>192.168.1.2:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>>     <value>192.168.1.2:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>>     <value>192.168.1.2:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>>     <value>192.168.1.2:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>>     <value>192.168.1.2:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm2</name>
>>     <value>192.168.1.2:23142</value>
>>   </property>
>>
>>   <property>
>>     <name>yarn.nodemanager.remote-app-log-dir</name>
>>     <value>/edh/hadoop_logs/hadoop/</value>
>>   </property>
>>
>> </configuration>
>>
>>
>>
>> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
>>
>> Hey, Arthur:
>>
>>     Did you use single node cluster or multiple nodes cluster? Could you
>> share your configuration file (yarn-site.xml) ? This looks like a
>> configuration issue.
>>
>> Thanks
>>
>> Xuan Gong
>>
>>
>> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
>> arthur.hk.chan@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> If I have TWO nodes for ResourceManager HA, what should be the correct
>>> steps and commands to start and stop ResourceManager in a ResourceManager
>>> HA cluster ?
>>> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it
>>> seems that  ./sbin/start-yarn.sh can only start YARN in a node at a
>>> time.
>>>
>>> Regards
>>> Arthur
>>>
>>>
>>>
>>
>
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity
> to which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
> and delete it from your system. Thank You.
>
>
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Hadoop 2.4.1 Snappy Smoke Test failed

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

I am trying Snappy in Hadoop 2.4.1, here are my steps: 

(CentOS 64-bit)
1)
yum install snappy snappy-devel

2)
added the following 
(core-site.xml)
   <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

3) 
mapred-site.xml
   <property>
    <name>mapreduce.admin.map.child.java.opts</name>
    <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
    <final>true</final>
   </property>
   <property>
    <name>mapreduce.admin.reduce.child.java.opts</name>
    <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
    <final>true</final>
   </property>

4) smoke test
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar  teragen 100000 /tmp/teragenout

I got the following warning, actually there is no any test file created in hdfs:

14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.map.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the map JVM env using mapreduce.admin.user.env config settings.
14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.reduce.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the reduce JVM env using mapreduce.admin.user.env config settings.

Can anyone please advise how to install and enable SNAPPY in Hadoop 2.4.1? or what would be wrong? or is my new change in mapred-site.xml incorrect?

Regards
Arthur






Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Hey, Arthur:

   Could you show me the error message for rm2, please?


Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 10:17 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> Thank you very much!
>
> At the moment, if I run ./sbin/start-yarn.sh on rm1, the STANDBY ResourceManager
> on rm2 is not started accordingly.  Please advise what could be wrong?
> Thanks
>
> Regards
> Arthur
>
>
>
>
> On 12 Aug, 2014, at 1:13 pm, Xuan Gong <xg...@hortonworks.com> wrote:
>
> Some questions:
> Q1)  I need start yarn in EACH master separately, is this normal? Is there
> a way that I just run ./sbin/start-yarn.sh in rm1 and get the
> STANDBY ResourceManager in rm2 started as well?
>
> No, need to start multiple RMs separately.
>
> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
> down in an auto-failover env? or how do you monitor the status of
> ACTIVE/STANDBY ResourceManager?
>
> Interesting question. But one of the design for auto-failover is that the
> down-time of RM is invisible to end users. The end users can submit
> applications normally even if the failover happens.
>
> We can monitor the status of RMs by using the command-line (you did
> previously) or from webUI/webService
> (rm_address:portnumber/cluster/cluster). We can get the current status from
> there.
>
> Thanks
>
> Xuan Gong
>
>
> On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
>> Hi,
>>
>> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is
>> my yarn-site.xml.
>>
>> At the moment, the ResourceManager HA works if:
>>
>> 1) at rm1, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> active
>>
>> yarn rmadmin -getServiceState rm2
>> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/
>> 192.168.1.1:23142. Already tried 0 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000
>> MILLISECONDS)
>> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on
>> connection exception: java.net.ConnectException: Connection refused; For
>> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>>
>>
>> 2) at rm2, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> standby
>>
>>
>> Some questions:
>> Q1)  I need start yarn in EACH master separately, is this normal? Is
>> there a way that I just run ./sbin/start-yarn.sh in rm1 and get the
>> STANDBY ResourceManager in rm2 started as well?
>>
>> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
>> down in an auto-failover env? or how do you monitor the status of
>> ACTIVE/STANDBY ResourceManager?
>>
>>
>> Regards
>> Arthur
>>
>>
>> <?xml version="1.0"?>
>> <configuration>
>>
>> <!-- Site specific YARN configuration properties -->
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services</name>
>>       <value>mapreduce_shuffle</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.resourcemanager.address</name>
>>       <value>192.168.1.1:8032</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.resource-tracker.address</name>
>>        <value>192.168.1.1:8031</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.admin.address</name>
>>        <value>192.168.1.1:8033</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.scheduler.address</name>
>>        <value>192.168.1.1:8030</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.loacl-dirs</name>
>>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>>        <final>true</final>
>>    </property>
>>
>>    <property>
>>        <name>yarn.web-proxy.address</name>
>>        <value>192.168.1.1:8888</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>>    </property>
>>
>>
>>
>>
>>    <property>
>>       <name>yarn.nodemanager.resource.memory-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.minimum-allocation-mb</name>
>>       <value>9216</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.maximum-allocation-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>
>>
>>   <property>
>>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>>     <value>2000</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.cluster-id</name>
>>     <value>cluster_rm</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.rm-ids</name>
>>     <value>rm1,rm2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm1</name>
>>     <value>192.168.1.1</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm2</name>
>>     <value>192.168.1.2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.recovery.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.store.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>>   </property>
>>   <property>
>>       <name>yarn.resourcemanager.zk-address</name>
>>       <value>rm1:2181,m135:2181,m137:2181</value>
>>   </property>
>>   <property>
>>
>> <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>>     <value>5000</value>
>>   </property>
>>
>>   <!-- RM1 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm1</name>
>>     <value>192.168.1.1:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>>     <value>192.168.1.1:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>>     <value>192.168.1.1:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>>     <value>192.168.1.1:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>>     <value>192.168.1.1:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm1</name>
>>     <value>192.168.1.1:23142</value>
>>   </property>
>>
>>
>>   <!-- RM2 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm2</name>
>>     <value>192.168.1.2:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>>     <value>192.168.1.2:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>>     <value>192.168.1.2:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>>     <value>192.168.1.2:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>>     <value>192.168.1.2:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm2</name>
>>     <value>192.168.1.2:23142</value>
>>   </property>
>>
>>   <property>
>>     <name>yarn.nodemanager.remote-app-log-dir</name>
>>     <value>/edh/hadoop_logs/hadoop/</value>
>>   </property>
>>
>> </configuration>
>>
>>
>>
>> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
>>
>> Hey, Arthur:
>>
>>     Did you use single node cluster or multiple nodes cluster? Could you
>> share your configuration file (yarn-site.xml) ? This looks like a
>> configuration issue.
>>
>> Thanks
>>
>> Xuan Gong
>>
>>
>> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
>> arthur.hk.chan@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> If I have TWO nodes for ResourceManager HA, what should be the correct
>>> steps and commands to start and stop ResourceManager in a ResourceManager
>>> HA cluster ?
>>> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it
>>> seems that  ./sbin/start-yarn.sh can only start YARN in a node at a
>>> time.
>>>
>>> Regards
>>> Arthur
>>>
>>>
>>>
>>
>
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity
> to which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
> and delete it from your system. Thank You.
>
>
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

Thank you very much!

At the moment, if I run ./sbin/start-yarn.sh on rm1, the STANDBY ResourceManager on rm2 is not started accordingly.  Please advise what could be wrong. Thanks

Regards
Arthur




On 12 Aug, 2014, at 1:13 pm, Xuan Gong <xg...@hortonworks.com> wrote:

> Some questions:
> Q1)  I need start yarn in EACH master separately, is this normal? Is there a way that I just run ./sbin/start-yarn.sh in rm1 and get the STANDBY ResourceManager in rm2 started as well?
> 
> No, need to start multiple RMs separately.
> 
> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is down in an auto-failover env? or how do you monitor the status of ACTIVE/STANDBY ResourceManager? 
> 
> Interesting question. But one of the design for auto-failover is that the down-time of RM is invisible to end users. The end users can submit applications normally even if the failover happens. 
> 
> We can monitor the status of RMs by using the command-line (you did previously) or from webUI/webService (rm_address:portnumber/cluster/cluster). We can get the current status from there.
> 
> Thanks
> 
> Xuan Gong
> 
> 
> On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:
> Hi,
> 
> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is my yarn-site.xml.
> 
> At the moment, the ResourceManager HA works if:
> 
> 1) at rm1, run ./sbin/start-yarn.sh
> 
> yarn rmadmin -getServiceState rm1
> active
> 
> yarn rmadmin -getServiceState rm2
> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/192.168.1.1:23142. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> 
> 
> 2) at rm2, run ./sbin/start-yarn.sh
> 
> yarn rmadmin -getServiceState rm1
> standby
> 
> 
> Some questions:
> Q1)  I need start yarn in EACH master separately, is this normal? Is there a way that I just run ./sbin/start-yarn.sh in rm1 and get the STANDBY ResourceManager in rm2 started as well?
> 
> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is down in an auto-failover env? or how do you monitor the status of ACTIVE/STANDBY ResourceManager?   
> 
> 
> Regards
> Arthur
> 
> 
> <?xml version="1.0"?>
> <configuration>
> 
> <!-- Site specific YARN configuration properties -->
> 
>    <property>
>       <name>yarn.nodemanager.aux-services</name>
>       <value>mapreduce_shuffle</value>
>    </property>
> 
>    <property>
>       <name>yarn.resourcemanager.address</name>
>       <value>192.168.1.1:8032</value>
>    </property>
> 
>    <property>
>        <name>yarn.resourcemanager.resource-tracker.address</name>
>        <value>192.168.1.1:8031</value>
>    </property>
> 
>    <property>
>        <name>yarn.resourcemanager.admin.address</name>
>        <value>192.168.1.1:8033</value>
>    </property>
> 
>    <property>
>        <name>yarn.resourcemanager.scheduler.address</name>
>        <value>192.168.1.1:8030</value>
>    </property>
> 
>    <property>
>       <name>yarn.nodemanager.loacl-dirs</name>
>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>        <final>true</final>
>    </property>
> 
>    <property>
>        <name>yarn.web-proxy.address</name>
>        <value>192.168.1.1:8888</value>
>    </property>
> 
>    <property>
>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>    </property>
> 
> 
> 
> 
>    <property>
>       <name>yarn.nodemanager.resource.memory-mb</name>
>       <value>18432</value>
>    </property>
> 
>    <property>
>       <name>yarn.scheduler.minimum-allocation-mb</name>
>       <value>9216</value>
>    </property>
> 
>    <property>
>       <name>yarn.scheduler.maximum-allocation-mb</name>
>       <value>18432</value>
>    </property>
> 
> 
> 
>   <property>
>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>     <value>2000</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.cluster-id</name>
>     <value>cluster_rm</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.rm-ids</name>
>     <value>rm1,rm2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm1</name>
>     <value>192.168.1.1</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm2</name>
>     <value>192.168.1.2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.class</name>
>     <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.recovery.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.store.class</name>
>     <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>   </property>
>   <property>
>       <name>yarn.resourcemanager.zk-address</name>
>       <value>rm1:2181,m135:2181,m137:2181</value>
>   </property>
>   <property>
>     <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>     <value>5000</value>
>   </property>
> 
>   <!-- RM1 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm1</name>
>     <value>192.168.1.1:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>     <value>192.168.1.1:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>     <value>192.168.1.1:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>     <value>192.168.1.1:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>     <value>192.168.1.1:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm1</name>
>     <value>192.168.1.1:23142</value>
>   </property>
> 
> 
>   <!-- RM2 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm2</name>
>     <value>192.168.1.2:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>     <value>192.168.1.2:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>     <value>192.168.1.2:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>     <value>192.168.1.2:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>     <value>192.168.1.2:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm2</name>
>     <value>192.168.1.2:23142</value>
>   </property>
> 
>   <property>
>     <name>yarn.nodemanager.remote-app-log-dir</name>
>     <value>/edh/hadoop_logs/hadoop/</value>
>   </property>
> 
> </configuration>
> 
> 
> 
> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
> 
>> Hey, Arthur:
>> 
>>     Did you use single node cluster or multiple nodes cluster? Could you share your configuration file (yarn-site.xml) ? This looks like a configuration issue. 
>> 
>> Thanks
>> 
>> Xuan Gong
>> 
>> 
>> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:
>> Hi,
>> 
>> If I have TWO nodes for ResourceManager HA, what should be the correct steps and commands to start and stop ResourceManager in a ResourceManager HA cluster ?
>> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it seems that  ./sbin/start-yarn.sh can only start YARN in a node at a time.
>> 
>> Regards
>> Arthur
>> 
>> 
> 
> 
> 
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.


Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Some questions:
Q1)  I need start yarn in EACH master separately, is this normal? Is there
a way that I just run ./sbin/start-yarn.sh in rm1 and get the
STANDBY ResourceManager in rm2 started as well?

No, you need to start each ResourceManager separately (one per master node).
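
For example, one way to do this — assuming a standard sbin layout and that HADOOP_CONF_DIR points at the HA-enabled yarn-site.xml on both hosts — is to drive each ResourceManager with yarn-daemon.sh:

# on rm1
$HADOOP_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager

# on rm2
$HADOOP_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager

# stopping works the same way, per host
$HADOOP_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager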

Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is down
in an auto-failover env? or how do you monitor the status of
ACTIVE/STANDBY ResourceManager?

Interesting question. One of the design goals of auto-failover is that RM
downtime is invisible to end users: they can keep submitting applications
normally even while a failover happens.

We can monitor the status of the RMs from the command line (as you did
previously) or from the web UI / web service
(rm_address:portnumber/cluster/cluster), which shows the current HA state.
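
As a rough sketch of turning that command-line check into a simple alert (the rm-ids match this thread, but the mail command and address are assumptions):

#!/bin/bash
# Naive RM HA monitor: alert if neither ResourceManager reports "active".
ACTIVE=""
for id in rm1 rm2; do
  state=$(yarn rmadmin -getServiceState "$id" 2>/dev/null)
  if [ "$state" = "active" ]; then
    ACTIVE="$id"
  fi
done
if [ -z "$ACTIVE" ]; then
  echo "No ACTIVE ResourceManager found at $(date)" | mail -s "RM HA alert" admin@example.com
fi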

Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is my
> yarn-site.xml.
>
> At the moment, the ResourceManager HA works if:
>
> 1) at rm1, run ./sbin/start-yarn.sh
>
> yarn rmadmin -getServiceState rm1
> active
>
> yarn rmadmin -getServiceState rm2
> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/
> 192.168.1.1:23142. Already tried 0 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000
> MILLISECONDS)
> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on
> connection exception: java.net.ConnectException: Connection refused; For
> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>
>
> 2) at rm2, run ./sbin/start-yarn.sh
>
> yarn rmadmin -getServiceState rm1
> standby
>
>
> Some questions:
> Q1)  I need start yarn in EACH master separately, is this normal? Is there
> a way that I just run ./sbin/start-yarn.sh in rm1 and get the
> STANDBY ResourceManager in rm2 started as well?
>
> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
> down in an auto-failover env? or how do you monitor the status of
> ACTIVE/STANDBY ResourceManager?
>
>
> Regards
> Arthur
>
>
> <?xml version="1.0"?>
> <configuration>
>
> <!-- Site specific YARN configuration properties -->
>
>    <property>
>       <name>yarn.nodemanager.aux-services</name>
>       <value>mapreduce_shuffle</value>
>    </property>
>
>    <property>
>       <name>yarn.resourcemanager.address</name>
>       <value>192.168.1.1:8032</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.resource-tracker.address</name>
>        <value>192.168.1.1:8031</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.admin.address</name>
>        <value>192.168.1.1:8033</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.scheduler.address</name>
>        <value>192.168.1.1:8030</value>
>    </property>
>
>    <property>
>       <name>yarn.nodemanager.loacl-dirs</name>
>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>        <final>true</final>
>    </property>
>
>    <property>
>        <name>yarn.web-proxy.address</name>
>        <value>192.168.1.1:8888</value>
>    </property>
>
>    <property>
>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>    </property>
>
>
>
>
>    <property>
>       <name>yarn.nodemanager.resource.memory-mb</name>
>       <value>18432</value>
>    </property>
>
>    <property>
>       <name>yarn.scheduler.minimum-allocation-mb</name>
>       <value>9216</value>
>    </property>
>
>    <property>
>       <name>yarn.scheduler.maximum-allocation-mb</name>
>       <value>18432</value>
>    </property>
>
>
>
>   <property>
>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>     <value>2000</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.cluster-id</name>
>     <value>cluster_rm</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.rm-ids</name>
>     <value>rm1,rm2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm1</name>
>     <value>192.168.1.1</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm2</name>
>     <value>192.168.1.2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.class</name>
>
> <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.recovery.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.store.class</name>
>
> <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>   </property>
>   <property>
>       <name>yarn.resourcemanager.zk-address</name>
>       <value>rm1:2181,m135:2181,m137:2181</value>
>   </property>
>   <property>
>
> <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>     <value>5000</value>
>   </property>
>
>   <!-- RM1 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm1</name>
>     <value>192.168.1.1:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>     <value>192.168.1.1:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>     <value>192.168.1.1:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>     <value>192.168.1.1:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>     <value>192.168.1.1:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm1</name>
>     <value>192.168.1.1:23142</value>
>   </property>
>
>
>   <!-- RM2 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm2</name>
>     <value>192.168.1.2:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>     <value>192.168.1.2:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>     <value>192.168.1.2:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>     <value>192.168.1.2:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>     <value>192.168.1.2:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm2</name>
>     <value>192.168.1.2:23142</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.remote-app-log-dir</name>
>     <value>/edh/hadoop_logs/hadoop/</value>
>   </property>
>
> </configuration>
>
>
>
> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
>
> Hey, Arthur:
>
>     Did you use single node cluster or multiple nodes cluster? Could you
> share your configuration file (yarn-site.xml) ? This looks like a
> configuration issue.
>
> Thanks
>
> Xuan Gong
>
>
> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
>> Hi,
>>
>> If I have TWO nodes for ResourceManager HA, what should be the correct
>> steps and commands to start and stop ResourceManager in a ResourceManager
>> HA cluster ?
>> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it seems
>> that  ./sbin/start-yarn.sh can only start YARN in a node at a time.
>>
>> Regards
>> Arthur
>>
>>
>>
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Some questions:
Q1)  I need to start YARN on EACH master separately, is this normal? Is there
a way that I can just run ./sbin/start-yarn.sh on rm1 and get the
STANDBY ResourceManager on rm2 started as well?

No, there is no such shortcut; you need to start each RM separately.
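
For reference, the per-host pattern on a 2.4.x install usually looks like the
sketch below (assuming the stock tarball layout under $HADOOP_HOME; adjust
paths to your environment):

  # on rm1
  $HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager

  # on rm2
  $HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager

  # NodeManagers (and the web proxy, if configured) can still be launched
  # from one host with start-yarn.sh; only the extra RM needs the per-host step

  # stopping works the same way, once on each RM host
  $HADOOP_HOME/sbin/yarn-daemon.sh stop resourcemanager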

Q2) How can I get alerts (e.g. by email) if the ACTIVE ResourceManager goes
down in an auto-failover env? And how do you monitor the status of the
ACTIVE/STANDBY ResourceManagers?

Interesting question. One of the design goals of auto-failover is that RM
downtime stays invisible to end users: they can keep submitting
applications normally even while a failover happens.

You can monitor the status of the RMs from the command line (as you did
previously) or from the web UI / web services
(rm_address:portnumber/cluster/cluster); either one reports the current
HA state.
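
As a concrete illustration of both approaches (the hostnames, webapp port
23188 and mail command below are taken from, or assumed for, this thread's
setup, not a verified recipe):

  # command line, one call per RM id
  yarn rmadmin -getServiceState rm1
  yarn rmadmin -getServiceState rm2

  # web services: on recent 2.x releases /ws/v1/cluster/info reports an
  # haState field (worth confirming against your 2.4.1 build)
  curl -s http://192.168.1.1:23188/ws/v1/cluster/info
  curl -s http://192.168.1.2:23188/ws/v1/cluster/info

  # a crude cron-able answer to Q2: mail out if neither RM reports "active"
  # (recipient address is a placeholder)
  active=0
  for id in rm1 rm2; do
    state=$(yarn rmadmin -getServiceState "$id" 2>/dev/null)
    [ "$state" = "active" ] && active=1
  done
  if [ "$active" -eq 0 ]; then
    echo "No ACTIVE ResourceManager found" | mail -s "RM HA alert" admin@example.com
  fi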

Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is my
> yarn-site.xml.
>
> At the moment, the ResourceManager HA works if:
>
> 1) at rm1, run ./sbin/start-yarn.sh
>
> yarn rmadmin -getServiceState rm1
> active
>
> yarn rmadmin -getServiceState rm2
> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/
> 192.168.1.1:23142. Already tried 0 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000
> MILLISECONDS)
> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on
> connection exception: java.net.ConnectException: Connection refused; For
> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>
>
> 2) at rm2, run ./sbin/start-yarn.sh
>
> yarn rmadmin -getServiceState rm1
> standby
>
>
> Some questions:
> Q1)  I need start yarn in EACH master separately, is this normal? Is there
> a way that I just run ./sbin/start-yarn.sh in rm1 and get the
> STANDBY ResourceManager in rm2 started as well?
>
> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
> down in an auto-failover env? or how do you monitor the status of
> ACTIVE/STANDBY ResourceManager?
>
>
> Regards
> Arthur
>
>
> <?xml version="1.0"?>
> <configuration>
>
> <!-- Site specific YARN configuration properties -->
>
>    <property>
>       <name>yarn.nodemanager.aux-services</name>
>       <value>mapreduce_shuffle</value>
>    </property>
>
>    <property>
>       <name>yarn.resourcemanager.address</name>
>       <value>192.168.1.1:8032</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.resource-tracker.address</name>
>        <value>192.168.1.1:8031</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.admin.address</name>
>        <value>192.168.1.1:8033</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.scheduler.address</name>
>        <value>192.168.1.1:8030</value>
>    </property>
>
>    <property>
>       <name>yarn.nodemanager.loacl-dirs</name>
>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>        <final>true</final>
>    </property>
>
>    <property>
>        <name>yarn.web-proxy.address</name>
>        <value>192.168.1.1:8888</value>
>    </property>
>
>    <property>
>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>    </property>
>
>
>
>
>    <property>
>       <name>yarn.nodemanager.resource.memory-mb</name>
>       <value>18432</value>
>    </property>
>
>    <property>
>       <name>yarn.scheduler.minimum-allocation-mb</name>
>       <value>9216</value>
>    </property>
>
>    <property>
>       <name>yarn.scheduler.maximum-allocation-mb</name>
>       <value>18432</value>
>    </property>
>
>
>
>   <property>
>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>     <value>2000</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.cluster-id</name>
>     <value>cluster_rm</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.rm-ids</name>
>     <value>rm1,rm2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm1</name>
>     <value>192.168.1.1</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm2</name>
>     <value>192.168.1.2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.class</name>
>
> <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.recovery.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.store.class</name>
>
> <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>   </property>
>   <property>
>       <name>yarn.resourcemanager.zk-address</name>
>       <value>rm1:2181,m135:2181,m137:2181</value>
>   </property>
>   <property>
>
> <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>     <value>5000</value>
>   </property>
>
>   <!-- RM1 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm1</name>
>     <value>192.168.1.1:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>     <value>192.168.1.1:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>     <value>192.168.1.1:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>     <value>192.168.1.1:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>     <value>192.168.1.1:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm1</name>
>     <value>192.168.1.1:23142</value>
>   </property>
>
>
>   <!-- RM2 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm2</name>
>     <value>192.168.1.2:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>     <value>192.168.1.2:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>     <value>192.168.1.2:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>     <value>192.168.1.2:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>     <value>192.168.1.2:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm2</name>
>     <value>192.168.1.2:23142</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.remote-app-log-dir</name>
>     <value>/edh/hadoop_logs/hadoop/</value>
>   </property>
>
> </configuration>
>
>
>
> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
>
> Hey, Arthur:
>
>     Did you use single node cluster or multiple nodes cluster? Could you
> share your configuration file (yarn-site.xml) ? This looks like a
> configuration issue.
>
> Thanks
>
> Xuan Gong
>
>
> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
>> Hi,
>>
>> If I have TWO nodes for ResourceManager HA, what should be the correct
>> steps and commands to start and stop ResourceManager in a ResourceManager
>> HA cluster ?
>> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it seems
>> that  ./sbin/start-yarn.sh can only start YARN in a node at a time.
>>
>> Regards
>> Arthur
>>
>>
>>
>

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

It is a multiple-node cluster with two master nodes (rm1 and rm2); my yarn-site.xml is below.

At the moment, the ResourceManager HA works if:

1) at rm1, run ./sbin/start-yarn.sh

yarn rmadmin -getServiceState rm1
active

yarn rmadmin -getServiceState rm2
14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/192.168.1.1:23142. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused


2) at rm2, run ./sbin/start-yarn.sh

yarn rmadmin -getServiceState rm1
standby


Some questions:
Q1) I need to start YARN on EACH master separately, is this normal? Is there a way that I can just run ./sbin/start-yarn.sh on rm1 and get the STANDBY ResourceManager on rm2 started as well?

Q2) How can I get alerts (e.g. by email) if the ACTIVE ResourceManager goes down in an auto-failover env? And how do you monitor the status of the ACTIVE/STANDBY ResourceManagers?


Regards
Arthur


<?xml version="1.0"?>
<configuration>

<!-- Site specific YARN configuration properties -->

   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>

   <property>
      <name>yarn.resourcemanager.address</name>
      <value>192.168.1.1:8032</value>
   </property>

   <property>
       <name>yarn.resourcemanager.resource-tracker.address</name>
       <value>192.168.1.1:8031</value>
   </property>

   <property>
       <name>yarn.resourcemanager.admin.address</name>
       <value>192.168.1.1:8033</value>
   </property>

   <property>
       <name>yarn.resourcemanager.scheduler.address</name>
       <value>192.168.1.1:8030</value>
   </property>

   <property>
      <name>yarn.nodemanager.loacl-dirs</name>
       <value>/edh/hadoop_data/mapred/nodemanager</value>
       <final>true</final>
   </property>

   <property>
       <name>yarn.web-proxy.address</name>
       <value>192.168.1.1:8888</value>
   </property>

   <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>




   <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>18432</value>
   </property>

   <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>9216</value>
   </property>

   <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>18432</value>
   </property>



  <property>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>2000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster_rm</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>192.168.1.1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>192.168.1.2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
      <name>yarn.resourcemanager.zk-address</name>
      <value>rm1:2181,m135:2181,m137:2181</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
    <value>5000</value>
  </property>

  <!-- RM1 configs -->
  <property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>192.168.1.1:23140</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm1</name>
    <value>192.168.1.1:23130</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm1</name>
    <value>192.168.1.1:23189</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>192.168.1.1:23188</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
    <value>192.168.1.1:23125</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm1</name>
    <value>192.168.1.1:23142</value>
  </property>


  <!-- RM2 configs -->
  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>192.168.1.2:23140</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm2</name>
    <value>192.168.1.2:23130</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm2</name>
    <value>192.168.1.2:23189</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>192.168.1.2:23188</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
    <value>192.168.1.2:23125</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm2</name>
    <value>192.168.1.2:23142</value>
  </property>

  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/edh/hadoop_logs/hadoop/</value>
  </property>

</configuration>
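
A quick way to sanity-check an embedded-elector setup like the one above is to
ask each RM for its state and to look at the leader-election znode in
ZooKeeper. The znode path below follows the usual
/yarn-leader-election/<cluster-id> convention with the cluster_rm id from this
config; treat it as a sketch to adapt, not a confirmed path:

  yarn rmadmin -getServiceState rm1
  yarn rmadmin -getServiceState rm2

  # from any host that can reach the ZooKeeper ensemble (rm1, m135, m137 here)
  zkCli.sh -server rm1:2181 ls /yarn-leader-election/cluster_rm
  zkCli.sh -server rm1:2181 get /yarn-leader-election/cluster_rm/ActiveStandbyElectorLock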



On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:

> Hey, Arthur:
> 
>     Did you use single node cluster or multiple nodes cluster? Could you share your configuration file (yarn-site.xml) ? This looks like a configuration issue. 
> 
> Thanks
> 
> Xuan Gong
> 
> 
> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:
> Hi,
> 
> If I have TWO nodes for ResourceManager HA, what should be the correct steps and commands to start and stop ResourceManager in a ResourceManager HA cluster ?
> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it seems that  ./sbin/start-yarn.sh can only start YARN in a node at a time.
> 
> Regards
> Arthur
> 
> 


Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Hey, Arthur:

    Did you use a single-node cluster or a multi-node cluster? Could you
share your configuration file (yarn-site.xml)? This looks like a
configuration issue.

Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> If I have TWO nodes for ResourceManager HA, what should be the correct
> steps and commands to start and stop ResourceManager in a ResourceManager
> HA cluster ?
> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it seems
> that  ./sbin/start-yarn.sh can only start YARN in a node at a time.
>
> Regards
> Arthur
>
>
> On 11 Aug, 2014, at 11:04 pm, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
> Hi
>
> I am running Hadoop 2.4.1 with YARN HA enabled (two name nodes, NM1 and
> NM2). When verifying ResourceManager failover, I use “kill -9” to terminate
> the ResourceManager in name node 1 (NM1), if I run the the test job, it
> seems that the failover of ResourceManager keeps trying NM1 and NM2
> non-stop.
>
> Does anyone have the idea what would be wrong about this?  Thanks
>
> Regards
> Arthur
>
>
>
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar
> pi  5 1010000000
> Number of Maps  = 5
> Samples per Map = 1010000000
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Starting Job
> 14/08/11 22:35:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:24 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:28 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:30 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:32 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:34 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:37 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:39 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:40 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> ….
>
>
>
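
For the endless "Failing over to nm1 / nm2" loop in the quoted log, it can
also help to bound the client-side failover retries while debugging, so a
test job fails fast instead of cycling. The property names below should exist
in 2.4.x YarnConfiguration, but verify them against your build; the values
are purely illustrative:

  # -D options are picked up by ToolRunner before the example's own arguments
  bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar pi \
    -D yarn.client.failover-max-attempts=5 \
    -D yarn.client.failover-sleep-base-ms=1000 \
    5 1010000000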



Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Hey, Arthur:

    Did you use single node cluster or multiple nodes cluster? Could you
share your configuration file (yarn-site.xml) ? This looks like a
configuration issue.

Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> If I have TWO nodes for ResourceManager HA, what should be the correct
> steps and commands to start and stop ResourceManager in a ResourceManager
> HA cluster ?
> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it seems
> that  ./sbin/start-yarn.sh can only start YARN in a node at a time.
>
> Regards
> Arthur
>
>
> On 11 Aug, 2014, at 11:04 pm, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
> Hi
>
> I am running Hadoop 2.4.1 with YARN HA enabled (two name nodes, NM1 and
> NM2). When verifying ResourceManager failover, I use “kill -9” to terminate
> the ResourceManager in name node 1 (NM1), if I run the the test job, it
> seems that the failover of ResourceManager keeps trying NM1 and NM2
> non-stop.
>
> Does anyone have the idea what would be wrong about this?  Thanks
>
> Regards
> Arthur
>
>
>
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar
> pi  5 1010000000
> Number of Maps  = 5
> Samples per Map = 1010000000
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Starting Job
> 14/08/11 22:35:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:24 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:28 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:30 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:32 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:34 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:37 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:39 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:40 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> ….
>
>
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Hey, Arthur:

    Did you use single node cluster or multiple nodes cluster? Could you
share your configuration file (yarn-site.xml) ? This looks like a
configuration issue.

Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> If I have TWO nodes for ResourceManager HA, what should be the correct
> steps and commands to start and stop ResourceManager in a ResourceManager
> HA cluster ?
> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it seems
> that  ./sbin/start-yarn.sh can only start YARN in a node at a time.
>
> Regards
> Arthur
>
>
> On 11 Aug, 2014, at 11:04 pm, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
> Hi
>
> I am running Hadoop 2.4.1 with YARN HA enabled (two name nodes, NM1 and
> NM2). When verifying ResourceManager failover, I use “kill -9” to terminate
> the ResourceManager in name node 1 (NM1), if I run the the test job, it
> seems that the failover of ResourceManager keeps trying NM1 and NM2
> non-stop.
>
> Does anyone have the idea what would be wrong about this?  Thanks
>
> Regards
> Arthur
>
>
>
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar
> pi  5 1010000000
> Number of Maps  = 5
> Samples per Map = 1010000000
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Starting Job
> 14/08/11 22:35:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:24 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:28 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:30 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:32 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:34 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:37 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:39 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:40 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> ….
>
>
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

If I have TWO nodes for ResourceManager HA, what are the correct steps and commands to start and stop the ResourceManager in a ResourceManager HA cluster?
Unlike ./sbin/start-dfs.sh (which can start all NameNodes from one NN), it seems that ./sbin/start-yarn.sh can only start YARN on one node at a time.

Regards
Arthur


On 11 Aug, 2014, at 11:04 pm, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:

> Hi 
> 
> I am running Hadoop 2.4.1 with YARN HA enabled (two name nodes, NM1 and NM2). When verifying ResourceManager failover, I use “kill -9” to terminate the ResourceManager on name node 1 (NM1). If I then run the test job, the ResourceManager failover seems to keep switching between NM1 and NM2 non-stop.
> 
> Does anyone have an idea what could be wrong here?  Thanks
> 
> Regards
> Arthur
> 
> 
> 
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar pi  5 1010000000
> Number of Maps  = 5
> Samples per Map = 1010000000
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Starting Job
> 14/08/11 22:35:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
> 14/08/11 22:35:24 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
> 14/08/11 22:35:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
> 14/08/11 22:35:28 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
> 14/08/11 22:35:30 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
> 14/08/11 22:35:32 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
> 14/08/11 22:35:34 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
> 14/08/11 22:35:37 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
> 14/08/11 22:35:39 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
> 14/08/11 22:35:40 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
> ….



Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi 

I am running Hadoop 2.4.1 with YARN HA enabled (two name nodes, NM1 and NM2). When verifying ResourceManager failover, I use “kill -9” to terminate the ResourceManager on name node 1 (NM1). If I then run the test job, the ResourceManager failover seems to keep switching between NM1 and NM2 non-stop.

Does anyone have an idea what could be wrong here?  Thanks

Regards
Arthur



bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar pi  5 1010000000
Number of Maps  = 5
Samples per Map = 1010000000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
14/08/11 22:35:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
14/08/11 22:35:24 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
14/08/11 22:35:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
14/08/11 22:35:28 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
14/08/11 22:35:30 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
14/08/11 22:35:32 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
14/08/11 22:35:34 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
14/08/11 22:35:37 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
14/08/11 22:35:39 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
14/08/11 22:35:40 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
….

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager and JobHistoryServer do not auto-failover to Standby Node

Posted by Akira AJISAKA <aj...@oss.nttdata.co.jp>.
You need additional settings to make the ResourceManager fail over automatically.

http://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html

The JobHistoryServer does not have an automatic failover feature.

Regards,
Akira

(2014/08/05 20:15), Arthur.hk.chan@gmail.com wrote:
> Hi
>
> I have set up the Hadoop 2.4.1 with HDFS High Availability using the
> Quorum Journal Manager.
>
> I am verifying Automatic Failover: I manually used the “kill -9” command to
> stop all running Hadoop services on the active node (NN-1). I can see
> that the Standby node (NN-2) now becomes ACTIVE, which is good;
> however, the “ResourceManager” service cannot be found on NN-2. Please
> advise how to make the ResourceManager and JobHistoryServer auto-failover,
> or am I missing some important setup, perhaps some settings in
> hdfs-site.xml or core-site.xml?
>
> Please help!
>
> Regards
> Arthur
>
>
>
>
> BEFORE TESTING:
> NN-1:
> jps
> 9564 NameNode
> 10176 JobHistoryServer
> 21215 Jps
> 17636 QuorumPeerMain
> 20838 NodeManager
> 9678 DataNode
> 9933 JournalNode
> 10085 DFSZKFailoverController
> 20724 ResourceManager
>
> NN-2 (Standby Name node)
> jps
> 14064 Jps
> 32046 NameNode
> 13765 NodeManager
> 32126 DataNode
> 32271 DFSZKFailoverController
>
>
>
> AFTER
> NN-1
> jps
> 17636 QuorumPeerMain
> 21508 Jps
>
> NN-2
> jps
> 32046 NameNode
> 13765 NodeManager
> 32126 DataNode
> 32271 DFSZKFailoverController
> 14165 Jps
>
>
>
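
The "additional settings" from the ResourceManagerHA page above boil down to a handful of yarn-site.xml properties plus a restart of both ResourceManagers. A minimal sketch, assuming rm-ids rm1/rm2 mapped onto the two hosts in this thread (all values below are placeholders, not taken from the original posts):

# yarn-site.xml properties from the 2.4.1 ResourceManagerHA documentation (placeholder values):
#   yarn.resourcemanager.ha.enabled      = true
#   yarn.resourcemanager.cluster-id      = yarn-ha-cluster
#   yarn.resourcemanager.ha.rm-ids       = rm1,rm2
#   yarn.resourcemanager.hostname.rm1    = nm1
#   yarn.resourcemanager.hostname.rm2    = nm2
#   yarn.resourcemanager.zk-address      = zk1:2181,zk2:2181,zk3:2181
# after restarting both ResourceManagers, check which one is active:
bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2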


Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager and JobHistoryServer do not auto-failover to Standby Node

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi 

I have set up Hadoop 2.4.1 with HDFS High Availability using the Quorum Journal Manager.

I am verifying Automatic Failover: I manually used the “kill -9” command to stop all running Hadoop services on the active node (NN-1). I can see that the Standby node (NN-2) now becomes ACTIVE, which is good; however, the “ResourceManager” service cannot be found on NN-2. Please advise how to make the ResourceManager and JobHistoryServer auto-failover, or am I missing some important setup, perhaps some settings in hdfs-site.xml or core-site.xml?

Please help!

Regards
Arthur




BEFORE TESTING:
NN-1:
jps
9564 NameNode
10176 JobHistoryServer
21215 Jps
17636 QuorumPeerMain
20838 NodeManager
9678 DataNode
9933 JournalNode
10085 DFSZKFailoverController
20724 ResourceManager

NN-2 (Standby Name node)
jps
14064 Jps
32046 NameNode
13765 NodeManager
32126 DataNode
32271 DFSZKFailoverController



AFTER
NN-1
jps
17636 QuorumPeerMain
21508 Jps

NN-2
jps
32046 NameNode
13765 NodeManager
32126 DataNode
32271 DFSZKFailoverController
14165 Jps




RE: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Posted by Brahma Reddy Battula <br...@huawei.com>.
ZKFC LOG:

By default, it will be under HADOOP_HOME/logs/hadoop_******zkfc.log

The same can be confirmed by using the following commands (to get the log location):

jinfo 7370 | grep -i hadoop.log.dir

ps -eaf | grep -i DFSZKFailoverController | grep -i hadoop.log.dir

WEB Console:

The default port for the NameNode web console is 50070; you can check the value of "dfs.namenode.http-address" in hdfs-site.xml.

The default values can be checked at the following link:

http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml





Thanks & Regards

Brahma Reddy Battula





________________________________
From: Arthur.hk.chan@gmail.com [arthur.hk.chan@gmail.com]
Sent: Monday, August 04, 2014 6:07 PM
To: user@hadoop.apache.org
Cc: Arthur.hk.chan@gmail.com
Subject: Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Hi,

Thanks for your reply.
It was about the Standby Namenode not being promoted to Active.
Can you please advise what the path of the ZKFC logs is?

"Similar to Namenode status web page, a Cluster Web Console is added in federation to monitor the federated cluster at http://<any_nn_host:port>/dfsclusterhealth.jsp. Any Namenode in the cluster can be used to access this web page”
What is the default port for the cluster console? I tried 8088 but no luck.

Please advise.

Regards
Arthur




On 4 Aug, 2014, at 7:22 pm, Brahma Reddy Battula <br...@huawei.com>> wrote:

HI,


Do you mean the Active Namenode which was killed does not transition to STANDBY?

>>> A killed Namenode will not come back as standby on its own; you need to start it again manually.

      Automatic failover means that when the Active goes down, the Standby node transitions to Active automatically; it does not restart the killed process and bring it back as Active (or Standby).

Please refer to the following doc for the same (Section: Verifying automatic failover):

http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html

OR

Do you mean the Standby Namenode does not transition to ACTIVE?

>>>> Please check the ZKFC logs; judging from the logs you pasted, this is mostly not what is happening.


Thanks & Regards



Brahma Reddy Battula



________________________________
From: Arthur.hk.chan@gmail.com [arthur.hk.chan@gmail.com]
Sent: Monday, August 04, 2014 4:38 PM
To: user@hadoop.apache.org
Cc: Arthur.hk.chan@gmail.com
Subject: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Hi,

I have set up a Hadoop 2.4.1 HA cluster using the Quorum Journal Manager and am verifying automatic failover: after killing the namenode process on the Active node, the name node did not fail over to the standby node.

Please advise
Regards
Arthur


2014-08-04 18:54:40,453 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
java.net.ConnectException: Call From standbynode  to  activenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:139)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:271)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:313)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
... 11 more
2014-08-04 18:55:03,458 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:06,683 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:16,643 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:19,530 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54610 Call#17 Retry#5: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:20,756 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
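
Putting the pointers above together, a quick way to check each piece after the failover test (a sketch only; the service ids nn1/nn2 and the host name nn-1 are assumptions and must match dfs.ha.namenodes.* and the actual NameNode host in hdfs-site.xml):

# HA state of each NameNode (service ids nn1/nn2 are assumed):
bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2
# ZKFC log under the default hadoop.log.dir:
ls $HADOOP_HOME/logs/*zkfc*
# NameNode web console on the default dfs.namenode.http-address port (nn-1 is a placeholder hostname):
curl -s http://nn-1:50070/ | head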




Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

Thanks for your reply.
It was about the standby NameNode not being promoted to active.
Can you please advise the path of the ZKFC logs?

"Similar to Namenode status web page, a Cluster Web Console is added in federation to monitor the federated cluster at http://<any_nn_host:port>/dfsclusterhealth.jsp. Any Namenode in the cluster can be used to access this web page"
What is the default port for the cluster console? I tried 8088 but no luck.

Please advise.

Regards
Arthur




On 4 Aug, 2014, at 7:22 pm, Brahma Reddy Battula <br...@huawei.com> wrote:

> HI,
> 
> 
> DO you mean Active Namenode which is killed is not transition to STANDBY..?
> 
> >>> Here Namenode will not start as standby if you kill..Again you need to start manually.
>         
>       Automatic failover means when over Active goes down Standy Node will transition to Active automatically..it's not like starting killed process and making the Active(which is standby.)
> 
> Please refer the following doc for same ..( Section : Verifying automatic failover)
> 
> http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html
> 
> OR
> 
>  DO you mean Standby Namenode is not transition to ACTIVE..?
> 
> >>>> Please check ZKFC logs,, Mostly this might not happen from the logs you pasted
> 
> 
> Thanks & Regards
>  
> Brahma Reddy Battula
>  
> 
> From: Arthur.hk.chan@gmail.com [arthur.hk.chan@gmail.com]
> Sent: Monday, August 04, 2014 4:38 PM
> To: user@hadoop.apache.org
> Cc: Arthur.hk.chan@gmail.com
> Subject: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN
> 
> Hi,
> 
> I have setup Hadoop 2.4.1 HA Cluster using Quorum Journal, I am verifying automatic failover, after killing the process of namenode from Active one, the name node was not failover to standby node, 
> 
> Please advise
> Regards
> Arthur
> 
> 
> 2014-08-04 18:54:40,453 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
> java.net.ConnectException: Call From standbynode  to  activenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
> at org.apache.hadoop.ipc.Client.call(Client.java:1414)
> at org.apache.hadoop.ipc.Client.call(Client.java:1363)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:139)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:271)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:313)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
> at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
> at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
> at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
> at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
> at org.apache.hadoop.ipc.Client.call(Client.java:1381)
> ... 11 more
> 2014-08-04 18:55:03,458 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
> 2014-08-04 18:55:06,683 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
> 2014-08-04 18:55:16,643 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
> 2014-08-04 18:55:19,530 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54610 Call#17 Retry#5: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
> 2014-08-04 18:55:20,756 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby


RE: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Posted by Brahma Reddy Battula <br...@huawei.com>.
Hi,


Do you mean the Active NameNode which was killed did not transition to STANDBY?

>>> Here the NameNode will not start as standby by itself if you kill it; you need to start it again manually.

      Automatic failover means that when the Active goes down, the Standby node transitions to Active automatically. It is not about restarting the killed process and making it Active again; once restarted, it runs as standby.

Please refer to the following doc for the same (section: Verifying automatic failover):

http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html

OR

Do you mean the Standby NameNode did not transition to ACTIVE?

>>>> Please check the ZKFC logs; from the logs you pasted, this mostly should not be the case.
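
A minimal sketch of commands that may help confirm the state of both NameNodes during this test (nn1 and nn2 are placeholder NameNode IDs; use the values from dfs.ha.namenodes.<nameservice> in your hdfs-site.xml):

hdfs haadmin -getServiceState nn2                    # should report "active" shortly after the old active is killed
hdfs haadmin -getServiceState nn1                    # fails with connection refused while the killed NameNode is down
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode    # run on the killed node; it rejoins the cluster as standby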



Thanks & Regards



Brahma Reddy Battula




________________________________
From: Arthur.hk.chan@gmail.com [arthur.hk.chan@gmail.com]
Sent: Monday, August 04, 2014 4:38 PM
To: user@hadoop.apache.org
Cc: Arthur.hk.chan@gmail.com
Subject: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Hi,

I have set up a Hadoop 2.4.1 HA cluster using the Quorum Journal Manager and am verifying automatic failover. After killing the NameNode process on the active node, the NameNode did not fail over to the standby node.

Please advise
Regards
Arthur


2014-08-04 18:54:40,453 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
java.net.ConnectException: Call From standbynode  to  activenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:139)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:271)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:313)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
... 11 more
2014-08-04 18:55:03,458 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:06,683 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:16,643 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:19,530 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54610 Call#17 Retry#5: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:20,756 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby







RE: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Posted by Brahma Reddy Battula <br...@huawei.com>.
HI,


DO you mean Active Namenode which is killed is not transition to STANDBY..?

>>> Here Namenode will not start as standby if you kill..Again you need to start manually.

      Automatic failover means when over Active goes down Standy Node will transition to Active automatically..it's not like starting killed process and making the Active(which is standby.)

Please refer the following doc for same ..( Section : Verifying automatic failover)

http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html

OR

 DO you mean Standby Namenode is not transition to ACTIVE..?

>>>> Please check ZKFC logs,, Mostly this might not happen from the logs you pasted



Thanks & Regards



Brahma Reddy Battula




________________________________
From: Arthur.hk.chan@gmail.com [arthur.hk.chan@gmail.com]
Sent: Monday, August 04, 2014 4:38 PM
To: user@hadoop.apache.org
Cc: Arthur.hk.chan@gmail.com
Subject: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Hi,

I have setup Hadoop 2.4.1 HA Cluster using Quorum Journal, I am verifying automatic failover, after killing the process of namenode from Active one, the name node was not failover to standby node,

Please advise
Regards
Arthur


2014-08-04 18:54:40,453 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
java.net.ConnectException: Call From standbynode  to  activenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:139)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:271)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:313)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
... 11 more
2014-08-04 18:55:03,458 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:06,683 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:16,643 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:19,530 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54610 Call#17 Retry#5: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:20,756 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby







RE: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Posted by Brahma Reddy Battula <br...@huawei.com>.
HI,


DO you mean Active Namenode which is killed is not transition to STANDBY..?

>>> Here Namenode will not start as standby if you kill..Again you need to start manually.

      Automatic failover means when over Active goes down Standy Node will transition to Active automatically..it's not like starting killed process and making the Active(which is standby.)

Please refer the following doc for same ..( Section : Verifying automatic failover)

http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html

OR

 DO you mean Standby Namenode is not transition to ACTIVE..?

>>>> Please check ZKFC logs,, Mostly this might not happen from the logs you pasted



Thanks & Regards



Brahma Reddy Battula




________________________________
From: Arthur.hk.chan@gmail.com [arthur.hk.chan@gmail.com]
Sent: Monday, August 04, 2014 4:38 PM
To: user@hadoop.apache.org
Cc: Arthur.hk.chan@gmail.com
Subject: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Hi,

I have setup Hadoop 2.4.1 HA Cluster using Quorum Journal, I am verifying automatic failover, after killing the process of namenode from Active one, the name node was not failover to standby node,

Please advise
Regards
Arthur


2014-08-04 18:54:40,453 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
java.net.ConnectException: Call From standbynode  to  activenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:139)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:271)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:313)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
... 11 more
2014-08-04 18:55:03,458 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:06,683 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:16,643 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:19,530 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54610 Call#17 Retry#5: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:20,756 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby







RE: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Posted by Brahma Reddy Battula <br...@huawei.com>.
HI,


DO you mean Active Namenode which is killed is not transition to STANDBY..?

>>> Here Namenode will not start as standby if you kill..Again you need to start manually.

      Automatic failover means when over Active goes down Standy Node will transition to Active automatically..it's not like starting killed process and making the Active(which is standby.)

Please refer the following doc for same ..( Section : Verifying automatic failover)

http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html

OR

 DO you mean Standby Namenode is not transition to ACTIVE..?

>>>> Please check ZKFC logs,, Mostly this might not happen from the logs you pasted



Thanks & Regards



Brahma Reddy Battula




________________________________
From: Arthur.hk.chan@gmail.com [arthur.hk.chan@gmail.com]
Sent: Monday, August 04, 2014 4:38 PM
To: user@hadoop.apache.org
Cc: Arthur.hk.chan@gmail.com
Subject: Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Hi,

I have setup Hadoop 2.4.1 HA Cluster using Quorum Journal, I am verifying automatic failover, after killing the process of namenode from Active one, the name node was not failover to standby node,

Please advise
Regards
Arthur


2014-08-04 18:54:40,453 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
java.net.ConnectException: Call From standbynode  to  activenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:139)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:271)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:313)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
... 11 more
2014-08-04 18:55:03,458 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:06,683 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:16,643 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:19,530 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54610 Call#17 Retry#5: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:20,756 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby







Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

I have setup Hadoop 2.4.1 HA Cluster using Quorum Journal, I am verifying automatic failover, after killing the process of namenode from Active one, the name node was not failover to standby node, 

Please advise
Regards
Arthur


2014-08-04 18:54:40,453 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
java.net.ConnectException: Call From standbynode  to  activenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
	at org.apache.hadoop.ipc.Client.call(Client.java:1414)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:139)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:271)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:313)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
	at org.apache.hadoop.ipc.Client.call(Client.java:1381)
	... 11 more
2014-08-04 18:55:03,458 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:06,683 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:16,643 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:19,530 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54610 Call#17 Retry#5: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:20,756 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
 






Hadoop 2.4.1 Verifying Automatic Failover Failed: Unable to trigger a roll of the active NN

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

I have setup Hadoop 2.4.1 HA Cluster using Quorum Journal, I am verifying automatic failover, after killing the process of namenode from Active one, the name node was not failover to standby node, 

Please advise
Regards
Arthur


2014-08-04 18:54:40,453 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
java.net.ConnectException: Call From standbynode  to  activenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
	at org.apache.hadoop.ipc.Client.call(Client.java:1414)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:139)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:271)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:313)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
	at org.apache.hadoop.ipc.Client.call(Client.java:1381)
	... 11 more
2014-08-04 18:55:03,458 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:06,683 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54571 Call#17 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:16,643 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#1: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:19,530 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from activenode:54610 Call#17 Retry#5: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2014-08-04 18:55:20,756 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from activenode:54602 Call#0 Retry#3: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
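
(A minimal checklist for this symptom, assuming automatic failover is meant to be handled by ZKFC over the Quorum Journal Manager; "nn1"/"nn2" and the etc/hadoop paths below are placeholders for whatever your hdfs-site.xml and HADOOP_CONF_DIR actually define:)

# Every NameNode host should also show a DFSZKFailoverController process:
jps

# Ask each NameNode for its current role:
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Confirm the HA settings are in the configs the daemons were started with:
grep -A1 dfs.ha.automatic-failover.enabled etc/hadoop/hdfs-site.xml
grep -A1 dfs.ha.fencing.methods etc/hadoop/hdfs-site.xml
grep -A1 ha.zookeeper.quorum etc/hadoop/core-site.xml

The "Connection refused" from the standby's EditLogTailer is expected once the active NameNode process is gone; the question is why the ZKFC on the standby did not promote it. That usually shows up in the ZKFC log as a fencing failure (for example, sshfence configured without passwordless SSH between the NameNode hosts) or as the ZKFC not running at all.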
 






ResourceManager version and Hadoop version

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

I am running an Apache Hadoop 2.4.1 cluster and have two questions about the web UI at http://test_namenode:8088/cluster/cluster.

1) If I click "Server metrics", it goes to http://test_namenode:8088/metrics, which is blank.
Can anyone please advise whether this is normal, or whether I have not yet set up some monitoring tool (e.g. Nagios) properly?


2) On the page http://test_namenode:8088/cluster/cluster, the version is shown as "2.4.1 from Unknown".
Is there a way to change the word "Unknown" to something more meaningful myself?

ResourceManager version:	2.4.1 from Unknown by hadoop source checksum f74…...
Hadoop version:	2.4.1 from Unknown by hadoop source checksum bb7…...


Many thanks!

Regards
Arthur
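
(A short, hedged note on both points: the /metrics page is the old metrics-v1 servlet and is commonly blank unless contexts are configured in hadoop-metrics.properties, so an empty page does not by itself mean anything is broken; the live values are usually easier to read from the /jmx endpoint on the same port. The "from Unknown" text is recorded at build time and cannot be changed on a running cluster; it typically means the jars were compiled from an unpacked source tarball with no git/svn metadata, so the build could not record a branch name.)

# What a node reports on the command line (the same version/checksum string as the UI):
hadoop version

# Metrics in JSON from the ResourceManager HTTP server ("test_namenode" as in the post above):
curl http://test_namenode:8088/jmx

# Hedged sketch: rebuilding the distribution from a source-control checkout
# (for example the release-2.4.1 tag of the Apache Hadoop repository) lets the
# build record the branch instead of "Unknown":
git checkout release-2.4.1
mvn clean package -Pdist -DskipTests -Dtar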


Re: Compile Hadoop 2.4.1 (with Tests and Without Tests)

Posted by Tsuyoshi OZAWA <oz...@gmail.com>.
Hi,

Unfortunately, sometimes we face unexpected test failures. Please
check whether the problem has been registered or resolved on Hadoop's
JIRAs.

* https://issues.apache.org/jira/browse/HADOOP
* https://issues.apache.org/jira/browse/HDFS
* https://issues.apache.org/jira/browse/MAPREDUCE
* https://issues.apache.org/jira/browse/YARN

If not, please file it as a new issue in the appropriate JIRA.

Thanks,
- Tsuyoshi
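
(For what it is worth, several of the failures quoted below are commonly environmental rather than real regressions: the TestDiskChecker and testListStatusThrowsExceptionForUnreadableDir assertions tend to fail when the build runs as root, because root can still read and list the directories the tests make unreadable, and the TestNetUtils / rack-mapping failures depend on how the build host resolves its own hostname; the 192.168.12.37 in the log is most likely what the host's DNS returned, not anything read from a Hadoop configuration file. A minimal way to narrow a failure down, assuming the standard Maven/Surefire layout of the 2.4.1 source tree:)

# Re-run one suite in isolation inside its module:
cd hadoop-common-project/hadoop-common
mvn test -Dtest=TestDiskChecker

# Or run the full build without stopping at the first failing module:
mvn clean install -fn

# The jars produced by a clean "mvn clean install -DskipTests" build are the same
# artifacts either way; skipping tests changes what is verified, not what is built.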

On Sun, Aug 3, 2014 at 7:32 PM, Arthur.hk.chan@gmail.com
<ar...@gmail.com> wrote:
> Hi,
>
> I am trying to compile Hadoop 2.4.1.
>
> If I run "mvm clean install -DskipTests", the compilation is GOOD,
> However, if I run "mvn clean install”, i.e. didn’t skip the Tests, it
> returned “Failures”
>
> Can anyone please advise what should be prepared before unit tests in
> compilation?  From the error log, e.g. I found it used 192.168.12.37, but
> this was not my local IPs, should I change some configure file? any ideas?
> On the other hand, can I use the compiled code from the GOOD compilation and
> just ignore the failed tests?
>
> Please advise!!
>
> Regards
> Arthur
>
>
>
>
> Compilation results:
> run "mvm clean install -DskipTests", the compilation is GOOD,
> =====
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Main ................................ SUCCESS [1.756s]
> [INFO] Apache Hadoop Project POM ......................... SUCCESS [0.586s]
> [INFO] Apache Hadoop Annotations ......................... SUCCESS [1.282s]
> [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [0.257s]
> [INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.136s]
> [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [1.189s]
> [INFO] Apache Hadoop MiniKDC ............................. SUCCESS [0.837s]
> [INFO] Apache Hadoop Auth ................................ SUCCESS [0.835s]
> [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [0.614s]
> [INFO] Apache Hadoop Common .............................. SUCCESS [9.020s]
> [INFO] Apache Hadoop NFS ................................. SUCCESS [9.341s]
> [INFO] Apache Hadoop Common Project ...................... SUCCESS [0.013s]
> [INFO] Apache Hadoop HDFS ................................ SUCCESS
> [1:11.329s]
> [INFO] Apache Hadoop HttpFS .............................. SUCCESS [1.943s]
> [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [8.236s]
> [INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [0.181s]
> [INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.014s]
> [INFO] hadoop-yarn ....................................... SUCCESS [0.045s]
> [INFO] hadoop-yarn-api ................................... SUCCESS [3.080s]
> [INFO] hadoop-yarn-common ................................ SUCCESS [3.995s]
> [INFO] hadoop-yarn-server ................................ SUCCESS [0.036s]
> [INFO] hadoop-yarn-server-common ......................... SUCCESS [0.406s]
> [INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [7.874s]
> [INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [0.185s]
> [INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [2.766s]
> [INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [0.975s]
> [INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.260s]
> [INFO] hadoop-yarn-client ................................ SUCCESS [0.401s]
> [INFO] hadoop-yarn-applications .......................... SUCCESS [0.012s]
> [INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [0.194s]
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [0.157s]
> [INFO] hadoop-yarn-site .................................. SUCCESS [0.028s]
> [INFO] hadoop-yarn-project ............................... SUCCESS [0.030s]
> [INFO] hadoop-mapreduce-client ........................... SUCCESS [0.027s]
> [INFO] hadoop-mapreduce-client-core ...................... SUCCESS [1.384s]
> [INFO] hadoop-mapreduce-client-common .................... SUCCESS [1.167s]
> [INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [0.151s]
> [INFO] hadoop-mapreduce-client-app ....................... SUCCESS [0.692s]
> [INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [0.521s]
> [INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [9.581s]
> [INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [0.105s]
> [INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [0.288s]
> [INFO] hadoop-mapreduce .................................. SUCCESS [0.031s]
> [INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [2.485s]
> [INFO] Apache Hadoop Distributed Copy .................... SUCCESS [14.204s]
> [INFO] Apache Hadoop Archives ............................ SUCCESS [0.147s]
> [INFO] Apache Hadoop Rumen ............................... SUCCESS [0.283s]
> [INFO] Apache Hadoop Gridmix ............................. SUCCESS [0.266s]
> [INFO] Apache Hadoop Data Join ........................... SUCCESS [0.109s]
> [INFO] Apache Hadoop Extras .............................. SUCCESS [0.173s]
> [INFO] Apache Hadoop Pipes ............................... SUCCESS [0.013s]
> [INFO] Apache Hadoop OpenStack support ................... SUCCESS [0.292s]
> [INFO] Apache Hadoop Client .............................. SUCCESS [0.093s]
> [INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.052s]
> [INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [1.123s]
> [INFO] Apache Hadoop Tools Dist .......................... SUCCESS [0.109s]
> [INFO] Apache Hadoop Tools ............................... SUCCESS [0.012s]
> [INFO] Apache Hadoop Distribution ........................ SUCCESS [0.038s]
> [INFO] ————————————————————————————————————
>
>
>
>
> However, if I run "mvn clean install”, i.e. with Tests, it returned
> “Failures”
> ====
> Running org.apache.hadoop.fs.viewfs.TestChRootedFs
> Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.626 sec -
> in org.apache.hadoop.fs.viewfs.TestChRootedFs
> Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> Tests run: 49, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.099 sec
> <<< FAILURE! - in
> org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem)
> Time elapsed: 0.028 sec  <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:93)
> at
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:289)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>
> Running
> org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
> Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.282 sec -
> in
> org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
> Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
> Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.162 sec -
> in org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
> Running org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.434 sec -
> in org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
> Running org.apache.hadoop.fs.TestFileStatus
> Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec -
> in org.apache.hadoop.fs.TestFileStatus
> Running org.apache.hadoop.fs.TestFileContextResolveAfs
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec -
> in org.apache.hadoop.fs.TestFileContextResolveAfs
> Running org.apache.hadoop.fs.TestGlobPattern
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.034 sec -
> in org.apache.hadoop.fs.TestGlobPattern
> Running org.apache.hadoop.fs.s3native.TestJets3tNativeFileSystemStore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.136 sec -
> in org.apache.hadoop.fs.s3native.TestJets3tNativeFileSystemStore
> Running org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
> Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.498 sec -
> in org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
> Running org.apache.hadoop.fs.TestPath
> Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.418 sec -
> in org.apache.hadoop.fs.TestPath
> Running org.apache.hadoop.fs.TestTrash
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.588 sec -
> in org.apache.hadoop.fs.TestTrash
> Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.37 sec -
> in org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
> Running org.apache.hadoop.fs.TestFileContextDeleteOnExit
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec -
> in org.apache.hadoop.fs.TestFileContextDeleteOnExit
> Running org.apache.hadoop.fs.TestAfsCheckPath
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec -
> in org.apache.hadoop.fs.TestAfsCheckPath
> Running org.apache.hadoop.fs.TestLocalFileSystem
> Tests run: 18, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 0.601 sec
> <<< FAILURE! - in org.apache.hadoop.fs.TestLocalFileSystem
> testReportChecksumFailure(org.apache.hadoop.fs.TestLocalFileSystem)  Time
> elapsed: 0.074 sec  <<< FAILURE!
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertTrue(Assert.java:54)
> at
> org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure(TestLocalFileSystem.java:356)
>
> Running org.apache.hadoop.fs.permission.TestAcl
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.039 sec -
> in org.apache.hadoop.fs.permission.TestAcl
> Running org.apache.hadoop.fs.permission.TestFsPermission
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec -
> in org.apache.hadoop.fs.permission.TestFsPermission
> Running org.apache.hadoop.fs.TestFileSystemCanonicalization
> Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec -
> in org.apache.hadoop.fs.TestFileSystemCanonicalization
> Running org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
> Tests run: 49, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.775 sec
> <<< FAILURE! - in org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem)
> Time elapsed: 0.012 sec  <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:93)
> at
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:289)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>
> Running org.apache.hadoop.fs.TestDFVariations
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec -
> in org.apache.hadoop.fs.TestDFVariations
> Running org.apache.hadoop.fs.TestDelegationTokenRenewer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.323 sec -
> in org.apache.hadoop.fs.TestDelegationTokenRenewer
> Running org.apache.hadoop.fs.TestFileSystemInitialization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec -
> in org.apache.hadoop.fs.TestFileSystemInitialization
> Running org.apache.hadoop.fs.TestGetFileBlockLocations
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.331 sec -
> in org.apache.hadoop.fs.TestGetFileBlockLocations
> Running org.apache.hadoop.fs.TestFileSystemCaching
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.463 sec -
> in org.apache.hadoop.fs.TestFileSystemCaching
> Running org.apache.hadoop.fs.TestChecksumFileSystem
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.431 sec -
> in org.apache.hadoop.fs.TestChecksumFileSystem
> Running org.apache.hadoop.fs.TestLocalFsFCStatistics
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.291 sec -
> in org.apache.hadoop.fs.TestLocalFsFCStatistics
> Running org.apache.hadoop.fs.TestLocalFileSystemPermission
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.285 sec -
> in org.apache.hadoop.fs.TestLocalFileSystemPermission
> Running org.apache.hadoop.fs.TestFcLocalFsPermission
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.359 sec -
> in org.apache.hadoop.fs.TestFcLocalFsPermission
> Running org.apache.hadoop.fs.TestDU
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.243 sec -
> in org.apache.hadoop.fs.TestDU
> Running org.apache.hadoop.fs.s3.TestINode
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec -
> in org.apache.hadoop.fs.s3.TestINode
> Running org.apache.hadoop.fs.s3.TestS3FileSystem
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.241 sec -
> in org.apache.hadoop.fs.s3.TestS3FileSystem
> Running org.apache.hadoop.fs.s3.TestS3Credentials
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.061 sec -
> in org.apache.hadoop.fs.s3.TestS3Credentials
> Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
> Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.531 sec -
> in org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
> Running org.apache.hadoop.fs.TestFileSystemTokens
> Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.276 sec -
> in org.apache.hadoop.fs.TestFileSystemTokens
> Running org.apache.hadoop.metrics.ganglia.TestGangliaContext
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.075 sec -
> in org.apache.hadoop.metrics.ganglia.TestGangliaContext
> Running org.apache.hadoop.metrics.TestMetricsServlet
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec -
> in org.apache.hadoop.metrics.TestMetricsServlet
> Running org.apache.hadoop.metrics.spi.TestOutputRecord
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.016 sec -
> in org.apache.hadoop.metrics.spi.TestOutputRecord
> Running org.apache.hadoop.io.TestVersionedWritable
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 sec -
> in org.apache.hadoop.io.TestVersionedWritable
> Running org.apache.hadoop.io.TestEnumSetWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec -
> in org.apache.hadoop.io.TestEnumSetWritable
> Running org.apache.hadoop.io.TestUTF8
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.35 sec -
> in org.apache.hadoop.io.TestUTF8
> Running org.apache.hadoop.io.TestGenericWritable
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.215 sec -
> in org.apache.hadoop.io.TestGenericWritable
> Running org.apache.hadoop.io.TestBoundedByteArrayOutputStream
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec -
> in org.apache.hadoop.io.TestBoundedByteArrayOutputStream
> Running org.apache.hadoop.io.retry.TestRetryProxy
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.069 sec -
> in org.apache.hadoop.io.retry.TestRetryProxy
> Running org.apache.hadoop.io.retry.TestFailoverProxy
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.551 sec -
> in org.apache.hadoop.io.retry.TestFailoverProxy
> Running org.apache.hadoop.io.TestArrayWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.io.TestArrayWritable
> Running
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 13, Time elapsed: 0.086 sec
> - in org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCodec
> Tests run: 24, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 62.132 sec
> - in org.apache.hadoop.io.compress.TestCodec
> Running org.apache.hadoop.io.compress.lz4.TestLz4CompressorDecompressor
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 12, Time elapsed: 0.08 sec -
> in org.apache.hadoop.io.compress.lz4.TestLz4CompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCompressorDecompressor
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec -
> in org.apache.hadoop.io.compress.TestCompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCodecFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec -
> in org.apache.hadoop.io.compress.TestCodecFactory
> Running org.apache.hadoop.io.compress.TestBlockDecompressorStream
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.032 sec -
> in org.apache.hadoop.io.compress.TestBlockDecompressorStream
> Running org.apache.hadoop.io.compress.TestCodecPool
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec -
> in org.apache.hadoop.io.compress.TestCodecPool
> Running org.apache.hadoop.io.compress.zlib.TestZlibCompressorDecompressor
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.086 sec -
> in org.apache.hadoop.io.compress.zlib.TestZlibCompressorDecompressor
> Running org.apache.hadoop.io.TestSecureIOUtils
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.308 sec -
> in org.apache.hadoop.io.TestSecureIOUtils
> Running org.apache.hadoop.io.TestBooleanWritable
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.033 sec -
> in org.apache.hadoop.io.TestBooleanWritable
> Running org.apache.hadoop.io.TestMapWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec -
> in org.apache.hadoop.io.TestMapWritable
> Running org.apache.hadoop.io.TestTextNonUTF8
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec -
> in org.apache.hadoop.io.TestTextNonUTF8
> Running org.apache.hadoop.io.TestWritableUtils
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec -
> in org.apache.hadoop.io.TestWritableUtils
> Running org.apache.hadoop.io.TestObjectWritableProtos
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec -
> in org.apache.hadoop.io.TestObjectWritableProtos
> Running org.apache.hadoop.io.TestBloomMapFile
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.579 sec -
> in org.apache.hadoop.io.TestBloomMapFile
> Running org.apache.hadoop.io.TestSortedMapWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec -
> in org.apache.hadoop.io.TestSortedMapWritable
> Running org.apache.hadoop.io.TestDefaultStringifier
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.153 sec -
> in org.apache.hadoop.io.TestDefaultStringifier
> Running org.apache.hadoop.io.TestWritableName
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.io.TestWritableName
> Running org.apache.hadoop.io.TestSetFile
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.067 sec -
> in org.apache.hadoop.io.TestSetFile
> Running org.apache.hadoop.io.TestMD5Hash
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec -
> in org.apache.hadoop.io.TestMD5Hash
> Running org.apache.hadoop.io.TestSequenceFileSerialization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.297 sec -
> in org.apache.hadoop.io.TestSequenceFileSerialization
> Running org.apache.hadoop.io.TestDataByteBuffers
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.179 sec -
> in org.apache.hadoop.io.TestDataByteBuffers
> Running org.apache.hadoop.io.TestSequenceFileSync
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.398 sec -
> in org.apache.hadoop.io.TestSequenceFileSync
> Running org.apache.hadoop.io.TestArrayFile
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.286 sec -
> in org.apache.hadoop.io.TestArrayFile
> Running org.apache.hadoop.io.TestArrayPrimitiveWritable
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.054 sec -
> in org.apache.hadoop.io.TestArrayPrimitiveWritable
> Running org.apache.hadoop.io.nativeio.TestSharedFileDescriptorFactory
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 0.074 sec -
> in org.apache.hadoop.io.nativeio.TestSharedFileDescriptorFactory
> Running org.apache.hadoop.io.nativeio.TestNativeIO
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 17, Time elapsed: 0.088 sec
> - in org.apache.hadoop.io.nativeio.TestNativeIO
> Running org.apache.hadoop.io.TestText
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.327 sec -
> in org.apache.hadoop.io.TestText
> Running org.apache.hadoop.io.file.tfile.TestTFileByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.159 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.347 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileComparators
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileComparators
> Running org.apache.hadoop.io.file.tfile.TestTFileSplit
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.068 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSplit
> Running org.apache.hadoop.io.file.tfile.TestTFileStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.834 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileStreams
> Running
> org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.743 sec -
> in
> org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
> Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
> Running org.apache.hadoop.io.file.tfile.TestTFile
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.559 sec -
> in org.apache.hadoop.io.file.tfile.TestTFile
> Running org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.063 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
> Running org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.185 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileSeek
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.384 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSeek
> Running org.apache.hadoop.io.file.tfile.TestVLong
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.78 sec -
> in org.apache.hadoop.io.file.tfile.TestVLong
> Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.718 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileComparator2
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.592 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileComparator2
> Running org.apache.hadoop.io.TestBytesWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.035 sec -
> in org.apache.hadoop.io.TestBytesWritable
> Running org.apache.hadoop.io.TestWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 sec -
> in org.apache.hadoop.io.TestWritable
> Running org.apache.hadoop.io.TestIOUtils
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.181 sec -
> in org.apache.hadoop.io.TestIOUtils
> Running org.apache.hadoop.io.serializer.TestWritableSerialization
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec -
> in org.apache.hadoop.io.serializer.TestWritableSerialization
> Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec -
> in org.apache.hadoop.io.serializer.avro.TestAvroSerialization
> Running org.apache.hadoop.io.serializer.TestSerializationFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.149 sec -
> in org.apache.hadoop.io.serializer.TestSerializationFactory
> Running org.apache.hadoop.io.TestMapFile
> Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.652 sec -
> in org.apache.hadoop.io.TestMapFile
> Running org.apache.hadoop.io.TestSequenceFile
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.265 sec -
> in org.apache.hadoop.io.TestSequenceFile
> Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.952 sec -
> in org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
> Running org.apache.hadoop.security.ssl.TestSSLFactory
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.031 sec -
> in org.apache.hadoop.security.ssl.TestSSLFactory
> Running org.apache.hadoop.security.TestUserFromEnv
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec -
> in org.apache.hadoop.security.TestUserFromEnv
> Running org.apache.hadoop.security.TestJNIGroupsMapping
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.072 sec -
> in org.apache.hadoop.security.TestJNIGroupsMapping
> Running org.apache.hadoop.security.TestDoAsEffectiveUser
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.637 sec -
> in org.apache.hadoop.security.TestDoAsEffectiveUser
> Running org.apache.hadoop.security.TestGroupFallback
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec -
> in org.apache.hadoop.security.TestGroupFallback
> Running org.apache.hadoop.security.TestUserGroupInformation
> Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.644 sec -
> in org.apache.hadoop.security.TestUserGroupInformation
> Running org.apache.hadoop.security.TestAuthenticationFilter
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.214 sec -
> in org.apache.hadoop.security.TestAuthenticationFilter
> Running org.apache.hadoop.security.TestCredentials
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.239 sec -
> in org.apache.hadoop.security.TestCredentials
> Running org.apache.hadoop.security.TestLdapGroupsMapping
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec -
> in org.apache.hadoop.security.TestLdapGroupsMapping
> Running org.apache.hadoop.security.TestUGIWithExternalKdc
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.037 sec -
> in org.apache.hadoop.security.TestUGIWithExternalKdc
> Running org.apache.hadoop.security.authorize.TestAccessControlList
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.253 sec -
> in org.apache.hadoop.security.authorize.TestAccessControlList
> Running org.apache.hadoop.security.authorize.TestProxyUsers
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.275 sec -
> in org.apache.hadoop.security.authorize.TestProxyUsers
> Running org.apache.hadoop.security.TestGroupsCaching
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec -
> in org.apache.hadoop.security.TestGroupsCaching
> Running org.apache.hadoop.security.token.TestToken
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.23 sec -
> in org.apache.hadoop.security.token.TestToken
> Running org.apache.hadoop.security.token.delegation.TestDelegationToken
> Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.089 sec
> - in org.apache.hadoop.security.token.delegation.TestDelegationToken
> Running org.apache.hadoop.security.TestSecurityUtil
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.381 sec -
> in org.apache.hadoop.security.TestSecurityUtil
> Running org.apache.hadoop.security.TestProxyUserFromEnv
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec -
> in org.apache.hadoop.security.TestProxyUserFromEnv
> Running org.apache.hadoop.ipc.TestCallQueueManager
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.371 sec -
> in org.apache.hadoop.ipc.TestCallQueueManager
> Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.393 sec -
> in org.apache.hadoop.ipc.TestMiniRPCBenchmark
> Running org.apache.hadoop.ipc.TestServer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec -
> in org.apache.hadoop.ipc.TestServer
> Running org.apache.hadoop.ipc.TestIdentityProviders
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.233 sec -
> in org.apache.hadoop.ipc.TestIdentityProviders
> Running org.apache.hadoop.ipc.TestSaslRPC
> Tests run: 85, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.071 sec
> - in org.apache.hadoop.ipc.TestSaslRPC
> Running org.apache.hadoop.ipc.TestRetryCache
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.175 sec -
> in org.apache.hadoop.ipc.TestRetryCache
> Running org.apache.hadoop.ipc.TestRPC
> Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.518 sec
> - in org.apache.hadoop.ipc.TestRPC
> Running org.apache.hadoop.ipc.TestIPC
> Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.761 sec
> - in org.apache.hadoop.ipc.TestIPC
> Running org.apache.hadoop.ipc.TestRetryCacheMetrics
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec -
> in org.apache.hadoop.ipc.TestRetryCacheMetrics
> Running org.apache.hadoop.ipc.TestProtoBufRpc
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.53 sec -
> in org.apache.hadoop.ipc.TestProtoBufRpc
> Running org.apache.hadoop.ipc.TestSocketFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.141 sec -
> in org.apache.hadoop.ipc.TestSocketFactory
> Running org.apache.hadoop.ipc.TestMultipleProtocolServer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.413 sec -
> in org.apache.hadoop.ipc.TestMultipleProtocolServer
> Running org.apache.hadoop.ipc.TestIPCServerResponder
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.174 sec -
> in org.apache.hadoop.ipc.TestIPCServerResponder
> Running org.apache.hadoop.ipc.TestRPCCallBenchmark
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.397 sec -
> in org.apache.hadoop.ipc.TestRPCCallBenchmark
> Running org.apache.hadoop.ipc.TestRPCCompatibility
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.681 sec -
> in org.apache.hadoop.ipc.TestRPCCompatibility
> Running org.apache.hadoop.util.TestLightWeightCache
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.24 sec -
> in org.apache.hadoop.util.TestLightWeightCache
> Running org.apache.hadoop.util.TestShutdownThreadsHelper
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.07 sec -
> in org.apache.hadoop.util.TestShutdownThreadsHelper
> Running org.apache.hadoop.util.TestVersionUtil
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.036 sec -
> in org.apache.hadoop.util.TestVersionUtil
> Running org.apache.hadoop.util.TestRunJar
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.056 sec -
> in org.apache.hadoop.util.TestRunJar
> Running org.apache.hadoop.util.TestStringUtils
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec -
> in org.apache.hadoop.util.TestStringUtils
> Running org.apache.hadoop.util.TestOptions
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.031 sec -
> in org.apache.hadoop.util.TestOptions
> Running org.apache.hadoop.util.TestShell
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.152 sec -
> in org.apache.hadoop.util.TestShell
> Running org.apache.hadoop.util.TestLineReader
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.037 sec -
> in org.apache.hadoop.util.TestLineReader
> Running org.apache.hadoop.util.TestIndexedSort
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.316 sec -
> in org.apache.hadoop.util.TestIndexedSort
> Running org.apache.hadoop.util.TestIdentityHashStore
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec -
> in org.apache.hadoop.util.TestIdentityHashStore
> Running org.apache.hadoop.util.TestNativeLibraryChecker
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec -
> in org.apache.hadoop.util.TestNativeLibraryChecker
> Running org.apache.hadoop.util.hash.TestHash
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec -
> in org.apache.hadoop.util.hash.TestHash
> Running org.apache.hadoop.util.TestDataChecksum
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec -
> in org.apache.hadoop.util.TestDataChecksum
> Running org.apache.hadoop.util.TestGenericsUtil
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.127 sec -
> in org.apache.hadoop.util.TestGenericsUtil
> Running org.apache.hadoop.util.TestNativeCodeLoader
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.068 sec -
> in org.apache.hadoop.util.TestNativeCodeLoader
> Running org.apache.hadoop.util.TestProtoUtil
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.util.TestProtoUtil
> Running org.apache.hadoop.util.TestDiskChecker
> Tests run: 14, Failures: 6, Errors: 0, Skipped: 0, Time elapsed: 0.515 sec
> <<< FAILURE! - in org.apache.hadoop.util.TestDiskChecker
> testCheckDir_notReadable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.022 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notReadable(TestDiskChecker.java:101)
>
> testCheckDir_notWritable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.018 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notWritable(TestDiskChecker.java:106)
>
> testCheckDir_notListable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.015 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notListable(TestDiskChecker.java:111)
>
> testCheckDir_notReadable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.001 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notReadable_local(TestDiskChecker.java:150)
>
> testCheckDir_notWritable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.002 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notWritable_local(TestDiskChecker.java:155)
>
> testCheckDir_notListable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.002 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notListable_local(TestDiskChecker.java:160)
>
> Running org.apache.hadoop.util.TestWinUtils
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.083 sec -
> in org.apache.hadoop.util.TestWinUtils
> Running org.apache.hadoop.util.TestStringInterner
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec -
> in org.apache.hadoop.util.TestStringInterner
> Running org.apache.hadoop.util.TestGSet
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.544 sec -
> in org.apache.hadoop.util.TestGSet
> Running org.apache.hadoop.util.TestSignalLogger
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec -
> in org.apache.hadoop.util.TestSignalLogger
> Running org.apache.hadoop.util.TestZKUtil
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec -
> in org.apache.hadoop.util.TestZKUtil
> Running org.apache.hadoop.util.TestAsyncDiskService
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec -
> in org.apache.hadoop.util.TestAsyncDiskService
> Running org.apache.hadoop.util.TestPureJavaCrc32
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.269 sec -
> in org.apache.hadoop.util.TestPureJavaCrc32
> Running org.apache.hadoop.util.TestHostsFileReader
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.091 sec -
> in org.apache.hadoop.util.TestHostsFileReader
> Running org.apache.hadoop.util.TestShutdownHookManager
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec -
> in org.apache.hadoop.util.TestShutdownHookManager
> Running org.apache.hadoop.util.TestReflectionUtils
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec -
> in org.apache.hadoop.util.TestReflectionUtils
> Running org.apache.hadoop.util.TestClassUtil
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.033 sec -
> in org.apache.hadoop.util.TestClassUtil
> Running org.apache.hadoop.util.TestJarFinder
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.302 sec -
> in org.apache.hadoop.util.TestJarFinder
> Running org.apache.hadoop.util.TestGenericOptionsParser
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.404 sec -
> in org.apache.hadoop.util.TestGenericOptionsParser
> Running org.apache.hadoop.util.TestLightWeightGSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec -
> in org.apache.hadoop.util.TestLightWeightGSet
> Running org.apache.hadoop.util.bloom.TestBloomFilters
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.264 sec -
> in org.apache.hadoop.util.bloom.TestBloomFilters
>
> Results :
>
> Failed tests:
>   TestZKFailoverController.testGracefulFailoverFailBecomingActive:484 Did
> not fail to graceful failover when target failed to become active!
>   TestZKFailoverController.testGracefulFailoverFailBecomingStandby:518
> expected:<1> but was:<0>
>
> TestZKFailoverController.testGracefulFailoverFailBecomingStandbyAndFailFence:540
> Failover should have failed when old node wont fence
>   TestTableMapping.testResolve:56 expected:</[rack1]> but
> was:</[default-rack]>
>   TestTableMapping.testTableCaching:79 expected:</[rack1]> but
> was:</[default-rack]>
>   TestTableMapping.testClearingCachedMappings:144 expected:</[rack1]> but
> was:</[default-rack]>
>   TestNetUtils.testNormalizeHostName:619 expected:<[192.168.12.37]> but
> was:<[UnknownHost]>
>
> TestStaticMapping.testCachingRelaysResolveQueries:219->assertMapSize:94->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93
> Expected two entries in the map Mapping: cached switch mapping relaying to
> static mapping with single switch = false
> Map:
>   192.168.12.37 -> /default-rack
> Nodes: 1
> Switches: 1
>  expected:<2> but was:<1>
>
> TestStaticMapping.testCachingCachesNegativeEntries:236->assertMapSize:94->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93
> Expected two entries in the map Mapping: cached switch mapping relaying to
> static mapping with single switch = false
> Map:
>   192.168.12.37 -> /default-rack
> Nodes: 1
> Switches: 1
>  expected:<2> but was:<1>
>   TestLocalDirAllocator.test0:141->validateTempDirCreation:110 Checking for
> build/test/temp/RELATIVE1 in
> build/test/temp/RELATIVE0/block9179437685378573554.tmp - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:110
> Checking for build/test/temp/RELATIVE2 in
> build/test/temp/RELATIVE1/block7291734072352417917.tmp - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:220->validateTempDirCreation:110
> Checking for build/test/temp/RELATIVE3 in
> build/test/temp/RELATIVE4/block4513557287751895920.tmp - FAILED!
>   TestLocalDirAllocator.test0:141->validateTempDirCreation:110 Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block8523050700077504235.tmp
> - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:164->validateTempDirCreation:110
> Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block200624031350129544.tmp
> - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:219->validateTempDirCreation:110
> Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE3
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE4/block8868024598532665020.tmp
> - FAILED!
>   TestLocalDirAllocator.test0:142->validateTempDirCreation:110 Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block7318078621961387478.tmp
> - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:110
> Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block3298540567692029628.tmp
> - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:220->validateTempDirCreation:110
> Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED3
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED4/block6014893019370084121.tmp
> - FAILED!
>
> TestFileUtil.testFailFullyDelete:411->validateAndSetWritablePermissions:385
> The directory xSubDir *should* not have been deleted. expected:<true> but
> was:<false>
>
> TestFileUtil.testFailFullyDeleteContents:492->validateAndSetWritablePermissions:385
> The directory xSubDir *should* not have been deleted. expected:<true> but
> was:<false>
>   TestFileUtil.testGetDU:592 null
>
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:289
> Should throw IOException
>   TestLocalFileSystem.testReportChecksumFailure:356 null
>
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:289
> Should throw IOException
>   TestDiskChecker.testCheckDir_notReadable:101->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notWritable:106->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notListable:111->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notReadable_local:150->_checkDirs:174
> checkDir success
>   TestDiskChecker.testCheckDir_notWritable_local:155->_checkDirs:174
> checkDir success
>   TestDiskChecker.testCheckDir_notListable_local:160->_checkDirs:174
> checkDir success
>
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:444->Object.wait:-2 »  test
> time...
>
> Tests run: 2285, Failures: 30, Errors: 1, Skipped: 104
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Main ................................ SUCCESS [0.678s]
> [INFO] Apache Hadoop Project POM ......................... SUCCESS [0.247s]
> [INFO] Apache Hadoop Annotations ......................... SUCCESS [0.780s]
> [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [0.221s]
> [INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.087s]
> [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [0.773s]
> [INFO] Apache Hadoop MiniKDC ............................. SUCCESS
> [1:58.825s]
> [INFO] Apache Hadoop Auth ................................ SUCCESS
> [6:16.248s]
> [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [7.347s]
> [INFO] Apache Hadoop Common .............................. FAILURE
> [11:49.512s]
> [INFO] Apache Hadoop NFS ................................. SKIPPED
> [INFO] Apache Hadoop Common Project ...................... SKIPPED
> [INFO] Apache Hadoop HDFS ................................ SKIPPED
> [INFO] Apache Hadoop HttpFS .............................. SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
> [INFO] Apache Hadoop HDFS Project ........................ SKIPPED
> [INFO] hadoop-yarn ....................................... SKIPPED
> [INFO] hadoop-yarn-api ................................... SKIPPED
> [INFO] hadoop-yarn-common ................................ SKIPPED
> [INFO] hadoop-yarn-server ................................ SKIPPED
> [INFO] hadoop-yarn-server-common ......................... SKIPPED
> [INFO] hadoop-yarn-server-nodemanager .................... SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ...................... SKIPPED
> [INFO] hadoop-yarn-server-applicationhistoryservice ...... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager ................ SKIPPED
> [INFO] hadoop-yarn-server-tests .......................... SKIPPED
> [INFO] hadoop-yarn-client ................................ SKIPPED
> [INFO] hadoop-yarn-applications .......................... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell ......... SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SKIPPED
> [INFO] hadoop-yarn-site .................................. SKIPPED
> [INFO] hadoop-yarn-project ............................... SKIPPED
> [INFO] hadoop-mapreduce-client ........................... SKIPPED
> [INFO] hadoop-mapreduce-client-core ...................... SKIPPED
> [INFO] hadoop-mapreduce-client-common .................... SKIPPED
> [INFO] hadoop-mapreduce-client-shuffle ................... SKIPPED
> [INFO] hadoop-mapreduce-client-app ....................... SKIPPED
> [INFO] hadoop-mapreduce-client-hs ........................ SKIPPED
> [INFO] hadoop-mapreduce-client-jobclient ................. SKIPPED
> [INFO] hadoop-mapreduce-client-hs-plugins ................ SKIPPED
> [INFO] Apache Hadoop MapReduce Examples .................. SKIPPED
> [INFO] hadoop-mapreduce .................................. SKIPPED
> [INFO] Apache Hadoop MapReduce Streaming ................. SKIPPED
> [INFO] Apache Hadoop Distributed Copy .................... SKIPPED
> [INFO] Apache Hadoop Archives ............................ SKIPPED
> [INFO] Apache Hadoop Rumen ............................... SKIPPED
> [INFO] Apache Hadoop Gridmix ............................. SKIPPED
> [INFO] Apache Hadoop Data Join ........................... SKIPPED
> [INFO] Apache Hadoop Extras .............................. SKIPPED
> [INFO] Apache Hadoop Pipes ............................... SKIPPED
> [INFO] Apache Hadoop OpenStack support ................... SKIPPED
> [INFO] Apache Hadoop Client .............................. SKIPPED
> [INFO] Apache Hadoop Mini-Cluster ........................ SKIPPED
> [INFO] Apache Hadoop Scheduler Load Simulator ............ SKIPPED
> [INFO] Apache Hadoop Tools Dist .......................... SKIPPED
> [INFO] Apache Hadoop Tools ............................... SKIPPED
> [INFO] Apache Hadoop Distribution ........................ SKIPPED
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 20:15.984s
> [INFO] Finished at: Sun Aug 03 18:00:44 HKT 2014
> [INFO] Final Memory: 56M/900M
> [INFO]
> ------------------------------------------------------------------------
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on
> project hadoop-common: There are test failures.
> [ERROR]
> [ERROR] Please refer to
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/target/surefire-reports
> for the individual test results.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please
> read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the
> command
> [ERROR]   mvn <goals> -rf :hadoop-common
>



-- 
- Tsuyoshi

ResourceManager version and Hadoop version

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

I am running an Apache Hadoop 2.4.1 cluster and have two questions about the cluster web UI at http://test_namenode:8088/cluster/cluster.

1) If I click "Server metrics", which goes to http://test_namenode:8088/metrics, the page is blank.
Can anyone please advise whether this is normal, or whether I still need to set up a monitoring tool (e.g. Nagios) properly?


2) On the page http://test_namenode:8088/cluster/cluster, the version is shown as "2.4.1 from Unknown".
Is there a way to change "Unknown" to something more meaningful myself?

ResourceManager version:                 	 2.4.1 from Unknown by hadoop  source checksum f74…...
Hadoop version:	 2.4.1 from Unknown by hadoop source checksum bb7…...
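
(A hedged aside: the whole "<version> from <revision> by <user> source checksum <...>" string seems to be recorded at build time rather than read from configuration; "hadoop version" on the command line prints the same fields. If the tree that was compiled had no git/svn metadata, e.g. an extracted release source tarball, the revision is typically left as "Unknown", so rebuilding from an SCM checkout is one likely way to get a real value there.)

  # Prints the build information baked into the installed binaries
  # (release, revision, build user, source checksum):
  hadoop version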


Many thanks!

Regards
Arthur

Re: Compile Hadoop 2.4.1 (with Tests and Without Tests)

Posted by Tsuyoshi OZAWA <oz...@gmail.com>.
Hi,

Unfortunately, we sometimes face unexpected test failures. Please
check whether the problem has already been reported or resolved in
Hadoop's JIRA projects:

* https://issues.apache.org/jira/browse/HADOOP
* https://issues.apache.org/jira/browse/HDFS
* https://issues.apache.org/jira/browse/MAPREDUCE
* https://issues.apache.org/jira/browse/YARN

If not, please file it as a new issue in the appropriate JIRA project.
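
In the meantime, a minimal sketch of how one might chase these down locally, assuming a stock Maven/Surefire setup (the module path and test class name below are taken from your log; adjust as needed):

  # Re-run just one failing suite in hadoop-common, from the source root:
  mvn test -Dtest=TestDiskChecker -pl hadoop-common-project/hadoop-common

  # Or let the whole build continue despite test failures and collect every report:
  mvn clean install -Dmaven.test.failure.ignore=true

  # The per-suite results are written to each module's target/surefire-reports
  # directory, e.g. hadoop-common-project/hadoop-common/target/surefire-reports/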

Thanks,
- Tsuyoshi

On Sun, Aug 3, 2014 at 7:32 PM, Arthur.hk.chan@gmail.com
<ar...@gmail.com> wrote:
> Hi,
>
> I am trying to compile Hadoop 2.4.1.
>
> If I run "mvm clean install -DskipTests", the compilation is GOOD,
> However, if I run "mvn clean install”, i.e. didn’t skip the Tests, it
> returned “Failures”
>
> Can anyone please advise what should be prepared before unit tests in
> compilation?  From the error log, e.g. I found it used 192.168.12.37, but
> this was not my local IPs, should I change some configure file? any ideas?
> On the other hand, can I use the the compiled code from GOOD compilation and
> just ignore the failed tests?
>
> Please advise!!
>
> Regards
> Arthur
>
>
>
>
> Compilation results:
> run "mvm clean install -DskipTests", the compilation is GOOD,
> =====
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Main ................................ SUCCESS [1.756s]
> [INFO] Apache Hadoop Project POM ......................... SUCCESS [0.586s]
> [INFO] Apache Hadoop Annotations ......................... SUCCESS [1.282s]
> [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [0.257s]
> [INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.136s]
> [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [1.189s]
> [INFO] Apache Hadoop MiniKDC ............................. SUCCESS [0.837s]
> [INFO] Apache Hadoop Auth ................................ SUCCESS [0.835s]
> [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [0.614s]
> [INFO] Apache Hadoop Common .............................. SUCCESS [9.020s]
> [INFO] Apache Hadoop NFS ................................. SUCCESS [9.341s]
> [INFO] Apache Hadoop Common Project ...................... SUCCESS [0.013s]
> [INFO] Apache Hadoop HDFS ................................ SUCCESS
> [1:11.329s]
> [INFO] Apache Hadoop HttpFS .............................. SUCCESS [1.943s]
> [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [8.236s]
> [INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [0.181s]
> [INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.014s]
> [INFO] hadoop-yarn ....................................... SUCCESS [0.045s]
> [INFO] hadoop-yarn-api ................................... SUCCESS [3.080s]
> [INFO] hadoop-yarn-common ................................ SUCCESS [3.995s]
> [INFO] hadoop-yarn-server ................................ SUCCESS [0.036s]
> [INFO] hadoop-yarn-server-common ......................... SUCCESS [0.406s]
> [INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [7.874s]
> [INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [0.185s]
> [INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [2.766s]
> [INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [0.975s]
> [INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.260s]
> [INFO] hadoop-yarn-client ................................ SUCCESS [0.401s]
> [INFO] hadoop-yarn-applications .......................... SUCCESS [0.012s]
> [INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [0.194s]
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [0.157s]
> [INFO] hadoop-yarn-site .................................. SUCCESS [0.028s]
> [INFO] hadoop-yarn-project ............................... SUCCESS [0.030s]
> [INFO] hadoop-mapreduce-client ........................... SUCCESS [0.027s]
> [INFO] hadoop-mapreduce-client-core ...................... SUCCESS [1.384s]
> [INFO] hadoop-mapreduce-client-common .................... SUCCESS [1.167s]
> [INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [0.151s]
> [INFO] hadoop-mapreduce-client-app ....................... SUCCESS [0.692s]
> [INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [0.521s]
> [INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [9.581s]
> [INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [0.105s]
> [INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [0.288s]
> [INFO] hadoop-mapreduce .................................. SUCCESS [0.031s]
> [INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [2.485s]
> [INFO] Apache Hadoop Distributed Copy .................... SUCCESS [14.204s]
> [INFO] Apache Hadoop Archives ............................ SUCCESS [0.147s]
> [INFO] Apache Hadoop Rumen ............................... SUCCESS [0.283s]
> [INFO] Apache Hadoop Gridmix ............................. SUCCESS [0.266s]
> [INFO] Apache Hadoop Data Join ........................... SUCCESS [0.109s]
> [INFO] Apache Hadoop Extras .............................. SUCCESS [0.173s]
> [INFO] Apache Hadoop Pipes ............................... SUCCESS [0.013s]
> [INFO] Apache Hadoop OpenStack support ................... SUCCESS [0.292s]
> [INFO] Apache Hadoop Client .............................. SUCCESS [0.093s]
> [INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.052s]
> [INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [1.123s]
> [INFO] Apache Hadoop Tools Dist .......................... SUCCESS [0.109s]
> [INFO] Apache Hadoop Tools ............................... SUCCESS [0.012s]
> [INFO] Apache Hadoop Distribution ........................ SUCCESS [0.038s]
> [INFO] ------------------------------------------------------------------------
>
>
>
>
> However, if I run "mvn clean install”, i.e. with Tests, it returned
> “Failures”
> ====
> Running org.apache.hadoop.fs.viewfs.TestChRootedFs
> Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.626 sec -
> in org.apache.hadoop.fs.viewfs.TestChRootedFs
> Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> Tests run: 49, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.099 sec
> <<< FAILURE! - in
> org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem)
> Time elapsed: 0.028 sec  <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:93)
> at
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:289)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>
> Running
> org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
> Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.282 sec -
> in
> org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
> Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
> Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.162 sec -
> in org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
> Running org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.434 sec -
> in org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
> Running org.apache.hadoop.fs.TestFileStatus
> Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec -
> in org.apache.hadoop.fs.TestFileStatus
> Running org.apache.hadoop.fs.TestFileContextResolveAfs
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec -
> in org.apache.hadoop.fs.TestFileContextResolveAfs
> Running org.apache.hadoop.fs.TestGlobPattern
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.034 sec -
> in org.apache.hadoop.fs.TestGlobPattern
> Running org.apache.hadoop.fs.s3native.TestJets3tNativeFileSystemStore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.136 sec -
> in org.apache.hadoop.fs.s3native.TestJets3tNativeFileSystemStore
> Running org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
> Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.498 sec -
> in org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
> Running org.apache.hadoop.fs.TestPath
> Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.418 sec -
> in org.apache.hadoop.fs.TestPath
> Running org.apache.hadoop.fs.TestTrash
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.588 sec -
> in org.apache.hadoop.fs.TestTrash
> Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.37 sec -
> in org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
> Running org.apache.hadoop.fs.TestFileContextDeleteOnExit
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec -
> in org.apache.hadoop.fs.TestFileContextDeleteOnExit
> Running org.apache.hadoop.fs.TestAfsCheckPath
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec -
> in org.apache.hadoop.fs.TestAfsCheckPath
> Running org.apache.hadoop.fs.TestLocalFileSystem
> Tests run: 18, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 0.601 sec
> <<< FAILURE! - in org.apache.hadoop.fs.TestLocalFileSystem
> testReportChecksumFailure(org.apache.hadoop.fs.TestLocalFileSystem)  Time
> elapsed: 0.074 sec  <<< FAILURE!
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertTrue(Assert.java:54)
> at
> org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure(TestLocalFileSystem.java:356)
>
> Running org.apache.hadoop.fs.permission.TestAcl
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.039 sec -
> in org.apache.hadoop.fs.permission.TestAcl
> Running org.apache.hadoop.fs.permission.TestFsPermission
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec -
> in org.apache.hadoop.fs.permission.TestFsPermission
> Running org.apache.hadoop.fs.TestFileSystemCanonicalization
> Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec -
> in org.apache.hadoop.fs.TestFileSystemCanonicalization
> Running org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
> Tests run: 49, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.775 sec
> <<< FAILURE! - in org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem)
> Time elapsed: 0.012 sec  <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:93)
> at
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:289)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>
> Running org.apache.hadoop.fs.TestDFVariations
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec -
> in org.apache.hadoop.fs.TestDFVariations
> Running org.apache.hadoop.fs.TestDelegationTokenRenewer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.323 sec -
> in org.apache.hadoop.fs.TestDelegationTokenRenewer
> Running org.apache.hadoop.fs.TestFileSystemInitialization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec -
> in org.apache.hadoop.fs.TestFileSystemInitialization
> Running org.apache.hadoop.fs.TestGetFileBlockLocations
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.331 sec -
> in org.apache.hadoop.fs.TestGetFileBlockLocations
> Running org.apache.hadoop.fs.TestFileSystemCaching
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.463 sec -
> in org.apache.hadoop.fs.TestFileSystemCaching
> Running org.apache.hadoop.fs.TestChecksumFileSystem
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.431 sec -
> in org.apache.hadoop.fs.TestChecksumFileSystem
> Running org.apache.hadoop.fs.TestLocalFsFCStatistics
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.291 sec -
> in org.apache.hadoop.fs.TestLocalFsFCStatistics
> Running org.apache.hadoop.fs.TestLocalFileSystemPermission
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.285 sec -
> in org.apache.hadoop.fs.TestLocalFileSystemPermission
> Running org.apache.hadoop.fs.TestFcLocalFsPermission
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.359 sec -
> in org.apache.hadoop.fs.TestFcLocalFsPermission
> Running org.apache.hadoop.fs.TestDU
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.243 sec -
> in org.apache.hadoop.fs.TestDU
> Running org.apache.hadoop.fs.s3.TestINode
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec -
> in org.apache.hadoop.fs.s3.TestINode
> Running org.apache.hadoop.fs.s3.TestS3FileSystem
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.241 sec -
> in org.apache.hadoop.fs.s3.TestS3FileSystem
> Running org.apache.hadoop.fs.s3.TestS3Credentials
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.061 sec -
> in org.apache.hadoop.fs.s3.TestS3Credentials
> Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
> Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.531 sec -
> in org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
> Running org.apache.hadoop.fs.TestFileSystemTokens
> Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.276 sec -
> in org.apache.hadoop.fs.TestFileSystemTokens
> Running org.apache.hadoop.metrics.ganglia.TestGangliaContext
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.075 sec -
> in org.apache.hadoop.metrics.ganglia.TestGangliaContext
> Running org.apache.hadoop.metrics.TestMetricsServlet
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec -
> in org.apache.hadoop.metrics.TestMetricsServlet
> Running org.apache.hadoop.metrics.spi.TestOutputRecord
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.016 sec -
> in org.apache.hadoop.metrics.spi.TestOutputRecord
> Running org.apache.hadoop.io.TestVersionedWritable
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 sec -
> in org.apache.hadoop.io.TestVersionedWritable
> Running org.apache.hadoop.io.TestEnumSetWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec -
> in org.apache.hadoop.io.TestEnumSetWritable
> Running org.apache.hadoop.io.TestUTF8
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.35 sec -
> in org.apache.hadoop.io.TestUTF8
> Running org.apache.hadoop.io.TestGenericWritable
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.215 sec -
> in org.apache.hadoop.io.TestGenericWritable
> Running org.apache.hadoop.io.TestBoundedByteArrayOutputStream
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec -
> in org.apache.hadoop.io.TestBoundedByteArrayOutputStream
> Running org.apache.hadoop.io.retry.TestRetryProxy
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.069 sec -
> in org.apache.hadoop.io.retry.TestRetryProxy
> Running org.apache.hadoop.io.retry.TestFailoverProxy
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.551 sec -
> in org.apache.hadoop.io.retry.TestFailoverProxy
> Running org.apache.hadoop.io.TestArrayWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.io.TestArrayWritable
> Running
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 13, Time elapsed: 0.086 sec
> - in org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCodec
> Tests run: 24, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 62.132 sec
> - in org.apache.hadoop.io.compress.TestCodec
> Running org.apache.hadoop.io.compress.lz4.TestLz4CompressorDecompressor
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 12, Time elapsed: 0.08 sec -
> in org.apache.hadoop.io.compress.lz4.TestLz4CompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCompressorDecompressor
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec -
> in org.apache.hadoop.io.compress.TestCompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCodecFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec -
> in org.apache.hadoop.io.compress.TestCodecFactory
> Running org.apache.hadoop.io.compress.TestBlockDecompressorStream
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.032 sec -
> in org.apache.hadoop.io.compress.TestBlockDecompressorStream
> Running org.apache.hadoop.io.compress.TestCodecPool
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec -
> in org.apache.hadoop.io.compress.TestCodecPool
> Running org.apache.hadoop.io.compress.zlib.TestZlibCompressorDecompressor
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.086 sec -
> in org.apache.hadoop.io.compress.zlib.TestZlibCompressorDecompressor
> Running org.apache.hadoop.io.TestSecureIOUtils
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.308 sec -
> in org.apache.hadoop.io.TestSecureIOUtils
> Running org.apache.hadoop.io.TestBooleanWritable
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.033 sec -
> in org.apache.hadoop.io.TestBooleanWritable
> Running org.apache.hadoop.io.TestMapWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec -
> in org.apache.hadoop.io.TestMapWritable
> Running org.apache.hadoop.io.TestTextNonUTF8
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec -
> in org.apache.hadoop.io.TestTextNonUTF8
> Running org.apache.hadoop.io.TestWritableUtils
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec -
> in org.apache.hadoop.io.TestWritableUtils
> Running org.apache.hadoop.io.TestObjectWritableProtos
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec -
> in org.apache.hadoop.io.TestObjectWritableProtos
> Running org.apache.hadoop.io.TestBloomMapFile
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.579 sec -
> in org.apache.hadoop.io.TestBloomMapFile
> Running org.apache.hadoop.io.TestSortedMapWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec -
> in org.apache.hadoop.io.TestSortedMapWritable
> Running org.apache.hadoop.io.TestDefaultStringifier
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.153 sec -
> in org.apache.hadoop.io.TestDefaultStringifier
> Running org.apache.hadoop.io.TestWritableName
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.io.TestWritableName
> Running org.apache.hadoop.io.TestSetFile
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.067 sec -
> in org.apache.hadoop.io.TestSetFile
> Running org.apache.hadoop.io.TestMD5Hash
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec -
> in org.apache.hadoop.io.TestMD5Hash
> Running org.apache.hadoop.io.TestSequenceFileSerialization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.297 sec -
> in org.apache.hadoop.io.TestSequenceFileSerialization
> Running org.apache.hadoop.io.TestDataByteBuffers
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.179 sec -
> in org.apache.hadoop.io.TestDataByteBuffers
> Running org.apache.hadoop.io.TestSequenceFileSync
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.398 sec -
> in org.apache.hadoop.io.TestSequenceFileSync
> Running org.apache.hadoop.io.TestArrayFile
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.286 sec -
> in org.apache.hadoop.io.TestArrayFile
> Running org.apache.hadoop.io.TestArrayPrimitiveWritable
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.054 sec -
> in org.apache.hadoop.io.TestArrayPrimitiveWritable
> Running org.apache.hadoop.io.nativeio.TestSharedFileDescriptorFactory
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 0.074 sec -
> in org.apache.hadoop.io.nativeio.TestSharedFileDescriptorFactory
> Running org.apache.hadoop.io.nativeio.TestNativeIO
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 17, Time elapsed: 0.088 sec
> - in org.apache.hadoop.io.nativeio.TestNativeIO
> Running org.apache.hadoop.io.TestText
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.327 sec -
> in org.apache.hadoop.io.TestText
> Running org.apache.hadoop.io.file.tfile.TestTFileByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.159 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.347 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileComparators
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileComparators
> Running org.apache.hadoop.io.file.tfile.TestTFileSplit
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.068 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSplit
> Running org.apache.hadoop.io.file.tfile.TestTFileStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.834 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileStreams
> Running
> org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.743 sec -
> in
> org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
> Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
> Running org.apache.hadoop.io.file.tfile.TestTFile
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.559 sec -
> in org.apache.hadoop.io.file.tfile.TestTFile
> Running org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.063 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
> Running org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.185 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileSeek
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.384 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSeek
> Running org.apache.hadoop.io.file.tfile.TestVLong
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.78 sec -
> in org.apache.hadoop.io.file.tfile.TestVLong
> Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.718 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileComparator2
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.592 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileComparator2
> Running org.apache.hadoop.io.TestBytesWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.035 sec -
> in org.apache.hadoop.io.TestBytesWritable
> Running org.apache.hadoop.io.TestWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 sec -
> in org.apache.hadoop.io.TestWritable
> Running org.apache.hadoop.io.TestIOUtils
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.181 sec -
> in org.apache.hadoop.io.TestIOUtils
> Running org.apache.hadoop.io.serializer.TestWritableSerialization
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec -
> in org.apache.hadoop.io.serializer.TestWritableSerialization
> Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec -
> in org.apache.hadoop.io.serializer.avro.TestAvroSerialization
> Running org.apache.hadoop.io.serializer.TestSerializationFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.149 sec -
> in org.apache.hadoop.io.serializer.TestSerializationFactory
> Running org.apache.hadoop.io.TestMapFile
> Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.652 sec -
> in org.apache.hadoop.io.TestMapFile
> Running org.apache.hadoop.io.TestSequenceFile
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.265 sec -
> in org.apache.hadoop.io.TestSequenceFile
> Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.952 sec -
> in org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
> Running org.apache.hadoop.security.ssl.TestSSLFactory
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.031 sec -
> in org.apache.hadoop.security.ssl.TestSSLFactory
> Running org.apache.hadoop.security.TestUserFromEnv
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec -
> in org.apache.hadoop.security.TestUserFromEnv
> Running org.apache.hadoop.security.TestJNIGroupsMapping
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.072 sec -
> in org.apache.hadoop.security.TestJNIGroupsMapping
> Running org.apache.hadoop.security.TestDoAsEffectiveUser
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.637 sec -
> in org.apache.hadoop.security.TestDoAsEffectiveUser
> Running org.apache.hadoop.security.TestGroupFallback
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec -
> in org.apache.hadoop.security.TestGroupFallback
> Running org.apache.hadoop.security.TestUserGroupInformation
> Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.644 sec -
> in org.apache.hadoop.security.TestUserGroupInformation
> Running org.apache.hadoop.security.TestAuthenticationFilter
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.214 sec -
> in org.apache.hadoop.security.TestAuthenticationFilter
> Running org.apache.hadoop.security.TestCredentials
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.239 sec -
> in org.apache.hadoop.security.TestCredentials
> Running org.apache.hadoop.security.TestLdapGroupsMapping
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec -
> in org.apache.hadoop.security.TestLdapGroupsMapping
> Running org.apache.hadoop.security.TestUGIWithExternalKdc
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.037 sec -
> in org.apache.hadoop.security.TestUGIWithExternalKdc
> Running org.apache.hadoop.security.authorize.TestAccessControlList
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.253 sec -
> in org.apache.hadoop.security.authorize.TestAccessControlList
> Running org.apache.hadoop.security.authorize.TestProxyUsers
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.275 sec -
> in org.apache.hadoop.security.authorize.TestProxyUsers
> Running org.apache.hadoop.security.TestGroupsCaching
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec -
> in org.apache.hadoop.security.TestGroupsCaching
> Running org.apache.hadoop.security.token.TestToken
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.23 sec -
> in org.apache.hadoop.security.token.TestToken
> Running org.apache.hadoop.security.token.delegation.TestDelegationToken
> Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.089 sec
> - in org.apache.hadoop.security.token.delegation.TestDelegationToken
> Running org.apache.hadoop.security.TestSecurityUtil
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.381 sec -
> in org.apache.hadoop.security.TestSecurityUtil
> Running org.apache.hadoop.security.TestProxyUserFromEnv
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec -
> in org.apache.hadoop.security.TestProxyUserFromEnv
> Running org.apache.hadoop.ipc.TestCallQueueManager
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.371 sec -
> in org.apache.hadoop.ipc.TestCallQueueManager
> Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.393 sec -
> in org.apache.hadoop.ipc.TestMiniRPCBenchmark
> Running org.apache.hadoop.ipc.TestServer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec -
> in org.apache.hadoop.ipc.TestServer
> Running org.apache.hadoop.ipc.TestIdentityProviders
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.233 sec -
> in org.apache.hadoop.ipc.TestIdentityProviders
> Running org.apache.hadoop.ipc.TestSaslRPC
> Tests run: 85, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.071 sec
> - in org.apache.hadoop.ipc.TestSaslRPC
> Running org.apache.hadoop.ipc.TestRetryCache
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.175 sec -
> in org.apache.hadoop.ipc.TestRetryCache
> Running org.apache.hadoop.ipc.TestRPC
> Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.518 sec
> - in org.apache.hadoop.ipc.TestRPC
> Running org.apache.hadoop.ipc.TestIPC
> Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.761 sec
> - in org.apache.hadoop.ipc.TestIPC
> Running org.apache.hadoop.ipc.TestRetryCacheMetrics
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec -
> in org.apache.hadoop.ipc.TestRetryCacheMetrics
> Running org.apache.hadoop.ipc.TestProtoBufRpc
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.53 sec -
> in org.apache.hadoop.ipc.TestProtoBufRpc
> Running org.apache.hadoop.ipc.TestSocketFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.141 sec -
> in org.apache.hadoop.ipc.TestSocketFactory
> Running org.apache.hadoop.ipc.TestMultipleProtocolServer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.413 sec -
> in org.apache.hadoop.ipc.TestMultipleProtocolServer
> Running org.apache.hadoop.ipc.TestIPCServerResponder
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.174 sec -
> in org.apache.hadoop.ipc.TestIPCServerResponder
> Running org.apache.hadoop.ipc.TestRPCCallBenchmark
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.397 sec -
> in org.apache.hadoop.ipc.TestRPCCallBenchmark
> Running org.apache.hadoop.ipc.TestRPCCompatibility
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.681 sec -
> in org.apache.hadoop.ipc.TestRPCCompatibility
> Running org.apache.hadoop.util.TestLightWeightCache
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.24 sec -
> in org.apache.hadoop.util.TestLightWeightCache
> Running org.apache.hadoop.util.TestShutdownThreadsHelper
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.07 sec -
> in org.apache.hadoop.util.TestShutdownThreadsHelper
> Running org.apache.hadoop.util.TestVersionUtil
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.036 sec -
> in org.apache.hadoop.util.TestVersionUtil
> Running org.apache.hadoop.util.TestRunJar
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.056 sec -
> in org.apache.hadoop.util.TestRunJar
> Running org.apache.hadoop.util.TestStringUtils
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec -
> in org.apache.hadoop.util.TestStringUtils
> Running org.apache.hadoop.util.TestOptions
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.031 sec -
> in org.apache.hadoop.util.TestOptions
> Running org.apache.hadoop.util.TestShell
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.152 sec -
> in org.apache.hadoop.util.TestShell
> Running org.apache.hadoop.util.TestLineReader
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.037 sec -
> in org.apache.hadoop.util.TestLineReader
> Running org.apache.hadoop.util.TestIndexedSort
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.316 sec -
> in org.apache.hadoop.util.TestIndexedSort
> Running org.apache.hadoop.util.TestIdentityHashStore
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec -
> in org.apache.hadoop.util.TestIdentityHashStore
> Running org.apache.hadoop.util.TestNativeLibraryChecker
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec -
> in org.apache.hadoop.util.TestNativeLibraryChecker
> Running org.apache.hadoop.util.hash.TestHash
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec -
> in org.apache.hadoop.util.hash.TestHash
> Running org.apache.hadoop.util.TestDataChecksum
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec -
> in org.apache.hadoop.util.TestDataChecksum
> Running org.apache.hadoop.util.TestGenericsUtil
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.127 sec -
> in org.apache.hadoop.util.TestGenericsUtil
> Running org.apache.hadoop.util.TestNativeCodeLoader
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.068 sec -
> in org.apache.hadoop.util.TestNativeCodeLoader
> Running org.apache.hadoop.util.TestProtoUtil
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.util.TestProtoUtil
> Running org.apache.hadoop.util.TestDiskChecker
> Tests run: 14, Failures: 6, Errors: 0, Skipped: 0, Time elapsed: 0.515 sec
> <<< FAILURE! - in org.apache.hadoop.util.TestDiskChecker
> testCheckDir_notReadable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.022 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notReadable(TestDiskChecker.java:101)
>
> testCheckDir_notWritable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.018 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notWritable(TestDiskChecker.java:106)
>
> testCheckDir_notListable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.015 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notListable(TestDiskChecker.java:111)
>
> testCheckDir_notReadable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.001 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notReadable_local(TestDiskChecker.java:150)
>
> testCheckDir_notWritable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.002 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notWritable_local(TestDiskChecker.java:155)
>
> testCheckDir_notListable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.002 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notListable_local(TestDiskChecker.java:160)
>
> Running org.apache.hadoop.util.TestWinUtils
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.083 sec -
> in org.apache.hadoop.util.TestWinUtils
> Running org.apache.hadoop.util.TestStringInterner
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec -
> in org.apache.hadoop.util.TestStringInterner
> Running org.apache.hadoop.util.TestGSet
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.544 sec -
> in org.apache.hadoop.util.TestGSet
> Running org.apache.hadoop.util.TestSignalLogger
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec -
> in org.apache.hadoop.util.TestSignalLogger
> Running org.apache.hadoop.util.TestZKUtil
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec -
> in org.apache.hadoop.util.TestZKUtil
> Running org.apache.hadoop.util.TestAsyncDiskService
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec -
> in org.apache.hadoop.util.TestAsyncDiskService
> Running org.apache.hadoop.util.TestPureJavaCrc32
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.269 sec -
> in org.apache.hadoop.util.TestPureJavaCrc32
> Running org.apache.hadoop.util.TestHostsFileReader
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.091 sec -
> in org.apache.hadoop.util.TestHostsFileReader
> Running org.apache.hadoop.util.TestShutdownHookManager
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec -
> in org.apache.hadoop.util.TestShutdownHookManager
> Running org.apache.hadoop.util.TestReflectionUtils
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec -
> in org.apache.hadoop.util.TestReflectionUtils
> Running org.apache.hadoop.util.TestClassUtil
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.033 sec -
> in org.apache.hadoop.util.TestClassUtil
> Running org.apache.hadoop.util.TestJarFinder
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.302 sec -
> in org.apache.hadoop.util.TestJarFinder
> Running org.apache.hadoop.util.TestGenericOptionsParser
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.404 sec -
> in org.apache.hadoop.util.TestGenericOptionsParser
> Running org.apache.hadoop.util.TestLightWeightGSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec -
> in org.apache.hadoop.util.TestLightWeightGSet
> Running org.apache.hadoop.util.bloom.TestBloomFilters
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.264 sec -
> in org.apache.hadoop.util.bloom.TestBloomFilters
>
> Results :
>
> Failed tests:
>   TestZKFailoverController.testGracefulFailoverFailBecomingActive:484 Did
> not fail to graceful failover when target failed to become active!
>   TestZKFailoverController.testGracefulFailoverFailBecomingStandby:518
> expected:<1> but was:<0>
>
> TestZKFailoverController.testGracefulFailoverFailBecomingStandbyAndFailFence:540
> Failover should have failed when old node wont fence
>   TestTableMapping.testResolve:56 expected:</[rack1]> but
> was:</[default-rack]>
>   TestTableMapping.testTableCaching:79 expected:</[rack1]> but
> was:</[default-rack]>
>   TestTableMapping.testClearingCachedMappings:144 expected:</[rack1]> but
> was:</[default-rack]>
>   TestNetUtils.testNormalizeHostName:619 expected:<[192.168.12.37]> but
> was:<[UnknownHost]>
>
> TestStaticMapping.testCachingRelaysResolveQueries:219->assertMapSize:94->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93
> Expected two entries in the map Mapping: cached switch mapping relaying to
> static mapping with single switch = false
> Map:
>   192.168.12.37 -> /default-rack
> Nodes: 1
> Switches: 1
>  expected:<2> but was:<1>
>
> TestStaticMapping.testCachingCachesNegativeEntries:236->assertMapSize:94->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93
> Expected two entries in the map Mapping: cached switch mapping relaying to
> static mapping with single switch = false
> Map:
>   192.168.12.37 -> /default-rack
> Nodes: 1
> Switches: 1
>  expected:<2> but was:<1>
>   TestLocalDirAllocator.test0:141->validateTempDirCreation:110 Checking for
> build/test/temp/RELATIVE1 in
> build/test/temp/RELATIVE0/block9179437685378573554.tmp - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:110
> Checking for build/test/temp/RELATIVE2 in
> build/test/temp/RELATIVE1/block7291734072352417917.tmp - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:220->validateTempDirCreation:110
> Checking for build/test/temp/RELATIVE3 in
> build/test/temp/RELATIVE4/block4513557287751895920.tmp - FAILED!
>   TestLocalDirAllocator.test0:141->validateTempDirCreation:110 Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block8523050700077504235.tmp
> - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:164->validateTempDirCreation:110
> Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block200624031350129544.tmp
> - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:219->validateTempDirCreation:110
> Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE3
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE4/block8868024598532665020.tmp
> - FAILED!
>   TestLocalDirAllocator.test0:142->validateTempDirCreation:110 Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block7318078621961387478.tmp
> - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:110
> Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block3298540567692029628.tmp
> - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:220->validateTempDirCreation:110
> Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED3
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED4/block6014893019370084121.tmp
> - FAILED!
>
> TestFileUtil.testFailFullyDelete:411->validateAndSetWritablePermissions:385
> The directory xSubDir *should* not have been deleted. expected:<true> but
> was:<false>
>
> TestFileUtil.testFailFullyDeleteContents:492->validateAndSetWritablePermissions:385
> The directory xSubDir *should* not have been deleted. expected:<true> but
> was:<false>
>   TestFileUtil.testGetDU:592 null
>
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:289
> Should throw IOException
>   TestLocalFileSystem.testReportChecksumFailure:356 null
>
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:289
> Should throw IOException
>   TestDiskChecker.testCheckDir_notReadable:101->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notWritable:106->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notListable:111->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notReadable_local:150->_checkDirs:174
> checkDir success
>   TestDiskChecker.testCheckDir_notWritable_local:155->_checkDirs:174
> checkDir success
>   TestDiskChecker.testCheckDir_notListable_local:160->_checkDirs:174
> checkDir success
>
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:444->Object.wait:-2 »  test
> time...
>
> Tests run: 2285, Failures: 30, Errors: 1, Skipped: 104
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Main ................................ SUCCESS [0.678s]
> [INFO] Apache Hadoop Project POM ......................... SUCCESS [0.247s]
> [INFO] Apache Hadoop Annotations ......................... SUCCESS [0.780s]
> [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [0.221s]
> [INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.087s]
> [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [0.773s]
> [INFO] Apache Hadoop MiniKDC ............................. SUCCESS
> [1:58.825s]
> [INFO] Apache Hadoop Auth ................................ SUCCESS
> [6:16.248s]
> [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [7.347s]
> [INFO] Apache Hadoop Common .............................. FAILURE
> [11:49.512s]
> [INFO] Apache Hadoop NFS ................................. SKIPPED
> [INFO] Apache Hadoop Common Project ...................... SKIPPED
> [INFO] Apache Hadoop HDFS ................................ SKIPPED
> [INFO] Apache Hadoop HttpFS .............................. SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
> [INFO] Apache Hadoop HDFS Project ........................ SKIPPED
> [INFO] hadoop-yarn ....................................... SKIPPED
> [INFO] hadoop-yarn-api ................................... SKIPPED
> [INFO] hadoop-yarn-common ................................ SKIPPED
> [INFO] hadoop-yarn-server ................................ SKIPPED
> [INFO] hadoop-yarn-server-common ......................... SKIPPED
> [INFO] hadoop-yarn-server-nodemanager .................... SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ...................... SKIPPED
> [INFO] hadoop-yarn-server-applicationhistoryservice ...... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager ................ SKIPPED
> [INFO] hadoop-yarn-server-tests .......................... SKIPPED
> [INFO] hadoop-yarn-client ................................ SKIPPED
> [INFO] hadoop-yarn-applications .......................... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell ......... SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SKIPPED
> [INFO] hadoop-yarn-site .................................. SKIPPED
> [INFO] hadoop-yarn-project ............................... SKIPPED
> [INFO] hadoop-mapreduce-client ........................... SKIPPED
> [INFO] hadoop-mapreduce-client-core ...................... SKIPPED
> [INFO] hadoop-mapreduce-client-common .................... SKIPPED
> [INFO] hadoop-mapreduce-client-shuffle ................... SKIPPED
> [INFO] hadoop-mapreduce-client-app ....................... SKIPPED
> [INFO] hadoop-mapreduce-client-hs ........................ SKIPPED
> [INFO] hadoop-mapreduce-client-jobclient ................. SKIPPED
> [INFO] hadoop-mapreduce-client-hs-plugins ................ SKIPPED
> [INFO] Apache Hadoop MapReduce Examples .................. SKIPPED
> [INFO] hadoop-mapreduce .................................. SKIPPED
> [INFO] Apache Hadoop MapReduce Streaming ................. SKIPPED
> [INFO] Apache Hadoop Distributed Copy .................... SKIPPED
> [INFO] Apache Hadoop Archives ............................ SKIPPED
> [INFO] Apache Hadoop Rumen ............................... SKIPPED
> [INFO] Apache Hadoop Gridmix ............................. SKIPPED
> [INFO] Apache Hadoop Data Join ........................... SKIPPED
> [INFO] Apache Hadoop Extras .............................. SKIPPED
> [INFO] Apache Hadoop Pipes ............................... SKIPPED
> [INFO] Apache Hadoop OpenStack support ................... SKIPPED
> [INFO] Apache Hadoop Client .............................. SKIPPED
> [INFO] Apache Hadoop Mini-Cluster ........................ SKIPPED
> [INFO] Apache Hadoop Scheduler Load Simulator ............ SKIPPED
> [INFO] Apache Hadoop Tools Dist .......................... SKIPPED
> [INFO] Apache Hadoop Tools ............................... SKIPPED
> [INFO] Apache Hadoop Distribution ........................ SKIPPED
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 20:15.984s
> [INFO] Finished at: Sun Aug 03 18:00:44 HKT 2014
> [INFO] Final Memory: 56M/900M
> [INFO]
> ------------------------------------------------------------------------
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on
> project hadoop-common: There are test failures.
> [ERROR]
> [ERROR] Please refer to
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/target/surefire-reports
> for the individual test results.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please
> read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the
> command
> [ERROR]   mvn <goals> -rf :hadoop-common
>
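
(A side note, and just a sketch on my part: if you mainly need the build
artifacts and want to get past the failing module for now, the resume hint
in the log above can be combined with skipping the tests, e.g.

    mvn install -rf :hadoop-common -DskipTests

That of course does not explain the test failures themselves.)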



-- 
- Tsuyoshi

ResourceManager version and Hadoop version

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

I am running an Apache Hadoop 2.4.1 cluster, and I have two questions about the web UI page http://test_namenode:8088/cluster/cluster.

1) If I click "Server metrics", which goes to http://test_namenode:8088/metrics, the page is blank.
Can anyone please advise whether this is normal, or whether I still need to set up some monitoring tool (e.g. Nagios) properly?
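
(For comparison, and assuming the default ports here, the ResourceManager
should also expose the same numbers through its REST API and the JMX
servlet, e.g.

    curl http://test_namenode:8088/ws/v1/cluster/metrics
    curl http://test_namenode:8088/jmx

I assume these should still return data even if the /metrics page is blank,
is that right?)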


2) On the page http://test_namenode:8088/cluster/cluster, I can see the version shown as "2.4.1 from Unknown".
Is there a way to change "Unknown" to something more meaningful myself?

ResourceManager version:  2.4.1 from Unknown by hadoop source checksum f74…...
Hadoop version:           2.4.1 from Unknown by hadoop source checksum bb7…...
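
(My own assumption, which may well be wrong: the "from ..." part is baked
into the jars at build time from the source-control revision the build was
made from, and it is the same information the command line prints, e.g.

    hadoop version

so it would be a build-time property rather than something I can change in a
config file?)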


Many thanks!

Regards
Arthur

Re: Compile Hadoop 2.4.1 (with Tests and Without Tests)

Posted by Tsuyoshi OZAWA <oz...@gmail.com>.
Hi,

Unfortunately, we sometimes run into unexpected test failures. Please
check whether the problem has already been reported or resolved in
Hadoop's JIRA projects:

* https://issues.apache.org/jira/browse/HADOOP
* https://issues.apache.org/jira/browse/HDFS
* https://issues.apache.org/jira/browse/MAPREDUCE
* https://issues.apache.org/jira/browse/YARN

If not, please file it as a new JIRA issue.
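
For example, just as a sketch and not an official procedure: you can usually
re-run a single failing suite from the affected module and attach its report
to the JIRA, e.g.

    cd hadoop-common-project/hadoop-common
    mvn test -Dtest=TestDiskChecker

The report then ends up under target/surefire-reports/ in that module.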

Thanks,
- Tsuyoshi

On Sun, Aug 3, 2014 at 7:32 PM, Arthur.hk.chan@gmail.com
<ar...@gmail.com> wrote:
> Hi,
>
> I am trying to compile Hadoop 2.4.1.
>
> If I run "mvm clean install -DskipTests", the compilation is GOOD,
> However, if I run "mvn clean install”, i.e. didn’t skip the Tests, it
> returned “Failures”
>
> Can anyone please advise what needs to be prepared before the unit tests
> run during compilation?  From the error log, e.g., I found it used
> 192.168.12.37, but this is not one of my local IPs; should I change some
> configuration file? Any ideas?
> On the other hand, can I use the compiled code from the GOOD compilation
> and just ignore the failed tests?
>
> Please advise!!
>
> Regards
> Arthur
>
>
>
>
> Compilation results:
> run "mvm clean install -DskipTests", the compilation is GOOD,
> =====
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Main ................................ SUCCESS [1.756s]
> [INFO] Apache Hadoop Project POM ......................... SUCCESS [0.586s]
> [INFO] Apache Hadoop Annotations ......................... SUCCESS [1.282s]
> [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [0.257s]
> [INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.136s]
> [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [1.189s]
> [INFO] Apache Hadoop MiniKDC ............................. SUCCESS [0.837s]
> [INFO] Apache Hadoop Auth ................................ SUCCESS [0.835s]
> [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [0.614s]
> [INFO] Apache Hadoop Common .............................. SUCCESS [9.020s]
> [INFO] Apache Hadoop NFS ................................. SUCCESS [9.341s]
> [INFO] Apache Hadoop Common Project ...................... SUCCESS [0.013s]
> [INFO] Apache Hadoop HDFS ................................ SUCCESS
> [1:11.329s]
> [INFO] Apache Hadoop HttpFS .............................. SUCCESS [1.943s]
> [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [8.236s]
> [INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [0.181s]
> [INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.014s]
> [INFO] hadoop-yarn ....................................... SUCCESS [0.045s]
> [INFO] hadoop-yarn-api ................................... SUCCESS [3.080s]
> [INFO] hadoop-yarn-common ................................ SUCCESS [3.995s]
> [INFO] hadoop-yarn-server ................................ SUCCESS [0.036s]
> [INFO] hadoop-yarn-server-common ......................... SUCCESS [0.406s]
> [INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [7.874s]
> [INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [0.185s]
> [INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [2.766s]
> [INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [0.975s]
> [INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.260s]
> [INFO] hadoop-yarn-client ................................ SUCCESS [0.401s]
> [INFO] hadoop-yarn-applications .......................... SUCCESS [0.012s]
> [INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [0.194s]
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [0.157s]
> [INFO] hadoop-yarn-site .................................. SUCCESS [0.028s]
> [INFO] hadoop-yarn-project ............................... SUCCESS [0.030s]
> [INFO] hadoop-mapreduce-client ........................... SUCCESS [0.027s]
> [INFO] hadoop-mapreduce-client-core ...................... SUCCESS [1.384s]
> [INFO] hadoop-mapreduce-client-common .................... SUCCESS [1.167s]
> [INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [0.151s]
> [INFO] hadoop-mapreduce-client-app ....................... SUCCESS [0.692s]
> [INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [0.521s]
> [INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [9.581s]
> [INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [0.105s]
> [INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [0.288s]
> [INFO] hadoop-mapreduce .................................. SUCCESS [0.031s]
> [INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [2.485s]
> [INFO] Apache Hadoop Distributed Copy .................... SUCCESS [14.204s]
> [INFO] Apache Hadoop Archives ............................ SUCCESS [0.147s]
> [INFO] Apache Hadoop Rumen ............................... SUCCESS [0.283s]
> [INFO] Apache Hadoop Gridmix ............................. SUCCESS [0.266s]
> [INFO] Apache Hadoop Data Join ........................... SUCCESS [0.109s]
> [INFO] Apache Hadoop Extras .............................. SUCCESS [0.173s]
> [INFO] Apache Hadoop Pipes ............................... SUCCESS [0.013s]
> [INFO] Apache Hadoop OpenStack support ................... SUCCESS [0.292s]
> [INFO] Apache Hadoop Client .............................. SUCCESS [0.093s]
> [INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.052s]
> [INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [1.123s]
> [INFO] Apache Hadoop Tools Dist .......................... SUCCESS [0.109s]
> [INFO] Apache Hadoop Tools ............................... SUCCESS [0.012s]
> [INFO] Apache Hadoop Distribution ........................ SUCCESS [0.038s]
> [INFO] ------------------------------------------------------------------------
>
>
>
>
> However, if I run "mvn clean install”, i.e. with Tests, it returned
> “Failures”
> ====
> Running org.apache.hadoop.fs.viewfs.TestChRootedFs
> Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.626 sec -
> in org.apache.hadoop.fs.viewfs.TestChRootedFs
> Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> Tests run: 49, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.099 sec
> <<< FAILURE! - in
> org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem)
> Time elapsed: 0.028 sec  <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:93)
> at
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:289)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>
> Running
> org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
> Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.282 sec -
> in
> org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
> Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
> Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.162 sec -
> in org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
> Running org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.434 sec -
> in org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
> Running org.apache.hadoop.fs.TestFileStatus
> Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec -
> in org.apache.hadoop.fs.TestFileStatus
> Running org.apache.hadoop.fs.TestFileContextResolveAfs
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec -
> in org.apache.hadoop.fs.TestFileContextResolveAfs
> Running org.apache.hadoop.fs.TestGlobPattern
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.034 sec -
> in org.apache.hadoop.fs.TestGlobPattern
> Running org.apache.hadoop.fs.s3native.TestJets3tNativeFileSystemStore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.136 sec -
> in org.apache.hadoop.fs.s3native.TestJets3tNativeFileSystemStore
> Running org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
> Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.498 sec -
> in org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
> Running org.apache.hadoop.fs.TestPath
> Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.418 sec -
> in org.apache.hadoop.fs.TestPath
> Running org.apache.hadoop.fs.TestTrash
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.588 sec -
> in org.apache.hadoop.fs.TestTrash
> Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.37 sec -
> in org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
> Running org.apache.hadoop.fs.TestFileContextDeleteOnExit
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec -
> in org.apache.hadoop.fs.TestFileContextDeleteOnExit
> Running org.apache.hadoop.fs.TestAfsCheckPath
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec -
> in org.apache.hadoop.fs.TestAfsCheckPath
> Running org.apache.hadoop.fs.TestLocalFileSystem
> Tests run: 18, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 0.601 sec
> <<< FAILURE! - in org.apache.hadoop.fs.TestLocalFileSystem
> testReportChecksumFailure(org.apache.hadoop.fs.TestLocalFileSystem)  Time
> elapsed: 0.074 sec  <<< FAILURE!
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertTrue(Assert.java:54)
> at
> org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure(TestLocalFileSystem.java:356)
>
> Running org.apache.hadoop.fs.permission.TestAcl
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.039 sec -
> in org.apache.hadoop.fs.permission.TestAcl
> Running org.apache.hadoop.fs.permission.TestFsPermission
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec -
> in org.apache.hadoop.fs.permission.TestFsPermission
> Running org.apache.hadoop.fs.TestFileSystemCanonicalization
> Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec -
> in org.apache.hadoop.fs.TestFileSystemCanonicalization
> Running org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
> Tests run: 49, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.775 sec
> <<< FAILURE! - in org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem)
> Time elapsed: 0.012 sec  <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:93)
> at
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:289)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>
> Running org.apache.hadoop.fs.TestDFVariations
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec -
> in org.apache.hadoop.fs.TestDFVariations
> Running org.apache.hadoop.fs.TestDelegationTokenRenewer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.323 sec -
> in org.apache.hadoop.fs.TestDelegationTokenRenewer
> Running org.apache.hadoop.fs.TestFileSystemInitialization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec -
> in org.apache.hadoop.fs.TestFileSystemInitialization
> Running org.apache.hadoop.fs.TestGetFileBlockLocations
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.331 sec -
> in org.apache.hadoop.fs.TestGetFileBlockLocations
> Running org.apache.hadoop.fs.TestFileSystemCaching
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.463 sec -
> in org.apache.hadoop.fs.TestFileSystemCaching
> Running org.apache.hadoop.fs.TestChecksumFileSystem
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.431 sec -
> in org.apache.hadoop.fs.TestChecksumFileSystem
> Running org.apache.hadoop.fs.TestLocalFsFCStatistics
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.291 sec -
> in org.apache.hadoop.fs.TestLocalFsFCStatistics
> Running org.apache.hadoop.fs.TestLocalFileSystemPermission
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.285 sec -
> in org.apache.hadoop.fs.TestLocalFileSystemPermission
> Running org.apache.hadoop.fs.TestFcLocalFsPermission
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.359 sec -
> in org.apache.hadoop.fs.TestFcLocalFsPermission
> Running org.apache.hadoop.fs.TestDU
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.243 sec -
> in org.apache.hadoop.fs.TestDU
> Running org.apache.hadoop.fs.s3.TestINode
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec -
> in org.apache.hadoop.fs.s3.TestINode
> Running org.apache.hadoop.fs.s3.TestS3FileSystem
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.241 sec -
> in org.apache.hadoop.fs.s3.TestS3FileSystem
> Running org.apache.hadoop.fs.s3.TestS3Credentials
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.061 sec -
> in org.apache.hadoop.fs.s3.TestS3Credentials
> Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
> Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.531 sec -
> in org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
> Running org.apache.hadoop.fs.TestFileSystemTokens
> Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.276 sec -
> in org.apache.hadoop.fs.TestFileSystemTokens
> Running org.apache.hadoop.metrics.ganglia.TestGangliaContext
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.075 sec -
> in org.apache.hadoop.metrics.ganglia.TestGangliaContext
> Running org.apache.hadoop.metrics.TestMetricsServlet
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec -
> in org.apache.hadoop.metrics.TestMetricsServlet
> Running org.apache.hadoop.metrics.spi.TestOutputRecord
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.016 sec -
> in org.apache.hadoop.metrics.spi.TestOutputRecord
> Running org.apache.hadoop.io.TestVersionedWritable
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 sec -
> in org.apache.hadoop.io.TestVersionedWritable
> Running org.apache.hadoop.io.TestEnumSetWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec -
> in org.apache.hadoop.io.TestEnumSetWritable
> Running org.apache.hadoop.io.TestUTF8
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.35 sec -
> in org.apache.hadoop.io.TestUTF8
> Running org.apache.hadoop.io.TestGenericWritable
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.215 sec -
> in org.apache.hadoop.io.TestGenericWritable
> Running org.apache.hadoop.io.TestBoundedByteArrayOutputStream
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec -
> in org.apache.hadoop.io.TestBoundedByteArrayOutputStream
> Running org.apache.hadoop.io.retry.TestRetryProxy
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.069 sec -
> in org.apache.hadoop.io.retry.TestRetryProxy
> Running org.apache.hadoop.io.retry.TestFailoverProxy
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.551 sec -
> in org.apache.hadoop.io.retry.TestFailoverProxy
> Running org.apache.hadoop.io.TestArrayWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.io.TestArrayWritable
> Running
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 13, Time elapsed: 0.086 sec
> - in org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCodec
> Tests run: 24, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 62.132 sec
> - in org.apache.hadoop.io.compress.TestCodec
> Running org.apache.hadoop.io.compress.lz4.TestLz4CompressorDecompressor
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 12, Time elapsed: 0.08 sec -
> in org.apache.hadoop.io.compress.lz4.TestLz4CompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCompressorDecompressor
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec -
> in org.apache.hadoop.io.compress.TestCompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCodecFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec -
> in org.apache.hadoop.io.compress.TestCodecFactory
> Running org.apache.hadoop.io.compress.TestBlockDecompressorStream
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.032 sec -
> in org.apache.hadoop.io.compress.TestBlockDecompressorStream
> Running org.apache.hadoop.io.compress.TestCodecPool
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec -
> in org.apache.hadoop.io.compress.TestCodecPool
> Running org.apache.hadoop.io.compress.zlib.TestZlibCompressorDecompressor
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.086 sec -
> in org.apache.hadoop.io.compress.zlib.TestZlibCompressorDecompressor
> Running org.apache.hadoop.io.TestSecureIOUtils
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.308 sec -
> in org.apache.hadoop.io.TestSecureIOUtils
> Running org.apache.hadoop.io.TestBooleanWritable
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.033 sec -
> in org.apache.hadoop.io.TestBooleanWritable
> Running org.apache.hadoop.io.TestMapWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec -
> in org.apache.hadoop.io.TestMapWritable
> Running org.apache.hadoop.io.TestTextNonUTF8
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec -
> in org.apache.hadoop.io.TestTextNonUTF8
> Running org.apache.hadoop.io.TestWritableUtils
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec -
> in org.apache.hadoop.io.TestWritableUtils
> Running org.apache.hadoop.io.TestObjectWritableProtos
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec -
> in org.apache.hadoop.io.TestObjectWritableProtos
> Running org.apache.hadoop.io.TestBloomMapFile
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.579 sec -
> in org.apache.hadoop.io.TestBloomMapFile
> Running org.apache.hadoop.io.TestSortedMapWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec -
> in org.apache.hadoop.io.TestSortedMapWritable
> Running org.apache.hadoop.io.TestDefaultStringifier
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.153 sec -
> in org.apache.hadoop.io.TestDefaultStringifier
> Running org.apache.hadoop.io.TestWritableName
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.io.TestWritableName
> Running org.apache.hadoop.io.TestSetFile
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.067 sec -
> in org.apache.hadoop.io.TestSetFile
> Running org.apache.hadoop.io.TestMD5Hash
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec -
> in org.apache.hadoop.io.TestMD5Hash
> Running org.apache.hadoop.io.TestSequenceFileSerialization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.297 sec -
> in org.apache.hadoop.io.TestSequenceFileSerialization
> Running org.apache.hadoop.io.TestDataByteBuffers
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.179 sec -
> in org.apache.hadoop.io.TestDataByteBuffers
> Running org.apache.hadoop.io.TestSequenceFileSync
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.398 sec -
> in org.apache.hadoop.io.TestSequenceFileSync
> Running org.apache.hadoop.io.TestArrayFile
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.286 sec -
> in org.apache.hadoop.io.TestArrayFile
> Running org.apache.hadoop.io.TestArrayPrimitiveWritable
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.054 sec -
> in org.apache.hadoop.io.TestArrayPrimitiveWritable
> Running org.apache.hadoop.io.nativeio.TestSharedFileDescriptorFactory
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 0.074 sec -
> in org.apache.hadoop.io.nativeio.TestSharedFileDescriptorFactory
> Running org.apache.hadoop.io.nativeio.TestNativeIO
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 17, Time elapsed: 0.088 sec
> - in org.apache.hadoop.io.nativeio.TestNativeIO
> Running org.apache.hadoop.io.TestText
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.327 sec -
> in org.apache.hadoop.io.TestText
> Running org.apache.hadoop.io.file.tfile.TestTFileByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.159 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.347 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileComparators
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileComparators
> Running org.apache.hadoop.io.file.tfile.TestTFileSplit
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.068 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSplit
> Running org.apache.hadoop.io.file.tfile.TestTFileStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.834 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileStreams
> Running
> org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.743 sec -
> in
> org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
> Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
> Running org.apache.hadoop.io.file.tfile.TestTFile
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.559 sec -
> in org.apache.hadoop.io.file.tfile.TestTFile
> Running org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.063 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
> Running org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.185 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileSeek
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.384 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSeek
> Running org.apache.hadoop.io.file.tfile.TestVLong
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.78 sec -
> in org.apache.hadoop.io.file.tfile.TestVLong
> Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.718 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileComparator2
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.592 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileComparator2
> Running org.apache.hadoop.io.TestBytesWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.035 sec -
> in org.apache.hadoop.io.TestBytesWritable
> Running org.apache.hadoop.io.TestWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 sec -
> in org.apache.hadoop.io.TestWritable
> Running org.apache.hadoop.io.TestIOUtils
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.181 sec -
> in org.apache.hadoop.io.TestIOUtils
> Running org.apache.hadoop.io.serializer.TestWritableSerialization
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec -
> in org.apache.hadoop.io.serializer.TestWritableSerialization
> Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec -
> in org.apache.hadoop.io.serializer.avro.TestAvroSerialization
> Running org.apache.hadoop.io.serializer.TestSerializationFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.149 sec -
> in org.apache.hadoop.io.serializer.TestSerializationFactory
> Running org.apache.hadoop.io.TestMapFile
> Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.652 sec -
> in org.apache.hadoop.io.TestMapFile
> Running org.apache.hadoop.io.TestSequenceFile
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.265 sec -
> in org.apache.hadoop.io.TestSequenceFile
> Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.952 sec -
> in org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
> Running org.apache.hadoop.security.ssl.TestSSLFactory
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.031 sec -
> in org.apache.hadoop.security.ssl.TestSSLFactory
> Running org.apache.hadoop.security.TestUserFromEnv
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec -
> in org.apache.hadoop.security.TestUserFromEnv
> Running org.apache.hadoop.security.TestJNIGroupsMapping
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.072 sec -
> in org.apache.hadoop.security.TestJNIGroupsMapping
> Running org.apache.hadoop.security.TestDoAsEffectiveUser
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.637 sec -
> in org.apache.hadoop.security.TestDoAsEffectiveUser
> Running org.apache.hadoop.security.TestGroupFallback
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec -
> in org.apache.hadoop.security.TestGroupFallback
> Running org.apache.hadoop.security.TestUserGroupInformation
> Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.644 sec -
> in org.apache.hadoop.security.TestUserGroupInformation
> Running org.apache.hadoop.security.TestAuthenticationFilter
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.214 sec -
> in org.apache.hadoop.security.TestAuthenticationFilter
> Running org.apache.hadoop.security.TestCredentials
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.239 sec -
> in org.apache.hadoop.security.TestCredentials
> Running org.apache.hadoop.security.TestLdapGroupsMapping
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec -
> in org.apache.hadoop.security.TestLdapGroupsMapping
> Running org.apache.hadoop.security.TestUGIWithExternalKdc
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.037 sec -
> in org.apache.hadoop.security.TestUGIWithExternalKdc
> Running org.apache.hadoop.security.authorize.TestAccessControlList
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.253 sec -
> in org.apache.hadoop.security.authorize.TestAccessControlList
> Running org.apache.hadoop.security.authorize.TestProxyUsers
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.275 sec -
> in org.apache.hadoop.security.authorize.TestProxyUsers
> Running org.apache.hadoop.security.TestGroupsCaching
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec -
> in org.apache.hadoop.security.TestGroupsCaching
> Running org.apache.hadoop.security.token.TestToken
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.23 sec -
> in org.apache.hadoop.security.token.TestToken
> Running org.apache.hadoop.security.token.delegation.TestDelegationToken
> Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.089 sec
> - in org.apache.hadoop.security.token.delegation.TestDelegationToken
> Running org.apache.hadoop.security.TestSecurityUtil
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.381 sec -
> in org.apache.hadoop.security.TestSecurityUtil
> Running org.apache.hadoop.security.TestProxyUserFromEnv
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec -
> in org.apache.hadoop.security.TestProxyUserFromEnv
> Running org.apache.hadoop.ipc.TestCallQueueManager
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.371 sec -
> in org.apache.hadoop.ipc.TestCallQueueManager
> Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.393 sec -
> in org.apache.hadoop.ipc.TestMiniRPCBenchmark
> Running org.apache.hadoop.ipc.TestServer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec -
> in org.apache.hadoop.ipc.TestServer
> Running org.apache.hadoop.ipc.TestIdentityProviders
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.233 sec -
> in org.apache.hadoop.ipc.TestIdentityProviders
> Running org.apache.hadoop.ipc.TestSaslRPC
> Tests run: 85, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.071 sec
> - in org.apache.hadoop.ipc.TestSaslRPC
> Running org.apache.hadoop.ipc.TestRetryCache
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.175 sec -
> in org.apache.hadoop.ipc.TestRetryCache
> Running org.apache.hadoop.ipc.TestRPC
> Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.518 sec
> - in org.apache.hadoop.ipc.TestRPC
> Running org.apache.hadoop.ipc.TestIPC
> Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.761 sec
> - in org.apache.hadoop.ipc.TestIPC
> Running org.apache.hadoop.ipc.TestRetryCacheMetrics
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec -
> in org.apache.hadoop.ipc.TestRetryCacheMetrics
> Running org.apache.hadoop.ipc.TestProtoBufRpc
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.53 sec -
> in org.apache.hadoop.ipc.TestProtoBufRpc
> Running org.apache.hadoop.ipc.TestSocketFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.141 sec -
> in org.apache.hadoop.ipc.TestSocketFactory
> Running org.apache.hadoop.ipc.TestMultipleProtocolServer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.413 sec -
> in org.apache.hadoop.ipc.TestMultipleProtocolServer
> Running org.apache.hadoop.ipc.TestIPCServerResponder
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.174 sec -
> in org.apache.hadoop.ipc.TestIPCServerResponder
> Running org.apache.hadoop.ipc.TestRPCCallBenchmark
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.397 sec -
> in org.apache.hadoop.ipc.TestRPCCallBenchmark
> Running org.apache.hadoop.ipc.TestRPCCompatibility
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.681 sec -
> in org.apache.hadoop.ipc.TestRPCCompatibility
> Running org.apache.hadoop.util.TestLightWeightCache
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.24 sec -
> in org.apache.hadoop.util.TestLightWeightCache
> Running org.apache.hadoop.util.TestShutdownThreadsHelper
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.07 sec -
> in org.apache.hadoop.util.TestShutdownThreadsHelper
> Running org.apache.hadoop.util.TestVersionUtil
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.036 sec -
> in org.apache.hadoop.util.TestVersionUtil
> Running org.apache.hadoop.util.TestRunJar
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.056 sec -
> in org.apache.hadoop.util.TestRunJar
> Running org.apache.hadoop.util.TestStringUtils
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec -
> in org.apache.hadoop.util.TestStringUtils
> Running org.apache.hadoop.util.TestOptions
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.031 sec -
> in org.apache.hadoop.util.TestOptions
> Running org.apache.hadoop.util.TestShell
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.152 sec -
> in org.apache.hadoop.util.TestShell
> Running org.apache.hadoop.util.TestLineReader
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.037 sec -
> in org.apache.hadoop.util.TestLineReader
> Running org.apache.hadoop.util.TestIndexedSort
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.316 sec -
> in org.apache.hadoop.util.TestIndexedSort
> Running org.apache.hadoop.util.TestIdentityHashStore
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec -
> in org.apache.hadoop.util.TestIdentityHashStore
> Running org.apache.hadoop.util.TestNativeLibraryChecker
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec -
> in org.apache.hadoop.util.TestNativeLibraryChecker
> Running org.apache.hadoop.util.hash.TestHash
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec -
> in org.apache.hadoop.util.hash.TestHash
> Running org.apache.hadoop.util.TestDataChecksum
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec -
> in org.apache.hadoop.util.TestDataChecksum
> Running org.apache.hadoop.util.TestGenericsUtil
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.127 sec -
> in org.apache.hadoop.util.TestGenericsUtil
> Running org.apache.hadoop.util.TestNativeCodeLoader
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.068 sec -
> in org.apache.hadoop.util.TestNativeCodeLoader
> Running org.apache.hadoop.util.TestProtoUtil
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.util.TestProtoUtil
> Running org.apache.hadoop.util.TestDiskChecker
> Tests run: 14, Failures: 6, Errors: 0, Skipped: 0, Time elapsed: 0.515 sec
> <<< FAILURE! - in org.apache.hadoop.util.TestDiskChecker
> testCheckDir_notReadable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.022 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notReadable(TestDiskChecker.java:101)
>
> testCheckDir_notWritable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.018 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notWritable(TestDiskChecker.java:106)
>
> testCheckDir_notListable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.015 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notListable(TestDiskChecker.java:111)
>
> testCheckDir_notReadable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.001 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notReadable_local(TestDiskChecker.java:150)
>
> testCheckDir_notWritable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.002 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notWritable_local(TestDiskChecker.java:155)
>
> testCheckDir_notListable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.002 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notListable_local(TestDiskChecker.java:160)
>
> Running org.apache.hadoop.util.TestWinUtils
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.083 sec -
> in org.apache.hadoop.util.TestWinUtils
> Running org.apache.hadoop.util.TestStringInterner
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec -
> in org.apache.hadoop.util.TestStringInterner
> Running org.apache.hadoop.util.TestGSet
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.544 sec -
> in org.apache.hadoop.util.TestGSet
> Running org.apache.hadoop.util.TestSignalLogger
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec -
> in org.apache.hadoop.util.TestSignalLogger
> Running org.apache.hadoop.util.TestZKUtil
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec -
> in org.apache.hadoop.util.TestZKUtil
> Running org.apache.hadoop.util.TestAsyncDiskService
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec -
> in org.apache.hadoop.util.TestAsyncDiskService
> Running org.apache.hadoop.util.TestPureJavaCrc32
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.269 sec -
> in org.apache.hadoop.util.TestPureJavaCrc32
> Running org.apache.hadoop.util.TestHostsFileReader
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.091 sec -
> in org.apache.hadoop.util.TestHostsFileReader
> Running org.apache.hadoop.util.TestShutdownHookManager
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec -
> in org.apache.hadoop.util.TestShutdownHookManager
> Running org.apache.hadoop.util.TestReflectionUtils
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec -
> in org.apache.hadoop.util.TestReflectionUtils
> Running org.apache.hadoop.util.TestClassUtil
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.033 sec -
> in org.apache.hadoop.util.TestClassUtil
> Running org.apache.hadoop.util.TestJarFinder
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.302 sec -
> in org.apache.hadoop.util.TestJarFinder
> Running org.apache.hadoop.util.TestGenericOptionsParser
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.404 sec -
> in org.apache.hadoop.util.TestGenericOptionsParser
> Running org.apache.hadoop.util.TestLightWeightGSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec -
> in org.apache.hadoop.util.TestLightWeightGSet
> Running org.apache.hadoop.util.bloom.TestBloomFilters
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.264 sec -
> in org.apache.hadoop.util.bloom.TestBloomFilters
>
> Results :
>
> Failed tests:
>   TestZKFailoverController.testGracefulFailoverFailBecomingActive:484 Did
> not fail to graceful failover when target failed to become active!
>   TestZKFailoverController.testGracefulFailoverFailBecomingStandby:518
> expected:<1> but was:<0>
>
> TestZKFailoverController.testGracefulFailoverFailBecomingStandbyAndFailFence:540
> Failover should have failed when old node wont fence
>   TestTableMapping.testResolve:56 expected:</[rack1]> but
> was:</[default-rack]>
>   TestTableMapping.testTableCaching:79 expected:</[rack1]> but
> was:</[default-rack]>
>   TestTableMapping.testClearingCachedMappings:144 expected:</[rack1]> but
> was:</[default-rack]>
>   TestNetUtils.testNormalizeHostName:619 expected:<[192.168.12.37]> but
> was:<[UnknownHost]>
>
> TestStaticMapping.testCachingRelaysResolveQueries:219->assertMapSize:94->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93
> Expected two entries in the map Mapping: cached switch mapping relaying to
> static mapping with single switch = false
> Map:
>   192.168.12.37 -> /default-rack
> Nodes: 1
> Switches: 1
>  expected:<2> but was:<1>
>
> TestStaticMapping.testCachingCachesNegativeEntries:236->assertMapSize:94->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93
> Expected two entries in the map Mapping: cached switch mapping relaying to
> static mapping with single switch = false
> Map:
>   192.168.12.37 -> /default-rack
> Nodes: 1
> Switches: 1
>  expected:<2> but was:<1>
>   TestLocalDirAllocator.test0:141->validateTempDirCreation:110 Checking for
> build/test/temp/RELATIVE1 in
> build/test/temp/RELATIVE0/block9179437685378573554.tmp - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:110
> Checking for build/test/temp/RELATIVE2 in
> build/test/temp/RELATIVE1/block7291734072352417917.tmp - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:220->validateTempDirCreation:110
> Checking for build/test/temp/RELATIVE3 in
> build/test/temp/RELATIVE4/block4513557287751895920.tmp - FAILED!
>   TestLocalDirAllocator.test0:141->validateTempDirCreation:110 Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block8523050700077504235.tmp
> - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:164->validateTempDirCreation:110
> Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block200624031350129544.tmp
> - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:219->validateTempDirCreation:110
> Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE3
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE4/block8868024598532665020.tmp
> - FAILED!
>   TestLocalDirAllocator.test0:142->validateTempDirCreation:110 Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block7318078621961387478.tmp
> - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:110
> Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block3298540567692029628.tmp
> - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:220->validateTempDirCreation:110
> Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED3
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED4/block6014893019370084121.tmp
> - FAILED!
>
> TestFileUtil.testFailFullyDelete:411->validateAndSetWritablePermissions:385
> The directory xSubDir *should* not have been deleted. expected:<true> but
> was:<false>
>
> TestFileUtil.testFailFullyDeleteContents:492->validateAndSetWritablePermissions:385
> The directory xSubDir *should* not have been deleted. expected:<true> but
> was:<false>
>   TestFileUtil.testGetDU:592 null
>
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:289
> Should throw IOException
>   TestLocalFileSystem.testReportChecksumFailure:356 null
>
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:289
> Should throw IOException
>   TestDiskChecker.testCheckDir_notReadable:101->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notWritable:106->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notListable:111->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notReadable_local:150->_checkDirs:174
> checkDir success
>   TestDiskChecker.testCheckDir_notWritable_local:155->_checkDirs:174
> checkDir success
>   TestDiskChecker.testCheckDir_notListable_local:160->_checkDirs:174
> checkDir success
>
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:444->Object.wait:-2 »  test
> time...
>
> Tests run: 2285, Failures: 30, Errors: 1, Skipped: 104
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Main ................................ SUCCESS [0.678s]
> [INFO] Apache Hadoop Project POM ......................... SUCCESS [0.247s]
> [INFO] Apache Hadoop Annotations ......................... SUCCESS [0.780s]
> [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [0.221s]
> [INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.087s]
> [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [0.773s]
> [INFO] Apache Hadoop MiniKDC ............................. SUCCESS
> [1:58.825s]
> [INFO] Apache Hadoop Auth ................................ SUCCESS
> [6:16.248s]
> [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [7.347s]
> [INFO] Apache Hadoop Common .............................. FAILURE
> [11:49.512s]
> [INFO] Apache Hadoop NFS ................................. SKIPPED
> [INFO] Apache Hadoop Common Project ...................... SKIPPED
> [INFO] Apache Hadoop HDFS ................................ SKIPPED
> [INFO] Apache Hadoop HttpFS .............................. SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
> [INFO] Apache Hadoop HDFS Project ........................ SKIPPED
> [INFO] hadoop-yarn ....................................... SKIPPED
> [INFO] hadoop-yarn-api ................................... SKIPPED
> [INFO] hadoop-yarn-common ................................ SKIPPED
> [INFO] hadoop-yarn-server ................................ SKIPPED
> [INFO] hadoop-yarn-server-common ......................... SKIPPED
> [INFO] hadoop-yarn-server-nodemanager .................... SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ...................... SKIPPED
> [INFO] hadoop-yarn-server-applicationhistoryservice ...... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager ................ SKIPPED
> [INFO] hadoop-yarn-server-tests .......................... SKIPPED
> [INFO] hadoop-yarn-client ................................ SKIPPED
> [INFO] hadoop-yarn-applications .......................... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell ......... SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SKIPPED
> [INFO] hadoop-yarn-site .................................. SKIPPED
> [INFO] hadoop-yarn-project ............................... SKIPPED
> [INFO] hadoop-mapreduce-client ........................... SKIPPED
> [INFO] hadoop-mapreduce-client-core ...................... SKIPPED
> [INFO] hadoop-mapreduce-client-common .................... SKIPPED
> [INFO] hadoop-mapreduce-client-shuffle ................... SKIPPED
> [INFO] hadoop-mapreduce-client-app ....................... SKIPPED
> [INFO] hadoop-mapreduce-client-hs ........................ SKIPPED
> [INFO] hadoop-mapreduce-client-jobclient ................. SKIPPED
> [INFO] hadoop-mapreduce-client-hs-plugins ................ SKIPPED
> [INFO] Apache Hadoop MapReduce Examples .................. SKIPPED
> [INFO] hadoop-mapreduce .................................. SKIPPED
> [INFO] Apache Hadoop MapReduce Streaming ................. SKIPPED
> [INFO] Apache Hadoop Distributed Copy .................... SKIPPED
> [INFO] Apache Hadoop Archives ............................ SKIPPED
> [INFO] Apache Hadoop Rumen ............................... SKIPPED
> [INFO] Apache Hadoop Gridmix ............................. SKIPPED
> [INFO] Apache Hadoop Data Join ........................... SKIPPED
> [INFO] Apache Hadoop Extras .............................. SKIPPED
> [INFO] Apache Hadoop Pipes ............................... SKIPPED
> [INFO] Apache Hadoop OpenStack support ................... SKIPPED
> [INFO] Apache Hadoop Client .............................. SKIPPED
> [INFO] Apache Hadoop Mini-Cluster ........................ SKIPPED
> [INFO] Apache Hadoop Scheduler Load Simulator ............ SKIPPED
> [INFO] Apache Hadoop Tools Dist .......................... SKIPPED
> [INFO] Apache Hadoop Tools ............................... SKIPPED
> [INFO] Apache Hadoop Distribution ........................ SKIPPED
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 20:15.984s
> [INFO] Finished at: Sun Aug 03 18:00:44 HKT 2014
> [INFO] Final Memory: 56M/900M
> [INFO]
> ------------------------------------------------------------------------
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on
> project hadoop-common: There are test failures.
> [ERROR]
> [ERROR] Please refer to
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/target/surefire-reports
> for the individual test results.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please
> read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the
> command
> [ERROR]   mvn <goals> -rf :hadoop-common
>
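
One observation, not a definitive diagnosis: several of the failures above
show the address 192.168.12.37 turning into "UnknownHost"
(TestNetUtils.testNormalizeHostName, TestStaticMapping), which usually points
to a hostname/DNS resolution problem on the build machine rather than to the
Hadoop source itself. Assuming a Linux build host, a quick check is:

  hostname
  hostname -f
  getent hosts $(hostname)

If the machine's own name does not resolve, adding an entry for it in
/etc/hosts (pointing at one of its real addresses) often clears this group
of failures.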



-- 
- Tsuyoshi

Re: Compile Hadoop 2.4.1 (with Tests and Without Tests)

Posted by Tsuyoshi OZAWA <oz...@gmail.com>.
Hi,

Unfortunately, sometimes we face unexpected test failures. Please
check whether the problem has already been reported or resolved in
Hadoop's JIRA projects:

* https://issues.apache.org/jira/browse/HADOOP
* https://issues.apache.org/jira/browse/HDFS
* https://issues.apache.org/jira/browse/MAPREDUCE
* https://issues.apache.org/jira/browse/YARN

If not, please file it as a new JIRA issue.
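
As a rough sketch of how to investigate (the module and test names below are
simply taken from your log, so adjust them for your case), you can resume the
build from the failing module, re-run a single failing test class to collect
logs for a JIRA report, or run the full build while tolerating test failures:

  # resume the reactor from the module that failed
  mvn install -rf :hadoop-common

  # re-run only one failing test class inside hadoop-common
  cd hadoop-common-project/hadoop-common
  mvn test -Dtest=TestNetUtils

  # run the tests, but do not fail the build on test failures
  mvn clean install -Dmaven.test.failure.ignore=true

In general, the artifacts produced by "mvn clean install -DskipTests" are the
same binaries; running the tests does not change what gets built.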

Thanks,
- Tsuyoshi

On Sun, Aug 3, 2014 at 7:32 PM, Arthur.hk.chan@gmail.com
<ar...@gmail.com> wrote:
> Hi,
>
> I am trying to compile Hadoop 2.4.1.
>
> If I run "mvn clean install -DskipTests", the compilation is GOOD.
> However, if I run "mvn clean install", i.e. without skipping the tests, it
> returned "Failures".
>
> Can anyone please advise what should be prepared before running the unit
> tests during compilation? For example, from the error log I found it used
> 192.168.12.37, but this is not one of my local IPs; should I change some
> configuration file? Any ideas?
> On the other hand, can I use the compiled code from the GOOD compilation and
> just ignore the failed tests?
>
> Please advise!!
>
> Regards
> Arthur
>
>
>
>
> Compilation results:
> run "mvm clean install -DskipTests", the compilation is GOOD,
> =====
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Main ................................ SUCCESS [1.756s]
> [INFO] Apache Hadoop Project POM ......................... SUCCESS [0.586s]
> [INFO] Apache Hadoop Annotations ......................... SUCCESS [1.282s]
> [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [0.257s]
> [INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.136s]
> [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [1.189s]
> [INFO] Apache Hadoop MiniKDC ............................. SUCCESS [0.837s]
> [INFO] Apache Hadoop Auth ................................ SUCCESS [0.835s]
> [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [0.614s]
> [INFO] Apache Hadoop Common .............................. SUCCESS [9.020s]
> [INFO] Apache Hadoop NFS ................................. SUCCESS [9.341s]
> [INFO] Apache Hadoop Common Project ...................... SUCCESS [0.013s]
> [INFO] Apache Hadoop HDFS ................................ SUCCESS
> [1:11.329s]
> [INFO] Apache Hadoop HttpFS .............................. SUCCESS [1.943s]
> [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [8.236s]
> [INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [0.181s]
> [INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.014s]
> [INFO] hadoop-yarn ....................................... SUCCESS [0.045s]
> [INFO] hadoop-yarn-api ................................... SUCCESS [3.080s]
> [INFO] hadoop-yarn-common ................................ SUCCESS [3.995s]
> [INFO] hadoop-yarn-server ................................ SUCCESS [0.036s]
> [INFO] hadoop-yarn-server-common ......................... SUCCESS [0.406s]
> [INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [7.874s]
> [INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [0.185s]
> [INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [2.766s]
> [INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [0.975s]
> [INFO] hadoop-yarn-server-tests .......................... SUCCESS [0.260s]
> [INFO] hadoop-yarn-client ................................ SUCCESS [0.401s]
> [INFO] hadoop-yarn-applications .......................... SUCCESS [0.012s]
> [INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [0.194s]
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [0.157s]
> [INFO] hadoop-yarn-site .................................. SUCCESS [0.028s]
> [INFO] hadoop-yarn-project ............................... SUCCESS [0.030s]
> [INFO] hadoop-mapreduce-client ........................... SUCCESS [0.027s]
> [INFO] hadoop-mapreduce-client-core ...................... SUCCESS [1.384s]
> [INFO] hadoop-mapreduce-client-common .................... SUCCESS [1.167s]
> [INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [0.151s]
> [INFO] hadoop-mapreduce-client-app ....................... SUCCESS [0.692s]
> [INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [0.521s]
> [INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [9.581s]
> [INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [0.105s]
> [INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [0.288s]
> [INFO] hadoop-mapreduce .................................. SUCCESS [0.031s]
> [INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [2.485s]
> [INFO] Apache Hadoop Distributed Copy .................... SUCCESS [14.204s]
> [INFO] Apache Hadoop Archives ............................ SUCCESS [0.147s]
> [INFO] Apache Hadoop Rumen ............................... SUCCESS [0.283s]
> [INFO] Apache Hadoop Gridmix ............................. SUCCESS [0.266s]
> [INFO] Apache Hadoop Data Join ........................... SUCCESS [0.109s]
> [INFO] Apache Hadoop Extras .............................. SUCCESS [0.173s]
> [INFO] Apache Hadoop Pipes ............................... SUCCESS [0.013s]
> [INFO] Apache Hadoop OpenStack support ................... SUCCESS [0.292s]
> [INFO] Apache Hadoop Client .............................. SUCCESS [0.093s]
> [INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.052s]
> [INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [1.123s]
> [INFO] Apache Hadoop Tools Dist .......................... SUCCESS [0.109s]
> [INFO] Apache Hadoop Tools ............................... SUCCESS [0.012s]
> [INFO] Apache Hadoop Distribution ........................ SUCCESS [0.038s]
> [INFO] ------------------------------------------------------------------------
>
>
>
>
> However, if I run "mvn clean install", i.e. with Tests, it returned
> "Failures"
> ====
> Running org.apache.hadoop.fs.viewfs.TestChRootedFs
> Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.626 sec -
> in org.apache.hadoop.fs.viewfs.TestChRootedFs
> Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> Tests run: 49, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.099 sec
> <<< FAILURE! - in
> org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem)
> Time elapsed: 0.028 sec  <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:93)
> at
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:289)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>
> Running
> org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
> Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.282 sec -
> in
> org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
> Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
> Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.162 sec -
> in org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
> Running org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.434 sec -
> in org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
> Running org.apache.hadoop.fs.TestFileStatus
> Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec -
> in org.apache.hadoop.fs.TestFileStatus
> Running org.apache.hadoop.fs.TestFileContextResolveAfs
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec -
> in org.apache.hadoop.fs.TestFileContextResolveAfs
> Running org.apache.hadoop.fs.TestGlobPattern
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.034 sec -
> in org.apache.hadoop.fs.TestGlobPattern
> Running org.apache.hadoop.fs.s3native.TestJets3tNativeFileSystemStore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.136 sec -
> in org.apache.hadoop.fs.s3native.TestJets3tNativeFileSystemStore
> Running org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
> Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.498 sec -
> in org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
> Running org.apache.hadoop.fs.TestPath
> Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.418 sec -
> in org.apache.hadoop.fs.TestPath
> Running org.apache.hadoop.fs.TestTrash
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.588 sec -
> in org.apache.hadoop.fs.TestTrash
> Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.37 sec -
> in org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
> Running org.apache.hadoop.fs.TestFileContextDeleteOnExit
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec -
> in org.apache.hadoop.fs.TestFileContextDeleteOnExit
> Running org.apache.hadoop.fs.TestAfsCheckPath
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec -
> in org.apache.hadoop.fs.TestAfsCheckPath
> Running org.apache.hadoop.fs.TestLocalFileSystem
> Tests run: 18, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 0.601 sec
> <<< FAILURE! - in org.apache.hadoop.fs.TestLocalFileSystem
> testReportChecksumFailure(org.apache.hadoop.fs.TestLocalFileSystem)  Time
> elapsed: 0.074 sec  <<< FAILURE!
> java.lang.AssertionError: null
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertTrue(Assert.java:54)
> at
> org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure(TestLocalFileSystem.java:356)
>
> Running org.apache.hadoop.fs.permission.TestAcl
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.039 sec -
> in org.apache.hadoop.fs.permission.TestAcl
> Running org.apache.hadoop.fs.permission.TestFsPermission
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec -
> in org.apache.hadoop.fs.permission.TestFsPermission
> Running org.apache.hadoop.fs.TestFileSystemCanonicalization
> Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec -
> in org.apache.hadoop.fs.TestFileSystemCanonicalization
> Running org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
> Tests run: 49, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.775 sec
> <<< FAILURE! - in org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
> testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem)
> Time elapsed: 0.012 sec  <<< FAILURE!
> java.lang.AssertionError: Should throw IOException
> at org.junit.Assert.fail(Assert.java:93)
> at
> org.apache.hadoop.fs.FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir(FSMainOperationsBaseTest.java:289)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
> at
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
> at
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>
> Running org.apache.hadoop.fs.TestDFVariations
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec -
> in org.apache.hadoop.fs.TestDFVariations
> Running org.apache.hadoop.fs.TestDelegationTokenRenewer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.323 sec -
> in org.apache.hadoop.fs.TestDelegationTokenRenewer
> Running org.apache.hadoop.fs.TestFileSystemInitialization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.244 sec -
> in org.apache.hadoop.fs.TestFileSystemInitialization
> Running org.apache.hadoop.fs.TestGetFileBlockLocations
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.331 sec -
> in org.apache.hadoop.fs.TestGetFileBlockLocations
> Running org.apache.hadoop.fs.TestFileSystemCaching
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.463 sec -
> in org.apache.hadoop.fs.TestFileSystemCaching
> Running org.apache.hadoop.fs.TestChecksumFileSystem
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.431 sec -
> in org.apache.hadoop.fs.TestChecksumFileSystem
> Running org.apache.hadoop.fs.TestLocalFsFCStatistics
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.291 sec -
> in org.apache.hadoop.fs.TestLocalFsFCStatistics
> Running org.apache.hadoop.fs.TestLocalFileSystemPermission
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.285 sec -
> in org.apache.hadoop.fs.TestLocalFileSystemPermission
> Running org.apache.hadoop.fs.TestFcLocalFsPermission
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.359 sec -
> in org.apache.hadoop.fs.TestFcLocalFsPermission
> Running org.apache.hadoop.fs.TestDU
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.243 sec -
> in org.apache.hadoop.fs.TestDU
> Running org.apache.hadoop.fs.s3.TestINode
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec -
> in org.apache.hadoop.fs.s3.TestINode
> Running org.apache.hadoop.fs.s3.TestS3FileSystem
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.241 sec -
> in org.apache.hadoop.fs.s3.TestS3FileSystem
> Running org.apache.hadoop.fs.s3.TestS3Credentials
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.061 sec -
> in org.apache.hadoop.fs.s3.TestS3Credentials
> Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
> Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.531 sec -
> in org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
> Running org.apache.hadoop.fs.TestFileSystemTokens
> Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.276 sec -
> in org.apache.hadoop.fs.TestFileSystemTokens
> Running org.apache.hadoop.metrics.ganglia.TestGangliaContext
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.075 sec -
> in org.apache.hadoop.metrics.ganglia.TestGangliaContext
> Running org.apache.hadoop.metrics.TestMetricsServlet
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec -
> in org.apache.hadoop.metrics.TestMetricsServlet
> Running org.apache.hadoop.metrics.spi.TestOutputRecord
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.016 sec -
> in org.apache.hadoop.metrics.spi.TestOutputRecord
> Running org.apache.hadoop.io.TestVersionedWritable
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 sec -
> in org.apache.hadoop.io.TestVersionedWritable
> Running org.apache.hadoop.io.TestEnumSetWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec -
> in org.apache.hadoop.io.TestEnumSetWritable
> Running org.apache.hadoop.io.TestUTF8
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.35 sec -
> in org.apache.hadoop.io.TestUTF8
> Running org.apache.hadoop.io.TestGenericWritable
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.215 sec -
> in org.apache.hadoop.io.TestGenericWritable
> Running org.apache.hadoop.io.TestBoundedByteArrayOutputStream
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec -
> in org.apache.hadoop.io.TestBoundedByteArrayOutputStream
> Running org.apache.hadoop.io.retry.TestRetryProxy
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.069 sec -
> in org.apache.hadoop.io.retry.TestRetryProxy
> Running org.apache.hadoop.io.retry.TestFailoverProxy
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.551 sec -
> in org.apache.hadoop.io.retry.TestFailoverProxy
> Running org.apache.hadoop.io.TestArrayWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.io.TestArrayWritable
> Running
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
> Tests run: 13, Failures: 0, Errors: 0, Skipped: 13, Time elapsed: 0.086 sec
> - in org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCodec
> Tests run: 24, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 62.132 sec
> - in org.apache.hadoop.io.compress.TestCodec
> Running org.apache.hadoop.io.compress.lz4.TestLz4CompressorDecompressor
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 12, Time elapsed: 0.08 sec -
> in org.apache.hadoop.io.compress.lz4.TestLz4CompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCompressorDecompressor
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec -
> in org.apache.hadoop.io.compress.TestCompressorDecompressor
> Running org.apache.hadoop.io.compress.TestCodecFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec -
> in org.apache.hadoop.io.compress.TestCodecFactory
> Running org.apache.hadoop.io.compress.TestBlockDecompressorStream
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.032 sec -
> in org.apache.hadoop.io.compress.TestBlockDecompressorStream
> Running org.apache.hadoop.io.compress.TestCodecPool
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec -
> in org.apache.hadoop.io.compress.TestCodecPool
> Running org.apache.hadoop.io.compress.zlib.TestZlibCompressorDecompressor
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.086 sec -
> in org.apache.hadoop.io.compress.zlib.TestZlibCompressorDecompressor
> Running org.apache.hadoop.io.TestSecureIOUtils
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.308 sec -
> in org.apache.hadoop.io.TestSecureIOUtils
> Running org.apache.hadoop.io.TestBooleanWritable
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.033 sec -
> in org.apache.hadoop.io.TestBooleanWritable
> Running org.apache.hadoop.io.TestMapWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec -
> in org.apache.hadoop.io.TestMapWritable
> Running org.apache.hadoop.io.TestTextNonUTF8
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec -
> in org.apache.hadoop.io.TestTextNonUTF8
> Running org.apache.hadoop.io.TestWritableUtils
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec -
> in org.apache.hadoop.io.TestWritableUtils
> Running org.apache.hadoop.io.TestObjectWritableProtos
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec -
> in org.apache.hadoop.io.TestObjectWritableProtos
> Running org.apache.hadoop.io.TestBloomMapFile
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.579 sec -
> in org.apache.hadoop.io.TestBloomMapFile
> Running org.apache.hadoop.io.TestSortedMapWritable
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec -
> in org.apache.hadoop.io.TestSortedMapWritable
> Running org.apache.hadoop.io.TestDefaultStringifier
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.153 sec -
> in org.apache.hadoop.io.TestDefaultStringifier
> Running org.apache.hadoop.io.TestWritableName
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.io.TestWritableName
> Running org.apache.hadoop.io.TestSetFile
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.067 sec -
> in org.apache.hadoop.io.TestSetFile
> Running org.apache.hadoop.io.TestMD5Hash
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec -
> in org.apache.hadoop.io.TestMD5Hash
> Running org.apache.hadoop.io.TestSequenceFileSerialization
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.297 sec -
> in org.apache.hadoop.io.TestSequenceFileSerialization
> Running org.apache.hadoop.io.TestDataByteBuffers
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.179 sec -
> in org.apache.hadoop.io.TestDataByteBuffers
> Running org.apache.hadoop.io.TestSequenceFileSync
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.398 sec -
> in org.apache.hadoop.io.TestSequenceFileSync
> Running org.apache.hadoop.io.TestArrayFile
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.286 sec -
> in org.apache.hadoop.io.TestArrayFile
> Running org.apache.hadoop.io.TestArrayPrimitiveWritable
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.054 sec -
> in org.apache.hadoop.io.TestArrayPrimitiveWritable
> Running org.apache.hadoop.io.nativeio.TestSharedFileDescriptorFactory
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 0.074 sec -
> in org.apache.hadoop.io.nativeio.TestSharedFileDescriptorFactory
> Running org.apache.hadoop.io.nativeio.TestNativeIO
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 17, Time elapsed: 0.088 sec
> - in org.apache.hadoop.io.nativeio.TestNativeIO
> Running org.apache.hadoop.io.TestText
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.327 sec -
> in org.apache.hadoop.io.TestText
> Running org.apache.hadoop.io.file.tfile.TestTFileByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.159 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.347 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileComparators
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.29 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileComparators
> Running org.apache.hadoop.io.file.tfile.TestTFileSplit
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.068 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSplit
> Running org.apache.hadoop.io.file.tfile.TestTFileStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.834 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileStreams
> Running
> org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.743 sec -
> in
> org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
> Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
> Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.138 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsStreams
> Running org.apache.hadoop.io.file.tfile.TestTFile
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.559 sec -
> in org.apache.hadoop.io.file.tfile.TestTFile
> Running org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.063 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
> Running org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.185 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileSeek
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.384 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileSeek
> Running org.apache.hadoop.io.file.tfile.TestVLong
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.78 sec -
> in org.apache.hadoop.io.file.tfile.TestVLong
> Running org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileLzoCodecsByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
> Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.718 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
> Running org.apache.hadoop.io.file.tfile.TestTFileComparator2
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.592 sec -
> in org.apache.hadoop.io.file.tfile.TestTFileComparator2
> Running org.apache.hadoop.io.TestBytesWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.035 sec -
> in org.apache.hadoop.io.TestBytesWritable
> Running org.apache.hadoop.io.TestWritable
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 sec -
> in org.apache.hadoop.io.TestWritable
> Running org.apache.hadoop.io.TestIOUtils
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.181 sec -
> in org.apache.hadoop.io.TestIOUtils
> Running org.apache.hadoop.io.serializer.TestWritableSerialization
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec -
> in org.apache.hadoop.io.serializer.TestWritableSerialization
> Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.279 sec -
> in org.apache.hadoop.io.serializer.avro.TestAvroSerialization
> Running org.apache.hadoop.io.serializer.TestSerializationFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.149 sec -
> in org.apache.hadoop.io.serializer.TestSerializationFactory
> Running org.apache.hadoop.io.TestMapFile
> Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.652 sec -
> in org.apache.hadoop.io.TestMapFile
> Running org.apache.hadoop.io.TestSequenceFile
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.265 sec -
> in org.apache.hadoop.io.TestSequenceFile
> Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.952 sec -
> in org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
> Running org.apache.hadoop.security.ssl.TestSSLFactory
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.031 sec -
> in org.apache.hadoop.security.ssl.TestSSLFactory
> Running org.apache.hadoop.security.TestUserFromEnv
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec -
> in org.apache.hadoop.security.TestUserFromEnv
> Running org.apache.hadoop.security.TestJNIGroupsMapping
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.072 sec -
> in org.apache.hadoop.security.TestJNIGroupsMapping
> Running org.apache.hadoop.security.TestDoAsEffectiveUser
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.637 sec -
> in org.apache.hadoop.security.TestDoAsEffectiveUser
> Running org.apache.hadoop.security.TestGroupFallback
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec -
> in org.apache.hadoop.security.TestGroupFallback
> Running org.apache.hadoop.security.TestUserGroupInformation
> Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.644 sec -
> in org.apache.hadoop.security.TestUserGroupInformation
> Running org.apache.hadoop.security.TestAuthenticationFilter
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.214 sec -
> in org.apache.hadoop.security.TestAuthenticationFilter
> Running org.apache.hadoop.security.TestCredentials
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.239 sec -
> in org.apache.hadoop.security.TestCredentials
> Running org.apache.hadoop.security.TestLdapGroupsMapping
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec -
> in org.apache.hadoop.security.TestLdapGroupsMapping
> Running org.apache.hadoop.security.TestUGIWithExternalKdc
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.037 sec -
> in org.apache.hadoop.security.TestUGIWithExternalKdc
> Running org.apache.hadoop.security.authorize.TestAccessControlList
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.253 sec -
> in org.apache.hadoop.security.authorize.TestAccessControlList
> Running org.apache.hadoop.security.authorize.TestProxyUsers
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.275 sec -
> in org.apache.hadoop.security.authorize.TestProxyUsers
> Running org.apache.hadoop.security.TestGroupsCaching
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.199 sec -
> in org.apache.hadoop.security.TestGroupsCaching
> Running org.apache.hadoop.security.token.TestToken
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.23 sec -
> in org.apache.hadoop.security.token.TestToken
> Running org.apache.hadoop.security.token.delegation.TestDelegationToken
> Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.089 sec
> - in org.apache.hadoop.security.token.delegation.TestDelegationToken
> Running org.apache.hadoop.security.TestSecurityUtil
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.381 sec -
> in org.apache.hadoop.security.TestSecurityUtil
> Running org.apache.hadoop.security.TestProxyUserFromEnv
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec -
> in org.apache.hadoop.security.TestProxyUserFromEnv
> Running org.apache.hadoop.ipc.TestCallQueueManager
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.371 sec -
> in org.apache.hadoop.ipc.TestCallQueueManager
> Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.393 sec -
> in org.apache.hadoop.ipc.TestMiniRPCBenchmark
> Running org.apache.hadoop.ipc.TestServer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec -
> in org.apache.hadoop.ipc.TestServer
> Running org.apache.hadoop.ipc.TestIdentityProviders
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.233 sec -
> in org.apache.hadoop.ipc.TestIdentityProviders
> Running org.apache.hadoop.ipc.TestSaslRPC
> Tests run: 85, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.071 sec
> - in org.apache.hadoop.ipc.TestSaslRPC
> Running org.apache.hadoop.ipc.TestRetryCache
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.175 sec -
> in org.apache.hadoop.ipc.TestRetryCache
> Running org.apache.hadoop.ipc.TestRPC
> Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.518 sec
> - in org.apache.hadoop.ipc.TestRPC
> Running org.apache.hadoop.ipc.TestIPC
> Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.761 sec
> - in org.apache.hadoop.ipc.TestIPC
> Running org.apache.hadoop.ipc.TestRetryCacheMetrics
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec -
> in org.apache.hadoop.ipc.TestRetryCacheMetrics
> Running org.apache.hadoop.ipc.TestProtoBufRpc
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.53 sec -
> in org.apache.hadoop.ipc.TestProtoBufRpc
> Running org.apache.hadoop.ipc.TestSocketFactory
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.141 sec -
> in org.apache.hadoop.ipc.TestSocketFactory
> Running org.apache.hadoop.ipc.TestMultipleProtocolServer
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.413 sec -
> in org.apache.hadoop.ipc.TestMultipleProtocolServer
> Running org.apache.hadoop.ipc.TestIPCServerResponder
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.174 sec -
> in org.apache.hadoop.ipc.TestIPCServerResponder
> Running org.apache.hadoop.ipc.TestRPCCallBenchmark
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.397 sec -
> in org.apache.hadoop.ipc.TestRPCCallBenchmark
> Running org.apache.hadoop.ipc.TestRPCCompatibility
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.681 sec -
> in org.apache.hadoop.ipc.TestRPCCompatibility
> Running org.apache.hadoop.util.TestLightWeightCache
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.24 sec -
> in org.apache.hadoop.util.TestLightWeightCache
> Running org.apache.hadoop.util.TestShutdownThreadsHelper
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.07 sec -
> in org.apache.hadoop.util.TestShutdownThreadsHelper
> Running org.apache.hadoop.util.TestVersionUtil
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.036 sec -
> in org.apache.hadoop.util.TestVersionUtil
> Running org.apache.hadoop.util.TestRunJar
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.056 sec -
> in org.apache.hadoop.util.TestRunJar
> Running org.apache.hadoop.util.TestStringUtils
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec -
> in org.apache.hadoop.util.TestStringUtils
> Running org.apache.hadoop.util.TestOptions
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.031 sec -
> in org.apache.hadoop.util.TestOptions
> Running org.apache.hadoop.util.TestShell
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.152 sec -
> in org.apache.hadoop.util.TestShell
> Running org.apache.hadoop.util.TestLineReader
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.037 sec -
> in org.apache.hadoop.util.TestLineReader
> Running org.apache.hadoop.util.TestIndexedSort
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.316 sec -
> in org.apache.hadoop.util.TestIndexedSort
> Running org.apache.hadoop.util.TestIdentityHashStore
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec -
> in org.apache.hadoop.util.TestIdentityHashStore
> Running org.apache.hadoop.util.TestNativeLibraryChecker
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec -
> in org.apache.hadoop.util.TestNativeLibraryChecker
> Running org.apache.hadoop.util.hash.TestHash
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec -
> in org.apache.hadoop.util.hash.TestHash
> Running org.apache.hadoop.util.TestDataChecksum
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec -
> in org.apache.hadoop.util.TestDataChecksum
> Running org.apache.hadoop.util.TestGenericsUtil
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.127 sec -
> in org.apache.hadoop.util.TestGenericsUtil
> Running org.apache.hadoop.util.TestNativeCodeLoader
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.068 sec -
> in org.apache.hadoop.util.TestNativeCodeLoader
> Running org.apache.hadoop.util.TestProtoUtil
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.06 sec -
> in org.apache.hadoop.util.TestProtoUtil
> Running org.apache.hadoop.util.TestDiskChecker
> Tests run: 14, Failures: 6, Errors: 0, Skipped: 0, Time elapsed: 0.515 sec
> <<< FAILURE! - in org.apache.hadoop.util.TestDiskChecker
> testCheckDir_notReadable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.022 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notReadable(TestDiskChecker.java:101)
>
> testCheckDir_notWritable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.018 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notWritable(TestDiskChecker.java:106)
>
> testCheckDir_notListable(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.015 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:126)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notListable(TestDiskChecker.java:111)
>
> testCheckDir_notReadable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.001 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notReadable_local(TestDiskChecker.java:150)
>
> testCheckDir_notWritable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.002 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notWritable_local(TestDiskChecker.java:155)
>
> testCheckDir_notListable_local(org.apache.hadoop.util.TestDiskChecker)  Time
> elapsed: 0.002 sec  <<< FAILURE!
> java.lang.AssertionError: checkDir success
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:174)
> at
> org.apache.hadoop.util.TestDiskChecker.testCheckDir_notListable_local(TestDiskChecker.java:160)
>
> Running org.apache.hadoop.util.TestWinUtils
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.083 sec -
> in org.apache.hadoop.util.TestWinUtils
> Running org.apache.hadoop.util.TestStringInterner
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec -
> in org.apache.hadoop.util.TestStringInterner
> Running org.apache.hadoop.util.TestGSet
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.544 sec -
> in org.apache.hadoop.util.TestGSet
> Running org.apache.hadoop.util.TestSignalLogger
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec -
> in org.apache.hadoop.util.TestSignalLogger
> Running org.apache.hadoop.util.TestZKUtil
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec -
> in org.apache.hadoop.util.TestZKUtil
> Running org.apache.hadoop.util.TestAsyncDiskService
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec -
> in org.apache.hadoop.util.TestAsyncDiskService
> Running org.apache.hadoop.util.TestPureJavaCrc32
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.269 sec -
> in org.apache.hadoop.util.TestPureJavaCrc32
> Running org.apache.hadoop.util.TestHostsFileReader
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.091 sec -
> in org.apache.hadoop.util.TestHostsFileReader
> Running org.apache.hadoop.util.TestShutdownHookManager
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec -
> in org.apache.hadoop.util.TestShutdownHookManager
> Running org.apache.hadoop.util.TestReflectionUtils
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.187 sec -
> in org.apache.hadoop.util.TestReflectionUtils
> Running org.apache.hadoop.util.TestClassUtil
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.033 sec -
> in org.apache.hadoop.util.TestClassUtil
> Running org.apache.hadoop.util.TestJarFinder
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.302 sec -
> in org.apache.hadoop.util.TestJarFinder
> Running org.apache.hadoop.util.TestGenericOptionsParser
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.404 sec -
> in org.apache.hadoop.util.TestGenericOptionsParser
> Running org.apache.hadoop.util.TestLightWeightGSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec -
> in org.apache.hadoop.util.TestLightWeightGSet
> Running org.apache.hadoop.util.bloom.TestBloomFilters
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.264 sec -
> in org.apache.hadoop.util.bloom.TestBloomFilters
>
> Results :
>
> Failed tests:
>   TestZKFailoverController.testGracefulFailoverFailBecomingActive:484 Did
> not fail to graceful failover when target failed to become active!
>   TestZKFailoverController.testGracefulFailoverFailBecomingStandby:518
> expected:<1> but was:<0>
>
> TestZKFailoverController.testGracefulFailoverFailBecomingStandbyAndFailFence:540
> Failover should have failed when old node wont fence
>   TestTableMapping.testResolve:56 expected:</[rack1]> but
> was:</[default-rack]>
>   TestTableMapping.testTableCaching:79 expected:</[rack1]> but
> was:</[default-rack]>
>   TestTableMapping.testClearingCachedMappings:144 expected:</[rack1]> but
> was:</[default-rack]>
>   TestNetUtils.testNormalizeHostName:619 expected:<[192.168.12.37]> but
> was:<[UnknownHost]>
>
> TestStaticMapping.testCachingRelaysResolveQueries:219->assertMapSize:94->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93
> Expected two entries in the map Mapping: cached switch mapping relaying to
> static mapping with single switch = false
> Map:
>   192.168.12.37 -> /default-rack
> Nodes: 1
> Switches: 1
>  expected:<2> but was:<1>
>
> TestStaticMapping.testCachingCachesNegativeEntries:236->assertMapSize:94->Assert.assertEquals:472->Assert.assertEquals:128->Assert.failNotEquals:647->Assert.fail:93
> Expected two entries in the map Mapping: cached switch mapping relaying to
> static mapping with single switch = false
> Map:
>   192.168.12.37 -> /default-rack
> Nodes: 1
> Switches: 1
>  expected:<2> but was:<1>
>   TestLocalDirAllocator.test0:141->validateTempDirCreation:110 Checking for
> build/test/temp/RELATIVE1 in
> build/test/temp/RELATIVE0/block9179437685378573554.tmp - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:110
> Checking for build/test/temp/RELATIVE2 in
> build/test/temp/RELATIVE1/block7291734072352417917.tmp - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:220->validateTempDirCreation:110
> Checking for build/test/temp/RELATIVE3 in
> build/test/temp/RELATIVE4/block4513557287751895920.tmp - FAILED!
>   TestLocalDirAllocator.test0:141->validateTempDirCreation:110 Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block8523050700077504235.tmp
> - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:164->validateTempDirCreation:110
> Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block200624031350129544.tmp
> - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:219->validateTempDirCreation:110
> Checking for
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE3
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE4/block8868024598532665020.tmp
> - FAILED!
>   TestLocalDirAllocator.test0:142->validateTempDirCreation:110 Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block7318078621961387478.tmp
> - FAILED!
>
> TestLocalDirAllocator.testROBufferDirAndRWBufferDir:163->validateTempDirCreation:110
> Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block3298540567692029628.tmp
> - FAILED!
>
> TestLocalDirAllocator.testRWBufferDirBecomesRO:220->validateTempDirCreation:110
> Checking for
> file:/hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED3
> in
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED4/block6014893019370084121.tmp
> - FAILED!
>
> TestFileUtil.testFailFullyDelete:411->validateAndSetWritablePermissions:385
> The directory xSubDir *should* not have been deleted. expected:<true> but
> was:<false>
>
> TestFileUtil.testFailFullyDeleteContents:492->validateAndSetWritablePermissions:385
> The directory xSubDir *should* not have been deleted. expected:<true> but
> was:<false>
>   TestFileUtil.testGetDU:592 null
>
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:289
> Should throw IOException
>   TestLocalFileSystem.testReportChecksumFailure:356 null
>
> TestFSMainOperationsLocalFileSystem>FSMainOperationsBaseTest.testListStatusThrowsExceptionForUnreadableDir:289
> Should throw IOException
>   TestDiskChecker.testCheckDir_notReadable:101->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notWritable:106->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notListable:111->_checkDirs:126 checkDir
> success
>   TestDiskChecker.testCheckDir_notReadable_local:150->_checkDirs:174
> checkDir success
>   TestDiskChecker.testCheckDir_notWritable_local:155->_checkDirs:174
> checkDir success
>   TestDiskChecker.testCheckDir_notListable_local:160->_checkDirs:174
> checkDir success
>
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:444->Object.wait:-2 »  test
> time...
>
> Tests run: 2285, Failures: 30, Errors: 1, Skipped: 104
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Main ................................ SUCCESS [0.678s]
> [INFO] Apache Hadoop Project POM ......................... SUCCESS [0.247s]
> [INFO] Apache Hadoop Annotations ......................... SUCCESS [0.780s]
> [INFO] Apache Hadoop Project Dist POM .................... SUCCESS [0.221s]
> [INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.087s]
> [INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [0.773s]
> [INFO] Apache Hadoop MiniKDC ............................. SUCCESS [1:58.825s]
> [INFO] Apache Hadoop Auth ................................ SUCCESS [6:16.248s]
> [INFO] Apache Hadoop Auth Examples ....................... SUCCESS [7.347s]
> [INFO] Apache Hadoop Common .............................. FAILURE [11:49.512s]
> [INFO] Apache Hadoop NFS ................................. SKIPPED
> [INFO] Apache Hadoop Common Project ...................... SKIPPED
> [INFO] Apache Hadoop HDFS ................................ SKIPPED
> [INFO] Apache Hadoop HttpFS .............................. SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
> [INFO] Apache Hadoop HDFS Project ........................ SKIPPED
> [INFO] hadoop-yarn ....................................... SKIPPED
> [INFO] hadoop-yarn-api ................................... SKIPPED
> [INFO] hadoop-yarn-common ................................ SKIPPED
> [INFO] hadoop-yarn-server ................................ SKIPPED
> [INFO] hadoop-yarn-server-common ......................... SKIPPED
> [INFO] hadoop-yarn-server-nodemanager .................... SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ...................... SKIPPED
> [INFO] hadoop-yarn-server-applicationhistoryservice ...... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager ................ SKIPPED
> [INFO] hadoop-yarn-server-tests .......................... SKIPPED
> [INFO] hadoop-yarn-client ................................ SKIPPED
> [INFO] hadoop-yarn-applications .......................... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell ......... SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SKIPPED
> [INFO] hadoop-yarn-site .................................. SKIPPED
> [INFO] hadoop-yarn-project ............................... SKIPPED
> [INFO] hadoop-mapreduce-client ........................... SKIPPED
> [INFO] hadoop-mapreduce-client-core ...................... SKIPPED
> [INFO] hadoop-mapreduce-client-common .................... SKIPPED
> [INFO] hadoop-mapreduce-client-shuffle ................... SKIPPED
> [INFO] hadoop-mapreduce-client-app ....................... SKIPPED
> [INFO] hadoop-mapreduce-client-hs ........................ SKIPPED
> [INFO] hadoop-mapreduce-client-jobclient ................. SKIPPED
> [INFO] hadoop-mapreduce-client-hs-plugins ................ SKIPPED
> [INFO] Apache Hadoop MapReduce Examples .................. SKIPPED
> [INFO] hadoop-mapreduce .................................. SKIPPED
> [INFO] Apache Hadoop MapReduce Streaming ................. SKIPPED
> [INFO] Apache Hadoop Distributed Copy .................... SKIPPED
> [INFO] Apache Hadoop Archives ............................ SKIPPED
> [INFO] Apache Hadoop Rumen ............................... SKIPPED
> [INFO] Apache Hadoop Gridmix ............................. SKIPPED
> [INFO] Apache Hadoop Data Join ........................... SKIPPED
> [INFO] Apache Hadoop Extras .............................. SKIPPED
> [INFO] Apache Hadoop Pipes ............................... SKIPPED
> [INFO] Apache Hadoop OpenStack support ................... SKIPPED
> [INFO] Apache Hadoop Client .............................. SKIPPED
> [INFO] Apache Hadoop Mini-Cluster ........................ SKIPPED
> [INFO] Apache Hadoop Scheduler Load Simulator ............ SKIPPED
> [INFO] Apache Hadoop Tools Dist .......................... SKIPPED
> [INFO] Apache Hadoop Tools ............................... SKIPPED
> [INFO] Apache Hadoop Distribution ........................ SKIPPED
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 20:15.984s
> [INFO] Finished at: Sun Aug 03 18:00:44 HKT 2014
> [INFO] Final Memory: 56M/900M
> [INFO]
> ------------------------------------------------------------------------
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on
> project hadoop-common: There are test failures.
> [ERROR]
> [ERROR] Please refer to
> /hadoop_all_sources/hadoop-2.4.1-src/hadoop-common-project/hadoop-common/target/surefire-reports
> for the individual test results.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please
> read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the
> command
> [ERROR]   mvn <goals> -rf :hadoop-common
>



-- 
- Tsuyoshi