Posted to common-dev@hadoop.apache.org by "zhiyong zhang (JIRA)" <ji...@apache.org> on 2009/06/29 07:52:47 UTC

[jira] Commented: (HADOOP-5837) TestHdfsProxy fails in Linux

    [ https://issues.apache.org/jira/browse/HADOOP-5837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12725042#action_12725042 ] 

zhiyong zhang commented on HADOOP-5837:
---------------------------------------

---> This problem also exists in Cygwin.

In Cygwin the error is different. Using the old svn trunk from before the project split, I reproduced the error and fixed it.

The error is due to an environment difference between Cygwin and Linux: in Cygwin, the ugi obtained from "id -Gn" returns none for the user. Adding the following lines before "new MiniDFSCluster" in testHdfsProxyInterface() fixes the error.

    String osName = System.getProperty("os.name");
    if (osName.indexOf("Windows") >= 0) {
      dfsConf.setStrings("hadoop.job.ugi", "Administrators", "Administrators");
    } else {
      dfsConf.setStrings("hadoop.job.ugi", "nobody", "users");
    }
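
For context, here is a minimal sketch of how the workaround would sit in the test. This is illustrative only: the configuration variable name (dfsConf) and the MiniDFSCluster constructor arguments are assumptions, not copied from the actual test.

    // Hypothetical excerpt of testHdfsProxyInterface(); names and arguments are illustrative.
    Configuration dfsConf = new Configuration();
    String osName = System.getProperty("os.name");
    if (osName.indexOf("Windows") >= 0) {
      // Cygwin/Windows: "id -Gn" does not yield usable groups, so pin the ugi explicitly.
      dfsConf.setStrings("hadoop.job.ugi", "Administrators", "Administrators");
    } else {
      dfsConf.setStrings("hadoop.job.ugi", "nobody", "users");
    }
    // The ugi must be in place before the mini cluster is started.
    MiniDFSCluster cluster = new MiniDFSCluster(dfsConf, 2, true, null);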

With the new hdfs trunk after the project split, the test fails with a different error in Cygwin, probably something to do with the namenode, since TestDFSShell gives a similar error. Here is what I got from both TestHdfsProxy and TestDFSShell (a rough check for the storage directories follows the stack trace).

2009-06-28 22:15:05,381 ERROR namenode.FSNamesystem (FSNamesystem.java:<init>(238)) - FSNamesystem initialization failed.
java.io.IOException: All specified directories are not accessible or do not exist.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:370)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:255)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:236)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:254)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:405)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:399)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1159)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:278)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:120)
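
I am not sure of the root cause on Cygwin yet, but the message from FSImage.recoverTransitionRead suggests the namenode storage directories could not be created or are not writable. Below is a rough check, under the assumption that MiniDFSCluster derives its storage directories from the "test.build.data" system property (default "build/test/data"); it is an illustrative fragment, not a confirmed fix, and assumes java.io.File and java.io.IOException are imported.

    // Illustrative check before new MiniDFSCluster(...): verify the test base dir
    // used by MiniDFSCluster ("test.build.data", default "build/test/data") exists
    // and is writable, to rule out a path/permission problem on Cygwin.
    File base = new File(System.getProperty("test.build.data", "build/test/data"));
    if (!base.exists() && !base.mkdirs()) {
      throw new IOException("cannot create test base dir: " + base.getAbsolutePath());
    }
    if (!base.canWrite()) {
      throw new IOException("test base dir is not writable: " + base.getAbsolutePath());
    }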

> TestHdfsProxy fails in Linux
> ----------------------------
>
>                 Key: HADOOP-5837
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5837
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: contrib/hdfsproxy
>         Environment: Linux hostname 2.6.9-55.ELsmp #1 SMP Fri Apr 20 16:36:54 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux
>            Reporter: Tsz Wo (Nicholas), SZE
>
> {noformat}
> test-junit:
>     [junit] Running org.apache.hadoop.hdfsproxy.TestHdfsProxy
>     [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 4.397 sec
>     [junit] Test org.apache.hadoop.hdfsproxy.TestHdfsProxy FAILED
>     [junit] Running org.apache.hadoop.hdfsproxy.TestProxyUgiManager
>     [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 4.219 sec
> BUILD FAILED
> /home/tsz/hadoop/latest/build.xml:1022: The following error occurred while executing this line:
> /home/tsz/hadoop/latest/src/contrib/build.xml:48: The following error occurred while executing this line:
> /home/tsz/hadoop/latest/src/contrib/hdfsproxy/build.xml:224: Tests failed!
> {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.