Posted to common-commits@hadoop.apache.org by Apache Wiki <wi...@apache.org> on 2013/10/25 11:32:47 UTC

[Hadoop Wiki] Trivial Update of "HowToSetupYourDevelopmentEnvironment" by SteveLoughran

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "HowToSetupYourDevelopmentEnvironment" page has been changed by SteveLoughran:
https://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment?action=diff&rev1=30&rev2=31

Comment:
updated; highlights that this is a 1.x era doc

  
  This page describes how to get your environment set up and is IDE-agnostic.
  
+ ''this article is out of date - it covers Hadoop 1.x, not the restructured and Maven-based Hadoop 2.x build''
+ 
  = Requirements =
   * Java 6
   * Ant
-  * Junit
   * Your favorite IDE
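  
  A quick sanity check that the first two requirements are installed and on the {{{PATH}}} (a sketch; the version strings you see will vary):
  
  {{{
  java -version
  ant -version
  }}}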
  
  = Setup Your Development Environment in Linux =
@@ -117, +118 @@

  
  = Build Errors =
  
- == /code/hadoop-core-trunk/build.xml:634: Could not create task or type of type: junit. ==
+ == OS/X + Java 7 build failure ==
  
- Refer to the following site: http://ant.apache.org/manual/OptionalTasks/junit.html
  
- My default install of ant was missing ant-junit.jar in ANT_HOME/lib (for me, /usr/share/ant/lib), so I downloaded ant manually to get the jar.  Make sure you download the same version of ant that as the version that is preinstalled!  To check the version of ant that you have, run ''ant -version''.
+ {{{Exception in thread "main" java.lang.AssertionError: Missing tools.jar at: /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/Classes/classes.jar. Expression: file.exists()}}}
+ 
+ This happens because one of the modules used in the Hadoop build expects {{{classes.jar}}} to be in a location where it no longer exists on Oracle Java 7+ on OS/X. See [[https://issues.apache.org/jira/browse/HADOOP-9350|HADOOP-9350]].
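+ 
+ Until that is fixed, a commonly reported workaround is to symlink the JDK's real {{{tools.jar}}} into the location the build expects. A sketch, assuming the jdk1.7.0_45 install from the stack trace above (adjust the path for your JDK):
+ 
+ {{{
+ # path assumes jdk1.7.0_45, as in the stack trace above; adjust for your JDK
+ JDK_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home
+ sudo mkdir -p "$JDK_HOME/Classes"
+ sudo ln -s "$JDK_HOME/lib/tools.jar" "$JDK_HOME/Classes/classes.jar"
+ }}}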
  
  = Runtime Errors =
  
  == ERROR dfs.NameNode: java.io.IOException: javax.security.auth.login.LoginException: Login failed: id: cannot find name for group ID ==
  
- This was a specific error that I received that probably no one else will receive.  The error has to do with a permission issue when running Hadoop.  At runtime, Hadoop SSHes into all nodes, including itself in a single-node setup.  This error occurred because some LDAP groups that I belonged to were not found at Hadoop runtime.  If you get an error like this, then there can be a number of things wrong, but very generally you have a permissions error.  The following guide may  help if you're running Ubuntu: http://wiki.apache.org/hadoop/Running_Hadoop_On_Ubuntu_Linux_%28Single-Node_Cluster%29
+ This was a specific error that one developer received; it is a permissions problem.  At runtime, Hadoop SSHes into all nodes, including itself in a single-node setup, and in this case some LDAP groups that the developer belonged to could not be resolved at Hadoop runtime.  If you get an error like this, a number of things can be wrong, but in general you have a permissions error.
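+ 
+ To see whether your account is affected, ask the OS to resolve every group for the user that runs Hadoop; a minimal check, run as that user:
+ 
+ {{{
+ # print uid, gid and all groups; an unresolvable group shows up as a bare
+ # number, and id may print the same "cannot find name for group ID"
+ # message that Hadoop reported
+ id
+ # group names only
+ id -Gn
+ }}}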
  
- == could only be replicated to 0 nodes, instead of 1 ==
  
- I was getting this error when putting data into the dfs.  The solution is strange and probably inconsistent: I erased all temporary data along with the namenode, reformatted the namenode, started everything up, and visited my "cluster's" dfs health page (http://your_host:50070/dfshealth.jsp).  The last step, visiting the health page, is the only way I can get around the error.  Once I've visited the page, putting and getting files in and out of the dfs works great!
- 
- == DataNode process appearing then disappearing on slave ==
+ == DataNode process appearing then disappearing on worker node ==
  
  When transitioning from a single-node cluster to a multi-node cluster, one of your nodes may appear to be up at first and then go down immediately.  Check the datanode logs of the node that goes down and look for a connection refused error: it means that the worker is having difficulty finding the master.  Two things were done to solve this problem, and it is not clear which one, or whether both, fixed it.  First, erase all of your Hadoop temporary data and the namenode data on all masters and workers, then reformat the namenode.  Second, make sure all of the master and worker hosts in the conf files (slaves, masters, hadoop-site.xml) refer to full host names (ex: host.domain.com instead of host).
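  
  A sketch of these checks on a Hadoop 1.x layout, assuming placeholder host names ({{{master.example.com}}} etc.) and the commonly used NameNode port 9000 from {{{fs.default.name}}} in hadoop-site.xml (yours may differ):
  
  {{{
  # on every node, refer to hosts by fully-qualified names
  cat conf/masters    # e.g. master.example.com, not just "master"
  cat conf/slaves     # one fully-qualified worker name per line
  
  # from a worker, confirm the master's NameNode port is reachable
  # (9000 is the port from fs.default.name in hadoop-site.xml)
  telnet master.example.com 9000
  
  # after erasing the temporary data on all nodes, reformat the namenode
  bin/hadoop namenode -format
  }}}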