Posted to common-commits@hadoop.apache.org by Apache Wiki <wi...@apache.org> on 2007/08/19 10:11:24 UTC

[Lucene-hadoop Wiki] Trivial Update of "QuickStart" by masukomi

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by masukomi:
http://wiki.apache.org/lucene-hadoop/QuickStart

------------------------------------------------------------------------------
  == Get up and running fast ==
  Based on the docs found at the following link, but modified to work with the current distribution:
  http://lucene.apache.org/hadoop/api/overview-summary.html#overview_description
+ 
+ Please note this page was last updated to match svn revision 567368. Things may have changed since then; if they have, please update this page.
  
  == Requirements ==
  Java 1.5.X
@@ -17, +19 @@

  Then grab the latest with subversion 
  {{{svn co http://svn.apache.org/repos/asf/lucene/hadoop/trunk hadoop}}}
  
- edit `hadoop/conf-env.sh` and define `JAVA_HOME` in it.
+ copy `hadoop/conf-env.sh.template` to `hadoop/conf-env.sh` and define `JAVA_HOME` in it.
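  
  A minimal sketch of that step (the `JAVA_HOME` path below is only an example; in some checkouts the template may live at `conf/hadoop-env.sh.template` instead):
  {{{
  cd hadoop
  cp conf-env.sh.template conf-env.sh
  # then add a line such as the following, pointing at your own Java 1.5 install:
  export JAVA_HOME=/usr/lib/jvm/java-1.5.0
  }}}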
  
  run the following commands:
  {{{
@@ -26, +28 @@

  ant examples
  bin/hadoop
  }}}
- This should display the basic command line help docs and let you know it's at lest basically working. 
+ `bin/hadoop` should display the basic command line help docs and let you know it's at least basically working. If any of the above steps failed, use subversion to roll back to an earlier day's revision.
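  
  For example, to roll back to the previous day's code with subversion (a sketch; substitute whichever date last worked for you):
  {{{
  svn update -r {2007-08-18} hadoop
  }}}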
  
  == Stage 1: Standalone Operation ==
  By default, Hadoop is configured to run things in a non-distributed mode, as a single Java process. This is useful for debugging, and can be demonstrated as follows:
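  
  A sketch of that demonstration, adapted from the overview docs linked above (the examples jar name depends on the version you built):
  {{{
  mkdir input
  cp conf/*.xml input
  bin/hadoop jar build/hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
  cat output/*
  }}}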
@@ -55, +57 @@

  1       dfs.datanode.port
  ...(and so on)
  }}}
+ 
+ If you see the error `Exception in thread "main" java.lang.NoClassDefFoundError: build/hadoop-0/15/0-dev-examples/jar`, it means you forgot to type `jar` after `bin/hadoop`. If you were unable to run this example, roll back to a previous night's version.
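  
  For illustration, the failing invocation versus the correct one (jar name taken from the error message above):
  {{{
  # wrong: the jar path is treated as a class name, triggering NoClassDefFoundError
  bin/hadoop build/hadoop-0.15.0-dev-examples.jar grep input output 'dfs[a-z.]+'
  # right: note the "jar" command before the jar path
  bin/hadoop jar build/hadoop-0.15.0-dev-examples.jar grep input output 'dfs[a-z.]+'
  }}}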
  
  Congratulations, you have just successfully run your first MapReduce job with Hadoop.