Posted to common-user@hadoop.apache.org by Sandy <sn...@gmail.com> on 2008/06/25 19:44:20 UTC

Compiling Word Count in C++ : Hadoop Pipes

Hi,

I am currently trying to get Hadoop Pipes working. I am following the
instructions on the Hadoop wiki, which provides code for a C++
implementation of Word Count (located here:
http://wiki.apache.org/hadoop/C++WordCount?highlight=%28C%2B%2B%29)

I am having some trouble parsing the instructions. What should the file
containing the new word count program be called? "examples"?

If I were to call the file "example" and type in the following:
$ ant -Dcompile.c++=yes example
Buildfile: build.xml

BUILD FAILED
Target `example' does not exist in this project.

Total time: 0 seconds


If I try to compile with "examples" as stated on the wiki, I get:
$ ant -Dcompile.c++=yes examples
Buildfile: build.xml

clover.setup:

clover.info:
     [echo]
     [echo]      Clover not found. Code coverage reports disabled.
     [echo]

clover:

init:
    [touch] Creating /tmp/null810513231
   [delete] Deleting: /tmp/null810513231
     [exec] svn: '.' is not a working copy
     [exec] svn: '.' is not a working copy

record-parser:

compile-rcc-compiler:
    [javac] Compiling 29 source files to /home/sjm/Desktop/hadoop-0.16.4/build/classes

BUILD FAILED
/home/sjm/Desktop/hadoop-0.16.4/build.xml:241: Unable to find a javac compiler;
com.sun.tools.javac.Main is not on the classpath.
Perhaps JAVA_HOME does not point to the JDK

Total time: 1 second



I am a bit puzzled by this. Originally I got an error that tools.jar was
not found, because the build was looking for it under
/usr/java/jre1.6.0_06/lib/tools.jar . There is a tools.jar under
/usr/java/jdk1.6.0_06/lib/tools.jar. If I copy this file over to the jre
folder, that message goes away and it's replaced with the message above.
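The JRE-versus-JDK distinction seems relevant here. As a rough check (a sketch only, assuming the standard Sun layout from the paths above), one can test whether $JAVA_HOME actually contains the compiler classes:

```shell
# Rough check: a JDK ships lib/tools.jar (which contains
# com.sun.tools.javac.Main, the class ant says is missing);
# a plain JRE does not.
if [ -f "$JAVA_HOME/lib/tools.jar" ]; then
  echo "JAVA_HOME looks like a JDK"
else
  echo "JAVA_HOME looks like a JRE (no tools.jar)"
fi
```

If this reports a JRE, copying tools.jar into the jre folder works around the first error but still leaves JAVA_HOME pointing at a JRE, which may explain the second one.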

My hadoop-env.sh file looks something like:
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
# export JAVA_HOME=$JAVA_HOME


and my .bash_profile file has this line in it:
JAVA_HOME=/usr/java/jre1.6.0_06; export JAVA_HOME
export PATH
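For comparison, the same lines pointed at the JDK directory instead of the JRE would look like this (a sketch only; the path is taken from the jdk1.6.0_06 location mentioned earlier in this message, and whether this resolves the build error is a guess):

```shell
# Sketch: point JAVA_HOME at the JDK, which bundles javac and
# tools.jar, rather than at the JRE. Path assumed from earlier
# in this message.
JAVA_HOME=/usr/java/jdk1.6.0_06; export JAVA_HOME
export PATH
```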


Furthermore, if I go to the command line and type in javac -version, I get:
$ javac -version
javac 1.6.0_06


I also had no problem getting through the Hadoop word count MapReduce
tutorial in Java; it was able to find my Java compiler fine. Could someone
please point me in the right direction? Also, since hadoop-env.sh is a
shell script, should that export line really start with a hash sign?
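For what it's worth, in sh a leading hash does start a comment, so the export line quoted above is inert as shipped. An uncommented version might look like the following (the JDK path here is an assumption based on the paths earlier in this message, not something from the wiki):

```shell
# In conf/hadoop-env.sh: lines beginning with '#' are comments, so the
# "# export JAVA_HOME=..." line shipped in the file does nothing.
# An active version, with the JDK path assumed from this message:
export JAVA_HOME=/usr/java/jdk1.6.0_06
```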

Thank you in advance for your assistance.

-SM