Posted to mapreduce-user@hadoop.apache.org by Andrea Barbato <an...@gmail.com> on 2014/01/13 11:06:50 UTC

Hadoop C++ HDFS test running Exception

I'm working with Hadoop 2.2.0 and trying to run this *hdfs_test.cpp*
 application:

#include "hdfs.h"
int main(int argc, char **argv) {

    hdfsFS fs = hdfsConnect("default", 0);
    const char* writePath = "/tmp/testfile.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
    if(!writeFile) {
          fprintf(stderr, "Failed to open %s for writing!\n", writePath);
          exit(-1);
    }
    char* buffer = "Hello, World!";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer,
strlen(buffer)+1);
    if (hdfsFlush(fs, writeFile)) {
           fprintf(stderr, "Failed to 'flush' %s\n", writePath);
          exit(-1);
    }
   hdfsCloseFile(fs, writeFile);}
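
For reference, a typical way to build and run a libhdfs program against the
2.2.0 tarball layout looks roughly like this (the include, native-library and
libjvm paths below are assumptions that depend on the install, not details
from this thread):

g++ hdfs_test.cpp -o hdfs_test \
    -I$HADOOP_HOME/include \
    -L$HADOOP_HOME/lib/native -lhdfs \
    -L$JAVA_HOME/jre/lib/amd64/server -ljvm
# Some setups also need -I$JAVA_HOME/include -I$JAVA_HOME/include/linux.
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_HOME/jre/lib/amd64/server
./hdfs_test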

I compiled it, but when I run it with *./hdfs_test* I get this output:

loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception:
ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=default, port=0,
kerbTicketCachePath=(NULL), userName=(NULL)) error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception:
ExceptionUtils::getStackTrace error.)
hdfsOpenFile(/tmp/testfile.txt): constructNewObjectOfPath error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception:
ExceptionUtils::getStackTrace error.)
Failed to open /tmp/testfile.txt for writing!

Maybe it is a problem with the classpath. My $HADOOP_HOME is /usr/local/hadoop
and this is currently my *CLASSPATH* variable:

echo $CLASSPATH
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
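
For reference, this wildcarded string matches what the stock "hadoop classpath"
command prints; a minimal way to set it that way, assuming the $HADOOP_HOME
above, is:

export CLASSPATH=$($HADOOP_HOME/bin/hadoop classpath)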


Any help is appreciated. Thanks!

Re: Hadoop C++ HDFS test running Exception

Posted by Andrea Barbato <an...@gmail.com>.
Thanks for the answer, but I can't find the client folder that contains the
.jar files: /usr/lib/hadoop/client/*.jar.
I'm using Hadoop 2.2.0; can you tell me the name of this folder in that
version?



2014/1/14 Harsh J <ha...@cloudera.com>

> I've found in the past that the native code runtime somehow doesn't
> support wildcarded classpaths. If you add the jars explicitly to the
> CLASSPATH, your app will work. You could use a simple shell loop, such as
> the one in my example at
> https://github.com/QwertyManiac/cdh4-libhdfs-example/blob/master/exec.sh#L3
> to populate it easily instead of doing it by hand.
>
> On Mon, Jan 13, 2014 at 3:36 PM, Andrea Barbato <an...@gmail.com>
> wrote:
> > I'm working with Hadoop 2.2.0 and trying to run this hdfs_test.cpp
> > application:
> >
> > #include "hdfs.h"
> >
> > int main(int argc, char **argv) {
> >
> >     hdfsFS fs = hdfsConnect("default", 0);
> >     const char* writePath = "/tmp/testfile.txt";
> >     hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT,
> 0, 0,
> > 0);
> >     if(!writeFile) {
> >           fprintf(stderr, "Failed to open %s for writing!\n", writePath);
> >           exit(-1);
> >     }
> >     char* buffer = "Hello, World!";
> >     tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer,
> > strlen(buffer)+1);
> >     if (hdfsFlush(fs, writeFile)) {
> >            fprintf(stderr, "Failed to 'flush' %s\n", writePath);
> >           exit(-1);
> >     }
> >    hdfsCloseFile(fs, writeFile);
> > }
> >
> > I compiled it but when I'm running it with ./hdfs_test I have this:
> >
> > loadFileSystems error:
> > (unable to get stack trace for java.lang.NoClassDefFoundError exception:
> > ExceptionUtils::getStackTrace error.)
> > hdfsBuilderConnect(forceNewInstance=0, nn=default, port=0,
> > kerbTicketCachePath=(NULL), userName=(NULL)) error:
> > (unable to get stack trace for java.lang.NoClassDefFoundError exception:
> > ExceptionUtils::getStackTrace error.)
> > hdfsOpenFile(/tmp/testfile.txt): constructNewObjectOfPath error:
> > (unable to get stack trace for java.lang.NoClassDefFoundError exception:
> > ExceptionUtils::getStackTrace error.)
> > Failed to open /tmp/testfile.txt for writing!
> >
> > Maybe is a problem with the classpath. My $HADOOP_HOME is
> /usr/local/hadoop
> > and actually this is my variable *CLASSPATH*:
> >
> > echo $CLASSPATH
> >
> /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
> >
> >
> > Any help is appreciated.. thanks
>
>
>
> --
> Harsh J
>

Re: Hadoop C++ HDFS test running Exception

Posted by Harsh J <ha...@cloudera.com>.
I've found in the past that the native code runtime somehow doesn't
support wildcarded classpaths. If you add the jars explicitly to the
CLASSPATH, your app will work. You could use a simple shell loop, such as
the one in my example at
https://github.com/QwertyManiac/cdh4-libhdfs-example/blob/master/exec.sh#L3
to populate it easily instead of doing it by hand.
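
A minimal sketch of such a loop, assuming the Apache 2.2.0 tarball layout
shown in the original post (adjust the directories to match your install):

# Expand every jar under $HADOOP_HOME/share/hadoop into explicit entries.
CLASSPATH=$HADOOP_HOME/etc/hadoop
for jar in $HADOOP_HOME/share/hadoop/{common,common/lib,hdfs,hdfs/lib,yarn,yarn/lib,mapreduce,mapreduce/lib}/*.jar; do
  CLASSPATH=$CLASSPATH:$jar
done
export CLASSPATH
./hdfs_test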

On Mon, Jan 13, 2014 at 3:36 PM, Andrea Barbato <an...@gmail.com> wrote:
> I'm working with Hadoop 2.2.0 and trying to run this hdfs_test.cpp
> application:
>
> #include "hdfs.h"
>
> int main(int argc, char **argv) {
>
>     hdfsFS fs = hdfsConnect("default", 0);
>     const char* writePath = "/tmp/testfile.txt";
>     hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0,
> 0);
>     if(!writeFile) {
>           fprintf(stderr, "Failed to open %s for writing!\n", writePath);
>           exit(-1);
>     }
>     char* buffer = "Hello, World!";
>     tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer,
> strlen(buffer)+1);
>     if (hdfsFlush(fs, writeFile)) {
>            fprintf(stderr, "Failed to 'flush' %s\n", writePath);
>           exit(-1);
>     }
>    hdfsCloseFile(fs, writeFile);
> }
>
> I compiled it but when I'm running it with ./hdfs_test I have this:
>
> loadFileSystems error:
> (unable to get stack trace for java.lang.NoClassDefFoundError exception:
> ExceptionUtils::getStackTrace error.)
> hdfsBuilderConnect(forceNewInstance=0, nn=default, port=0,
> kerbTicketCachePath=(NULL), userName=(NULL)) error:
> (unable to get stack trace for java.lang.NoClassDefFoundError exception:
> ExceptionUtils::getStackTrace error.)
> hdfsOpenFile(/tmp/testfile.txt): constructNewObjectOfPath error:
> (unable to get stack trace for java.lang.NoClassDefFoundError exception:
> ExceptionUtils::getStackTrace error.)
> Failed to open /tmp/testfile.txt for writing!
>
> Maybe is a problem with the classpath. My $HADOOP_HOME is /usr/local/hadoop
> and actually this is my variable *CLASSPATH*:
>
> echo $CLASSPATH
> /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
>
>
> Any help is appreciated.. thanks



-- 
Harsh J
