Posted to user@accumulo.apache.org by Chris Sigman <cy...@gmail.com> on 2013/02/14 19:41:32 UTC

Jobs failing with ClassNotFoundException

Hi everyone,

I've got a job that keeps failing, and I can't figure out why.  I've tried
running jobs from the examples, and they work just fine.  I'm running the
job via

> ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst namenode root pass stockdata movingaverage

which I can see runs the following exec call, and it looks right to me:

exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar movingaverage.MAJob -libjars "/opt/accumulo/lib/libthrift-0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-collections-3.2.jar,/opt/accumulo/lib/commons-configuration-1.5.jar,/opt/accumulo/lib/commons-io-1.4.jar,/opt/accumulo/lib/commons-jci-core-1.0.jar,/opt/accumulo/lib/commons-jci-fam-1.0.jar,/opt/accumulo/lib/commons-lang-2.4.jar,/opt/accumulo/lib/commons-logging-1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar" inst namenode root pass tmpdatatable movingaverage

but when the job runs, it gets to the map phase and fails:

13/02/14 13:25:26 INFO mapred.JobClient: Task Id : attempt_201301171408_0293_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1004)
    at org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(JobContext.java:205)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
    at org.apache.hadoop.mapred.Child.main(Child.java:260)
Caused by: java.lang.ClassNotFoundException: org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)

I've also tried working around it by adding the accumulo-core jar to
Hadoop's lib dir, but that doesn't seem to help either.

Thanks for any help,
--
Chris

Re: Jobs failing with ClassNotFoundException

Posted by Chris Sigman <cy...@gmail.com>.
Hi everyone,

I've figured out what's going on.  I'm not quite sure why, but specifying
the class name for the job was messing up the options parsing, so none of
the actual arguments were ever processed.  I still can't run it through
tool.sh, since tool.sh expects the first argument to be the job's class
name, but that's fairly inconsequential.
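
For anyone who hits the same thing later: as far as I can tell, generic options like -libjars are only recognized when they come before the first non-option argument, so a stray leading class name keeps -libjars from ever being consumed and shifts every positional argument by one.  Here is a minimal sketch of what I mean; the positional layout and the body of run() are illustrative assumptions, not the actual MAJob source:

package movingaverage;

import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MAJob extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    // After ToolRunner/GenericOptionsParser strips -libjars, args should be:
    //   { "inst", "namenode", "root", "pass", "stockdata", "movingaverage" }
    // With the class name passed again as the first argument, generic-option
    // parsing stops at it, -libjars falls through unprocessed, and everything
    // read below is off by one.
    String instance    = args[0];
    String zookeepers  = args[1];
    String user        = args[2];
    String password    = args[3];
    String inputTable  = args[4];
    String outputTable = args[5];

    Job job = new Job(getConf(), "MovingAverage");
    job.setJarByClass(MAJob.class);
    // This is the class the map tasks fail to load when the accumulo-core
    // jar never makes it onto the task classpath.
    job.setInputFormatClass(AccumuloInputFormat.class);
    // ... AccumuloInputFormat connection settings, mapper, reducer, output ...
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new MAJob(), args));
  }
}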

Thanks everyone for the help,
--
Chris


On Thu, Feb 14, 2013 at 4:15 PM, Keith Turner <ke...@deenlo.com> wrote:

> On Thu, Feb 14, 2013 at 4:07 PM, Chris Sigman <cy...@gmail.com> wrote:
> > Is it possible that ToolRunner.run isn't working right? How might I
> > determine that it's putting the libs into the distributed cache?
>
> If you look at the resulting config that generated for the map reduce
> job you may see something of use there.
>
> >
> >
> > --
> > Chris
> >
> >
> > On Thu, Feb 14, 2013 at 3:17 PM, Chris Sigman <cy...@gmail.com>
> wrote:
> >>
> >> All of those jars exist, and there aren't any differences in those from
> >> when I run one of the example jobs.  I'm also using ToolRunner.run.
> >>
> >>
> >> --
> >> Chris
> >>
> >>
> >> On Thu, Feb 14, 2013 at 2:34 PM, William Slacum
> >> <wi...@accumulo.net> wrote:
> >>>
> >>> Make sure that all of the jars you pass to libjars exist and you're
> using
> >>> ToolRunner.run, which will parse out those options.
> >>>
> >>>
> >>> On Thu, Feb 14, 2013 at 2:20 PM, Chris Sigman <cy...@gmail.com>
> wrote:
> >>>>
> >>>> Yes, everything's readable by everyone.  As I said before, the odd
> thing
> >>>> is that running one of the example jobs like Wordcount work just fine.
> >>>>
> >>>>
> >>>> --
> >>>> Chris
> >>>>
> >>>>
> >>>> On Thu, Feb 14, 2013 at 2:17 PM, Keith Turner <ke...@deenlo.com>
> wrote:
> >>>>>
> >>>>> On Thu, Feb 14, 2013 at 1:53 PM, Chris Sigman <cy...@gmail.com>
> >>>>> wrote:
> >>>>> > Yep, all of the jars are also available on the datanodes
> >>>>>
> >>>>> Also are the jars readable by the user running the M/R job?
> >>>>>
> >>>>> >
> >>>>> >
> >>>>> > --
> >>>>> > Chris
> >>>>> >
> >>>>> >
> >>>>> > On Thu, Feb 14, 2013 at 1:51 PM, Billie Rinaldi <billie@apache.org
> >
> >>>>> > wrote:
> >>>>> >>
> >>>>> >> On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <
> cypris87@gmail.com>
> >>>>> >> wrote:
> >>>>> >>>
> >>>>> >>> Hi everyone,
> >>>>> >>>
> >>>>> >>> I've got a job I'm running that I can't figure out why it's
> >>>>> >>> failing.
> >>>>> >>> I've tried running jobs from the examples, and they work just
> fine.
> >>>>> >>> I'm
> >>>>> >>> running the job via
> >>>>> >>>
> >>>>> >>> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst
> >>>>> >>> > namenode
> >>>>> >>> > root pass stockdata movingaverage
> >>>>> >>>
> >>>>> >>> which I see is running the following exec call that seems perfect
> >>>>> >>> to me:
> >>>>> >>>
> >>>>> >>> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar
> >>>>> >>> movingaverage.MAJob -libjars
> >>>>> >>>
> >>>>> >>>
> "/opt/accumulo/lib/libthrift-0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-collections-3.2.jar,/opt/accumulo/lib/commons-configuration-1.5.jar,/opt/accumulo/lib/commons-io-1.4.jar,/opt/accumulo/lib/commons-jci-core-1.0.jar,/opt/accumulo/lib/commons-jci-fam-1.0.jar,/opt/accumulo/lib/commons-lang-2.4.jar,/opt/accumulo/lib/commons-logging-1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar"
> >>>>> >>> inst namenode root pass tmpdatatable movingaverage
> >>>>> >>
> >>>>> >>
> >>>>> >> Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your
> hadoop
> >>>>> >> nodes,
> >>>>> >> specifically the one that's running the map?
> >>>>> >>
> >>>>> >> Billie
> >>>>> >>
> >>>>> >>
> >>>>> >>>
> >>>>> >>>
> >>>>> >>> but when the job runs, it gets to the map phase and fails:
> >>>>> >>>
> >>>>> >>> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
> >>>>> >>> attempt_201301171408_0293_m_000000_0, Status : FAILED
> >>>>> >>> java.lang.RuntimeException: java.lang.ClassNotFoundException:
> >>>>> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
> >>>>> >>>     at
> >>>>> >>>
> >>>>> >>>
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1004)
> >>>>> >>>     at
> >>>>> >>>
> >>>>> >>>
> org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(JobContext.java:205)
> >>>>> >>>     at
> >>>>> >>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
> >>>>> >>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
> >>>>> >>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
> >>>>> >>>     at java.security.AccessController.doPrivileged(Native Method)
> >>>>> >>>     at javax.security.auth.Subject.doAs(Subject.java:415)
> >>>>> >>>     at
> >>>>> >>>
> >>>>> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
> >>>>> >>>     at org.apache.hadoop.mapred.Child.main(Child.java:260)
> >>>>> >>> Caused by: java.lang.ClassNotFoundException:
> >>>>> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
> >>>>> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> >>>>> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> >>>>> >>>     at java.security.AccessController.doPrivileged(Native Method)
> >>>>> >>>
> >>>>> >>> I've also tried hacking it to work by adding the accumulo-core
> jar
> >>>>> >>> to
> >>>>> >>> hadoop's lib dir, but that doesn't seem to work either.
> >>>>> >>>
> >>>>> >>> Thanks for any help,
> >>>>> >>> --
> >>>>> >>> Chris
> >>>>> >>
> >>>>> >>
> >>>>> >
> >>>>
> >>>>
> >>>
> >>
> >
>

Re: Jobs failing with ClassNotFoundException

Posted by Keith Turner <ke...@deenlo.com>.
On Thu, Feb 14, 2013 at 4:07 PM, Chris Sigman <cy...@gmail.com> wrote:
> Is it possible that ToolRunner.run isn't working right? How might I
> determine that it's putting the libs into the distributed cache?

If you look at the resulting config that gets generated for the MapReduce
job, you may see something of use there.
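
For example, something like the sketch below, called right before submitting, would show whether the -libjars entries ever made it into the job config.  The property names are the usual Hadoop 1.x ones; I haven't verified them against cdh3u5, so treat them as an assumption:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobConfigDump {

  // Call this right before job.waitForCompletion(true) in the driver.
  public static void dump(Job job) {
    Configuration conf = job.getConfiguration();
    // GenericOptionsParser records -libjars entries here; null means the
    // option was never parsed and nothing will be shipped to the tasks.
    System.out.println("tmpjars = " + conf.get("tmpjars"));
    // Files registered with the distributed cache.
    System.out.println("mapred.cache.files = " + conf.get("mapred.cache.files"));
    // The input format the map tasks will try to instantiate.
    System.out.println("input format = " + conf.get("mapreduce.inputformat.class"));
  }
}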

>
>
> --
> Chris
>
>
> On Thu, Feb 14, 2013 at 3:17 PM, Chris Sigman <cy...@gmail.com> wrote:
>>
>> All of those jars exist, and there aren't any differences in those from
>> when I run one of the example jobs.  I'm also using ToolRunner.run.
>>
>>
>> --
>> Chris
>>
>>
>> On Thu, Feb 14, 2013 at 2:34 PM, William Slacum
>> <wi...@accumulo.net> wrote:
>>>
>>> Make sure that all of the jars you pass to libjars exist and you're using
>>> ToolRunner.run, which will parse out those options.
>>>
>>>
>>> On Thu, Feb 14, 2013 at 2:20 PM, Chris Sigman <cy...@gmail.com> wrote:
>>>>
>>>> Yes, everything's readable by everyone.  As I said before, the odd thing
>>>> is that running one of the example jobs like Wordcount work just fine.
>>>>
>>>>
>>>> --
>>>> Chris
>>>>
>>>>
>>>> On Thu, Feb 14, 2013 at 2:17 PM, Keith Turner <ke...@deenlo.com> wrote:
>>>>>
>>>>> On Thu, Feb 14, 2013 at 1:53 PM, Chris Sigman <cy...@gmail.com>
>>>>> wrote:
>>>>> > Yep, all of the jars are also available on the datanodes
>>>>>
>>>>> Also are the jars readable by the user running the M/R job?
>>>>>
>>>>> >
>>>>> >
>>>>> > --
>>>>> > Chris
>>>>> >
>>>>> >
>>>>> > On Thu, Feb 14, 2013 at 1:51 PM, Billie Rinaldi <bi...@apache.org>
>>>>> > wrote:
>>>>> >>
>>>>> >> On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <cy...@gmail.com>
>>>>> >> wrote:
>>>>> >>>
>>>>> >>> Hi everyone,
>>>>> >>>
>>>>> >>> I've got a job I'm running that I can't figure out why it's
>>>>> >>> failing.
>>>>> >>> I've tried running jobs from the examples, and they work just fine.
>>>>> >>> I'm
>>>>> >>> running the job via
>>>>> >>>
>>>>> >>> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst
>>>>> >>> > namenode
>>>>> >>> > root pass stockdata movingaverage
>>>>> >>>
>>>>> >>> which I see is running the following exec call that seems perfect
>>>>> >>> to me:
>>>>> >>>
>>>>> >>> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar
>>>>> >>> movingaverage.MAJob -libjars
>>>>> >>>
>>>>> >>> "/opt/accumulo/lib/libthrift-0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-collections-3.2.jar,/opt/accumulo/lib/commons-configuration-1.5.jar,/opt/accumulo/lib/commons-io-1.4.jar,/opt/accumulo/lib/commons-jci-core-1.0.jar,/opt/accumulo/lib/commons-jci-fam-1.0.jar,/opt/accumulo/lib/commons-lang-2.4.jar,/opt/accumulo/lib/commons-logging-1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar"
>>>>> >>> inst namenode root pass tmpdatatable movingaverage
>>>>> >>
>>>>> >>
>>>>> >> Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your hadoop
>>>>> >> nodes,
>>>>> >> specifically the one that's running the map?
>>>>> >>
>>>>> >> Billie
>>>>> >>
>>>>> >>
>>>>> >>>
>>>>> >>>
>>>>> >>> but when the job runs, it gets to the map phase and fails:
>>>>> >>>
>>>>> >>> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
>>>>> >>> attempt_201301171408_0293_m_000000_0, Status : FAILED
>>>>> >>> java.lang.RuntimeException: java.lang.ClassNotFoundException:
>>>>> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>>>>> >>>     at
>>>>> >>>
>>>>> >>> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1004)
>>>>> >>>     at
>>>>> >>>
>>>>> >>> org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(JobContext.java:205)
>>>>> >>>     at
>>>>> >>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
>>>>> >>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>>>>> >>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
>>>>> >>>     at java.security.AccessController.doPrivileged(Native Method)
>>>>> >>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>> >>>     at
>>>>> >>>
>>>>> >>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
>>>>> >>>     at org.apache.hadoop.mapred.Child.main(Child.java:260)
>>>>> >>> Caused by: java.lang.ClassNotFoundException:
>>>>> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>>>>> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>>>> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>>> >>>     at java.security.AccessController.doPrivileged(Native Method)
>>>>> >>>
>>>>> >>> I've also tried hacking it to work by adding the accumulo-core jar
>>>>> >>> to
>>>>> >>> hadoop's lib dir, but that doesn't seem to work either.
>>>>> >>>
>>>>> >>> Thanks for any help,
>>>>> >>> --
>>>>> >>> Chris
>>>>> >>
>>>>> >>
>>>>> >
>>>>
>>>>
>>>
>>
>

Re: Jobs failing with ClassNotFoundException

Posted by Chris Sigman <cy...@gmail.com>.
Is it possible that ToolRunner.run isn't working right? How might I
determine that it's putting the libs into the distributed cache?


--
Chris


On Thu, Feb 14, 2013 at 3:17 PM, Chris Sigman <cy...@gmail.com> wrote:

> All of those jars exist, and there aren't any differences in those from
> when I run one of the example jobs.  I'm also using ToolRunner.run.
>
>
> --
> Chris
>
>
> On Thu, Feb 14, 2013 at 2:34 PM, William Slacum <
> wilhelm.von.cloud@accumulo.net> wrote:
>
>> Make sure that all of the jars you pass to libjars exist and you're using
>> ToolRunner.run, which will parse out those options.
>>
>>
>> On Thu, Feb 14, 2013 at 2:20 PM, Chris Sigman <cy...@gmail.com> wrote:
>>
>>> Yes, everything's readable by everyone.  As I said before, the odd thing
>>> is that running one of the example jobs like Wordcount work just fine.
>>>
>>>
>>> --
>>> Chris
>>>
>>>
>>> On Thu, Feb 14, 2013 at 2:17 PM, Keith Turner <ke...@deenlo.com> wrote:
>>>
>>>> On Thu, Feb 14, 2013 at 1:53 PM, Chris Sigman <cy...@gmail.com>
>>>> wrote:
>>>> > Yep, all of the jars are also available on the datanodes
>>>>
>>>> Also are the jars readable by the user running the M/R job?
>>>>
>>>> >
>>>> >
>>>> > --
>>>> > Chris
>>>> >
>>>> >
>>>> > On Thu, Feb 14, 2013 at 1:51 PM, Billie Rinaldi <bi...@apache.org>
>>>> wrote:
>>>> >>
>>>> >> On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <cy...@gmail.com>
>>>> wrote:
>>>> >>>
>>>> >>> Hi everyone,
>>>> >>>
>>>> >>> I've got a job I'm running that I can't figure out why it's failing.
>>>> >>> I've tried running jobs from the examples, and they work just fine.
>>>>  I'm
>>>> >>> running the job via
>>>> >>>
>>>> >>> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst
>>>> namenode
>>>> >>> > root pass stockdata movingaverage
>>>> >>>
>>>> >>> which I see is running the following exec call that seems perfect
>>>> to me:
>>>> >>>
>>>> >>> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar
>>>> >>> movingaverage.MAJob -libjars
>>>> >>>
>>>> "/opt/accumulo/lib/libthrift-0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-collections-3.2.jar,/opt/accumulo/lib/commons-configuration-1.5.jar,/opt/accumulo/lib/commons-io-1.4.jar,/opt/accumulo/lib/commons-jci-core-1.0.jar,/opt/accumulo/lib/commons-jci-fam-1.0.jar,/opt/accumulo/lib/commons-lang-2.4.jar,/opt/accumulo/lib/commons-logging-1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar"
>>>> >>> inst namenode root pass tmpdatatable movingaverage
>>>> >>
>>>> >>
>>>> >> Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your hadoop
>>>> nodes,
>>>> >> specifically the one that's running the map?
>>>> >>
>>>> >> Billie
>>>> >>
>>>> >>
>>>> >>>
>>>> >>>
>>>> >>> but when the job runs, it gets to the map phase and fails:
>>>> >>>
>>>> >>> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
>>>> >>> attempt_201301171408_0293_m_000000_0, Status : FAILED
>>>> >>> java.lang.RuntimeException: java.lang.ClassNotFoundException:
>>>> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>>>> >>>     at
>>>> >>>
>>>> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1004)
>>>> >>>     at
>>>> >>>
>>>> org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(JobContext.java:205)
>>>> >>>     at
>>>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
>>>> >>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>>>> >>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
>>>> >>>     at java.security.AccessController.doPrivileged(Native Method)
>>>> >>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>>> >>>     at
>>>> >>>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
>>>> >>>     at org.apache.hadoop.mapred.Child.main(Child.java:260)
>>>> >>> Caused by: java.lang.ClassNotFoundException:
>>>> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>>>> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>>> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>> >>>     at java.security.AccessController.doPrivileged(Native Method)
>>>> >>>
>>>> >>> I've also tried hacking it to work by adding the accumulo-core jar
>>>> to
>>>> >>> hadoop's lib dir, but that doesn't seem to work either.
>>>> >>>
>>>> >>> Thanks for any help,
>>>> >>> --
>>>> >>> Chris
>>>> >>
>>>> >>
>>>> >
>>>>
>>>
>>>
>>
>

Re: Jobs failing with ClassNotFoundException

Posted by Chris Sigman <cy...@gmail.com>.
All of those jars exist, and they're no different from the ones used when I
run one of the example jobs.  I'm also using ToolRunner.run.


--
Chris


On Thu, Feb 14, 2013 at 2:34 PM, William Slacum <
wilhelm.von.cloud@accumulo.net> wrote:

> Make sure that all of the jars you pass to libjars exist and you're using
> ToolRunner.run, which will parse out those options.
>
>
> On Thu, Feb 14, 2013 at 2:20 PM, Chris Sigman <cy...@gmail.com> wrote:
>
>> Yes, everything's readable by everyone.  As I said before, the odd thing
>> is that running one of the example jobs like Wordcount work just fine.
>>
>>
>> --
>> Chris
>>
>>
>> On Thu, Feb 14, 2013 at 2:17 PM, Keith Turner <ke...@deenlo.com> wrote:
>>
>>> On Thu, Feb 14, 2013 at 1:53 PM, Chris Sigman <cy...@gmail.com>
>>> wrote:
>>> > Yep, all of the jars are also available on the datanodes
>>>
>>> Also are the jars readable by the user running the M/R job?
>>>
>>> >
>>> >
>>> > --
>>> > Chris
>>> >
>>> >
>>> > On Thu, Feb 14, 2013 at 1:51 PM, Billie Rinaldi <bi...@apache.org>
>>> wrote:
>>> >>
>>> >> On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <cy...@gmail.com>
>>> wrote:
>>> >>>
>>> >>> Hi everyone,
>>> >>>
>>> >>> I've got a job I'm running that I can't figure out why it's failing.
>>> >>> I've tried running jobs from the examples, and they work just fine.
>>>  I'm
>>> >>> running the job via
>>> >>>
>>> >>> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst namenode
>>> >>> > root pass stockdata movingaverage
>>> >>>
>>> >>> which I see is running the following exec call that seems perfect to
>>> me:
>>> >>>
>>> >>> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar
>>> >>> movingaverage.MAJob -libjars
>>> >>>
>>> "/opt/accumulo/lib/libthrift-0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-collections-3.2.jar,/opt/accumulo/lib/commons-configuration-1.5.jar,/opt/accumulo/lib/commons-io-1.4.jar,/opt/accumulo/lib/commons-jci-core-1.0.jar,/opt/accumulo/lib/commons-jci-fam-1.0.jar,/opt/accumulo/lib/commons-lang-2.4.jar,/opt/accumulo/lib/commons-logging-1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar"
>>> >>> inst namenode root pass tmpdatatable movingaverage
>>> >>
>>> >>
>>> >> Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your hadoop
>>> nodes,
>>> >> specifically the one that's running the map?
>>> >>
>>> >> Billie
>>> >>
>>> >>
>>> >>>
>>> >>>
>>> >>> but when the job runs, it gets to the map phase and fails:
>>> >>>
>>> >>> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
>>> >>> attempt_201301171408_0293_m_000000_0, Status : FAILED
>>> >>> java.lang.RuntimeException: java.lang.ClassNotFoundException:
>>> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>>> >>>     at
>>> >>>
>>> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1004)
>>> >>>     at
>>> >>>
>>> org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(JobContext.java:205)
>>> >>>     at
>>> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
>>> >>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>>> >>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
>>> >>>     at java.security.AccessController.doPrivileged(Native Method)
>>> >>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>> >>>     at
>>> >>>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
>>> >>>     at org.apache.hadoop.mapred.Child.main(Child.java:260)
>>> >>> Caused by: java.lang.ClassNotFoundException:
>>> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>>> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>> >>>     at java.security.AccessController.doPrivileged(Native Method)
>>> >>>
>>> >>> I've also tried hacking it to work by adding the accumulo-core jar to
>>> >>> hadoop's lib dir, but that doesn't seem to work either.
>>> >>>
>>> >>> Thanks for any help,
>>> >>> --
>>> >>> Chris
>>> >>
>>> >>
>>> >
>>>
>>
>>
>

Re: Jobs failing with ClassNotFoundException

Posted by William Slacum <wi...@accumulo.net>.
Make sure that all of the jars you pass to -libjars exist and that you're
using ToolRunner.run, which will parse out those options.
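
For the first part, a quick local check is enough.  Here's a throwaway sketch that takes the same comma-separated list you pass to -libjars; the default entries are just the first two paths from the exec line earlier in the thread:

import java.io.File;

// Prints whether each entry in a comma-separated -libjars list exists and is
// readable on the local machine.
public class LibJarsCheck {
  public static void main(String[] args) {
    String libjars = args.length > 0
        ? args[0]
        : "/opt/accumulo/lib/libthrift-0.6.1.jar,"
        + "/opt/accumulo/lib/accumulo-core-1.4.2.jar";
    for (String path : libjars.split(",")) {
      File jar = new File(path.trim());
      System.out.printf("%-60s %s%n", path.trim(),
          jar.canRead() ? "ok" : "MISSING or unreadable");
    }
  }
}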

On Thu, Feb 14, 2013 at 2:20 PM, Chris Sigman <cy...@gmail.com> wrote:

> Yes, everything's readable by everyone.  As I said before, the odd thing
> is that running one of the example jobs like Wordcount work just fine.
>
>
> --
> Chris
>
>
> On Thu, Feb 14, 2013 at 2:17 PM, Keith Turner <ke...@deenlo.com> wrote:
>
>> On Thu, Feb 14, 2013 at 1:53 PM, Chris Sigman <cy...@gmail.com> wrote:
>> > Yep, all of the jars are also available on the datanodes
>>
>> Also are the jars readable by the user running the M/R job?
>>
>> >
>> >
>> > --
>> > Chris
>> >
>> >
>> > On Thu, Feb 14, 2013 at 1:51 PM, Billie Rinaldi <bi...@apache.org>
>> wrote:
>> >>
>> >> On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <cy...@gmail.com>
>> wrote:
>> >>>
>> >>> Hi everyone,
>> >>>
>> >>> I've got a job I'm running that I can't figure out why it's failing.
>> >>> I've tried running jobs from the examples, and they work just fine.
>>  I'm
>> >>> running the job via
>> >>>
>> >>> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst namenode
>> >>> > root pass stockdata movingaverage
>> >>>
>> >>> which I see is running the following exec call that seems perfect to
>> me:
>> >>>
>> >>> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar
>> >>> movingaverage.MAJob -libjars
>> >>>
>> "/opt/accumulo/lib/libthrift-0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-collections-3.2.jar,/opt/accumulo/lib/commons-configuration-1.5.jar,/opt/accumulo/lib/commons-io-1.4.jar,/opt/accumulo/lib/commons-jci-core-1.0.jar,/opt/accumulo/lib/commons-jci-fam-1.0.jar,/opt/accumulo/lib/commons-lang-2.4.jar,/opt/accumulo/lib/commons-logging-1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar"
>> >>> inst namenode root pass tmpdatatable movingaverage
>> >>
>> >>
>> >> Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your hadoop
>> nodes,
>> >> specifically the one that's running the map?
>> >>
>> >> Billie
>> >>
>> >>
>> >>>
>> >>>
>> >>> but when the job runs, it gets to the map phase and fails:
>> >>>
>> >>> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
>> >>> attempt_201301171408_0293_m_000000_0, Status : FAILED
>> >>> java.lang.RuntimeException: java.lang.ClassNotFoundException:
>> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>> >>>     at
>> >>> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1004)
>> >>>     at
>> >>>
>> org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(JobContext.java:205)
>> >>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
>> >>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>> >>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
>> >>>     at java.security.AccessController.doPrivileged(Native Method)
>> >>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>> >>>     at
>> >>>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
>> >>>     at org.apache.hadoop.mapred.Child.main(Child.java:260)
>> >>> Caused by: java.lang.ClassNotFoundException:
>> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>> >>>     at java.security.AccessController.doPrivileged(Native Method)
>> >>>
>> >>> I've also tried hacking it to work by adding the accumulo-core jar to
>> >>> hadoop's lib dir, but that doesn't seem to work either.
>> >>>
>> >>> Thanks for any help,
>> >>> --
>> >>> Chris
>> >>
>> >>
>> >
>>
>
>

Re: Jobs failing with ClassNotFoundException

Posted by Chris Sigman <cy...@gmail.com>.
Yes, everything's readable by everyone.  As I said before, the odd thing is
that running one of the example jobs, like WordCount, works just fine.


--
Chris


On Thu, Feb 14, 2013 at 2:17 PM, Keith Turner <ke...@deenlo.com> wrote:

> On Thu, Feb 14, 2013 at 1:53 PM, Chris Sigman <cy...@gmail.com> wrote:
> > Yep, all of the jars are also available on the datanodes
>
> Also are the jars readable by the user running the M/R job?
>
> >
> >
> > --
> > Chris
> >
> >
> > On Thu, Feb 14, 2013 at 1:51 PM, Billie Rinaldi <bi...@apache.org>
> wrote:
> >>
> >> On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <cy...@gmail.com>
> wrote:
> >>>
> >>> Hi everyone,
> >>>
> >>> I've got a job I'm running that I can't figure out why it's failing.
> >>> I've tried running jobs from the examples, and they work just fine.
>  I'm
> >>> running the job via
> >>>
> >>> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst namenode
> >>> > root pass stockdata movingaverage
> >>>
> >>> which I see is running the following exec call that seems perfect to
> me:
> >>>
> >>> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar
> >>> movingaverage.MAJob -libjars
> >>>
> "/opt/accumulo/lib/libthrift-0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-collections-3.2.jar,/opt/accumulo/lib/commons-configuration-1.5.jar,/opt/accumulo/lib/commons-io-1.4.jar,/opt/accumulo/lib/commons-jci-core-1.0.jar,/opt/accumulo/lib/commons-jci-fam-1.0.jar,/opt/accumulo/lib/commons-lang-2.4.jar,/opt/accumulo/lib/commons-logging-1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar"
> >>> inst namenode root pass tmpdatatable movingaverage
> >>
> >>
> >> Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your hadoop
> nodes,
> >> specifically the one that's running the map?
> >>
> >> Billie
> >>
> >>
> >>>
> >>>
> >>> but when the job runs, it gets to the map phase and fails:
> >>>
> >>> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
> >>> attempt_201301171408_0293_m_000000_0, Status : FAILED
> >>> java.lang.RuntimeException: java.lang.ClassNotFoundException:
> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
> >>>     at
> >>> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1004)
> >>>     at
> >>>
> org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(JobContext.java:205)
> >>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
> >>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
> >>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
> >>>     at java.security.AccessController.doPrivileged(Native Method)
> >>>     at javax.security.auth.Subject.doAs(Subject.java:415)
> >>>     at
> >>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
> >>>     at org.apache.hadoop.mapred.Child.main(Child.java:260)
> >>> Caused by: java.lang.ClassNotFoundException:
> >>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> >>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> >>>     at java.security.AccessController.doPrivileged(Native Method)
> >>>
> >>> I've also tried hacking it to work by adding the accumulo-core jar to
> >>> hadoop's lib dir, but that doesn't seem to work either.
> >>>
> >>> Thanks for any help,
> >>> --
> >>> Chris
> >>
> >>
> >
>

Re: Jobs failing with ClassNotFoundException

Posted by Keith Turner <ke...@deenlo.com>.
On Thu, Feb 14, 2013 at 1:53 PM, Chris Sigman <cy...@gmail.com> wrote:
> Yep, all of the jars are also available on the datanodes

Also are the jars readable by the user running the M/R job?

>
>
> --
> Chris
>
>
> On Thu, Feb 14, 2013 at 1:51 PM, Billie Rinaldi <bi...@apache.org> wrote:
>>
>> On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <cy...@gmail.com> wrote:
>>>
>>> Hi everyone,
>>>
>>> I've got a job I'm running that I can't figure out why it's failing.
>>> I've tried running jobs from the examples, and they work just fine.  I'm
>>> running the job via
>>>
>>> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst namenode
>>> > root pass stockdata movingaverage
>>>
>>> which I see is running the following exec call that seems perfect to me:
>>>
>>> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar
>>> movingaverage.MAJob -libjars
>>> "/opt/accumulo/lib/libthrift-0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-collections-3.2.jar,/opt/accumulo/lib/commons-configuration-1.5.jar,/opt/accumulo/lib/commons-io-1.4.jar,/opt/accumulo/lib/commons-jci-core-1.0.jar,/opt/accumulo/lib/commons-jci-fam-1.0.jar,/opt/accumulo/lib/commons-lang-2.4.jar,/opt/accumulo/lib/commons-logging-1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar"
>>> inst namenode root pass tmpdatatable movingaverage
>>
>>
>> Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your hadoop nodes,
>> specifically the one that's running the map?
>>
>> Billie
>>
>>
>>>
>>>
>>> but when the job runs, it gets to the map phase and fails:
>>>
>>> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
>>> attempt_201301171408_0293_m_000000_0, Status : FAILED
>>> java.lang.RuntimeException: java.lang.ClassNotFoundException:
>>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>>>     at
>>> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1004)
>>>     at
>>> org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(JobContext.java:205)
>>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
>>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>>     at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
>>>     at org.apache.hadoop.mapred.Child.main(Child.java:260)
>>> Caused by: java.lang.ClassNotFoundException:
>>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>
>>> I've also tried hacking it to work by adding the accumulo-core jar to
>>> hadoop's lib dir, but that doesn't seem to work either.
>>>
>>> Thanks for any help,
>>> --
>>> Chris
>>
>>
>

Re: Jobs failing with ClassNotFoundException

Posted by Chris Sigman <cy...@gmail.com>.
Yep, all of the jars are also available on the datanodes


--
Chris


On Thu, Feb 14, 2013 at 1:51 PM, Billie Rinaldi <bi...@apache.org> wrote:

> On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <cy...@gmail.com> wrote:
>
>> Hi everyone,
>>
>> I've got a job I'm running that I can't figure out why it's failing.
>>  I've tried running jobs from the examples, and they work just fine.  I'm
>> running the job via
>>
>> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst namenode
>> root pass stockdata movingaverage
>>
>> which I see is running the following exec call that seems perfect to me:
>>
>> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar
>> movingaverage.MAJob -libjars "/opt/accumulo/lib/libthrift-
>> 0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/
>> lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/
>> lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-
>> collections-3.2.jar,/opt/accumulo/lib/commons-configuration-1.5.jar,/opt/
>> accumulo/lib/commons-io-1.4.jar,/opt/accumulo/lib/commons-
>> jci-core-1.0.jar,/opt/accumulo/lib/commons-jci-fam-
>> 1.0.jar,/opt/accumulo/lib/commons-lang-2.4.jar,/opt/
>> accumulo/lib/commons-logging-1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar"
>> inst namenode root pass tmpdatatable movingaverage
>>
>
> Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your hadoop nodes,
> specifically the one that's running the map?
>
> Billie
>
>
>
>>
>> but when the job runs, it gets to the map phase and fails:
>>
>> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
>> attempt_201301171408_0293_m_000000_0, Status : FAILED
>> java.lang.RuntimeException: java.lang.ClassNotFoundException:
>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>>     at org.apache.hadoop.conf.Configuration.getClass(
>> Configuration.java:1004)
>>     at org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(
>> JobContext.java:205)
>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(
>> UserGroupInformation.java:1278)
>>     at org.apache.hadoop.mapred.Child.main(Child.java:260)
>> Caused by: java.lang.ClassNotFoundException: org.apache.accumulo.core.
>> client.mapreduce.AccumuloInputFormat
>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>
>> I've also tried hacking it to work by adding the accumulo-core jar to
>> hadoop's lib dir, but that doesn't seem to work either.
>>
>> Thanks for any help,
>> --
>> Chris
>>
>
>

Re: Jobs failing with ClassNotFoundException

Posted by Chris Sigman <cy...@gmail.com>.
I should also note that my job does implement Tool.

Thanks,
--
Chris
On Feb 14, 2013 2:12 PM, "John Vines" <vi...@apache.org> wrote:

> He shouldn't have to, since he's using tool.sh, which uses -libjars.
> Unless cdh3u5 changed -libjars behavior?
>
>
> On Thu, Feb 14, 2013 at 1:51 PM, Billie Rinaldi <bi...@apache.org> wrote:
>
>> On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <cy...@gmail.com>wrote:
>>
>>> Hi everyone,
>>>
>>> I've got a job I'm running that I can't figure out why it's failing.
>>>  I've tried running jobs from the examples, and they work just fine.  I'm
>>> running the job via
>>>
>>> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst namenode
>>> root pass stockdata movingaverage
>>>
>>> which I see is running the following exec call that seems perfect to me:
>>>
>>> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar
>>> movingaverage.MAJob -libjars "/opt/accumulo/lib/libthrift-
>>> 0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/
>>> lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/
>>> lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-
>>> collections-3.2.jar,/opt/accumulo/lib/commons-
>>> configuration-1.5.jar,/opt/accumulo/lib/commons-io-1.4.
>>> jar,/opt/accumulo/lib/commons-jci-core-1.0.jar,/opt/
>>> accumulo/lib/commons-jci-fam-1.0.jar,/opt/accumulo/lib/
>>> commons-lang-2.4.jar,/opt/accumulo/lib/commons-logging-
>>> 1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar" inst
>>> namenode root pass tmpdatatable movingaverage
>>>
>>
>> Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your hadoop
>> nodes, specifically the one that's running the map?
>>
>> Billie
>>
>>
>>
>>>
>>> but when the job runs, it gets to the map phase and fails:
>>>
>>> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
>>> attempt_201301171408_0293_m_000000_0, Status : FAILED
>>> java.lang.RuntimeException: java.lang.ClassNotFoundException:
>>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>>>     at org.apache.hadoop.conf.Configuration.getClass(
>>> Configuration.java:1004)
>>>     at org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(
>>> JobContext.java:205)
>>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
>>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>>     at org.apache.hadoop.security.UserGroupInformation.doAs(
>>> UserGroupInformation.java:1278)
>>>     at org.apache.hadoop.mapred.Child.main(Child.java:260)
>>> Caused by: java.lang.ClassNotFoundException: org.apache.accumulo.core.
>>> client.mapreduce.AccumuloInputFormat
>>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>
>>> I've also tried hacking it to work by adding the accumulo-core jar to
>>> hadoop's lib dir, but that doesn't seem to work either.
>>>
>>> Thanks for any help,
>>> --
>>> Chris
>>>
>>
>>
>

Re: Jobs failing with ClassNotFoundException

Posted by John Vines <vi...@apache.org>.
He shouldn't have to, since he's using tool.sh, which uses -libjars. Unless
cdh3u5 changed -libjars behavior?


On Thu, Feb 14, 2013 at 1:51 PM, Billie Rinaldi <bi...@apache.org> wrote:

> On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <cy...@gmail.com> wrote:
>
>> Hi everyone,
>>
>> I've got a job I'm running that I can't figure out why it's failing.
>>  I've tried running jobs from the examples, and they work just fine.  I'm
>> running the job via
>>
>> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst namenode
>> root pass stockdata movingaverage
>>
>> which I see is running the following exec call that seems perfect to me:
>>
>> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar
>> movingaverage.MAJob -libjars "/opt/accumulo/lib/libthrift-
>> 0.6.1.jar,/opt/accumulo/lib/accumulo-core-1.4.2.jar,/usr/
>> lib/zookeeper//zookeeper-3.3.5-cdh3u5.jar,/opt/accumulo/
>> lib/cloudtrace-1.4.2.jar,/opt/accumulo/lib/commons-
>> collections-3.2.jar,/opt/accumulo/lib/commons-configuration-1.5.jar,/opt/
>> accumulo/lib/commons-io-1.4.jar,/opt/accumulo/lib/commons-
>> jci-core-1.0.jar,/opt/accumulo/lib/commons-jci-fam-
>> 1.0.jar,/opt/accumulo/lib/commons-lang-2.4.jar,/opt/
>> accumulo/lib/commons-logging-1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar"
>> inst namenode root pass tmpdatatable movingaverage
>>
>
> Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your hadoop nodes,
> specifically the one that's running the map?
>
> Billie
>
>
>
>>
>> but when the job runs, it gets to the map phase and fails:
>>
>> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
>> attempt_201301171408_0293_m_000000_0, Status : FAILED
>> java.lang.RuntimeException: java.lang.ClassNotFoundException:
>> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>>     at org.apache.hadoop.conf.Configuration.getClass(
>> Configuration.java:1004)
>>     at org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(
>> JobContext.java:205)
>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(
>> UserGroupInformation.java:1278)
>>     at org.apache.hadoop.mapred.Child.main(Child.java:260)
>> Caused by: java.lang.ClassNotFoundException: org.apache.accumulo.core.
>> client.mapreduce.AccumuloInputFormat
>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>
>> I've also tried hacking it to work by adding the accumulo-core jar to
>> hadoop's lib dir, but that doesn't seem to work either.
>>
>> Thanks for any help,
>> --
>> Chris
>>
>
>

Re: Jobs failing with ClassNotFoundException

Posted by Billie Rinaldi <bi...@apache.org>.
On Thu, Feb 14, 2013 at 10:41 AM, Chris Sigman <cy...@gmail.com> wrote:

> Hi everyone,
>
> I've got a job I'm running that I can't figure out why it's failing.  I've
> tried running jobs from the examples, and they work just fine.  I'm running
> the job via
>
> > ./bin/tool.sh ~/MovingAverage.jar movingaverage.MAJob inst namenode
> root pass stockdata movingaverage
>
> which I see is running the following exec call that seems perfect to me:
>
> exec /usr/lib/hadoop/bin/hadoop jar /MovingAverage.jar movingaverage.MAJob
> -libjars "/opt/accumulo/lib/libthrift-0.6.1.jar,/opt/accumulo/lib/
> accumulo-core-1.4.2.jar,/usr/lib/zookeeper//zookeeper-3.3.
> 5-cdh3u5.jar,/opt/accumulo/lib/cloudtrace-1.4.2.jar,/opt/
> accumulo/lib/commons-collections-3.2.jar,/opt/accumulo/lib/commons-
> configuration-1.5.jar,/opt/accumulo/lib/commons-io-1.4.
> jar,/opt/accumulo/lib/commons-jci-core-1.0.jar,/opt/
> accumulo/lib/commons-jci-fam-1.0.jar,/opt/accumulo/lib/
> commons-lang-2.4.jar,/opt/accumulo/lib/commons-logging-
> 1.0.4.jar,/opt/accumulo/lib/commons-logging-api-1.0.4.jar" inst namenode
> root pass tmpdatatable movingaverage
>

Does /opt/accumulo/lib/accumulo-core-1.4.2.jar exist on your hadoop nodes,
specifically the one that's running the map?

Billie



>
> but when the job runs, it gets to the map phase and fails:
>
> 13/02/14 13:25:26 INFO mapred.JobClient: Task Id :
> attempt_201301171408_0293_m_000000_0, Status : FAILED
> java.lang.RuntimeException: java.lang.ClassNotFoundException:
> org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
>     at org.apache.hadoop.conf.Configuration.getClass(
> Configuration.java:1004)
>     at org.apache.hadoop.mapreduce.JobContext.getInputFormatClass(
> JobContext.java:205)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:606)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
>     at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1278)
>     at org.apache.hadoop.mapred.Child.main(Child.java:260)
> Caused by: java.lang.ClassNotFoundException: org.apache.accumulo.core.
> client.mapreduce.AccumuloInputFormat
>     at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>     at java.security.AccessController.doPrivileged(Native Method)
>
> I've also tried hacking it to work by adding the accumulo-core jar to
> hadoop's lib dir, but that doesn't seem to work either.
>
> Thanks for any help,
> --
> Chris
>