Posted to user@pig.apache.org by Berin Loritsch <be...@d-haven.org> on 2015/01/07 03:41:59 UTC

Exception during execute

I'm trying to run a Pig Latin script on 0.14.0, and I've been having some
configuration issues.  I'm assuming this is part of that.  I have Hadoop
2.3.0 on Windows running as a single node.  When I run my Pig script, I get
this exception:

Backend error message during job submission
-------------------------------------------
Unexpected System Error Occured: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.setupUdfEnvAndStores(PigOutputFormat.java:235)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecs(PigOutputFormat.java:183)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:240)
at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:121)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)

Pig Stack Trace
---------------
ERROR 2244: Job failed, hadoop does not return any error message

org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:179)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:234)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
at org.apache.pig.Main.run(Main.java:624)
at org.apache.pig.Main.main(Main.java:170)
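[Editor's note] An `IncompatibleClassChangeError` of the form "Found interface ... but class was expected" is a binary-compatibility failure: the calling code was compiled against Hadoop 1, where `org.apache.hadoop.mapreduce.JobContext` was a class, but is running against Hadoop 2, where it became an interface (note the `org.apache.pig.backend.hadoop20.PigJobControl` frame in the trace). A minimal diagnostic sketch, assuming nothing beyond the `JobContext` class name from the trace (the helper class itself is illustrative, not part of Pig), asks the classpath which shape it actually carries:

```java
public class CheckJobContext {
    // Returns "interface" or "class" for the named type on the current classpath.
    static String shape(String className) throws ClassNotFoundException {
        Class<?> c = Class.forName(className);
        return c.isInterface() ? "interface" : "class";
    }

    public static void main(String[] args) throws Exception {
        // Run with the same classpath Pig uses, e.g. (path is an assumption
        // about the Hadoop 2.x binary layout):
        //   java -cp "$HADOOP_HOME/share/hadoop/mapreduce/*:." CheckJobContext \
        //        org.apache.hadoop.mapreduce.JobContext
        // "interface" => Hadoop 2.x API, "class" => Hadoop 1.x API.
        String name = args.length > 0 ? args[0] : "java.lang.Runnable";
        System.out.println(name + " is a(n) " + shape(name));
    }
}
```

If this prints `interface` while the Pig jars on the classpath still contain the Hadoop 1 shims, the mismatch in the trace above is the expected result.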

Re: Exception during execute

Posted by Berin Loritsch <be...@d-haven.org>.
Thanks.  I may go that route.

Re: Exception during execute

Posted by Cheolsoo Park <pi...@gmail.com>.
Hi Berin,

Sorry to hear that you're having trouble using Pig. I admit that Pig
doesn't always work out of the box, but it's very hard to support all the
different environments that users have.

You might try a vendor distribution instead of the Apache distribution if
you're looking for an out-of-the-box solution. In particular, since you're
on Windows, HDP might be the best choice. My two cents.

Cheolsoo

Re: Exception during execute

Posted by Berin Loritsch <be...@d-haven.org>.
I'm sorry, that sounded rude.  I'm just trying to get an "out of the box"
solution.  So I either need a newer Hadoop compiled for Windows, or I need
to know which version of Pig was built against Hadoop 2.3.

To me, recompiling smells like Pig is using conditional compilation rather
than dynamically loading the compatibility jar it needs.  I did some more
googling and discovered this is more likely an issue of Pig trying to run
against Hadoop 1 (H1) when it is connected to Hadoop 2 (H2).  There are
separate directories for the h1/h2 jars in the lib folder; is it using the
wrong one even though it listed the installed Hadoop version on output?
Are there extra steps?  If it's not compatible with a version, why can't it
error out with a clear message saying that this version of Pig can't work
with that version of Hadoop and should be recompiled?

For the record, I am using an already-compiled version of Pig.  I'm trying
not to set up the 3+ different build tools I seem to have come across just
in the Hadoop world.  For me, it would be easier to have a known working
configuration that I can just use.

Re: Exception during execute

Posted by Berin Loritsch <be...@d-haven.org>.
Better yet, tell me where I can get the right Hadoop version precompiled
for Windows.  I'm in a .NET shop, and my goal is to set up a test
infrastructure, not a Java development stack.

Pig wasn't working with the stock Hadoop 2.6 on Windows, so I had to
downgrade to get this far.

Re: Exception during execute

Posted by Lorand Bendig <lb...@gmail.com>.
Pig 0.14 uses Hadoop 2.4.0 by default, but you have Hadoop 2.3.0.
You may change the Hadoop versions in $PIG_HOME/ivy/libraries.properties to:
hadoop-common.version=2.3.0
hadoop-hdfs.version=2.3.0
hadoop-mapreduce.version=2.3.0

Then recompile Pig:
ant clean jar -Dhadoopversion=23

--Lorand
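[Editor's note] The fix above pins three Ivy keys in $PIG_HOME/ivy/libraries.properties before the ant rebuild. As a hedged illustration of what that edit amounts to (the helper below is hypothetical; only the file name, the three keys, and the target version come from the message), here is a line-oriented rewrite that leaves comments and all other properties untouched:

```java
import java.util.ArrayList;
import java.util.List;

public class PinHadoopVersion {
    // Overwrite the three Hadoop version keys, leaving every other line as-is.
    static List<String> pin(List<String> lines, String version) {
        List<String> keys = List.of("hadoop-common.version",
                                    "hadoop-hdfs.version",
                                    "hadoop-mapreduce.version");
        List<String> out = new ArrayList<>();
        for (String line : lines) {
            String key = line.split("=", 2)[0].trim();
            out.add(keys.contains(key) ? key + "=" + version : line);
        }
        return out;
    }

    public static void main(String[] args) {
        // In practice: read $PIG_HOME/ivy/libraries.properties, apply
        // pin(lines, "2.3.0"), write it back, then run:
        //   ant clean jar -Dhadoopversion=23
        System.out.println(pin(List.of("hadoop-common.version=2.4.0"), "2.3.0"));
        // prints [hadoop-common.version=2.3.0]
    }
}
```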
