Posted to user@avro.apache.org by Deepak <de...@gmail.com> on 2014/05/12 01:52:17 UTC

Re: 2.4 v of Hadoop causes IncompatibleClassChangeError


> On 07-May-2014, at 7:35 am, ÐΞ€ρ@Ҝ (๏̯͡๏) <de...@gmail.com> wrote:
> 
> Exception:
> java.lang.Exception: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
> 	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
> 	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
> Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
> 	at org.apache.avro.mapreduce.AvroRecordReaderBase.initialize(AvroRecordReaderBase.java:86)
> 	at com.tracking.sdk.pig.load.format.AggregateRecordReader.initialize(AggregateRecordReader.java:41)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.initialize(PigRecordReader.java:192)
> 	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:525)
> 	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
> 	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:744)
> 
> 
> 
> Imports used in my RecordReader class:
> import org.apache.avro.Schema;
> import org.apache.avro.mapreduce.AvroKeyValueRecordReader;
> import org.apache.hadoop.mapreduce.InputSplit;
> import org.apache.hadoop.mapreduce.TaskAttemptContext;
> 
> Any suggestions? Or does this require a fix from Avro?
> 
> Regards,
> 
> Deepak
> 
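Background on the error: in Hadoop 1.x, org.apache.hadoop.mapreduce.TaskAttemptContext is a concrete class, while in Hadoop 2.x it became an interface. Bytecode compiled against one cannot link against the other, which is exactly what IncompatibleClassChangeError reports, so an avro-mapred (or user) jar built against Hadoop 1 fails this way on Hadoop 2. A quick way to see which flavour of the class is actually on a job's classpath is a small diagnostic along these lines (a sketch only; the class name HadoopFlavourCheck is made up for illustration):

import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Prints whether TaskAttemptContext is a class (Hadoop 1.x) or an
// interface (Hadoop 2.x), and which jar it was loaded from.
public class HadoopFlavourCheck {
  public static void main(String[] args) {
    Class<?> tac = TaskAttemptContext.class;
    System.out.println("TaskAttemptContext is "
        + (tac.isInterface() ? "an interface (Hadoop 2.x)" : "a class (Hadoop 1.x)"));
    // getCodeSource() can be null for bootstrap classes, but the Hadoop
    // classes come from a jar on the application classpath.
    System.out.println("Loaded from: "
        + tac.getProtectionDomain().getCodeSource().getLocation());
  }
}

Run this with the same Hadoop and Avro jars the job uses to confirm which build is actually being picked up.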

Re: 2.4 v of Hadoop causes IncompatibleClassChangeError

Posted by Harsh J <ha...@cloudera.com>.
We've had hadoop2 profiles for quite a few released versions now, and
the Maven dependencies accept a classifier for this as well.

Are you certain you are referencing the right
avro-mapred-x.x.x-hadoop2.jar in your job? It's a separate download if
you do not use Maven, etc.: http://apache.claz.org/avro/stable/java/
(search the page for hadoop2, and use these jars where relevant)
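For reference, the classifier goes on the avro-mapred artifact itself. A minimal sketch of the dependency, with the version only as an example (use whichever release you are on):

<dependency>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-mapred</artifactId>
  <version>1.7.6</version>
  <classifier>hadoop2</classifier>
</dependency>

Without the classifier you get the default build, which at the time of this thread was the Hadoop 1 flavour and produces exactly the class-vs-interface mismatch in the trace above.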

On Mon, May 12, 2014 at 9:06 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) <de...@gmail.com> wrote:
> I see that trunk has a provision to build Avro with hadoop1/hadoop2
> profiles, so I guess this is no longer a bug in trunk.
> Hence I built Avro from trunk with
> mvn clean install -DskipTests=true eclipse:clean eclipse:eclipse -P hadoop2
> I navigated to org.apache.avro.mapreduce.AvroRecordReaderBase in Eclipse
> and clicked on the import org.apache.hadoop.mapreduce.TaskAttemptContext. It
> still points to the hadoop-0.20.205 library instead of the hadoop2.x client
> library.
>
> Am I doing something wrong?
>
>
>
> On Mon, May 12, 2014 at 8:51 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) <de...@gmail.com> wrote:
>>
>> Thanks.
>> https://issues.apache.org/jira/browse/AVRO-1506
>>
>> I can take it up.
>>
>>
>> On Mon, May 12, 2014 at 7:22 AM, Lewis John Mcgibbney
>> <le...@gmail.com> wrote:
>>>
>>> My guess is that this is on the Avro side. We've seen similar traces with Nutch.
>>> This looks like a JIRA ticket.


-- 
Harsh J

Re: 2.4 v of Hadoop causes IncompatibleClassChangeError

Posted by ÐΞ€ρ@Ҝ (๏̯͡๏) <de...@gmail.com>.
I see that trunk has a provision to build Avro with hadoop1/hadoop2
profiles, so I guess this is no longer a bug in trunk.
Hence I built Avro from trunk with
mvn clean install -DskipTests=true eclipse:clean eclipse:eclipse -P hadoop2
I navigated to org.apache.avro.mapreduce.AvroRecordReaderBase in Eclipse
and clicked on the import org.apache.hadoop.mapreduce.TaskAttemptContext. It
still points to the hadoop-0.20.205 library instead of the hadoop2.x client
library.

Am I doing something wrong?
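
A way to double-check what the hadoop2 profile actually resolves, independently of what Eclipse shows (the .classpath files generated by eclipse:eclipse can keep stale entries), is to ask Maven directly. A sketch, assuming it is run from the avro-mapred module of the checkout:

mvn dependency:tree -P hadoop2 -Dincludes=org.apache.hadoop

If the tree still shows a 0.20.x artifact, the profile is not taking effect for that module; if it shows 2.x, only the Eclipse metadata is stale and regenerating it (or refreshing the project) should clear this up.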


On Mon, May 12, 2014 at 8:51 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) <de...@gmail.com> wrote:

> Thanks.
> https://issues.apache.org/jira/browse/AVRO-1506
>
> I can take it up.
>
>
> On Mon, May 12, 2014 at 7:22 AM, Lewis John Mcgibbney <
> lewis.mcgibbney@gmail.com> wrote:
>
>> My guess is that this is on the Avro side. We've seen similar traces with Nutch.
>> This looks like a JIRA ticket.


-- 
Deepak

Re: 2.4 v of Hadoop causes IncompatibleClassChangeError

Posted by ÐΞ€ρ@Ҝ (๏̯͡๏) <de...@gmail.com>.
Thanks.
https://issues.apache.org/jira/browse/AVRO-1506

I can take it up.


On Mon, May 12, 2014 at 7:22 AM, Lewis John Mcgibbney <
lewis.mcgibbney@gmail.com> wrote:

> My guess is that this is on the Avro side. We've seen similar traces with Nutch.
> This looks like a JIRA ticket.


-- 
Deepak

Re: 2.4 v of Hadoop causes IncompatibleClassChangeError

Posted by Lewis John Mcgibbney <le...@gmail.com>.
My guess is that this is on the Avro side. We've seen similar traces with Nutch.
This looks like a JIRA ticket.