Posted to users@kafka.apache.org by Raimon Bosch <ra...@gmail.com> on 2011/11/24 19:37:24 UTC

Cannot integrate hadoop-consumer in my application

Hi,

I'm trying to rewrite the hadoop-consumer code to customize it and emit Hive data directly. For that we are using our own library compiled with JDK6. Here is the problem:

java.lang.UnsupportedClassVersionError: kafka/javaapi/producer/async/CallbackHandler : Unsupported major.minor version 51.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
        at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        at kafka.javaapi.consumer.SimpleConsumer.multifetch(SimpleConsumer.scala:55)
        at kafka.etl.KafkaETLContext.fetchMore(KafkaETLContext.java:158)
        at kafka.etl.KafkaETLRecordReader.next(KafkaETLRecordReader.java:156)
        at kafka.etl.KafkaETLRecordReader.next(KafkaETLRecordReader.java:30)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:192)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:176)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
11/11/24 19:29:07 INFO mapred.JobClient: Job complete: job_local_0001

It seems that the Scala code included in trunk is compiled with JDK7 (class file major version 51.0 is Java 7), so it cannot be loaded by my JDK6 runtime. That's weird, because I compiled everything on the same machine using "./sbt package".

Probably it's because the Scala part is precompiled. What do you think?

Re: Cannot integrate hadoop-consumer in my application

Posted by Raimon Bosch <ra...@gmail.com>.
Problem fixed. It was a mess with my local configuration... I tried on a Linux machine running only OpenJDK6 and everything works perfectly.
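A local-configuration mess like this usually means javac and java resolve to different JDK installs; a quick sanity check, assuming a standard JDK on the PATH:

```shell
# If these disagree (e.g. javac 1.7 but java 1.6), classes built locally will
# fail at run time with UnsupportedClassVersionError.
java -version
javac -version
# JAVA_HOME may legitimately be unset; PATH decides which binaries actually run.
echo "JAVA_HOME=${JAVA_HOME:-<unset>}"
```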
