Posted to issues@kylin.apache.org by "chenwen (JIRA)" <ji...@apache.org> on 2018/08/24 08:47:00 UTC

[jira] [Resolved] (KYLIN-3508) kylin cube kafka streaming lz4 exception

     [ https://issues.apache.org/jira/browse/KYLIN-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chenwen resolved KYLIN-3508.
----------------------------
       Resolution: Fixed
    Fix Version/s: v2.4.0

Add the Kafka lz4 dependency jar to Hadoop's MapReduce lib directory:
{hadoop_home}/share/hadoop/mapreduce/lib
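
For reference, a minimal standalone check (a sketch only, not part of Kylin; the class name is taken from the stack trace below, and the lz4 classes typically ship as lz4-<version>.jar under Kafka's libs directory) to verify the lz4 classes are visible on a given classpath:

public class Lz4ClasspathCheck {
    public static void main(String[] args) {
        try {
            // Kafka loads this class reflectively when decompressing lz4 record batches;
            // the ClassNotFoundException in the stack trace below means it is missing
            // from the MapReduce task classpath.
            Class.forName("net.jpountz.lz4.LZ4Exception");
            System.out.println("lz4-java found on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("lz4-java missing; copy Kafka's lz4 jar into "
                    + "{hadoop_home}/share/hadoop/mapreduce/lib and rerun the cube build");
        }
    }
}

Running it with java -cp ".:{hadoop_home}/share/hadoop/mapreduce/lib/*" Lz4ClasspathCheck after copying the jar should report the class as found, since that directory is on the default MapReduce task classpath.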

> kylin cube kafka streaming lz4 exception
> ----------------------------------------
>
>                 Key: KYLIN-3508
>                 URL: https://issues.apache.org/jira/browse/KYLIN-3508
>             Project: Kylin
>          Issue Type: Bug
>          Components: Streaming
>    Affects Versions: v2.4.0
>         Environment: hadoop 2.7.2
> hbase 1.2.5
> hive 1.2.2
> kylin-2.4.0-bin-hbase1x
> kafka_2.10-0.10.2.2
> centos 7
>            Reporter: chenwen
>            Priority: Major
>             Fix For: v2.4.0
>
>
> I have a Kafka topic that uses the lz4 compression algorithm. When I created a cube in Kylin to consume this topic, the build reported the following error. Is my configuration wrong? The other algorithms, gzip and snappy, work fine when I switch to them.
> Error: java.lang.ClassNotFoundException: net.jpountz.lz4.LZ4Exception
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     at java.lang.Class.forName0(Native Method)
>     at java.lang.Class.forName(Class.java:264)
>     at org.apache.kafka.common.record.MemoryRecordsBuilder$4.get(MemoryRecordsBuilder.java:82)
>     at org.apache.kafka.common.record.MemoryRecordsBuilder$MemoizingConstructorSupplier.get(MemoryRecordsBuilder.java:489)
>     at org.apache.kafka.common.record.MemoryRecordsBuilder.wrapForInput(MemoryRecordsBuilder.java:455)
>     at org.apache.kafka.common.record.RecordsIterator$DeepRecordsIterator.<init>(RecordsIterator.java:157)
>     at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:81)
>     at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:33)
>     at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
>     at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
>     at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:787)
>     at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:482)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1062)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:996)
>     at org.apache.kylin.source.kafka.hadoop.KafkaInputRecordReader.nextKeyValue(KafkaInputRecordReader.java:119)
>     at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
>     at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>     at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Container killed by the ApplicationMaster. Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)