Posted to user-zh@flink.apache.org by 张洪涛 <ho...@163.com> on 2019/02/22 06:55:57 UTC

[Blink] SQL client Kafka sink failure


Hi everyone!


I am testing the Blink SQL client Kafka sink connector, but writes to Kafka fail. Here are my steps:


Environment
Blink standalone mode




1. Configure the environment and start the SQL client


2. Create the Kafka sink table
CREATE TABLE kafka_sink (
   messageKey VARBINARY,
   messageValue VARBINARY,
   PRIMARY KEY (messageKey))
WITH (
   type = 'KAFKA011',
   topic = 'sink-topic',
   `bootstrap.servers` = '172.19.0.108:9092',
   retries = '3'
);


3. Create and execute the insert statement
INSERT INTO kafka_sink
SELECT CAST('123' AS VARBINARY) AS key,
CAST(CONCAT_WS(',', 'HELLO', 'WORLD') AS VARBINARY) AS msg;




Error log (from the task executor log)


The key symptom is that a class under the kafka common package cannot be found. However, the Kafka connector jars were already included when the SQL client was started, and at job submission these jars are uploaded to the cluster together with the jobgraph, so in theory all of these classes should be loadable.






2019-02-22 14:37:18,356 ERROR org.apache.flink.kafka011.shaded.org.apache.kafka.common.utils.KafkaThread  - Uncaught exception in kafka-producer-network-thread | producer-1:
java.lang.NoClassDefFoundError: org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C
        at org.apache.flink.kafka011.shaded.org.apache.kafka.common.record.DefaultRecordBatch.writeHeader(DefaultRecordBatch.java:468)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.common.record.MemoryRecordsBuilder.writeDefaultBatchHeader(MemoryRecordsBuilder.java:339)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.common.record.MemoryRecordsBuilder.close(MemoryRecordsBuilder.java:293)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.ProducerBatch.close(ProducerBatch.java:391)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.RecordAccumulator.drain(RecordAccumulator.java:485)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:254)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:223)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.kafka011.shaded.org.apache.kafka.common.utils.Crc32C
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$ChildFirstClassLoader.loadClass(FlinkUserCodeClassLoaders.java:120)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)




--
  Best Regards
  Hongtao


Re: [Blink] SQL client Kafka sink failure

Posted by Zhenghua Gao <do...@gmail.com>.
Try again in a clean environment (kill the standalone and sql client processes and delete their logs, then reproduce your problem),
and upload the corresponding standalonesession, taskexecutor, and sql client logs so we can take a look.
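
For a stock standalone setup, the clean reproduce could look roughly as follows (a sketch; the install path and script names are taken from the Flink 1.5.1 layout shown later in this thread and may differ in your Blink build):

cd /bigdata/flink-1.5.1
./bin/stop-cluster.sh        # stop the standalonesession and taskexecutor processes
rm -f log/*                  # clear old logs so the fresh run is easy to read
./bin/start-cluster.sh       # bring up a clean standalone cluster
./bin/sql-client.sh embedded -d conf/sql-client-defaults.yaml
# reproduce the failing INSERT, then collect log/flink-*-standalonesession-*.log,
# log/flink-*-taskexecutor-*.log and log/flink-*-sql-client-*.log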



Re: [Blink] SQL client Kafka sink failure

Posted by 张洪涛 <ho...@163.com>.

If I put the shaded Kafka connector jar under Blink's lib directory and then start up, everything works fine; the problem only appears when the jar is passed via the sql client --jar option.
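
For reference, the fallback that works looks roughly like this (a sketch, using the install path from the ps output quoted later in this thread):

cd /bigdata/flink-1.5.1
cp opt/connectors/kafka011/flink-connector-kafka-0.11_2.11-1.5.1-sql-jar.jar lib/
./bin/stop-cluster.sh && ./bin/start-cluster.sh   # restart so lib/ lands on every classpath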


I ran the test a few more times and found that which class is reported as not found is random.


Any suggestions?


2019-02-26 10:36:10,343 ERROR org.apache.flink.kafka011.shaded.org.apache.kafka.common.utils.KafkaThread  - Uncaught exception in kafka-producer-network-thread | producer-1:
java.lang.NoClassDefFoundError: org/apache/flink/kafka011/shaded/org/apache/kafka/common/requests/ProduceResponse$PartitionResponse
        at org.apache.flink.kafka011.shaded.org.apache.kafka.common.requests.ProduceResponse.<init>(ProduceResponse.java:107)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.common.requests.AbstractResponse.getResponse(AbstractResponse.java:55)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.NetworkClient.createResponse(NetworkClient.java:569)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:663)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
        at org.apache.flink.kafka011.shaded.org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.kafka011.shaded.org.apache.kafka.common.requests.ProduceResponse$PartitionResponse
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$ChildFirstClassLoader.loadClass(FlinkUserCodeClassLoaders.java:120)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)




--
  Best Regards,
  HongTao


Re: Re: Re: [Blink] SQL client Kafka sink failure

Posted by Becket Qin <be...@gmail.com>.
@Kurt,

Yes, this is expected. The shading prevents conflicts with any Kafka dependency that may be present in user code.
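
For anyone curious, the relocation is easy to confirm from the jar itself, using the same jar tool used elsewhere in this thread (a sketch):

jar -tf flink-connector-kafka-0.11_2.11-1.5.1-sql-jar.jar | grep '^org/apache/kafka/'
# should print nothing: no unshaded kafka classes are left
jar -tf flink-connector-kafka-0.11_2.11-1.5.1-sql-jar.jar | grep 'shaded/org/apache/kafka/' | head
# prints the relocated classes under org/apache/flink/kafka011/shaded/...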


Re: Re: Re: [Blink] SQL client Kafka sink failure

Posted by Kurt Young <yk...@gmail.com>.
Judging by the package path, the Kafka classes are shaded; is that expected? @Becket

Best,
Kurt



Re: Re: Re: [Blink] SQL client Kafka sink failure

Posted by Zhenghua Gao <do...@gmail.com>.
Please confirm that the standalone cluster and the sql client are using the same flink/blink binary distribution.
As I recall, a mismatch between the two can cause some strange problems.
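
One way to sanity-check this (a sketch, assuming a Linux host; the exact entry point class names may differ between Flink and Blink builds):

ps -ef | grep '[S]tandaloneSession'   # which distribution started the master?
ps -ef | grep '[T]askManagerRunner'   # which distribution started the worker?
ps -ef | grep '[S]qlClient'           # which distribution started the client?
# the -classpath of each process should point at the same dist jar, e.g.
md5sum /bigdata/flink-1.5.1/lib/flink-dist_2.11-1.5.1.jar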



Re: Re: Re: [Blink] SQL client Kafka sink failure

Posted by 张洪涛 <ho...@163.com>.

The launch parameters of sql-client.sh already include the Kafka-related jars on the classpath, and in addition every connector jar is passed via --jar.


These jars are uploaded to the cluster's blob store when sql-client submits the job, so it is very strange that the classes cannot be found.


 00:00:06 /usr/lib/jvm/java-1.8.0-openjdk/bin/java -Dlog.file=/bigdata/flink-1.5.1/log/flink-root-sql-client-gpu06.log -Dlog4j.configuration=file:/bigdata/flink-1.5.1/conf/log4j-cli.properties -Dlogback.configurationFile=file:/bigdata/flink-1.5.1/conf/logback.xml -classpath /bigdata/flink-1.5.1/lib/flink-python_2.11-1.5.1.jar:/bigdata/flink-1.5.1/lib/flink-shaded-hadoop2-uber-1.5.1.jar:/bigdata/flink-1.5.1/lib/log4j-1.2.17.jar:/bigdata/flink-1.5.1/lib/slf4j-log4j12-1.7.7.jar:/bigdata/flink-1.5.1/lib/flink-dist_2.11-1.5.1.jar::/bigdata/hadoop-2.7.5/etc/hadoop::/bigdata/flink-1.5.1/opt/connectors/kafka011/flink-connector-kafka-0.11_2.11-1.5.1-sql-jar.jar:/bigdata/flink-1.5.1/opt/connectors/kafka010/flink-connector-kafka-0.10_2.11-1.5.1-sql-jar.jar:/bigdata/flink-1.5.1/opt/connectors/kafka09/flink-connector-kafka-0.9_2.11-1.5.1-sql-jar.jar:/bigdata/flink-1.5.1/opt/connectors/kafka08/flink-connector-kafka-0.8_2.11-1.5.1.jar:/bigdata/flink-1.5.1/opt/connectors/flink-hbase_2.11-1.5.1.jar:/bigdata/flink-1.5.1/opt/connectors/flink-connector-hadoop-compatibility_2.11-1.5.1.jar:/bigdata/flink-1.5.1/opt/connectors/flink-connector-hive_2.11-1.5.1.jar:/bigdata/flink-1.5.1/opt/sql-client/datanucleus-api-jdo-4.2.4.jar:/bigdata/flink-1.5.1/opt/sql-client/javax.jdo-3.2.0-m3.jar:/bigdata/flink-1.5.1/opt/sql-client/datanucleus-core-4.1.17.jar:/bigdata/flink-1.5.1/opt/sql-client/datanucleus-rdbms-4.1.19.jar:/bigdata/flink-1.5.1/opt/sql-client/flink-sql-client-1.5.1.jar org.apache.flink.table.client.SqlClient embedded -d conf/sql-client-defaults.yaml --jar /bigdata/flink-1.5.1/opt/connectors/kafka011/flink-connector-kafka-0.11_2.11-1.5.1-sql-jar.jar --jar /bigdata/flink-1.5.1/opt/connectors/kafka010/flink-connector-kafka-0.10_2.11-1.5.1-sql-jar.jar --jar /bigdata/flink-1.5.1/opt/connectors/kafka09/flink-connector-kafka-0.9_2.11-1.5.1-sql-jar.jar --jar /bigdata/flink-1.5.1/opt/connectors/kafka08/flink-connector-kafka-0.8_2.11-1.5.1.jar --jar /bigdata/flink-1.5.1/opt/connectors/flink-hbase_2.11-1.5.1.jar --jar /bigdata/flink-1.5.1/opt/connectors/flink-connector-hadoop-compatibility_2.11-1.5.1.jar --jar /bigdata/flink-1.5.1/opt/connectors/flink-connector-hive_2.11-1.5.1.jar --jar /bigdata/flink-1.5.1/opt/sql-client/flink-sql-client-1.5.1.jar






--
  Best Regards,
  HongTao


Re: Re: [Blink] SQL client Kafka sink failure

Posted by Becket Qin <be...@gmail.com>.
Could you check the launch parameters of the running sql-client.sh? Concretely:

run sql-client.sh
ps | grep sql-client

and check whether the flink-connector-kafka-0.11 jar appears in the output.
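
Note that a bare ps only lists processes attached to the current terminal; when checking from a second shell, a fuller variant may be needed (a sketch; the SqlClient class name comes from the ps output elsewhere in this thread):

ps -ef | grep '[o]rg.apache.flink.table.client.SqlClient'   # bracket trick keeps grep itself out of the results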

Jiangjie (Becket) Qin


Re: Re: [Blink] sql client kafka sink failure

Posted by 张洪涛 <ho...@163.com>.

The jar does contain this class:


jar -tf flink-connector-kafka-0.11_2.11-*.jar | grep Crc32C
org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C$1.class
org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C$ChecksumFactory.class
org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C$Java9ChecksumFactory.class
org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C$PureJavaChecksumFactory.class
org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/Crc32C.class
org/apache/flink/kafka011/shaded/org/apache/kafka/common/utils/PureJavaCrc32C.class
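
Since the missing class varies from run to run even though the jar clearly contains it, one hedged follow-up check (an assumption, not a confirmed cause) is to rule out a truncated or corrupted copy on the TaskManager side by comparing checksums of the local jar and the copy the blob server delivers to the task executors (the blob path below is hypothetical; the real location depends on the TaskManager's temp directory):

md5sum flink-connector-kafka-0.11_2.11-*.jar    # local copy passed via --jar
md5sum /tmp/blobStore-*/job_*/blob_*            # hypothetical blob-server copy on the TM host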






On 2019-02-22 18:03:18, "Zhenghua Gao" <do...@gmail.com> wrote:
>Could you check whether the class is present in the corresponding jar? For example (assuming your Blink distribution is under /tmp/blink):
>
>cd /tmp/blink/opt/connectors/kafka011
>jar -tf flink-connector-kafka-0.11_2.11-*.jar | grep Crc32C
>
>On Fri, Feb 22, 2019 at 2:56 PM 张洪涛 <ho...@163.com> wrote:
>> [snip: quoted original message; see the top of the thread]







--
  Best Regards,
  HongTao


Re: [Blink] sql client kafka sink failure

Posted by Zhenghua Gao <do...@gmail.com>.
Could you check whether the class is present in the corresponding jar? For example (assuming your Blink distribution is under /tmp/blink):

cd /tmp/blink/opt/connectors/kafka011
jar -tf flink-connector-kafka-0.11_2.11-*.jar | grep Crc32C
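
As a sanity check that goes one step beyond grep, javap can try to actually resolve the class from the connector jar alone (a sketch, assuming a JDK on the PATH and a single jar matching the glob; javap reports an error if the class cannot be located on the given classpath):

javap -cp flink-connector-kafka-0.11_2.11-*.jar org.apache.flink.kafka011.shaded.org.apache.kafka.common.utils.Crc32C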

On Fri, Feb 22, 2019 at 2:56 PM 张洪涛 <ho...@163.com> wrote:

> [snip: quoted original message; see the top of the thread]

-- 
If criticism is not free, praise is meaningless!