Posted to user-zh@flink.apache.org by Howie Yang <ha...@126.com> on 2023/02/02 08:28:16 UTC

Flink: consuming from a message queue and writing to HDFS

Hey,


Recently I've been looking to write consumed logs into HDFS. Most of the connectors I can find for this still use the BucketingSink approach, which seems to be an old API. What is the latest officially recommended way to do this?


--

Best,
Howie

Re: Exception when using the Hive dialect in Flink SQL

Posted by yuxia <lu...@alumni.sjtu.edu.cn>.
Hi, could you paste the full exception stack trace?
You can find it under FLINK_HOME/log/.

Best regards,
Yuxia

----- Original Message -----
From: "aiden" <18...@163.com>
To: "user-zh" <us...@flink.apache.org>
Sent: Monday, February 6, 2023 4:44:02 PM
Subject: Exception when using the Hive dialect in Flink SQL

Hi,

   I ran into a problem while working with Hive in the Flink SQL Client: after setting table.sql-dialect=hive, the following error is reported.
Flink SQL> CREATE CATALOG myhive WITH (
>   'type' = 'hive',
>   'hive-conf-dir' = '/opt/bd/flink/hive-conf'
> );

[INFO] Execute statement succeed.

Flink SQL> 
> use catalog myhive;
[INFO] Execute statement succeed.

Flink SQL> show tables;
+------------------------------------+
|                         table name |
+------------------------------------+
|                    hive_table_name |
+------------------------------------+
20 rows in set

Flink SQL> SET table.sql-dialect=hive;
[INFO] Session property has been set.

Flink SQL> show tables;
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.client.gateway.SqlExecutionException: Failed to parse statement: show tables;

Flink SQL> show tables;
[ERROR] Could not execute SQL statement. Reason:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hive.ql.exec.FunctionRegistry

The Flink version is 1.15.2, Hadoop is 3.0.0, and Hive is 2.1.1. The jars under lib/ are:
flink-cep-1.15.2.jar
flink-connector-files-1.15.2.jar
flink-csv-1.15.2.jar
flink-dist-1.15.2.jar
flink-json-1.15.2.jar
flink-scala_2.12-1.15.2.jar
flink-shaded-zookeeper-3.5.9.jar
flink-sql-connector-hive-2.2.0_2.12-1.15.2.jar
flink-table-api-java-uber-1.15.2.jar
flink-table-planner_2.12-1.15.2.jar
flink-table-runtime-1.15.2.jar
log4j-1.2-api-2.17.1.jar
log4j-api-2.17.1.jar
log4j-core-2.17.1.jar
log4j-slf4j-impl-2.17.1.jar
Following the instructions on the official site, I removed flink-sql-connector-hive-2.2.0_2.12-1.15.2.jar and added antlr-runtime-3.5.2.jar, flink-connector-hive_2.12-1.15.2.jar, and hive-exec-2.1.1.jar, but the same error is still reported. How can this be resolved?
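
For anyone trying to reproduce this outside the SQL Client, here is a minimal sketch of the same catalog setup via the Table API, assuming Flink 1.15.2 with a matching flink-connector-hive and hive-exec on the classpath; the catalog name and hive-conf directory are copied from the session above, and everything else is a placeholder.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveDialectRepro {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Same settings as the CREATE CATALOG statement in the session above;
        // "default" is an assumed default database.
        HiveCatalog hive = new HiveCatalog("myhive", "default", "/opt/bd/flink/hive-conf");
        tEnv.registerCatalog("myhive", hive);
        tEnv.useCatalog("myhive");

        // Works under the default dialect...
        tEnv.executeSql("SHOW TABLES").print();

        // ...and this switch is where the NoClassDefFoundError surfaces when
        // the Hive jars on the classpath do not match the Hive version.
        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
        tEnv.executeSql("SHOW TABLES").print();
    }
}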


Re: Flink: consuming from a message queue and writing to HDFS

Posted by weijie guo <gu...@gmail.com>.
Hi, you can use FileSink, which is based on the new sink API (see the sketch after this message).

Best regards,

Weijie


Howie Yang <ha...@126.com> wrote on Thu, Feb 2, 2023 at 16:28:

> Hey,
>
>
> Recently I've been looking to write consumed logs into HDFS. Most of the
> connectors I can find for this still use the BucketingSink approach, which
> seems to be an old API. What is the latest officially recommended way to
> do this?
>
> --
>
> Best,
> Howie
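
To make Weijie's pointer concrete, below is a minimal FileSink sketch, assuming Flink 1.15 with flink-connector-files on the classpath; the socket source, HDFS path, and rolling thresholds are placeholders rather than anything from the thread.

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.configuration.MemorySize;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

import java.time.Duration;

public class LogsToHdfs {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // FileSink finalizes part files on checkpoints, so checkpointing
        // must be enabled for files to leave the in-progress state.
        env.enableCheckpointing(60_000);

        // Placeholder source; in practice this would be the message queue
        // consumer (e.g. a KafkaSource).
        DataStream<String> logs = env.socketTextStream("localhost", 9999);

        FileSink<String> sink = FileSink
                .forRowFormat(new Path("hdfs:///data/logs"),
                        new SimpleStringEncoder<String>("UTF-8"))
                .withRollingPolicy(
                        DefaultRollingPolicy.builder()
                                .withRolloverInterval(Duration.ofMinutes(15))
                                .withInactivityInterval(Duration.ofMinutes(5))
                                .withMaxPartSize(MemorySize.ofMebiBytes(128))
                                .build())
                .build();

        logs.sinkTo(sink);
        env.execute("logs-to-hdfs");
    }
}

Unlike the old BucketingSink, FileSink participates in Flink's two-phase commit, so finished files become visible only after a successful checkpoint.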