Posted to user-zh@flink.apache.org by "yinghua_zh@163.com" <yi...@163.com> on 2021/02/22 06:36:49 UTC

Writing to Hive with Flink SQL

We are building a Flink SQL framework, and we have not been able to get a job that reads data from Kafka, transforms it, and writes it to Hive to work. The SQL script is as follows:
CREATE TABLE hive_table_from_kafka (
   collect_time STRING,
   content1 STRING, 
   content2 STRING
) PARTITIONED BY (
  dt STRING,hr STRING
) TBLPROPERTIES (
  'partition.time-extractor.timestamp-pattern'='$dt $hr',
  'sink.partition-commit.trigger'='partition-time',
  'sink.partition-commit.delay'='0S',
  'sink.partition-commit.policy.kind'='metastore,success-file'
);

The code then handles CREATE TABLE statements as follows:
private void callCreateTable(SqlCommandParser.SqlCommandCall cmdCall) {
  String ddl = cmdCall.operands[0];
  // Crude dialect switch: any DDL mentioning "hive_table" is treated as Hive DDL.
  if (ddl.contains("hive_table")) {
    tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
  } else {
    tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
  }
  try {
    tableEnv.executeSql(ddl);
  } catch (SqlParserException e) {
    throw new RuntimeException("SQL execute failed:\n" + ddl + "\n", e);
  }
}

When executing the SQL above, it always fails complaining that no connector is set: Caused by: org.apache.flink.table.api.ValidationException: Table options do not contain an option key 'connector' for discovering a connector



yinghua_zh@163.com

Re: Re: Writing to Hive with Flink SQL

Posted by Rui Li <li...@gmail.com>.
Yes. Hive tables must live in a HiveCatalog in order to be read and written properly.
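
A minimal sketch of what that might look like with the Table API, assuming Flink 1.11+ with the flink-connector-hive dependency on the classpath; the catalog name "myhive" and the hive-site.xml directory are placeholders, not taken from this thread:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

TableEnvironment tableEnv = TableEnvironment.create(
    EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

// Register a HiveCatalog backed by the Hive metastore and make it current,
// so subsequent DDL creates tables in it rather than in the default
// GenericInMemoryCatalog.
HiveCatalog hive = new HiveCatalog("myhive", "default", "/opt/hive/conf");
tableEnv.registerCatalog("myhive", hive);
tableEnv.useCatalog("myhive");

// Hive-dialect DDL such as the hive_table_from_kafka statement now lands in
// the HiveCatalog, where the Hive connector is discovered without a
// 'connector' option.
tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);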

On Tue, Feb 23, 2021 at 10:14 AM yinghua_zh@163.com <yi...@163.com>
wrote:

>
> Our Flink version is 1.11.3, and all of our tables currently use the GenericInMemoryCatalog. Do Hive tables have to be in a HiveCatalog?
>
>
>
> yinghua_zh@163.com
>
> From: Rui Li
> Sent: 2021-02-23 10:05
> To: user-zh
> Subject: Re: Re: Writing to Hive with Flink SQL
> Hi,
>
> Which Flink version are you using? And was this Hive table created in a HiveCatalog?
>
> On Tue, Feb 23, 2021 at 9:01 AM 邮件帮助中心 <yi...@163.com> wrote:
>
> > After adding debug logging, I found that the dialect is set to hive when
> > the DDL statement creates the Hive table. According to the stack trace,
> > the error is now reported when the INSERT INTO DML statement is executed:
> > Table options are: 'is_generic'='false'
> > 'partition.time-extractor.timestamp-pattern'='$dt $hr'
> > 'sink.partition-commit.delay'='0S'
> > 'sink.partition-commit.policy.kind'='metastore,success-file'
> > 'sink.partition-commit.trigger'='partition-time'
> > ...
> > Caused by: org.apache.flink.table.api.ValidationException: Table options
> > do not contain an option key 'connector' for discovering a connector.
> > [stack trace snipped; it is quoted in full in the original message below]
> >
> >
> > If we set the Hive dialect when executing the DML statement, the Kafka table is not defined in Hive syntax; how should that be handled?
> >
> > On 2021-02-22 17:12:55, "eriendeng" <er...@tencent.com> wrote:
> > >You probably didn't set the dialect to hive here and went down the else
> > >branch; the default dialect requires a connector to be specified. See the
> > >Kafka-to-Hive code in the docs:
> >
> https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/hive/hive_read_write.html#writing
> > >
> > >--
> > >Sent from: http://apache-flink.147419.n8.nabble.com/
> >
>
>
> --
> Best regards!
> Rui Li
>


-- 
Best regards!
Rui Li

Re: Re: Writing to Hive with Flink SQL

Posted by "yinghua_zh@163.com" <yi...@163.com>.
Our Flink version is 1.11.3, and all of our tables currently use the GenericInMemoryCatalog. Do Hive tables have to be in a HiveCatalog?



yinghua_zh@163.com
 
From: Rui Li
Sent: 2021-02-23 10:05
To: user-zh
Subject: Re: Re: Writing to Hive with Flink SQL
Hi,

Which Flink version are you using? And was this Hive table created in a HiveCatalog?
 
On Tue, Feb 23, 2021 at 9:01 AM 邮件帮助中心 <yi...@163.com> wrote:
 
> After adding debug logging, I found that the dialect is set to hive when
> the DDL statement creates the Hive table. According to the stack trace,
> the error is now reported when the INSERT INTO DML statement is executed:
> Table options are: 'is_generic'='false'
> 'partition.time-extractor.timestamp-pattern'='$dt $hr'
> 'sink.partition-commit.delay'='0S'
> 'sink.partition-commit.policy.kind'='metastore,success-file'
> 'sink.partition-commit.trigger'='partition-time'
> ...
> Caused by: org.apache.flink.table.api.ValidationException: Table options do
> not contain an option key 'connector' for discovering a connector.
> [stack trace snipped; it is quoted in full in the original message below]
>
>
> If we set the Hive dialect when executing the DML statement, the Kafka table is not defined in Hive syntax; how should that be handled?
>
> On 2021-02-22 17:12:55, "eriendeng" <er...@tencent.com> wrote:
> >You probably didn't set the dialect to hive here and went down the else
> >branch; the default dialect requires a connector to be specified. See the
> >Kafka-to-Hive code in the docs:
> https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/hive/hive_read_write.html#writing
> >
> >--
> >Sent from: http://apache-flink.147419.n8.nabble.com/
>
 
 
-- 
Best regards!
Rui Li

Re: Re: Writing to Hive with Flink SQL

Posted by Rui Li <li...@gmail.com>.
Hi,

Which Flink version are you using? And was this Hive table created in a HiveCatalog?

On Tue, Feb 23, 2021 at 9:01 AM 邮件帮助中心 <yi...@163.com> wrote:

> After adding debug logging, I found that the dialect is set to hive when
> the DDL statement creates the Hive table. According to the stack trace,
> the error is now reported when the INSERT INTO DML statement is executed:
> Table options are: 'is_generic'='false'
> 'partition.time-extractor.timestamp-pattern'='$dt $hr'
> 'sink.partition-commit.delay'='0S'
> 'sink.partition-commit.policy.kind'='metastore,success-file'
> 'sink.partition-commit.trigger'='partition-time'
> ...
> Caused by: org.apache.flink.table.api.ValidationException: Table options do
> not contain an option key 'connector' for discovering a connector.
> [stack trace snipped; it is quoted in full in the original message below]
>
>
> If we set the Hive dialect when executing the DML statement, the Kafka table is not defined in Hive syntax; how should that be handled?
>
> On 2021-02-22 17:12:55, "eriendeng" <er...@tencent.com> wrote:
> >You probably didn't set the dialect to hive here and went down the else
> >branch; the default dialect requires a connector to be specified. See the
> >Kafka-to-Hive code in the docs:
> https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/hive/hive_read_write.html#writing
> >
> >--
> >Sent from: http://apache-flink.147419.n8.nabble.com/
>


-- 
Best regards!
Rui Li

Re: Re:Re: Writing to Hive with Flink SQL

Posted by eriendeng <er...@tencent.com>.
Creating the Kafka source table while the Hive catalog is the current catalog
stores a metadata-only table in the Hive metastore. Hive itself cannot query
it, but a Flink job recognizes it and reads it like a Hive table; after that
you simply read from it and write to the Hive table under the Hive dialect,
roughly as in the sketch below.
See https://my.oschina.net/u/2828172/blog/4415970
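
A minimal end-to-end sketch of that flow, assuming Flink 1.11+ with the Hive
and Kafka connectors on the classpath; the catalog name, hive-site.xml
directory, topic, brokers, and the event-time column ts are placeholder
assumptions, not taken from this thread:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class KafkaToHiveSketch {
  public static void main(String[] args) {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.enableCheckpointing(60_000);  // partition files are committed on checkpoints
    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

    // Both tables live in the HiveCatalog ("/opt/hive/conf" is a placeholder).
    HiveCatalog hive = new HiveCatalog("myhive", "default", "/opt/hive/conf");
    tableEnv.registerCatalog("myhive", hive);
    tableEnv.useCatalog("myhive");

    // Kafka source: default dialect; stored in the metastore as metadata only.
    tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
    tableEnv.executeSql(
        "CREATE TABLE kafka_source ("
            + "  collect_time STRING,"
            + "  content1 STRING,"
            + "  content2 STRING,"
            + "  ts TIMESTAMP(3),"
            + "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND"
            + ") WITH ("
            + "  'connector' = 'kafka',"
            + "  'topic' = 'my_topic',"
            + "  'properties.bootstrap.servers' = 'broker:9092',"
            + "  'format' = 'json'"
            + ")");

    // Hive sink: switch to the Hive dialect before the Hive DDL.
    tableEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
    tableEnv.executeSql(
        "CREATE TABLE hive_table_from_kafka ("
            + "  collect_time STRING,"
            + "  content1 STRING,"
            + "  content2 STRING"
            + ") PARTITIONED BY (dt STRING, hr STRING) STORED AS parquet TBLPROPERTIES ("
            // The pattern must expand to a full timestamp, hence the ':00:00'.
            + "  'partition.time-extractor.timestamp-pattern'='$dt $hr:00:00',"
            + "  'sink.partition-commit.trigger'='partition-time',"
            + "  'sink.partition-commit.delay'='1 h',"
            + "  'sink.partition-commit.policy.kind'='metastore,success-file'"
            + ")");

    // Back to the default dialect for the streaming INSERT, as in the docs
    // example; partition columns come last in the SELECT.
    tableEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
    tableEnv.executeSql(
        "INSERT INTO hive_table_from_kafka "
            + "SELECT collect_time, content1, content2, "
            + "       DATE_FORMAT(ts, 'yyyy-MM-dd'), DATE_FORMAT(ts, 'HH') "
            + "FROM kafka_source");
  }
}

Switching back to the default dialect for the INSERT follows the pattern in the
documentation example, so the query can reference the Kafka table with regular
Flink SQL syntax even though the sink is a Hive table.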



--
Sent from: http://apache-flink.147419.n8.nabble.com/

Re:Re: Writing to Hive with Flink SQL

Posted by 邮件帮助中心 <yi...@163.com>.
After adding debug logging, I found that the dialect is set to hive when the DDL statement creates the Hive table. According to the stack trace, the error is now reported when the INSERT INTO DML statement is executed: creating the Hive table sink complains that no connector is configured.
Table options are: 'is_generic'='false'
'partition.time-extractor.timestamp-pattern'='$dt $hr'
'sink.partition-commit.delay'='0S'
'sink.partition-commit.policy.kind'='metastore,success-file'
'sink.partition-commit.trigger'='partition-time'
    at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:164)
    at org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:344)
    at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
    at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
    at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1270)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:701)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:789)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:691)
    at com.cgws.ccp.flink.sql.submit.SqlSubmit.callInsertInto(SqlSubmit.java:242)
    at com.cgws.ccp.flink.sql.submit.SqlSubmit.callCommand(SqlSubmit.java:201)
    at com.cgws.ccp.flink.sql.submit.SqlSubmit.run(SqlSubmit.java:126)
    at com.cgws.ccp.flink.sql.submit.SqlSubmit.main(SqlSubmit.java:84)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
    ... 11 more
Caused by: org.apache.flink.table.api.ValidationException: Table options do not contain an option key 'connector' for discovering a connector.
    at org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:321)
    at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:157)
    ... 37 more


If we set the Hive dialect when executing the DML statement, the Kafka table is not defined in Hive syntax; how should that be handled?

On 2021-02-22 17:12:55, "eriendeng" <er...@tencent.com> wrote:
>You probably didn't set the dialect to hive here and went down the else
>branch; the default dialect requires a connector to be specified. See the
>Kafka-to-Hive code in the docs:
>https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/hive/hive_read_write.html#writing
>
>
>
>--
>Sent from: http://apache-flink.147419.n8.nabble.com/

Re: Writing to Hive with Flink SQL

Posted by eriendeng <er...@tencent.com>.
It looks like you didn't set the dialect to hive here and went down the else
branch. The default dialect requires a connector to be specified; see the
Kafka-to-Hive code in the docs, and the sketch below:
https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/hive/hive_read_write.html#writing
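
For contrast, a table created in the default dialect must name its connector
in its WITH clause. A minimal sketch of the Kafka side, with placeholder
topic, brokers, and group id:

CREATE TABLE kafka_table_from_topic (
  collect_time STRING,
  content1 STRING,
  content2 STRING
) WITH (
  'connector' = 'kafka',                           -- required in the default dialect
  'topic' = 'my_topic',                            -- placeholder
  'properties.bootstrap.servers' = 'broker:9092',  -- placeholder
  'properties.group.id' = 'my_group',              -- placeholder
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);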



--
Sent from: http://apache-flink.147419.n8.nabble.com/