Posted to user-zh@flink.apache.org by Zhou Zach <wa...@163.com> on 2020/06/15 07:36:59 UTC

flink sql sink hbase failed

Flink version: 1.10.0
HBase version: 2.1.0




SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/Zach/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.4.1/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/Zach/.m2/repository/org/slf4j/slf4j-log4j12/1.7.7/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Exception in thread "main" org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSinkFactory' in
the classpath.


Reason: Required context properties mismatch.


The matching candidates:
org.apache.flink.addons.hbase.HBaseTableFactory
Mismatched properties:
'connector.version' expects '1.4.3', but is '2.1.0'


The following properties are requested:
connector.table-name=user_hbase
connector.type=hbase
connector.version=2.1.0
connector.write.buffer-flush.interval=2s
connector.write.buffer-flush.max-rows=1000
connector.write.buffer-flush.max-size=10mb
connector.zookeeper.quorum=cdh1:2181,cdh2:2181,cdh3:2181
connector.zookeeper.znode.parent=/hbase
schema.0.data-type=VARCHAR(2147483647)
schema.0.name=rowkey
schema.1.data-type=ROW<`sex` VARCHAR(2147483647), `age` INT, `created_time` TIMESTAMP(3)>
schema.1.name=cf


The following factories have been considered:
org.apache.flink.api.java.io.jdbc.JDBCTableSourceSinkFactory
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
org.apache.flink.table.sinks.CsvBatchTableSinkFactory
org.apache.flink.table.sinks.CsvAppendTableSinkFactory
org.apache.flink.addons.hbase.HBaseTableFactory
at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:96)
at org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:310)
at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:190)
at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:682)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:495)
at org.rabbit.sql.FromKafkaSinkHbase$.main(FromKafkaSinkHbase.scala:61)
at org.rabbit.sql.FromKafkaSinkHbase.main(FromKafkaSinkHbase.scala)

Query:


streamTableEnv.sqlUpdate(
"""
    |
    |CREATE TABLE user_hbase(
    |    rowkey string,
    |    cf ROW(sex VARCHAR, age INT, created_time TIMESTAMP(3))
    |) WITH (
    |    'connector.type' = 'hbase',
    |    'connector.version' = '2.1.0',
    |    'connector.table-name' = 'user_hbase',
    |    'connector.zookeeper.quorum' = 'cdh1:2181,cdh2:2181,cdh3:2181',
    |    'connector.zookeeper.znode.parent' = '/hbase',
    |    'connector.write.buffer-flush.max-size' = '10mb',
    |    'connector.write.buffer-flush.max-rows' = '1000',
    |    'connector.write.buffer-flush.interval' = '2s'
    |)
    |""".stripMargin)

Re: flink sql sink hbase failed

Posted by "Sun.Zhu" <17...@163.com>.
It seems you don't need to modify the source code.
Setting 'connector.version' = '1.4.3' also lets you write to an HBase 2.x cluster.
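
In other words, keep the DDL from the first post and change only the version property. A minimal sketch, assuming the rest of the environment stays the same:

streamTableEnv.sqlUpdate(
  """
    |CREATE TABLE user_hbase(
    |    rowkey string,
    |    cf ROW(sex VARCHAR, age INT, created_time TIMESTAMP(3))
    |) WITH (
    |    'connector.type' = 'hbase',
    |    -- only '1.4.3' matches the factory in Flink 1.10; per this thread
    |    -- the 1.4.3 connector can reportedly still write to HBase 2.x
    |    'connector.version' = '1.4.3',
    |    'connector.table-name' = 'user_hbase',
    |    'connector.zookeeper.quorum' = 'cdh1:2181,cdh2:2181,cdh3:2181',
    |    'connector.zookeeper.znode.parent' = '/hbase',
    |    'connector.write.buffer-flush.max-size' = '10mb',
    |    'connector.write.buffer-flush.max-rows' = '1000',
    |    'connector.write.buffer-flush.interval' = '2s'
    |)
    |""".stripMargin)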
Sun.Zhu
17626017841@163.com


On 06/15/2020 19:22, Zhou Zach <wa...@163.com> wrote:
I modified the source code, and it works now.

At 2020-06-15 16:17:46, "Leonard Xu" <xb...@gmail.com> wrote:
Hi


On Jun 15, 2020, at 15:36, Zhou Zach <wa...@163.com> wrote:

'connector.version' expects '1.4.3', but is '2.1.0'

The HBase connector only supports version 1.4.3; other versions are not supported. That said, community users have reported writing to higher HBase versions with the 1.4.3 connector, so you could give it a try.

Best,
Leonard Xu

Re:Re: flink sql sink hbase failed

Posted by Zhou Zach <wa...@163.com>.
I modified the source code, and it works now.

At 2020-06-15 16:17:46, "Leonard Xu" <xb...@gmail.com> wrote:
>Hi
>
>
>> On Jun 15, 2020, at 15:36, Zhou Zach <wa...@163.com> wrote:
>> 
>> 'connector.version' expects '1.4.3', but is '2.1.0'
>
>The HBase connector only supports version 1.4.3; other versions are not supported. That said, community users have reported writing to higher HBase versions with the 1.4.3 connector, so you could give it a try.
>
>Best,
>Leonard Xu

Re: flink sql sink hbase failed

Posted by Leonard Xu <xb...@gmail.com>.
Hi


> On Jun 15, 2020, at 15:36, Zhou Zach <wa...@163.com> wrote:
> 
> 'connector.version' expects '1.4.3', but is '2.1.0'

The HBase connector only supports version 1.4.3; other versions are not supported. That said, community users have reported writing to higher HBase versions with the 1.4.3 connector, so you could give it a try.

Best,
Leonard Xu