Posted to dev@bahir.apache.org by "Gyula Fora (Jira)" <ji...@apache.org> on 2020/04/10 07:50:00 UTC
[jira] [Commented] (BAHIR-228) Flink SQL supports kudu sink
[ https://issues.apache.org/jira/browse/BAHIR-228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080315#comment-17080315 ]
Gyula Fora commented on BAHIR-228:
----------------------------------
Hi!
cc [~mbalassi]
Thanks for opening this Jira ticket. We have been working on complete Table/SQL API support for the Kudu connector, including some refactoring and other improvements. We have already started a discussion on the mailing list, and a PR should follow in the next couple of days :)
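For illustration only, here is a rough sketch of what DDL-based registration might look like once Table/SQL support lands. The connector identifier and property keys below ('connector.type', 'kudu.masters', 'kudu.table') are assumptions for the sketch, not the final API:

{code:java}
// Hypothetical sketch: registering a Kudu table via SQL DDL.
// The connector name and all property keys are assumed, not final.
TableEnvironment tEnv = TableEnvironment.create(
    EnvironmentSettings.newInstance().build());

tEnv.sqlUpdate(
    "CREATE TABLE kudu_sink (" +
    "  id BIGINT," +
    "  name STRING" +
    ") WITH (" +
    "  'connector.type' = 'kudu',"  +      // assumed connector identifier
    "  'kudu.masters'   = 'host1:7051'," + // assumed property key
    "  'kudu.table'     = 'my_table'" +    // assumed property key
    ")");

// Writes would then be plain SQL instead of a programmatic sink:
tEnv.sqlUpdate("INSERT INTO kudu_sink SELECT id, name FROM source");
{code}

This would let the sink definition live in a catalog rather than in application code.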
> Flink SQL supports kudu sink
> ----------------------------
>
> Key: BAHIR-228
> URL: https://issues.apache.org/jira/browse/BAHIR-228
> Project: Bahir
> Issue Type: New Feature
> Components: Flink Streaming Connectors
> Reporter: dalongliu
> Priority: Major
>
> Currently, with Flink 1.10.0, we can use the catalog to store our stream table sinks, but there is no Kudu table sink. There should be one, so that we can register it with the catalog and use Kudu as a table in the SQL environment.
> We could use a Kudu table sink like this:
> {code:java}
> KuduOptions options = KuduOptions.builder()
>     .setKuduMaster(kuduMaster)
>     .setTableName(kuduTable)
>     .build();
>
> KuduWriterOptions writerOptions = KuduWriterOptions.builder()
>     .setWriteMode(KuduWriterMode.UPSERT)
>     .setFlushMode(FlushMode.AUTO_FLUSH_BACKGROUND)
>     .build();
>
> KuduTableSink tableSink = KuduTableSink.builder()
>     .setOptions(options)
>     .setWriterOptions(writerOptions)
>     .setTableSchema(schema)
>     .build();
>
> tEnv.registerTableSink("kudu", tableSink);
> tEnv.sqlUpdate("insert into kudu select * from source");
> {code}
> I have used this Kudu table sink to sync data in my company's production environment; write throughput is about 50,000 records/s in upsert mode.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)