Posted to commits@hudi.apache.org by "Kenneth William Krugler (Jira)" <ji...@apache.org> on 2022/09/15 00:18:00 UTC

[jira] [Commented] (HUDI-3953) Flink Hudi module should support low-level read and write APIs

    [ https://issues.apache.org/jira/browse/HUDI-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17605011#comment-17605011 ] 

Kenneth William Krugler commented on HUDI-3953:
-----------------------------------------------

I was initially wondering why Hudi didn't have a regular Flink sink. But after having implemented code to write Pinot segments, I can see advantages to having control over partitioning, which isn't possible at the sink level.

> Flink Hudi module should support low-level read and write APIs
> --------------------------------------------------------------
>
>                 Key: HUDI-3953
>                 URL: https://issues.apache.org/jira/browse/HUDI-3953
>             Project: Apache Hudi
>          Issue Type: Improvement
>            Reporter: yuemeng
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.12.0
>
>
> Currently, the Flink Hudi module only supports SQL APIs. People who want to use low-level APIs, for example to operate on Flink state or for other purposes, have no friendly way to do so.
> We can provide low-level APIs for users to read and write hoodie data.
> The API design and main change will be:
>  # add sink and source APIs in Pipelines
>  # getSinkRuntimeProvider in HoodieTableSink calls Pipelines.sink(...) to return a DataStreamSink
>  # getScanRuntimeProvider in HoodieTableSource calls Pipelines.source() to return a DataStream
>  # move some common methods such as getInputFormat into a util class
>  # low-level read and write APIs simply call Pipelines.sink(...) and Pipelines.source()
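A rough sketch of how the proposed low-level API might look from a user's perspective. Note this is only an illustration of the design steps above: the Pipelines.source(...) and Pipelines.sink(...) signatures, the HoodiePipelineSketch class name, and the "path" option key are all assumptions, not the final API, and the snippet needs the Flink and Hudi dependencies on the classpath to compile.

```java
// Hypothetical sketch of the proposed low-level read/write API.
// The Pipelines.source(...)/Pipelines.sink(...) signatures shown here
// are assumptions based on the issue description, not the final API.
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.data.RowData;

public class HoodiePipelineSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Configuration conf = new Configuration();
        // Hypothetical option key pointing at the Hudi table base path.
        conf.setString("path", "file:///tmp/hoodie_table");

        // Read: obtain a DataStream<RowData> directly, bypassing the SQL layer.
        DataStream<RowData> rows = Pipelines.source(env, conf);

        // ... arbitrary low-level transformations here, e.g. stateful
        // operators, custom partitioning, side outputs ...

        // Write: attach a Hudi sink to any DataStream<RowData>.
        Pipelines.sink(rows, conf);

        env.execute("hoodie-low-level-pipeline");
    }
}
```

This mirrors steps 2 and 3 above: the SQL connector's getSinkRuntimeProvider/getScanRuntimeProvider would delegate to the same Pipelines methods, so SQL and DataStream users share one code path.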



--
This message was sent by Atlassian Jira
(v8.20.10#820010)