Posted to commits@hudi.apache.org by "pengzhiwei (Jira)" <ji...@apache.org> on 2021/02/09 02:19:00 UTC

[jira] [Assigned] (HUDI-1601) Support Record Level Streaming Consumption For Hudi

     [ https://issues.apache.org/jira/browse/HUDI-1601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

pengzhiwei reassigned HUDI-1601:
--------------------------------

    Assignee: pengzhiwei

> Support Record Level Streaming Consumption For Hudi
> ---------------------------------------------------
>
>                 Key: HUDI-1601
>                 URL: https://issues.apache.org/jira/browse/HUDI-1601
>             Project: Apache Hudi
>          Issue Type: Improvement
>          Components: Spark Integration
>            Reporter: pengzhiwei
>            Assignee: pengzhiwei
>            Priority: Major
>
> Currently the {{HoodieSourceOffset}} (implemented in HUDI-1109) only keeps the {{commitTime}}. In every micro-batch we consume the incremental data between {{(lastCommitTime, currentCommitTime]}}. If consumption fails, the query recovers from the offset state and re-consumes all the data between {{(lastCommitTime, currentCommitTime]}}, i.e. a commit-level recovery.
> Introducing the {{_hoodie_commit_seq_no}} into the {{offset}} would make recovery more fine-grained, down to the record level, much like Kafka's per-record offsets. This would provide better real-time consumption. A rough sketch of what such an offset could look like follows.
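> The sketch below is illustrative only, not the actual implementation: it assumes Spark 3's DataSource V2 streaming {{Offset}}, and the class and field names ({{RecordLevelSourceOffset}}, {{lastSeqNo}}) are made up for this example. The idea is to carry the last applied sequence number alongside the commit time, so a restarted query can skip records already emitted within a partially consumed commit:
> {code:scala}
> // A minimal sketch, assuming Spark 3's DataSource V2 streaming API.
> // Class and field names are illustrative, not the actual Hudi code.
> import org.apache.spark.sql.connector.read.streaming.Offset
>
> case class RecordLevelSourceOffset(
>     commitTime: String,        // same commit-time watermark kept today
>     lastSeqNo: Option[String]  // last applied _hoodie_commit_seq_no, if any
>   ) extends Offset {
>
>   // Persisted in the streaming checkpoint by Spark on each micro-batch.
>   override def json(): String =
>     s"""{"commitTime":"$commitTime","lastSeqNo":"${lastSeqNo.getOrElse("")}"}"""
> }
>
> // On restart, records in `commitTime` whose _hoodie_commit_seq_no is
> // <= lastSeqNo have already been emitted and can be filtered out, instead
> // of replaying the whole (lastCommitTime, currentCommitTime] range.
> {code}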



--
This message was sent by Atlassian Jira
(v8.3.4#803005)