Posted to commits@hudi.apache.org by "sivabalan narayanan (Jira)" <ji...@apache.org> on 2021/01/30 15:19:00 UTC

[jira] [Commented] (HUDI-1007) When earliestOffsets is greater than checkpoint, Hudi will not be able to successfully consume data

    [ https://issues.apache.org/jira/browse/HUDI-1007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275658#comment-17275658 ] 

sivabalan narayanan commented on HUDI-1007:
-------------------------------------------

[~liujinhui]: So, you mean to say that with Spark 2.4 and with the config "spark.streaming.kafka.allowNonConsecutiveOffsets" set, you are still hitting the exception? In other words, if the offsets sent to Kafka to consume have already been compacted, fetching from Kafka fails? Can you try using a vanilla Kafka consumer with some older offset (one that was compacted) and see what happens? I don't think we need to make changes in Hudi/DeltaStreamer for this. Kafka should be the one moving the offset to some valid offset and serving messages from there.
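
A minimal sketch of what that vanilla-consumer check could look like (the broker address, topic name, partition, and class name are placeholders, and this is not the Hudi/DeltaStreamer code path; "auto.offset.reset" is set to "none" only so that an out-of-range offset surfaces as an exception instead of being silently reset):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetOutOfRangeException;
import org.apache.kafka.common.TopicPartition;

public class StaleOffsetCheck {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "stale-offset-check");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
    // "none" so an out-of-range position throws instead of being reset silently.
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      TopicPartition tp = new TopicPartition("test_topic", 0); // placeholder topic/partition
      consumer.assign(Collections.singletonList(tp));
      consumer.seek(tp, 0L); // an old offset that retention/compaction has already removed
      ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
      System.out.println("fetched " + records.count() + " records");
    } catch (OffsetOutOfRangeException e) {
      // With auto.offset.reset=none, fetching an expired offset surfaces here.
      System.out.println("offset out of range: " + e.getMessage());
    }
  }
}

With the default "latest" (or with "earliest"), the consumer would instead be reset to a valid offset and keep going, which is the behavior I would expect Kafka to provide in this case.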

> When earliestOffsets is greater than checkpoint, Hudi will not be able to successfully consume data
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HUDI-1007
>                 URL: https://issues.apache.org/jira/browse/HUDI-1007
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: DeltaStreamer
>            Reporter: liujinhui
>            Assignee: liujinhui
>            Priority: Major
>              Labels: user-support-issues
>             Fix For: 0.8.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Using DeltaStreamer to consume from Kafka:
>  When earliestOffsets is greater than checkpoint, Hudi will not be able to successfully consume data
> org.apache.hudi.utilities.sources.helpers.KafkaOffsetGen#checkupValidOffsets
> boolean checkpointOffsetReseter = checkpointOffsets.entrySet().stream()
>  .anyMatch(offset -> offset.getValue() < earliestOffsets.get(offset.getKey()));
> return checkpointOffsetReseter ? earliestOffsets : checkpointOffsets;
> Kafka data is continuously generated, which also means that older data keeps expiring.
>  When earliestOffsets is greater than the checkpoint, earliestOffsets is taken. But by the time the data is actually fetched, some of it has expired again, so consumption fails, and the process repeats in an endless cycle. I understand this design may be intended to avoid data loss, but it leads to this situation. I want to fix this problem and would like to hear your opinion.
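
To make the cycle described in the quoted report concrete, here is a small, self-contained illustration of the check quoted from KafkaOffsetGen#checkupValidOffsets, with made-up offset values and plain String keys standing in for topic partitions (the real code operates on Kafka metadata types, so treat this only as a sketch of the comparison):

import java.util.HashMap;
import java.util.Map;

public class OffsetResetIllustration {
  public static void main(String[] args) {
    // Offsets checkpointed by the last DeltaStreamer commit (hypothetical values).
    Map<String, Long> checkpointOffsets = new HashMap<>();
    checkpointOffsets.put("topic-0", 100L);
    checkpointOffsets.put("topic-1", 250L);

    // Earliest offsets still available on the broker; everything below them
    // has already been removed (hypothetical values).
    Map<String, Long> earliestOffsets = new HashMap<>();
    earliestOffsets.put("topic-0", 500L); // checkpointed 100 is gone
    earliestOffsets.put("topic-1", 200L); // checkpointed 250 is still valid

    // Same shape as the quoted check: if any checkpointed offset is older than
    // the earliest retained offset, fall back to earliestOffsets wholesale.
    boolean checkpointOffsetReseter = checkpointOffsets.entrySet().stream()
        .anyMatch(offset -> offset.getValue() < earliestOffsets.get(offset.getKey()));
    Map<String, Long> toFetch = checkpointOffsetReseter ? earliestOffsets : checkpointOffsets;

    // The reported problem: between this resolution and the actual fetch,
    // earliestOffsets can move forward again, so toFetch may already be expired.
    System.out.println("reset=" + checkpointOffsetReseter + ", fetch from " + toFetch);
  }
}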



--
This message was sent by Atlassian Jira
(v8.3.4#803005)