Posted to issues@spark.apache.org by "Julian Eberius (Jira)" <ji...@apache.org> on 2019/11/12 15:26:00 UTC

[jira] [Commented] (SPARK-20050) Kafka 0.10 DirectStream doesn't commit last processed batch's offset when graceful shutdown

    [ https://issues.apache.org/jira/browse/SPARK-20050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16972524#comment-16972524 ] 

Julian Eberius commented on SPARK-20050:
----------------------------------------

Sorry to comment on an old, closed issue, but: what is the solution to this problem? I have run into the same issue: the offsets of the last batch of a Spark/Kafka streaming application are never committed to Kafka, because commitAll() is only called from compute(). In effect, offsets committed by the user via the CanCommitOffsets interface are only ever committed during the following batch; if there is no following batch, nothing is committed, and the next instance of the streaming job reprocesses those records. The GitHub PR therefore makes a lot of sense to me.

Is there anything that I'm missing? What is the recommended way to commit offsets to Kafka when a Spark Streaming application shuts down?
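
In case it helps others who land here: one possible workaround (a sketch only, not an official recommendation) is to remember the offset ranges of the most recent batch and, after the context has been stopped gracefully, commit them synchronously with a plain KafkaConsumer. The snippet below assumes the kafkaStream/ssc from the issue description; the broker list, group.id and the lastOffsetRanges bookkeeping are placeholders that must match the application's own configuration.

{code}
import java.util.Properties
import scala.collection.JavaConverters._

import org.apache.kafka.clients.consumer.{KafkaConsumer, OffsetAndMetadata}
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges, OffsetRange}

// Remember the ranges of the most recent batch: commitAsync only enqueues
// them, and the queue is flushed when the *next* batch is generated.
@volatile var lastOffsetRanges: Array[OffsetRange] = Array.empty

kafkaStream.foreachRDD { rdd =>
  lastOffsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  kafkaStream.asInstanceOf[CanCommitOffsets].commitAsync(lastOffsetRanges)
}

ssc.start()
ssc.awaitTermination()  // returns once the graceful shutdown has completed

// After shutdown, flush the last batch's offsets ourselves.
val props = new Properties()
props.put("bootstrap.servers", "broker:9092")    // placeholder: same brokers as the stream
props.put("group.id", "my-streaming-group")      // placeholder: same group.id as the stream
props.put("key.deserializer", classOf[StringDeserializer].getName)
props.put("value.deserializer", classOf[StringDeserializer].getName)

val committer = new KafkaConsumer[String, String](props)
try {
  val toCommit = lastOffsetRanges.map { r =>
    new TopicPartition(r.topic, r.partition) -> new OffsetAndMetadata(r.untilOffset)
  }.toMap.asJava
  if (!toCommit.isEmpty) committer.commitSync(toCommit)
} finally {
  committer.close()
}
{code}

Whether committing from a second consumer in the same group is acceptable depends on the deployment; storing offsets in an external store and seeking to them on startup is the other common option.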

> Kafka 0.10 DirectStream doesn't commit last processed batch's offset when graceful shutdown
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-20050
>                 URL: https://issues.apache.org/jira/browse/SPARK-20050
>             Project: Spark
>          Issue Type: Bug
>          Components: DStreams
>    Affects Versions: 2.2.0
>            Reporter: Sasaki Toru
>            Priority: Major
>
> I use the Kafka 0.10 DirectStream with the property 'enable.auto.commit=false' and call 'DirectKafkaInputDStream#commitAsync' at the end of each batch, as below:
> {code}
> val kafkaStream = KafkaUtils.createDirectStream[String, String](...)
> kafkaStream.map { input =>
>   "key: " + input.key + " value: " + input.value + " offset: " + input.offset
> }.foreachRDD { rdd =>
>   rdd.foreach { line =>
>     println(line)
>   }
> }
> kafkaStream.foreachRDD { rdd =>
>   // commit this batch's offsets back to Kafka (asynchronously)
>   val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
>   kafkaStream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
> }
> {code}
> Some records that were processed in the last batch before the graceful shutdown are reprocessed in the first batch after Spark Streaming restarts, as shown below.
> * output of the first run of this application
> {code}
> key: null value: 1 offset: 101452472
> key: null value: 2 offset: 101452473
> key: null value: 3 offset: 101452474
> key: null value: 4 offset: 101452475
> key: null value: 5 offset: 101452476
> key: null value: 6 offset: 101452477
> key: null value: 7 offset: 101452478
> key: null value: 8 offset: 101452479
> key: null value: 9 offset: 101452480  // the last record before Spark Streaming was shut down gracefully
> {code}
> * output of re-running this application
> {code}
> key: null value: 7 offset: 101452478   // duplication
> key: null value: 8 offset: 101452479   // duplication
> key: null value: 9 offset: 101452480   // duplication
> key: null value: 10 offset: 101452481
> {code}
> This is probably because offsets passed to commitAsync are only committed at the beginning of the next batch.
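
For what it's worth, that matches my reading of the Kafka 0.10 DirectKafkaInputDStream: commitAsync only enqueues the ranges, and the queue is drained and actually committed when the next batch is computed. Very roughly (a paraphrase, not the actual Spark source; the commitQueue name is illustrative):

{code}
// commitAsync never talks to Kafka directly -- it only queues the ranges:
def commitAsync(offsetRanges: Array[OffsetRange]): Unit = {
  commitQueue.addAll(java.util.Arrays.asList(offsetRanges: _*))
}

// compute() is invoked when the *next* batch is generated, and only then
// commits whatever the previous batch enqueued. If no further batch runs,
// the queue is never flushed:
override def compute(validTime: Time): Option[KafkaRDD[K, V]] = {
  // ... work out this batch's offset ranges and build its KafkaRDD ...
  commitAll()  // drains commitQueue and issues the async commit to Kafka
  // ...
}
{code}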



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org