Posted to issues@spark.apache.org by "Kanishka Chauhan (Jira)" <ji...@apache.org> on 2021/10/12 10:42:00 UTC

[jira] [Comment Edited] (SPARK-24156) Enable no-data micro batches for more eager streaming state clean up

    [ https://issues.apache.org/jira/browse/SPARK-24156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17427584#comment-17427584 ] 

Kanishka Chauhan edited comment on SPARK-24156 at 10/12/21, 10:41 AM:
----------------------------------------------------------------------

Hi [~tdas],

We observed on Spark 2.4.0 and Spark 3.0.3 that the last data (or window) is evicted/flushed to the sink only once it falls below the watermark timestamp, as clearly stated in the Spark documentation, especially for the "append" output mode.
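
For reference, here is a minimal sketch of the kind of query where we see this behaviour (the Kafka topic "events", its schema, and the broker address are illustrative, not our actual job):

{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder.appName("WatermarkAppendDemo").getOrCreate()
import spark.implicits._

// Illustrative source: a Kafka topic "events" whose value is a word,
// with Kafka's own "timestamp" column used as event time.
val events = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")
  .load()
  .selectExpr("CAST(value AS STRING) AS word", "timestamp")

val counts = events
  .withWatermark("timestamp", "10 minutes")            // tolerate 10 minutes of lateness
  .groupBy(window($"timestamp", "5 minutes"), $"word")
  .count()

// In append mode a window is written out only after the watermark passes
// the window's end; if no newer data arrives, the watermark never moves
// and the last window is never flushed.
counts.writeStream
  .outputMode("append")
  .format("console")
  .start()
  .awaitTermination()
{code}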

We are facing a similar issue to the one mentioned by [~taransaini43]: the last group of data is not getting flushed to the sink.

Is there a way we can force Spark to flush the last group of data after some pre-configured amount of time, in case no new data arrives in Spark for a long time?
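
The only workaround we have come up with is to advance the watermark ourselves by unioning the real stream with a low-rate heartbeat stream, roughly as sketched below on top of the query above (this is our own idea, not a built-in Spark option; the "__heartbeat__" sentinel is illustrative):

{code:scala}
// Heartbeat rows carry the current processing time as their event time,
// so the watermark keeps advancing even while the real source is idle.
val heartbeats = spark.readStream
  .format("rate")
  .option("rowsPerSecond", "1")
  .load()
  .select(lit("__heartbeat__").as("word"), $"timestamp")

val withHeartbeats = events
  .select($"word", $"timestamp")
  .union(heartbeats)
  .withWatermark("timestamp", "10 minutes")
  // Drop the heartbeats after the watermark column is defined, so they
  // advance the watermark but never enter the aggregation itself.
  .filter($"word" =!= "__heartbeat__")

val flushedCounts = withHeartbeats
  .groupBy(window($"timestamp", "5 minutes"), $"word")
  .count()
{code}

The downside is that the heartbeat timestamps are processing time, so if the real events lag behind wall-clock time, the watermark advances too aggressively and genuinely late records get dropped.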

I understand that streaming inherently means an unbounded, continuous stream of data with no end to it. But Spark users still expect that the complete data they have pushed to the source will eventually show up in the sink.

 


> Enable no-data micro batches for more eager streaming state clean up 
> ---------------------------------------------------------------------
>
>                 Key: SPARK-24156
>                 URL: https://issues.apache.org/jira/browse/SPARK-24156
>             Project: Spark
>          Issue Type: Improvement
>          Components: Structured Streaming
>    Affects Versions: 2.3.0
>            Reporter: Tathagata Das
>            Assignee: Tathagata Das
>            Priority: Major
>             Fix For: 2.4.0
>
>
> Currently, MicroBatchExecution in Structured Streaming runs batches only when there is new data to process. This is sensible in most cases, as we don't want to use resources unnecessarily when there is nothing new to process. However, for some stateful streaming queries, this delays state cleanup as well as cleanup-based output. For example, consider a streaming aggregation query with watermark-based state cleanup. The watermark is updated after every batch with new data completes, and the updated value is used in the next batch to clean up state and to output finalized aggregates in append mode. However, if there is no new data, then the next batch does not run, and cleanup/output is delayed unnecessarily. This is true for all stateful streaming operators: aggregation, deduplication, joins, and mapGroupsWithState.
> This issue tracks the work to enable no-data batches in MicroBatchExecution. The major challenge is that all the tests of the relevant stateful operations add dummy data to force another batch in order to test state cleanup, so a lot of tests will have to be changed. My plan is therefore to enable no-data batches for the different stateful operators one at a time.


