Posted to issues@spark.apache.org by "Marcelo Vanzin (JIRA)" <ji...@apache.org> on 2019/02/27 17:54:00 UTC

[jira] [Resolved] (SPARK-24063) Control maximum epoch backlog

     [ https://issues.apache.org/jira/browse/SPARK-24063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcelo Vanzin resolved SPARK-24063.
------------------------------------
       Resolution: Fixed
    Fix Version/s: 3.0.0

Issue resolved by pull request 23156
[https://github.com/apache/spark/pull/23156]

> Control maximum epoch backlog
> -----------------------------
>
>                 Key: SPARK-24063
>                 URL: https://issues.apache.org/jira/browse/SPARK-24063
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Structured Streaming
>    Affects Versions: 2.4.0
>            Reporter: Efim Poberezkin
>            Assignee: Gabor Somogyi
>            Priority: Major
>             Fix For: 3.0.0
>
>
> As pointed out by [~joseph.torres] in [https://github.com/apache/spark/pull/20936], both the epoch queue and the commits/offsets maps grow without bound with the number of waiting epochs. Per his proposal, we should introduce a configuration option for the maximum epoch backlog and report an error when the number of waiting epochs exceeds it.
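
A minimal sketch of the proposed bound, for illustration only: the EpochTracker class and maxEpochBacklog parameter below are hypothetical names, not the actual implementation that landed in pull request 23156. The idea is simply to cap the queue of waiting epochs and fail the query once the cap is exceeded, rather than buffering indefinitely.

    import scala.collection.mutable

    // Hypothetical sketch: bound the continuous-processing epoch backlog.
    // maxEpochBacklog stands in for a configuration option like the one
    // proposed in this ticket.
    class EpochTracker(maxEpochBacklog: Int) {
      // Epochs that have arrived but cannot be committed yet because an
      // earlier epoch is still outstanding.
      private val waitingEpochs = mutable.Queue[Long]()

      def addWaitingEpoch(epoch: Long): Unit = {
        if (waitingEpochs.size >= maxEpochBacklog) {
          // Report an error instead of letting the backlog grow unbounded.
          throw new IllegalStateException(
            s"Epoch backlog exceeded the configured maximum of " +
            s"$maxEpochBacklog waiting epochs; the query cannot keep up.")
        }
        waitingEpochs.enqueue(epoch)
      }

      // Commit the oldest waiting epoch, if any, freeing backlog capacity.
      def commitNext(): Option[Long] =
        if (waitingEpochs.nonEmpty) Some(waitingEpochs.dequeue()) else None
    }

The same cap would apply to the commits and offsets maps mentioned above, since they track the same set of waiting epochs.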



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org