Posted to dev@spark.apache.org by Basil Hariri <Ba...@microsoft.com.INVALID> on 2018/11/02 19:08:56 UTC

Continuous task retry support

Hi all,

I found that task retries are currently not supported<https://github.com/apache/spark/blob/5264164a67df498b73facae207eda12ee133be7d/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousTaskRetryException.scala> in continuous processing mode. Is there another way to recover from continuous task failures currently? If not, are there plans to support this in a future release?
Thanks,
Basil

Re: Continuous task retry support

Posted by Yuanjian Li <xy...@gmail.com>.
>
> *I found that task retries are currently not supported
> <https://github.com/apache/spark/blob/5264164a67df498b73facae207eda12ee133be7d/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousTaskRetryException.scala> in
> continuous processing mode. Is there another way to recover from continuous
> task failures currently?*

Yes, task-level retry is currently not supported in CP mode; the recovery
strategy is instead stage restart.
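
For concreteness, here is a minimal sketch of a continuous-mode query (the
rate source, console sink, and checkpoint path are illustrative choices, not
part of the original discussion). On a task failure, Spark does not retry
the task; the continuous stage is restarted from the last checkpointed
offsets:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object ContinuousModeDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("continuous-mode-demo")
      .getOrCreate()

    // The rate source is one of the few sources that supports CP mode.
    val input = spark.readStream
      .format("rate")
      .option("rowsPerSecond", "10")
      .load()

    val query = input.writeStream
      .format("console")
      // Offsets are checkpointed per epoch; a failed query resumes from
      // here via stage restart rather than per-task retry.
      .option("checkpointLocation", "/tmp/cp-demo-checkpoint")
      .trigger(Trigger.Continuous("1 second")) // continuous processing mode
      .start()

    query.awaitTermination()
  }
}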

> *If not, are there plans to support this in a future release?*

Actually, task-level retry in CP mode is easy to implement for map-only
operators, but it needs more discussion before we support shuffled stateful
operators in CP. There is more discussion in
https://github.com/apache/spark/pull/20675.
