Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/07/08 14:36:33 UTC

[GitHub] [spark] gczsjdy commented on issue #24462: [SPARK-26268][CORE] Do not resubmit tasks when executors are lost

URL: https://github.com/apache/spark/pull/24462#issuecomment-509251934
 
 
   @squito I agree with you, but I still want to make sure I understand it right: the function described in https://docs.google.com/document/d/1d6egnL6WHOwWZe8MWv3m8n4PToNacdx7n_0iMSWwhCQ/edit?disco=AAAADN6g3wY resembles what this PR does. However, it only works when users adopt the new pluggable `ShuffleIO` API. Since both the lower-level `ShuffleIO` API and the upper-level `ShuffleManager` API will exist in future Spark, we still need this PR's work so that a custom `ShuffleManager` implementation can avoid resubmitting map tasks (for example, when shuffle files are persisted in a DFS).
   
   So do we actually have plans to push this PR forward?
   
   @bsidhom Could we add a field to `ShuffleManager`, as most people agreed on?
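   For illustration, the proposed field could look roughly like this. This is a minimal sketch only; the names (`shuffleOutputSurvivesExecutorLoss`, `ShuffleManagerLike`, `shouldResubmitMapTasks`) are hypothetical and not actual Spark API:

   ```java
   // Hypothetical sketch of a flag on the shuffle manager telling the
   // scheduler whether completed map output survives executor loss.
   interface ShuffleManagerLike {
       // true if shuffle files live in external storage (e.g. a DFS) and
       // remain readable after the executor that wrote them is lost
       boolean shuffleOutputSurvivesExecutorLoss();
   }

   class DfsShuffleManager implements ShuffleManagerLike {
       public boolean shuffleOutputSurvivesExecutorLoss() { return true; }
   }

   class LocalDiskShuffleManager implements ShuffleManagerLike {
       public boolean shuffleOutputSurvivesExecutorLoss() { return false; }
   }

   class Scheduler {
       // On executor loss, resubmit finished map tasks only when their
       // output lived on the lost executor's local disk.
       static boolean shouldResubmitMapTasks(ShuffleManagerLike mgr) {
           return !mgr.shuffleOutputSurvivesExecutorLoss();
       }
   }
   ```

   With such a flag, a DFS-backed `ShuffleManager` implementation could opt out of map-task resubmission without depending on the new `ShuffleIO` layer.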
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org