Posted to issues@spark.apache.org by "Attila Zsolt Piros (JIRA)" <ji...@apache.org> on 2019/05/23 11:59:00 UTC
[jira] [Created] (SPARK-27819) Retry cleanup of disk persisted RDD via external shuffle service when it failed via executor
Attila Zsolt Piros created SPARK-27819:
------------------------------------------
Summary: Retry cleanup of disk persisted RDD via external shuffle service when it failed via executor
Key: SPARK-27819
URL: https://issues.apache.org/jira/browse/SPARK-27819
Project: Spark
Issue Type: Improvement
Components: Spark Core
Affects Versions: 3.0.0
Reporter: Attila Zsolt Piros
This issue was created to preserve an idea that came up during SPARK-27677 (in org.apache.spark.storage.BlockManagerMasterEndpoint#removeRdd):
{noformat}
In certain situations (e.g. executor death) it would make sense to retry this through the shuffle service. But I'm not sure how to safely detect those situations (or whether it makes sense to always retry through the shuffle service).
{noformat}
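The idea above can be sketched as a fallback: attempt block removal through the executor first, and only if that path fails (for example, because the executor died) retry through the external shuffle service, which can still reach blocks persisted on the executor's local disk. The interfaces and names below are purely illustrative placeholders, not Spark's actual API.

```java
// Hypothetical interface standing in for both the executor-side and the
// external-shuffle-service-side block removal paths; this is an
// illustrative sketch, not Spark's real BlockManager API.
interface BlockRemover {
    // Returns true if the RDD's disk-persisted blocks were removed.
    boolean removeRddBlocks(int rddId) throws Exception;
}

public class RddCleanupWithFallback {
    // Try the normal executor path first; if it throws (e.g. the executor
    // is dead), retry the removal through the external shuffle service.
    static boolean cleanup(int rddId, BlockRemover executor, BlockRemover shuffleService) {
        try {
            return executor.removeRddBlocks(rddId);
        } catch (Exception executorUnreachable) {
            try {
                return shuffleService.removeRddBlocks(rddId);
            } catch (Exception e) {
                // Both paths failed; the blocks may leak until the app exits.
                return false;
            }
        }
    }

    public static void main(String[] args) {
        // Simulate a dead executor whose blocks the shuffle service can still clean.
        BlockRemover deadExecutor = rddId -> { throw new Exception("executor lost"); };
        BlockRemover shuffleService = rddId -> true;
        System.out.println(cleanup(42, deadExecutor, shuffleService)); // prints "true"
    }
}
```

The open question from the quote remains in this sketch: it retries through the shuffle service on any executor-side failure, whereas a real implementation would need to detect safely which failures actually warrant the fallback.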
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)