Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:34:06 UTC

[jira] [Resolved] (SPARK-16684) Standalone mode local dirs not properly cleaned if job is killed

     [ https://issues.apache.org/jira/browse/SPARK-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-16684.
----------------------------------
    Resolution: Incomplete

> Standalone mode local dirs not properly cleaned if job is killed
> ----------------------------------------------------------------
>
>                 Key: SPARK-16684
>                 URL: https://issues.apache.org/jira/browse/SPARK-16684
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 1.6.2
>         Environment: MacOS, but probably the same for Linux
>            Reporter: Dean Wampler
>            Priority: Minor
>              Labels: bulk-closed
>
> The shuffle service was not in use.
> If you Ctrl-C out of a job, e.g. the spark-shell, cleanup of the local directories does occur correctly; but if you send a kill -9 to the process (used here to simulate a crash), no cleanup is done and the temporary files are left behind.
> Possible solution: have the master and worker daemons delete temporary files older than a user-configurable age.
> Workaround: set up a cron job that performs this cleanup, as sketched below.
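> A minimal sketch of such a cron entry (the /tmp location, the spark-* directory naming, and the 7-day retention are assumptions; adjust them to match your spark.local.dir / SPARK_LOCAL_DIRS setting):
>
>     # crontab entry: every hour, remove Spark scratch dirs untouched for over 7 days
>     0 * * * * find /tmp -maxdepth 1 -type d -name 'spark-*' -mtime +7 -exec rm -rf {} +
>
> For reference, standalone workers do have spark.worker.cleanup.enabled and spark.worker.cleanup.appDataTtl, but those appear to cover only the application work directories of stopped applications, not the local scratch directories discussed here.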



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org