Posted to issues@spark.apache.org by "Matt Cheah (JIRA)" <ji...@apache.org> on 2014/09/30 00:23:33 UTC

[jira] [Commented] (SPARK-1860) Standalone Worker cleanup should not clean up running executors

    [ https://issues.apache.org/jira/browse/SPARK-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152421#comment-14152421 ] 

Matt Cheah commented on SPARK-1860:
-----------------------------------

Apologies for any naivety - this will be the first issue I tackle as a Spark contributor.

Mingyu and I had a short chat and we thought it would be reasonable for the Executor to simply clean up its own state when it shuts down. Is there anything preventing Executor.stop() from cleaning up the app directory it was using?
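As a rough illustration of the idea, the executor-side cleanup could amount to recursively deleting the application directory during shutdown. This is only a minimal sketch, not Spark's actual Executor code: the object name, the `deleteRecursively` helper, and the simulated `appDir` are all hypothetical.

```scala
import java.io.File
import java.nio.file.Files

object ExecutorCleanupSketch {
  // Recursively delete a directory tree (children first, then the directory).
  def deleteRecursively(f: File): Unit = {
    if (f.isDirectory) {
      Option(f.listFiles()).getOrElse(Array.empty[File]).foreach(deleteRecursively)
    }
    f.delete()
  }

  def main(args: Array[String]): Unit = {
    // Simulate an executor's application directory containing a log file.
    val appDir = Files.createTempDirectory("app-example").toFile
    new File(appDir, "stdout").createNewFile()

    // On shutdown (e.g. from Executor.stop()), the executor would remove
    // its own application directory rather than leaving it for the
    // worker's TTL-based cleanup to delete later.
    deleteRecursively(appDir)
    println(s"appDir exists after cleanup: ${appDir.exists()}")
  }
}
```

This keeps the worker's periodic cleanup from ever having to touch a directory that a live executor still owns, since a cleanly exiting executor leaves nothing behind.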

> Standalone Worker cleanup should not clean up running executors
> ---------------------------------------------------------------
>
>                 Key: SPARK-1860
>                 URL: https://issues.apache.org/jira/browse/SPARK-1860
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 1.0.0
>            Reporter: Aaron Davidson
>            Priority: Blocker
>
> The default values of the standalone worker cleanup code clean up all application data every 7 days. This includes jars that were added to any executors that happen to be running for longer than 7 days, hitting streaming jobs especially hard.
> Executors' log/data folders should not be cleaned up while those executors are still running. Until that is fixed, this behavior should not be enabled by default.
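For context, the cleanup behavior described above is governed by standalone-mode worker properties. The property names below come from the Spark standalone documentation; the defaults shown (`enabled=false`, 30-minute check interval, 7-day TTL) are the 1.0-era values and should be verified against your release.

```shell
# In conf/spark-env.sh on each standalone worker.
# spark.worker.cleanup.appDataTtl=604800 seconds is the 7-day
# window referenced in this issue.
SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
 -Dspark.worker.cleanup.interval=1800 \
 -Dspark.worker.cleanup.appDataTtl=604800"
```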



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org