Posted to issues@spark.apache.org by "Cory Nguyen (JIRA)" <ji...@apache.org> on 2015/05/29 09:46:17 UTC

[jira] [Created] (SPARK-7941) Cache Cleanup Failure when job is killed by Spark

Cory Nguyen created SPARK-7941:
----------------------------------

             Summary: Cache Cleanup Failure when job is killed by Spark 
                 Key: SPARK-7941
                 URL: https://issues.apache.org/jira/browse/SPARK-7941
             Project: Spark
          Issue Type: Bug
          Components: PySpark, YARN
    Affects Versions: 1.3.1
            Reporter: Cory Nguyen


Problem/Bug:
If a job is running and Spark kills it intentionally, the cache files remain on the local/worker nodes and are not cleaned up properly. Over time the stale cache builds up and causes a "No Space Left on Device" error.

The cache is cleaned up properly when the job succeeds. I have not verified whether the cache remains when the user, rather than Spark, intentionally kills the job.
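Steps to reproduce (a minimal, hypothetical PySpark sketch; the app name, data size, and slice count are illustrative assumptions, not from the original report):

    # Hypothetical repro sketch: cache an RDD to disk, then let Spark
    # kill the job (e.g., on executor failure) before sc.stop() runs.
    from pyspark import SparkContext, StorageLevel

    sc = SparkContext(appName="cache-leak-repro")  # assumed app name

    # Persist with a disk-backed level so block files are written to the
    # executors' local dirs (yarn.nodemanager.local-dirs on YARN).
    rdd = sc.parallelize(range(10000000), numSlices=200) \
            .persist(StorageLevel.MEMORY_AND_DISK)
    rdd.count()  # materialize the cached blocks on the worker nodes

    # On a clean exit, sc.stop() lets Spark delete the block files; when
    # Spark kills the job mid-run, they appear to be left behind under
    # usercache/<user>/appcache/<application_id> on each node.
    sc.stop()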

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org