Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/08/25 08:40:20 UTC

[jira] [Commented] (SPARK-17233) Shuffle files accumulate beyond disk capacity when dynamic allocation is enabled in a long-running application.

    [ https://issues.apache.org/jira/browse/SPARK-17233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15436502#comment-15436502 ] 

Sean Owen commented on SPARK-17233:
-----------------------------------

I think this was by design, IIRC, because it allows another executor to pick up the shuffle files?
How about using the external shuffle service?
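
For reference, a minimal sketch of the relevant settings (assuming the external shuffle service is actually deployed on the worker nodes, e.g. as the YARN aux-service; property names as documented for Spark):

    import org.apache.spark.SparkConf

    // Sketch only: with the external shuffle service enabled, shuffle files
    // are served by a per-node service rather than by the executor process,
    // which is what dynamic allocation expects.
    val conf = new SparkConf()
      .set("spark.shuffle.service.enabled", "true")  // requires the service running on each node
      .set("spark.dynamicAllocation.enabled", "true")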

> Shuffle files accumulate beyond disk capacity when dynamic allocation is enabled in a long-running application.
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-17233
>                 URL: https://issues.apache.org/jira/browse/SPARK-17233
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.5.2, 1.6.2, 2.0.0
>            Reporter: carlmartin
>
> When I execute SQL statements periodically on a long-running Thrift server, I find the disk device fills up after about a week.
> Checking the files on Linux, I found many shuffle files left in the block-mgr directory whose shuffle stages had finished long ago.
> Finally I found that when shuffle files need to be cleaned, the driver asks each executor to perform the shuffle cleanup. But when dynamic allocation is enabled, an executor may already have shut itself down; it then cannot clean its shuffle files, and the files are left behind.
> I tested this on Spark 1.5, but the master branch must have the same issue.
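
A schematic illustration of the failure mode described above (not Spark's actual cleanup code; all names here are invented for the example): the driver can only ask executors that are still alive to remove shuffle files, so files written by executors that dynamic allocation has already torn down are never deleted.

    // Hypothetical sketch of the leak: cleanup is driven through live
    // executors, so a decommissioned executor's shuffle files are orphaned.
    object ShuffleCleanupSketch {
      final case class Executor(id: String, alive: Boolean)

      def cleanShuffle(shuffleId: Int, executors: Seq[Executor]): Unit =
        executors.foreach { exec =>
          if (exec.alive)
            println(s"executor ${exec.id}: removing files for shuffle $shuffleId")
          else
            // The process is gone; nothing on that node deletes its files
            // until the application's block-mgr directories are removed.
            println(s"executor ${exec.id}: already removed, shuffle $shuffleId files leaked")
        }

      def main(args: Array[String]): Unit =
        cleanShuffle(7, Seq(Executor("1", alive = true), Executor("2", alive = false)))
    }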



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org