Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2018/11/08 05:26:00 UTC

[jira] [Commented] (SPARK-22827) Avoid throwing OutOfMemoryError in case of exception in spill

    [ https://issues.apache.org/jira/browse/SPARK-22827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679306#comment-16679306 ] 

Apache Spark commented on SPARK-22827:
--------------------------------------

User 'ueshin' has created a pull request for this issue:
https://github.com/apache/spark/pull/22969

> Avoid throwing OutOfMemoryError in case of exception in spill
> -------------------------------------------------------------
>
>                 Key: SPARK-22827
>                 URL: https://issues.apache.org/jira/browse/SPARK-22827
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.2.0
>            Reporter: Sital Kedia
>            Assignee: Sital Kedia
>            Priority: Major
>             Fix For: 2.3.0
>
>
> Currently, the task memory manager throws an OutOfMemoryError when an IO exception happens in spill() - https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/memory/TaskMemoryManager.java#L194. Similarly, there are many other places in the code where, if a task is not able to acquire memory due to an exception, we throw an OutOfMemoryError, which kills the entire executor and hence fails all the tasks running on that executor instead of failing just the one task.
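
[Editor's note: a minimal, simplified sketch of the pattern described above. The class and method names here are hypothetical; Spark's actual fix for this issue introduced org.apache.spark.memory.SparkOutOfMemoryError. The idea is that an IO failure during spill should be surfaced as an error that fails only the current task, rather than as a plain OutOfMemoryError that the executor's uncaught-exception handler treats as fatal for the whole JVM.]

    import java.io.IOException;

    public class SpillErrorSketch {

        // Hypothetical stand-in for a task-scoped OOM error. Spark's real
        // counterpart is org.apache.spark.memory.SparkOutOfMemoryError, which
        // extends OutOfMemoryError but is handled as a single-task failure
        // rather than as a fatal executor error.
        static class TaskOutOfMemoryError extends OutOfMemoryError {
            TaskOutOfMemoryError(String message) {
                super(message);
            }
        }

        // Simulates a spill that fails with an IO error (e.g. disk full).
        static long spill() throws IOException {
            throw new IOException("No space left on device");
        }

        public static void main(String[] args) {
            try {
                spill();
            } catch (IOException e) {
                // Before the fix: `throw new OutOfMemoryError(...)` here would
                // reach the executor's uncaught-exception handler and kill the
                // whole JVM, failing every task on the executor. Throwing a
                // dedicated subclass lets the scheduler fail just this task,
                // which can then be retried elsewhere.
                throw new TaskOutOfMemoryError(
                    "error while calling spill(): " + e.getMessage());
            }
        }
    }

With this pattern, a transient spill failure costs one task attempt instead of every task on the executor.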



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org