Posted to issues@spark.apache.org by "DjvuLee (JIRA)" <ji...@apache.org> on 2017/06/13 17:33:00 UTC

[jira] [Commented] (SPARK-21082) Consider Executor's memory usage when scheduling task

    [ https://issues.apache.org/jira/browse/SPARK-21082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16048146#comment-16048146 ] 

DjvuLee commented on SPARK-21082:
---------------------------------

If this feature sounds like a good suggestion (we have in fact run into this problem), I will submit a pull request.

> Consider Executor's memory usage when scheduling task 
> ------------------------------------------------------
>
>                 Key: SPARK-21082
>                 URL: https://issues.apache.org/jira/browse/SPARK-21082
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler, Spark Core
>    Affects Versions: 2.2.1
>            Reporter: DjvuLee
>
>  The Spark scheduler does not consider an Executor's memory usage when dispatching tasks. This can lead to Executor OOM when an RDD is cached, because Spark cannot estimate memory usage accurately enough (especially when the RDD's element type is not flat), so the scheduler may dispatch too many tasks onto a single Executor.
> We can offer a configuration option that lets the user decide whether the scheduler should take memory usage into account.
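
To make the proposal concrete, here is a minimal sketch of the kind of opt-in check the scheduler could apply before offering more tasks to an executor. The config keys and the helper below are hypothetical and only illustrate the idea; the actual change would live in the resource-offer path of the scheduler (e.g. TaskSchedulerImpl), and the defaults shown are placeholders.

{code:scala}
import org.apache.spark.SparkConf

object MemoryAwareScheduling {

  // Hypothetical opt-in flag; disabled by default so existing behaviour is unchanged.
  val ConsiderMemoryKey = "spark.scheduler.considerExecutorMemoryUsage"

  // Hypothetical threshold: stop offering tasks to an executor once its
  // memory usage exceeds this fraction of its total available memory.
  val MemoryThresholdKey = "spark.scheduler.executorMemoryUsageThreshold"

  /** Decide whether an executor should still receive new tasks. */
  def canOfferTask(conf: SparkConf, memoryUsedBytes: Long, memoryTotalBytes: Long): Boolean = {
    val consider = conf.getBoolean(ConsiderMemoryKey, defaultValue = false)
    if (!consider || memoryTotalBytes <= 0L) {
      // Feature disabled (or no memory info reported): keep today's behaviour.
      true
    } else {
      val threshold = conf.getDouble(MemoryThresholdKey, 0.9)
      memoryUsedBytes.toDouble / memoryTotalBytes < threshold
    }
  }
}
{code}

Keeping the flag off by default and exposing the threshold as a separate setting would let users who hit this OOM pattern opt in without changing scheduling behaviour for anyone else.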



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org