Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2019/02/26 15:42:00 UTC

[jira] [Resolved] (SPARK-26750) Estimating memory overhead should take multiple cores into account

     [ https://issues.apache.org/jira/browse/SPARK-26750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-26750.
-------------------------------
    Resolution: Won't Fix

> Estimating memory overhead should take multiple cores into account
> ------------------------------------------------------------------
>
>                 Key: SPARK-26750
>                 URL: https://issues.apache.org/jira/browse/SPARK-26750
>             Project: Spark
>          Issue Type: Improvement
>          Components: YARN
>    Affects Versions: 2.4.0
>            Reporter: liupengcheng
>            Priority: Major
>
> Currently, Spark estimates the memory overhead without taking multiple cores into account; this can sometimes cause a direct-memory OOM, or the executor being killed by YARN for exceeding the requested physical memory.
> I think the memory overhead is related to the executor's core count (mainly Spark direct memory and some related JVM native memory, for instance thread stacks, GC data, etc.), so maybe we can improve this estimate by taking the core count into account.
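
For context, in Spark 2.4 the default overhead is derived from spark.executor.memoryOverhead as max(0.10 * executorMemory, 384 MiB), with no dependence on spark.executor.cores. Below is a minimal Scala sketch of the cores-aware estimate the reporter suggests; perCoreOverheadMiB is a hypothetical illustrative parameter, not an actual Spark setting, and the right increment would need measurement.

    // Sketch only: a cores-aware overhead estimate. The per-core increment
    // (perCoreOverheadMiB) is an assumed illustration, not a Spark config.
    object OverheadEstimate {
      val MinOverheadMiB: Long = 384     // floor applied by Spark
      val OverheadFactor: Double = 0.10  // default overhead fraction of heap

      // Current behavior (Spark 2.4 on YARN): max(0.10 * executorMemory, 384 MiB),
      // independent of the executor's core count.
      def current(executorMemoryMiB: Long): Long =
        math.max((OverheadFactor * executorMemoryMiB).toLong, MinOverheadMiB)

      // Proposed direction: grow the estimate with the core count, since direct
      // memory, task thread stacks, and GC native structures scale with concurrency.
      def coresAware(executorMemoryMiB: Long, cores: Int,
                     perCoreOverheadMiB: Long = 64): Long =
        current(executorMemoryMiB) + math.max(cores - 1, 0) * perCoreOverheadMiB
    }

    // Example: an 8 GiB, 4-core executor.
    //   current:    max(0.10 * 8192, 384) = 819 MiB
    //   coresAware: 819 + 3 * 64          = 1011 MiB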



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org