Posted to issues@spark.apache.org by "yangZhiguo (JIRA)" <ji...@apache.org> on 2017/06/27 09:30:00 UTC

[jira] [Created] (SPARK-21225) Decrease the memory used by the variable 'tasks' in the function resourceOffers

yangZhiguo created SPARK-21225:
----------------------------------

             Summary: Decrease the memory used by the variable 'tasks' in the function resourceOffers
                 Key: SPARK-21225
                 URL: https://issues.apache.org/jira/browse/SPARK-21225
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
    Affects Versions: 2.1.1, 2.1.0
            Reporter: yangZhiguo
            Priority: Minor


    In the function 'resourceOffers', a variable 'tasks' is declared to store the tasks that have been allocated an executor. It is declared like this:
*{color:#d04437}val tasks = shuffledOffers.map(o => new ArrayBuffer[TaskDescription](o.cores)){color}*

But I think this code only considers the situation where there is one task per core. If the user configures "spark.task.cpus" as 2 or 3, it really doesn't need so much space. I think it can be modified as follows:

val tasks = shuffledOffers.map(o => new ArrayBuffer[TaskDescription](Math.ceil(o.cores * 1.0 / CPUS_PER_TASK).toInt))
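
For illustration, here is a minimal, self-contained sketch of the sizing arithmetic. The WorkerOffer and TaskDescription case classes, the executor names, and the core counts below are simplified stand-ins for the real Spark types, and CPUS_PER_TASK = 2 assumes "spark.task.cpus" is set to 2:

import scala.collection.mutable.ArrayBuffer

object BufferSizingSketch {
  // Simplified stand-ins for the real Spark classes, just for this sketch.
  final case class WorkerOffer(executorId: String, host: String, cores: Int)
  final case class TaskDescription(taskId: Long)

  val CPUS_PER_TASK = 2 // assumed value of "spark.task.cpus"

  def main(args: Array[String]): Unit = {
    val shuffledOffers = Seq(WorkerOffer("exec-1", "hostA", 8),
                             WorkerOffer("exec-2", "hostB", 12))

    shuffledOffers.foreach { o =>
      val currentCap  = o.cores                                        // one slot per core
      val proposedCap = Math.ceil(o.cores * 1.0 / CPUS_PER_TASK).toInt // one slot per schedulable task
      println(s"${o.executorId}: current capacity=$currentCap, proposed capacity=$proposedCap")
      // prints: exec-1: current capacity=8, proposed capacity=4
      //         exec-2: current capacity=12, proposed capacity=6

      // The buffer the proposed sizing would allocate for this offer.
      val tasks = new ArrayBuffer[TaskDescription](proposedCap)
    }
  }
}

With 8 cores and "spark.task.cpus"=2, at most 4 tasks can be scheduled on that offer, so pre-allocating 8 slots reserves twice the space that can ever be used.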



