Posted to issues@spark.apache.org by "Hong Shen (JIRA)" <ji...@apache.org> on 2014/11/12 09:40:34 UTC

[jira] [Comment Edited] (SPARK-4341) Spark needs to set num-executors automatically

    [ https://issues.apache.org/jira/browse/SPARK-4341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207816#comment-14207816 ] 

Hong Shen edited comment on SPARK-4341 at 11/12/14 8:40 AM:
------------------------------------------------------------

After the first action is computed, we can set minPartitions for the following HadoopRDDs.

So the following HadoopRDDs' partition counts won't be less than num-executors, which prevents wasting resources. On the other hand, if a following HadoopRDD's partition count is much bigger than num-executors, we can send the new numExecutors to the ApplicationMaster and allocate new executors.
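
A minimal Scala sketch of the first half of this idea, assuming a live SparkContext sc (as in spark-shell). The config keys are the standard ones; the sizing rule itself is only an illustration:

    // Derive minPartitions for subsequent HadoopRDDs from the configured
    // executor resources, so every allocated core gets at least one task.
    val numExecutors = sc.getConf.getInt("spark.executor.instances", 2)
    val coresPerExecutor = sc.getConf.getInt("spark.executor.cores", 1)
    val minPartitions = numExecutors * coresPerExecutor
    // textFile creates a HadoopRDD; its second argument is minPartitions.
    val rdd = sc.textFile("hdfs:///path/to/input", minPartitions)

For the second half (growing the executor count when the partition count far exceeds it), later Spark releases address this with dynamic allocation (spark.dynamicAllocation.enabled) rather than a manual reset through the ApplicationMaster.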


> Spark needs to set num-executors automatically
> ---------------------------------------------
>
>                 Key: SPARK-4341
>                 URL: https://issues.apache.org/jira/browse/SPARK-4341
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.1.0
>            Reporter: Hong Shen
>
> A MapReduce job can set the number of map tasks automatically, but in Spark we have to set num-executors, executor memory, and cores. It's difficult for users to set these args, especially for users who want to use Spark SQL. So when the user hasn't set num-executors, Spark should set num-executors automatically according to the input partitions.
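
A hedged sketch of the heuristic the description asks for; suggestNumExecutors, the one-wave divisor, and the cap are hypothetical illustrations, not an existing API:

    // Hypothetical heuristic: choose num-executors from the input's
    // partition count instead of asking the user to guess it up front.
    def suggestNumExecutors(inputPartitions: Int,
                            coresPerExecutor: Int = 2,
                            maxExecutors: Int = 200): Int = {
      // Enough executors for one wave of tasks over all partitions,
      // bounded below by 1 and above by a cluster-wide cap.
      val oneWave = math.ceil(inputPartitions.toDouble / coresPerExecutor).toInt
      math.min(math.max(oneWave, 1), maxExecutors)
    }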



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org