Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/03/07 19:59:40 UTC
[jira] [Updated] (SPARK-13723) YARN - Change behavior of --num-executors when spark.dynamicAllocation.enabled true
[ https://issues.apache.org/jira/browse/SPARK-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen updated SPARK-13723:
------------------------------
Priority: Minor (was: Major)
That makes more sense, but it introduces another behavior change to paper over what is probably a misunderstanding about how these options interact. My preference is still to leave it as is, since the interaction is already documented and a warning is logged (though that could be improved, I suppose).
> YARN - Change behavior of --num-executors when spark.dynamicAllocation.enabled true
> -----------------------------------------------------------------------------------
>
> Key: SPARK-13723
> URL: https://issues.apache.org/jira/browse/SPARK-13723
> Project: Spark
> Issue Type: Improvement
> Components: YARN
> Affects Versions: 2.0.0
> Reporter: Thomas Graves
> Priority: Minor
>
> I think we should change the behavior when --num-executors is specified while dynamic allocation is enabled. Currently, if --num-executors is specified, dynamic allocation is disabled and Spark just uses a static number of executors.
> I would rather see the default behavior changed in the 2.x line. If the dynamic allocation config is on, then --num-executors would set both the maximum and the initial number of executors (see the sketch below the quoted description). I think this would let users easily cap their usage while still allowing idle executors to be freed. It would also let users doing ML start out with a fixed number of executors, and if they are actually caching the data, those executors wouldn't be freed up, so you would get behavior very similar to dynamic allocation being off.
> Part of the reason for this is that using a static number generally wastes resources, especially with people doing ad hoc work in spark-shell. It also has a big effect when people are doing MapReduce/ETL-type workloads. The problem is that people are used to specifying --num-executors, so if we turn dynamic allocation on by default in a cluster config, it's just overridden.
> We should also update the spark-submit --help description for --num-executors.
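For concreteness, a minimal sketch of the two behaviors in spark-submit terms. The flag and config names (--num-executors, spark.dynamicAllocation.*) are the standard ones; the second invocation shows the proposed semantics, not what Spark does today, and app.jar is just a placeholder.

  # Current behavior: an explicit --num-executors wins; dynamic allocation is
  # disabled and the application holds a static 10 executors.
  spark-submit --master yarn \
    --conf spark.dynamicAllocation.enabled=true \
    --num-executors 10 \
    app.jar

  # Proposed behavior: the same invocation would instead be treated as if it set
  #   spark.dynamicAllocation.initialExecutors=10
  #   spark.dynamicAllocation.maxExecutors=10
  # so the job starts with 10 executors, can release idle ones, but never grows
  # beyond 10 -- capping usage while still freeing unused capacity.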
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org