Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2017/06/09 17:54:18 UTC
[jira] [Resolved] (SPARK-20662) Block jobs that have greater than a configured number of tasks
[ https://issues.apache.org/jira/browse/SPARK-20662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-20662.
-------------------------------
Resolution: Won't Fix
> Block jobs that have greater than a configured number of tasks
> --------------------------------------------------------------
>
> Key: SPARK-20662
> URL: https://issues.apache.org/jira/browse/SPARK-20662
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 1.6.0, 2.0.0
> Reporter: Xuefu Zhang
>
> In a shared cluster, it's desirable for an admin to be able to block large Spark jobs. While there may be no single metric that defines the size of a job, the number of tasks is usually a good indicator. Thus, it would be useful for the Spark scheduler to block any job whose number of tasks reaches a configured limit. By default, the limit could be infinite, retaining the existing behavior.
> MapReduce offers mapreduce.job.max.map and mapreduce.job.max.reduce for this purpose, which block an MR job at submission time.
> The proposed configuration is spark.job.max.tasks, with a default value of -1 (infinite).
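The quoted proposal amounts to a simple admission check at job submission time. A minimal sketch of that check in Python, for illustration only: the issue was resolved "Won't Fix", so spark.job.max.tasks was never implemented, and the names check_task_limit and JobRejectedException below are hypothetical, not Spark API.

```python
# Hypothetical admission check as described in SPARK-20662.
# MAX_TASKS_KEY is the proposed (never implemented) config name;
# check_task_limit and JobRejectedException are illustrative names.

MAX_TASKS_KEY = "spark.job.max.tasks"  # proposed default: -1 (infinite)


class JobRejectedException(Exception):
    """Raised when a job's task count exceeds the configured limit."""


def check_task_limit(conf: dict, num_tasks: int) -> None:
    """Reject the job at submission time if it exceeds the limit.

    A limit of -1 (the proposed default) disables the check, keeping
    the existing behavior, analogous to mapreduce.job.max.map and
    mapreduce.job.max.reduce in MapReduce.
    """
    limit = int(conf.get(MAX_TASKS_KEY, -1))
    if limit >= 0 and num_tasks > limit:
        raise JobRejectedException(
            f"Job with {num_tasks} tasks exceeds {MAX_TASKS_KEY}={limit}"
        )


# With no limit configured, even a very large job is admitted:
check_task_limit({}, 1_000_000)

# With a limit set, an oversized job is rejected before it runs:
try:
    check_task_limit({MAX_TASKS_KEY: "10000"}, 50_000)
except JobRejectedException as e:
    print(e)
```

The key design point in the proposal is that the check runs at submission, before any tasks are scheduled, so a blocked job consumes no cluster resources.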
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org