Posted to issues@spark.apache.org by "Mario Briggs (JIRA)" <ji...@apache.org> on 2016/10/13 18:36:20 UTC

[jira] [Commented] (SPARK-17917) Convert 'Initial job has not accepted any resources..' logWarning to a SparkListener event

    [ https://issues.apache.org/jira/browse/SPARK-17917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15572795#comment-15572795 ] 

Mario Briggs commented on SPARK-17917:
--------------------------------------

Would appreciate it if the Spark devs could comment on whether they see this as a bad idea for some reason.

I basically see adding 2 events to SparkListener, like
  onTaskStarved() and onTaskUnStarved() - the latter fires only if onTaskStarved() fired in the first place for a taskSet
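
To make that concrete, here is a rough sketch of what the additions could look like. All names below are hypothetical (nothing here is in Spark today); the events just mirror the existing SparkListenerEvent pattern in org.apache.spark.scheduler:

    import org.apache.spark.scheduler.SparkListenerEvent

    // Hypothetical event fired when a taskSet has not been offered
    // any resources (the case that today only produces a logWarning).
    case class SparkListenerTaskStarved(stageId: Int, stageAttemptId: Int,
        time: Long) extends SparkListenerEvent

    // Hypothetical event fired once that taskSet finally gets resources.
    case class SparkListenerTaskUnStarved(stageId: Int, stageAttemptId: Int,
        time: Long) extends SparkListenerEvent

    // Matching no-op callbacks that would be added to SparkListener;
    // onTaskUnStarved fires only for a taskSet that was reported starved.
    def onTaskStarved(event: SparkListenerTaskStarved): Unit = { }
    def onTaskUnStarved(event: SparkListenerTaskUnStarved): Unit = { }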

> Convert 'Initial job has not accepted any resources..' logWarning to a SparkListener event
> ------------------------------------------------------------------------------------------
>
>                 Key: SPARK-17917
>                 URL: https://issues.apache.org/jira/browse/SPARK-17917
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Mario Briggs
>
> When supporting Spark on a large multi-tenant shared cluster with per-tenant quotas, a submitted taskSet often does not get executors because quotas have been exhausted or resources are unavailable. In these situations, firing a SparkListener event instead of just logging the issue (as done currently at https://github.com/apache/spark/blob/9216901d52c9c763bfb908013587dcf5e781f15b/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala#L192) would give applications/listeners an opportunity to handle the situation more appropriately as needed.
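
As a usage sketch, an application could then register a listener and react programmatically instead of scraping driver logs. This assumes the hypothetical events sketched above and an existing SparkContext named sc; QuotaAwareListener is an illustrative name, not an existing class:

    import org.apache.spark.scheduler.SparkListener

    // Hypothetical: the onTaskStarved override only compiles if the
    // proposed callback is added to SparkListener.
    class QuotaAwareListener extends SparkListener {
      override def onTaskStarved(event: SparkListenerTaskStarved): Unit = {
        // React programmatically, e.g. surface a "quota exhausted" alert
        // to the tenant instead of leaving it buried in the driver log.
        println(s"TaskSet for stage ${event.stageId} has not been offered resources")
      }
    }

    // SparkContext.addSparkListener is an existing developer API.
    sc.addSparkListener(new QuotaAwareListener)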


