Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2019/03/18 15:00:00 UTC

[jira] [Assigned] (SPARK-27192) spark.task.cpus should be less than or equal to spark.executor.cores when static executor allocation is used

     [ https://issues.apache.org/jira/browse/SPARK-27192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-27192:
------------------------------------

    Assignee:     (was: Apache Spark)

> spark.task.cpus should be less than or equal to spark.executor.cores when static executor allocation is used
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-27192
>                 URL: https://issues.apache.org/jira/browse/SPARK-27192
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.2.0, 2.3.0, 2.4.0
>            Reporter: Lijia Liu
>            Priority: Major
>
> When dynamic executor allocation is enabled and spark.executor.cores is set smaller than spark.task.cpus, an exception is thrown:
> '''spark.executor.cores must not be < spark.task.cpus'''
> But when dynamic executor allocation is not enabled, Spark hangs when a new job is submitted, because TaskSchedulerImpl will never schedule a task on an executor whose available cores are fewer than
> spark.task.cpus. See [https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala#L351]
> So spark.task.cpus should also be checked when the task scheduler starts.
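> A minimal sketch of the kind of validation that could run at scheduler startup (a hypothetical helper, not the actual Spark patch), assuming the check simply compares spark.task.cpus against spark.executor.cores whenever dynamic allocation is disabled:
>
>     import org.apache.spark.SparkConf
>
>     // Hypothetical check: with static executor allocation, a task that needs
>     // more CPUs than any executor offers can never be scheduled, so fail fast.
>     def validateTaskCpus(conf: SparkConf): Unit = {
>       val dynamicAllocation = conf.getBoolean("spark.dynamicAllocation.enabled", false)
>       val taskCpus = conf.getInt("spark.task.cpus", 1)
>       // spark.executor.cores defaults vary by cluster manager; 1 is assumed here.
>       val executorCores = conf.getInt("spark.executor.cores", 1)
>       if (!dynamicAllocation && taskCpus > executorCores) {
>         throw new IllegalArgumentException(
>           s"spark.task.cpus ($taskCpus) must be <= spark.executor.cores ($executorCores); " +
>             "otherwise no executor can ever satisfy a single task and submitted jobs will hang.")
>       }
>     }
>
> Whether the real fix belongs in TaskSchedulerImpl initialization or in SparkConf validation is an open question for the patch; the sketch only illustrates the intended condition.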



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org