Posted to issues@spark.apache.org by "Lijia Liu (JIRA)" <ji...@apache.org> on 2019/03/19 02:22:00 UTC

[jira] [Updated] (SPARK-27192) spark.task.cpus should be less than or equal to spark.executor.cores when using static executor allocation

     [ https://issues.apache.org/jira/browse/SPARK-27192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lijia Liu updated SPARK-27192:
------------------------------
    Description: 
When dynamic executor allocation is used, setting spark.executor.cores smaller than spark.task.cpus causes an exception to be thrown at startup:

'''spark.executor.cores must not be < spark.task.cpus'''

But if dynamic executor allocation is not enabled, Spark hangs when a new job is submitted, because TaskSchedulerImpl will never schedule a task on an executor whose available cores are fewer than spark.task.cpus. See [https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala#L351]
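
The condition at that line only launches a task on a resource offer whose free cores cover spark.task.cpus. A minimal sketch of the effect (illustrative only, not the actual Spark source; the names mirror the real code):

// Illustrative sketch of the scheduling condition, not the actual TaskSchedulerImpl code.
val CPUS_PER_TASK = 2          // value of spark.task.cpus
val availableCpus = Array(1)   // one local executor with a single core, as with --master local[1]
val canSchedule = availableCpus.exists(_ >= CPUS_PER_TASK)
// canSchedule is false for every resource offer, so the job never makes progress and appears to hang.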

So spark.task.cpus should be validated when the task scheduler starts.
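
A rough sketch of the kind of startup check this suggests (hypothetical helper, not an existing Spark method; how the cores per executor are determined depends on the deploy mode):

import org.apache.spark.SparkConf

// Hypothetical validation sketch; validateTaskCpus does not exist in Spark.
def validateTaskCpus(conf: SparkConf, coresPerExecutor: Int): Unit = {
  val taskCpus = conf.getInt("spark.task.cpus", 1)
  require(coresPerExecutor >= taskCpus,
    s"spark.task.cpus ($taskCpus) must not be greater than the cores per executor ($coresPerExecutor)")
}

// With --master local[1] there is one core per executor, so spark.task.cpus=2 would fail fast here.
validateTaskCpus(new SparkConf().set("spark.task.cpus", "2"), coresPerExecutor = 1)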

To reproduce:

$SPARK_HOME/bin/spark-shell --conf spark.task.cpus=2  --master local[1]

scala> sc.parallelize(1 to 9).collect
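
With local[1] the only executor has a single core, so no resource offer can ever satisfy spark.task.cpus=2, and the collect above hangs instead of failing fast with a configuration error.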



> spark.task.cpus should be less than or equal to spark.executor.cores when using static executor allocation
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-27192
>                 URL: https://issues.apache.org/jira/browse/SPARK-27192
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.2.0, 2.3.0, 2.4.0
>            Reporter: Lijia Liu
>            Priority: Major
>


