Posted to issues@spark.apache.org by "Bago Amirbekian (Jira)" <ji...@apache.org> on 2019/10/31 21:49:00 UTC
[jira] [Created] (SPARK-29692) SparkContext.defaultParallelism should reflect resource limits when resource limits are set
Bago Amirbekian created SPARK-29692:
---------------------------------------
Summary: SparkContext.defaultParallelism should reflect resource limits when resource limits are set
Key: SPARK-29692
URL: https://issues.apache.org/jira/browse/SPARK-29692
Project: Spark
Issue Type: Bug
Components: Spark Core
Affects Versions: 3.0.0
Reporter: Bago Amirbekian
With the new GPU/FPGA resource support in Spark, defaultParallelism may not be computed correctly. Specifically, defaultParallelism may be much higher than the total number of tasks that can actually run concurrently, for example when executors have many more cores than GPUs.
Steps to reproduce:
Start a cluster with spark.executor.resource.gpu.amount less than the number of cores per executor. Set spark.task.resource.gpu.amount = 1 and keep spark.task.cpus at 1.
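The mismatch can be sketched as follows. This helper is hypothetical (it is not Spark's actual implementation) and assumes an example executor with 16 cores and 2 GPUs; it shows how a GPU requirement caps the real task-slot count while a cores-only calculation does not:

```python
# Hypothetical sketch, not Spark source: concurrent task slots on an
# executor are bounded by every declared resource, not just CPU cores.
def task_slots_per_executor(executor_cores, task_cpus,
                            executor_resources, task_resources):
    """executor_resources / task_resources are dicts like {"gpu": 2} / {"gpu": 1}."""
    slots = executor_cores // task_cpus
    for name, amount in task_resources.items():
        if amount > 0:
            # Each resource further limits how many tasks fit at once.
            slots = min(slots, executor_resources.get(name, 0) // amount)
    return slots

# Repro scenario from this issue: 16 cores but only 2 GPUs per executor,
# spark.task.cpus = 1, spark.task.resource.gpu.amount = 1.
cores_only = 16 // 1  # what a cores-based defaultParallelism reflects
with_gpus = task_slots_per_executor(16, 1, {"gpu": 2}, {"gpu": 1})
print(cores_only, with_gpus)  # prints "16 2"
```

Under these assumptions, a cores-based defaultParallelism reports 16 slots per executor while only 2 tasks can actually run at a time, which is the 8x overestimate this issue describes.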
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org