Posted to issues@spark.apache.org by "David McWhorter (JIRA)" <ji...@apache.org> on 2017/10/25 19:06:00 UTC

[jira] [Created] (SPARK-22354) --executor-cores in spark-submit fails to set "spark.executor.cores" for mesos workers

David McWhorter created SPARK-22354:
---------------------------------------

             Summary: --executor-cores in spark-submit fails to set "spark.executor.cores" for mesos workers
                 Key: SPARK-22354
                 URL: https://issues.apache.org/jira/browse/SPARK-22354
             Project: Spark
          Issue Type: Bug
          Components: Mesos
    Affects Versions: 2.2.0
         Environment: Mesos 1.0.1
Spark 2.2.0
            Reporter: David McWhorter


We are running Spark in cluster mode and limit the CPU and memory per executor so that many executors spin up per Mesos worker.
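For context, a sketch of the kind of per-executor limits involved (the specific values are illustrative, not from this report):

    # Illustrative limits: on e.g. a 16-CPU Mesos agent, 1-core executors
    # allow up to 16 executors per agent, memory permitting.
    spark.executor.cores=1
    spark.executor.memory=2g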

When we specify --executor-cores 1 in the spark-submit command to the dispatcher, Mesos allocates only one CPU for the workers, but Spark itself thinks it has as many CPUs as are available on each worker, so only one Spark executor starts per Mesos worker.  If we instead explicitly set --conf "spark.executor.cores=1", the problem goes away and many Spark executors spin up on each Mesos worker.
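For illustration, a hypothetical pair of submissions (the dispatcher host, application class, and jar path are placeholders):

    # Reported as broken: --executor-cores alone does not set
    # spark.executor.cores, so only one executor starts per Mesos worker.
    spark-submit \
      --master mesos://dispatcher-host:7077 \
      --deploy-mode cluster \
      --executor-cores 1 \
      --class com.example.MyApp \
      /path/to/app.jar

    # Reported workaround: setting the property explicitly restores
    # one-core executors, so many executors start per Mesos worker.
    spark-submit \
      --master mesos://dispatcher-host:7077 \
      --deploy-mode cluster \
      --conf "spark.executor.cores=1" \
      --class com.example.MyApp \
      /path/to/app.jar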



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org