Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2015/06/04 14:44:38 UTC
[jira] [Commented] (SPARK-8099) In yarn-cluster mode, "--executor-cores" can't be set into SparkConf
[ https://issues.apache.org/jira/browse/SPARK-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572692#comment-14572692 ]
Apache Spark commented on SPARK-8099:
-------------------------------------
User 'XuTingjun' has created a pull request for this issue:
https://github.com/apache/spark/pull/6643
> In yarn-cluster mode, "--executor-cores" can't be set into SparkConf
> --------------------------------------------------------------------
>
> Key: SPARK-8099
> URL: https://issues.apache.org/jira/browse/SPARK-8099
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Reporter: meiyoula
>
> While testing the dynamic executor allocation feature, I set the executor cores with *--executor-cores 4* in the spark-submit command. But in *ExecutorAllocationManager*, *private val tasksPerExecutor = conf.getInt("spark.executor.cores", 1) / conf.getInt("spark.task.cpus", 1)* still evaluates to 1, because in yarn-cluster mode the submit-time value never makes it into SparkConf. A sketch of the computation follows below.
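> For illustration, a minimal, self-contained sketch of the computation (the conf keys match the snippet above; running it standalone with only spark-core on the classpath is an assumption, as is the fix at the end, which mirrors what the linked pull request is expected to do):
> {code:scala}
> import org.apache.spark.SparkConf
>
> object TasksPerExecutorSketch {
>   def main(args: Array[String]): Unit = {
>     // In yarn-cluster mode the value from "--executor-cores 4" never
>     // reaches SparkConf, so getInt falls back to its default of 1.
>     val conf = new SparkConf(loadDefaults = false)
>     val tasksPerExecutor =
>       conf.getInt("spark.executor.cores", 1) / conf.getInt("spark.task.cpus", 1)
>     println(tasksPerExecutor)  // prints 1, not the expected 4
>
>     // Hypothetical fix: propagate the submit-time value into SparkConf
>     // before ExecutorAllocationManager reads it. The same expression
>     // then yields the expected ratio.
>     conf.set("spark.executor.cores", "4")
>     println(conf.getInt("spark.executor.cores", 1) /
>       conf.getInt("spark.task.cpus", 1))  // prints 4
>   }
> }
> {code}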
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org