Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:21:23 UTC

[jira] [Updated] (SPARK-17125) Allow to specify spark config using non-string type in SparkR

     [ https://issues.apache.org/jira/browse/SPARK-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-17125:
---------------------------------
    Labels: bulk-closed  (was: )

> Allow to specify spark config using non-string type in SparkR
> -------------------------------------------------------------
>
>                 Key: SPARK-17125
>                 URL: https://issues.apache.org/jira/browse/SPARK-17125
>             Project: Spark
>          Issue Type: Improvement
>          Components: SparkR
>    Affects Versions: 2.0.0
>            Reporter: Jeff Zhang
>            Priority: Minor
>              Labels: bulk-closed
>
> I tried to specify the Spark conf spark.executor.instances as follows in SparkR, but it fails. Since an R list supports any data type, it is natural for a user to specify an int value for a configuration like spark.executor.instances.
> {code}
> sparkR.session(master="yarn-client", sparkConfig = list(spark.executor.instances=1))
> {code}
> {noformat}
> Error in invokeJava(isStatic = TRUE, className, methodName, ...) : 
>   java.lang.IllegalArgumentException: spark.executor.instances should be int, but was 1.0
> 	at org.apache.spark.internal.config.ConfigHelpers$.toNumber(ConfigBuilder.scala:31)
> 	at org.apache.spark.internal.config.ConfigBuilder$$anonfun$intConf$1.apply(ConfigBuilder.scala:178)
> 	at org.apache.spark.internal.config.ConfigBuilder$$anonfun$intConf$1.apply(ConfigBuilder.scala:178)
> 	at scala.Option.map(Option.scala:146)
> 	at org.apache.spark.internal.config.OptionalConfigEntry.readFrom(ConfigEntry.scala:150)
> 	at org.apache.spark.internal.config.OptionalConfigEntry.readFrom(ConfigEntry.scala:138)
> 	at org.apache.spark.SparkConf.get(SparkConf.scala:251)
> 	at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil$.getInitialTargetExecutorNumber(YarnSparkHadoopUtil.scala:313)
> 	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:54)
> 	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:154)
> 	at org.apache.spark.SparkContext
> {noformat}
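> Until the R side coerces numeric config values itself, a workaround is to pass the value as a character string so the JVM-side int parser receives "1" instead of "1.0". A minimal sketch (the stringifyConf helper below is hypothetical, only shown to illustrate the coercion):
> {code}
> library(SparkR)
>
> # Hypothetical helper: coerce every config value to a character string before
> # handing it to sparkR.session, so an R numeric 1 becomes "1" rather than "1.0".
> stringifyConf <- function(conf) {
>   lapply(conf, function(v) {
>     if (is.numeric(v) && v == as.integer(v)) {
>       as.character(as.integer(v))
>     } else {
>       as.character(v)
>     }
>   })
> }
>
> sparkR.session(master = "yarn-client",
>                sparkConfig = stringifyConf(list(spark.executor.instances = 1)))
> {code}
> Passing the string directly, e.g. sparkConfig = list(spark.executor.instances = "1"), avoids the error as well; the helper just spares users from having to remember which configs are int-typed on the JVM side.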



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org