Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2014/12/31 10:18:14 UTC

[jira] [Commented] (SPARK-5033) Spark 1.1.0/1.1.1/1.2.0 can't run well in HDP on Windows

    [ https://issues.apache.org/jira/browse/SPARK-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14262041#comment-14262041 ] 

Sean Owen commented on SPARK-5033:
----------------------------------

Related to, and possibly a duplicate of, SPARK-2221. I don't think Spark quite runs on Windows, and I'm not sure whether that's viewed as something that is supposed to be supported.

> Spark 1.1.0/1.1.1/1.2.0 can't run well in HDP on Windows
> --------------------------------------------------------
>
>                 Key: SPARK-5033
>                 URL: https://issues.apache.org/jira/browse/SPARK-5033
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.1.0, 1.2.0
>         Environment: HDInsight 3.1 in Azure
>            Reporter: Rice
>              Labels: easyfix
>
> After installation, when I ran .\bin\spark-shell --master yarn, YarnClient reported an error while running a command like the following:
> %JAVA_HOME%/bin/java -server -cp %CLASSPATH%;C:\hdp\spark-1.1.1\lib\spark-assembly-1.1.1-hadoop2.4.0.jar -Xmx512m -Djava.io.tmpdir=%PWD%/tmp '-Dspark.tachyonStore.folderName=spark-919783cd-bdf7-4e6b-86bf-011244e4a49f' '-Dspark.yarn.secondary.jars=' '-Dspark.repl.class.uri=http://192.168.0.13:12972' '-Dspark.driver.host=HOME-HYPERVS' '-Dspark.driver.appUIHistoryAddress=' '-Dspark.app.name=Spark shell' '-Dspark.driver.appUIAddress=HOME-HYPERVS:4040' '-Dspark.jars=' '-Dspark.fileserver.uri=http://192.168.0.13:12992' '-Dspark.master=yarn-client' '-Dspark.driver.port=12988' org.apache.spark.deploy.yarn.ExecutorLauncher --class 'notused' --jar  null  --arg  'HOME-HYPERVS:12988' --executor-memory 1024 --executor-cores 1 --num-executors  2 
> The command fails because the arguments are wrapped in single quotes instead of double quotes; cmd.exe on Windows does not treat single quotes as quoting characters, so they reach the java command line verbatim. The escapeForShell method in YarnSparkHadoopUtil.scala needs to be modified:
> def escapeForShell(arg: String): String = {
>   if (arg != null) {
>     // Wraps the argument in single quotes and escapes the characters that
>     // are special inside single quotes on a POSIX shell. cmd.exe does not
>     // honor single quotes, so on Windows they are passed through verbatim.
>     val escaped = new StringBuilder("'")
>     for (c <- arg) {
>       c match {
>         case '$' => escaped.append("\\$")
>         case '"' => escaped.append("\\\"")
>         case '\'' => escaped.append("'\\''")
>         case c => escaped.append(c)
>       }
>     }
>     escaped.append("'").toString()
>   } else {
>     arg
>   }
> }
> After changing the single quotes to double quotes, the command runs correctly.
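> A minimal sketch of the double-quote variant described above. This is illustrative only: the method name escapeForShellWindows and the exact escaping rules are assumptions, not the actual patch.
> def escapeForShellWindows(arg: String): String = {
>   if (arg != null) {
>     // Hypothetical sketch: wrap the argument in double quotes, which
>     // cmd.exe does honor. '$' needs no escaping because cmd.exe performs
>     // no $-expansion; embedded double quotes are escaped as \" (assumed
>     // convention for arguments passed to the java launcher on Windows).
>     val escaped = new StringBuilder("\"")
>     for (c <- arg) {
>       c match {
>         case '"' => escaped.append("\\\"")
>         case c => escaped.append(c)
>       }
>     }
>     escaped.append("\"").toString()
>   } else {
>     arg
>   }
> }
> With this variant, an option such as -Dspark.master=yarn-client would be emitted as "-Dspark.master=yarn-client" rather than '-Dspark.master=yarn-client', which cmd.exe parses as intended.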



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org