Posted to issues@spark.apache.org by "Jeff Zhang (JIRA)" <ji...@apache.org> on 2016/08/15 01:47:20 UTC

[jira] [Commented] (SPARK-17054) SparkR can not run in yarn-cluster mode on mac os

    [ https://issues.apache.org/jira/browse/SPARK-17054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15420553#comment-15420553 ] 

Jeff Zhang commented on SPARK-17054:
------------------------------------

Although I can fix it by using the correct cache dir for macOS, I am confused about why we need to download SparkR at all. I don't remember it being needed in Spark 1.x. Is this expected behavior? [~shivaram] [~junyangq]
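For illustration, here is a minimal sketch of the kind of per-OS cache-directory resolution the comment is alluding to. SparkR itself is R code, so this Python helper (`spark_cache_dir` is a hypothetical name, not the actual SparkR function) only mirrors the presumed intent: on macOS the cache belongs under `~/Library/Caches`, not under a Linux-style `/home/...` path like the one in the log below.

```python
import os
import platform

def spark_cache_dir(home=None, system=None):
    """Return a per-OS cache directory for an unpacked Spark distribution.

    Hypothetical helper sketching the logic SparkR's installer would need;
    `home` and `system` can be passed explicitly for testing.
    """
    home = home or os.path.expanduser("~")
    system = system or platform.system()
    if system == "Darwin":
        # macOS convention: ~/Library/Caches/<app>
        return os.path.join(home, "Library", "Caches", "spark")
    if system == "Windows":
        # Windows convention: %LOCALAPPDATA%\<app>
        base = os.environ.get(
            "LOCALAPPDATA", os.path.join(home, "AppData", "Local"))
        return os.path.join(base, "spark")
    # Linux and others: honor XDG_CACHE_HOME, falling back to ~/.cache
    base = os.environ.get("XDG_CACHE_HOME", os.path.join(home, ".cache"))
    return os.path.join(base, "spark")
```

The failure in the log is consistent with the macOS branch being skipped, so the path is assembled from mismatched pieces (`/home//Library/Caches/spark`) and `dir.create`/`download.file` fail on the nonexistent directory.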

> SparkR can not run in yarn-cluster mode on mac os
> -------------------------------------------------
>
>                 Key: SPARK-17054
>                 URL: https://issues.apache.org/jira/browse/SPARK-17054
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 2.0.0
>            Reporter: Jeff Zhang
>
> This is because it downloads SparkR to the wrong place (note the malformed cache path {{/home//Library/Caches/spark}} in the log below).
> {noformat}
> Warning message:
> 'sparkR.init' is deprecated.
> Use 'sparkR.session' instead.
> See help("Deprecated")
> Spark not found in SPARK_HOME:  .
> To search in the cache directory. Installation will start if not found.
> Mirror site not provided.
> Looking for site suggested from apache website...
> Preferred mirror site found: http://apache.mirror.cdnetworks.com/spark
> Downloading Spark spark-2.0.0 for Hadoop 2.7 from:
> - http://apache.mirror.cdnetworks.com/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.7.tgz
> Fetch failed from http://apache.mirror.cdnetworks.com/spark
> <simpleError in download.file(packageRemotePath, packageLocalPath): cannot open destfile '/home//Library/Caches/spark/spark-2.0.0-bin-hadoop2.7.tgz', reason 'No such file or directory'>
> To use backup site...
> Downloading Spark spark-2.0.0 for Hadoop 2.7 from:
> - http://www-us.apache.org/dist/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.7.tgz
> Fetch failed from http://www-us.apache.org/dist/spark
> <simpleError in download.file(packageRemotePath, packageLocalPath): cannot open destfile '/home//Library/Caches/spark/spark-2.0.0-bin-hadoop2.7.tgz', reason 'No such file or directory'>
> Error in robust_download_tar(mirrorUrl, version, hadoopVersion, packageName,  :
>   Unable to download Spark spark-2.0.0 for Hadoop 2.7. Please check network connection, Hadoop version, or provide other mirror sites.
> Calls: sparkRSQL.init ... sparkR.session -> install.spark -> robust_download_tar
> In addition: Warning messages:
> 1: 'sparkRSQL.init' is deprecated.
> Use 'sparkR.session' instead.
> See help("Deprecated")
> 2: In dir.create(localDir, recursive = TRUE) :
>   cannot create dir '/home//Library', reason 'Operation not supported'
> Execution halted
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org