Posted to issues@spark.apache.org by "Yanbo Liang (JIRA)" <ji...@apache.org> on 2017/03/12 09:32:04 UTC
[jira] [Commented] (SPARK-19925) SparkR spark.getSparkFiles fails on executor
[ https://issues.apache.org/jira/browse/SPARK-19925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15906469#comment-15906469 ]
Yanbo Liang commented on SPARK-19925:
-------------------------------------
The error is caused by {{spark.getSparkFiles}} calling into the backend Java code; that backend connection is not available when the function runs on executors.
{code}
# Driver-side implementation: delegates to SparkFiles.get via the JVM backend,
# which is unreachable from executor processes.
spark.getSparkFiles <- function(fileName) {
  callJStatic("org.apache.spark.SparkFiles", "get", as.character(fileName))
}
{code}
I think we should add special handling for when it is called on executors, following the PySpark implementation:
{code}
@classmethod
def getRootDirectory(cls):
    """
    Get the root directory that contains files added through
    C{SparkContext.addFile()}.
    """
    if cls._is_running_on_worker:
        return cls._root_directory
    else:
        # This will have to change if we support multiple SparkContexts:
        return cls._sc._jvm.org.apache.spark.SparkFiles.getRootDirectory()
{code}
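A SparkR fix could follow the same pattern: detect whether the code is running on a worker and, if so, resolve the path locally instead of calling the backend. A minimal sketch — the {{SPARKR_IS_RUNNING_ON_WORKER}} and {{SPARKR_SPARKFILES_ROOT_DIR}} environment variables are assumptions about how the worker runner could advertise its state, not confirmed API:
{code}
spark.getSparkFiles <- function(fileName) {
  if (Sys.getenv("SPARKR_IS_RUNNING_ON_WORKER") == "") {
    # On the driver the JVM backend is reachable; delegate to SparkFiles.get.
    callJStatic("org.apache.spark.SparkFiles", "get", as.character(fileName))
  } else {
    # On a worker there is no backend connection, so build the path from a
    # root directory exported by the worker runner (assumed variable name).
    file.path(Sys.getenv("SPARKR_SPARKFILES_ROOT_DIR"), as.character(fileName))
  }
}
{code}
With a guard like this, the {{spark.lapply}} reproduction below should return the worker-local file path instead of failing to connect.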
> SparkR spark.getSparkFiles fails on executor
> --------------------------------------------
>
> Key: SPARK-19925
> URL: https://issues.apache.org/jira/browse/SPARK-19925
> Project: Spark
> Issue Type: Bug
> Components: SparkR
> Affects Versions: 2.1.0
> Reporter: Yanbo Liang
> Priority: Critical
> Attachments: error-log
>
>
> SparkR function {{spark.getSparkFiles}} fails when it is called on executors. For example, the following R code fails (see the error log in the attachment):
> {code}
> spark.addFile("./README.md")
> seq <- seq(from = 1, to = 10, length.out = 5)
> train <- function(seq) {
>   path <- spark.getSparkFiles("README.md")
>   print(path)
> }
> spark.lapply(seq, train)
> {code}
> However, the same logic runs successfully with the Scala API:
> {code}
> import org.apache.spark.SparkFiles
> sc.addFile("./README.md”)
> sc.parallelize(Seq(0)).map{ _ => SparkFiles.get("README.md")}.first()
> {code}
> and also successfully with the Python API:
> {code}
> from pyspark import SparkFiles
> sc.addFile("./README.md")
> sc.parallelize(range(1)).map(lambda x: SparkFiles.get("README.md")).first()
> {code}