Posted to issues@spark.apache.org by "Sun Rui (JIRA)" <ji...@apache.org> on 2016/04/20 08:07:25 UTC
[jira] [Commented] (SPARK-14746) Support transformations in R source code for Dataset/DataFrame
[ https://issues.apache.org/jira/browse/SPARK-14746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15249335#comment-15249335 ]
Sun Rui commented on SPARK-14746:
---------------------------------
[~shivaram], [~davies], [~rxin] any comments on this idea?
> Support transformations in R source code for Dataset/DataFrame
> --------------------------------------------------------------
>
> Key: SPARK-14746
> URL: https://issues.apache.org/jira/browse/SPARK-14746
> Project: Spark
> Issue Type: New Feature
> Components: SparkR, SQL
> Reporter: Sun Rui
>
> There is a scenario, mentioned several times on the Spark mailing list, where users writing Scala/Java Spark applications (not SparkR) want to use R functions in some transformations. Typically this can be achieved by calling pipe() on an RDD, but pipe() has limitations. We could instead support applying an R function, supplied in source code form, to a Dataset/DataFrame (so SparkR would not be needed to serialize the R function).
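For context, the pipe() workaround mentioned above streams each row of an RDD to an external process as a line of text on stdin and reads result rows back from stdout. The following is a minimal sketch of that mechanism in plain Java, not Spark's actual implementation; the class and method names are illustrative, and "cat" stands in for an R script that would normally be run via Rscript.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the mechanism behind RDD.pipe(): rows are written
// to an external process's stdin, and the lines it prints on stdout become
// the output rows. This is not Spark code; "cat" stands in for an R script.
public class PipeSketch {
    public static List<String> pipeThrough(List<String> command, List<String> rows)
            throws Exception {
        Process proc = new ProcessBuilder(command).start();

        // Feed rows on a separate thread to avoid deadlock if the child's
        // output buffer fills before we finish writing its input.
        Thread writer = new Thread(() -> {
            try (PrintWriter out = new PrintWriter(proc.getOutputStream())) {
                rows.forEach(out::println);
            }
        });
        writer.start();

        List<String> result = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(proc.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                result.add(line);
            }
        }
        writer.join();
        proc.waitFor();
        return result;
    }
}
```

One widely noted limitation of this approach, which the proposal aims to avoid, is that the external process only sees untyped lines of text: the Dataset/DataFrame schema is lost and every row must be formatted and re-parsed at the process boundary.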
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org