Posted to issues@spark.apache.org by "sujeetjog (JIRA)" <ji...@apache.org> on 2016/06/30 08:16:10 UTC

[jira] [Commented] (SPARK-14746) Support transformations in R source code for Dataset/DataFrame

    [ https://issues.apache.org/jira/browse/SPARK-14746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15356713#comment-15356713 ] 

sujeetjog commented on SPARK-14746:
-----------------------------------

I believe running external scripts such as R code against DataFrames is a much-needed facility. For example, for algorithms that are not available in MLlib, invoking them from an R script would be a powerful feature when your app is Scala/Python based: you wouldn't have to use SparkR for this when most of your application code is in Scala/Python.
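To make the mechanism concrete, here is a minimal sketch of what RDD pipe() does under the hood: each record is written line-by-line to an external process's stdin, and the process's stdout lines become the output records. In a real job the command might be something like `Rscript my_algo.R` (a hypothetical script name); the sketch below substitutes `tr` so it is self-contained and runnable without R installed.

```python
import subprocess

def pipe_records(records, command):
    """Stream records through an external command, one record per line,
    mimicking the semantics of Spark's RDD.pipe()."""
    proc = subprocess.run(
        command,
        input="\n".join(records) + "\n",
        capture_output=True,
        text=True,
        check=True,
    )
    # Each stdout line from the external process becomes an output record.
    return proc.stdout.splitlines()

# In Spark this would be e.g. rdd.pipe("Rscript my_algo.R") (hypothetical
# script); `tr` stands in here as the external process.
out = pipe_records(["spark", "rdd", "pipe"], ["tr", "a-z", "A-Z"])
print(out)
```

This also illustrates the main limitation the issue alludes to: pipe() only exchanges opaque lines of text, so structured Dataset/DataFrame rows would have to be serialized and parsed by hand on both sides.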


> Support transformations in R source code for Dataset/DataFrame
> --------------------------------------------------------------
>
>                 Key: SPARK-14746
>                 URL: https://issues.apache.org/jira/browse/SPARK-14746
>             Project: Spark
>          Issue Type: New Feature
>          Components: SparkR, SQL
>            Reporter: Sun Rui
>
> There is actually a desired scenario, mentioned several times on the Spark mailing list, in which users writing Scala/Java Spark applications (not SparkR) want to use R functions in some transformations. Typically this can be achieved by calling pipe() on an RDD. However, pipe() has limitations, so we could support applying an R function, in source code form, to a Dataset/DataFrame. (Thus SparkR is not needed for serializing an R function.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org