Posted to issues@spark.apache.org by "Miao Wang (JIRA)" <ji...@apache.org> on 2017/10/05 23:13:00 UTC

[jira] [Commented] (SPARK-18131) Support returning Vector/Dense Vector from backend

    [ https://issues.apache.org/jira/browse/SPARK-18131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16193891#comment-16193891 ] 

Miao Wang commented on SPARK-18131:
-----------------------------------

[~felixcheung] We got stuck at the data type definitions. There is a dependency cycle involving the ML data types. A long time ago, I talked with [~smilegator] about this issue and he suggested moving the data types from the ML module to the parent module, but we never discussed the details and there was no design document either. We need some updates on this issue to add this support in R.

> Support returning Vector/Dense Vector from backend
> --------------------------------------------------
>
>                 Key: SPARK-18131
>                 URL: https://issues.apache.org/jira/browse/SPARK-18131
>             Project: Spark
>          Issue Type: New Feature
>          Components: SparkR
>            Reporter: Miao Wang
>
> For `spark.logit`, there is a `probabilityCol` column, which is a vector on the backend (Scala side). When we do `collect(select(df, "probabilityCol"))`, the backend returns the Java object handle (a memory address). We need to implement a method to convert a Vector/DenseVector column into an R vector that can be read in SparkR. This is a follow-up JIRA to adding `spark.logit`.
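The conversion the description asks for amounts to flattening the vector into primitive doubles before it crosses the JVM-to-R boundary, instead of handing back an opaque object reference. A minimal language-neutral sketch of that idea in Python (the `DenseVector` stand-in and `serialize_vector` helper here are hypothetical illustrations, not Spark APIs):

```python
# Hypothetical sketch: rather than returning an opaque handle to a backend
# DenseVector, flatten it to a list of primitive doubles that the frontend
# (e.g. SparkR's SerDe layer) can read as a native numeric vector.

class DenseVector:
    """Stand-in for a backend ML dense vector (hypothetical)."""
    def __init__(self, values):
        self.values = list(values)

def serialize_vector(vec):
    """Flatten a vector to a plain list of doubles for the frontend."""
    return [float(x) for x in vec.values]

probability = DenseVector([0.2, 0.8])
print(serialize_vector(probability))  # [0.2, 0.8]
```

The same shape applies per row of a collected column: each Vector/DenseVector cell would be serialized as a primitive double array, which R can then materialize as a numeric vector.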



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org