Posted to issues@spark.apache.org by "Weichen Xu (Jira)" <ji...@apache.org> on 2019/12/08 03:22:00 UTC

[jira] [Assigned] (SPARK-30154) PySpark UDF to convert MLlib vectors to dense arrays

     [ https://issues.apache.org/jira/browse/SPARK-30154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weichen Xu reassigned SPARK-30154:
----------------------------------

    Assignee: Weichen Xu

> PySpark UDF to convert MLlib vectors to dense arrays
> ----------------------------------------------------
>
>                 Key: SPARK-30154
>                 URL: https://issues.apache.org/jira/browse/SPARK-30154
>             Project: Spark
>          Issue Type: New Feature
>          Components: ML, MLlib, PySpark
>    Affects Versions: 3.0.0
>            Reporter: Xiangrui Meng
>            Assignee: Weichen Xu
>            Priority: Major
>
> If a PySpark user wants to convert MLlib sparse/dense vectors in a DataFrame into dense arrays, an efficient approach is to do the conversion in the JVM. However, that requires the PySpark user to write Scala code and register it as a UDF, which is often infeasible for a pure Python project.
> What we can do is to predefine those converters in Scala and expose them in PySpark, e.g.:
> {code}
> from pyspark.ml.functions import vector_to_dense_array
> from pyspark.sql.functions import col
>
> df.select(vector_to_dense_array(col("features")))
> {code}
> cc: [~weichenxu123]
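
For readers unfamiliar with the conversion the ticket proposes, here is a minimal sketch of the sparse-to-dense semantics in plain Python, with no Spark dependency. The field names (size, indices, values) mirror MLlib's SparseVector representation, but `sparse_to_dense` is an illustrative helper, not part of the Spark API; the ticket's point is that the real conversion should happen in the JVM rather than per-row in Python.

```python
def sparse_to_dense(size, indices, values):
    # Expand a sparse vector (size, nonzero indices, nonzero values)
    # into a dense list of floats, zero-filled elsewhere.
    out = [0.0] * size
    for i, v in zip(indices, values):
        out[i] = v
    return out

# A sparse vector of size 4 with nonzeros at positions 1 and 3:
sparse_to_dense(4, [1, 3], [3.0, 4.0])  # -> [0.0, 3.0, 0.0, 4.0]
```

Doing this per row in a Python UDF forces each vector to be serialized to the Python worker and back, which is exactly the overhead a JVM-side converter avoids.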



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org