Posted to issues@spark.apache.org by "Xiangrui Meng (JIRA)" <ji...@apache.org> on 2015/05/28 05:25:17 UTC
[jira] [Issue Comment Deleted] (SPARK-7903) PythonUDT shouldn't get serialized on the Scala side
[ https://issues.apache.org/jira/browse/SPARK-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Xiangrui Meng updated SPARK-7903:
---------------------------------
Comment: was deleted
(was: User 'mengxr' has created a pull request for this issue:
https://github.com/apache/spark/pull/6442)
> PythonUDT shouldn't get serialized on the Scala side
> ----------------------------------------------------
>
> Key: SPARK-7903
> URL: https://issues.apache.org/jira/browse/SPARK-7903
> Project: Spark
> Issue Type: Bug
> Components: PySpark, SQL
> Affects Versions: 1.4.0
> Reporter: Xiangrui Meng
> Assignee: Xiangrui Meng
>
> A round trip for a pure Python UDT should be: Python UDT -> Python SQL internal types -> Scala/Java SQL internal types -> transformation -> Scala/Java SQL internal types -> Python SQL internal types -> Python UDT. So serialization shouldn't be invoked on the Scala side when no Scala code is applied to the UDT.
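> For context, a pure Python UDT supplies Python-side serialize/deserialize hooks that map between the Python UDT and the Python SQL internal types named above, so for a pure Python UDT only the Python side should ever run them. The following is a minimal, hypothetical sketch against the pyspark.sql.types.UserDefinedType API ({{Point}} and {{PointUDT}} are illustrative names, not part of Spark; whether a scalaUDT() classmethod is also required for registration depends on the Spark version):
> {code}
> from pyspark.sql.types import (UserDefinedType, StructType, StructField,
>                                DoubleType)
>
>
> class PointUDT(UserDefinedType):
>     """Hypothetical pure Python UDT (no Scala counterpart)."""
>
>     @classmethod
>     def sqlType(cls):
>         # SQL internal representation: a struct of two doubles
>         return StructType([StructField("x", DoubleType(), False),
>                            StructField("y", DoubleType(), False)])
>
>     @classmethod
>     def module(cls):
>         return "__main__"
>
>     def serialize(self, obj):
>         # Python UDT -> Python SQL internal types
>         return (obj.x, obj.y)
>
>     def deserialize(self, datum):
>         # Python SQL internal types -> Python UDT
>         return Point(datum[0], datum[1])
>
>
> class Point(object):
>     """Illustrative value class carried in a DataFrame column."""
>     __UDT__ = PointUDT()
>
>     def __init__(self, x, y):
>         self.x = x
>         self.y = y
> {code}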
> Code (from [~rams]) to reproduce this bug:
> {code}
> from pyspark.mllib.linalg import SparseVector
> from pyspark.sql.functions import udf
> from pyspark.sql.types import IntegerType
>
> # sqlContext is the SQLContext pre-defined in the PySpark shell
> df = sqlContext.createDataFrame([(SparseVector(2, {0: 0.0}),)], ["features"])
> # Python UDF that reads the size attribute of the SparseVector
> sz = udf(lambda s: s.size, IntegerType())
> df.select(sz(df.features).alias("sz")).collect()
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)