Posted to issues@spark.apache.org by "Xiao Li (JIRA)" <ji...@apache.org> on 2018/01/10 10:30:00 UTC

[jira] [Updated] (SPARK-23026) Add RegisterUDF to PySpark

     [ https://issues.apache.org/jira/browse/SPARK-23026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Li updated SPARK-23026:
----------------------------
    Description: 
Add a new API for registering row-at-a-time or scalar vectorized UDFs. The registered UDFs can then be called from SQL statements.

{noformat}
>>> from pyspark.sql.types import IntegerType
>>> from pyspark.sql.functions import udf
>>> slen = udf(lambda s: len(s), IntegerType())
>>> _ = spark.udf.registerUDF("slen", slen)
>>> spark.sql("SELECT slen('test')").collect()
[Row(slen(test)=4)]

>>> import random
>>> from pyspark.sql.functions import udf
>>> from pyspark.sql.types import IntegerType
>>> random_udf = udf(lambda: random.randint(0, 100), IntegerType()).asNondeterministic()
>>> newRandom_udf = spark.catalog.registerUDF("random_udf", random_udf)
>>> spark.sql("SELECT random_udf()").collect()  
[Row(random_udf()=82)]
>>> spark.range(1).select(newRandom_udf()).collect()  
[Row(random_udf()=62)]

>>> from pyspark.sql.functions import pandas_udf, PandasUDFType
>>> @pandas_udf("integer", PandasUDFType.SCALAR)  
... def add_one(x):
...     return x + 1
...
>>> _ = spark.udf.registerUDF("add_one", add_one)  
>>> spark.sql("SELECT add_one(id) FROM range(10)").collect()  
{noformat}
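The shell transcripts above assume a running SparkSession. As a hedged sketch (plain Python, no Spark needed, illustrative function names only), the semantic difference between the two UDF flavors being registered is that a row-at-a-time UDF is invoked once per value, while a scalar vectorized (pandas) UDF is invoked once per batch of values:

```python
# Row-at-a-time UDF: the function sees one scalar value per call,
# as with the slen example above.
def slen(s):
    return len(s)

# Scalar vectorized UDF: the function sees a whole batch of values
# per call and returns a batch of the same length, as with add_one.
# (Real pandas UDFs receive pandas.Series; a list stands in here.)
def add_one_batch(batch):
    return [x + 1 for x in batch]

print(slen("test"))                    # 4
print(add_one_batch(list(range(10))))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Batching is what lets the vectorized form amortize serialization overhead between the JVM and the Python worker.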

  was:Add a new API for registering row-at-a-time or scalar vectorized UDFs. The registered UDFs can then be called from SQL statements.


> Add RegisterUDF to PySpark
> --------------------------
>
>                 Key: SPARK-23026
>                 URL: https://issues.apache.org/jira/browse/SPARK-23026
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.3.0
>            Reporter: Xiao Li
>            Assignee: Xiao Li
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
