Posted to issues@spark.apache.org by "Alberto (JIRA)" <ji...@apache.org> on 2019/01/15 08:57:00 UTC

[jira] [Comment Edited] (SPARK-26611) GROUPED_MAP pandas_udf crashing "Python worker exited unexpectedly"

    [ https://issues.apache.org/jira/browse/SPARK-26611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16742872#comment-16742872 ] 

Alberto edited comment on SPARK-26611 at 1/15/19 8:56 AM:
----------------------------------------------------------

Tried both with a Debian docker container running on Ubuntu 18.04 and on Databricks runtime 5.0.

The environment makes no difference.


was (Author: afumagalli):
Tried both with a debian docker container working on Ubuntu and on Databricks runtime 5.0.

The env is not having any effect.

> GROUPED_MAP pandas_udf crashing "Python worker exited unexpectedly"
> -------------------------------------------------------------------
>
>                 Key: SPARK-26611
>                 URL: https://issues.apache.org/jira/browse/SPARK-26611
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.4.0
>            Reporter: Alberto
>            Priority: Major
>              Labels: UDF, pandas, pyspark
>
> The following snippet crashes with error: org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
> {code:python}
> from pyspark.sql.functions import pandas_udf, PandasUDFType
> df = spark.createDataFrame([("1","2"),("2","2"),("2","3"), ("3","4"), ("5","6")], ("first","second"))
> @pandas_udf("first string, second string", PandasUDFType.GROUPED_MAP)
> def filter_pandas(df):
>     return df[df['first']=="9"]
> df.groupby("second").apply(filter_pandas).count()
> {code}
> while this one does not:
> {code:python}
> from pyspark.sql.functions import pandas_udf, PandasUDFType
> df = spark.createDataFrame([(1,2),(2,2),(2,3), (3,4), (5,6)], ("first","second"))
> @pandas_udf("first string, second string", PandasUDFType.GROUPED_MAP)
> def filter_pandas(df):
>     return df[df['first']==9]
> df.groupby("second").apply(filter_pandas).count()
> {code}
> and neither does this:
> {code:python}
> import pandas as pd
> from pyspark.sql.functions import pandas_udf, PandasUDFType
> df = spark.createDataFrame([("1","2"),("2","2"),("2","3"), ("3","4"), ("5","6")], ("first","second"))
> @pandas_udf("first string, second string", PandasUDFType.GROUPED_MAP)
> def filter_pandas(df):
>     df = df[df['first']=="9"]
>     if len(df)>0:
>         return df
>     else:
>         return pd.DataFrame({"first":[],"second":[]})
> df.groupby("second").apply(filter_pandas).count()
> {code}
>  
> See stacktrace [here|https://gist.github.com/afumagallireply/02d4c1355bc64a9d2129cdd6d0e9d9f3]
>  
> Using:
> Spark 2.4.0
> pandas 0.19.2
> PyArrow 0.8.0
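
The visible difference between the crashing and non-crashing repros above is the dtype of the (empty) DataFrame the UDF returns: string columns yield an empty object-dtype frame, int columns an empty int64 frame. A pandas-only sketch (no Spark required) of the dtypes in play; this only illustrates the difference between the snippets, it is not a confirmed root cause:

```python
import pandas as pd

# Mirrors the crashing snippet: string columns, so the empty
# filtered result has object dtype for both columns.
df_str = pd.DataFrame({"first": ["1", "2", "2", "3", "5"],
                       "second": ["2", "2", "3", "4", "6"]})
empty_str = df_str[df_str["first"] == "9"]
print(len(empty_str), [str(dt) for dt in empty_str.dtypes])

# Mirrors the non-crashing snippet: int columns, so the empty
# filtered result keeps int64 dtype.
df_int = pd.DataFrame({"first": [1, 2, 2, 3, 5],
                       "second": [2, 2, 3, 4, 6]})
empty_int = df_int[df_int["first"] == 9]
print(len(empty_int), [str(dt) for dt in empty_int.dtypes])
```

Both results are empty; only the column dtypes differ, which is what the grouped-map path hands to Arrow for serialization.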



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org