Posted to issues@spark.apache.org by "Alberto (JIRA)" <ji...@apache.org> on 2019/01/14 10:45:00 UTC
[jira] [Updated] (SPARK-26611) GROUPED_MAP pandas_udf crashing "Python worker exited unexpectedly"
[ https://issues.apache.org/jira/browse/SPARK-26611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Alberto updated SPARK-26611:
----------------------------
Description:
The following snippet crashes with error: org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
{code:python}
df = spark.createDataFrame([("1", "2"), ("2", "2"), ("2", "3"), ("3", "4"), ("5", "6")], ("first", "second"))

@pandas_udf("first string, second string", PandasUDFType.GROUPED_MAP)
def filter_pandas(df):
    return df[df['first'] == "9"]

df.groupby("second").apply(filter_pandas).count()
{code}
while this one does not:
{code:python}
df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (3, 4), (5, 6)], ("first", "second"))

@pandas_udf("first string, second string", PandasUDFType.GROUPED_MAP)
def filter_pandas(df):
    return df[df['first'] == 9]

df.groupby("second").apply(filter_pandas).count()
{code}
and neither does this:
{code:python}
df = spark.createDataFrame([("1", "2"), ("2", "2"), ("2", "3"), ("3", "4"), ("5", "6")], ("first", "second"))

@pandas_udf("first string, second string", PandasUDFType.GROUPED_MAP)
def filter_pandas(df):
    if len(df) > 0:
        return df
    else:
        return pd.DataFrame({"first": [], "second": []})

df.groupby("second").apply(filter_pandas).count()
{code}
See stacktrace [here|https://gist.github.com/afumagallireply/02d4c1355bc64a9d2129cdd6d0e9d9f3]
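For anyone trying to reproduce this, below is a self-contained version of the first (crashing) snippet. The imports, the SparkSession setup, and the app name are assumptions added for completeness (they are not part of the snippets above), and running it requires PyArrow to be installed:
{code:python}
# Self-contained reproduction sketch for the crashing case above.
# The imports and SparkSession creation are assumed; they are not in the original report.
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf, PandasUDFType

spark = SparkSession.builder.appName("spark-26611-repro").getOrCreate()

df = spark.createDataFrame(
    [("1", "2"), ("2", "2"), ("2", "3"), ("3", "4"), ("5", "6")],
    ("first", "second"))

@pandas_udf("first string, second string", PandasUDFType.GROUPED_MAP)
def filter_pandas(df):
    # Every group filters down to an empty pandas DataFrame with string columns;
    # this is the case reported to crash on 2.4.0.
    return df[df['first'] == "9"]

# Expected: a count of the (empty) filtered result; reported instead:
# org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
df.groupby("second").apply(filter_pandas).count()
{code}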
was:
The following snippet crashes with error: org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
{code:python}
df = spark.createDataFrame([("1", "2"), ("2", "2"), ("2", "3"), ("3", "4"), ("5", "6")], ("first", "second"))

@pandas_udf("first string, second string", PandasUDFType.GROUPED_MAP)
def filter_pandas(df):
    return df[df['first'] == "9"]

df.groupby("second").apply(filter_pandas).count()
{code}
while this one does not:
{code:python}
df = spark.createDataFrame([(1, 2), (2, 2), (2, 3), (3, 4), (5, 6)], ("first", "second"))

@pandas_udf("first string, second string", PandasUDFType.GROUPED_MAP)
def filter_pandas(df):
    return df[df['first'] == 9]

df.groupby("second").apply(filter_pandas).count()
{code}
and neither does this:
{code:python}
df = spark.createDataFrame([("1", "2"), ("2", "2"), ("2", "3"), ("3", "4"), ("5", "6")], ("first", "second"))

@pandas_udf("first string, second string", PandasUDFType.GROUPED_MAP)
def filter_pandas(df):
    if len(df) > 0:
        return df
    else:
        return df[df['first'] == "9"]

df.groupby("second").apply(filter_pandas).count()
{code}
See stacktrace [here|https://gist.github.com/afumagallireply/02d4c1355bc64a9d2129cdd6d0e9d9f3]
> GROUPED_MAP pandas_udf crashing "Python worker exited unexpectedly"
> -------------------------------------------------------------------
>
> Key: SPARK-26611
> URL: https://issues.apache.org/jira/browse/SPARK-26611
> Project: Spark
> Issue Type: Bug
> Components: PySpark, SQL
> Affects Versions: 2.4.0
> Reporter: Alberto
> Priority: Major
> Labels: UDF, pandas, pyspark
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org