Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2020/06/05 17:16:00 UTC

[jira] [Updated] (SPARK-31915) Remove projection that adds grouping keys in grouped and cogrouped pandas UDFs

     [ https://issues.apache.org/jira/browse/SPARK-31915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-31915:
---------------------------------
    Description: 
Currently, grouped and cogrouped pandas UDFs in Spark unnecessarily project the grouping keys. This results in a case-sensitivity resolution failure when the projection contains columns such as "Column" and "column", as they are considered different but ambiguous columns.

It fails as shown below:

{code}
from pyspark.sql.functions import *

df = spark.createDataFrame([[1, 1]], ["column", "Score"])

@pandas_udf("column integer, Score float", PandasUDFType.GROUPED_MAP)
def my_pandas_udf(pdf):
    return pdf.assign(Score=0.5)

df.groupby('COLUMN').apply(my_pandas_udf).show()
{code}

{code}
pyspark.sql.utils.AnalysisException: Reference 'COLUMN' is ambiguous, could be: COLUMN, COLUMN.;
{code}
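
The decorator form above uses the legacy PandasUDFType.GROUPED_MAP API; the same grouped-map UDF written with the applyInPandas API added in Spark 3.0 presumably fails in the same way, since both forms go through the same grouping-key projection. A minimal sketch, reusing the column names from the reproduction above:

{code}
import pandas as pd

df = spark.createDataFrame([[1, 1]], ["column", "Score"])

def my_pandas_udf(pdf: pd.DataFrame) -> pd.DataFrame:
    return pdf.assign(Score=0.5)

# Grouping by 'COLUMN' (different case) is expected to raise the same
# ambiguous-reference AnalysisException as the decorator form above.
df.groupby("COLUMN").applyInPandas(my_pandas_udf, "column integer, Score float").show()
{code}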

{code}
df1 = spark.createDataFrame([(1, 1)], ("column", "value"))
df2 = spark.createDataFrame([(1, 1)], ("column", "value"))

df1.groupby("COLUMN").cogroup(
    df2.groupby("COLUMN")
).applyInPandas(lambda r, l: r + l, df1.schema).show()
{code}

{code}
pyspark.sql.utils.AnalysisException: cannot resolve '`COLUMN`' given input columns: [COLUMN, COLUMN, value, value];;
'FlatMapCoGroupsInPandas ['COLUMN], ['COLUMN], <lambda>(column#9L, value#10L, column#13L, value#14L), [column#22L, value#23L]
:- Project [COLUMN#9L, column#9L, value#10L]
:  +- LogicalRDD [column#9L, value#10L], false
+- Project [COLUMN#13L, column#13L, value#14L]
   +- LogicalRDD [column#13L, value#14L], false
{code}
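
The Project nodes above each LogicalRDD in the plan dump are the grouping-key projection this ticket proposes removing. A rough way to observe that projection without hitting the error (a sketch only; the "id"/"value" names are illustrative, and the analyzed plan is assumed to keep the same shape as the dump above):

{code}
df1 = spark.createDataFrame([(1, 1)], ("id", "value"))
df2 = spark.createDataFrame([(1, 1)], ("id", "value"))

# With non-colliding names the query analyzes, and the analyzed plan printed by
# explain(True) should show the grouping key re-projected ahead of the child
# output, e.g. Project [id#..., id#..., value#...] over LogicalRDD [id#..., value#...].
df1.groupby("id").cogroup(
    df2.groupby("id")
).applyInPandas(lambda l, r: l, df1.schema).explain(True)
{code}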

  was:
Currently, grouped and cogrouped pandas UDFs in Spark unnecessarily project the grouping keys. This results in a case-sensitivity resolution failure when the projection contains columns such as "Column" and "column", as they are considered different but ambiguous columns.

It fails as shown below:

{code}
from pyspark.sql.functions import *

df = spark.createDataFrame([[1, 1]], ["column", "Score"])

@pandas_udf("column integer, Score float", PandasUDFType.GROUPED_MAP)
def my_pandas_udf(pdf):
    return pdf.assign(Score=0.5)

df.groupby('COLUMN').apply(my_pandas_udf).show()
{code}

{code}
pyspark.sql.utils.AnalysisException: Reference 'COLUMN' is ambiguous, could be: COLUMN, COLUMN.;
{code}

{code}
pyspark.sql.utils.AnalysisException: cannot resolve '`COLUMN`' given input columns: [COLUMN, COLUMN, value, value];;
'FlatMapCoGroupsInPandas ['COLUMN], ['COLUMN], <lambda>(column#9L, value#10L, column#13L, value#14L), [column#22L, value#23L]
:- Project [COLUMN#9L, column#9L, value#10L]
:  +- LogicalRDD [column#9L, value#10L], false
+- Project [COLUMN#13L, column#13L, value#14L]
   +- LogicalRDD [column#13L, value#14L], false
{code}


> Remove projection that adds grouping keys in grouped and cogrouped pandas UDFs
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-31915
>                 URL: https://issues.apache.org/jira/browse/SPARK-31915
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>    Affects Versions: 3.0.0
>            Reporter: Hyukjin Kwon
>            Priority: Major
>
> Currently, grouped and cogrouped pandas UDFs in Spark unnecessarily project the grouping keys. This results in a case-sensitivity resolution failure when the projection contains columns such as "Column" and "column", as they are considered different but ambiguous columns.
> It fails as shown below:
> {code}
> from pyspark.sql.functions import *
> df = spark.createDataFrame([[1, 1]], ["column", "Score"])
> @pandas_udf("column integer, Score float", PandasUDFType.GROUPED_MAP)
> def my_pandas_udf(pdf):
>     return pdf.assign(Score=0.5)
> df.groupby('COLUMN').apply(my_pandas_udf).show()
> {code}
> {code}
> pyspark.sql.utils.AnalysisException: Reference 'COLUMN' is ambiguous, could be: COLUMN, COLUMN.;
> {code}
> {code}
> df1 = spark.createDataFrame([(1, 1)], ("column", "value"))
> df2 = spark.createDataFrame([(1, 1)], ("column", "value"))
> df1.groupby("COLUMN").cogroup(
>     df2.groupby("COLUMN")
> ).applyInPandas(lambda r, l: r + l, df1.schema).show()
> {code}
> {code}
> pyspark.sql.utils.AnalysisException: cannot resolve '`COLUMN`' given input columns: [COLUMN, COLUMN, value, value];;
> 'FlatMapCoGroupsInPandas ['COLUMN], ['COLUMN], <lambda>(column#9L, value#10L, column#13L, value#14L), [column#22L, value#23L]
> :- Project [COLUMN#9L, column#9L, value#10L]
> :  +- LogicalRDD [column#9L, value#10L], false
> +- Project [COLUMN#13L, column#13L, value#14L]
>    +- LogicalRDD [column#13L, value#14L], false
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org