Posted to issues@spark.apache.org by "L. C. Hsieh (Jira)" <ji...@apache.org> on 2020/03/16 00:57:00 UTC

[jira] [Commented] (SPARK-31160) Resolved attribute(s) missing from ...

    [ https://issues.apache.org/jira/browse/SPARK-31160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17059841#comment-17059841 ] 

L. C. Hsieh commented on SPARK-31160:
-------------------------------------

This is not a bug. It happens because you pass the old attribute "df.age" to the udf: "fillna" outputs a new attribute aliased with the original name, so the old "df.age" reference no longer resolves against the new plan.

Btw, "plus_five" takes the column "age" (an integer, not a struct), so you need to write it as:
{code}
from pyspark.sql.functions import col

@pandas_udf("integer", PandasUDFType.SCALAR)
def plus_five(p):
    return p + 5

df2 = df.fillna({"age": 99}).withColumn("age_5", plus_five(col("age")))
{code}
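To see why "p.age" could never work even once the analysis error is avoided: a SCALAR pandas_udf hands the function the column as a plain pandas Series, not a row/struct, so attribute access by column name fails and arithmetic applies elementwise. A minimal pure-pandas sketch (the values are made up, standing in for the "age" column after fillna):
{code}
import pandas as pd

# What a SCALAR pandas_udf receives: a pandas Series of the column's
# values, not a row/struct. Values here are illustrative only.
p = pd.Series([99, 36, 34], name="age")

# The Series has no "age" attribute, which is why the original UDF
# body "return p.age + 5" is wrong.
assert not hasattr(p, "age")

# The Series *is* the column, so arithmetic applies elementwise,
# exactly what "return p + 5" does inside the udf.
result = p + 5
print(result.tolist())  # [104, 41, 39]
{code}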





> Resolved attribute(s) <attr> missing from ...
> ---------------------------------------------
>
>                 Key: SPARK-31160
>                 URL: https://issues.apache.org/jira/browse/SPARK-31160
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.4.5
>         Environment: macos: catalina
> java: 8
> I have this env var set:
> export ARROW_PRE_0_15_IPC_FORMAT=1
>            Reporter: Yves
>            Priority: Critical
>
> When fillna is used on a column and you then apply a udf on the same column, you get a `Resolved attribute(s) <attr> missing from ...` error.
> Example Code:
>  
> {code:java}
> from pyspark.sql import SparkSession
> import pandas as pd
> from pyspark.sql.functions import pandas_udf, PandasUDFType
> from pyspark.sql.types import *
> spark = SparkSession \
>  .builder \
>  .master("local[*]") \
>  .appName("bug") \
>  .getOrCreate()
> spark.sparkContext.setLogLevel("ERROR")
> spark.conf.set("spark.sql.execution.arrow.enabled", "true")
> df = spark.createDataFrame(
>  [
>  (1, "Joey", "Richard", None),
>  (2, "Stephane", "Boudreau", 36),
>  (2, "Rejean", "Lapierre", 34)
>  ],
>  ["id", "first_name", "last", "age"]
> )
> @pandas_udf("integer", PandasUDFType.SCALAR)
> def plus_five(p):
>  return p.age + 5
> df2 = df.fillna({"age": 99}).withColumn("age_5", plus_five(df.age))
> df2.show()
> {code}
>  
>  
> Error:
>  
> {code:java}
> Traceback (most recent call last):
>   File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
>   File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
> py4j.protocol.Py4JJavaError: An error occurred while calling o70.withColumn.
> : org.apache.spark.sql.AnalysisException: Resolved attribute(s) age#3L missing from id#0L,first_name#1,last#2,age#12L in operator !Project [id#0L, first_name#1, last#2, age#12L, plus_five(age#3L) AS age_5#18]. Attribute(s) with the same name appear in the operation: age. Please check if the right attribute(s) are used.;;
> !Project [id#0L, first_name#1, last#2, age#12L, plus_five(age#3L) AS age_5#18]
> +- Project [id#0L, first_name#1, last#2, coalesce(age#3L, cast(99 as bigint)) AS age#12L]
>    +- LogicalRDD [id#0L, first_name#1, last#2, age#3L], false
>  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:43)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:95)
>  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:369)
>  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:86)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126)
>  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:86)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:95)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:108)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
>  at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:58)
>  at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:56)
>  at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
>  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:78)
>  at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:3412)
>  at org.apache.spark.sql.Dataset.select(Dataset.scala:1340)
>  at org.apache.spark.sql.Dataset.withColumns(Dataset.scala:2258)
>  at org.apache.spark.sql.Dataset.withColumn(Dataset.scala:2225)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>  at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>  at py4j.Gateway.invoke(Gateway.java:282)
>  at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>  at py4j.commands.CallCommand.execute(CallCommand.java:79)
>  at py4j.GatewayConnection.run(GatewayConnection.java:238)
>  at java.lang.Thread.run(Thread.java:748)
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/Users/yvesrichard/Documents/projects/premise/spark-play/bug.py", line 29, in <module>
>     df2 = df.fillna({"age": 99}).withColumn("age_5", plus_five(df.age))
>   File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/lib/pyspark.zip/pyspark/sql/dataframe.py", line 1997, in withColumn
>   File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
>   File "/usr/local/Cellar/apache-spark/2.4.5/libexec/python/lib/pyspark.zip/pyspark/sql/utils.py", line 69, in deco
> pyspark.sql.utils.AnalysisException: 'Resolved attribute(s) age#3L missing from id#0L,first_name#1,last#2,age#12L in operator !Project [id#0L, first_name#1, last#2, age#12L, plus_five(age#3L) AS age_5#18]. Attribute(s) with the same name appear in the operation: age. Please check if the right attribute(s) are used.;;\n!Project [id#0L, first_name#1, last#2, age#12L, plus_five(age#3L) AS age_5#18]\n+- Project [id#0L, first_name#1, last#2, coalesce(age#3L, cast(99 as bigint)) AS age#12L]\n   +- LogicalRDD [id#0L, first_name#1, last#2, age#3L], false\n'
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org