Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2020/05/25 07:17:00 UTC
[jira] [Resolved] (SPARK-31773) getting the Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
[ https://issues.apache.org/jira/browse/SPARK-31773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-31773.
----------------------------------
Resolution: Duplicate
> getting the Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-31773
> URL: https://issues.apache.org/jira/browse/SPARK-31773
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 2.2.0
> Environment: spark 2.2
> Reporter: Pankaj Tiwari
> Priority: Major
>
> I am loading an Excel file with about 90 columns, and some of the column names contain special characters such as @, %, -> and '.'. The following works fine:
> sourceDataSet.select(columnSeq).except(targetDataset.select(columnSeq))
> but as soon as I run
> sourceDataSet.select(columnSeq).except(targetDataset.select(columnSeq)).count()
> it fails with an error like:
> org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
> Exchange SinglePartition
> +- *HashAggregate(keys=[], functions=[partial_count(1)], output=[count#26596L])
> +- *HashAggregate(keys=columns name
>
>
> Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree:column namet#14050
> at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
> at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:88)
> at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:87)
> at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
> at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
> at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
> at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:266)
> at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:256)
> at org.apache.spark.sql.catalyst.expressions.BindReferences$.bindReference(BoundAttribute.scala:87)
> at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$40.apply(HashAggregateExec.scala:703)
> at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$40.apply(HashAggregateExec.scala:703)
> at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
> at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
> at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
> at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
> at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
> at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
> at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
> at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
> at scala.collection.immutable.Stream.foreach(Stream.scala:595)
> at scala.collection.TraversableOnce$class.count(TraversableOnce.scala:115)
> at scala.collection.AbstractTraversable.count(Traversable.scala:104)
> at org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection$.createCode(GenerateUnsafeProjection.scala:312)
> at org.apache.spark.sql.execution.aggregate.HashAggregateExec.doConsumeWithKeys(HashAggregateExec.scala:702)
> at org.apache.spark.sql.execution.aggregate.HashAggregateExec.doConsume(HashAggregateExec.scala:156)
> at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:155)
> at org.apache.spark.sql.execution.ProjectExec.consume(basicPhysicalOperators.scala:36)
>
> Caused by: java.lang.RuntimeException: Couldn't find (one of the column names, followed by the attribute list)
> at scala.sys.package$.error(package.scala:27)
> at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:94)
> at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:88)
> at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
>
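A common workaround for this class of binding failure, assuming the special characters in the column names are the trigger, is to rename the columns to plain identifiers before running except()/count() (in Spark you can do this with Dataset.withColumnRenamed, or by quoting names with backticks in SQL expressions). A minimal, illustrative sketch of such a sanitizer is below; it is plain Python rather than the reporter's Scala, and the function name is hypothetical, not part of any Spark API:

```python
import re

def sanitize_column(name: str) -> str:
    """Replace any character that is not alphanumeric or an underscore.

    Hypothetical helper: characters like @, %, -, > and '.' in column
    names are what the reported stack trace suggests trips up attribute
    binding, so we map them all to '_'.
    """
    return re.sub(r"[^0-9A-Za-z_]", "_", name)

# Example column names of the kind described in the report
columns = ["id", "price@usd", "growth-%", "ratio->next"]
safe = [sanitize_column(c) for c in columns]
# "price@usd" -> "price_usd", "growth-%" -> "growth__", "ratio->next" -> "ratio__next"
```

With the sanitized names applied on both the source and target datasets (e.g. via repeated withColumnRenamed calls) before select().except().count(), the aggregation no longer has to bind attributes whose names contain special characters. Note that this mapping is lossy: distinct originals such as "a@b" and "a%b" both become "a_b", so a real implementation would need to check for collisions.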
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org