Posted to issues@spark.apache.org by "Saisai Shao (JIRA)" <ji...@apache.org> on 2014/03/30 08:59:14 UTC

[jira] [Created] (SPARK-1354) Fail to resolve attribute when query with table name as a qualifier in SQLContext

Saisai Shao created SPARK-1354:
----------------------------------

             Summary: Fail to resolve attribute when query with table name as a qualifier in SQLContext
                 Key: SPARK-1354
                 URL: https://issues.apache.org/jira/browse/SPARK-1354
             Project: Apache Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 1.0.0
            Reporter: Saisai Shao


For SQLContext with SimpleCatalog, the table name is not registered as a qualifier on the attributes, so a query like "SELECT * FROM records JOIN records1 ON records.key = records1.key" fails. The logical plan cannot resolve "records.key" because the qualifier "records" is missing. The physical plan shows as below:

    Project [*]
     Filter ('records.key = 'records1.key)
      CartesianProduct
       ExistingRdd [key#0,value#1], MappedRDD[2] at map at basicOperators.scala:124
       ParquetTableScan [key#2,value#3], (ParquetRelation ParquetFile, pair.parquet), None

And the exception shows:

org.apache.spark.sql.catalyst.errors.package$TreeNodeException: No function to evaluate expression. type: UnresolvedAttribute, tree: 'records.key
        at org.apache.spark.sql.catalyst.expressions.Expression.apply(Expression.scala:54)
        at org.apache.spark.sql.catalyst.expressions.Equals.apply(predicates.scala:112)
        at org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:43)
        at org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:43)
        at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:643)
        at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:643)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:936)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:936)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
        at org.apache.spark.scheduler.Task.run(Task.scala:52)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:211)
        at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:46)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:176)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
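
A minimal reproduction along the lines of the report might look like the sketch below, written against the 1.0-era SQLContext API. The case class, table data, and registration calls are illustrative assumptions, not taken from the reporter's actual job:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical record type; the real schema in the report is (key, value).
case class Record(key: Int, value: String)

object Spark1354Repro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local").setAppName("SPARK-1354"))
    val sqlContext = new SQLContext(sc)
    import sqlContext._

    val rdd = sc.parallelize(1 to 10).map(i => Record(i, "val_" + i))
    // Register the same data under two names so the join needs qualifiers.
    rdd.registerAsTable("records")
    rdd.registerAsTable("records1")

    // Expected to fail with the TreeNodeException above: SimpleCatalog does
    // not attach "records"/"records1" as qualifiers, so 'records.key stays
    // an UnresolvedAttribute at execution time.
    sql("SELECT * FROM records JOIN records1 ON records.key = records1.key")
      .collect()

    sc.stop()
  }
}
```

Qualifying the columns is only an issue here because both relations expose the same attribute names; an unqualified "key" would be ambiguous, which is why the failure to register the table name as a qualifier makes such joins unwritable in SQLContext.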


--
This message was sent by Atlassian JIRA
(v6.2#6252)