Posted to user@spark.apache.org by Eric Tanner <er...@justenough.com> on 2014/12/02 22:00:46 UTC

Unresolved attributes

I am running
spark 1.1.0
DSE cassandra 4.6

when I try to run the following sql statement:

val sstring = "Select * from seasonality where customer_id = " +
customer_id + " and cat_id = " + seg + " and period_desc = " + cDate
println("sstring = "+sstring)
val rrCheckRdd = sqlContext.sql(sstring).collect().array

I get the following error:

Segment Code = 205
cDate=Year_2011_Month_0_Week_0_Site
reRunCheck seg = 205
sstring = Select * from seasonality where customer_id = 6 and cat_id = 205
and period_desc = Year_2011_Month_0_Week_0_Site
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved
attributes: *, tree:
Project [*]
 Filter (((customer_id#144 = 6) && (CAST(cat_id#148, DoubleType) =
CAST(205, DoubleType))) && (period_desc#150 =
'Year_2011_Month_0_Week_0_Site))
  Subquery seasonality
   SparkLogicalPlan (ExistingRdd
[customer_id#144,period_id#145,season_id#146,cat_lvl#147,cat_id#148,season_avg#149,period_desc#150,analyzed_date#151,sum_amt#152,total_count#153,process_id#154],
MapPartitionsRDD[36] at mapPartitions at basicOperators.scala:208)

        at
org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$$anonfun$apply$1.applyOrElse(Analyzer.scala:72)
        at
org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$$anonfun$apply$1.applyOrElse(Analyzer.scala:70)


It looks like an internal join error, or possibly something else.  I need a
workaround if possible, or a quick patch.

Any help is appreciated.

Eric

-- 

*Eric Tanner*
Big Data Developer

15440 Laguna Canyon, Suite 100
Irvine, CA 92618

Cell:  +1 (951) 313-9274
Tel:   +1 (949) 706-0400
Skype: eric.tanner.je
Web:   www.justenough.com

Re: Unresolved attributes

Posted by Michael Armbrust <mi...@databricks.com>.
A little bit about how to read this output.  Resolution occurs from the
bottom up, and when you see a tick (') it means that a field is unresolved.
So here it looks like Year_2011_Month_0_Week_0_Site is being treated as a
column name and is missing from your RDD.  (We are working on more obvious
error messages.)
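
As a rough, untested sketch (assuming period_desc is a string column),
wrapping the value in single quotes when the statement is built lets the
parser read it as a string literal instead of a column reference:

// hypothetical rewrite of the statement from the question; all names are taken from there
val sstring = "SELECT * FROM seasonality WHERE customer_id = " + customer_id +
  " AND cat_id = " + seg +
  " AND period_desc = '" + cDate + "'"   // note the added single quotes
val rrCheckRdd = sqlContext.sql(sstring).collect()

The same applies to any other string-typed value compared against in the
WHERE clause.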

Michael

On Tue, Dec 2, 2014 at 1:00 PM, Eric Tanner <er...@justenough.com>
wrote:

> I am running
> spark 1.1.0
> DSE cassandra 4.6
>
> when I try to run the following sql statement:
>
> val sstring = "Select * from seasonality where customer_id = " + customer_id + " and cat_id = " + seg + " and period_desc = " + cDate
> println("sstring = "+sstring)
> val rrCheckRdd = sqlContext.sql(sstring).collect().array
>
> I get the following error:
>
> Segment Code = 205
> cDate=Year_2011_Month_0_Week_0_Site
> reRunCheck seg = 205
> sstring = Select * from seasonality where customer_id = 6 and cat_id = 205
> and period_desc = Year_2011_Month_0_Week_0_Site
> org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved
> attributes: *, tree:
> Project [*]
>  Filter (((customer_id#144 = 6) && (CAST(cat_id#148, DoubleType) =
> CAST(205, DoubleType))) && (period_desc#150 =
> 'Year_2011_Month_0_Week_0_Site))
>   Subquery seasonality
>    SparkLogicalPlan (ExistingRdd
> [customer_id#144,period_id#145,season_id#146,cat_lvl#147,cat_id#148,season_avg#149,period_desc#150,analyzed_date#151,sum_amt#152,total_count#153,process_id#154],
> MapPartitionsRDD[36] at mapPartitions at basicOperators.scala:208)
>
>         at
> org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$$anonfun$apply$1.applyOrElse(Analyzer.scala:72)
>         at
> org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$$anonfun$apply$1.applyOrElse(Analyzer.scala:70)
>
>
> It looks like an internal join error, or possibly something else.  I need
> a workaround if possible, or a quick patch.
>
> Any help is appreciated.
>
> Eric
>