Posted to user@spark.apache.org by Jerry Lam <ch...@gmail.com> on 2014/07/10 17:15:20 UTC

Potential bugs in SparkSQL

Hi Spark developers,

I have the following HQL queries for which Spark throws exceptions of this kind:
14/07/10 15:07:55 INFO TaskSetManager: Loss was due to org.apache.spark.TaskKilledException [duplicate 17]
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0:736 failed 4 times, most recent failure: Exception failure in TID 167 on host etl2-node05: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: No function to evaluate expression. type: UnresolvedAttribute, tree: 'm.id
        org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute.eval(unresolved.scala:59)
        org.apache.spark.sql.catalyst.expressions.Equals.eval(predicates.scala:151)
        org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:52)
        org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:52)
        scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
        scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        scala.collection.Iterator$class.foreach(Iterator.scala:727)
        scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
        scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
        scala.collection.AbstractIterator.to(Iterator.scala:1157)
        scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
        scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
        scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
        scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
        org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:717)
        org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:717)
        org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1080)
        org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1080)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
        org.apache.spark.scheduler.Task.run(Task.scala:51)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
        java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        java.lang.Thread.run(Thread.java:662)

The HQL looks like this (I trimmed it down to the essentials to
demonstrate the potential bug; the actual join is more complex and
irrelevant to it):

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
import hiveContext._
hql("USE test")
hql("select id from m").registerAsTable("m")
hql("select s.id from m join s on (s.id=m.id)").collect().foreach(println)

Apparently, Spark is unable to resolve the m.id in "(s.id=m.id)". If I
change it to:
hql("select m_id from m").registerAsTable("m")
hql("select s.id from m join s on (s.id=m_id)").collect().foreach(println)

it works. Am I doing something wrong, or is this a bug in Spark SQL?
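
For anyone who wants to reproduce this: I had to redact the real schemas, so
here is a minimal hypothetical setup (a sketch; any two Hive tables with an
integer id column should presumably behave the same way):

hql("USE test")
hql("CREATE TABLE IF NOT EXISTS m (id INT)")  // hypothetical stand-in for the real table
hql("CREATE TABLE IF NOT EXISTS s (id INT)")  // hypothetical stand-in for the real table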

Best Regards,

Jerry

Re: Potential bugs in SparkSQL

Posted by Yin Huai <hu...@gmail.com>.
I have opened https://issues.apache.org/jira/browse/SPARK-2474 to track
this bug. I will also explain my understanding of the root cause.

Re: Potential bugs in SparkSQL

Posted by Michael Armbrust <mi...@databricks.com>.
Hmm, yeah, it looks like the table name is not getting applied to the
attributes of m. You can work around this by rewriting your query as:

hql("select s.id from (SELECT * FROM m) m join s on (s.id=m.id) order by s.id")

This explicitly gives the alias m to the attributes of that table. You can
also open a JIRA and we can look into the root cause in more detail.
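
Put in context, the full session with this rewrite would be (a sketch; I
have not run it against your tables, since the real ones were redacted):

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
import hiveContext._
hql("USE test")
hql("select id from m").registerAsTable("m")
// Wrapping m in a subquery re-applies the alias m to its attributes,
// so m.id now resolves inside the join condition:
hql("select s.id from (SELECT * FROM m) m join s on (s.id=m.id) order by s.id")
  .collect().foreach(println)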

Michael

Re: Potential bugs in SparkSQL

Posted by Jerry Lam <ch...@gmail.com>.
Hi Michael,

I got the log you asked for. Note that I manually edited the table name and
the field names to hide some sensitive information.

== Logical Plan ==
Project ['s.id]
 Join Inner, Some((id#106 = 'm.id))
  Project [id#96 AS id#62]
   MetastoreRelation test, m, None
  MetastoreRelation test, s, Some(s)

== Optimized Logical Plan ==
Project ['s.id]
 Join Inner, Some((id#106 = 'm.id))
  Project []
   MetastoreRelation test, m, None
  Project [id#106]
   MetastoreRelation test, s, Some(s)

== Physical Plan ==
Project ['s.id]
 Filter (id#106:0 = 'm.id)
  CartesianProduct
   HiveTableScan [], (MetastoreRelation test, m, None), None
   HiveTableScan [id#106], (MetastoreRelation test, s, Some(s)), None
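
If I read the output correctly, the leading quote in 'm.id marks an
attribute the analyzer never resolved, and it survives all the way into the
physical Filter, which would explain the runtime TreeNodeException. For
completeness, the plans above came from a call like the one below
(queryExecution also exposes the individual planning stages, if the field
names I see in the 1.0 sources are right):

val query = hql("select s.id from m join s on (s.id=m.id)")
println(query.queryExecution)                // prints the combined plans above
println(query.queryExecution.optimizedPlan)  // or inspect one stage at a time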

Best Regards,

Jerry

Re: Potential bugs in SparkSQL

Posted by Michael Armbrust <mi...@databricks.com>.
Hi Jerry,

Thanks for reporting this.  It would be helpful if you could provide the
output of the following command:

println(hql("select s.id from m join s on (s.id=m_id)").queryExecution)

Michael

Re: Potential bugs in SparkSQL

Posted by Stephen Boesch <ja...@gmail.com>.
Hi Jerry,
To add to your question:

The following does work (on master); notice that the registerAsTable call is
commented out (I took the liberty of adding an "order by" clause):

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
import hiveContext._
hql("USE test")
// hql("select id from m").registerAsTable("m")
val res = hql("select s.id from m join s on (s.id=m.id) order by s.id").collect
res.foreach(println)

res: Array[org.apache.spark.sql.Row] = Array([1], [2], [3], [4], [5], [6],
[7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19],
[20])


But when the table is registered I see a different error than you reported:

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
import hiveContext._
hql("USE test")
hql("select id from m").registerAsTable("m")
val res = hql("select s.id from m join s on (s.id=m.id) order by s.id").collect
res.foreach(println)



14/07/10 13:43:23 INFO ParseDriver: Parsing command: select s.id from m join s on (s.id=m.id) order by s.id
14/07/10 13:43:23 INFO ParseDriver: Parse Completed
14/07/10 13:43:23 INFO Analyzer: Max iterations (2) reached for batch MultiInstanceRelations
14/07/10 13:43:23 INFO Analyzer: Max iterations (2) reached for batch CaseInsensitiveAttributeReferences
java.lang.StackOverflowError
at scala.collection.AbstractIterator.seq(Iterator.scala:1157)
at scala.collection.AbstractIterator.seq(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:212)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:168)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:183)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:212)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:170)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:183)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:212)

I am interested in this and will look further.
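
One experiment that might help isolate it (a sketch, untested): register the
derived table under a name that does not collide with the underlying Hive
table, and see whether either failure mode goes away:

hql("select id from m").registerAsTable("m2")
val res2 = hql("select s.id from m2 join s on (s.id=m2.id) order by s.id").collect
res2.foreach(println)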