Posted to issues@spark.apache.org by "Shixiong Zhu (JIRA)" <ji...@apache.org> on 2016/03/04 00:43:18 UTC

[jira] [Resolved] (SPARK-13584) ContinuousQueryManagerSuite floods the logs with garbage

     [ https://issues.apache.org/jira/browse/SPARK-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shixiong Zhu resolved SPARK-13584.
----------------------------------
       Resolution: Fixed
         Assignee: Shixiong Zhu
    Fix Version/s: 2.0.0

> ContinuousQueryManagerSuite floods the logs with garbage
> --------------------------------------------------------
>
>                 Key: SPARK-13584
>                 URL: https://issues.apache.org/jira/browse/SPARK-13584
>             Project: Spark
>          Issue Type: Test
>          Components: Tests
>            Reporter: Shixiong Zhu
>            Assignee: Shixiong Zhu
>             Fix For: 2.0.0
>
>
> We should clean up the following output:
> {code}
> [info] ContinuousQueryManagerSuite:
> 16:30:20.473 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 0.0 (TID 1)
> java.lang.ArithmeticException: / by zero
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply$mcII$sp(ContinuousQueryManagerSuite.scala:303)
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply(ContinuousQueryManagerSuite.scala:303)
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply(ContinuousQueryManagerSuite.scala:303)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
> 	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:308)
> 	at scala.collection.AbstractIterator.to(Iterator.scala:1194)
> 	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:300)
> 	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1194)
> 	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:287)
> 	at scala.collection.AbstractIterator.toArray(Iterator.scala:1194)
> 	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
> 	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1802)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1802)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:81)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:744)
> 16:30:20.506 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, localhost): java.lang.ArithmeticException: / by zero
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply$mcII$sp(ContinuousQueryManagerSuite.scala:303)
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply(ContinuousQueryManagerSuite.scala:303)
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply(ContinuousQueryManagerSuite.scala:303)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
> 	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:308)
> 	at scala.collection.AbstractIterator.to(Iterator.scala:1194)
> 	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:300)
> 	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1194)
> 	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:287)
> 	at scala.collection.AbstractIterator.toArray(Iterator.scala:1194)
> 	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
> 	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1802)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1802)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:81)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:744)
> 16:30:20.508 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 0.0 failed 1 times; aborting job
> 16:30:20.523 ERROR org.apache.spark.sql.execution.streaming.StreamExecution: Query query-7 terminated with error
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 1 times, most recent failure: Lost task 1.0 in stage 0.0 (TID 1, localhost): java.lang.ArithmeticException: / by zero
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply$mcII$sp(ContinuousQueryManagerSuite.scala:303)
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply(ContinuousQueryManagerSuite.scala:303)
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply(ContinuousQueryManagerSuite.scala:303)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
> 	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:308)
> 	at scala.collection.AbstractIterator.to(Iterator.scala:1194)
> 	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:300)
> 	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1194)
> 	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:287)
> 	at scala.collection.AbstractIterator.toArray(Iterator.scala:1194)
> 	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
> 	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1802)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1802)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:81)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:744)
> Driver stacktrace:
> 	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1452)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1440)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1439)
> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> 	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1439)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
> 	at scala.Option.foreach(Option.scala:257)
> 	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1661)
> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1620)
> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1609)
> 	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> 	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:623)
> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1776)
> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1789)
> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1802)
> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1816)
> 	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:847)
> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> 	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
> 	at org.apache.spark.rdd.RDD.withScope(RDD.scala:323)
> 	at org.apache.spark.rdd.RDD.collect(RDD.scala:846)
> 	at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:223)
> 	at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:231)
> 	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1522)
> 	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1522)
> 	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
> 	at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1771)
> 	at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1521)
> 	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$collect$1.apply(DataFrame.scala:1526)
> 	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$collect$1.apply(DataFrame.scala:1526)
> 	at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:1784)
> 	at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1526)
> 	at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1503)
> 	at org.apache.spark.sql.execution.streaming.MemorySink.addBatch(memory.scala:117)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution.attemptBatch(StreamExecution.scala:215)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:123)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:74)
> Caused by: java.lang.ArithmeticException: / by zero
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply$mcII$sp(ContinuousQueryManagerSuite.scala:303)
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply(ContinuousQueryManagerSuite.scala:303)
> 	at org.apache.spark.sql.streaming.ContinuousQueryManagerSuite$$anonfun$6.apply(ContinuousQueryManagerSuite.scala:303)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
> 	at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
> 	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:308)
> 	at scala.collection.AbstractIterator.to(Iterator.scala:1194)
> 	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:300)
> 	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1194)
> 	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:287)
> 	at scala.collection.AbstractIterator.toArray(Iterator.scala:1194)
> 	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
> 	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:847)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1802)
> 	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1802)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:69)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:81)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:744)
> {code}
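>
> One way to do the cleanup is to raise the log level around the deliberately failing query, so its expected stack traces never reach the build output. Below is a minimal sketch, assuming a log4j 1.x root logger as Spark's test harness uses; {{quietly}} and {{triggerFailingQuery}} are hypothetical names for illustration, not necessarily how the actual fix is implemented:
> {code}
> import org.apache.log4j.{Level, LogManager}
>
> // Temporarily silence the root logger while running `body`, then
> // restore the previous level even if `body` throws. Exceptions still
> // propagate, so the test can assert on them without log noise.
> def quietly[T](body: => T): T = {
>   val rootLogger = LogManager.getRootLogger
>   val previousLevel = rootLogger.getLevel
>   rootLogger.setLevel(Level.OFF)
>   try body finally rootLogger.setLevel(previousLevel)
> }
>
> // Usage (hypothetical): wrap the step that intentionally fails,
> // e.g. the query that hits the / by zero above.
> quietly {
>   triggerFailingQuery()
> }
> {code}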



