Posted to issues@spark.apache.org by "Imran Rashid (JIRA)" <ji...@apache.org> on 2018/04/27 19:01:00 UTC

[jira] [Commented] (SPARK-23894) Flaky Test: BucketedWriteWithoutHiveSupportSuite

    [ https://issues.apache.org/jira/browse/SPARK-23894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456919#comment-16456919 ] 

Imran Rashid commented on SPARK-23894:
--------------------------------------

One thing I've noticed from looking at more instances of this: normally we don't see any log lines from {{SharedState}} coming from the executor threads.  In passing runs we see this:

{noformat}
09:37:38.203 pool-1-thread-1-ScalaTest-running-ParquetQuerySuite INFO SharedState: Warehouse path is 'file:/Users/irashid/github/pub/spark/sql/core/spark-warehouse/'.
{noformat}

but in failures, we see

{noformat}
23:37:56.728 Executor task launch worker for task 48 INFO SharedState: Warehouse path is 'file:/home/jenkins/workspace/spark-branch-2.3-test-sbt-hadoop-2.6/sql/core/spark-warehouse'.
{noformat}

(note the thread name).  I don't understand why this happens yet, nor can I reproduce it locally.
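
To make the lazy-initialization angle concrete, here is a minimal standalone sketch (toy names, not Spark's actual code) of the pattern I suspect: a Scala {{lazy val}} is initialized by whichever thread touches it first, so if nothing on the test's main thread forces {{sharedState}} before a task runs, an "Executor task launch worker" thread ends up doing the initialization. That thread then logs the warehouse path, and blows up if the listener bus was already stopped by the previous suite's teardown.

{noformat}
// Toy sketch only; the class and field names are made up, not Spark's real API.
class FakeListenerBus {
  @volatile var stopped = false
  def addToStatusQueue(): Unit = {
    if (stopped) throw new IllegalStateException("LiveListenerBus is stopped.")
  }
}

class FakeSession(bus: FakeListenerBus) {
  // Analogous to SparkSession.sharedState: initialized lazily, on whichever
  // thread happens to read it first.
  lazy val sharedState: String = {
    println(s"${Thread.currentThread().getName} INFO SharedState: Warehouse path is '...'")
    bus.addToStatusQueue()
    "shared-state"
  }
}

object LazyInitRepro {
  def main(args: Array[String]): Unit = {
    val bus = new FakeListenerBus
    val session = new FakeSession(bus)
    bus.stopped = true  // the previous suite already tore down its context

    // First access happens on a worker-style thread, mimicking the failing runs;
    // the IllegalStateException surfaces as an uncaught exception on that thread.
    val worker = new Thread(new Runnable {
      override def run(): Unit = session.sharedState
    }, "Executor task launch worker for task 48")
    worker.start()
    worker.join()
  }
}
{noformat}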

> Flaky Test:  BucketedWriteWithoutHiveSupportSuite
> -------------------------------------------------
>
>                 Key: SPARK-23894
>                 URL: https://issues.apache.org/jira/browse/SPARK-23894
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Imran Rashid
>            Priority: Minor
>         Attachments: unit-tests.log
>
>
> Flaky test observed here: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/88991/
> I'll attach a snippet of the unit-tests log, for this suite and the preceding one.  Here's a snippet of the exception.
> {noformat}
> 08:36:34.694 Executor task launch worker for task 436 ERROR Executor: Exception in task 0.0 in stage 402.0 (TID 436)
> java.lang.IllegalStateException: LiveListenerBus is stopped.
>         at org.apache.spark.scheduler.LiveListenerBus.addToQueue(LiveListenerBus.scala:97)
>         at org.apache.spark.scheduler.LiveListenerBus.addToStatusQueue(LiveListenerBus.scala:80)
>         at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:93)
>         at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:117)
>         at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:117)
>         at scala.Option.getOrElse(Option.scala:121)
>         at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:117)
>         at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:116)
>         at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:286)
>         at org.apache.spark.sql.test.TestSparkSession.sessionState$lzycompute(TestSQLContext.scala:42)
>         at org.apache.spark.sql.test.TestSparkSession.sessionState(TestSQLContext.scala:41)
>         at org.apache.spark.sql.SparkSession$$anonfun$1$$anonfun$apply$1.apply(SparkSession.scala:92)
>         at org.apache.spark.sql.SparkSession$$anonfun$1$$anonfun$apply$1.apply(SparkSession.scala:92)
>         at scala.Option.map(Option.scala:146)
>         at org.apache.spark.sql.SparkSession$$anonfun$1.apply(SparkSession.scala:92)
>         at org.apache.spark.sql.SparkSession$$anonfun$1.apply(SparkSession.scala:91)
>         at org.apache.spark.sql.internal.SQLConf$.get(SQLConf.scala:110)
>         at org.apache.spark.sql.types.DataType.sameType(DataType.scala:84)
>         at org.apache.spark.sql.catalyst.analysis.TypeCoercion$$anonfun$1.apply(TypeCoercion.scala:105)
>         at org.apache.spark.sql.catalyst.analysis.TypeCoercion$$anonfun$1.apply(TypeCoercion.scala:86)
> {noformat}
> I doubt this is actually caused by BucketedWriteWithoutHiveSupportSuite itself.  I think it has more to do with {{SparkSession}}'s lazy evaluation of {{SharedState}} doing something funny with the way we set up the test SparkContext etc. ... though I don't really understand it yet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
