Posted to issues@spark.apache.org by "shahid (JIRA)" <ji...@apache.org> on 2018/10/24 14:55:00 UTC

[jira] [Commented] (SPARK-24140) Flaky test: KMeansClusterSuite.task size should be small in both training and prediction

    [ https://issues.apache.org/jira/browse/SPARK-24140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16662389#comment-16662389 ] 

shahid commented on SPARK-24140:
--------------------------------

Hi [~dongjoon], the link to the report doesn't seem to be working. Is there a way to see the full logs? Thanks

> Flaky test: KMeansClusterSuite.task size should be small in both training and prediction
> ----------------------------------------------------------------------------------------
>
>                 Key: SPARK-24140
>                 URL: https://issues.apache.org/jira/browse/SPARK-24140
>             Project: Spark
>          Issue Type: Bug
>          Components: ML
>    Affects Versions: 2.4.0
>            Reporter: Dongjoon Hyun
>            Priority: Major
>
> https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-hadoop-2.7/4765/
> {code}
> Error Message
> Job 0 cancelled because SparkContext was shut down
> Stacktrace
>       org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:837)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:835)
>       at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
>       at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:835)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1841)
>       at org.apache.spark.util.EventLoop.stop(EventLoop.scala:83)
>       at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1754)
>       at org.apache.spark.SparkContext$$anonfun$stop$8.apply$mcV$sp(SparkContext.scala:1927)
>       at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1303)
>       at org.apache.spark.SparkContext.stop(SparkContext.scala:1926)
>       at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:574)
>       at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
>       at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1907)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
>       at scala.util.Try$.apply(Try.scala:192)
>       at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
>       at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
>       at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
>       at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:2030)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:2051)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:2070)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:2095)
>       at org.apache.spark.rdd.RDD.count(RDD.scala:1162)
>       at org.apache.spark.rdd.RDD$$anonfun$takeSample$1.apply(RDD.scala:571)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>       at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
>       at org.apache.spark.rdd.RDD.takeSample(RDD.scala:560)
>       at org.apache.spark.mllib.clustering.KMeans.initRandom(KMeans.scala:360)
> {code}
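
The frames above show where the failure lands: KMeans.initRandom calls RDD.takeSample, which first runs a count job; if the SparkContext is stopped by a shutdown hook while that job is in flight, the scheduler cancels it with exactly the error above. A minimal sketch of that code path, assuming a local master and illustrative data (neither is taken from the suite itself; the real test lives in KMeansClusterSuite and checks serialized task sizes):

{code}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Hypothetical standalone driver exercising the code path in the stack trace.
object InitRandomSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("kmeans-init-random")
    val sc = new SparkContext(conf)

    // Illustrative points; the suite generates its own data.
    val points = sc.parallelize(Seq(
      Vectors.dense(0.0, 0.0), Vectors.dense(1.0, 1.0),
      Vectors.dense(9.0, 8.0), Vectors.dense(8.0, 9.0)
    )).cache()

    // initializationMode = "random" takes the KMeans.initRandom branch, which
    // calls RDD.takeSample -> RDD.count -> SparkContext.runJob -- the frames
    // quoted above. If sc is shut down while that job is still running, Spark
    // cancels it with "Job 0 cancelled because SparkContext was shut down".
    val model = KMeans.train(points, 2, 5, KMeans.RANDOM)
    println(model.clusterCenters.mkString(", "))

    sc.stop()
  }
}
{code}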



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org