Posted to issues@spark.apache.org by "Marcelo Vanzin (JIRA)" <ji...@apache.org> on 2018/02/21 19:35:00 UTC

[jira] [Commented] (SPARK-23481) The job page shows wrong stages when some of stages are evicted

    [ https://issues.apache.org/jira/browse/SPARK-23481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371901#comment-16371901 ] 

Marcelo Vanzin commented on SPARK-23481:
----------------------------------------

[~zsxwing] you assigned this to yourself, so I assume you're working on a patch?

Anyway, this seems to fix it:

{code}
diff --git a/core/src/main/scala/org/apache/spark/status/AppStatusStore.scala b/core/src/main/scala/org/apache/spark/status/AppStatusStore.scala
index efc2853..3990f9c 100644
--- a/core/src/main/scala/org/apache/spark/status/AppStatusStore.scala
+++ b/core/src/main/scala/org/apache/spark/status/AppStatusStore.scala
@@ -96,7 +96,7 @@ private[spark] class AppStatusStore(
 
   def lastStageAttempt(stageId: Int): v1.StageData = {
     val it = store.view(classOf[StageDataWrapper]).index("stageId").reverse().first(stageId)
-      .closeableIterator()
+      .last(stageId).closeableIterator()
     try {
       if (it.hasNext()) {
         it.next().info
{code}
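
To illustrate why the extra {{last(stageId)}} call matters: in a reversed index view, {{first(stageId)}} only positions the iterator at the first key <= stageId, so if all attempts for that stage were evicted, the iterator silently falls through to a *lower* stage id. Adding {{last(stageId)}} bounds the view so iteration stops at the requested key. A minimal sketch modeling that behavior with plain Scala collections (the {{StageEntry}} type and method names here are illustrative, not Spark's real KVStore API):

{code}
// Illustrative stand-in for StageDataWrapper's (stageId, attempt) index key.
case class StageEntry(stageId: Int, attempt: Int)

object IndexSketch {
  // Index entries sorted ascending by stageId; stage 4 was evicted.
  val entries = Seq(StageEntry(3, 0), StageEntry(5, 0), StageEntry(5, 1))

  // Models reverse().first(id): descending view starting at first key <= id.
  def reversedFrom(id: Int): Iterator[StageEntry] =
    entries.reverse.iterator.dropWhile(_.stageId > id)

  // Models adding last(id): the view is also bounded from below,
  // so it never yields entries for a different stage.
  def reversedFromTo(id: Int): Iterator[StageEntry] =
    reversedFrom(id).takeWhile(_.stageId == id)

  def main(args: Array[String]): Unit = {
    // Unbounded view wrongly yields stage 3 when stage 4 is gone.
    println(reversedFrom(4).toList)
    // Bounded view is empty, so lastStageAttempt can fail fast instead.
    println(reversedFromTo(4).toList)
  }
}
{code}

With the bound in place, {{lastStageAttempt}} sees an empty iterator for an evicted stage rather than returning data for the wrong stage.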


> The job page shows wrong stages when some of stages are evicted
> ---------------------------------------------------------------
>
>                 Key: SPARK-23481
>                 URL: https://issues.apache.org/jira/browse/SPARK-23481
>             Project: Spark
>          Issue Type: Bug
>          Components: Web UI
>    Affects Versions: 2.3.0
>            Reporter: Shixiong Zhu
>            Assignee: Shixiong Zhu
>            Priority: Blocker
>         Attachments: Screen Shot 2018-02-21 at 12.39.46 AM.png
>
>
> Run "bin/spark-shell --conf spark.ui.retainedJobs=10 --conf spark.ui.retainedStages=10", run the following code, then open the page for job 19; it will show the wrong stage ids:
> {code}
> val rdd = sc.parallelize(0 to 100, 100).repartition(10).cache()
> (1 to 20).foreach { i =>
>    rdd.repartition(10).count()
> }
> {code}
> Please see the attached screenshots.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
