Posted to issues@spark.apache.org by "shahid (JIRA)" <ji...@apache.org> on 2018/11/27 17:57:00 UTC

[jira] [Created] (SPARK-26186) In-progress applications last updated within the cleaning interval are getting removed during log cleaning

shahid created SPARK-26186:
------------------------------

             Summary: In-progress applications last updated within the cleaning interval are getting removed during log cleaning
                 Key: SPARK-26186
                 URL: https://issues.apache.org/jira/browse/SPARK-26186
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.4.0, 3.0.0
            Reporter: shahid


In-progress applications whose last updated time is within the cleaning interval are getting deleted.

 

Added a UT to test the scenario.

{code:scala}
test("should not clean inprogress application with lastUpdated time less than the maxTime") {
    // File modification times and the cleaner's max age, all in milliseconds.
    val firstFileModifiedTime = TimeUnit.DAYS.toMillis(1)
    val secondFileModifiedTime = TimeUnit.DAYS.toMillis(6)
    val maxAge = TimeUnit.DAYS.toMillis(7)
    val clock = new ManualClock(0)
    val provider = new FsHistoryProvider(
      createTestConf().set("spark.history.fs.cleaner.maxAge", s"${maxAge}ms"), clock)
    val log = newLogFile("inProgressApp1", None, inProgress = true)
    writeFile(log, true, None,
      SparkListenerApplicationStart(
        "inProgressApp1", Some("inProgressApp1"), 3L, "test", Some("attempt1"))
    )
    clock.setTime(firstFileModifiedTime)
    provider.checkForLogs()
    writeFile(log, true, None,
      SparkListenerApplicationStart(
        "inProgressApp1", Some("inProgressApp1"), 3L, "test", Some("attempt1")),
      SparkListenerJobStart(0, 1L, Nil, null)
    )

    clock.setTime(secondFileModifiedTime)
    provider.checkForLogs()
    // Jump to day 10: the cleaner's cutoff is now day 3, but the in-progress
    // log is written again below, so it stays inside the retention window.
    clock.setTime(TimeUnit.DAYS.toMillis(10))
    writeFile(log, true, None,
      SparkListenerApplicationStart(
        "inProgressApp1", Some("inProgressApp1"), 3L, "test", Some("attempt1")),
      SparkListenerJobStart(0, 1L, Nil, null),
      SparkListenerJobEnd(0, 1L, JobSucceeded)
    )
    provider.checkForLogs()
    // This should not trigger any cleanup
    updateAndCheck(provider) { list =>
      list.size should be(1)
    }
  }
{code}
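The rule the test pins down can be stated on its own: a log whose last-modified time is at or after `now - maxAge` must survive cleaning, whether or not the application is still in progress. A minimal sketch of that check follows; the names `LogRetention` and `shouldRetain` are illustrative only, not Spark's actual FsHistoryProvider internals:

```scala
import java.util.concurrent.TimeUnit

// Hypothetical sketch of the retention rule asserted by the test above.
object LogRetention {
  /** A log must be kept if its last-modified time is at or after the cutoff. */
  def shouldRetain(lastModifiedMs: Long, nowMs: Long, maxAgeMs: Long): Boolean = {
    val cutoff = nowMs - maxAgeMs
    lastModifiedMs >= cutoff
  }
}

object Demo extends App {
  val maxAge = TimeUnit.DAYS.toMillis(7)
  val now    = TimeUnit.DAYS.toMillis(10)
  // The in-progress log was last written at day 10, well inside the window.
  println(LogRetention.shouldRetain(TimeUnit.DAYS.toMillis(10), now, maxAge)) // true: keep
  // A log untouched since day 1 is older than the day-3 cutoff and may be cleaned.
  println(LogRetention.shouldRetain(TimeUnit.DAYS.toMillis(1), now, maxAge))  // false
}
```

With the timeline in the test (maxAge of 7 days, clock at day 10, log last written at day 10), this predicate returns true, matching the expectation that `checkForLogs()` leaves the single in-progress application listed.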




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org