Posted to issues@spark.apache.org by "Linhong Liu (Jira)" <ji...@apache.org> on 2020/09/16 05:19:00 UTC

[jira] [Created] (SPARK-32898) totalExecutorRunTimeMs is too big

Linhong Liu created SPARK-32898:
-----------------------------------

             Summary: totalExecutorRunTimeMs is too big
                 Key: SPARK-32898
                 URL: https://issues.apache.org/jira/browse/SPARK-32898
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 3.0.1
            Reporter: Linhong Liu


This might be caused by executorRunTimeMs being calculated incorrectly in Executor.scala:
the function collectAccumulatorsAndResetStatusOnFailure(taskStartTimeNs) can be called before taskStartTimeNs has been set (i.e. while it is still 0).

As of now, in the master branch, the problematic code is here:

[https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/executor/Executor.scala#L470]

 

An exception can be thrown before this line is reached, and the catch branch still updates the metric.
However, the query shows as SUCCESSful in QPL. Maybe this task is speculative; I am not sure.
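To illustrate the failure mode, here is a minimal, hedged sketch (the class and method names are illustrative, not Spark's actual code): if taskStartTimeNs is still at its default of 0 when the failure path computes the run time, System.nanoTime() - 0 yields the JVM's entire monotonic-clock offset, producing an absurdly large "run time":

```java
// Illustrative sketch only -- not Spark's real Executor code.
public class RunTimeBugSketch {
    // In the bug scenario, the task fails before this is ever assigned.
    static long taskStartTimeNs = 0L;

    // Mirrors the problematic pattern: run time derived from a start
    // timestamp that may still be zero.
    static long buggyRunTimeMs(long startNs) {
        return (System.nanoTime() - startNs) / 1_000_000L;
    }

    // A guarded variant: report 0 when the task never actually started.
    static long guardedRunTimeMs(long startNs) {
        return startNs == 0L ? 0L : (System.nanoTime() - startNs) / 1_000_000L;
    }

    public static void main(String[] args) {
        // With startNs == 0, the "run time" is just the raw nanoTime offset
        // converted to ms, which can be arbitrarily large.
        System.out.println("buggy=" + buggyRunTimeMs(taskStartTimeNs)
            + " guarded=" + guardedRunTimeMs(taskStartTimeNs));
    }
}
```

One possible fix along these lines is to skip (or zero) the metric update when taskStartTimeNs has not been set yet.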

 

submissionTime in LiveExecutionData may have a similar problem:

[https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListener.scala#L449]
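The listener-side issue would follow the same pattern. A hedged sketch (field and method names are illustrative; LiveExecutionData's real fields differ): if submissionTime is read at its default before the event that sets it has arrived, any duration derived from it is wildly inflated:

```java
// Illustrative sketch only -- not Spark's real SQLAppStatusListener code.
public class SubmissionTimeSketch {
    // Default value before the submission event has been processed.
    static long submissionTime = 0L;

    // Problematic pattern: duration computed from an unset submission time.
    static long buggyDurationMs(long completionTime) {
        return completionTime - submissionTime;
    }

    // Guarded variant: report 0 until submissionTime is actually known.
    static long guardedDurationMs(long completionTime) {
        return submissionTime <= 0L ? 0L : completionTime - submissionTime;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // With submissionTime == 0, the "duration" equals the full epoch
        // timestamp in ms (decades), while the guarded variant reports 0.
        System.out.println("buggy=" + buggyDurationMs(now)
            + " guarded=" + guardedDurationMs(now));
    }
}
```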

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org