Posted to issues@spark.apache.org by "Sen Fang (JIRA)" <ji...@apache.org> on 2015/09/10 20:57:47 UTC

[jira] [Created] (SPARK-10543) Peak Execution Memory Quantile should be Per-task Basis

Sen Fang created SPARK-10543:
--------------------------------

             Summary: Peak Execution Memory Quantile should be Per-task Basis
                 Key: SPARK-10543
                 URL: https://issues.apache.org/jira/browse/SPARK-10543
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 1.5.0
            Reporter: Sen Fang
            Priority: Minor


Currently the Peak Execution Memory quantiles appear to be cumulative across tasks rather than computed on a per-task basis. For example, in one of my jobs I have seen a value of 2TB in the quantile metric, yet each individual task shows less than 1GB in the bottom table.

[~andrewor14] In your PR https://github.com/apache/spark/pull/7770, the screenshot shows a Max Peak Execution Memory of 792.5KB, while the bottom table shows about 50KB per task (unless your workload is heavily skewed).

The fix seems straightforward: use the `update` rather than the `value` from the accumulable. I'm happy to provide a PR if people agree this is the right behavior.
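
To illustrate the distinction, here is a minimal, self-contained Scala sketch. It is not the actual Spark UI code; the `AccumulableInfo` case class below is a simplified stand-in where `value` holds the running total across all finished tasks and `update` holds a single task's own contribution, which is what the quantile summary should be built from.

    // Simplified stand-in for the accumulable reported per task:
    // `value` is the running total so far, `update` is this task's delta.
    case class AccumulableInfo(value: Long, update: Option[Long])

    object PeakMemoryQuantiles {
      def main(args: Array[String]): Unit = {
        // Three tasks, each peaking at ~100 bytes of execution memory.
        // `value` grows with every task because the accumulator is shared.
        val tasks = Seq(
          AccumulableInfo(value = 100, update = Some(100)),
          AccumulableInfo(value = 200, update = Some(100)),
          AccumulableInfo(value = 300, update = Some(100))
        )

        val cumulative = tasks.map(_.value.toDouble)                // current behavior
        val perTask    = tasks.map(_.update.getOrElse(0L).toDouble) // proposed fix

        println(s"max from value:  ${cumulative.max}") // 300.0 -- inflated
        println(s"max from update: ${perTask.max}")    // 100.0 -- true per-task peak
      }
    }

With many tasks in a stage, the `value`-based maximum approaches the sum over all tasks, which would explain the 2TB-vs-1GB discrepancy described above.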


