Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2015/09/12 01:07:45 UTC
[jira] [Assigned] (SPARK-10543) Peak Execution Memory Quantile should be Per-task Basis
[ https://issues.apache.org/jira/browse/SPARK-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-10543:
------------------------------------
Assignee: Apache Spark
> Peak Execution Memory Quantile should be Per-task Basis
> -------------------------------------------------------
>
> Key: SPARK-10543
> URL: https://issues.apache.org/jira/browse/SPARK-10543
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.5.0
> Reporter: Sen Fang
> Assignee: Apache Spark
> Priority: Minor
>
> Currently the Peak Execution Memory quantiles appear to be cumulative rather than per-task. For example, I have seen a value of 2TB for the quantile metric in one of my jobs, while each individual task in the bottom table shows less than 1GB.
> [~andrewor14] In your PR https://github.com/apache/spark/pull/7770, the screenshot shows a Max Peak Execution Memory of 792.5KB, while the bottom table shows roughly 50KB per task (unless your workload is skewed).
> The fix seems straightforward: use the `update` rather than the `value` from the accumulable; a sketch follows below. I'm happy to provide a PR if people agree this is the right behavior.
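> A minimal sketch of what I have in mind, against the quantile computation in StagePage.scala in 1.5 (exact field and helper names are from memory and may differ slightly):
>
>   // current behavior: reads the accumulable's running total, which is cumulative across tasks
>   val peakExecutionMemory = validTasks.map { case TaskUIData(info, _, _) =>
>     info.accumulables
>       .find(_.name == InternalAccumulator.PEAK_EXECUTION_MEMORY)
>       .map(_.value.toLong)            // cumulative `value` for the whole stage
>       .getOrElse(0L)
>       .toDouble
>   }
>
>   // proposed behavior: read the per-task `update`, so each sample is that task's own peak
>   val peakExecutionMemory = validTasks.map { case TaskUIData(info, _, _) =>
>     info.accumulables
>       .find(_.name == InternalAccumulator.PEAK_EXECUTION_MEMORY)
>       .flatMap(_.update)              // `update` is the per-task delta, an Option[String]
>       .map(_.toLong)
>       .getOrElse(0L)
>       .toDouble
>   }
>
> With the per-task values feeding the quantile calculation, the Min/Median/Max row should line up with what the task table at the bottom of the stage page reports.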
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org