Posted to issues@spark.apache.org by "Edwina Lu (JIRA)" <ji...@apache.org> on 2018/02/14 23:41:00 UTC

[jira] [Commented] (SPARK-23429) Add executor memory metrics to heartbeat and expose in executors REST API

    [ https://issues.apache.org/jira/browse/SPARK-23429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364967#comment-16364967 ] 

Edwina Lu commented on SPARK-23429:
-----------------------------------

Subtask of SPARK-23206

> Add executor memory metrics to heartbeat and expose in executors REST API
> -------------------------------------------------------------------------
>
>                 Key: SPARK-23429
>                 URL: https://issues.apache.org/jira/browse/SPARK-23429
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.2.1
>            Reporter: Edwina Lu
>            Priority: Major
>
> Add new executor-level memory metrics (jvmUsedMemory, executionMemory, storageMemory, and unifiedMemory), and expose them via the executors REST API. This information will provide insight into how executor and driver JVM memory is used across the different memory regions, and can help determine good values for spark.executor.memory, spark.driver.memory, spark.memory.fraction, and spark.memory.storageFraction.
> Add an ExecutorMetrics class with jvmUsedMemory, executionMemory, and storageMemory, to track memory usage at the executor level (see the sketch below). Executors will send the new ExecutorMetrics to the driver as part of the Heartbeat. A heartbeat will be added for the driver as well, to collect these metrics for the driver.
> Modify the EventLoggingListener to log ExecutorMetricsUpdate events when a memory metric reaches a new peak value for an executor and stage (see the peak-tracking sketch below). Only the ExecutorMetrics will be logged, not the TaskMetrics, to minimize additional logging. Analysis of a set of sample applications showed that this approach increases the size of the Spark history log by 0.25%.
> Modify the AppStatusListener to collect snapshots of peak values for each memory metric (see the snapshot sketch below). Each snapshot has the time, jvmUsedMemory, executionMemory, storageMemory, and the list of active stages.
> Add the new memory metrics (snapshots of peak values for each memory metric) to the executors REST API.
> This is a subtask for SPARK-23206. Please refer to the design doc for that ticket for more details.
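
For illustration, here is a minimal Scala sketch of the proposed ExecutorMetrics container. The metric names come from the description above; the timestamp field and the maxWith helper are assumptions added for the peak-tracking sketch further below, not part of the proposal.

    // Hypothetical sketch of the proposed executor-level metrics container.
    // Metric names follow the ticket; types and helpers are assumptions.
    case class ExecutorMetrics(
        timestamp: Long,       // when the sample was taken, in epoch ms (assumed)
        jvmUsedMemory: Long,   // JVM heap currently used, in bytes
        executionMemory: Long, // execution memory pool usage, in bytes
        storageMemory: Long) { // storage memory pool usage, in bytes

      // Unified memory is taken here as the sum of the execution and storage pools.
      def unifiedMemory: Long = executionMemory + storageMemory

      // Element-wise maximum of two samples, useful for tracking peaks on the driver.
      def maxWith(other: ExecutorMetrics): ExecutorMetrics =
        ExecutorMetrics(
          math.max(timestamp, other.timestamp),
          math.max(jvmUsedMemory, other.jvmUsedMemory),
          math.max(executionMemory, other.executionMemory),
          math.max(storageMemory, other.storageMemory))
    }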

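Building on that sketch, the "log only on a new peak" idea behind the EventLoggingListener change could look roughly like the following. The class and method names are illustrative, not actual Spark internals.

    import scala.collection.mutable

    // Hypothetical per-executor peak tracker. The driver would consult it on each
    // heartbeat to decide whether an ExecutorMetricsUpdate event is worth logging.
    class PeakMetricsTracker {
      private val peaks = mutable.Map.empty[String, ExecutorMetrics]

      // Merge a heartbeat sample and report whether any metric set a new peak.
      def updateAndCheck(executorId: String, sample: ExecutorMetrics): Boolean =
        peaks.get(executorId) match {
          case None =>
            peaks(executorId) = sample
            true
          case Some(prev) =>
            val isNewPeak =
              sample.jvmUsedMemory > prev.jvmUsedMemory ||
              sample.executionMemory > prev.executionMemory ||
              sample.storageMemory > prev.storageMemory ||
              sample.unifiedMemory > prev.unifiedMemory
            if (isNewPeak) peaks(executorId) = prev.maxWith(sample)
            isNewPeak
        }
    }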

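Finally, a rough sketch of the per-executor peak snapshot that the AppStatusListener change would retain and the executors REST API would expose. The MemoryMetricsSnapshot name and the inclusion of unifiedMemory (derived from the other two pools) are assumptions.

    // Hypothetical shape of a peak-memory snapshot kept per executor and
    // surfaced through the executors REST API.
    case class MemoryMetricsSnapshot(
        time: Long,             // when the peak was observed, epoch ms
        jvmUsedMemory: Long,
        executionMemory: Long,
        storageMemory: Long,
        unifiedMemory: Long,
        activeStages: Seq[Int]) // ids of stages running when the peak was hit

Such a snapshot could then be serialized into the per-executor summary that the existing /applications/[app-id]/executors REST endpoint already returns.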

