Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/02/20 20:07:37 UTC

[GitHub] squito commented on a change in pull request #23767: [SPARK-26329][CORE][WIP] Faster polling of executor memory metrics.
URL: https://github.com/apache/spark/pull/23767#discussion_r258640210
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/executor/Executor.scala
 ##########
 @@ -524,10 +605,19 @@ private[spark] class Executor(
         executorSource.METRIC_DISK_BYTES_SPILLED.inc(task.metrics.diskBytesSpilled)
         executorSource.METRIC_MEMORY_BYTES_SPILLED.inc(task.metrics.memoryBytesSpilled)
 
+        def getMetricPeaks(): Array[Long] = {
+          val currentPeaks = taskMetricPeaks.get(taskId)
 
 Review comment:
   The value at task start isn't very useful, as the task itself hasn't done anything yet.  I've wondered about grabbing the value again at task end (ideally the task would have cleaned up all of its memory use, but e.g. if more off-heap memory was requested by netty, it's unlikely to have been reclaimed by the OS already).
   
   But I'm less certain about this -- I feel better about just reporting 0s for now, and leaving it as something we continue to consider.
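   To make the trade-off concrete, here is a minimal, self-contained sketch of the pattern under discussion: a poller records per-task metric peaks into a shared map, and at task end the executor reports the recorded peaks, falling back to zeros if the task finished before it was ever sampled. All names here (`PeakTrackingSketch`, `updatePeaks`, `numMetrics`) are illustrative assumptions, not the actual code in this PR.

```scala
import java.util.concurrent.ConcurrentHashMap

object PeakTrackingSketch {
  // Number of polled metrics; illustrative only.
  val numMetrics = 4

  // taskId -> running peak observed for each metric.
  val taskMetricPeaks = new ConcurrentHashMap[Long, Array[Long]]()

  // Called by a polling thread with a fresh sample for a running task;
  // keeps the element-wise maximum seen so far.
  def updatePeaks(taskId: Long, sample: Array[Long]): Unit = {
    val peaks = taskMetricPeaks.computeIfAbsent(
      taskId, _ => Array.fill(numMetrics)(0L))
    peaks.synchronized {
      var i = 0
      while (i < numMetrics) {
        if (sample(i) > peaks(i)) peaks(i) = sample(i)
        i += 1
      }
    }
  }

  // At task end: return the recorded peaks, or all zeros if the poller
  // never sampled this task -- the "just report 0s" behaviour above.
  def getMetricPeaks(taskId: Long): Array[Long] =
    Option(taskMetricPeaks.remove(taskId))
      .getOrElse(Array.fill(numMetrics)(0L))
}
```

   A short-lived task can complete between two polls, so the zero fallback is what makes the reported values well-defined rather than racy.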

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org