Posted to issues@spark.apache.org by "assia ydroudj (JIRA)" <ji...@apache.org> on 2018/01/10 14:06:01 UTC

[jira] [Commented] (SPARK-21157) Report Total Memory Used by Spark Executors

    [ https://issues.apache.org/jira/browse/SPARK-21157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320293#comment-16320293 ] 

assia ydroudj commented on SPARK-21157:
---------------------------------------



I'm a beginner with Apache Spark and have installed a prebuilt distribution of Spark with Hadoop. I want to measure memory usage while running the PageRank example that ships with Spark. My cluster runs in standalone mode with 1 master and 4 workers (virtual machines).

I have tried external tools like Ganglia and Graphite, but they report memory usage at the system or resource level, which is too coarse. What I need is to track the behavior of Spark's memory (storage and execution) while the algorithm runs, that is, the memory usage of a specific Spark application ID. Is there any way to write it to a text file for further analysis? Please help me with this. Thanks
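
One approach that fits a standalone cluster is Spark's monitoring REST API: while an application is running, its driver serves JSON under http://<driver-host>:4040/api/v1, and the /executors endpoint includes per-executor storage-memory figures. Below is a minimal Python 3 sketch that polls this endpoint and appends the numbers to a text file; the driver host name, output file name, and sampling interval are placeholders to adapt.

    import json
    import time
    import urllib.request

    DRIVER = "http://driver-host:4040"  # replace with your driver's host

    def fetch(path):
        # The Spark UI serves the monitoring REST API under /api/v1.
        with urllib.request.urlopen(DRIVER + "/api/v1" + path) as resp:
            return json.load(resp)

    app_id = fetch("/applications")[0]["id"]  # first (e.g. only) running app

    with open("memory-usage.txt", "a") as out:
        for _ in range(120):  # sample for ~10 minutes at 5 s per sample
            for ex in fetch("/applications/%s/executors" % app_id):
                # memoryUsed: storage memory held by cached blocks (bytes)
                # maxMemory:  storage memory available to the executor (bytes)
                out.write("%d\t%s\t%d\t%d\n" % (
                    time.time(), ex["id"], ex["memoryUsed"], ex["maxMemory"]))
            out.flush()
            time.sleep(5)

The same endpoints are served by the history server (port 18080 by default) after the application finishes, provided event logging is enabled.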


> Report Total Memory Used by Spark Executors
> -------------------------------------------
>
>                 Key: SPARK-21157
>                 URL: https://issues.apache.org/jira/browse/SPARK-21157
>             Project: Spark
>          Issue Type: Improvement
>          Components: Input/Output
>    Affects Versions: 2.1.1
>            Reporter: Jose Soltren
>         Attachments: TotalMemoryReportingDesignDoc.pdf
>
>
> Building on some of the core ideas of SPARK-9103, this JIRA proposes tracking total memory used by Spark executors, and a means of broadcasting, aggregating, and reporting memory usage data in the Spark UI.
> Here, "total memory used" refers to memory usage that is visible outside of Spark, to an external observer such as YARN, Mesos, or the operating system. The goal of this enhancement is to give Spark users more information about how Spark clusters are using memory. Total memory will include non-Spark JVM memory and all off-heap memory.
> Please consult the attached design document for further details.
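
As a concrete illustration of memory "visible outside of Spark" (an illustration only, not the mechanism proposed in the design doc): on Linux, the figure an external observer such as YARN's memory monitor or the operating system sees for an executor JVM is its resident set size, readable from /proc. A hypothetical sketch in Python:

    def rss_bytes(pid):
        # Resident set size as the kernel reports it, i.e. what an external
        # observer sees; the relevant line looks like "VmRSS:   123456 kB".
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1]) * 1024
        return 0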



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org