Posted to issues@spark.apache.org by "Masayoshi TSUZUKI (JIRA)" <ji...@apache.org> on 2014/11/27 03:13:12 UTC

[jira] [Created] (SPARK-4634) Enable metrics for each application to be gathered in one node

Masayoshi TSUZUKI created SPARK-4634:
----------------------------------------

             Summary: Enable metrics for each application to be gathered in one node
                 Key: SPARK-4634
                 URL: https://issues.apache.org/jira/browse/SPARK-4634
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
    Affects Versions: 1.1.0
            Reporter: Masayoshi TSUZUKI


Currently, the metrics output looks like this:
{noformat}
  - app_1.driver.jvm.<somevalue>
  - app_1.driver.jvm.<somevalue>
  - ...
  - app_2.driver.jvm.<somevalue>
  - app_2.driver.jvm.<somevalue>
  - ...
{noformat}
In the current Spark, application names appear at the top level,
but we should be able to gather them under some common top-level node.

For example, consider using Graphite as the metrics sink.
When we use Graphite, each application name is listed as a top-level node.
Graphite can also collect OS metrics, and those can be grouped under a single node,
but the current Spark metrics cannot.
So, with the current Spark, the tree structure of metrics shown in the Graphite web UI looks like this:
{noformat}
  - os
    - os.node1.<somevalue>
    - os.node2.<somevalue>
    - ...
  - app_1
    - app_1.driver.jvm.<somevalue>
    - app_1.driver.jvm.<somevalue>
    - ...
  - app_2
    - ...
  - app_3
    - ...
{noformat}
We should be able to add some top-level name before the application name (the top-level name might be the cluster name, for instance).
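For example, if a top-level name such as cluster_1 is configured (the name here is only an example), the tree would become:
{noformat}
  - os
    - os.node1.<somevalue>
    - ...
  - cluster_1
    - cluster_1.app_1.driver.jvm.<somevalue>
    - cluster_1.app_2.driver.jvm.<somevalue>
    - ...
{noformat}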

If we make the name configurable via *.conf, it would also be convenient when two different Spark clusters sink metrics to the same Graphite server.
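A sketch of what the configuration in conf/metrics.properties might look like (the GraphiteSink class, host, and port entries below are existing Spark settings; the key used for the proposed top-level name is only illustrative, not an existing option guaranteed by this issue):
{noformat}
# conf/metrics.properties
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003

# illustrative key for the proposed top-level name (e.g. a cluster name)
*.sink.graphite.prefix=cluster_1
{noformat}

Internally, this would roughly mean prepending the configured name when per-application registry names are built. A minimal sketch of the idea using Dropwizard's MetricRegistry (illustrative only, not the actual code in org.apache.spark.metrics.MetricsSystem):
{noformat}
import com.codahale.metrics.MetricRegistry

// Illustrative only: yields "cluster_1.app_1.driver.jvm" when a top-level
// namespace is configured, and "app_1.driver.jvm" otherwise.
def registryName(namespace: Option[String], appId: String,
                 executorId: String, sourceName: String): String = {
  val base = MetricRegistry.name(appId, executorId, sourceName)
  namespace.fold(base)(ns => MetricRegistry.name(ns, base))
}
{noformat}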




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org