Posted to commits@spark.apache.org by do...@apache.org on 2020/05/11 05:33:33 UTC

[spark] branch branch-3.0 updated: [SPARK-31674][CORE][DOCS] Make Prometheus metric endpoints experimental

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new e2bf140  [SPARK-31674][CORE][DOCS] Make Prometheus metric endpoints experimental
e2bf140 is described below

commit e2bf140c68ef38216167e0872b964c3964ca0d9f
Author: Dongjoon Hyun <do...@apache.org>
AuthorDate: Sun May 10 22:32:26 2020 -0700

    [SPARK-31674][CORE][DOCS] Make Prometheus metric endpoints experimental
    
    ### What changes were proposed in this pull request?
    
    This PR aims to make the new Prometheus-format metric endpoints experimental in Apache Spark 3.0.0.
    
    ### Why are the changes needed?
    
    Although the new metrics are disabled by default, we had better explicitly mark them as experimental in Apache Spark 3.0.0, since the output format is not yet finalized. We can finalize it in Apache Spark 3.1.0.
    
    ### Does this PR introduce _any_ user-facing change?
    
    Only the documentation change is visible to users.
    
    ### How was this patch tested?
    
    Manually reviewed the code, since this is a documentation and class annotation change.
    
    Closes #28495 from dongjoon-hyun/SPARK-31674.
    
    Authored-by: Dongjoon Hyun <do...@apache.org>
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
    (cherry picked from commit b80309bdb4d26556bd3da6a61cac464cdbdd1fe1)
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
---
 .../main/scala/org/apache/spark/metrics/sink/PrometheusServlet.scala  | 3 +++
 .../scala/org/apache/spark/status/api/v1/PrometheusResource.scala     | 3 +++
 docs/monitoring.md                                                    | 4 ++--
 3 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/core/src/main/scala/org/apache/spark/metrics/sink/PrometheusServlet.scala b/core/src/main/scala/org/apache/spark/metrics/sink/PrometheusServlet.scala
index 7c33bce..011c7bc 100644
--- a/core/src/main/scala/org/apache/spark/metrics/sink/PrometheusServlet.scala
+++ b/core/src/main/scala/org/apache/spark/metrics/sink/PrometheusServlet.scala
@@ -24,15 +24,18 @@ import com.codahale.metrics.MetricRegistry
 import org.eclipse.jetty.servlet.ServletContextHandler
 
 import org.apache.spark.{SecurityManager, SparkConf}
+import org.apache.spark.annotation.Experimental
 import org.apache.spark.ui.JettyUtils._
 
 /**
+ * :: Experimental ::
  * This exposes the metrics of the given registry with Prometheus format.
  *
  * The output is consistent with /metrics/json result in terms of item ordering
  * and with the previous result of Spark JMX Sink + Prometheus JMX Converter combination
  * in terms of key string format.
  */
+@Experimental
 private[spark] class PrometheusServlet(
     val property: Properties,
     val registry: MetricRegistry,
diff --git a/core/src/main/scala/org/apache/spark/status/api/v1/PrometheusResource.scala b/core/src/main/scala/org/apache/spark/status/api/v1/PrometheusResource.scala
index f9fb78e..2a5f151 100644
--- a/core/src/main/scala/org/apache/spark/status/api/v1/PrometheusResource.scala
+++ b/core/src/main/scala/org/apache/spark/status/api/v1/PrometheusResource.scala
@@ -23,15 +23,18 @@ import org.eclipse.jetty.servlet.{ServletContextHandler, ServletHolder}
 import org.glassfish.jersey.server.ServerProperties
 import org.glassfish.jersey.servlet.ServletContainer
 
+import org.apache.spark.annotation.Experimental
 import org.apache.spark.ui.SparkUI
 
 /**
+ * :: Experimental ::
  * This aims to expose Executor metrics like REST API which is documented in
  *
  *    https://spark.apache.org/docs/3.0.0/monitoring.html#executor-metrics
  *
  * Note that this is based on ExecutorSummary which is different from ExecutorSource.
  */
+@Experimental
 @Path("/executors")
 private[v1] class PrometheusResource extends ApiRequestContext {
   @GET
diff --git a/docs/monitoring.md b/docs/monitoring.md
index 7e41c9d..4da0f8e 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -715,7 +715,7 @@ A list of the available metrics, with a short description:
 Executor-level metrics are sent from each executor to the driver as part of the Heartbeat to describe the performance metrics of Executor itself like JVM heap memory, GC information.
 Executor metric values and their measured memory peak values per executor are exposed via the REST API in JSON format and in Prometheus format.
 The JSON end point is exposed at: `/applications/[app-id]/executors`, and the Prometheus endpoint at: `/metrics/executors/prometheus`.
-The Prometheus endpoint is conditional to a configuration parameter: `spark.ui.prometheus.enabled=true` (the default is `false`).
+The Prometheus endpoint is experimental and conditional to a configuration parameter: `spark.ui.prometheus.enabled=true` (the default is `false`).
 In addition, aggregated per-stage peak values of the executor memory metrics are written to the event log if
 `spark.eventLog.logStageExecutorMetrics` is true.  
 Executor memory metrics are also exposed via the Spark metrics system based on the Dropwizard metrics library.
@@ -963,7 +963,7 @@ Each instance can report to zero or more _sinks_. Sinks are contained in the
 * `CSVSink`: Exports metrics data to CSV files at regular intervals.
 * `JmxSink`: Registers metrics for viewing in a JMX console.
 * `MetricsServlet`: Adds a servlet within the existing Spark UI to serve metrics data as JSON data.
-* `PrometheusServlet`: Adds a servlet within the existing Spark UI to serve metrics data in Prometheus format.
+* `PrometheusServlet`: (Experimental) Adds a servlet within the existing Spark UI to serve metrics data in Prometheus format.
 * `GraphiteSink`: Sends metrics to a Graphite node.
 * `Slf4jSink`: Sends metrics to slf4j as log entries.
 * `StatsdSink`: Sends metrics to a StatsD node.
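As an illustration of the first `docs/monitoring.md` hunk above (not part of this commit): with the experimental endpoint enabled, executor metrics can be scraped from the driver UI. A minimal sketch, assuming a driver UI reachable on the default port 4040; the submit arguments are placeholders:

```shell
# Enable the experimental executor-metrics endpoint (disabled by default).
spark-submit \
  --conf spark.ui.prometheus.enabled=true \
  --class com.example.MyApp \
  myapp.jar

# Scrape Prometheus-format executor metrics from the driver UI
# (the endpoint documented in monitoring.md):
curl http://localhost:4040/metrics/executors/prometheus
```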

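For the `PrometheusServlet` sink flagged as experimental in the second hunk, a sketch of `conf/metrics.properties` following the entries shipped in Spark's `metrics.properties.template` (the paths shown are the template's illustrative defaults, not something mandated by this commit):

```properties
# conf/metrics.properties -- enable the experimental PrometheusServlet sink.
# Entries mirror Spark's metrics.properties.template; paths are illustrative.
*.sink.prometheusServlet.class=org.apache.spark.metrics.sink.PrometheusServlet
*.sink.prometheusServlet.path=/metrics/prometheus
master.sink.prometheusServlet.path=/metrics/master/prometheus
applications.sink.prometheusServlet.path=/metrics/applications/prometheus
```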
