Posted to issues@spark.apache.org by "Jason Taaffe (JIRA)" <ji...@apache.org> on 2017/09/18 17:30:00 UTC
[jira] [Created] (SPARK-22052) Incorrect Metric reported in MetricsReporter.scala
Jason Taaffe created SPARK-22052:
------------------------------------
Summary: Incorrect Metric reported in MetricsReporter.scala
Key: SPARK-22052
URL: https://issues.apache.org/jira/browse/SPARK-22052
Project: Spark
Issue Type: Bug
Components: Input/Output, Structured Streaming
Affects Versions: 2.2.0, 2.3.0
Environment: Spark 2.2
MetricsReporter.scala
Reporter: Jason Taaffe
Priority: Minor
The wrong metric is sent in MetricsReporter.scala: the registration for processingRate-total mistakenly reuses inputRowsPerSecond instead of processedRowsPerSecond.
Compare the first and second registerGauge calls below.
{code:java}
package org.apache.spark.sql.execution.streaming

import java.{util => ju}

import scala.collection.mutable

import com.codahale.metrics.{Gauge, MetricRegistry}

import org.apache.spark.internal.Logging
import org.apache.spark.metrics.source.{Source => CodahaleSource}
import org.apache.spark.util.Clock

/**
 * Serves metrics from a [[org.apache.spark.sql.streaming.StreamingQuery]] to
 * Codahale/DropWizard metrics
 */
class MetricsReporter(
    stream: StreamExecution,
    override val sourceName: String) extends CodahaleSource with Logging {

  override val metricRegistry: MetricRegistry = new MetricRegistry

  // Metric names should not have . in them, so that all the metrics of a query are identified
  // together in Ganglia as a single metric group
  registerGauge("inputRate-total", () => stream.lastProgress.inputRowsPerSecond)
  // BUG: reuses inputRowsPerSecond; should read processedRowsPerSecond
  registerGauge("processingRate-total", () => stream.lastProgress.inputRowsPerSecond)
  registerGauge("latency", () => stream.lastProgress.durationMs.get("triggerExecution").longValue())

  private def registerGauge[T](name: String, f: () => T)(implicit num: Numeric[T]): Unit = {
    synchronized {
      metricRegistry.register(name, new Gauge[T] {
        override def getValue: T = f()
      })
    }
  }
}
{code}
[https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MetricsReporter.scala]
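The one-line fix is to wire the processingRate-total gauge to processedRowsPerSecond. The self-contained sketch below illustrates the corrected wiring; MiniRegistry and Progress are simplified stand-ins (not Spark classes) for MetricRegistry and StreamingQueryProgress, so the two gauges can be shown reading distinct values without a Spark build.

```scala
import scala.collection.mutable

// Minimal stand-ins for com.codahale.metrics.Gauge/MetricRegistry.
trait Gauge[T] { def getValue: T }

class MiniRegistry {
  private val gauges = mutable.Map.empty[String, Gauge[_]]
  def register[T](name: String, g: Gauge[T]): Unit = gauges(name) = g
  def value(name: String): Any = gauges(name).getValue
}

// Stand-in for StreamingQueryProgress, with deliberately different rates
// so a mix-up between the two fields is observable.
case class Progress(inputRowsPerSecond: Double, processedRowsPerSecond: Double)

val registry = new MiniRegistry
val last = Progress(inputRowsPerSecond = 100.0, processedRowsPerSecond = 80.0)

// Same shape as MetricsReporter.registerGauge: wrap a thunk in a Gauge.
def registerGauge[T](name: String, f: () => T): Unit =
  registry.register(name, new Gauge[T] { override def getValue: T = f() })

// Corrected wiring: each gauge reads its own metric.
registerGauge("inputRate-total", () => last.inputRowsPerSecond)
registerGauge("processingRate-total", () => last.processedRowsPerSecond)
```

With the buggy version, both gauges would report 100.0; with the corrected wiring, processingRate-total reports 80.0.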
After adjusting the line and rebuilding from source, I tested the change by checking the CSV files produced via the metrics properties file. Previously, inputRate-total and processingRate-total were identical because both gauges read the same metric. After the change, the processingRate-total file held the correct value.
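For anyone reproducing the test, CSV metrics output of this kind is enabled through Spark's CsvSink in metrics.properties; the period and output directory below are illustrative assumptions, and streaming query metrics additionally require spark.sql.streaming.metricsEnabled=true on the session.

{code}
# Illustrative metrics.properties: send all metric sources to the CSV sink.
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
# Poll interval (assumed value)
*.sink.csv.period=10
*.sink.csv.unit=seconds
# Output directory for the per-metric CSV files (assumed path)
*.sink.csv.directory=/tmp/spark-metrics
{code}

With this in place, files such as &lt;queryName&gt;.inputRate-total.csv and &lt;queryName&gt;.processingRate-total.csv appear in the sink directory and can be compared directly.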
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)