Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2020/06/23 09:03:54 UTC

[GitHub] [beam] davidak09 commented on a change in pull request #12063: [BEAM-10294] using SparkMetricsContainerStepMap for readable metrics presentation in Spark history server UI

davidak09 commented on a change in pull request #12063:
URL: https://github.com/apache/beam/pull/12063#discussion_r444071819



##########
File path: runners/spark/src/main/java/org/apache/beam/runners/spark/metrics/MetricsAccumulator.java
##########
@@ -87,13 +86,13 @@ public static MetricsContainerStepMapAccumulator getInstance() {
     }
   }
 
-  private static Optional<MetricsContainerStepMap> recoverValueFromCheckpoint(
+  private static Optional<SparkMetricsContainerStepMap> recoverValueFromCheckpoint(
       JavaSparkContext jsc, CheckpointDir checkpointDir) {
     try {
       Path beamCheckpointPath = checkpointDir.getBeamCheckpointDir();
       checkpointFilePath = new Path(beamCheckpointPath, ACCUMULATOR_CHECKPOINT_FILENAME);
       fileSystem = checkpointFilePath.getFileSystem(jsc.hadoopConfiguration());
-      MetricsContainerStepMap recoveredValue =
+      SparkMetricsContainerStepMap recoveredValue =

Review comment:
       I'm not sure about this change; is recovering a checkpoint that was written with the previous `MetricsContainerStepMap` type still backward compatible?
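       To make the concern concrete, here is a minimal sketch (not code from this PR or from `MetricsAccumulator`) of a recovery path that tolerates a checkpoint written before this change. It assumes plain Java serialization of the accumulator value and that `SparkMetricsContainerStepMap` lives in the same `org.apache.beam.runners.spark.metrics` package; the `tryRecover` helper is hypothetical.

       ```java
       package org.apache.beam.runners.spark.metrics;

       import java.io.IOException;
       import java.io.ObjectInputStream;
       import java.util.Optional;
       import org.apache.hadoop.fs.FileSystem;
       import org.apache.hadoop.fs.Path;

       class CheckpointRecoverySketch {
         // Hypothetical helper: read the checkpointed accumulator value, falling back to
         // empty when the stored object is the pre-change MetricsContainerStepMap type
         // (or cannot be read at all) instead of failing the job on restart.
         static Optional<SparkMetricsContainerStepMap> tryRecover(FileSystem fs, Path checkpointFile) {
           try (ObjectInputStream in = new ObjectInputStream(fs.open(checkpointFile))) {
             Object recovered = in.readObject();
             return recovered instanceof SparkMetricsContainerStepMap
                 ? Optional.of((SparkMetricsContainerStepMap) recovered)
                 : Optional.empty();
           } catch (IOException | ClassNotFoundException e) {
             return Optional.empty();
           }
         }
       }
       ```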

##########
File path: runners/spark/src/main/java/org/apache/beam/runners/spark/metrics/SparkBeamMetric.java
##########
@@ -51,11 +51,12 @@
     }
     for (MetricResult<DistributionResult> metricResult : metricQueryResults.getDistributions()) {
       DistributionResult result = metricResult.getAttempted();
-      metrics.put(renderName(metricResult) + ".count", result.getCount());
-      metrics.put(renderName(metricResult) + ".sum", result.getSum());
-      metrics.put(renderName(metricResult) + ".min", result.getMin());
-      metrics.put(renderName(metricResult) + ".max", result.getMax());
-      metrics.put(renderName(metricResult) + ".mean", result.getMean());
+      String name = renderName(metricResult);
+      metrics.put(name + ".count", result.getCount());
+      metrics.put(name + ".sum", result.getSum());
+      metrics.put(name + ".min", result.getMin());
+      metrics.put(name + ".max", result.getMax());
+      metrics.put(name + ".mean", result.getMean());

Review comment:
       I'd personally prefer a single metric, possibly with a `.distribution` suffix, that includes all five stats (count, sum, min, max, mean); it would definitely be more readable in the Spark UI.
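       For illustration, the combined metric could be rendered roughly like the sketch below (not code from this PR; the `Map<String, String>` signature and the formatted string are assumptions, and the actual value type would depend on what the Spark metrics sink accepts):

       ```java
       import java.util.Map;
       import org.apache.beam.sdk.metrics.DistributionResult;

       class DistributionMetricSketch {
         // Hypothetical rendering: one ".distribution" entry carrying all five stats,
         // instead of five separate ".count"/".sum"/".min"/".max"/".mean" entries.
         static void putDistribution(Map<String, String> metrics, String name, DistributionResult result) {
           metrics.put(
               name + ".distribution",
               String.format(
                   "count=%d, sum=%d, min=%d, max=%d, mean=%s",
                   result.getCount(), result.getSum(), result.getMin(), result.getMax(), result.getMean()));
         }
       }
       ```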




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org