Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/04/26 07:08:01 UTC

[GitHub] [spark] cloud-fan commented on a change in pull request #32330: [SPARK-35215][SQL] Update custom metric per certain rows and at the end of the task

cloud-fan commented on a change in pull request #32330:
URL: https://github.com/apache/spark/pull/32330#discussion_r620010077



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceRDD.scala
##########
@@ -92,12 +104,15 @@ private class PartitionIterator[T](
     if (!hasNext) {
       throw QueryExecutionErrors.endOfStreamError()
     }
-    reader.currentMetricsValues.foreach { metric =>
-      assert(customMetrics.contains(metric.name()),
-        s"Custom metrics ${customMetrics.keys.mkString(", ")} do not contain the metric " +
-          s"${metric.name()}")
-      customMetrics(metric.name()).set(metric.value())
+    if (numRow % CustomMetrics.numRowsPerUpdate == 0) {
+      reader.currentMetricsValues.foreach { metric =>

Review comment:
       can we move it into a method to reuse code?
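
A minimal sketch of what the reviewer is suggesting: pull the repeated metric-update loop into one helper so both the per-row path and the end-of-task path reuse it. The names below (`MetricsUpdateSketch`, `updateMetrics`, and the stand-in metric types) are illustrative, not the actual Spark API.

```scala
// Illustrative sketch only: extract the duplicated metric-update loop into a
// single method that both call sites can share. The types here are simplified
// stand-ins for Spark's CustomTaskMetric and SQLMetric.
object MetricsUpdateSketch {
  final case class CustomTaskMetric(name: String, value: Long)

  final class SQLMetric {
    var current: Long = 0L
    def set(v: Long): Unit = current = v
  }

  // The shared helper: called every N rows and once more at end of task.
  def updateMetrics(
      currentMetricsValues: Seq[CustomTaskMetric],
      customMetrics: Map[String, SQLMetric]): Unit = {
    currentMetricsValues.foreach { metric =>
      customMetrics(metric.name).set(metric.value)
    }
  }
}
```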

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/metric/CustomMetrics.scala
##########
@@ -25,6 +25,8 @@ import org.apache.spark.sql.connector.CustomMetric
 object CustomMetrics {
   private[spark] val V2_CUSTOM = "v2Custom"
 
+  private[spark] val numRowsPerUpdate = 100L

Review comment:
       does it need to be a long?
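
One way to read this comment: the row counter itself is a `Long`, but the update interval does not need to be, because in Scala a `Long % Int` expression widens the `Int` operand and yields a `Long`. A small sketch (constant name illustrative):

```scala
// Sketch: an Int interval works fine against a Long row counter, since
// Long % Int promotes the Int to Long before the modulo.
object IntervalSketch {
  val NUM_ROWS_PER_UPDATE: Int = 100

  def shouldUpdate(numRow: Long): Boolean =
    numRow % NUM_ROWS_PER_UPDATE == 0
}
```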

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousDataSourceRDD.scala
##########
@@ -92,10 +92,18 @@ class ContinuousDataSourceRDD(
 
     val partitionReader = readerForPartition.getPartitionReader()
     new NextIterator[InternalRow] {
+      private var numRow = 0L
+
       override def getNext(): InternalRow = {
-        partitionReader.currentMetricsValues.foreach { metric =>
-          customMetrics(metric.name()).set(metric.value())
+        if (numRow % CustomMetrics.numRowsPerUpdate == 0) {
+          partitionReader.currentMetricsValues.foreach { metric =>
+            assert(customMetrics.contains(metric.name()),

Review comment:
       I'm not sure how useful the assert is here. It only guards against an internal error, and `customMetrics(metric.name())` will fail anyway.
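
A quick sketch of the point being made: applying a Scala `Map` to a missing key already throws `NoSuchElementException`, so an `assert(map.contains(key))` immediately before the apply only changes the error message, not whether the internal error surfaces.

```scala
// Sketch: Map.apply on an absent key throws NoSuchElementException on its
// own, which is why the preceding assert is arguably redundant.
object AssertRedundancySketch {
  def lookup(customMetrics: Map[String, Long], name: String): Long =
    customMetrics(name) // throws NoSuchElementException if name is absent
}
```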




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org