Posted to reviews@spark.apache.org by srowen <gi...@git.apache.org> on 2018/10/08 12:58:13 UTC
[GitHub] spark pull request #22594: [SPARK-25674][SQL] If the records are incremented...
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22594#discussion_r223353261
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala ---
@@ -570,4 +572,33 @@ class SQLMetricsSuite extends SparkFunSuite with SQLMetricsTestUtils with Shared
}
}
}
+
+ test("InputMetrics---bytesRead") {
--- End diff --
This isn't really testing the code you changed. It's replicating something similar and testing that, so I don't think this test helps. Ideally you would write a test for a path that uses `FileScanRDD` and check its metrics. Are there tests around here that you could 'piggyback' onto? Maybe an existing metrics test involving `ColumnarBatch` could be changed to trigger this case.
It may be hard, I don't know, but it's worth looking to see if there's an easy way to test this.
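
As a rough illustration of the kind of test being suggested (this is not from the PR; the test name, the use of `withTempPath`, and the listener-based metric check are assumptions about what a piggyback test could look like in `SQLMetricsSuite`), one could read a small Parquet file, so the scan goes through `FileScanRDD` on the vectorized/`ColumnarBatch` path, and assert that `bytesRead` was reported:

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

// Hypothetical sketch, assuming the SQLMetricsSuite test helpers
// (withTempPath, spark) are in scope as in the surrounding file.
test("input bytesRead is updated on the FileScanRDD columnar path") {
  withTempPath { dir =>
    // A Parquet read with vectorized reading enabled returns
    // ColumnarBatches through FileScanRDD.
    spark.range(0, 1000).toDF("id").write.parquet(dir.getCanonicalPath)

    var bytesRead = 0L
    val listener = new SparkListener {
      override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
        bytesRead += taskEnd.taskMetrics.inputMetrics.bytesRead
      }
    }
    spark.sparkContext.addSparkListener(listener)
    try {
      spark.read.parquet(dir.getCanonicalPath).collect()
      // listenerBus is private[spark]; accessible here because the
      // suite lives under the org.apache.spark package.
      spark.sparkContext.listenerBus.waitUntilEmpty(10000)
      assert(bytesRead > 0)
    } finally {
      spark.sparkContext.removeSparkListener(listener)
    }
  }
}
```

The point of a test shaped like this is that it exercises the real read path rather than replicating the changed logic in the test body.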
---
---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org