Posted to commits@spark.apache.org by li...@apache.org on 2018/12/02 00:22:42 UTC
spark git commit: [SPARK-26226][SQL] Track optimization phase for streaming queries
Repository: spark
Updated Branches:
refs/heads/master 60e4239a1 -> 55c968581
[SPARK-26226][SQL] Track optimization phase for streaming queries
## What changes were proposed in this pull request?
In an earlier PR, we missed measuring the optimization phase time for streaming queries. This patch adds it.
## How was this patch tested?
Given that this is a debugging feature, and it would be very convoluted to add tests verifying that the phase is set properly, I am not introducing a streaming-specific test.
Closes #23193 from rxin/SPARK-26226-1.
Authored-by: Reynold Xin <rx...@databricks.com>
Signed-off-by: gatorsmile <ga...@gmail.com>
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/55c96858
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/55c96858
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/55c96858
Branch: refs/heads/master
Commit: 55c96858107739dd768abea1dff88bd970e47e9f
Parents: 60e4239
Author: Reynold Xin <rx...@databricks.com>
Authored: Sat Dec 1 16:22:38 2018 -0800
Committer: gatorsmile <ga...@gmail.com>
Committed: Sat Dec 1 16:22:38 2018 -0800
----------------------------------------------------------------------
.../spark/sql/execution/streaming/IncrementalExecution.scala | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/spark/blob/55c96858/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/IncrementalExecution.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/IncrementalExecution.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/IncrementalExecution.scala
index fad287e..a73e88c 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/IncrementalExecution.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/IncrementalExecution.scala
@@ -22,6 +22,7 @@ import java.util.concurrent.atomic.AtomicInteger
import org.apache.spark.internal.Logging
import org.apache.spark.sql.{AnalysisException, SparkSession, Strategy}
+import org.apache.spark.sql.catalyst.QueryPlanningTracker
import org.apache.spark.sql.catalyst.expressions.{CurrentBatchTimestamp, ExpressionWithRandomSeed}
import org.apache.spark.sql.catalyst.plans.logical._
import org.apache.spark.sql.catalyst.plans.physical.{AllTuples, ClusteredDistribution, HashPartitioning, SinglePartition}
@@ -73,7 +74,8 @@ class IncrementalExecution(
* Walk the optimized logical plan and replace CurrentBatchTimestamp
* with the desired literal
*/
- override lazy val optimizedPlan: LogicalPlan = {
+ override
+ lazy val optimizedPlan: LogicalPlan = tracker.measurePhase(QueryPlanningTracker.OPTIMIZATION) {
sparkSession.sessionState.optimizer.execute(withCachedData) transformAllExpressions {
case ts @ CurrentBatchTimestamp(timestamp, _, _) =>
logInfo(s"Current batch timestamp = $timestamp")
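The change above wraps optimizer execution in `tracker.measurePhase(QueryPlanningTracker.OPTIMIZATION)`, so the time spent optimizing a streaming query's plan is recorded alongside the other planning phases. As a rough illustration of the pattern, here is a minimal, self-contained sketch of a phase-time tracker in the same spirit. This is NOT Spark's actual `QueryPlanningTracker` implementation; the names `Tracker`, `phaseNanos`, and `Optimization` here are simplified stand-ins for illustration only.

```scala
import scala.collection.mutable

// Simplified sketch of a phase-time tracker, analogous in spirit to
// Spark's QueryPlanningTracker. Not the real Spark class.
object PhaseTrackerSketch {
  final val Optimization = "optimization"

  class Tracker {
    // Accumulated nanoseconds per phase name.
    private val phases = mutable.Map.empty[String, Long]

    // Run `f`, record its wall-clock duration under `phase`,
    // and return `f`'s result unchanged.
    def measurePhase[T](phase: String)(f: => T): T = {
      val start = System.nanoTime()
      val result = f
      val elapsed = System.nanoTime() - start
      phases(phase) = phases.getOrElse(phase, 0L) + elapsed
      result
    }

    def phaseNanos(phase: String): Option[Long] = phases.get(phase)
  }

  def main(args: Array[String]): Unit = {
    val tracker = new Tracker
    // Stand-in for optimizer.execute(withCachedData) in the patch above.
    val plan = tracker.measurePhase(Optimization) {
      (1 to 1000).sum
    }
    println(s"result = $plan, tracked = ${tracker.phaseNanos(Optimization).isDefined}")
  }
}
```

Because `measurePhase` takes a by-name block and returns its result, wrapping an existing `lazy val` body in it (as the patch does for `optimizedPlan`) changes nothing about the value produced; it only adds timing as a side effect.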