Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/06/15 19:48:34 UTC

[GitHub] [spark] dtenedor commented on a diff in pull request #36880: [SPARK-39383][SQL] Refactor DEFAULT column support to skip passing the primary Analyzer around

dtenedor commented on code in PR #36880:
URL: https://github.com/apache/spark/pull/36880#discussion_r898349505


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/ResolveDefaultColumnsUtil.scala:
##########
@@ -241,4 +243,33 @@ object ResolveDefaultColumns {
       }
     }
   }
+
+  /**
+   * Returns a new Analyzer for processing default column values using built-in functions only.

Review Comment:
   Thanks for pointing this out; I updated the comment.



##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/ResolveDefaultColumnsUtil.scala:
##########
@@ -143,6 +144,7 @@ object ResolveDefaultColumns {
     }
     // Analyze the parse result.
     val plan = try {
+      val analyzer: Analyzer = schema.defaultColumnAnalyzer

Review Comment:
   We can use an object; I updated the code to do that instead.
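
   The idea above is to expose the restricted analyzer from a singleton object rather than threading the primary Analyzer through every call site. A minimal, self-contained sketch of that pattern (the names `DefaultColumnAnalyzer` and `resolveFunction` are illustrative here, not Spark's actual API; the real object wires up a full Catalyst `Analyzer` over built-in functions only):

   ```scala
   // Hypothetical sketch: a singleton object that resolves default-column
   // expressions against built-in functions only, so callers never need
   // to pass the session's primary Analyzer around.
   object DefaultColumnAnalyzer {
     // Registry restricted to built-ins; no session-specific or temporary
     // functions are visible here, mirroring the intent of the refactor.
     private val builtInFunctions: Map[String, Seq[Any] => Any] = Map(
       "upper"  -> (args => args.head.toString.toUpperCase),
       "length" -> (args => args.head.toString.length)
     )

     /** Looks up a function by name in the built-in registry only. */
     def resolveFunction(name: String): Option[Seq[Any] => Any] =
       builtInFunctions.get(name.toLowerCase)
   }
   ```

   Because the object is stateless with respect to any session, it can be referenced from anywhere (e.g. from schema utilities) without the dependency on the primary Analyzer that the PR title describes removing.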



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org