Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2020/12/18 01:30:28 UTC

[GitHub] [iceberg] rdblue commented on a change in pull request #1948: Spark: Add SQL commands to evolve partition specs

rdblue commented on a change in pull request #1948:
URL: https://github.com/apache/iceberg/pull/1948#discussion_r545514306



##########
File path: spark3-extensions/src/main/scala/org/apache/spark/sql/catalyst/parser/extensions/IcebergSparkSqlExtensionsParser.scala
##########
@@ -94,13 +94,20 @@ class IcebergSparkSqlExtensionsParser(delegate: ParserInterface) extends ParserI
    */
   override def parsePlan(sqlText: String): LogicalPlan = {
     val sqlTextAfterSubstitution = substitutor.substitute(sqlText)
-    if (sqlTextAfterSubstitution.toLowerCase(Locale.ROOT).trim().startsWith("call")) {
+    if (isIcebergCommand(sqlTextAfterSubstitution)) {
       parse(sqlTextAfterSubstitution) { parser => astBuilder.visit(parser.singleStatement()) }.asInstanceOf[LogicalPlan]
     } else {
       delegate.parsePlan(sqlText)
     }
   }
 
+  private def isIcebergCommand(sqlText: String): Boolean = {
+    val normalized = sqlText.toLowerCase(Locale.ROOT).trim()
+    normalized.startsWith("call") ||
+        (normalized.startsWith("alter table") && (
+            normalized.contains("add partition field") || normalized.contains("drop partition field")))
+  }

Review comment:
       I think that the "add partition" syntax requires a partition "spec", something like `(a=1, b=2)`, so the required parentheses should prevent this check from catching Spark's own "add partition" commands. For example:
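   
   A quick illustration, assuming the `isIcebergCommand` check from the diff above (the table name and partition transform here are made up):
   
   ```scala
   // Iceberg's partition-evolution DDL matches because the normalized
   // statement contains the "add partition field" substring.
   isIcebergCommand("ALTER TABLE db.tbl ADD PARTITION FIELD bucket(16, id)")  // true
   
   // Spark's ADD PARTITION takes a parenthesized spec, so the substring
   // "add partition field" never appears and the statement falls through
   // to the Spark parser.
   isIcebergCommand("ALTER TABLE db.tbl ADD PARTITION (a = 1, b = 2)")  // false
   ```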
   
   That said, we used to fall back to the Spark parser whenever something couldn't be parsed by this parser. I'm not sure whether we want to move back to that or do something more complicated. One option is to try the Iceberg parser, then the Spark parser, and then check the Spark parser's exception: if it complains about `CALL` or `FIELD`, use the exception from the Iceberg parser; otherwise, re-throw Spark's exception. Something like the sketch below.
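   
   A rough sketch of that fallback, not the actual change; `parseWithFallback` and its parameters are hypothetical stand-ins for the parser's existing members (`astBuilder`, `delegate`, etc.):
   
   ```scala
   import org.apache.spark.sql.catalyst.parser.ParserInterface
   import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
   
   // Hypothetical helper: try the Iceberg parser first, fall back to the
   // delegate (Spark) parser, and decide which exception to surface by
   // inspecting Spark's error message.
   def parseWithFallback(
       sqlText: String,
       icebergParse: String => LogicalPlan,
       delegate: ParserInterface): LogicalPlan = {
     try {
       icebergParse(sqlText)
     } catch {
       case icebergError: Exception =>
         try {
           delegate.parsePlan(sqlText)
         } catch {
           case sparkError: Exception =>
             // If Spark chokes on Iceberg-only keywords, the statement was
             // probably meant for the Iceberg parser, so its error is the
             // more useful one to report.
             val sparkMsg = Option(sparkError.getMessage).getOrElse("")
             if (sparkMsg.contains("CALL") || sparkMsg.contains("FIELD")) {
               throw icebergError
             } else {
               throw sparkError
             }
         }
     }
   }
   ```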
   
   @RussellSpitzer, any ideas here?



