Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2022/04/05 09:46:38 UTC

[GitHub] [druid] rohangarg commented on a diff in pull request #12386: Added replace statement to sql parser

rohangarg commented on code in PR #12386:
URL: https://github.com/apache/druid/pull/12386#discussion_r842542295


##########
sql/src/main/codegen/includes/replace.ftl:
##########
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+// Using fully qualified name for Pair class, since Calcite also has a same class name being used in the Parser.jj
+SqlNode DruidSqlReplace() :
+{
+    final List<SqlLiteral> keywords = new ArrayList<SqlLiteral>();
+    final SqlNodeList keywordList;
+    SqlNode table;
+    SqlNodeList extendList = null;
+    SqlNode source;
+    SqlNodeList columnList = null;
+    final Span s;
+    SqlInsert sqlInsert;
+    org.apache.druid.java.util.common.Pair<Granularity, String> partitionedBy = new org.apache.druid.java.util.common.Pair(null, null);
+    List<String> partitionSpecList;
+}
+{
+    <REPLACE> { s = span(); }
+    SqlInsertKeywords(keywords)

Review Comment:
   This is not needed since it is currently empty in the Calcite base grammar; it was probably only added there to support multiple dialects, which we don't need.
   Also, please mention in a comment that the part between `replace` and `partitioned by` is extracted from the `SqlInsert` production rule, so that it is easier to refer back to.
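   For example, a note along these lines (wording is just a suggestion) would make the origin of that block obvious:

   ```
   // Everything between <REPLACE> and the PARTITIONED BY clause below is extracted from the
   // SqlInsert production rule in Calcite's core Parser.jj.
   ```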



##########
sql/src/main/codegen/includes/replace.ftl:
##########
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+// Using fully qualified name for Pair class, since Calcite also has a same class name being used in the Parser.jj
+SqlNode DruidSqlReplace() :
+{
+    final List<SqlLiteral> keywords = new ArrayList<SqlLiteral>();
+    final SqlNodeList keywordList;
+    SqlNode table;
+    SqlNodeList extendList = null;
+    SqlNode source;
+    SqlNodeList columnList = null;
+    final Span s;
+    SqlInsert sqlInsert;
+    org.apache.druid.java.util.common.Pair<Granularity, String> partitionedBy = new org.apache.druid.java.util.common.Pair(null, null);
+    List<String> partitionSpecList;
+}
+{
+    <REPLACE> { s = span(); }
+    SqlInsertKeywords(keywords)
+    {
+        keywordList = new SqlNodeList(keywords, s.addAll(keywords).pos());
+    }
+    <INTO> table = CompoundIdentifier()
+    <FOR> partitionSpecList = PartitionSpecs()
+    [

Review Comment:
   I think this is not needed since we don't support extended table references (see `https://calcite.apache.org/docs/reference.html`).



##########
sql/src/main/codegen/includes/replace.ftl:
##########
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+// Using fully qualified name for Pair class, since Calcite also has a same class name being used in the Parser.jj
+SqlNode DruidSqlReplace() :
+{
+    final List<SqlLiteral> keywords = new ArrayList<SqlLiteral>();
+    final SqlNodeList keywordList;
+    SqlNode table;
+    SqlNodeList extendList = null;
+    SqlNode source;
+    SqlNodeList columnList = null;
+    final Span s;
+    SqlInsert sqlInsert;
+    org.apache.druid.java.util.common.Pair<Granularity, String> partitionedBy = new org.apache.druid.java.util.common.Pair(null, null);
+    List<String> partitionSpecList;
+}
+{
+    <REPLACE> { s = span(); }
+    SqlInsertKeywords(keywords)
+    {
+        keywordList = new SqlNodeList(keywords, s.addAll(keywords).pos());
+    }
+    <INTO> table = CompoundIdentifier()
+    <FOR> partitionSpecList = PartitionSpecs()
+    [
+        LOOKAHEAD(5)
+        [ <EXTEND> ]
+        extendList = ExtendList() {
+            table = extend(table, extendList);
+        }
+    ]
+    [
+        LOOKAHEAD(2)
+        { final Pair<SqlNodeList, SqlNodeList> p; }
+        p = ParenthesizedCompoundIdentifierList() {
+            if (p.right.size() > 0) {

Review Comment:
   Please check whether we need to support this right now. This list contains the types and nullability of the columns mentioned in the insert query, and I'm not sure we want to support that yet.
   Further, I think having the columnList next to the table name reads better than having it after the `FOR` clause. wdyt?



##########
sql/src/main/java/org/apache/druid/sql/calcite/parser/DruidSqlReplace.java:
##########
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.sql.calcite.parser;
+
+import com.google.common.base.Preconditions;
+import org.apache.calcite.sql.SqlInsert;
+import org.apache.calcite.sql.SqlNodeList;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.druid.java.util.common.granularity.Granularity;
+
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+import java.util.List;
+
+/**
+ * Extends the 'replace' call to hold custom parameters specific to Druid i.e. PARTITIONED BY and the PARTITION SPECS
+ * This class extends the {@link SqlInsert} so that this SqlNode can be used in
+ * {@link org.apache.calcite.sql2rel.SqlToRelConverter} for getting converted into RelNode, and further processing
+ */
+public class DruidSqlReplace extends SqlInsert
+{
+  public static final String SQL_REPLACE_TIME_CHUNKS = "sqlReplaceTimeChunks";
+
+  // This allows reusing super.unparse
+  public static final SqlOperator OPERATOR = SqlInsert.OPERATOR;
+
+  private final Granularity partitionedBy;
+  private final String partitionedByStringForUnparse;
+
+  private final List<String> replaceTimeChunks;
+
+  /**
+   * While partitionedBy and partitionedByStringForUnparse can be null as arguments to the constructor, this is
+   * disallowed (semantically) and the constructor performs checks to ensure that. This helps in producing friendly
+   * errors when the PARTITIONED BY custom clause is not present, and keeps its error separate from JavaCC/Calcite's
+   * custom errors which can be cryptic when someone accidentally forgets to explicitly specify the PARTITIONED BY clause
+   */
+  public DruidSqlReplace(
+      @Nonnull SqlInsert insertNode,
+      @Nullable Granularity partitionedBy,
+      @Nullable String partitionedByStringForUnparse,
+      @Nonnull List<String> replaceTimeChunks
+  ) throws ParseException
+  {
+    super(
+        insertNode.getParserPosition(),
+        (SqlNodeList) insertNode.getOperandList().get(0), // No better getter to extract this
+        insertNode.getTargetTable(),
+        insertNode.getSource(),
+        insertNode.getTargetColumnList()
+    );
+    if (partitionedBy == null) {
+      throw new ParseException("REPLACE statements must specify PARTITIONED BY clause explictly");
+    }
+    this.partitionedBy = partitionedBy;
+
+    Preconditions.checkNotNull(partitionedByStringForUnparse);
+    this.partitionedByStringForUnparse = partitionedByStringForUnparse;
+
+    this.replaceTimeChunks = replaceTimeChunks;
+  }
+
+  public List<String> getReplaceTimeChunks()
+  {
+    return replaceTimeChunks;
+  }
+
+  public Granularity getPartitionedBy()
+  {
+    return partitionedBy;
+  }
+
+  @Nonnull
+  @Override
+  public SqlOperator getOperator()
+  {
+    return OPERATOR;
+  }
+
+  @Override
+  public void unparse(SqlWriter writer, int leftPrec, int rightPrec)
+  {
+    writer.startList(SqlWriter.FrameTypeEnum.SELECT);
+    writer.sep("REPLACE INTO");
+    final int opLeft = getOperator().getLeftPrec();
+    final int opRight = getOperator().getRightPrec();
+    getTargetTable().unparse(writer, opLeft, opRight);
+
+    writer.keyword("FOR");
+    final SqlWriter.Frame frame = writer.startList("(", ")");

Review Comment:
   <nit:> this doesn't handle a single partition spec (written without parentheses) differently from a list of partition specs (written with parentheses).
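   Something along these lines would handle both forms; this is only a sketch (the helper name is made up), built on the standard `SqlWriter` calls and the imports already in this file:

   ```java
   // Sketch: write a single time chunk without parentheses, and wrap multiple chunks in
   // parentheses, mirroring what the PartitionSpecs() production accepts.
   private void unparseReplaceTimeChunks(SqlWriter writer, List<String> replaceTimeChunks)
   {
     writer.keyword("FOR");
     if (replaceTimeChunks.size() == 1) {
       writer.literal(replaceTimeChunks.get(0));
     } else {
       final SqlWriter.Frame frame = writer.startList("(", ")");
       for (String timeChunk : replaceTimeChunks) {
         writer.sep(",");
         writer.literal(timeChunk);
       }
       writer.endList(frame);
     }
   }
   ```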



##########
sql/src/main/codegen/config.fmpp:
##########
@@ -51,12 +51,15 @@ data: {
     # List of additional classes and packages to import.
     # Example. "org.apache.calcite.sql.*", "java.util.List".
     imports: [
+      "java.util.List"
       "org.apache.calcite.sql.SqlNode"
       "org.apache.calcite.sql.SqlInsert"
+      "org.apache.druid.java.util.common.Intervals"

Review Comment:
   Where is this used? Same question for the Joda `Interval` import.



##########
sql/src/main/codegen/includes/common.ftl:
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+SqlNodeList ClusterItems() :

Review Comment:
   This isn't really common as of now, and the same goes for the partitionSpec(s) rules. Maybe we can keep them alongside the rules that use them until there is a need to extract and re-use them.



##########
sql/src/main/java/org/apache/druid/sql/calcite/planner/DruidPlanner.java:
##########
@@ -645,11 +661,14 @@ private RelNode rewriteRelDynamicParameters(RelNode rootRel)
 
   private QueryMaker buildQueryMaker(
       final RelRoot rootQueryRel,
-      @Nullable final SqlInsert insert
+      @Nullable final SqlInsert insert,

Review Comment:
   Can this take a single `insertOrReplace` `SqlInsert` node? Then we can assign `DruidSqlInsert` or `DruidSqlReplace` depending on an instance check, as is done now.
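   Roughly what I mean, as an illustration only (not the actual `DruidPlanner` code; it only relies on the `getReplaceTimeChunks()` getter and the imports the planner already has):

   ```java
   // Sketch: with a single `insertOrReplace` parameter, the REPLACE-specific data is pulled
   // out via an instance check, and everything else stays on the common SqlInsert path.
   @Nullable
   private static List<String> replaceTimeChunksOf(@Nullable SqlInsert insertOrReplace)
   {
     if (insertOrReplace instanceof DruidSqlReplace) {
       return ((DruidSqlReplace) insertOrReplace).getReplaceTimeChunks();
     }
     return null; // plain INSERT (or a non-DML query): nothing to replace
   }
   ```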



##########
sql/src/main/codegen/includes/common.ftl:
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+SqlNodeList ClusterItems() :
+{
+  List<SqlNode> list;
+  final Span s;
+  SqlNode e;
+}
+{
+  e = OrderItem() {
+    s = span();
+    list = startList(e);
+  }
+  (
+    LOOKAHEAD(2) <COMMA> e = OrderItem() { list.add(e); }
+  )*
+  {
+    return new SqlNodeList(list, s.addAll(list).pos());
+  }
+}
+
+org.apache.druid.java.util.common.Pair<Granularity, String> PartitionGranularity() :
+{
+  SqlNode e = null;
+  Granularity granularity = null;
+  String unparseString = null;
+}
+{
+  (
+    <HOUR>
+    {
+      granularity = Granularities.HOUR;
+      unparseString = "HOUR";
+    }
+  |
+    <DAY>
+    {
+      granularity = Granularities.DAY;
+      unparseString = "DAY";
+    }
+  |
+    <MONTH>
+    {
+      granularity = Granularities.MONTH;
+      unparseString = "MONTH";
+    }
+  |
+    <YEAR>
+    {
+      granularity = Granularities.YEAR;
+      unparseString = "YEAR";
+    }
+  |
+    <ALL>
+    {
+      granularity = Granularities.ALL;
+      unparseString = "ALL";
+    }
+    [
+      <TIME>
+      {
+        unparseString += " TIME";
+      }
+    ]
+  |
+    e = Expression(ExprContext.ACCEPT_SUB_QUERY)
+    {
+      granularity = DruidSqlParserUtils.convertSqlNodeToGranularityThrowingParseExceptions(e);
+      unparseString = e.toString();
+    }
+  )
+  {
+    return new org.apache.druid.java.util.common.Pair(granularity, unparseString);
+  }
+}
+
+List<String> PartitionSpecs() :
+{
+  List<String> list;
+  String intervalString;
+}
+{
+  (
+    intervalString = PartitionSpec()
+    {
+        return startList(intervalString);
+    }
+  |
+    <LPAREN>
+    intervalString = PartitionSpec()
+    {
+      list = startList(intervalString);
+    }
+    (
+      <COMMA>
+      intervalString = PartitionSpec()
+      {
+        list.add(intervalString);
+      }
+    )*
+    <RPAREN>
+    {
+      return list;
+    }
+  )
+}
+
+String PartitionSpec() :
+{
+  SqlNode sqlNode;
+}
+{
+  (
+    <ALL> <TIME>

Review Comment:
   As mentioned above, I think the `ALL TIME` handling should move up out of the `PartitionSpec` rule.



##########
sql/src/main/codegen/includes/common.ftl:
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+SqlNodeList ClusterItems() :
+{
+  List<SqlNode> list;
+  final Span s;
+  SqlNode e;
+}
+{
+  e = OrderItem() {
+    s = span();
+    list = startList(e);
+  }
+  (
+    LOOKAHEAD(2) <COMMA> e = OrderItem() { list.add(e); }
+  )*
+  {
+    return new SqlNodeList(list, s.addAll(list).pos());
+  }
+}
+
+org.apache.druid.java.util.common.Pair<Granularity, String> PartitionGranularity() :
+{
+  SqlNode e = null;
+  Granularity granularity = null;
+  String unparseString = null;
+}
+{
+  (
+    <HOUR>
+    {
+      granularity = Granularities.HOUR;
+      unparseString = "HOUR";
+    }
+  |
+    <DAY>
+    {
+      granularity = Granularities.DAY;
+      unparseString = "DAY";
+    }
+  |
+    <MONTH>
+    {
+      granularity = Granularities.MONTH;
+      unparseString = "MONTH";
+    }
+  |
+    <YEAR>
+    {
+      granularity = Granularities.YEAR;
+      unparseString = "YEAR";
+    }
+  |
+    <ALL>
+    {
+      granularity = Granularities.ALL;
+      unparseString = "ALL";
+    }
+    [
+      <TIME>
+      {
+        unparseString += " TIME";
+      }
+    ]
+  |
+    e = Expression(ExprContext.ACCEPT_SUB_QUERY)
+    {
+      granularity = DruidSqlParserUtils.convertSqlNodeToGranularityThrowingParseExceptions(e);
+      unparseString = e.toString();
+    }
+  )
+  {
+    return new org.apache.druid.java.util.common.Pair(granularity, unparseString);
+  }
+}
+
+List<String> PartitionSpecs() :

Review Comment:
   Looks like you've pushed `ALL TIME` down into the `PartitionSpec` rule. I think it should be top-level syntax: a user either lists the intervals they want to replace, or says `ALL TIME` to replace everything.



##########
sql/src/main/codegen/includes/common.ftl:
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+SqlNodeList ClusterItems() :
+{
+  List<SqlNode> list;
+  final Span s;
+  SqlNode e;
+}
+{
+  e = OrderItem() {
+    s = span();
+    list = startList(e);
+  }
+  (
+    LOOKAHEAD(2) <COMMA> e = OrderItem() { list.add(e); }
+  )*
+  {
+    return new SqlNodeList(list, s.addAll(list).pos());
+  }
+}
+
+org.apache.druid.java.util.common.Pair<Granularity, String> PartitionGranularity() :
+{
+  SqlNode e = null;
+  Granularity granularity = null;
+  String unparseString = null;
+}
+{
+  (
+    <HOUR>
+    {
+      granularity = Granularities.HOUR;
+      unparseString = "HOUR";
+    }
+  |
+    <DAY>
+    {
+      granularity = Granularities.DAY;
+      unparseString = "DAY";
+    }
+  |
+    <MONTH>
+    {
+      granularity = Granularities.MONTH;
+      unparseString = "MONTH";
+    }
+  |
+    <YEAR>
+    {
+      granularity = Granularities.YEAR;
+      unparseString = "YEAR";
+    }
+  |
+    <ALL>
+    {
+      granularity = Granularities.ALL;
+      unparseString = "ALL";
+    }
+    [
+      <TIME>
+      {
+        unparseString += " TIME";
+      }
+    ]
+  |
+    e = Expression(ExprContext.ACCEPT_SUB_QUERY)
+    {
+      granularity = DruidSqlParserUtils.convertSqlNodeToGranularityThrowingParseExceptions(e);
+      unparseString = e.toString();
+    }
+  )
+  {
+    return new org.apache.druid.java.util.common.Pair(granularity, unparseString);
+  }
+}
+
+List<String> PartitionSpecs() :
+{
+  List<String> list;

Review Comment:
   <nit:> the naming could be more descriptive than `list`.



##########
sql/src/main/java/org/apache/druid/sql/calcite/parser/DruidSqlReplace.java:
##########
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.sql.calcite.parser;
+
+import com.google.common.base.Preconditions;
+import org.apache.calcite.sql.SqlInsert;
+import org.apache.calcite.sql.SqlNodeList;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.druid.java.util.common.granularity.Granularity;
+
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+import java.util.List;
+
+/**
+ * Extends the 'replace' call to hold custom parameters specific to Druid i.e. PARTITIONED BY and the PARTITION SPECS
+ * This class extends the {@link SqlInsert} so that this SqlNode can be used in
+ * {@link org.apache.calcite.sql2rel.SqlToRelConverter} for getting converted into RelNode, and further processing
+ */
+public class DruidSqlReplace extends SqlInsert
+{
+  public static final String SQL_REPLACE_TIME_CHUNKS = "sqlReplaceTimeChunks";
+
+  // This allows reusing super.unparse
+  public static final SqlOperator OPERATOR = SqlInsert.OPERATOR;
+
+  private final Granularity partitionedBy;
+  private final String partitionedByStringForUnparse;
+
+  private final List<String> replaceTimeChunks;
+
+  /**
+   * While partitionedBy and partitionedByStringForUnparse can be null as arguments to the constructor, this is
+   * disallowed (semantically) and the constructor performs checks to ensure that. This helps in producing friendly
+   * errors when the PARTITIONED BY custom clause is not present, and keeps its error separate from JavaCC/Calcite's
+   * custom errors which can be cryptic when someone accidentally forgets to explicitly specify the PARTITIONED BY clause
+   */
+  public DruidSqlReplace(
+      @Nonnull SqlInsert insertNode,
+      @Nullable Granularity partitionedBy,
+      @Nullable String partitionedByStringForUnparse,
+      @Nonnull List<String> replaceTimeChunks
+  ) throws ParseException
+  {
+    super(
+        insertNode.getParserPosition(),
+        (SqlNodeList) insertNode.getOperandList().get(0), // No better getter to extract this
+        insertNode.getTargetTable(),
+        insertNode.getSource(),
+        insertNode.getTargetColumnList()
+    );
+    if (partitionedBy == null) {
+      throw new ParseException("REPLACE statements must specify PARTITIONED BY clause explictly");
+    }
+    this.partitionedBy = partitionedBy;
+
+    Preconditions.checkNotNull(partitionedByStringForUnparse);
+    this.partitionedByStringForUnparse = partitionedByStringForUnparse;
+
+    this.replaceTimeChunks = replaceTimeChunks;

Review Comment:
   Shouldn't we validate the replace time chunks here? The validation could cover whether the time chunks are convertible into intervals and whether those intervals align with the PARTITIONED BY clause.
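   A rough sketch of what that validation could look like (the helper name is hypothetical; it assumes `Intervals.of` for parsing and `Granularity#bucketStart` for the alignment check, and the `ALL TIME` case would need its own branch):

   ```java
   // Sketch: reject time chunks that are not parseable intervals or whose endpoints
   // do not fall on PARTITIONED BY bucket boundaries. Assumes imports for
   // org.joda.time.Interval and org.apache.druid.java.util.common.Intervals.
   private static void validateReplaceTimeChunks(List<String> replaceTimeChunks, Granularity partitionedBy)
       throws ParseException
   {
     for (String timeChunk : replaceTimeChunks) {
       final Interval interval;
       try {
         interval = Intervals.of(timeChunk);
       }
       catch (IllegalArgumentException e) {
         throw new ParseException("'" + timeChunk + "' is not a valid interval for REPLACE");
       }
       if (!partitionedBy.bucketStart(interval.getStart()).equals(interval.getStart())
           || !partitionedBy.bucketStart(interval.getEnd()).equals(interval.getEnd())) {
         throw new ParseException("Interval '" + timeChunk + "' does not align with the PARTITIONED BY granularity");
       }
     }
   }
   ```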



##########
sql/src/main/java/org/apache/druid/sql/calcite/parser/DruidSqlReplace.java:
##########
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.sql.calcite.parser;
+
+import com.google.common.base.Preconditions;
+import org.apache.calcite.sql.SqlInsert;
+import org.apache.calcite.sql.SqlNodeList;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.druid.java.util.common.granularity.Granularity;
+
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+import java.util.List;
+
+/**
+ * Extends the 'replace' call to hold custom parameters specific to Druid i.e. PARTITIONED BY and the PARTITION SPECS
+ * This class extends the {@link SqlInsert} so that this SqlNode can be used in
+ * {@link org.apache.calcite.sql2rel.SqlToRelConverter} for getting converted into RelNode, and further processing
+ */
+public class DruidSqlReplace extends SqlInsert
+{
+  public static final String SQL_REPLACE_TIME_CHUNKS = "sqlReplaceTimeChunks";
+
+  // This allows reusing super.unparse
+  public static final SqlOperator OPERATOR = SqlInsert.OPERATOR;
+
+  private final Granularity partitionedBy;
+  private final String partitionedByStringForUnparse;
+
+  private final List<String> replaceTimeChunks;
+
+  /**
+   * While partitionedBy and partitionedByStringForUnparse can be null as arguments to the constructor, this is
+   * disallowed (semantically) and the constructor performs checks to ensure that. This helps in producing friendly
+   * errors when the PARTITIONED BY custom clause is not present, and keeps its error separate from JavaCC/Calcite's
+   * custom errors which can be cryptic when someone accidentally forgets to explicitly specify the PARTITIONED BY clause
+   */
+  public DruidSqlReplace(
+      @Nonnull SqlInsert insertNode,
+      @Nullable Granularity partitionedBy,
+      @Nullable String partitionedByStringForUnparse,
+      @Nonnull List<String> replaceTimeChunks
+  ) throws ParseException
+  {
+    super(
+        insertNode.getParserPosition(),
+        (SqlNodeList) insertNode.getOperandList().get(0), // No better getter to extract this
+        insertNode.getTargetTable(),
+        insertNode.getSource(),
+        insertNode.getTargetColumnList()
+    );
+    if (partitionedBy == null) {
+      throw new ParseException("REPLACE statements must specify PARTITIONED BY clause explictly");
+    }
+    this.partitionedBy = partitionedBy;
+
+    Preconditions.checkNotNull(partitionedByStringForUnparse);

Review Comment:
   <nit:> `checkNotNull` returns the reference itself, so this can be merged with the assignment on the next line.
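   i.e. something like:

   ```java
   this.partitionedByStringForUnparse = Preconditions.checkNotNull(partitionedByStringForUnparse);
   ```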



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

