Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/08/06 04:02:41 UTC

[GitHub] [flink] lirui-apache commented on a change in pull request #13017: [FLINK-18258][hive] Implement SHOW PARTITIONS for Hive dialect

lirui-apache commented on a change in pull request #13017:
URL: https://github.com/apache/flink/pull/13017#discussion_r466122748



##########
File path: flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/dql/SqlShowPartitions.java
##########
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.parser.dql;
+
+import org.apache.flink.sql.parser.SqlPartitionUtils;
+
+import org.apache.calcite.sql.SqlCall;
+import org.apache.calcite.sql.SqlIdentifier;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.SqlNode;
+import org.apache.calcite.sql.SqlNodeList;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlSpecialOperator;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.parser.SqlParserPos;
+
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.LinkedHashMap;
+import java.util.List;
+
+import static java.util.Objects.requireNonNull;
+
+/**
+ * SHOW PARTITIONS sql call.
+ */
+public class SqlShowPartitions extends SqlCall {
+
+	public static final SqlSpecialOperator OPERATOR = new SqlSpecialOperator("SHOW PARTITIONS", SqlKind.OTHER);
+
+	protected final SqlIdentifier tableIdentifier;
+	protected final SqlNodeList partitionSpec;
+
+	public SqlShowPartitions(SqlParserPos pos, SqlIdentifier tableName, @Nullable SqlNodeList partitionSpec) {
+		super(pos);
+		this.tableIdentifier = requireNonNull(tableName, "tableName should not be null");
+		this.partitionSpec = partitionSpec;
+	}
+
+	@Override
+	public SqlOperator getOperator() {
+		return OPERATOR;
+	}
+
+	@Override
+	public List<SqlNode> getOperandList() {
+		List<SqlNode> operands = new ArrayList<>();
+		operands.add(tableIdentifier);
+		operands.add(partitionSpec);
+		return operands;
+	}
+
+	@Override
+	public void unparse(SqlWriter writer, int leftPrec, int rightPrec) {
+		writer.keyword("SHOW PARTITIONS");
+		tableIdentifier.unparse(writer, leftPrec, rightPrec);
+		SqlNodeList partitionSpec = getPartitionSpec();
+		if (partitionSpec != null && partitionSpec.size() > 0) {

Review comment:
       If `partitionSpec` is not null, I think it must not be empty. We can add a check to verify that.
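
       A minimal sketch of such a check in the constructor (illustrative, not the PR's code):

           public SqlShowPartitions(SqlParserPos pos, SqlIdentifier tableName, @Nullable SqlNodeList partitionSpec) {
               super(pos);
               this.tableIdentifier = requireNonNull(tableName, "tableName should not be null");
               // A non-null spec produced by the parser should carry at least one key-value pair.
               if (partitionSpec != null && partitionSpec.size() == 0) {
                   throw new IllegalArgumentException("partitionSpec should not be empty");
               }
               this.partitionSpec = partitionSpec;
           }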

##########
File path: flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/HiveDialectITCase.java
##########
@@ -450,6 +450,18 @@ public void testAddDropPartitions() throws Exception {
 		ObjectPath tablePath = new ObjectPath("default", "tbl");
 		assertEquals(2, hiveCatalog.listPartitions(tablePath).size());
 
+		List<Row> partitions = Lists.newArrayList(tableEnv.executeSql("show partitions tbl").collect());

Review comment:
       I think it'll be clearer to move these to a separate test case.
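
       For example, a standalone test could look roughly like this (the DDL and the test
       name are illustrative, not taken from the PR):

           @Test
           public void testShowPartitions() throws Exception {
               tableEnv.executeSql("create table tbl (x int) partitioned by (p int)");
               tableEnv.executeSql("alter table tbl add partition (p=1) partition (p=2)");
               // collect the SHOW PARTITIONS result and verify both partitions are listed
               List<Row> partitions = Lists.newArrayList(
                       tableEnv.executeSql("show partitions tbl").collect());
               assertEquals(2, partitions.size());
           }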

##########
File path: flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
##########
@@ -1021,6 +1022,28 @@ private TableResult executeOperation(Operation operation) {
 			return buildShowResult("function name", listFunctions());
 		} else if (operation instanceof ShowViewsOperation) {
 			return buildShowResult("view name", listViews());
+		} else if (operation instanceof ShowPartitionsOperation) {
+			String exMsg = getDQLOpExecuteErrorMsg(operation.asSummaryString());

Review comment:
       Why not just call `getDDLOpExecuteErrorMsg`?
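
       A sketch of the suggested simplification (the rest of the branch is elided, as in
       the diff above):

           } else if (operation instanceof ShowPartitionsOperation) {
               // reuse the existing DDL helper instead of adding a DQL-specific variant
               String exMsg = getDDLOpExecuteErrorMsg(operation.asSummaryString());
               // ... handle the operation as before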

##########
File path: flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/SqlCommandParserTest.java
##########
@@ -389,6 +409,15 @@ public static TestItem validSql(
 			return testItem;
 		}
 
+		public static TestItem validSql(

Review comment:
       We already have a `TestItem::validSql` method that takes SQL dialect as a parameter. Can you reuse that?
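
       Hypothetical usage, assuming the existing overload has a shape like
       `validSql(SqlDialect sqlDialect, String sql, SqlCommand expectedCmd, String... expectedOperands)`
       (the operand value below is a guess):

           TestItem.validSql(SqlDialect.HIVE, "SHOW PARTITIONS tbl", SqlCommand.SHOW_PARTITIONS, "tbl")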

##########
File path: flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java
##########
@@ -549,6 +552,22 @@ private void callShowModules() {
 		terminal.flush();
 	}
 
+	private void callShowPartitions() {
+		final List<String> partitions;
+		try {
+			partitions = getShowResult("PARTITIONS");

Review comment:
       Don't we need the table identifier and partition spec here?
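
       A sketch of one way to thread them through, assuming `getShowResult` prepends
       "SHOW " to its argument (the `cmdCall` parameter is illustrative):

           private void callShowPartitions(SqlCommandCall cmdCall) {
               final List<String> partitions;
               try {
                   // forward the table identifier and the optional partition spec,
                   // e.g. "PARTITIONS tbl PARTITION (p=1)", not just "PARTITIONS"
                   partitions = getShowResult("PARTITIONS " + cmdCall.operands[0]);
               } catch (SqlExecutionException e) {
                   printExecutionException(e);
                   return;
               }
               // printing of the result elided; same as the other callShowXxx methods
           }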

##########
File path: flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/SqlCommandParserTest.java
##########
@@ -299,6 +299,21 @@ public void testCommands() throws Exception {
 		}
 	}
 
+	@Test
+	public void testHiveCommands() throws Exception {
+		SqlParserHelper helper = new SqlParserHelper(SqlDialect.HIVE);
+		parser = helper.getSqlParser();
+		List<TestItem> testItems = Arrays.asList(
+			// show partitions
+			TestItem.invalidSql("SHOW PARTITIONS ", SqlExecutionException.class, "Encountered \"<EOF>\""),

Review comment:
       I don't think this makes sense, unless `TestItem::invalidSql` also supports the HIVE dialect.
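
       If HIVE-dialect support is wanted, a hypothetical dialect-aware overload might look
       like this (assuming `TestItem` records the dialect the way the `validSql` overload
       suggests; the field name is a guess):

           public static TestItem invalidSql(
                   SqlDialect sqlDialect,
                   String sql,
                   Class<? extends Throwable> expectedException,
                   String expectedExceptionMsg) {
               TestItem testItem = invalidSql(sql, expectedException, expectedExceptionMsg);
               // configure the parser dialect before the item is run (field name assumed)
               testItem.sqlDialect = sqlDialect;
               return testItem;
           }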




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org