Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2022/04/07 08:42:43 UTC

[GitHub] [flink] slinkydeveloper commented on a diff in pull request #18386: [FLINK-25684][table] Support enhanced `show databases` syntax

slinkydeveloper commented on code in PR #18386:
URL: https://github.com/apache/flink/pull/18386#discussion_r844867851


##########
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/TableEnvironment.java:
##########
@@ -786,6 +786,24 @@ void createFunction(
      */
     String[] listDatabases();
 
+    /**
+     * Gets the names of all databases registered in the specified catalog.
+     *
+     * @param catalogName specified catalog name
+     * @return A list of the names of all registered databases in the specified catalog.
+     */
+    String[] listDatabases(String catalogName);
+
+    /**
+     * Gets the names of all databases registered in the specified catalog.
+     *
+     * @param catalogName specified catalog name
+     * @param notLike whether the pattern is negated, i.e. {@code SHOW DATABASES NOT LIKE}
+     * @param likePattern the SQL LIKE pattern to match database names against
+     * @return A list of the names of all registered databases in the specified catalog.
+     */
+    String[] listDatabases(String catalogName, boolean notLike, String likePattern);

Review Comment:
   I don't like the idea of having a boolean flag like this in a public API, in particular in this one, which is a "level one" top-level API. I think we need to iterate on this. What I propose is:
   
   * Use overloads, for example `listDatabasesLike` and `listDatabasesNotLike`.
   * Define a new type to specify the filter, or somehow reuse the `Expression` stack we already have.
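   The second bullet could be sketched roughly as follows. `DatabaseFilter`, `like`, `notLike`, and `matches` are all hypothetical names illustrating one possible shape for such a filter type, not actual Flink API:
   
   ```java
   import java.util.regex.Pattern;
   
   /** Hypothetical filter type that would replace the boolean flag in the public API. */
   final class DatabaseFilter {
       private final String likePattern;
       private final boolean negated;
   
       private DatabaseFilter(String likePattern, boolean negated) {
           this.likePattern = likePattern;
           this.negated = negated;
       }
   
       /** Corresponds to SHOW DATABASES LIKE 'pattern'. */
       static DatabaseFilter like(String pattern) {
           return new DatabaseFilter(pattern, false);
       }
   
       /** Corresponds to SHOW DATABASES NOT LIKE 'pattern'. */
       static DatabaseFilter notLike(String pattern) {
           return new DatabaseFilter(pattern, true);
       }
   
       /** Translate the SQL LIKE wildcards (% and _) into a regex and apply it. */
       boolean matches(String databaseName) {
           String regex = Pattern.quote(likePattern)
                   .replace("%", "\\E.*\\Q")
                   .replace("_", "\\E.\\Q");
           boolean matched = databaseName.matches(regex);
           return negated != matched;
       }
   }
   ```
   
   A `listDatabases(String catalogName, DatabaseFilter filter)` overload would then read naturally at the call site, e.g. `tEnv.listDatabases("cat", DatabaseFilter.notLike("pre%"))`, instead of a bare `true`/`false` argument whose meaning is invisible without the Javadoc.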



##########
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/api/TableEnvironmentTest.scala:
##########
@@ -696,15 +695,46 @@ class TableEnvironmentTest {
   @Test
   def testExecuteSqlWithShowDatabases(): Unit = {
     val tableResult1 = tableEnv.executeSql("CREATE DATABASE db1 COMMENT 'db1_comment'")
-    assertEquals(ResultKind.SUCCESS, tableResult1.getResultKind)
-    val tableResult2 = tableEnv.executeSql("SHOW DATABASES")
-    assertEquals(ResultKind.SUCCESS_WITH_CONTENT, tableResult2.getResultKind)
-    assertEquals(
-      ResolvedSchema.of(Column.physical("database name", DataTypes.STRING())),
-      tableResult2.getResolvedSchema)
-    checkData(
-      util.Arrays.asList(Row.of("default_database"), Row.of("db1")).iterator(),
-      tableResult2.collect())
+    assertThat(tableResult1.getResultKind).isEqualTo(ResultKind.SUCCESS)
+    val tableResult2 = tableEnv.executeSql("CREATE DATABASE db2 COMMENT 'db2_comment'")
+    assertThat(tableResult2.getResultKind).isEqualTo(ResultKind.SUCCESS)
+    val tableResult3 = tableEnv.executeSql("CREATE DATABASE pre_db3 COMMENT 'db3_comment'")
+    assertThat(tableResult3.getResultKind).isEqualTo(ResultKind.SUCCESS)
+
+    val tableResult4 = tableEnv.executeSql("SHOW DATABASES")
+    assertThat(tableResult4.getResultKind).isEqualTo(ResultKind.SUCCESS_WITH_CONTENT)
+    assertThat(ResolvedSchema.of(Column.physical("database name", DataTypes.STRING())))
+      .isEqualTo(tableResult4.getResolvedSchema)
+    assertThat(CollectionUtil.iteratorToList(tableResult4.collect).toArray())
+      .isEqualTo(util.Arrays.asList(

Review Comment:
   Same below



##########
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/api/TableEnvironmentTest.scala:
##########
@@ -696,15 +695,46 @@ class TableEnvironmentTest {
   @Test
   def testExecuteSqlWithShowDatabases(): Unit = {
     val tableResult1 = tableEnv.executeSql("CREATE DATABASE db1 COMMENT 'db1_comment'")
-    assertEquals(ResultKind.SUCCESS, tableResult1.getResultKind)
-    val tableResult2 = tableEnv.executeSql("SHOW DATABASES")
-    assertEquals(ResultKind.SUCCESS_WITH_CONTENT, tableResult2.getResultKind)
-    assertEquals(
-      ResolvedSchema.of(Column.physical("database name", DataTypes.STRING())),
-      tableResult2.getResolvedSchema)
-    checkData(
-      util.Arrays.asList(Row.of("default_database"), Row.of("db1")).iterator(),
-      tableResult2.collect())
+    assertThat(tableResult1.getResultKind).isEqualTo(ResultKind.SUCCESS)
+    val tableResult2 = tableEnv.executeSql("CREATE DATABASE db2 COMMENT 'db2_comment'")
+    assertThat(tableResult2.getResultKind).isEqualTo(ResultKind.SUCCESS)
+    val tableResult3 = tableEnv.executeSql("CREATE DATABASE pre_db3 COMMENT 'db3_comment'")
+    assertThat(tableResult3.getResultKind).isEqualTo(ResultKind.SUCCESS)
+
+    val tableResult4 = tableEnv.executeSql("SHOW DATABASES")
+    assertThat(tableResult4.getResultKind).isEqualTo(ResultKind.SUCCESS_WITH_CONTENT)
+    assertThat(ResolvedSchema.of(Column.physical("database name", DataTypes.STRING())))
+      .isEqualTo(tableResult4.getResolvedSchema)
+    assertThat(CollectionUtil.iteratorToList(tableResult4.collect).toArray())
+      .isEqualTo(util.Arrays.asList(

Review Comment:
   `containsExactly`
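   
   The underlying problem with `isEqualTo` here is that it compares an `Object[]` (from `toArray()`) against a `java.util.List`, and an array never equals a list even when the elements match. A small plain-Java demo of the mismatch (AssertJ's `containsExactly` avoids it by comparing elements in order):
   
   ```java
   import java.util.Arrays;
   import java.util.List;
   
   public class ArrayVsListEquality {
       public static void main(String[] args) {
           Object[] actual = {"default_database", "db1"};
           List<Object> expected = Arrays.asList((Object) "default_database", "db1");
   
           // An array uses Object identity equality, so this is always false:
           System.out.println(actual.equals(expected));                // false
           // Element-wise comparison, which is what containsExactly performs:
           System.out.println(Arrays.asList(actual).equals(expected)); // true
       }
   }
   ```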



##########
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java:
##########
@@ -562,11 +563,26 @@ public Table from(TableDescriptor descriptor) {
 
     @Override
     public String[] listDatabases() {
-        return catalogManager
-                .getCatalog(catalogManager.getCurrentCatalog())
-                .get()
-                .listDatabases()
-                .toArray(new String[0]);
+        return listDatabases(catalogManager.getCurrentCatalog());
+    }
+
+    @Override
+    public String[] listDatabases(String catalogName) {
+        return catalogManager.getCatalog(catalogName).get().listDatabases().stream()
+                .toArray(String[]::new);
+    }
+
+    public String[] listDatabases(String catalogName, boolean notLike, String likePattern) {

Review Comment:
   Missing `@Override` here
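   
   The annotation matters because without it a typo in the signature would silently declare a brand-new method instead of failing compilation. A minimal sketch with hypothetical names (`Env`, `EnvImpl`) showing the intended shape:
   
   ```java
   interface Env {
       String[] listDatabases(String catalogName, boolean notLike, String likePattern);
   }
   
   class EnvImpl implements Env {
       @Override // compiler now verifies this really overrides the interface method
       public String[] listDatabases(String catalogName, boolean notLike, String likePattern) {
           return new String[] {"default_database"};
       }
   }
   ```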



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org