Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/08/02 08:25:06 UTC

[GitHub] [spark] panbingkun opened a new pull request, #37376: [SPARK-37599][SQL][TESTS] Unify v1 and v2 ALTER TABLE .. SET LOCATION tests

panbingkun opened a new pull request, #37376:
URL: https://github.com/apache/spark/pull/37376

   ### What changes were proposed in this pull request?
   
   - Move parser tests from DDLParserSuite to AlterTableSetLocationParserSuite.
   - Port DS v1 tests from DDLSuite and other test suites to v1.AlterTableSetLocationSuite.
   - Add a test for DS v2 ALTER TABLE .. SET LOCATION to v2.AlterTableSetLocationSuite.
   
   
   ### Why are the changes needed?
   To improve test coverage.
   
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   
   ### How was this patch tested?
   By running new test suites:
   
   > $ build/sbt -Phive-2.3 -Phive-thriftserver "test:testOnly *AlterTableSetLocationSuite"
   > $ build/sbt -Phive-2.3 -Phive-thriftserver "test:testOnly *AlterTableSetLocationParserSuite"




[GitHub] [spark] MaxGekk closed pull request #37376: [SPARK-37599][SQL][TESTS] Unify v1 and v2 ALTER TABLE .. SET LOCATION tests

Posted by GitBox <gi...@apache.org>.
MaxGekk closed pull request #37376: [SPARK-37599][SQL][TESTS] Unify v1 and v2 ALTER TABLE .. SET LOCATION tests
URL: https://github.com/apache/spark/pull/37376




[GitHub] [spark] MaxGekk commented on pull request #37376: [SPARK-37599][SQL][TESTS] Unify v1 and v2 ALTER TABLE .. SET LOCATION tests

Posted by GitBox <gi...@apache.org>.
MaxGekk commented on PR #37376:
URL: https://github.com/apache/spark/pull/37376#issuecomment-1214410244

   +1, LGTM. Merging to master.
   Thank you, @panbingkun.




[GitHub] [spark] AmplabJenkins commented on pull request #37376: [SPARK-37599][SQL][TESTS] Unify v1 and v2 ALTER TABLE .. SET LOCATION tests

Posted by GitBox <gi...@apache.org>.
AmplabJenkins commented on PR #37376:
URL: https://github.com/apache/spark/pull/37376#issuecomment-1202763668

   Can one of the admins verify this patch?




[GitHub] [spark] panbingkun commented on pull request #37376: [SPARK-37599][SQL][TESTS] Unify v1 and v2 ALTER TABLE .. SET LOCATION tests

Posted by GitBox <gi...@apache.org>.
panbingkun commented on PR #37376:
URL: https://github.com/apache/spark/pull/37376#issuecomment-1203834656

   cc @MaxGekk 




[GitHub] [spark] MaxGekk commented on a diff in pull request #37376: [SPARK-37599][SQL][TESTS] Unify v1 and v2 ALTER TABLE .. SET LOCATION tests

Posted by GitBox <gi...@apache.org>.
MaxGekk commented on code in PR #37376:
URL: https://github.com/apache/spark/pull/37376#discussion_r940289260


##########
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/command/AlterTableSetLocationSuite.scala:
##########
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.hive.execution.command
+
+import org.apache.spark.sql.execution.command.v1
+
+/**
+ * The class contains tests for the `ALTER TABLE .. SET LOCATION` command to check
+ * V1 Hive external table catalog.
+ */
+class AlterTableSetLocationSuite extends v1.AlterTableSetLocationSuiteBase with CommandSuiteBase {
+
+  override def buildCreateTableSQL(t: String): String = {
+    s"CREATE TABLE $t (col1 int, col2 string, a int, b int) " +
+      s"PARTITIONED BY (a, b) " +

Review Comment:
   s" is not needed here. How about:
   ```scala
       s"""CREATE TABLE $t (col1 int, col2 string, a int, b int)
         |PARTITIONED BY (a, b)
         |ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
         |STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.SequenceFileInputFormat'
         |OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'""".stripMargin
   ```



##########
sql/core/src/test/scala/org/apache/spark/sql/execution/command/AlterTableSetLocationSuiteBase.scala:
##########
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import org.apache.spark.sql.QueryTest
+
+/**
+ * This base suite contains unified tests for the `ALTER TABLE .. SET [SERDE|SERDEPROPERTIES]`

Review Comment:
   SET [SERDE|SERDEPROPERTIES] -> SET LOCATION?
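   That is, the copy-pasted scaladoc would become something like this (the rest of the
   comment is cut off in this diff, so the second line is illustrative):
   ```scala
   /**
    * This base suite contains unified tests for the `ALTER TABLE .. SET LOCATION`
    * command that check V1 and V2 table catalogs.
    */
   ```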



##########
sql/core/src/test/scala/org/apache/spark/sql/execution/command/v2/AlterTableSetLocationSuite.scala:
##########
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command.v2
+
+import scala.collection.JavaConverters._
+
+import org.apache.spark.SparkConf
+import org.apache.spark.sql.AnalysisException
+import org.apache.spark.sql.connector.catalog.{Identifier, InMemoryCatalog, Table}
+import org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
+import org.apache.spark.sql.connector.catalog.CatalogV2Util.withDefaultOwnership
+import org.apache.spark.sql.execution.command
+
+/**
+ * The class contains tests for the `ALTER TABLE .. SET LOCATION` command to
+ * check V2 table catalogs.
+ */
+class AlterTableSetLocationSuite
+  extends command.AlterTableSetLocationSuiteBase with CommandSuiteBase {
+
+  private def getTableMetadata(tableName: String): Table = {
+    val nameParts = spark.sessionState.sqlParser.parseMultipartIdentifier(tableName)
+    val v2Catalog = spark.sessionState.catalogManager.catalog(nameParts.head).asTableCatalog
+    val namespace = nameParts.drop(1).init.toArray
+    v2Catalog.loadTable(Identifier.of(namespace, nameParts.last))
+  }
+
+  override def sparkConf: SparkConf = super.sparkConf
+    .set("spark.sql.catalog.testcat", classOf[InMemoryCatalog].getName)

Review Comment:
   Why do you need this? `CommandSuiteBase` should set everything.
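   If so, the fix is simply to drop the override; a minimal sketch, assuming
   `CommandSuiteBase` already registers the in-memory test catalog:
   ```scala
   class AlterTableSetLocationSuite
     extends command.AlterTableSetLocationSuiteBase with CommandSuiteBase {
     // No sparkConf override: the test catalog is expected to come from CommandSuiteBase.
   }
   ```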



##########
sql/core/src/test/scala/org/apache/spark/sql/execution/command/v1/AlterTableSetLocationSuite.scala:
##########
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command.v1
+
+import java.net.URI
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.sql.AnalysisException
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.catalyst.catalog.CatalogTypes.TablePartitionSpec
+import org.apache.spark.sql.catalyst.catalog.CatalogUtils
+import org.apache.spark.sql.execution.command
+import org.apache.spark.sql.internal.SQLConf
+
+/**
+ * This base suite contains unified tests for the `ALTER TABLE .. SET LOCATION`
+ * command that check V1 table catalogs. The tests that cannot run for all V1 catalogs
+ * are located in more specific test suites:
+ *
+ *   - V1 In-Memory catalog: `org.apache.spark.sql.execution.command.v1.AlterTableSetLocationSuite`
+ *   - V1 Hive External catalog:
+ *     `org.apache.spark.sql.hive.execution.command.AlterTableSetLocationSuite`
+ */
+trait AlterTableSetLocationSuiteBase extends command.AlterTableSetLocationSuiteBase {
+
+  private lazy val sessionCatalog = spark.sessionState.catalog
+
+  protected def buildCreateTableSQL(t: String): String
+
+  private def normalizeSerdeProp(props: Map[String, String]): Map[String, String] = {
+    props.filterNot(p => Seq("serialization.format", "path").contains(p._1))
+  }
+
+  // Verify that the location is set to the expected string
+  private def checkLocation(tableIdent: TableIdentifier, expected: URI,
+    spec: Option[TablePartitionSpec] = None): Unit = {
+    val storageFormat = spec
+      .map { s => sessionCatalog.getPartition(tableIdent, s).storage }
+      .getOrElse { sessionCatalog.getTableMetadata(tableIdent).storage }
+    assert(storageFormat.locationUri ===
+      Some(makeQualifiedPath(CatalogUtils.URIToString(expected))))
+  }
+
+  test("alter table set location") {
+    withNamespaceAndTable("ns", "tbl") { t =>
+      sql(buildCreateTableSQL(t))
+      sql(s"INSERT INTO $t PARTITION (a = '1', b = '2') SELECT 1, 'abc'")
+
+      val tableIdent = TableIdentifier("tbl", Some("ns"))
+      val partSpec = Map("a" -> "1", "b" -> "2")
+
+      val catalogTable = sessionCatalog.getTableMetadata(tableIdent)
+      assert(catalogTable.storage.locationUri.isDefined)
+      assert(normalizeSerdeProp(catalogTable.storage.properties).isEmpty)
+
+      val catalogTablePartition = sessionCatalog.getPartition(tableIdent, partSpec)
+      assert(catalogTablePartition.storage.locationUri.isDefined)
+      assert(normalizeSerdeProp(catalogTablePartition.storage.properties).isEmpty)
+
+      // set table location
+      sql(s"ALTER TABLE $t SET LOCATION '/path/to/your/lovely/heart'")
+      checkLocation(tableIdent, new URI("/path/to/your/lovely/heart"))
+
+      // set table partition location
+      sql(s"ALTER TABLE $t PARTITION (a='1', b='2') SET LOCATION '/path/to/part/ways'")
+      checkLocation(tableIdent, new URI("/path/to/part/ways"), Some(partSpec))
+
+      // set location for partition spec in the upper case
+      withSQLConf(SQLConf.CASE_SENSITIVE.key -> "false") {
+        sql(s"ALTER TABLE $t PARTITION (A='1', B='2') SET LOCATION '/path/to/part/ways2'")
+        checkLocation(tableIdent, new URI("/path/to/part/ways2"), Some(partSpec))
+      }
+      withSQLConf(SQLConf.CASE_SENSITIVE.key -> "true") {
+        val e = intercept[AnalysisException] {
+          sql(s"ALTER TABLE $t PARTITION (A='1', B='2') SET LOCATION '/path/to/part/ways3'")
+        }.getMessage
+        assert(e.contains("not a valid partition column"))
+      }
+
+      sessionCatalog.setCurrentDatabase("ns")
+      // set table location without explicitly specifying database
+      sql("ALTER TABLE tbl SET LOCATION '/swanky/steak/place'")
+      checkLocation(tableIdent, new URI("/swanky/steak/place"))
+      // set table partition location without explicitly specifying database
+      sql("ALTER TABLE tbl PARTITION (a='1', b='2') SET LOCATION 'vienna'")
+      val table = sessionCatalog.getTableMetadata(TableIdentifier("tbl"))
+      val viennaPartPath = new Path(new Path(table.location), "vienna")
+      checkLocation(tableIdent, CatalogUtils.stringToURI(viennaPartPath.toString), Some(partSpec))
+
+      // table to alter does not exist
+      val e1 = intercept[AnalysisException] {
+        sql("ALTER TABLE ns.does_not_exist SET LOCATION '/mister/spark'")
+      }
+      assert(e1.getMessage.contains("Table not found: ns.does_not_exist"))
+
+      // partition to alter does not exist

Review Comment:
   Please, create separate tests for this and for "table to alter does not exist"
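   A sketch of that split, reusing the existing setup and assertions (the partition
   case is cut off in this diff, so its body below is only a guess at the shape):
   ```scala
   test("table to alter does not exist") {
     withNamespaceAndTable("ns", "tbl") { t =>
       sql(buildCreateTableSQL(t))
       val e = intercept[AnalysisException] {
         sql("ALTER TABLE ns.does_not_exist SET LOCATION '/mister/spark'")
       }
       assert(e.getMessage.contains("Table not found: ns.does_not_exist"))
     }
   }

   test("partition to alter does not exist") {
     withNamespaceAndTable("ns", "tbl") { t =>
       sql(buildCreateTableSQL(t))
       val e = intercept[AnalysisException] {
         sql(s"ALTER TABLE $t PARTITION (a='999', b='999') SET LOCATION '/part/path'")
       }
       // Assert on whatever partition-not-found message the catalog produces.
     }
   }
   ```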



##########
sql/core/src/test/scala/org/apache/spark/sql/execution/command/v1/AlterTableSetLocationSuite.scala:
##########
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command.v1
+
+import java.net.URI
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.sql.AnalysisException
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.catalyst.catalog.CatalogTypes.TablePartitionSpec
+import org.apache.spark.sql.catalyst.catalog.CatalogUtils
+import org.apache.spark.sql.execution.command
+import org.apache.spark.sql.internal.SQLConf
+
+/**
+ * This base suite contains unified tests for the `ALTER TABLE .. SET LOCATION`
+ * command that check V1 table catalogs. The tests that cannot run for all V1 catalogs
+ * are located in more specific test suites:
+ *
+ *   - V1 In-Memory catalog: `org.apache.spark.sql.execution.command.v1.AlterTableSetLocationSuite`
+ *   - V1 Hive External catalog:
+ *     `org.apache.spark.sql.hive.execution.command.AlterTableSetLocationSuite`
+ */
+trait AlterTableSetLocationSuiteBase extends command.AlterTableSetLocationSuiteBase {
+
+  private lazy val sessionCatalog = spark.sessionState.catalog
+
+  protected def buildCreateTableSQL(t: String): String
+
+  private def normalizeSerdeProp(props: Map[String, String]): Map[String, String] = {
+    props.filterNot(p => Seq("serialization.format", "path").contains(p._1))
+  }
+
+  // Verify that the location is set to the expected string
+  private def checkLocation(tableIdent: TableIdentifier, expected: URI,

Review Comment:
   Please, fix indentation. Should be 4 spaces, see https://github.com/databricks/scala-style-guide#spacing-and-indentation
   ```scala
     private def checkLocation(
         tableIdent: TableIdentifier,
         expected: URI,
         spec: Option[TablePartitionSpec] = None): Unit = {
   ```



##########
sql/core/src/test/scala/org/apache/spark/sql/execution/command/v1/AlterTableSetLocationSuite.scala:
##########
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command.v1
+
+import java.net.URI
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.sql.AnalysisException
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.catalyst.catalog.CatalogTypes.TablePartitionSpec
+import org.apache.spark.sql.catalyst.catalog.CatalogUtils
+import org.apache.spark.sql.execution.command
+import org.apache.spark.sql.internal.SQLConf
+
+/**
+ * This base suite contains unified tests for the `ALTER TABLE .. SET LOCATION`
+ * command that check V1 table catalogs. The tests that cannot run for all V1 catalogs
+ * are located in more specific test suites:
+ *
+ *   - V1 In-Memory catalog: `org.apache.spark.sql.execution.command.v1.AlterTableSetLocationSuite`
+ *   - V1 Hive External catalog:
+ *     `org.apache.spark.sql.hive.execution.command.AlterTableSetLocationSuite`
+ */
+trait AlterTableSetLocationSuiteBase extends command.AlterTableSetLocationSuiteBase {
+
+  private lazy val sessionCatalog = spark.sessionState.catalog
+
+  protected def buildCreateTableSQL(t: String): String
+
+  private def normalizeSerdeProp(props: Map[String, String]): Map[String, String] = {
+    props.filterNot(p => Seq("serialization.format", "path").contains(p._1))
+  }
+
+  // Verify that the location is set to the expected string
+  private def checkLocation(tableIdent: TableIdentifier, expected: URI,
+    spec: Option[TablePartitionSpec] = None): Unit = {
+    val storageFormat = spec
+      .map { s => sessionCatalog.getPartition(tableIdent, s).storage }
+      .getOrElse { sessionCatalog.getTableMetadata(tableIdent).storage }
+    assert(storageFormat.locationUri ===
+      Some(makeQualifiedPath(CatalogUtils.URIToString(expected))))
+  }
+
+  test("alter table set location") {
+    withNamespaceAndTable("ns", "tbl") { t =>
+      sql(buildCreateTableSQL(t))
+      sql(s"INSERT INTO $t PARTITION (a = '1', b = '2') SELECT 1, 'abc'")
+
+      val tableIdent = TableIdentifier("tbl", Some("ns"))
+      val partSpec = Map("a" -> "1", "b" -> "2")
+
+      val catalogTable = sessionCatalog.getTableMetadata(tableIdent)
+      assert(catalogTable.storage.locationUri.isDefined)
+      assert(normalizeSerdeProp(catalogTable.storage.properties).isEmpty)
+
+      val catalogTablePartition = sessionCatalog.getPartition(tableIdent, partSpec)
+      assert(catalogTablePartition.storage.locationUri.isDefined)
+      assert(normalizeSerdeProp(catalogTablePartition.storage.properties).isEmpty)
+
+      // set table location
+      sql(s"ALTER TABLE $t SET LOCATION '/path/to/your/lovely/heart'")
+      checkLocation(tableIdent, new URI("/path/to/your/lovely/heart"))
+
+      // set table partition location

Review Comment:
   Could you create separate tests for this and the cases below?
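   For example, the partition case could become its own test, repeating the setup
   from the main one:
   ```scala
   test("set table partition location") {
     withNamespaceAndTable("ns", "tbl") { t =>
       sql(buildCreateTableSQL(t))
       sql(s"INSERT INTO $t PARTITION (a = '1', b = '2') SELECT 1, 'abc'")
       val tableIdent = TableIdentifier("tbl", Some("ns"))
       val partSpec = Map("a" -> "1", "b" -> "2")

       sql(s"ALTER TABLE $t PARTITION (a='1', b='2') SET LOCATION '/path/to/part/ways'")
       checkLocation(tableIdent, new URI("/path/to/part/ways"), Some(partSpec))
     }
   }
   ```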



##########
sql/core/src/test/scala/org/apache/spark/sql/execution/command/v2/AlterTableSetLocationSuite.scala:
##########
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command.v2
+
+import scala.collection.JavaConverters._
+
+import org.apache.spark.SparkConf
+import org.apache.spark.sql.AnalysisException
+import org.apache.spark.sql.connector.catalog.{Identifier, InMemoryCatalog, Table}
+import org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
+import org.apache.spark.sql.connector.catalog.CatalogV2Util.withDefaultOwnership
+import org.apache.spark.sql.execution.command
+
+/**
+ * The class contains tests for the `ALTER TABLE .. SET LOCATION` command to
+ * check V2 table catalogs.
+ */
+class AlterTableSetLocationSuite
+  extends command.AlterTableSetLocationSuiteBase with CommandSuiteBase {
+
+  private def getTableMetadata(tableName: String): Table = {
+    val nameParts = spark.sessionState.sqlParser.parseMultipartIdentifier(tableName)
+    val v2Catalog = spark.sessionState.catalogManager.catalog(nameParts.head).asTableCatalog
+    val namespace = nameParts.drop(1).init.toArray
+    v2Catalog.loadTable(Identifier.of(namespace, nameParts.last))
+  }
+
+  override def sparkConf: SparkConf = super.sparkConf
+    .set("spark.sql.catalog.testcat", classOf[InMemoryCatalog].getName)
+
+  test("alter Table set location") {
+    val t = "testcat.ns1.ns2.tbl"
+    withTable(t) {

Review Comment:
   Please, use `withNamespaceAndTable` instead.
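   A sketch of that change, assuming the shared helper takes the namespace, table
   name, and catalog, creates the namespace, passes the qualified name in, and
   cleans both up afterwards:
   ```scala
   test("alter table set location") {
     withNamespaceAndTable("ns1.ns2", "tbl", "testcat") { t =>
       sql(s"CREATE TABLE $t (id int) USING foo")
       sql(s"ALTER TABLE $t SET LOCATION 's3://bucket/path'")

       val table = getTableMetadata(t)
       assert(table.name === t)
       assert(table.properties === withDefaultOwnership(
         Map("provider" -> "foo", "location" -> "s3://bucket/path")).asJava)
     }
   }
   ```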



##########
sql/core/src/test/scala/org/apache/spark/sql/execution/command/v2/AlterTableSetLocationSuite.scala:
##########
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command.v2
+
+import scala.collection.JavaConverters._
+
+import org.apache.spark.SparkConf
+import org.apache.spark.sql.AnalysisException
+import org.apache.spark.sql.connector.catalog.{Identifier, InMemoryCatalog, Table}
+import org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper
+import org.apache.spark.sql.connector.catalog.CatalogV2Util.withDefaultOwnership
+import org.apache.spark.sql.execution.command
+
+/**
+ * The class contains tests for the `ALTER TABLE .. SET LOCATION` command to
+ * check V2 table catalogs.
+ */
+class AlterTableSetLocationSuite
+  extends command.AlterTableSetLocationSuiteBase with CommandSuiteBase {
+
+  private def getTableMetadata(tableName: String): Table = {
+    val nameParts = spark.sessionState.sqlParser.parseMultipartIdentifier(tableName)
+    val v2Catalog = spark.sessionState.catalogManager.catalog(nameParts.head).asTableCatalog
+    val namespace = nameParts.drop(1).init.toArray
+    v2Catalog.loadTable(Identifier.of(namespace, nameParts.last))
+  }
+
+  override def sparkConf: SparkConf = super.sparkConf
+    .set("spark.sql.catalog.testcat", classOf[InMemoryCatalog].getName)
+
+  test("alter Table set location") {
+    val t = "testcat.ns1.ns2.tbl"
+    withTable(t) {
+      sql(s"CREATE TABLE $t (id int) USING foo")
+      sql(s"ALTER TABLE $t SET LOCATION 's3://bucket/path'")
+
+      val table = getTableMetadata(t)
+
+      assert(table.name === t)
+      assert(table.properties === withDefaultOwnership(
+        Map("provider" -> "foo", "location" -> "s3://bucket/path")).asJava)
+    }
+  }
+
+  test("alter table set partition location") {
+    val t = "testcat.ns1.ns2.tbl"
+    withTable(t) {

Review Comment:
   Use `withNamespaceAndTable`, please


