Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/02/01 13:43:13 UTC

[GitHub] [spark] MaxGekk commented on a change in pull request #31403: [SPARK-34301][SQL] Use logical plan of alter table in `CatalogImpl.recoverPartitions()`

MaxGekk commented on a change in pull request #31403:
URL: https://github.com/apache/spark/pull/31403#discussion_r567834009



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala
##########
@@ -446,9 +446,9 @@ class CatalogImpl(sparkSession: SparkSession) extends Catalog {
    * @since 2.1.1
    */
   override def recoverPartitions(tableName: String): Unit = {

Review comment:
       Other methods in `CatalogImpl` already work with both v1 and v2 tables; see `cacheTable()`, `isCached()`, and `refreshTable()`. I don't see much difference between those methods and `recoverPartitions()`.
   
   Also, the focus of this PR is to re-use the new resolution framework and to produce consistent error messages, not to add support for recovering partitions of v2 tables (which is not supported at the moment).
   
   In general, `CatalogImpl` contains a lot of things that are specific to the v1 catalog implementation.
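   For context, a minimal sketch of what routing `recoverPartitions()` through the resolution framework might look like: instead of resolving the table name against the v1 session catalog directly, the method would build an unresolved logical plan and let the analyzer resolve it, so unsupported cases fail with the same error messages as the SQL path. The exact node and helper names below (`RecoverPartitions`, `UnresolvedTable`, `parseMultipartIdentifier`) are taken from Spark's catalyst internals as I recall them; treat this as an illustration, not the literal patch.
   
   ```scala
   import org.apache.spark.sql.SparkSession
   import org.apache.spark.sql.catalyst.analysis.UnresolvedTable
   import org.apache.spark.sql.catalyst.plans.logical.RecoverPartitions
   
   // Sketch: recoverPartitions() expressed as a logical plan, resolved by
   // the analyzer rather than by looking up the v1 catalog directly.
   def recoverPartitionsViaPlan(spark: SparkSession, tableName: String): Unit = {
     // Parse "db.tbl" (or a longer multi-part name) into its parts.
     val ident = spark.sessionState.sqlParser.parseMultipartIdentifier(tableName)
     // Wrap the name in an unresolved node; the second argument is the
     // command name used in error messages, giving consistent errors
     // with the equivalent SQL statement.
     val plan = RecoverPartitions(UnresolvedTable(ident, "recoverPartitions()"))
     // Execute the plan; resolution and the v1/v2 dispatch happen inside.
     spark.sessionState.executePlan(plan).toRdd
   }
   ```
   
   The point of the design is that any table-resolution failure (missing table, temp view, unsupported v2 table) surfaces through the shared analyzer rules instead of ad-hoc checks inside `CatalogImpl`.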




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org