Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/11/24 00:11:02 UTC

[GitHub] [spark] rdblue commented on pull request #30429: [SPARK-33492][SQL] DSv2: Append/Overwrite/ReplaceTable should invalidate cache

rdblue commented on pull request #30429:
URL: https://github.com/apache/spark/pull/30429#issuecomment-732498815


   Sorry I'm late to help review, but this overall looks good to me.
   
   The one minor issue I have is that the Spark session and logical plan are threaded through the physical plans as new parameters. I would rather pass a callback that performs the refresh, so that `DataSourceV2Strategy` decides what to do after the commit:
   
   ```scala
       case AppendData(r: DataSourceV2Relation, query, writeOptions, _) =>
         val refresh = () => session.sharedState.cacheManager.recacheByPlan(session, r)
         r.table.asWritable match {
           case v1 if v1.supports(TableCapability.V1_BATCH_WRITE) =>
             AppendDataExecV1(v1, writeOptions.asOptions, query, afterWrite = refresh) :: Nil
           case v2 =>
             AppendDataExec(v2, r, writeOptions.asOptions, planLater(query), afterWrite = refresh) :: Nil
         }
   ```
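   To make the shape of that suggestion concrete, here is a minimal, Spark-free sketch of the callback pattern (the types here are illustrative stand-ins, not Spark's real `AppendDataExecV1` or `CacheManager`): the exec node carries an `afterWrite: () => Unit` and invokes it only once the commit has succeeded, so the node never needs the session or the logical plan itself.
   
   ```scala
   // Hypothetical sketch only: stand-in types showing how an exec node
   // can take an afterWrite callback instead of a session + logical plan.
   object CallbackSketch {
     // Stand-in for Spark's cache manager; counts refreshes for illustration.
     final class CacheManager {
       var refreshCount = 0
       def recacheByPlan(): Unit = refreshCount += 1
     }
   
     // Stand-in for a V1 write exec node carrying the callback.
     final case class AppendDataExecV1(afterWrite: () => Unit) {
       def run(commit: () => Unit): Unit = {
         commit()     // perform the table write/commit
         afterWrite() // refresh cached plans only after the commit succeeds
       }
     }
   
     def main(args: Array[String]): Unit = {
       val cache = new CacheManager
       val refresh = () => cache.recacheByPlan()
       val exec = AppendDataExecV1(afterWrite = refresh)
       exec.run(commit = () => ())
       println(cache.refreshCount) // 1
     }
   }
   ```
   
   The strategy stays the only place that knows both the relation and the session, and each exec node just gets a `() => Unit` to run after write.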

