Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2021/04/28 21:01:29 UTC

[GitHub] [spark] viirya commented on a change in pull request #32361: [SPARK-35240][SS] Use CheckpointFileManager for checkpoint file manipulation

viirya commented on a change in pull request #32361:
URL: https://github.com/apache/spark/pull/32361#discussion_r622544504



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/CheckpointFileManager.scala
##########
@@ -83,6 +83,9 @@ trait CheckpointFileManager {
 
   /** Is the default file system this implementation is operating on the local file system. */
   def isLocal: Boolean
+
+  /** Returns a qualified path object for the file system's working directory. */
+  def makeQualified(path: Path): Path

Review comment:
       Oh, that makes sense. Actually, the only place `ResolveWriteToStream` uses `makeQualified` is to call `mkdirs` on the qualified path. Now that we also delegate that to `CheckpointFileManager`, we don't need to call `makeQualified` anymore.
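To illustrate the point in the comment: a minimal, self-contained sketch (plain Scala with hypothetical names and a string-based path, not the actual Spark or Hadoop API) of why a call site no longer needs `makeQualified` once `mkdirs` qualifies the path itself:

```scala
// Hypothetical sketch: when mkdirs qualifies the path internally,
// callers (such as a resolver rule) can pass the raw path directly.
trait CheckpointFileManager {
  /** Qualify a possibly-relative path against the working directory. */
  def makeQualified(path: String): String

  /** Create the directory, returning the qualified path it used. */
  def mkdirs(path: String): String
}

class LocalCheckpointFileManager(workingDir: String) extends CheckpointFileManager {
  def makeQualified(path: String): String =
    if (path.startsWith("/") || path.contains(":")) path
    else s"$workingDir/$path"

  // Qualification happens here, so the caller needs no separate
  // makeQualified step before creating the checkpoint directory.
  def mkdirs(path: String): String = {
    val qualified = makeQualified(path)
    // (a real implementation would create the directory on the file system)
    qualified
  }
}

object Demo extends App {
  val fm = new LocalCheckpointFileManager("file:/tmp/ckpt")
  // The call site passes the raw path; no makeQualified needed.
  println(fm.mkdirs("state"))
}
```

This mirrors the review's reasoning only in shape: since the directory creation is delegated to the manager, the qualification step can live inside it rather than at every call site.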
-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org