Posted to reviews@spark.apache.org by srowen <gi...@git.apache.org> on 2017/03/01 16:52:41 UTC

[GitHub] spark pull request #17124: [SPARK-19779][SS]Delete needless tmp file after r...

Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17124#discussion_r103732674
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/HDFSBackedStateStoreProvider.scala ---
    @@ -282,8 +282,12 @@ private[state] class HDFSBackedStateStoreProvider(
           // target file will break speculation, skipping the rename step is the only choice. It's still
           // semantically correct because Structured Streaming requires rerunning a batch should
           // generate the same output. (SPARK-19677)
    +      // Also, a temp delta file generated by the first batch after a streaming job
    +      // restarts would otherwise be left behind on HDFS, so clean it up. (SPARK-19779)
           // scalastyle:on
    -      if (!fs.exists(finalDeltaFile) && !fs.rename(tempDeltaFile, finalDeltaFile)) {
    +      if (fs.exists(finalDeltaFile)) {
    +        fs.delete(tempDeltaFile, true)
    +      } else if (!fs.rename(tempDeltaFile, finalDeltaFile)) {
    --- End diff --
    
    If the final delta file exists, the temp file is deleted but nothing is renamed into place -- is that right?
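
    For reference, a minimal standalone sketch of how that commit path reads after the change (assuming a Hadoop FileSystem handle `fs` and the `tempDeltaFile`/`finalDeltaFile` paths from the diff; the wrapping method name here is hypothetical, not the actual HDFSBackedStateStoreProvider code):

        import java.io.IOException
        import org.apache.hadoop.fs.{FileSystem, Path}

        // Hypothetical sketch of the commit logic under discussion.
        def commitDeltaFile(fs: FileSystem, tempDeltaFile: Path, finalDeltaFile: Path): Unit = {
          if (fs.exists(finalDeltaFile)) {
            // The final delta file was already written (e.g. by a speculative or
            // re-run task), so only the leftover temp file is removed; no rename
            // happens in this branch. (SPARK-19779)
            fs.delete(tempDeltaFile, true)
          } else if (!fs.rename(tempDeltaFile, finalDeltaFile)) {
            throw new IOException(s"Failed to rename $tempDeltaFile to $finalDeltaFile")
          }
        }

    On that reading, when the final file already exists the temp file is simply dropped and no rename occurs, which is what the question above is asking about.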


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org