Posted to issues@spark.apache.org by "Yang Jie (Jira)" <ji...@apache.org> on 2021/08/04 03:26:00 UTC

[jira] [Created] (SPARK-36406) Avoid truncating a write-failed file held by DiskBlockObjectWriter before deleting it

Yang Jie created SPARK-36406:
--------------------------------

             Summary: Avoid truncating a write-failed file held by DiskBlockObjectWriter before deleting it
                 Key: SPARK-36406
                 URL: https://issues.apache.org/jira/browse/SPARK-36406
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
    Affects Versions: 3.3.0
            Reporter: Yang Jie


We always perform a file truncate operation (via the DiskBlockObjectWriter.revertPartialWritesAndClose method) before deleting a write-failed file held by DiskBlockObjectWriter. A typical code path looks like this:

 
{code:java}
if (!success) {
  // This code path only happens if an exception was thrown above before we set success;
  // close our stuff and let the exception be thrown further
  writer.revertPartialWritesAndClose()
  if (file.exists()) {
    if (!file.delete()) {
      logWarning(s"Error deleting ${file}")
    }
  }
}{code}
 

This truncate operation seems unnecessary when the file is deleted immediately afterwards; we can add a new method that closes the writer and skips the truncate.
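A minimal sketch of the idea, in plain Java rather than Spark's Scala. The method name closeAndDelete is hypothetical (the actual new method name is not specified in this issue); the point is that when the partially written file is going to be deleted anyway, truncating it back to the last committed position first is wasted I/O, and closing the handle and deleting the file directly reaches the same end state:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class RevertSketch {

    // Hypothetical replacement for the revert-then-delete sequence:
    // close the stream (releasing the file handle) and delete the file
    // directly, with no intermediate truncate back to the committed offset.
    static void closeAndDelete(FileOutputStream out, File file) throws IOException {
        out.close(); // no truncate; the file is about to be removed anyway
        if (file.exists() && !file.delete()) {
            System.err.println("Error deleting " + file);
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate a partial, failed write to a temporary file.
        File file = File.createTempFile("spill", ".tmp");
        FileOutputStream out = new FileOutputStream(file);
        out.write(new byte[]{1, 2, 3});
        closeAndDelete(out, file);
        System.out.println(file.exists()); // false: removed without truncation
    }
}
```

The existing path pays for a truncate (a syscall, and in DiskBlockObjectWriter also stream re-opening work) whose result is immediately discarded by the delete; skipping it only changes behavior if a caller relied on the truncated file surviving a failed delete.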

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org