Posted to issues@spark.apache.org by "Attila Zsolt Piros (Jira)" <ji...@apache.org> on 2021/12/20 08:54:00 UTC
[jira] [Resolved] (SPARK-36406) No longer do file truncate operation before delete a write failed file held by DiskBlockObjectWriter
[ https://issues.apache.org/jira/browse/SPARK-36406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Attila Zsolt Piros resolved SPARK-36406.
----------------------------------------
Fix Version/s: 3.3.0
Resolution: Fixed
Issue resolved by pull request 33628
[https://github.com/apache/spark/pull/33628]
> No longer do file truncate operation before delete a write failed file held by DiskBlockObjectWriter
> ----------------------------------------------------------------------------------------------------
>
> Key: SPARK-36406
> URL: https://issues.apache.org/jira/browse/SPARK-36406
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 3.3.0
> Reporter: Yang Jie
> Assignee: Yang Jie
> Priority: Minor
> Fix For: 3.3.0
>
>
> We always perform a file truncate operation (via the DiskBlockObjectWriter.revertPartialWritesAndClose method) before deleting a write-failed file held by DiskBlockObjectWriter. A typical flow is as follows:
>
> {code:java}
> if (!success) {
>   // This code path only happens if an exception was thrown above before we set success;
>   // close our stuff and let the exception be thrown further
>   writer.revertPartialWritesAndClose()
>   if (file.exists()) {
>     if (!file.delete()) {
>       logWarning(s"Error deleting ${file}")
>     }
>   }
> }
> {code}
>
> This truncate operation seems unnecessary; we can add a new method that avoids it.
>
>
>
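In sketch form, the idea amounts to closing the underlying stream and deleting the file directly, skipping the truncate-on-revert step. The `closeAndDelete` helper below is purely illustrative (it is not Spark's actual API, and the real patch in pull request 33628 may differ); it only demonstrates the delete-without-truncate behavior the issue describes:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class DeleteWithoutTruncateSketch {

    // Hypothetical helper: instead of first truncating partial writes
    // (as revertPartialWritesAndClose does) and then deleting the file,
    // just close the stream and remove the file in one step.
    static boolean closeAndDelete(OutputStream out, File file) {
        try {
            out.close();
        } catch (IOException e) {
            // Ignore: the file is about to be deleted anyway.
        }
        return !file.exists() || file.delete();
    }

    public static void main(String[] args) throws IOException {
        File file = File.createTempFile("shuffle-spill", ".tmp");
        OutputStream out = new FileOutputStream(file);
        out.write(new byte[] {1, 2, 3}); // simulate a partial, failed write
        boolean deleted = closeAndDelete(out, file);
        System.out.println(deleted && !file.exists()); // prints: true
    }
}
```

Since the file is discarded either way on the failure path, the intermediate truncate buys nothing and just adds an extra filesystem operation.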
--
This message was sent by Atlassian Jira
(v8.20.1#820001)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org