Posted to issues@spark.apache.org by "Reynold Xin (JIRA)" <ji...@apache.org> on 2016/11/02 05:59:58 UTC

[jira] [Resolved] (SPARK-17475) HDFSMetadataLog should not leak CRC files

     [ https://issues.apache.org/jira/browse/SPARK-17475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Reynold Xin resolved SPARK-17475.
---------------------------------
       Resolution: Fixed
    Fix Version/s: 2.1.0

> HDFSMetadataLog should not leak CRC files
> -----------------------------------------
>
>                 Key: SPARK-17475
>                 URL: https://issues.apache.org/jira/browse/SPARK-17475
>             Project: Spark
>          Issue Type: Sub-task
>          Components: DStreams
>    Affects Versions: 2.0.1
>            Reporter: Frederick Reiss
>            Assignee: Frederick Reiss
>             Fix For: 2.1.0
>
>
> When HDFSMetadataLog uses a log directory on a filesystem other than HDFS (e.g. NFS or the driver node's local filesystem), the class leaves orphan checksum (CRC) files in the log directory. The files have names that follow the pattern "..[long UUID hex string].tmp.crc". These files exist because HDFSMetadataLog renames its temporary files without renaming the corresponding checksum files. Since one CRC file is left behind per batch, the directory fills up quite quickly.
> I'm not certain, but this problem might also occur with certain versions of the HDFS client APIs.
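
The leak described above comes from checksummed filesystems storing a CRC as a hidden sibling file (".<name>.crc"): renaming the data file leaves the old CRC behind. The cleanup idea can be sketched as follows. This is an illustrative Python sketch, not Spark's actual code; the function name and paths are hypothetical.

```python
import os

def commit_without_crc_leak(tmp_path, final_path):
    """Rename a temp metadata file into place, then delete the orphan
    checksum file a checksummed filesystem leaves next to the old name.
    Hypothetical sketch; not the HDFSMetadataLog API."""
    os.rename(tmp_path, final_path)
    # Checksummed filesystems store the CRC as ".<basename>.crc"
    # alongside the data file; the rename does not move it.
    crc = os.path.join(os.path.dirname(tmp_path),
                       "." + os.path.basename(tmp_path) + ".crc")
    if os.path.exists(crc):
        os.remove(crc)
```

An alternative (and what a real fix might prefer) is to perform the rename through the filesystem's own API so the checksum moves with the file, falling back to explicit deletion only when that is not possible.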



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org