Posted to issues@spark.apache.org by "Tomas Bartalos (JIRA)" <ji...@apache.org> on 2019/07/25 15:22:00 UTC

[jira] [Commented] (SPARK-28025) HDFSBackedStateStoreProvider should not leak .crc files

    [ https://issues.apache.org/jira/browse/SPARK-28025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892875#comment-16892875 ] 

Tomas Bartalos commented on SPARK-28025:
----------------------------------------

I'm also affected by the performance issue caused by the .crc file leak in the checkpoint directory. [~skonto], thank you for the workaround; it works.
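For anyone landing here later: the workaround isn't quoted in this thread, so the following is an illustration only (it may or may not be the exact one referenced). Routing `file://` access through Hadoop's `RawLocalFileSystem` avoids the checksum siblings entirely, since it is the checksumming `LocalFileSystem` wrapper that emits them:

{noformat}
import org.apache.spark.sql.SparkSession

// Illustrative workaround: spark.hadoop.* keys are forwarded to the Hadoop
// Configuration, and RawLocalFileSystem (unlike the default checksumming
// LocalFileSystem) writes no hidden .crc sibling files.
// Trade-off: this disables CRC verification for all file:// access.
val spark = SparkSession.builder()
  .appName("streaming-job-without-crc-siblings")
  .config("spark.hadoop.fs.file.impl", "org.apache.hadoop.fs.RawLocalFileSystem")
  .getOrCreate()
{noformat}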

Would it be possible to implement cleanup of the .crc files for the cases where the checksum is still needed?
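Until then, a periodic sweep of orphaned checksum files keeps the count bounded without giving up checksums for live files. A minimal sketch (the helper and the orphan-only policy are my own, not an existing Spark API); it relies only on Hadoop's naming convention that the checksum for `1.delta` is the hidden sibling `.1.delta.crc`:

{noformat}
import java.nio.file.{Files, Path, Paths}
import scala.collection.JavaConverters._

// Sketch: delete ".<name>.crc" files whose data file no longer exists.
// Hadoop's ChecksumFileSystem names the checksum of "1.delta" as the
// hidden sibling ".1.delta.crc".
def pruneOrphanCrcFiles(root: Path): Unit = {
  val stream = Files.walk(root)
  try {
    stream.iterator().asScala
      .filter(Files.isRegularFile(_))
      .filter(_.getFileName.toString.matches("""\..+\.crc"""))
      .filter { crc =>
        val dataName = crc.getFileName.toString.stripPrefix(".").stripSuffix(".crc")
        !Files.exists(crc.resolveSibling(dataName))
      }
      .foreach(Files.delete(_))
  } finally stream.close()
}

pruneOrphanCrcFiles(Paths.get("/mnt/checkpoints"))  // hypothetical path
{noformat}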

> HDFSBackedStateStoreProvider should not leak .crc files 
> --------------------------------------------------------
>
>                 Key: SPARK-28025
>                 URL: https://issues.apache.org/jira/browse/SPARK-28025
>             Project: Spark
>          Issue Type: Bug
>          Components: Structured Streaming
>    Affects Versions: 2.4.3
>         Environment: Spark 2.4.3
> Kubernetes 1.11(?) (OpenShift)
> StateStore storage is on a mounted PVC, viewed as a local filesystem by the `FileContextBasedCheckpointFileManager`:
> {noformat}
> scala> glusterfm.isLocal
> res17: Boolean = true{noformat}
>            Reporter: Gerard Maas
>            Priority: Major
>
> The HDFSBackedStateStoreProvider, when using the default CheckpointFileManager, leaves '.crc' files behind: one .crc file is created for each `atomicFile` operation of the CheckpointFileManager.
> Over time, the number of files becomes very large, making the state store file system grow constantly and, in our case, deteriorating file system performance.
> Here's a sample of one of our spark storage volumes after 2 days of execution (4 stateful streaming jobs, each on a different sub-dir):
> {noformat}
> # Total files in PVC (used for checkpoints and state store)
> $ find . | wc -l
> 431796
> # .crc files only
> $ find . -name "*.crc" | wc -l
> 418053{noformat}
> With each .crc file occupying at least one storage block, the wasted storage runs into gigabytes.
> These jobs are running on Kubernetes. Our shared storage provider, GlusterFS, shows serious performance deterioration with this large number of files:
> {noformat}
> DEBUG HDFSBackedStateStoreProvider: fetchFiles() took 29164ms{noformat}
>  
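For context on where the siblings described above come from: Hadoop's default local filesystem (`LocalFileSystem`) is a checksumming wrapper, so every file it writes gets a hidden `.<name>.crc` companion. A self-contained sketch (paths are illustrative):

{noformat}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// FileSystem.getLocal returns the checksumming LocalFileSystem, so each
// create() also writes a hidden ".<name>.crc" sibling -- the files that
// accumulate under the state store.
val fs = FileSystem.getLocal(new Configuration())
val out = fs.create(new Path("/tmp/crc-demo/1.delta"))
out.writeBytes("demo")
out.close()
// /tmp/crc-demo now holds "1.delta" and ".1.delta.crc".

// Disabling checksum writing on the instance suppresses the sibling:
fs.setWriteChecksum(false)
{noformat}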



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org