Posted to issues@spark.apache.org by "Nicholas Chammas (Jira)" <ji...@apache.org> on 2021/04/15 17:16:00 UTC
[jira] [Commented] (SPARK-33000) cleanCheckpoints config does not clean all checkpointed RDDs on shutdown
[ https://issues.apache.org/jira/browse/SPARK-33000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17322333#comment-17322333 ]
Nicholas Chammas commented on SPARK-33000:
------------------------------------------
Per the discussion [on the dev list|http://apache-spark-developers-list.1001551.n3.nabble.com/Shutdown-cleanup-of-disk-based-resources-that-Spark-creates-td30928.html] and [PR|https://github.com/apache/spark/pull/31742], it seems we just want to update the documentation to clarify that {{cleanCheckpoints}} does not affect shutdown behavior. That is, checkpoints are not meant to be cleaned up on shutdown (whether planned or unplanned), and the config is currently working as intended.
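
Since checkpoint data survives shutdown by design, an application that wants its checkpoints removed has to delete them itself. A minimal sketch of that pattern (the {{remove_checkpoint_dir}} helper is illustrative and not part of Spark's API; the directory path must match whatever was passed to {{setCheckpointDir}}):

```python
import atexit
import shutil
import tempfile
from pathlib import Path

# Hypothetical helper -- Spark offers no such API; the application owns
# the checkpoint directory it configured via setCheckpointDir().
def remove_checkpoint_dir(checkpoint_dir: str) -> None:
    """Delete the checkpoint directory tree, ignoring missing paths."""
    shutil.rmtree(checkpoint_dir, ignore_errors=True)

if __name__ == "__main__":
    # Stand-in for a path like '/tmp/spark/checkpoint/'; a temp dir
    # keeps this demo self-contained and safe to run.
    checkpoint_dir = tempfile.mkdtemp(prefix="spark-checkpoint-")
    # Mimic the per-RDD subdirectory Spark writes under the checkpoint dir.
    (Path(checkpoint_dir) / "rdd-0").mkdir()

    # Register cleanup so the directory is removed on normal interpreter
    # exit. Note this cannot help with unplanned shutdowns (kill -9,
    # node failure), which is exactly why Spark leaves the data in place.
    atexit.register(remove_checkpoint_dir, checkpoint_dir)
```

This only covers planned shutdowns; after a crash, the leftover directory has to be swept up externally (e.g. by a tmp-cleaner or a scheduled job).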
> cleanCheckpoints config does not clean all checkpointed RDDs on shutdown
> ------------------------------------------------------------------------
>
> Key: SPARK-33000
> URL: https://issues.apache.org/jira/browse/SPARK-33000
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 2.4.6
> Reporter: Nicholas Chammas
> Priority: Minor
>
> Maybe it's just that the documentation needs to be updated, but I found this surprising:
> {code:python}
> $ pyspark
> ...
> >>> spark.conf.set('spark.cleaner.referenceTracking.cleanCheckpoints', 'true')
> >>> spark.sparkContext.setCheckpointDir('/tmp/spark/checkpoint/')
> >>> a = spark.range(10)
> >>> a.checkpoint()
> DataFrame[id: bigint]
> >>> exit(){code}
> The checkpoint data is left behind in {{/tmp/spark/checkpoint/}}. I expected Spark to clean it up on shutdown.
> The documentation for {{spark.cleaner.referenceTracking.cleanCheckpoints}} says:
> {quote}Controls whether to clean checkpoint files if the reference is out of scope.
> {quote}
> When Spark shuts down, everything goes out of scope, so I'd expect all checkpointed RDDs to be cleaned up.
> For the record, I see the same behavior in both the Scala and Python REPLs.
> Evidence the current behavior is confusing:
> * [https://stackoverflow.com/q/52630858/877069]
> * [https://stackoverflow.com/q/60009856/877069]
> * [https://stackoverflow.com/q/61454740/877069]
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)