Posted to issues@spark.apache.org by "Umesh Chaudhary (JIRA)" <ji...@apache.org> on 2017/01/11 08:14:58 UTC

[jira] [Commented] (SPARK-17885) Spark Streaming deletes checkpointed RDD then tries to load it after restart

    [ https://issues.apache.org/jira/browse/SPARK-17885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817614#comment-15817614 ] 

Umesh Chaudhary commented on SPARK-17885:
-----------------------------------------

Yes, you are correct that "spark.cleaner.ttl" has nothing to do with cleaning checkpointed RDDs. However, ContextCleaner does clean checkpointed files once their RDD reference goes out of scope, and that behaviour is controlled by "spark.cleaner.referenceTracking.cleanCheckpoints".
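
For reference, here is a minimal sketch of how that setting would be applied; the app name and checkpoint directory are illustrative placeholders, not taken from this issue:

{code}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Ask ContextCleaner to remove checkpoint files once the corresponding
// RDD reference goes out of scope (this flag defaults to false).
val conf = new SparkConf()
  .setAppName("checkpoint-cleanup-example") // hypothetical app name
  .set("spark.cleaner.referenceTracking.cleanCheckpoints", "true")

// 30-second batch interval, matching the configuration described in the issue.
val ssc = new StreamingContext(conf, Seconds(30))
ssc.checkpoint("hdfs:///tmp/checkpoint-example") // hypothetical checkpoint directory
{code}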

Looking at the two log lines below:

2016-10-09 02:48:43,533 [streaming-job-executor-0] INFO org.apache.spark.rdd.ReliableRDDCheckpointData - Done checkpointing RDD 73847 to hdfs://proc-job/checkpoint/cadf8dcf-ebc2-4366-a2e1-0939976c6ce1/rdd-73847, new parent is RDD 73872
host = ip-10-1-1-13.ec2.internal

2016-10-09 02:53:14,696 [JobGenerator] INFO org.apache.spark.streaming.dstream.DStreamCheckpointData - Deleted checkpoint file 'hdfs://proc-job/checkpoint/cadf8dcf-ebc2-4366-a2e1-0939976c6ce1/rdd-73847' for time 1475981310000 ms
host = ip-10-1-1-13.ec2.internal

Notice that DStreamCheckpointData deleted the checkpoint directory for time 1475981310000 ms, which is only possible when a newer checkpoint directory is already present. From the code we can see the logic behind cleaning the checkpoint directory:

{code}
// Find all the checkpointed RDDs (i.e. files) that are older than `lastCheckpointFileTime`
// This is because checkpointed RDDs older than this are not going to be needed
// even after master fails, as the checkpoint data of `time` does not refer to those files
{code}
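
To make this concrete, here is a heavily simplified sketch of that cleanup idea; it uses plain Long timestamps and a println in place of the actual HDFS delete, and is illustrative only, not the Spark source:

{code}
import scala.collection.mutable

// Every checkpoint file is tracked together with the batch time (ms) it was written for.
val timeToCheckpointFile = mutable.HashMap[Long, String]()

// Once a newer checkpoint has been committed, any file older than the oldest file
// that checkpoint still refers to (`lastCheckpointFileTime`) can be deleted, because
// a restarted driver should never need it again.
def cleanup(lastCheckpointFileTime: Long): Unit = {
  val filesToDelete = timeToCheckpointFile.filter { case (t, _) => t < lastCheckpointFileTime }
  filesToDelete.foreach { case (t, file) =>
    println(s"Would delete checkpoint file '$file' for time $t ms") // the real code deletes the HDFS path
    timeToCheckpointFile.remove(t)
  }
}
{code}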

So could you attach the logs between 2016-10-09 02:48:43,533 and 2016-10-09 02:53:14,696? We would like to see what the application does during this ~5-minute interval.

> Spark Streaming deletes checkpointed RDD then tries to load it after restart
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-17885
>                 URL: https://issues.apache.org/jira/browse/SPARK-17885
>             Project: Spark
>          Issue Type: Bug
>          Components: DStreams
>    Affects Versions: 1.5.1
>            Reporter: Cosmin Ciobanu
>
> The issue is that the Spark driver checkpoints an RDD, deletes it, the job restarts, and the new driver tries to load the deleted checkpoint RDD.
> The application is run in YARN, which attempts to restart the application a number of times (100 in our case), all of which fail because the deleted RDD is missing.
> Here is a Splunk log which shows the inconsistency in checkpoint behaviour:
> *2016-10-09 02:48:43,533* [streaming-job-executor-0] INFO  org.apache.spark.rdd.ReliableRDDCheckpointData - Done checkpointing RDD 73847 to hdfs://proc-job/checkpoint/cadf8dcf-ebc2-4366-a2e1-0939976c6ce1/*rdd-73847*, new parent is RDD 73872
> host = ip-10-1-1-13.ec2.internal
> *2016-10-09 02:53:14,696* [JobGenerator] INFO  org.apache.spark.streaming.dstream.DStreamCheckpointData - Deleted checkpoint file 'hdfs://proc-job/checkpoint/cadf8dcf-ebc2-4366-a2e1-0939976c6ce1/*rdd-73847*' for time 1475981310000 ms
> host = ip-10-1-1-13.ec2.internal
> *Job restarts here, notice driver host change from ip-10-1-1-13.ec2.internal to ip-10-1-1-25.ec2.internal.*
> *2016-10-09 02:53:30,175* [Driver] INFO  org.apache.spark.streaming.dstream.DStreamCheckpointData - Restoring checkpointed RDD for time 1475981310000 ms from file 'hdfs://proc-job/checkpoint/cadf8dcf-ebc2-4366-a2e1-0939976c6ce1/*rdd-73847*'
> host = ip-10-1-1-25.ec2.internal
> *2016-10-09 02:53:30,491* [Driver] ERROR org.apache.spark.deploy.yarn.ApplicationMaster - User class threw exception: java.lang.IllegalArgumentException: requirement failed: Checkpoint directory does not exist: hdfs://proc-job/checkpoint/cadf8dcf-ebc2-4366-a2e1-0939976c6ce1/*rdd-73847*
> java.lang.IllegalArgumentException: requirement failed: Checkpoint directory does not exist: hdfs://proc-job/checkpoint/cadf8dcf-ebc2-4366-a2e1-0939976c6ce1/*rdd-73847*
> host = ip-10-1-1-25.ec2.internal
> Spark streaming is configured with a microbatch interval of 30 seconds, checkpoint interval of 120 seconds, and cleaner.ttl of 28800 (8 hours), but as far as I can tell, this TTL only affects metadata cleanup interval. RDDs seem to be deleted every 4-5 minutes after being checkpointed.
> Running on top of Spark 1.5.1.
> There are at least two possible issues here:
> - In case of a driver restart the new driver tries to load checkpointed RDDs which the previous driver had just deleted;
> - Spark loads stale checkpointed data - the logs show that the deleted RDD was initially checkpointed 4 minutes and 31 seconds before deletion, and 4 minutes and 47 seconds before the new driver tries to load it. Given that the checkpoint interval is 120 seconds, it makes no sense to load data older than that.
> P.S. Looking at the source code with the event loop that handles checkpoint updates and cleanup, nothing seems to have changed in more recent versions of Spark, so the bug is likely present in 2.0.1 as well.
> P.P.S. The issue is difficult to reproduce - it only occurs once in every 10 or so restarts, and only in clusters with high-load.


