Posted to issues@spark.apache.org by "Glenn Strycker (JIRA)" <ji...@apache.org> on 2015/06/26 18:55:04 UTC
[jira] [Commented] (SPARK-8666) checkpointing does not take advantage of persisted/cached RDDs
[ https://issues.apache.org/jira/browse/SPARK-8666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14603175#comment-14603175 ]
Glenn Strycker commented on SPARK-8666:
---------------------------------------
I posted a Stack Overflow question to parallel this ticket: http://stackoverflow.com/questions/31078350/spark-rdd-checkpoint-on-persisted-cached-rdds-are-performing-the-dag-twice
One idea I had is that maybe I have to "materialize" the RDD twice, with one action to fill the cache and a second to write the checkpoint:
{noformat}
import org.apache.spark.storage.StorageLevel

// this will create the RDD and cache it, when materialized by count()
val newRDD = prevRDD.map(a => (a._1, 1L)).distinct.persist(StorageLevel.MEMORY_AND_DISK_SER)
print(newRDD.count())

// will this now checkpoint FROM THE EXISTING CACHE IN MEMORY?
newRDD.checkpoint()
print(newRDD.count())
{noformat}
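For reference, the scaladoc for RDD.checkpoint() says the method must be called before any job has been executed on the RDD, and strongly recommends persisting first, so perhaps the intended ordering is checkpoint-before-action rather than materializing twice. A minimal sketch of that ordering, assuming an existing SparkContext sc and a placeholder checkpoint directory:
{noformat}
import org.apache.spark.storage.StorageLevel

// the checkpoint directory must be set before checkpoint() is called
// ("/tmp/spark-checkpoints" is just a placeholder path)
sc.setCheckpointDir("/tmp/spark-checkpoints")

val newRDD = prevRDD.map(a => (a._1, 1L))
  .distinct
  .persist(StorageLevel.MEMORY_AND_DISK_SER)

// mark for checkpointing BEFORE the first action...
newRDD.checkpoint()

// ...so this single action fills the cache, and the checkpoint job
// that runs right after it can read the just-cached partitions
print(newRDD.count())
{noformat}
Even with this ordering the checkpoint is still written by a separate job after count() finishes, but with the RDD persisted that job should read the cached partitions instead of re-running the map/distinct shuffle.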
> checkpointing does not take advantage of persisted/cached RDDs
> --------------------------------------------------------------
>
> Key: SPARK-8666
> URL: https://issues.apache.org/jira/browse/SPARK-8666
> Project: Spark
> Issue Type: New Feature
> Reporter: Glenn Strycker
>
> I have been noticing that when checkpointing RDDs, all operations are occurring TWICE.
> For example, when I run the following code and watch the stages...
> {noformat}
> val newRDD = prevRDD.map(a => (a._1, 1L)).distinct.persist(StorageLevel.MEMORY_AND_DISK_SER)
> newRDD.checkpoint()
> print(newRDD.count())
> {noformat}
> I see distinct and count operations appearing TWICE, and shuffle disk writes and reads (from the distinct) occurring TWICE.
> My newRDD is persisted to memory, so why can't the checkpoint simply save those partitions to disk once the first operations have completed?
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org