Posted to user@spark.apache.org by Sung Hwan Chung <co...@cs.stanford.edu> on 2014/06/26 02:19:54 UTC

Does Spark restart cached workers even without failures?

I'm doing a coalesce with shuffle, caching the result, and then running thousands of iterations over it.
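For context, a minimal Scala sketch of the setup described above, using the standard RDD API; the input, partition count, and iteration count here are illustrative placeholders, not the actual job:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CoalesceCacheIterate {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("coalesce-cache-iterate"))

    // coalesce with shuffle = true performs a full shuffle
    // down to the requested number of partitions (64 here, arbitrary)
    val data = sc.parallelize(1 to 1000000)
      .coalesce(64, shuffle = true)
      .cache()

    // many iterations that repeatedly read the cached partitions
    var acc = 0L
    for (_ <- 1 to 1000) {
      acc += data.map(_.toLong).reduce(_ + _)
    }

    sc.stop()
  }
}
```

One possible explanation (not confirmed by this post): with the default `MEMORY_ONLY` storage level, cached partitions that get evicted under memory pressure are silently recomputed from lineage, which can re-run part of the shuffle stage behind the coalesce without any exception appearing on the workers.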

I noticed that sometimes, after running for a long time, Spark would perform a partial coalesce again for no apparent reason, even though there was no exception or failure on the workers' part.

Why is this happening?