Posted to user@spark.apache.org by David Thomas <dt...@gmail.com> on 2014/01/30 01:50:56 UTC

Question on Scalability

How does Spark handle the situation where the RDD does not fit into the
memory of all the machines in the cluster together?

Re: Question on Scalability

Posted by Khanderao kand <kh...@gmail.com>.
Yes. The data that does not fit in memory can be persisted to local disk. As a result,
performance will degrade, but the application will continue.
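A minimal Scala sketch of how this is typically controlled, assuming a standard Spark
deployment: the exact behavior depends on the storage level you choose when persisting the
RDD, and MEMORY_AND_DISK is the level that spills overflow partitions to local disk. The
application name and input path below are hypothetical, for illustration only.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object StorageLevelExample {
  def main(args: Array[String]): Unit = {
    // Hypothetical application name; adjust for your own job.
    val conf = new SparkConf().setAppName("StorageLevelExample")
    val sc = new SparkContext(conf)

    // Hypothetical input path, used only to illustrate the API.
    val lines = sc.textFile("hdfs:///data/large-input.txt")

    // MEMORY_AND_DISK keeps partitions in memory when they fit and spills
    // the remaining partitions to local disk, so the job still completes
    // (more slowly) when the RDD exceeds the cluster's aggregate memory.
    lines.persist(StorageLevel.MEMORY_AND_DISK)

    println(lines.count())

    sc.stop()
  }
}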


On Thu, Jan 30, 2014 at 6:20 AM, David Thomas <dt...@gmail.com> wrote:

> How does Spark handle the situation where the RDD does not fit into the
> memory of all the machines in the cluster together?
>