Posted to user@spark.apache.org by tan shai <ta...@gmail.com> on 2016/09/06 13:39:41 UTC
Total memory of workers
Hello,
Can anyone explain Spark's behavior when the size of the file being
processed exceeds the total memory available across the workers?
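For context, Spark reads input files partition by partition, so the whole file does not need to fit in memory at once; shuffle and cached data that overflow memory are spilled to local disk (or, for MEMORY_ONLY caching, dropped and recomputed). A minimal sketch of the related settings in spark-defaults.conf (the key names are standard Spark configuration; the values and the scratch path are illustrative assumptions, not recommendations):

```properties
# spark-defaults.conf -- illustrative values only
# Fraction of JVM heap shared by execution and storage (unified memory, Spark 1.6+)
spark.memory.fraction        0.6
# Within that, fraction reserved for cached (storage) data
spark.memory.storageFraction 0.5
# Local directories used when shuffle/spill data overflows memory
# (path below is a hypothetical example)
spark.local.dir              /tmp/spark-scratch
```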
Many thanks.