Posted to commits@spark.apache.org by rx...@apache.org on 2014/03/03 23:41:24 UTC

git commit: update proportion of memory

Repository: spark
Updated Branches:
  refs/heads/master 369aad6f9 -> 9d225a910


update proportion of memory

The default value of "spark.storage.memoryFraction" has been changed from 0.66 to 0.6, so 60% of the memory is used to cache RDDs while 40% is available for task execution.
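
As a rough illustration (not part of this patch), here is how the new 0.6 default splits a hypothetical 4 GB executor heap; the 4 GB figure and the value names below are assumptions made only for this sketch:

    // Sketch of the 60/40 split implied by spark.storage.memoryFraction = 0.6
    val executorMemoryBytes = 4L * 1024 * 1024 * 1024               // spark.executor.memory = 4g (assumed)
    val memoryFraction      = 0.6                                   // new default of spark.storage.memoryFraction
    val cacheBytes = (executorMemoryBytes * memoryFraction).toLong  // ~2.4 GB for cached RDDs
    val taskBytes  = executorMemoryBytes - cacheBytes               // ~1.6 GB left for task execution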

Author: Chen Chao <cr...@gmail.com>

Closes #66 from CrazyJvm/master and squashes the following commits:

0f84d86 [Chen Chao] update proportion of memory


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/9d225a91
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/9d225a91
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/9d225a91

Branch: refs/heads/master
Commit: 9d225a91043ac92a0e727ba281b10c250a945614
Parents: 369aad6
Author: Chen Chao <cr...@gmail.com>
Authored: Mon Mar 3 14:41:25 2014 -0800
Committer: Reynold Xin <rx...@apache.org>
Committed: Mon Mar 3 14:41:25 2014 -0800

----------------------------------------------------------------------
 docs/tuning.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/9d225a91/docs/tuning.md
----------------------------------------------------------------------
diff --git a/docs/tuning.md b/docs/tuning.md
index 7047786..26ff132 100644
--- a/docs/tuning.md
+++ b/docs/tuning.md
@@ -163,8 +163,8 @@ their work directories), *not* on your driver program.
 **Cache Size Tuning**
 
 One important configuration parameter for GC is the amount of memory that should be used for caching RDDs.
-By default, Spark uses 66% of the configured executor memory (`spark.executor.memory` or `SPARK_MEM`) to
-cache RDDs. This means that 33% of memory is available for any objects created during task execution.
+By default, Spark uses 60% of the configured executor memory (`spark.executor.memory` or `SPARK_MEM`) to
+cache RDDs. This means that 40% of memory is available for any objects created during task execution.
 
 In case your tasks slow down and you find that your JVM is garbage-collecting frequently or running out of
 memory, lowering this value will help reduce the memory consumption. To change this to say 50%, you can call
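
The patch context above is cut off after "you can call"; for completeness, a minimal sketch of lowering the fraction to 50% via SparkConf (the Spark 0.9-era configuration API) might look like the following. The application name and master URL are placeholders, not taken from the docs:

    import org.apache.spark.{SparkConf, SparkContext}

    // Use only 50% of executor memory for caching RDDs, leaving more room for task execution.
    val conf = new SparkConf()
      .setAppName("MemoryFractionExample")           // placeholder app name
      .setMaster("local[*]")                         // placeholder master URL
      .set("spark.storage.memoryFraction", "0.5")

    val sc = new SparkContext(conf)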