Posted to commits@spark.apache.org by rx...@apache.org on 2014/12/06 01:35:47 UTC
svn commit: r1643476 - /spark/faq.md
Author: rxin
Date: Sat Dec 6 00:35:47 2014
New Revision: 1643476
URL: http://svn.apache.org/r1643476
Log:
Updated cluster size
Modified:
spark/faq.md
Modified: spark/faq.md
URL: http://svn.apache.org/viewvc/spark/faq.md?rev=1643476&r1=1643475&r2=1643476&view=diff
==============================================================================
--- spark/faq.md (original)
+++ spark/faq.md Sat Dec 6 00:35:47 2014
@@ -21,7 +21,7 @@ Spark is a fast and general processing e
</p>
<p class="question">How large a cluster can Spark scale to?</p>
-<p class="answer">Many organizations run Spark on clusters with thousands of nodes.</p>
+<p class="answer">Many organizations run Spark on clusters with thousands of nodes. The largest cluster we know of has over 8000 nodes.</p>
<p class="question">What happens if my dataset does not fit in memory?</p>
<p class="answer">Often each partition of data is small and does fit in memory, and these partitions are processed a few at a time. For very large partitions that do not fit in memory, Spark's built-in operators perform external operations on datasets.</p>
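The "external operations" the answer above mentions can be illustrated with a toy external sort in plain Python (this is an illustrative sketch, not Spark's actual implementation): sort fixed-size chunks in memory, spill each sorted run to a temporary file on disk, then stream-merge the runs so only one record per run is held in memory at a time. The `chunk_size` and file layout here are illustrative assumptions.

```python
# Toy external sort: a sketch of spill-to-disk processing, NOT Spark code.
import heapq
import os
import tempfile

def external_sort(values, chunk_size=4):
    spill_files = []

    def spill(chunk):
        # Write one sorted run to a temp file on disk.
        f = tempfile.NamedTemporaryFile("w+", delete=False)
        for v in sorted(chunk):
            f.write(f"{v}\n")
        f.seek(0)
        spill_files.append(f)

    chunk = []
    for v in values:
        chunk.append(v)
        if len(chunk) >= chunk_size:
            spill(chunk)      # memory is "full": spill this run to disk
            chunk = []
    if chunk:
        spill(chunk)

    # Lazily merge the sorted runs; only one line per run is in memory.
    streams = [(int(line) for line in f) for f in spill_files]
    merged = list(heapq.merge(*streams))

    for f in spill_files:
        f.close()
        os.unlink(f.name)
    return merged

print(external_sort([9, 1, 7, 3, 8, 2, 6, 4, 5]))
```

The same idea, applied per partition and per operator, is what lets a dataset larger than the cluster's memory still be sorted, grouped, or joined.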
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org