Posted to commits@spark.apache.org by pw...@apache.org on 2014/05/29 00:50:03 UTC

git commit: Organize configuration docs

Repository: spark
Updated Branches:
  refs/heads/master 82eadc3b0 -> 7801d44fd


Organize configuration docs

This PR improves and organizes the config option page
and makes a few other changes to config docs. See a preview here:
http://people.apache.org/~pwendell/config-improvements/configuration.html

The biggest changes are:
1. The configs for the standalone master/workers were moved to the
standalone page and out of the general config doc.
2. SPARK_LOCAL_DIRS was missing from the standalone docs.
3. Expanded discussion of injecting configs with spark-submit, including an
example.
4. Config options were organized into the following categories:
- Runtime Environment
- Shuffle Behavior
- Spark UI
- Compression and Serialization
- Execution Behavior
- Networking
- Scheduling
- Security
- Spark Streaming

Author: Patrick Wendell <pw...@gmail.com>

Closes #880 from pwendell/config-cleanup and squashes the following commits:

93f56c3 [Patrick Wendell] Feedback from Matei
6f66efc [Patrick Wendell] More feedback
16ae776 [Patrick Wendell] Adding back header section
d9c264f [Patrick Wendell] Small fix
e0c1728 [Patrick Wendell] Response to Matei's review
27d57db [Patrick Wendell] Reverting changes to index.html (covered in #896)
e230ef9 [Patrick Wendell] Merge remote-tracking branch 'apache/master' into config-cleanup
a374369 [Patrick Wendell] Line wrapping fixes
fdff7fc [Patrick Wendell] Merge remote-tracking branch 'apache/master' into config-cleanup
3289ea4 [Patrick Wendell] Pulling in changes from #856
106ee31 [Patrick Wendell] Small link fix
f7e79bc [Patrick Wendell] Re-organizing config options.
54b184d [Patrick Wendell] Adding standalone configs to the standalone page
592e94a [Patrick Wendell] Stash
29b5446 [Patrick Wendell] Better discussion of spark-submit in configuration docs
2d719ef [Patrick Wendell] Small fix
4af9e07 [Patrick Wendell] Adding SPARK_LOCAL_DIRS docs
204b248 [Patrick Wendell] Small fixes


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7801d44f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7801d44f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/7801d44f

Branch: refs/heads/master
Commit: 7801d44fd3bcf4d82e6db12574cc42bef15bf0e1
Parents: 82eadc3
Author: Patrick Wendell <pw...@gmail.com>
Authored: Wed May 28 15:49:54 2014 -0700
Committer: Patrick Wendell <pw...@gmail.com>
Committed: Wed May 28 15:49:54 2014 -0700

----------------------------------------------------------------------
 docs/configuration.md    | 815 ++++++++++++++++++++----------------------
 docs/quick-start.md      |   8 +-
 docs/spark-standalone.md | 167 ++++++++-
 3 files changed, 554 insertions(+), 436 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/7801d44f/docs/configuration.md
----------------------------------------------------------------------
diff --git a/docs/configuration.md b/docs/configuration.md
index e5d955f..b6e7fd3 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -2,19 +2,25 @@
 layout: global
 title: Spark Configuration
 ---
-
 * This will become a table of contents (this text will be scraped).
 {:toc}
 
-Spark provides several locations to configure the system:
+Spark provides three locations to configure the system:
+
+* [Spark properties](#spark-properties) control most application parameters and can be set by passing
+  a [SparkConf](api/core/index.html#org.apache.spark.SparkConf) object to SparkContext, or through Java
+  system properties.
+* [Environment variables](#environment-variables) can be used to set per-machine settings, such as
+  the IP address, through the `conf/spark-env.sh` script on each node.
+* [Logging](#configuring-logging) can be configured through `log4j.properties`.
 
 # Spark Properties
 
 Spark properties control most application settings and are configured separately for each
-application. The preferred way is to set them through
-[SparkConf](api/scala/index.html#org.apache.spark.SparkConf) and passing it as an argument to your
-SparkContext. SparkConf allows you to configure most of the common properties to initialize a
-cluster (e.g. master URL and application name), as well as arbitrary key-value pairs through the
+application. These properties can be set directly on a
+[SparkConf](api/scala/index.html#org.apache.spark.SparkConf) and passed as an argument to your
+SparkContext. SparkConf allows you to configure some of the common properties
+(e.g. master URL and application name), as well as arbitrary key-value pairs through the
 `set()` method. For example, we could initialize an application as follows:
 
 {% highlight scala %}
@@ -25,22 +31,37 @@ val conf = new SparkConf()
 val sc = new SparkContext(conf)
 {% endhighlight %}
 
-## Loading Default Configurations
+## Dynamically Loading Spark Properties
+In some cases, you may want to avoid hard-coding certain configurations in a `SparkConf`. For
+instance, if you'd like to run the same application with different masters or different
+amounts of memory, Spark allows you to simply create an empty conf:
 
-In the case of `spark-shell`, a SparkContext has already been created for you, so you cannot control
-the configuration properties through SparkConf. However, you can still set configuration properties
-through a default configuration file. By default, `spark-shell` (and more generally `spark-submit`)
-will read configuration options from `conf/spark-defaults.conf`, in which each line consists of a
-key and a value separated by whitespace. For example,
+{% highlight scala %}
+val sc = new SparkContext(new SparkConf())
+{% endhighlight %}
+
+Then, you can supply configuration values at runtime:
+{% highlight bash %}
+./bin/spark-submit --name "My fancy app" --master local[4] myApp.jar
+{% endhighlight %}
+
+The Spark shell and [`spark-submit`](cluster-overview.html#launching-applications-with-spark-submit)
+tool support two ways to load configurations dynamically. The first is command-line options,
+such as `--master`, as shown above. Running `./bin/spark-submit --help` will show the entire list
+of options.
+
+`bin/spark-submit` will also read configuration options from `conf/spark-defaults.conf`, in which
+each line consists of a key and a value separated by whitespace. For example:
 
     spark.master            spark://5.6.7.8:7077
     spark.executor.memory   512m
     spark.eventLog.enabled  true
     spark.serializer        org.apache.spark.serializer.KryoSerializer
 
-Any values specified in the file will be passed on to the application, and merged with those
-specified through SparkConf. If the same configuration property exists in both `spark-defaults.conf`
-and SparkConf, then the latter will take precedence as it is the most application-specific.
+Any values specified as flags or in the properties file will be passed on to the application
+and merged with those specified through SparkConf. Properties set directly on the SparkConf
+take highest precedence, then flags passed to `spark-submit` or `spark-shell`, then options
+in the `spark-defaults.conf` file.
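
As a minimal sketch of this precedence order (the memory values below are placeholders chosen for illustration), the same key can be set at all three levels and the SparkConf value wins:

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

// conf/spark-defaults.conf contains:  spark.executor.memory  512m
// spark-submit is invoked with:       --executor-memory 1g
// The property set directly on the SparkConf takes highest precedence:
val conf = new SparkConf().set("spark.executor.memory", "2g")
val sc = new SparkContext(conf)  // executors are launched with 2g
{% endhighlight %}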
 
 ## Viewing Spark Properties
 
@@ -49,19 +70,34 @@ This is a useful place to check to make sure that your properties have been set
 that only values explicitly specified through either `spark-defaults.conf` or SparkConf will
 appear. For all other configuration properties, you can assume the default value is used.
 
-## All Configuration Properties
+## Available Properties
 
-Most of the properties that control internal settings have reasonable default values. However,
-there are at least five properties that you will commonly want to control:
+Most of the properties that control internal settings have reasonable default values. Some
+of the most common options to set are:
 
 <table class="table">
 <tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
 <tr>
+  <td><code>spark.app.name</code></td>
+  <td>(none)</td>
+  <td>
+    The name of your application. This will appear in the UI and in log data.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.master</code></td>
+  <td>(none)</td>
+  <td>
+    The cluster manager to connect to. See the list of
+    <a href="scala-programming-guide.html#master-urls">allowed master URLs</a>.
+  </td>
+</tr>
+<tr>
   <td><code>spark.executor.memory</code></td>
   <td>512m</td>
   <td>
-    Amount of memory to use per executor process, in the same format as JVM memory strings (e.g.
-    <code>512m</code>, <code>2g</code>).
+    Amount of memory to use per executor process, in the same format as JVM memory strings
+    (e.g. <code>512m</code>, <code>2g</code>).
   </td>
 </tr>
 <tr>
@@ -69,10 +105,12 @@ there are at least five properties that you will commonly want to control:
   <td>org.apache.spark.serializer.<br />JavaSerializer</td>
   <td>
     Class to use for serializing objects that will be sent over the network or need to be cached
-    in serialized form. The default of Java serialization works with any Serializable Java object but is
-    quite slow, so we recommend <a href="tuning.html">using <code>org.apache.spark.serializer.KryoSerializer</code>
-    and configuring Kryo serialization</a> when speed is necessary. Can be any subclass of
-    <a href="api/scala/index.html#org.apache.spark.serializer.Serializer"><code>org.apache.spark.Serializer</code></a>.
+    in serialized form. The default of Java serialization works with any Serializable Java object
+    but is quite slow, so we recommend <a href="tuning.html">using
+    <code>org.apache.spark.serializer.KryoSerializer</code> and configuring Kryo serialization</a>
+    when speed is necessary. Can be any subclass of
+    <a href="api/scala/index.html#org.apache.spark.serializer.Serializer">
+    <code>org.apache.spark.Serializer</code></a>.
   </td>
 </tr>
 <tr>
@@ -81,7 +119,8 @@ there are at least five properties that you will commonly want to control:
   <td>
     If you use Kryo serialization, set this class to register your custom classes with Kryo.
     It should be set to a class that extends
-    <a href="api/scala/index.html#org.apache.spark.serializer.KryoRegistrator"><code>KryoRegistrator</code></a>.
+    <a href="api/scala/index.html#org.apache.spark.serializer.KryoRegistrator">
+    <code>KryoRegistrator</code></a>.
     See the <a href="tuning.html#data-serialization">tuning guide</a> for more details.
   </td>
 </tr>
@@ -94,138 +133,151 @@ there are at least five properties that you will commonly want to control:
     comma-separated list of multiple directories on different disks.
 
     NOTE: In Spark 1.0 and later this will be overridden by SPARK_LOCAL_DIRS (Standalone, Mesos) or
-    LOCAL_DIRS (YARN) envrionment variables set by the cluster manager.
+    LOCAL_DIRS (YARN) environment variables set by the cluster manager.
   </td>
 </tr>
 <tr>
-  <td><code>spark.cores.max</code></td>
-  <td>(not set)</td>
+  <td><code>spark.logConf</code></td>
+  <td>false</td>
   <td>
-    When running on a <a href="spark-standalone.html">standalone deploy cluster</a> or a
-    <a href="running-on-mesos.html#mesos-run-modes">Mesos cluster in "coarse-grained"
-    sharing mode</a>, the maximum amount of CPU cores to request for the application from
-    across the cluster (not from each machine). If not set, the default will be
-    <code>spark.deploy.defaultCores</code> on Spark's standalone cluster manager, or
-    infinite (all available cores) on Mesos.
+    Logs the effective SparkConf as INFO when a SparkContext is started.
   </td>
 </tr>
 </table>
 
-
 Apart from these, the following properties are also available, and may be useful in some situations:
 
+#### Runtime Environment
 <table class="table">
 <tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
 <tr>
-  <td><code>spark.default.parallelism</code></td>
+  <td><code>spark.executor.memory</code></td>
+  <td>512m</td>
   <td>
-    <ul>
-      <li>Local mode: number of cores on the local machine</li>
-      <li>Mesos fine grained mode: 8</li>
-      <li>Others: total number of cores on all executor nodes or 2, whichever is larger</li>
-    </ul>
+    Amount of memory to use per executor process, in the same format as JVM memory strings
+    (e.g. <code>512m</code>, <code>2g</code>).
   </td>
+</tr>
+<tr>
+  <td><code>spark.executor.extraJavaOptions</code></td>
+  <td>(none)</td>
   <td>
-    Default number of tasks to use across the cluster for distributed shuffle operations
-    (<code>groupByKey</code>, <code>reduceByKey</code>, etc) when not set by user.
+    A string of extra JVM options to pass to executors. For instance, GC settings or other
+    logging. Note that it is illegal to set Spark properties or heap size settings with this
+    option. Spark properties should be set using a SparkConf object or the
+    spark-defaults.conf file used with the spark-submit script. Heap size settings can be set
+    with spark.executor.memory.
   </td>
 </tr>
 <tr>
-  <td><code>spark.storage.memoryFraction</code></td>
-  <td>0.6</td>
+  <td><code>spark.executor.extraClassPath</code></td>
+  <td>(none)</td>
   <td>
-    Fraction of Java heap to use for Spark's memory cache. This should not be larger than the "old"
-    generation of objects in the JVM, which by default is given 0.6 of the heap, but you can increase
-    it if you configure your own old generation size.
+    Extra classpath entries to append to the classpath of executors. This exists primarily
+    for backwards-compatibility with older versions of Spark. Users typically should not need
+    to set this option.
   </td>
 </tr>
 <tr>
-  <td><code>spark.shuffle.memoryFraction</code></td>
-  <td>0.3</td>
+  <td><code>spark.executor.extraLibraryPath</code></td>
+  <td>(none)</td>
   <td>
-    Fraction of Java heap to use for aggregation and cogroups during shuffles, if
-    <code>spark.shuffle.spill</code> is true. At any given time, the collective size of
-    all in-memory maps used for shuffles is bounded by this limit, beyond which the contents will
-    begin to spill to disk. If spills are often, consider increasing this value at the expense of
-    <code>spark.storage.memoryFraction</code>.
+    Set a special library path to use when launching executor JVM's.
   </td>
 </tr>
 <tr>
-  <td><code>spark.storage.memoryMapThreshold</code></td>
-  <td>8192</td>
+  <td><code>spark.files.userClassPathFirst</code></td>
+  <td>false</td>
   <td>
-    Size of a block, in bytes, above which Spark memory maps when reading a block from disk.
-    This prevents Spark from memory mapping very small blocks. In general, memory
-    mapping has high overhead for blocks close to or below the page size of the operating system.
+    (Experimental) Whether to give user-added jars precedence over Spark's own jars when
+    loading classes in Executors. This feature can be used to mitigate conflicts between
+    Spark's dependencies and user dependencies. It is currently an experimental feature.
   </td>
 </tr>
+</table>
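
For illustration, a short sketch of setting the runtime-environment options above on a SparkConf; the memory size and GC flag are placeholder values, and heap size must go through `spark.executor.memory` rather than `extraJavaOptions`:

{% highlight scala %}
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.executor.memory", "2g")                            // heap size for each executor
  .set("spark.executor.extraJavaOptions", "-XX:+PrintGCDetails") // non-heap JVM flags only
{% endhighlight %}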
+
+#### Shuffle Behavior
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
 <tr>
-  <td><code>spark.tachyonStore.baseDir</code></td>
-  <td>System.getProperty("java.io.tmpdir")</td>
+  <td><code>spark.shuffle.consolidateFiles</code></td>
+  <td>false</td>
   <td>
-    Directories of the Tachyon File System that store RDDs. The Tachyon file system's URL is set by
-    <code>spark.tachyonStore.url</code>.  It can also be a comma-separated list of multiple
-    directories on Tachyon file system.
+    If set to "true", consolidates intermediate files created during a shuffle. Creating fewer
+    files can improve filesystem performance for shuffles with large numbers of reduce tasks. It
+    is recommended to set this to "true" when using ext4 or xfs filesystems. On ext3, this option
+    might degrade performance on machines with many (>8) cores due to filesystem limitations.
   </td>
 </tr>
 <tr>
-  <td><code>spark.tachyonStore.url</code></td>
-  <td>tachyon://localhost:19998</td>
+  <td><code>spark.shuffle.spill</code></td>
+  <td>true</td>
   <td>
-    The URL of the underlying Tachyon file system in the TachyonStore.
+    If set to "true", limits the amount of memory used during reduces by spilling data out to disk.
+    This spilling threshold is specified by <code>spark.shuffle.memoryFraction</code>.
   </td>
 </tr>
 <tr>
-  <td><code>spark.mesos.coarse</code></td>
-  <td>false</td>
+  <td><code>spark.shuffle.spill.compress</code></td>
+  <td>true</td>
   <td>
-    If set to "true", runs over Mesos clusters in <a
-    href="running-on-mesos.html#mesos-run-modes">"coarse-grained" sharing mode</a>, where Spark
-    acquires one long-lived Mesos task on each machine instead of one Mesos task per Spark task.
-    This gives lower-latency scheduling for short queries, but leaves resources in use for the whole
-    duration of the Spark job.
+    Whether to compress data spilled during shuffles. Compression will use
+    <code>spark.io.compression.codec</code>.
   </td>
 </tr>
 <tr>
-  <td><code>spark.ui.port</code></td>
-  <td>4040</td>
+  <td><code>spark.shuffle.memoryFraction</code></td>
+  <td>0.3</td>
   <td>
-    Port for your application's dashboard, which shows memory and workload data
+    Fraction of Java heap to use for aggregation and cogroups during shuffles, if
+    <code>spark.shuffle.spill</code> is true. At any given time, the collective size of
+    all in-memory maps used for shuffles is bounded by this limit, beyond which the contents will
+    begin to spill to disk. If spills are frequent, consider increasing this value at the expense of
+    <code>spark.storage.memoryFraction</code>.
   </td>
 </tr>
 <tr>
-  <td><code>spark.ui.retainedStages</code></td>
-  <td>1000</td>
+  <td><code>spark.shuffle.compress</code></td>
+  <td>true</td>
   <td>
-    How many stages the Spark UI remembers before garbage collecting.
+    Whether to compress map output files. Generally a good idea. Compression will use
+    <code>spark.io.compression.codec</code>.
   </td>
 </tr>
 <tr>
-  <td><code>spark.ui.filters</code></td>
-  <td>None</td>
+  <td><code>spark.shuffle.file.buffer.kb</code></td>
+  <td>100</td>
   <td>
-    Comma separated list of filter class names to apply to the Spark web ui. The filter should be a
-    standard javax servlet Filter. Parameters to each filter can also be specified by setting a
-    java system property of spark.&lt;class name of filter&gt;.params='param1=value1,param2=value2'
-    (e.g. -Dspark.ui.filters=com.test.filter1 -Dspark.com.test.filter1.params='param1=foo,param2=testing')
+    Size of the in-memory buffer for each shuffle file output stream, in kilobytes. These buffers
+    reduce the number of disk seeks and system calls made in creating intermediate shuffle files.
   </td>
 </tr>
 <tr>
-  <td><code>spark.ui.acls.enable</code></td>
-  <td>false</td>
+  <td><code>spark.reducer.maxMbInFlight</code></td>
+  <td>48</td>
   <td>
-    Whether spark web ui acls should are enabled. If enabled, this checks to see if the user has
-    access permissions to view the web ui. See <code>spark.ui.view.acls</code> for more details.
-    Also note this requires the user to be known, if the user comes across as null no checks
-    are done. Filters can be used to authenticate and set the user.
+    Maximum size (in megabytes) of map outputs to fetch simultaneously from each reduce task. Since
+    each output requires us to create a buffer to receive it, this represents a fixed memory
+    overhead per reduce task, so keep it small unless you have a large amount of memory.
+  </td>
+</tr>
+</table>
+
+#### Spark UI
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+<tr>
+  <td><code>spark.ui.port</code></td>
+  <td>4040</td>
+  <td>
+    Port for your application's dashboard, which shows memory and workload data.
   </td>
 </tr>
 <tr>
-  <td><code>spark.ui.view.acls</code></td>
-  <td>Empty</td>
+  <td><code>spark.ui.retainedStages</code></td>
+  <td>1000</td>
   <td>
-    Comma separated list of users that have view access to the spark web ui. By default only the
-    user that started the Spark job has view access.
+    How many stages the Spark UI remembers before garbage collecting.
   </td>
 </tr>
 <tr>
@@ -236,20 +288,36 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
-  <td><code>spark.shuffle.compress</code></td>
-  <td>true</td>
+  <td><code>spark.eventLog.enabled</code></td>
+  <td>false</td>
   <td>
-    Whether to compress map output files. Generally a good idea.
+    Whether to log Spark events, useful for reconstructing the Web UI after the application has
+    finished.
   </td>
 </tr>
 <tr>
-  <td><code>spark.shuffle.spill.compress</code></td>
-  <td>true</td>
+  <td><code>spark.eventLog.compress</code></td>
+  <td>false</td>
   <td>
-    Whether to compress data spilled during shuffles.
+    Whether to compress logged events, if <code>spark.eventLog.enabled</code> is true.
   </td>
 </tr>
 <tr>
+  <td><code>spark.eventLog.dir</code></td>
+  <td>file:///tmp/spark-events</td>
+  <td>
+    Base directory in which Spark events are logged, if <code>spark.eventLog.enabled</code> is true.
+    Within this base directory, Spark creates a sub-directory for each application, and logs the
+    events specific to the application in this directory. Users may want to set this to
+    an HDFS directory so that history files can be read by the history server.
+  </td>
+</tr>
+</table>
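
A minimal sketch of enabling event logging, assuming a hypothetical HDFS directory that the history server can read:

{% highlight scala %}
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.compress", "true")
  .set("spark.eventLog.dir", "hdfs://namenode:8021/spark-events") // placeholder path
{% endhighlight %}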
+
+#### Compression and Serialization
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+<tr>
   <td><code>spark.broadcast.compress</code></td>
   <td>true</td>
   <td>
@@ -260,59 +328,46 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.rdd.compress</code></td>
   <td>false</td>
   <td>
-    Whether to compress serialized RDD partitions (e.g. for <code>StorageLevel.MEMORY_ONLY_SER</code>).
-    Can save substantial space at the cost of some extra CPU time.
+    Whether to compress serialized RDD partitions (e.g. for
+    <code>StorageLevel.MEMORY_ONLY_SER</code>). Can save substantial space at the cost of some
+    extra CPU time.
   </td>
 </tr>
 <tr>
   <td><code>spark.io.compression.codec</code></td>
   <td>org.apache.spark.io.<br />LZFCompressionCodec</td>
   <td>
-    The codec used to compress internal data such as RDD partitions and shuffle outputs. By default,
-    Spark provides two codecs: <code>org.apache.spark.io.LZFCompressionCodec</code> and
-    <code>org.apache.spark.io.SnappyCompressionCodec</code>.
+    The codec used to compress internal data such as RDD partitions and shuffle outputs.
+    By default, Spark provides two codecs: <code>org.apache.spark.io.LZFCompressionCodec</code>
+    and <code>org.apache.spark.io.SnappyCompressionCodec</code>. Of these two choices,
+    Snappy offers faster compression and decompression, while LZF offers a better compression
+    ratio.
   </td>
 </tr>
 <tr>
   <td><code>spark.io.compression.snappy.block.size</code></td>
   <td>32768</td>
   <td>
-    Block size (in bytes) used in Snappy compression, in the case when Snappy compression codec is
-    used.
-  </td>
-</tr>
-<tr>
-  <td><code>spark.scheduler.mode</code></td>
-  <td>FIFO</td>
-  <td>
-    The <a href="job-scheduling.html#scheduling-within-an-application">scheduling mode</a> between
-    jobs submitted to the same SparkContext. Can be set to <code>FAIR</code>
-    to use fair sharing instead of queueing jobs one after another. Useful for
-    multi-user services.
-  </td>
-</tr>
-<tr>
-  <td><code>spark.scheduler.revive.interval</code></td>
-  <td>1000</td>
-  <td>
-    The interval length for the scheduler to revive the worker resource offers to run tasks. (in
-    milliseconds)
+    Block size (in bytes) used in Snappy compression, in the case when Snappy compression codec
+    is used.
   </td>
 </tr>
 <tr>
-  <td><code>spark.reducer.maxMbInFlight</code></td>
-  <td>48</td>
+  <td><code>spark.closure.serializer</code></td>
+  <td>org.apache.spark.serializer.<br />JavaSerializer</td>
   <td>
-    Maximum size (in megabytes) of map outputs to fetch simultaneously from each reduce task. Since
-    each output requires us to create a buffer to receive it, this represents a fixed memory
-    overhead per reduce task, so keep it small unless you have a large amount of memory.
+    Serializer class to use for closures. Currently only the Java serializer is supported.
   </td>
 </tr>
 <tr>
-  <td><code>spark.closure.serializer</code></td>
-  <td>org.apache.spark.serializer.<br />JavaSerializer</td>
+  <td><code>spark.serializer.objectStreamReset</code></td>
+  <td>10000</td>
   <td>
-    Serializer class to use for closures. Currently only the Java serializer is supported.
+    When serializing using org.apache.spark.serializer.JavaSerializer, the serializer caches
+    objects to prevent writing redundant data; however, that stops garbage collection of those
+    objects. By calling 'reset' you flush that info from the serializer, allowing old objects
+    to be collected. To turn off this periodic reset, set the value to &lt;= 0.
+    By default, the serializer is reset every 10,000 objects.
   </td>
 </tr>
 <tr>
@@ -329,21 +384,29 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.kryoserializer.buffer.mb</code></td>
   <td>2</td>
   <td>
-    Maximum object size to allow within Kryo (the library needs to create a buffer at least as large
-    as the largest single object you'll serialize). Increase this if you get a "buffer limit
+    Maximum object size to allow within Kryo (the library needs to create a buffer at least as
+    large as the largest single object you'll serialize). Increase this if you get a "buffer limit
     exceeded" exception inside Kryo. Note that there will be one buffer <i>per core</i> on each
     worker.
   </td>
 </tr>
+</table>
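
As a sketch of using these serialization options together (the `Point` class and `MyRegistrator` names are hypothetical; see the tuning guide for the full discussion):

{% highlight scala %}
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical application class, used only for illustration.
case class Point(x: Double, y: Double)

// Register application classes with Kryo.
class MyRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo) {
    kryo.register(classOf[Point])
  }
}

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", "MyRegistrator") // use the fully-qualified name if in a package
{% endhighlight %}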
+
+#### Execution Behavior
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
 <tr>
-  <td><code>spark.serializer.objectStreamReset</code></td>
-  <td>10000</td>
+  <td><code>spark.default.parallelism</code></td>
   <td>
-    When serializing using org.apache.spark.serializer.JavaSerializer, the serializer caches
-    objects to prevent writing redundant data, however that stops garbage collection of those
-    objects. By calling 'reset' you flush that info from the serializer, and allow old
-    objects to be collected. To turn off this periodic reset set it to a value &lt;= 0.
-    By default it will reset the serializer every 10,000 objects.
+    <ul>
+      <li>Local mode: number of cores on the local machine</li>
+      <li>Mesos fine grained mode: 8</li>
+      <li>Others: total number of cores on all executor nodes or 2, whichever is larger</li>
+    </ul>
+  </td>
+  <td>
+    Default number of tasks to use across the cluster for distributed shuffle operations
+    (<code>groupByKey</code>, <code>reduceByKey</code>, etc) when not set by user.
   </td>
 </tr>
 <tr>
@@ -354,73 +417,92 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
-  <td><code>spark.locality.wait</code></td>
-  <td>3000</td>
+  <td><code>spark.broadcast.blockSize</code></td>
+  <td>4096</td>
   <td>
-    Number of milliseconds to wait to launch a data-local task before giving up and launching it
-    on a less-local node. The same wait will be used to step through multiple locality levels
-    (process-local, node-local, rack-local and then any). It is also possible to customize the
-    waiting time for each level by setting <code>spark.locality.wait.node</code>, etc.
-    You should increase this setting if your tasks are long and see poor locality, but the
-    default usually works well.
+    Size of each piece of a block in kilobytes for <code>TorrentBroadcastFactory</code>.
+    Too large a value decreases parallelism during broadcast (makes it slower); however, if it is
+    too small, <code>BlockManager</code> might take a performance hit.
   </td>
 </tr>
 <tr>
-  <td><code>spark.locality.wait.process</code></td>
-  <td>spark.locality.wait</td>
+  <td><code>spark.files.overwrite</code></td>
+  <td>false</td>
   <td>
-    Customize the locality wait for process locality. This affects tasks that attempt to access
-    cached data in a particular executor process.
+    Whether to overwrite files added through SparkContext.addFile() when the target file exists and
+    its contents do not match those of the source.
   </td>
 </tr>
 <tr>
-  <td><code>spark.locality.wait.node</code></td>
-  <td>spark.locality.wait</td>
+  <td><code>spark.files.fetchTimeout</code></td>
+  <td>false</td>
+  <td>
+    Communication timeout to use when fetching files added through SparkContext.addFile() from
+    the driver.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.storage.memoryFraction</code></td>
+  <td>0.6</td>
+  <td>
+    Fraction of Java heap to use for Spark's memory cache. This should not be larger than the "old"
+    generation of objects in the JVM, which by default is given 0.6 of the heap, but you can
+    increase it if you configure your own old generation size.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.tachyonStore.baseDir</code></td>
+  <td>System.getProperty("java.io.tmpdir")</td>
   <td>
-    Customize the locality wait for node locality. For example, you can set this to 0 to skip
-    node locality and search immediately for rack locality (if your cluster has rack information).
+    Directories of the Tachyon File System that store RDDs. The Tachyon file system's URL is set by
+    <code>spark.tachyonStore.url</code>. It can also be a comma-separated list of multiple
+    directories on Tachyon file system.
   </td>
 </tr>
 <tr>
-  <td><code>spark.locality.wait.rack</code></td>
-  <td>spark.locality.wait</td>
+  <td><code>spark.storage.memoryMapThreshold</code></td>
+  <td>8192</td>
   <td>
-    Customize the locality wait for rack locality.
+    Size of a block, in bytes, above which Spark memory maps when reading a block from disk.
+    This prevents Spark from memory mapping very small blocks. In general, memory
+    mapping has high overhead for blocks close to or below the page size of the operating system.
   </td>
 </tr>
 <tr>
-  <td><code>spark.worker.timeout</code></td>
-  <td>60</td>
+  <td><code>spark.tachyonStore.url</code></td>
+  <td>tachyon://localhost:19998</td>
   <td>
-    Number of seconds after which the standalone deploy master considers a worker lost if it
-    receives no heartbeats.
+    The URL of the underlying Tachyon file system in the TachyonStore.
   </td>
 </tr>
 <tr>
-  <td><code>spark.worker.cleanup.enabled</code></td>
-  <td>false</td>
+  <td><code>spark.cleaner.ttl</code></td>
+  <td>(infinite)</td>
   <td>
-    Enable periodic cleanup of worker / application directories.  Note that this only affects
-    standalone mode, as YARN works differently. Applications directories are cleaned up regardless
-    of whether the application is still running.
+    Duration (seconds) of how long Spark will remember any metadata (stages generated, tasks
+    generated, etc.). Periodic cleanups will ensure that metadata older than this duration will be
+    forgotten. This is useful for running Spark for many hours / days (for example, running 24/7 in
+    case of Spark Streaming applications). Note that any RDD that persists in memory for more than
+    this duration will be cleared as well.
   </td>
 </tr>
+</table>
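
A brief sketch of tuning the execution-behavior options above on a SparkConf; the values are illustrative, not recommendations:

{% highlight scala %}
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.default.parallelism", "200")    // shuffle tasks when not set by the user
  .set("spark.storage.memoryFraction", "0.5") // shrink the cache if jobs need more working memory
{% endhighlight %}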
+
+#### Networking
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
 <tr>
-  <td><code>spark.worker.cleanup.interval</code></td>
-  <td>1800 (30 minutes)</td>
+  <td><code>spark.driver.host</code></td>
+  <td>(local hostname)</td>
   <td>
-    Controls the interval, in seconds, at which the worker cleans up old application work dirs
-    on the local machine.
+    Hostname or IP address for the driver to listen on.
   </td>
 </tr>
 <tr>
-  <td><code>spark.worker.cleanup.appDataTtl</code></td>
-  <td>7 * 24 * 3600 (7 days)</td>
+  <td><code>spark.driver.port</code></td>
+  <td>(random)</td>
   <td>
-    The number of seconds to retain application work directories on each worker.  This is a Time To
-    Live and should depend on the amount of available disk space you have.  Application logs and
-    jars are downloaded to each application work dir.  Over time, the work dirs can quickly fill up
-    disk space, especially if you run jobs very frequently.
+    Port for the driver to listen on.
   </td>
 </tr>
 <tr>
@@ -454,8 +536,8 @@ Apart from these, the following properties are also available, and may be useful
      This is set to a larger value to disable the failure detector built into Akka. It can be
      enabled again if you plan to use this feature (not recommended). Acceptable heartbeat pause
      in seconds for Akka. This can be used to control sensitivity to GC pauses. Tune this in
-     combination of `spark.akka.heartbeat.interval` and `spark.akka.failure-detector.threshold` if
-     you need to.
+     combination of `spark.akka.heartbeat.interval` and `spark.akka.failure-detector.threshold`
+     if you need to.
   </td>
 </tr>
 <tr>
@@ -476,55 +558,23 @@ Apart from these, the following properties are also available, and may be useful
     enabled again, if you plan to use this feature (Not recommended). A larger interval value in
     seconds reduces network overhead and a smaller value ( ~ 1 s) might be more informative for
     akka's failure detector. Tune this in combination of `spark.akka.heartbeat.pauses` and
-    `spark.akka.failure-detector.threshold` if you need to. Only positive use case for using failure
-    detector can be, a sensistive failure detector can help evict rogue executors really quick.
-    However this is usually not the case as gc pauses and network lags are expected in a real spark
-    cluster. Apart from that enabling this leads to a lot of exchanges of heart beats between nodes
-    leading to flooding the network with those.
-  </td>
-</tr>
-<tr>
-  <td><code>spark.driver.host</code></td>
-  <td>(local hostname)</td>
-  <td>
-    Hostname or IP address for the driver to listen on.
-  </td>
-</tr>
-<tr>
-  <td><code>spark.driver.port</code></td>
-  <td>(random)</td>
-  <td>
-    Port for the driver to listen on.
-  </td>
-</tr>
-<tr>
-  <td><code>spark.cleaner.ttl</code></td>
-  <td>(infinite)</td>
-  <td>
-    Duration (seconds) of how long Spark will remember any metadata (stages generated, tasks
-    generated, etc.).  Periodic cleanups will ensure that metadata older than this duration will be
-    forgotten. This is useful for running Spark for many hours / days (for example, running 24/7 in
-    case of Spark Streaming applications). Note that any RDD that persists in memory for more than
-    this duration will be cleared as well.
-  </td>
-</tr>
-<tr>
-  <td><code>spark.streaming.blockInterval</code></td>
-  <td>200</td>
-  <td>
-    Interval (milliseconds) at which data received by Spark Streaming receivers is coalesced
-    into blocks of data before storing them in Spark.
+    `spark.akka.failure-detector.threshold` if you need to. The only positive use case for the
+    failure detector is that a sensitive failure detector can help evict rogue executors quickly.
+    However, this is usually not the case, as GC pauses and network lags are expected in a real
+    Spark cluster. Apart from that, enabling this leads to many heartbeat exchanges between
+    nodes, flooding the network.
   </td>
 </tr>
+</table>
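
For example, a sketch of pinning the driver to a known address and port (placeholder values) so that firewall rules can reference them:

{% highlight scala %}
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.driver.host", "192.168.1.10") // placeholder address reachable by the executors
  .set("spark.driver.port", "51000")        // placeholder fixed port instead of a random one
{% endhighlight %}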
+
+#### Scheduling
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
 <tr>
-  <td><code>spark.streaming.unpersist</code></td>
-  <td>true</td>
+  <td><code>spark.task.cpus</code></td>
+  <td>1</td>
   <td>
-    Force RDDs generated and persisted by Spark Streaming to be automatically unpersisted from
-    Spark's memory. The raw input data received by Spark Streaming is also automatically cleared.
-    Setting this to false will allow the raw data and persisted RDDs to be accessible outside the
-    streaming application as they will not be cleared automatically. But it comes at the cost of
-    higher memory usage in Spark.
+    Number of cores to allocate for each task.
   </td>
 </tr>
 <tr>
@@ -536,39 +586,36 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
-  <td><code>spark.broadcast.blockSize</code></td>
-  <td>4096</td>
-  <td>
-    Size of each piece of a block in kilobytes for <code>TorrentBroadcastFactory</code>.
-    Too large a value decreases parallelism during broadcast (makes it slower); however, if it is
-    too small, <code>BlockManager</code> might take a performance hit.
-  </td>
-</tr>
-
-<tr>
-  <td><code>spark.shuffle.consolidateFiles</code></td>
-  <td>false</td>
+  <td><code>spark.scheduler.mode</code></td>
+  <td>FIFO</td>
   <td>
-    If set to "true", consolidates intermediate files created during a shuffle. Creating fewer files
-    can improve filesystem performance for shuffles with large numbers of reduce tasks. It is
-    recommended to set this to "true" when using ext4 or xfs filesystems. On ext3, this option might
-    degrade performance on machines with many (>8) cores due to filesystem limitations.
+    The <a href="job-scheduling.html#scheduling-within-an-application">scheduling mode</a> between
+    jobs submitted to the same SparkContext. Can be set to <code>FAIR</code>
+    to use fair sharing instead of queueing jobs one after another. Useful for
+    multi-user services.
   </td>
 </tr>
 <tr>
-  <td><code>spark.shuffle.file.buffer.kb</code></td>
-  <td>100</td>
+  <td><code>spark.cores.max</code></td>
+  <td>(not set)</td>
   <td>
-    Size of the in-memory buffer for each shuffle file output stream, in kilobytes. These buffers
-    reduce the number of disk seeks and system calls made in creating intermediate shuffle files.
+    When running on a <a href="spark-standalone.html">standalone deploy cluster</a> or a
+    <a href="running-on-mesos.html#mesos-run-modes">Mesos cluster in "coarse-grained"
+    sharing mode</a>, the maximum amount of CPU cores to request for the application from
+    across the cluster (not from each machine). If not set, the default will be
+    <code>spark.deploy.defaultCores</code> on Spark's standalone cluster manager, or
+    infinite (all available cores) on Mesos.
   </td>
 </tr>
 <tr>
-  <td><code>spark.shuffle.spill</code></td>
-  <td>true</td>
+  <td><code>spark.mesos.coarse</code></td>
+  <td>false</td>
   <td>
-    If set to "true", limits the amount of memory used during reduces by spilling data out to disk.
-    This spilling threshold is specified by <code>spark.shuffle.memoryFraction</code>.
+    If set to "true", runs over Mesos clusters in
+    <a href="running-on-mesos.html#mesos-run-modes">"coarse-grained" sharing mode</a>,
+    where Spark acquires one long-lived Mesos task on each machine instead of one Mesos task per
+    Spark task. This gives lower-latency scheduling for short queries, but leaves resources in use
+    for the whole duration of the Spark job.
   </td>
 </tr>
 <tr>
@@ -601,91 +648,59 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
-  <td><code>spark.logConf</code></td>
-  <td>false</td>
-  <td>
-    Whether to log the supplied SparkConf as INFO at start of spark context.
-  </td>
-</tr>
-<tr>
-  <td><code>spark.eventLog.enabled</code></td>
-  <td>false</td>
-  <td>
-    Whether to log spark events, useful for reconstructing the Web UI after the application has
-    finished.
-  </td>
-</tr>
-<tr>
-  <td><code>spark.eventLog.compress</code></td>
-  <td>false</td>
-  <td>
-    Whether to compress logged events, if <code>spark.eventLog.enabled</code> is true.
-  </td>
-</tr>
-<tr>
-  <td><code>spark.eventLog.dir</code></td>
-  <td>file:///tmp/spark-events</td>
-  <td>
-    Base directory in which spark events are logged, if <code>spark.eventLog.enabled</code> is true.
-    Within this base directory, Spark creates a sub-directory for each application, and logs the
-    events specific to the application in this directory.
-  </td>
-</tr>
-<tr>
-  <td><code>spark.deploy.spreadOut</code></td>
-  <td>true</td>
+  <td><code>spark.locality.wait</code></td>
+  <td>3000</td>
   <td>
-    Whether the standalone cluster manager should spread applications out across nodes or try to
-    consolidate them onto as few nodes as possible. Spreading out is usually better for data
-    locality in HDFS, but consolidating is more efficient for compute-intensive workloads. <br/>
-    <b>Note:</b> this setting needs to be configured in the standalone cluster master, not in
-    individual applications; you can set it through <code>SPARK_MASTER_OPTS</code> in
-    <code>spark-env.sh</code>.
+    Number of milliseconds to wait to launch a data-local task before giving up and launching it
+    on a less-local node. The same wait will be used to step through multiple locality levels
+    (process-local, node-local, rack-local and then any). It is also possible to customize the
+    waiting time for each level by setting <code>spark.locality.wait.node</code>, etc.
+    You should increase this setting if your tasks are long and see poor locality, but the
+    default usually works well.
   </td>
 </tr>
 <tr>
-  <td><code>spark.deploy.defaultCores</code></td>
-  <td>(infinite)</td>
+  <td><code>spark.locality.wait.process</code></td>
+  <td>spark.locality.wait</td>
   <td>
-    Default number of cores to give to applications in Spark's standalone mode if they don't set
-    <code>spark.cores.max</code>. If not set, applications always get all available cores unless
-    they configure <code>spark.cores.max</code> themselves.  Set this lower on a shared cluster to
-    prevent users from grabbing the whole cluster by default. <br/> <b>Note:</b> this setting needs
-    to be configured in the standalone cluster master, not in individual applications; you can set
-    it through <code>SPARK_MASTER_OPTS</code> in <code>spark-env.sh</code>.
+    Customize the locality wait for process locality. This affects tasks that attempt to access
+    cached data in a particular executor process.
   </td>
 </tr>
 <tr>
-  <td><code>spark.files.overwrite</code></td>
-  <td>false</td>
+  <td><code>spark.locality.wait.node</code></td>
+  <td>spark.locality.wait</td>
   <td>
-    Whether to overwrite files added through SparkContext.addFile() when the target file exists and
-    its contents do not match those of the source.
+    Customize the locality wait for node locality. For example, you can set this to 0 to skip
+    node locality and search immediately for rack locality (if your cluster has rack information).
   </td>
 </tr>
 <tr>
-  <td><code>spark.files.fetchTimeout</code></td>
-  <td>false</td>
+  <td><code>spark.locality.wait.rack</code></td>
+  <td>spark.locality.wait</td>
   <td>
-    Communication timeout to use when fetching files added through SparkContext.addFile() from
-    the driver.
+    Customize the locality wait for rack locality.
   </td>
 </tr>
 <tr>
-  <td><code>spark.files.userClassPathFirst</code></td>
-  <td>false</td>
+  <td><code>spark.scheduler.revive.interval</code></td>
+  <td>1000</td>
   <td>
-    (Experimental) Whether to give user-added jars precedence over Spark's own jars when
-    loading classes in Executors. This feature can be used to mitigate conflicts between
-    Spark's dependencies and user dependencies. It is currently an experimental feature.
+    The interval length, in milliseconds, for the scheduler to revive the worker resource offers
+    to run tasks.
   </td>
 </tr>
+</table>
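
A small sketch combining the scheduling options above; the core count is a placeholder for a shared standalone cluster:

{% highlight scala %}
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.cores.max", "16")        // cap the cores this application takes from the cluster
  .set("spark.scheduler.mode", "FAIR") // fair sharing between jobs in this SparkContext
{% endhighlight %}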
+
+#### Security
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
 <tr>
   <td><code>spark.authenticate</code></td>
   <td>false</td>
   <td>
-    Whether spark authenticates its internal connections. See <code>spark.authenticate.secret</code>
-    if not running on Yarn.
+    Whether Spark authenticates its internal connections. See
+    <code>spark.authenticate.secret</code> if not running on YARN.
   </td>
 </tr>
 <tr>
@@ -693,7 +708,7 @@ Apart from these, the following properties are also available, and may be useful
   <td>None</td>
   <td>
     Set the secret key used for Spark to authenticate between components. This needs to be set if
-    not running on Yarn and authentication is enabled.
+    not running on YARN and authentication is enabled.
   </td>
 </tr>
 <tr>
@@ -705,42 +720,71 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
-  <td><code>spark.task.cpus</code></td>
-  <td>1</td>
+  <td><code>spark.ui.filters</code></td>
+  <td>None</td>
   <td>
-    Number of cores to allocate for each task.
+    Comma-separated list of filter class names to apply to the Spark web UI. The filter should be
+    a standard <a href="http://docs.oracle.com/javaee/6/api/javax/servlet/Filter.html">
+    javax servlet Filter</a>. Parameters to each filter can also be specified by setting a
+    Java system property of: <br />
+    <code>spark.&lt;class name of filter&gt;.params='param1=value1,param2=value2'</code><br />
+    For example: <br />
+    <code>-Dspark.ui.filters=com.test.filter1</code> <br />
+    <code>-Dspark.com.test.filter1.params='param1=foo,param2=testing'</code>
   </td>
 </tr>
 <tr>
-  <td><code>spark.executor.extraJavaOptions</code></td>
-  <td>(none)</td>
+  <td><code>spark.ui.acls.enable</code></td>
+  <td>false</td>
   <td>
-    A string of extra JVM options to pass to executors. For instance, GC settings or other
-    logging. Note that it is illegal to set Spark properties or heap size settings with this 
-    option. Spark properties should be set using a SparkConf object or the 
-    spark-defaults.conf file used with the spark-submit script. Heap size settings can be set
-    with spark.executor.memory.
+    Whether Spark web UI ACLs are enabled. If enabled, this checks whether the user has
+    access permissions to view the web UI. See <code>spark.ui.view.acls</code> for more details.
+    Note that this requires the user to be known; if the user comes across as null, no checks
+    are done. Filters can be used to authenticate and set the user.
   </td>
 </tr>
 <tr>
-  <td><code>spark.executor.extraClassPath</code></td>
-  <td>(none)</td>
+  <td><code>spark.ui.view.acls</code></td>
+  <td>Empty</td>
   <td>
-    Extra classpath entries to append to the classpath of executors. This exists primarily
-    for backwards-compatibility with older versions of Spark. Users typically should not need
-    to set this option.
+    Comma-separated list of users that have view access to the Spark web UI. By default, only the
+    user that started the Spark job has view access.
   </td>
 </tr>
+</table>
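
A sketch of enabling authentication and UI ACLs outside of YARN; the secret and user names are placeholders, and the same secret must be configured on every component:

{% highlight scala %}
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.authenticate", "true")
  .set("spark.authenticate.secret", "replace-with-a-shared-secret")
  .set("spark.ui.acls.enable", "true")
  .set("spark.ui.view.acls", "alice,bob") // placeholder user names
{% endhighlight %}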
+
+#### Spark Streaming
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
 <tr>
-  <td><code>spark.executor.extraLibraryPath</code></td>
-  <td>(none)</td>
+  <td><code>spark.streaming.blockInterval</code></td>
+  <td>200</td>
   <td>
-    Set a special library path to use when launching executor JVM's.
+    Interval (milliseconds) at which data received by Spark Streaming receivers is coalesced
+    into blocks of data before storing them in Spark.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.streaming.unpersist</code></td>
+  <td>true</td>
+  <td>
+    Force RDDs generated and persisted by Spark Streaming to be automatically unpersisted from
+    Spark's memory. The raw input data received by Spark Streaming is also automatically cleared.
+    Setting this to false will allow the raw data and persisted RDDs to be accessible outside the
+    streaming application as they will not be cleared automatically. But it comes at the cost of
+    higher memory usage in Spark.
   </td>
 </tr>
-
 </table>
 
+#### Cluster Managers (YARN, Mesos, Standalone)
+Each cluster manager in Spark has additional configuration options. Configurations 
+can be found on the pages for each mode:
+
+ * [YARN](running-on-yarn.html#configuration)
+ * [Mesos](running-on-mesos.html)
+ * [Standalone Mode](spark-standalone.html#cluster-launch-scripts)
+
 # Environment Variables
 
 Certain Spark settings can be configured through environment variables, which are read from the
@@ -774,104 +818,15 @@ The following variables can be set in `spark-env.sh`:
   </tr>
 </table>
 
-In addition to the above, there are also options for setting up the Spark [standalone cluster
-scripts](spark-standalone.html#cluster-launch-scripts), such as number of cores to use on each
-machine and maximum memory.
+In addition to the above, there are also options for setting up the Spark
+[standalone cluster scripts](spark-standalone.html#cluster-launch-scripts), such as number of cores
+to use on each machine and maximum memory.
 
-Since `spark-env.sh` is a shell script, some of these can be set programmatically -- for example,
-you might compute `SPARK_LOCAL_IP` by looking up the IP of a specific network interface.
+Since `spark-env.sh` is a shell script, some of these can be set programmatically -- for example, you might
+compute `SPARK_LOCAL_IP` by looking up the IP of a specific network interface.
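
For example, a sketch of such a computed setting in `conf/spark-env.sh`, assuming the interface is named `eth0` and the classic `ifconfig` output format:

{% highlight bash %}
# Derive SPARK_LOCAL_IP from a specific network interface (eth0 is an assumed name).
SPARK_LOCAL_IP=$(/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{print $1}')
export SPARK_LOCAL_IP
{% endhighlight %}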
 
 # Configuring Logging
 
 Spark uses [log4j](http://logging.apache.org/log4j/) for logging. You can configure it by adding a
 `log4j.properties` file in the `conf` directory. One way to start is to copy the existing
 `log4j.properties.template` located there.
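
A minimal sketch of that starting point, followed by an example tweak to the root logger level:

{% highlight bash %}
cp conf/log4j.properties.template conf/log4j.properties
# Then, for example, quiet the console output by editing conf/log4j.properties:
#   log4j.rootCategory=WARN, console
{% endhighlight %}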
-
-# Configuring Ports for Network Security
-
-Spark makes heavy use of the network, and some environments have strict requirements for using tight
-firewall settings.  Below are the primary ports that Spark uses for its communication and how to
-configure those ports.
-
-<table class="table">
-  <tr>
-    <th>From</th><th>To</th><th>Default Port</th><th>Purpose</th><th>Configuration
-    Setting</th><th>Notes</th>
-  </tr>
-  <!-- Web UIs -->
-  <tr>
-    <td>Browser</td>
-    <td>Standalone Cluster Master</td>
-    <td>8080</td>
-    <td>Web UI</td>
-    <td><code>master.ui.port</code></td>
-    <td>Jetty-based</td>
-  </tr>
-  <tr>
-    <td>Browser</td>
-    <td>Worker</td>
-    <td>8081</td>
-    <td>Web UI</td>
-    <td><code>worker.ui.port</code></td>
-    <td>Jetty-based</td>
-  </tr>
-  <tr>
-    <td>Browser</td>
-    <td>Driver</td>
-    <td>4040</td>
-    <td>Web UI</td>
-    <td><code>spark.ui.port</code></td>
-    <td>Jetty-based</td>
-  </tr>
-  <tr>
-    <td>Browser</td>
-    <td>History Server</td>
-    <td>18080</td>
-    <td>Web UI</td>
-    <td><code>spark.history.ui.port</code></td>
-    <td>Jetty-based</td>
-  </tr>
-
-  <!-- Cluster interactions -->
-  <tr>
-    <td>Application</td>
-    <td>Standalone Cluster Master</td>
-    <td>7077</td>
-    <td>Submit job to cluster</td>
-    <td><code>spark.driver.port</code></td>
-    <td>Akka-based.  Set to "0" to choose a port randomly</td>
-  </tr>
-  <tr>
-    <td>Worker</td>
-    <td>Standalone Cluster Master</td>
-    <td>7077</td>
-    <td>Join cluster</td>
-    <td><code>spark.driver.port</code></td>
-    <td>Akka-based.  Set to "0" to choose a port randomly</td>
-  </tr>
-  <tr>
-    <td>Application</td>
-    <td>Worker</td>
-    <td>(random)</td>
-    <td>Join cluster</td>
-    <td><code>SPARK_WORKER_PORT</code> (standalone cluster)</td>
-    <td>Akka-based</td>
-  </tr>
-
-  <!-- Other misc stuff -->
-  <tr>
-    <td>Driver and other Workers</td>
-    <td>Worker</td>
-    <td>(random)</td>
-    <td>
-      <ul>
-        <li>File server for file and jars</li>
-        <li>Http Broadcast</li>
-        <li>Class file server (Spark Shell only)</li>
-      </ul>
-    </td>
-    <td>None</td>
-    <td>Jetty-based.  Each of these services starts on a random port that cannot be configured</td>
-  </tr>
-
-</table>

http://git-wip-us.apache.org/repos/asf/spark/blob/7801d44f/docs/quick-start.md
----------------------------------------------------------------------
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 33a0df1..20e17eb 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -252,11 +252,11 @@ we initialize a SparkContext as part of the program.
 We pass the SparkContext constructor a 
 [SparkConf](api/scala/index.html#org.apache.spark.SparkConf)
 object which contains information about our
-application. We also call sc.addJar to make sure that when our application is launched in cluster
-mode, the jar file containing it will be shipped automatically to worker nodes.
+application. 
 
-This file depends on the Spark API, so we'll also include an sbt configuration file, `simple.sbt`
-which explains that Spark is a dependency. This file also adds a repository that Spark depends on:
+Our application depends on the Spark API, so we'll also include an sbt configuration file, 
+`simple.sbt`, which explains that Spark is a dependency. This file also adds a repository that
+Spark depends on:
 
 {% highlight scala %}
 name := "Simple Project"

http://git-wip-us.apache.org/repos/asf/spark/blob/7801d44f/docs/spark-standalone.md
----------------------------------------------------------------------
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index eb3211b..dca80a9 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -93,7 +93,15 @@ You can optionally configure the cluster further by setting environment variable
   </tr>
   <tr>
     <td><code>SPARK_MASTER_OPTS</code></td>
-    <td>Configuration properties that apply only to the master in the form "-Dx=y" (default: none).</td>
+    <td>Configuration properties that apply only to the master in the form "-Dx=y" (default: none). See below for a list of possible options.</td>
+  </tr>
+  <tr>
+    <td><code>SPARK_LOCAL_DIRS</code></td>
+    <td>
+    Directory to use for "scratch" space in Spark, including map output files and RDDs that get 
+    stored on disk. This should be on a fast, local disk in your system. It can also be a 
+    comma-separated list of multiple directories on different disks.
+    </td>
   </tr>
   <tr>
     <td><code>SPARK_WORKER_CORES</code></td>
@@ -126,7 +134,7 @@ You can optionally configure the cluster further by setting environment variable
   </tr>
   <tr>
     <td><code>SPARK_WORKER_OPTS</code></td>
-    <td>Configuration properties that apply only to the worker in the form "-Dx=y" (default: none).</td>
+    <td>Configuration properties that apply only to the worker in the form "-Dx=y" (default: none). See below for a list of possible options.</td>
   </tr>
   <tr>
     <td><code>SPARK_DAEMON_MEMORY</code></td>
@@ -144,6 +152,73 @@ You can optionally configure the cluster further by setting environment variable
 
 **Note:** The launch scripts do not currently support Windows. To run a Spark cluster on Windows, start the master and workers by hand.
 
+SPARK_MASTER_OPTS supports the following system properties:
+
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+<tr>
+  <td><code>spark.deploy.spreadOut</code></td>
+  <td>true</td>
+  <td>
+    Whether the standalone cluster manager should spread applications out across nodes or try
+    to consolidate them onto as few nodes as possible. Spreading out is usually better for
+    data locality in HDFS, but consolidating is more efficient for compute-intensive workloads.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.deploy.defaultCores</code></td>
+  <td>(infinite)</td>
+  <td>
+    Default number of cores to give to applications in Spark's standalone mode if they don't
+    set <code>spark.cores.max</code>. If not set, applications always get all available
+    cores unless they configure <code>spark.cores.max</code> themselves.
+    Set this lower on a shared cluster to prevent users from grabbing
+    the whole cluster by default.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.worker.timeout</code></td>
+  <td>60</td>
+  <td>
+    Number of seconds after which the standalone deploy master considers a worker lost if it
+    receives no heartbeats.
+  </td>
+</tr>
+</table>
+
+SPARK_WORKER_OPTS supports the following system properties:
+
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+<tr>
+  <td><code>spark.worker.cleanup.enabled</code></td>
+  <td>false</td>
+  <td>
+    Enable periodic cleanup of worker / application directories.  Note that this only affects standalone
+    mode, as YARN works differently. Application directories are cleaned up regardless of whether
+    the application is still running.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.worker.cleanup.interval</code></td>
+  <td>1800 (30 minutes)</td>
+  <td>
+    Controls the interval, in seconds, at which the worker cleans up old application work dirs
+    on the local machine.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.worker.cleanup.appDataTtl</code></td>
+  <td>7 * 24 * 3600 (7 days)</td>
+  <td>
+    The number of seconds to retain application work directories on each worker.  This is a Time To Live
+    and should depend on the amount of available disk space you have.  Application logs and jars are
+    downloaded to each application work dir.  Over time, the work dirs can quickly fill up disk space,
+    especially if you run jobs very frequently.
+  </td>
+</tr>
+</table>
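
As a sketch, these properties could be injected through `conf/spark-env.sh` on the master and worker machines; the values below are examples only:

{% highlight bash %}
# Master: limit the cores handed to applications that do not set spark.cores.max.
export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=4"
# Workers: clean up old application work dirs after one day.
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=86400"
# Scratch space on fast local disks (example paths).
export SPARK_LOCAL_DIRS=/mnt/spark1,/mnt/spark2
{% endhighlight %}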
+
 # Connecting an Application to the Cluster
 
 To run an application on the Spark cluster, simply pass the `spark://IP:PORT` URL of the master to the [`SparkContext`
@@ -212,6 +287,94 @@ In addition, detailed log output for each job is also written to the work direct
 You can run Spark alongside your existing Hadoop cluster by just launching it as a separate service on the same machines. To access Hadoop data from Spark, just use a hdfs:// URL (typically `hdfs://<namenode>:9000/path`, but you can find the right URL on your Hadoop Namenode's web UI). Alternatively, you can set up a separate cluster for Spark, and still have it access HDFS over the network; this will be slower than disk-local access, but may not be a concern if you are still running in the same local area network (e.g. you place a few Spark machines on each rack that you have Hadoop on).
 
 
+# Configuring Ports for Network Security
+
+Spark makes heavy use of the network, and some environments have strict requirements for using tight
+firewall settings.  Below are the primary ports that Spark uses for its communication and how to
+configure those ports.
+
+<table class="table">
+  <tr>
+    <th>From</th><th>To</th><th>Default Port</th><th>Purpose</th><th>Configuration
+    Setting</th><th>Notes</th>
+  </tr>
+  <!-- Web UIs -->
+  <tr>
+    <td>Browser</td>
+    <td>Standalone Cluster Master</td>
+    <td>8080</td>
+    <td>Web UI</td>
+    <td><code>master.ui.port</code></td>
+    <td>Jetty-based</td>
+  </tr>
+  <tr>
+    <td>Browser</td>
+    <td>Driver</td>
+    <td>4040</td>
+    <td>Web UI</td>
+    <td><code>spark.ui.port</code></td>
+    <td>Jetty-based</td>
+  </tr>
+  <tr>
+    <td>Browser</td>
+    <td>History Server</td>
+    <td>18080</td>
+    <td>Web UI</td>
+    <td><code>spark.history.ui.port</code></td>
+    <td>Jetty-based</td>
+  </tr>
+  <tr>
+    <td>Browser</td>
+    <td>Worker</td>
+    <td>8081</td>
+    <td>Web UI</td>
+    <td><code>worker.ui.port</code></td>
+    <td>Jetty-based</td>
+  </tr>
+  <!-- Cluster interactions -->
+  <tr>
+    <td>Application</td>
+    <td>Standalone Cluster Master</td>
+    <td>7077</td>
+    <td>Submit job to cluster</td>
+    <td><code>spark.driver.port</code></td>
+    <td>Akka-based.  Set to "0" to choose a port randomly</td>
+  </tr>
+  <tr>
+    <td>Worker</td>
+    <td>Standalone Cluster Master</td>
+    <td>7077</td>
+    <td>Join cluster</td>
+    <td><code>spark.driver.port</code></td>
+    <td>Akka-based.  Set to "0" to choose a port randomly</td>
+  </tr>
+  <tr>
+    <td>Application</td>
+    <td>Worker</td>
+    <td>(random)</td>
+    <td>Join cluster</td>
+    <td><code>SPARK_WORKER_PORT</code> (standalone cluster)</td>
+    <td>Akka-based</td>
+  </tr>
+
+  <!-- Other misc stuff -->
+  <tr>
+    <td>Driver and other Workers</td>
+    <td>Worker</td>
+    <td>(random)</td>
+    <td>
+      <ul>
+        <li>File server for file and jars</li>
+        <li>Http Broadcast</li>
+        <li>Class file server (Spark Shell only)</li>
+      </ul>
+    </td>
+    <td>None</td>
+    <td>Jetty-based.  Each of these services starts on a random port that cannot be configured</td>
+  </tr>
+
+</table>
+
 # High Availability
 
 By default, standalone scheduling clusters are resilient to Worker failures (insofar as Spark itself is resilient to losing work by moving it to other workers). However, the scheduler uses a Master to make scheduling decisions, and this (by default) creates a single point of failure: if the Master crashes, no new applications can be created. In order to circumvent this, we have two high availability schemes, detailed below.