Posted to commits@samza.apache.org by xi...@apache.org on 2016/10/19 17:21:45 UTC

svn commit: r1765686 [28/42] - in /samza/site: ./ archive/ community/ contribute/ img/0.11/ img/0.11/learn/ img/0.11/learn/documentation/ img/0.11/learn/documentation/comparisons/ img/0.11/learn/documentation/container/ img/0.11/learn/documentation/int...

Added: samza/site/learn/documentation/0.11/jobs/configuration-table.html
URL: http://svn.apache.org/viewvc/samza/site/learn/documentation/0.11/jobs/configuration-table.html?rev=1765686&view=auto
==============================================================================
--- samza/site/learn/documentation/0.11/jobs/configuration-table.html (added)
+++ samza/site/learn/documentation/0.11/jobs/configuration-table.html Wed Oct 19 17:21:41 2016
@@ -0,0 +1,1825 @@
+<!DOCTYPE html>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<html>
+    <head>
+        <meta charset="utf-8">
+        <title>Samza Configuration Reference</title>
+        <style type="text/css">
+            body {
+                font-family: "Helvetica Neue",Helvetica,Arial,sans-serif;
+                font-size: 14px;
+                line-height: 22px;
+                color: #333;
+                background-color: #fff;
+            }
+
+            table {
+                border-collapse: collapse;
+                margin: 1em 0;
+            }
+
+            table th, table td {
+                text-align: left;
+                vertical-align: top;
+                padding: 12px;
+                border-bottom: 1px solid #ccc;
+                border-top: 1px solid #ccc;
+                border-left: 0;
+                border-right: 0;
+            }
+
+            table td.property, table td.default {
+                white-space: nowrap;
+            }
+
+            table th.section {
+                background-color: #eee;
+            }
+
+            table th.section .subtitle {
+                font-weight: normal;
+            }
+
+            code, a.property {
+                font-family: monospace;
+            }
+
+            span.system, span.stream, span.store, span.serde, span.rewriter, span.listener, span.reporter {
+                padding: 1px;
+                margin: 1px;
+                border-width: 1px;
+                border-style: solid;
+                border-radius: 4px;
+            }
+
+            span.system {
+                background-color: #ddf;
+                border-color: #bbd;
+            }
+
+            span.stream {
+                background-color: #dfd;
+                border-color: #bdb;
+            }
+
+            span.store {
+                background-color: #fdf;
+                border-color: #dbd;
+            }
+
+            span.serde {
+                background-color: #fdd;
+                border-color: #dbb;
+            }
+
+            span.rewriter {
+                background-color: #eee;
+                border-color: #ccc;
+            }
+
+            span.listener {
+                background-color: #ffd;
+                border-color: #ddb;
+            }
+
+            span.reporter {
+                background-color: #dff;
+                border-color: #bdd;
+            }
+        </style>
+    </head>
+
+    <body>
+        <h1>Samza Configuration Reference</h1>
+        <p>The following table lists all the standard properties that can be included in a Samza job configuration file.</p>
+        <p>Words highlighted like <span class="system">this</span> are placeholders for your own variable names.</p>
+        <table>
+            <tbody>
+                <tr><th>Name</th><th>Default</th><th>Description</th></tr>
+                <tr>
+                    <th colspan="3" class="section" id="job"><a href="configuration.html">Samza job configuration</a></th>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-factory-class">job.factory.class</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        <strong>Required:</strong> The <a href="job-runner.html">job factory</a> to use for running this job.
+                        The value is a fully-qualified Java classname, which must implement
+                        <a href="../api/javadocs/org/apache/samza/job/StreamJobFactory.html">StreamJobFactory</a>.
+                        Samza ships with three implementations:
+                        <dl>
+                            <dt><code>org.apache.samza.job.local.ThreadJobFactory</code></dt>
+                            <dd>Runs your job on your local machine using threads. This is intended only for
+                                development, not for production deployments.</dd>
+                            <dt><code>org.apache.samza.job.local.ProcessJobFactory</code></dt>
+                            <dd>Runs your job on your local machine as a subprocess. An optional command builder
+                                property can also be specified (see <a href="#task-command-class" class="property">
+                                    task.command.class</a> for details). This is intended only for development,
+                                not for production deployments.</dd>
+                            <dt><code>org.apache.samza.job.yarn.YarnJobFactory</code></dt>
+                            <dd>Runs your job on a YARN grid. See <a href="#yarn">below</a> for YARN-specific configuration.</dd>
+                        </dl>
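+                        For example, to run the job on a YARN grid:
+                        <dl>
+                            <dt>Example: <code>job.factory.class=org.apache.samza.job.yarn.YarnJobFactory</code></dt>
+                        </dl>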
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-name">job.name</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        <strong>Required:</strong> The name of your job. This name appears on the Samza dashboard, and it
+                        is used to tell apart this job's checkpoints from other jobs' checkpoints.
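+                        For example, with a hypothetical job name:
+                        <dl>
+                            <dt>Example: <code>job.name=page-view-counter</code></dt>
+                        </dl>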
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-id">job.id</td>
+                    <td class="default">1</td>
+                    <td class="description">
+                        If you run several instances of your job at the same time, you need to give each execution a
+                        different <code>job.id</code>. This is important, since otherwise the jobs will overwrite each
+                        others' checkpoints, and perhaps interfere with each other in other ways.
+                    </td>
+                </tr>
+                <tr>
+                    <td class="property" id="job-coordinator-system">job.coordinator.system</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        <strong>Required:</strong> The <span class="system">system-name</span> to use for creating and maintaining the <a href="../container/coordinator-stream.html">Coordinator Stream</a>.
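+                        For example, if the coordinator stream should live in a system defined under the hypothetical name <code>my-kafka</code>:
+                        <dl>
+                            <dt>Example: <code>job.coordinator.system=my-kafka</code></dt>
+                        </dl>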
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-coordinator-replication-factor">job.coordinator.<br />replication.factor</td>
+                    <td class="default">3</td>
+                    <td class="description">
+                        If you are using Kafka for the coordinator stream, this is the number of Kafka nodes to which you want the
+                        coordinator topic replicated for durability.
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-coordinator-segment-bytes">job.coordinator.<br />segment.bytes</td>
+                    <td class="default">26214400</td>
+                    <td class="description">
+                        If you are using a Kafka system for the coordinator stream, this is the segment size to be used for the coordinator
+                        topic's log segments. Keeping this number small is useful because it increases the frequency
+                        with which Kafka will garbage-collect old messages.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-coordinator-monitor-partition-change">job.coordinator.<br />monitor-partition-change</td>
+                    <td class="default">false</td>
+                    <td class="description">
+                        If you are using Kafka for the coordinator stream, this configuration enables the Job Coordinator to
+                        detect changes in the partition count of the Kafka input topics. On detection, it updates a Gauge
+                        metric of the format <span class="system">system-name</span>.<span class="stream">stream-name</span>.partitionCount,
+                        which indicates the difference in the partition count from the initial state. Note that currently this
+                        feature works only for Kafka-based systems.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-coordinator-monitor-partition-change-frequency-ms">job.coordinator.<br />monitor-partition-change.frequency.ms</td>
+                    <td class="default">300000</td>
+                    <td class="description">
+                        The frequency at which changes to the input streams' partition counts should be detected. Since a
+                        partition count increase is not a common event, this check can safely be run infrequently.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-config-rewriter-class">job.config.rewriter.<br><span class="rewriter">rewriter-name</span>.class</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        You can optionally define configuration rewriters, which have the opportunity to dynamically
+                        modify the job configuration before the job is started. For example, this can be useful for
+                        pulling configuration from an external configuration management system, or for determining
+                        the set of input streams dynamically at runtime. The value of this property is a
+                        fully-qualified Java classname which must implement
+                        <a href="../api/javadocs/org/apache/samza/config/ConfigRewriter.html">ConfigRewriter</a>.
+                        Samza ships with these rewriters by default:
+                        <dl>
+                            <dt><code>org.apache.samza.config.RegExTopicGenerator</code></dt>
+                            <dd>When consuming from Kafka, this allows you to consume all Kafka topics that match
+                                some regular expression (rather than having to list each topic explicitly).
+                                This rewriter has <a href="#regex-rewriter">additional configuration</a>.</dd>
+                            <dt><code>org.apache.samza.config.EnvironmentConfigRewriter</code></dt>
+                            <dd>This rewriter takes environment variables that are prefixed with <i>SAMZA_</i>
+                                and adds them to the configuration, overriding previous values where they
+                                exist. The keys are lowercased and underscores are converted to dots.</dd>
+                        </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-config-rewriters">job.config.rewriters</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        If you have defined configuration rewriters, you need to list them here, in the order in
+                        which they should be applied. The value of this property is a comma-separated list of
+                        <span class="rewriter">rewriter-name</span> tokens.
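+                        For example, with a hypothetical <span class="rewriter">rewriter-name</span> of <code>env</code> for the EnvironmentConfigRewriter listed above:
+                        <dl>
+                            <dt>Example: <code>job.config.rewriters=env</code></dt>
+                            <dt><code>job.config.rewriter.env.class=org.apache.samza.config.EnvironmentConfigRewriter</code></dt>
+                        </dl>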
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-systemstreampartition-grouper-factory">job.systemstreampartition.<br>grouper.factory</td>
+                    <td class="default">org.apache.samza.<br>container.grouper.stream.<br>GroupByPartitionFactory</td>
+                    <td class="description">
+                        A factory class that is used to determine how input SystemStreamPartitions are grouped together for processing in individual StreamTask instances. The factory must implement the SystemStreamPartitionGrouperFactory interface. Once this configuration is set, it can't be changed, since doing so could violate state semantics, and lead to a loss of data.
+
+                        <dl>
+                          <dt><code>org.apache.samza.container.grouper.stream.GroupByPartitionFactory</code></dt>
+                          <dd>Groups input stream partitions according to their partition number. This grouping leads to a single StreamTask processing all messages for a single partition (e.g. partition 0) across all input streams that have a partition 0. Therefore, the default is that you get one StreamTask for all input partitions with the same partition number. Using this strategy, if two input streams have a partition 0, then messages from both partitions will be routed to a single StreamTask. This partitioning strategy is useful for joining and aggregating streams.</dd>
+                          <dt><code>org.apache.samza.container.grouper.stream.GroupBySystemStreamPartitionFactory</code></dt>
+                          <dd>Assigns each SystemStreamPartition to its own unique StreamTask. The  GroupBySystemStreamPartitionFactory is useful in cases where you want increased parallelism (more containers), and don't care about co-locating partitions for grouping or joins, since it allows for a greater number of StreamTasks to be divided up amongst Samza containers.</dd>
+                        </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-systemstreampartition-matcher-class">job.systemstreampartition.matcher.class</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        If you want to enable static partition assignment, then this is a <strong>required</strong> configuration.
+                        The value of this property is a fully-qualified Java class name that implements the interface
+                        <code>org.apache.samza.system.SystemStreamPartitionMatcher</code>.
+                        Samza ships with two matcher classes:
+                        <dl>
+                            <dt><code>org.apache.samza.system.RangeSystemStreamPartitionMatcher</code></dt>
+                            <dd>This class uses a comma-separated list of ranges to determine which partitions match,
+                                and are thus statically assigned to the job. For example, "2,3,1-2" statically assigns partitions
+                                1, 2, and 3 for all the specified systems and streams (topics in the case of Kafka) to the job.
+                                For config validation, each element in the comma-separated list must conform to one of the
+                                following regexes:
+                                <ul>
+                                    <li><code>"(\\d+)"</code> or </li>
+                                    <li><code>"(\\d+-\\d+)"</code></li>
+                                </ul>
+                                The <code>JobConfig.SSP_MATCHER_CLASS_RANGE</code> constant holds the canonical name of this class.
+                            </dd>
+                        </dl>
+                        <dl>
+                            <dt><code>org.apache.samza.system.RegexSystemStreamPartitionMatcher</code></dt>
+                            <dd>This class uses a standard Java regular expression to determine which partitions match,
+                                and are thus statically assigned to the job. For example, "[1-2]" statically assigns partitions 1 and 2
+                                for all the specified systems and streams (topics in the case of Kafka) to the job.
+                                The <code>JobConfig.SSP_MATCHER_CLASS_REGEX</code> constant holds the canonical name of this class.</dd>
+                        </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job_systemstreampartition_matcher_config_range">job.systemstreampartition.matcher.config.range</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        If <code>job.systemstreampartition.matcher.class</code> is specified, and the value of that property is
+                        <code>org.apache.samza.system.RangeSystemStreamPartitionMatcher</code>, then this property is a
+                        <strong>required</strong> configuration. Specify a comma-separated list of ranges to determine which
+                        partitions match, and are thus statically assigned to the job. For example, "2,3,11-20" statically assigns
+                        partitions 2, 3, and 11 through 20 for all the specified systems and streams (topics in the case of Kafka) to the job.
+                        A single configuration value like "19" is valid as well; this statically assigns partition 19.
+                        For config validation, each element in the comma-separated list must conform to one of the
+                        following regexes:
+                        <ul>
+                            <li><code>"(\\d+)"</code> or </li>
+                            <li><code>"(\\d+-\\d+)"</code> </li>
+                        </ul>
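+                        For example, pairing this property with the range matcher class:
+                        <dl>
+                            <dt>Example: <code>job.systemstreampartition.matcher.class=org.apache.samza.system.RangeSystemStreamPartitionMatcher</code></dt>
+                            <dt><code>job.systemstreampartition.matcher.config.range=2,3,11-20</code></dt>
+                        </dl>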
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job_systemstreampartition_matcher_config_regex">job.systemstreampartition.matcher.config.regex</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        If <code>job.systemstreampartition.matcher.class</code> is specified, and the value of that property is
+                        <code>org.apache.samza.system.RegexSystemStreamPartitionMatcher</code>, then this property is a
+                        <strong>required</strong> configuration. The value should be a valid Java regular expression. For example, "[1-2]"
+                        statically assigns partitions 1 and 2 for all the specified systems and streams (topics in the case of Kafka) to the job.
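+                        For example, pairing this property with the regex matcher class:
+                        <dl>
+                            <dt>Example: <code>job.systemstreampartition.matcher.config.regex=[1-2]</code></dt>
+                        </dl>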
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job_systemstreampartition_matcher_config_job_factory_regex">job.systemstreampartition.matcher.config.job.factory.regex</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        This configuration can be used to specify a Java regular expression to match the <code>StreamJobFactory</code>
+                        for which static partition assignment should be enabled. This configuration allows the partition
+                        assignment feature to be used with custom <code>StreamJobFactory</code> implementations as well.
+                        <p>
+                            This config defaults to the following value:
+                            <code>"org\\.apache\\.samza\\.job\\.local(.*ProcessJobFactory|.*ThreadJobFactory)"</code>,
+                            which enables static partition assignment when <code>job.factory.class</code> is set to
+                            <code>org.apache.samza.job.local.ProcessJobFactory</code> or <code>org.apache.samza.job.local.ThreadJobFactory</code>.
+                        </p>
+                    </td>
+                </tr>
+
+
+                <tr>
+                    <td class="property" id="job-checkpoint-validation-enabled">job.checkpoint.<br>validation.enabled</td>
+                    <td class="default">true</td>
+                    <td class="description">
+                        This setting controls whether the job should fail (true) or merely warn (false) when validation of the checkpoint partition count fails. <br/> <b>CAUTION</b>: use this configuration with care. It should only be used as a workaround after a checkpoint has been auto-created with the wrong number of partitions by mistake.
+                    </td>
+                </tr>
+                <tr>
+                    <td class="property" id="job-security-manager-factory">job.security.manager.factory</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        This is the factory class used to create the proper <a href="../api/javadocs/org/apache/samza/container/SecurityManager.html">SecurityManager</a> to handle security for Samza containers when running in a secure environment, such as YARN with Kerberos enabled.
+                        Samza ships with one security manager by default:
+                        <dl>
+                            <dt><code>org.apache.samza.job.yarn.SamzaYarnSecurityManagerFactory</code></dt>
+                            <dd>Enables Samza containers to run properly in a Kerberos-enabled YARN cluster. Each Samza container, once started, will create a <a href="../api/javadocs/org/apache/samza/job/yarn/SamzaContainerSecurityManager.html">SamzaContainerSecurityManager</a>. SamzaContainerSecurityManager runs on its own thread and updates the user's delegation tokens at the interval specified by <a href="#yarn-token-renewal-interval-seconds" class="property">yarn.token.renewal.interval.seconds</a>. See <a href="../yarn/yarn-security.html">Yarn Security</a> for details.</dd>
+                        </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-container-count">job.container.count</td>
+                    <td class="default">1</td>
+                    <td class="description">
+                        The number of YARN containers to request for running your job. This is the main parameter
+                        for controlling the scale (allocated computing resources) of your job: to increase the
+                        parallelism of processing, you need to increase the number of containers. The minimum is one
+                        container, and the maximum number of containers is the number of task instances (usually the
+                        <a href="../container/samza-container.html#tasks-and-partitions">number of input stream partitions</a>).
+                        Task instances are evenly distributed across the number of containers that you specify.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-container-single-thread-mode">job.container.single.thread.mode</td>
+                    <td class="default">false</td>
+                    <td class="description">
+                        If set to true, Samza will fall back to the legacy single-threaded event loop. The default is false, which enables <a href="../container/event-loop.html">multithreaded execution</a>.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-container-thread-pool-size">job.container.thread.pool.size</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        If configured, the container thread pool will be used to run the synchronous operations of each task in parallel. These operations include StreamTask.process(), WindowableTask.window(), and internally Task.commit(). Note that the thread pool is not applicable to AsyncStreamTask.processAsync(). The size should always be greater than zero. If not configured, all task operations will run in a single thread.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-host_affinity-enabled">job.host-affinity.enabled</td>
+                    <td class="default">false</td>
+                    <td class="description">
+                        This property indicates whether host-affinity is enabled or not. Host-affinity refers to the ability of Samza to request and allocate a container on the same host every time the job is deployed.
+                        When host-affinity is enabled, Samza makes a "best-effort" to honor the host-affinity constraint.
+                        The property <a href="#cluster-manager-container-request-timeout-ms" class="property">cluster-manager.container.request.timeout.ms</a> determines how long to wait before de-prioritizing the host-affinity constraint and assigning the container to any available resource.
+                        <b>Please Note</b>: This feature is tested to work with the FairScheduler in Yarn when continuous-scheduling is enabled.
+                    </td>
+                </tr>
+
+                <tr>
+                    <th colspan="3" class="section" id="task"><a href="../api/overview.html">Task configuration</a></th>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-class">task.class</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        <strong>Required:</strong> The fully-qualified name of the Java class which processes
+                        incoming messages from input streams. The class must implement
+                        <a href="../api/javadocs/org/apache/samza/task/StreamTask.html">StreamTask</a> or
+                        <a href="../api/javadocs/org/apache/samza/task/AsyncStreamTask.html">AsyncStreamTask</a>,
+                        and may optionally implement
+                        <a href="../api/javadocs/org/apache/samza/task/InitableTask.html">InitableTask</a>,
+                        <a href="../api/javadocs/org/apache/samza/task/ClosableTask.html">ClosableTask</a> and/or
+                        <a href="../api/javadocs/org/apache/samza/task/WindowableTask.html">WindowableTask</a>.
+                        The class will be instantiated several times, once for every
+                        <a href="../container/samza-container.html#tasks-and-partitions">input stream partition</a>.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-inputs">task.inputs</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        <strong>Required:</strong> A comma-separated list of streams that are consumed by this job.
+                        Each stream is given in the format
+                        <span class="system">system-name</span>.<span class="stream">stream-name</span>.
+                        For example, if you have one input system called <code>my-kafka</code>, and want to consume two
+                        Kafka topics called <code>PageViewEvent</code> and <code>UserActivityEvent</code>, then you would set
+                        <code>task.inputs=my-kafka.PageViewEvent, my-kafka.UserActivityEvent</code>.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-window-ms">task.window.ms</td>
+                    <td class="default">-1</td>
+                    <td class="description">
+                        If <a href="#task-class" class="property">task.class</a> implements
+                        <a href="../api/javadocs/org/apache/samza/task/WindowableTask.html">WindowableTask</a>, it can
+                        receive a <a href="../container/windowing.html">windowing callback</a> in regular intervals.
+                        This property specifies the time between window() calls, in milliseconds. If the number is
+                        negative (the default), window() is never called. Note that Samza is
+                        <a href="../container/event-loop.html">single-threaded</a>, so a window() call will never
+                        occur concurrently with the processing of a message. If a message is being processed at the
+                        time when a window() call is due, the window() call occurs after the processing of the current
+                        message has completed.
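+                        For example, to request a window() call roughly once a minute:
+                        <dl>
+                            <dt>Example: <code>task.window.ms=60000</code></dt>
+                        </dl>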
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-checkpoint-factory">task.checkpoint.factory</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        To enable <a href="../container/checkpointing.html">checkpointing</a>, you must set
+                        this property to the fully-qualified name of a Java class that implements
+                        <a href="../api/javadocs/org/apache/samza/checkpoint/CheckpointManagerFactory.html">CheckpointManagerFactory</a>.
+                        This is not required, but recommended for most jobs. If you don't configure checkpointing,
+                        and a job or container restarts, it does not remember which messages it has already processed.
+                        Without checkpointing, consumer behavior is determined by the
+                        <a href="#systems-samza-offset-default" class="property">...samza.offset.default</a>
+                        setting, which by default skips any messages that were published while the container was
+                        restarting. Checkpointing allows a job to start up where it previously left off.
+                        Samza ships with two checkpoint managers by default:
+                        <dl>
+                            <dt><code>org.apache.samza.checkpoint.file.FileSystemCheckpointManagerFactory</code></dt>
+                            <dd>Writes checkpoints to files on the local filesystem. You can configure the file path
+                                with the <a href="#task-checkpoint-path" class="property">task.checkpoint.path</a>
+                                property. This is a simple option if your job always runs on the same machine.
+                                On a multi-machine cluster, this would require a network filesystem mount.</dd>
+                            <dt><code>org.apache.samza.checkpoint.kafka.KafkaCheckpointManagerFactory</code></dt>
+                            <dd>Writes checkpoints to a dedicated topic on a Kafka cluster. This is the recommended
+                                option if you are already using Kafka for input or output streams. Use the
+                                <a href="#task-checkpoint-system" class="property">task.checkpoint.system</a>
+                                property to configure which Kafka cluster to use for checkpoints.</dd>
+                        </dl>
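+                        For example, to write checkpoints to a Kafka system defined under the hypothetical name <code>my-kafka</code>:
+                        <dl>
+                            <dt>Example: <code>task.checkpoint.factory=org.apache.samza.checkpoint.kafka.KafkaCheckpointManagerFactory</code></dt>
+                            <dt><code>task.checkpoint.system=my-kafka</code></dt>
+                        </dl>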
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-commit-ms">task.commit.ms</td>
+                    <td class="default">60000</td>
+                    <td class="description">
+                        If <a href="#task-checkpoint-factory" class="property">task.checkpoint.factory</a> is
+                        configured, this property determines how often a checkpoint is written. The value is
+                        the time between checkpoints, in milliseconds. The frequency of checkpointing affects
+                        failure recovery: if a container fails unexpectedly (e.g. due to crash or machine failure)
+                        and is restarted, it resumes processing at the last checkpoint. Any messages processed
+                        since the last checkpoint on the failed container are processed again. Checkpointing
+                        more frequently reduces the number of messages that may be processed twice, but also
+                        uses more resources.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-command-class">task.command.class</td>
+                    <td class="default">org.apache.samza.job.<br>ShellCommandBuilder</td>
+                    <td class="description">
+                        The fully-qualified name of the Java class which determines the command line and environment
+                        variables for a <a href="../container/samza-container.html">container</a>. It must be a subclass of
+                        <a href="../api/javadocs/org/apache/samza/job/CommandBuilder.html">CommandBuilder</a>.
+                        This defaults to <code>task.command.class=org.apache.samza.job.ShellCommandBuilder</code>.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-opts">task.opts</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        Any JVM options to include in the command line when executing Samza containers. For example,
+                        this can be used to set the JVM heap size, to tune the garbage collector, or to enable
+                        <a href="/learn/tutorials/{{site.version}}/remote-debugging-samza.html">remote debugging</a>.
+                        This cannot be used when running with <code>ThreadJobFactory</code>. Anything you put in
+                        <code>task.opts</code> gets forwarded directly to the commandline as part of the JVM invocation.
+                        <dl>
+                            <dt>Example: <code>task.opts=-XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC</code></dt>
+                        </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-java-home">task.java.home</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        The JAVA_HOME path for Samza containers. By setting this property, you can use a java version that is
+                        different from your cluster's java version. Remember to set the <code>yarn.am.java.home</code> as well.
+                        <dl>
+                            <dt>Example: <code>task.java.home=/usr/java/jdk1.8.0_05</code></dt>
+                        </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-execute">task.execute</td>
+                    <td class="default">bin/run-container.sh</td>
+                    <td class="description">
+                        The command that starts a Samza container. The script must be included in the
+                        <a href="packaging.html">job package</a>. There is usually no need to customize this.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-chooser-class">task.chooser.class</td>
+                    <td class="default">org.apache.samza.<br>system.chooser.<br>RoundRobinChooserFactory</td>
+                    <td class="description">
+                        This property can be optionally set to override the default
+                        <a href="../container/streams.html#messagechooser">message chooser</a>, which determines the
+                        order in which messages from multiple input streams are processed. The value of this
+                        property is the fully-qualified name of a Java class that implements
+                        <a href="../api/javadocs/org/apache/samza/system/chooser/MessageChooserFactory.html">MessageChooserFactory</a>.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-drop-deserialization-errors">task.drop.deserialization.errors</td>
+                    <td class="default">false</td>
+                    <td class="description">
+                        This property defines how the system deals with deserialization failures. If set to true, the system will
+                        skip the erroneous messages and keep running. If set to false, the system will throw an exception and fail
+                        the container.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-drop-serialization-errors">task.drop.serialization.errors</td>
+                    <td class="default">false</td>
+                    <td class="description">
+                        This property defines how the system deals with serialization failures. If set to true, the system will
+                        drop the erroneous messages and keep running. If set to false, the system will throw an exception and fail
+                        the container.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-log4j-system">task.log4j.system</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        Specify the system name for the StreamAppender. If this property is not specified in the config,
+                        Samza throws an exception. (See
+                        <a href="logging.html#stream-log4j-appender">Stream Log4j Appender</a>)
+                        <dl>
+                            <dt>Example: <code>task.log4j.system=kafka</code></dt>
+                        </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-log4j-location-info-enabled">task.log4j.location.info.enabled</td>
+                    <td class="default">false</td>
+                    <td class="description">
+                        Defines whether or not to include log4j's LocationInfo data in Log4j StreamAppender messages. LocationInfo includes
+                        information such as the file, class, and line that wrote a log message. This setting is only active if the Log4j
+                        stream appender is being used. (See <a href="logging.html#stream-log4j-appender">Stream Log4j Appender</a>)
+                        <dl>
+                            <dt>Example: <code>task.log4j.location.info.enabled=true</code></dt>
+                        </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-poll-interval-ms">task.poll.interval.ms</td>
+                    <td class="default">50</td>
+                    <td class="description">
+                      Samza's container polls for more messages under two conditions. The first condition arises when there are simply no remaining
+                      buffered messages to process for any input SystemStreamPartition. The second condition arises when some input
+                      SystemStreamPartitions have empty buffers, but some do not. In the latter case, a polling interval is defined to determine how
+                      often to refresh the empty SystemStreamPartition buffers. By default, this interval is 50ms, which means that any empty
+                      SystemStreamPartition buffer will be refreshed at least every 50ms. A higher value here means that empty SystemStreamPartitions
+                      will be refreshed less often, which means more latency is introduced, but less CPU and network will be used. Decreasing this
+                      value means that empty SystemStreamPartitions are refreshed more frequently, thereby introducing less latency, but increasing
+                      CPU and network utilization.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-ignored-exceptions">task.ignored.exceptions</td>
+                    <td class="default"></td>
+                    <td class="description">
+                      This property specifies which exceptions should be ignored if thrown in a task's <code>process</code> or <code>window</code>
+                      methods. The exceptions to be ignored should be a comma-separated list of fully-qualified class names of the exceptions or
+                      <code>*</code> to ignore all exceptions.
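+                      For example, to ignore a specific (hypothetically chosen) exception type:
+                      <dl>
+                          <dt>Example: <code>task.ignored.exceptions=java.lang.NumberFormatException</code></dt>
+                      </dl>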
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-shutdown-ms">task.shutdown.ms</td>
+                    <td class="default">5000</td>
+                    <td class="description">
+                        This property controls how long the Samza container will wait for an orderly shutdown of task instances.
+                    </td>
+                </tr>
+                <tr>
+                    <td class="property" id="task-name-grouper-factory">task.name.grouper.factory</td>
+                    <td class="default">org.apache.samza.<br>container.grouper.task.<br>GroupByContainerCountFactory</td>
+                    <td class="description">
+                        The fully-qualified name of the factory class that builds the TaskNameGrouper.
+                        If this property is not present, it defaults to <code>task.name.grouper.factory=org.apache.samza.container.grouper.task.GroupByContainerCountFactory</code>.<br>
+                        Users can specify a custom implementation of TaskNameGrouperFactory to implement custom logic for grouping the tasks.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-broadcast-inputs">task.broadcast.inputs</td>
+                    <td class="default"></td>
+                    <td class="description">
+                       This property specifies the partitions that all tasks should consume. The systemStreamPartitions you put
+                       here will be sent to all the tasks.
+                       <dl>
+                         <dt>Format: <span class="system">system-name</span>.<span class="stream">stream-name</span>#<i>partitionId</i>
+                       or <span class="system">system-name</span>.<span class="stream">stream-name</span>#[<i>startingPartitionId</i>-<i>endingPartitionId</i>]</dt>
+                       </dl>
+                       <dl>
+                            <dt>Example: <code>task.broadcast.inputs=mySystem.broadcastStream#[0-2], mySystem.broadcastStream#0</code></dt>
+                       </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-max-concurrency">task.max.concurrency</td>
+                    <td class="default">1</td>
+                    <td class="description">
+                        The maximum number of outstanding messages being processed per task at a time; this applies to both StreamTask and AsyncStreamTask. The values can be:
+                        <dl>
+                            <dt><code>1</code></dt>
+                            <dd>Each task processes one message at a time. Next message will wait until the current message process completes. This ensures strict in-order processing.</dd>
+                            <dt><code>&gt;1</code></dt>
+                            <dd>Multiple outstanding messages are allowed to be processed per task at a time. The completion can be out of order. This option increases the parallelism within a task, but may result in out-of-order processing.</dd>
+                        </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-callback-timeout-ms">task.callback.timeout.ms</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        This property is for AsyncStreamTask only. It defines the maximum allowed time from the invocation of processAsync() until its callback is fired. When the timeout occurs, Samza throws a TaskCallbackTimeoutException and shuts down the container. The default is no timeout.
+                    </td>
+                </tr>
+
+                <tr>
+                    <th colspan="3" class="section" id="streams"><a href="../container/streams.html">Systems (input and output streams)</a></th>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-factory">systems.<span class="system">system-name</span>.<br>samza.factory</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        <strong>Required:</strong> The fully-qualified name of a Java class which provides a
+                        <em>system</em>. A system can provide input streams which you can consume in your Samza job,
+                        or output streams to which you can write, or both. The requirements on a system are very
+                        flexible &mdash; it may connect to a message broker, or read and write files, or use a database,
+                        or anything else. The class must implement
+                        <a href="../api/javadocs/org/apache/samza/system/SystemFactory.html">SystemFactory</a>.
+                        Samza ships with the following implementations:
+                        <dl>
+                            <dt><code>org.apache.samza.system.kafka.KafkaSystemFactory</code></dt>
+                            <dd>Connects to a cluster of <a href="http://kafka.apache.org/">Kafka</a> brokers, allows
+                                Kafka topics to be consumed as streams in Samza, allows messages to be published to
+                                Kafka topics, and allows Kafka to be used for checkpointing (see
+                                <a href="#task-checkpoint-factory" class="property">task.checkpoint.factory</a>).
+                                See also <a href="#kafka">configuration of a Kafka system</a>.</dd>
+                            <dt><code>org.apache.samza.system.filereader.FileReaderSystemFactory</code></dt>
+                            <dd>Reads data from a file on the local filesystem (the stream name is the path of the
+                                file to read). The file is read as ASCII, and treated as a stream of messages separated
+                                by newline (<code>\n</code>) characters. A task can consume each line of the file as
+                                a <code>java.lang.String</code> object. This system does not provide output streams.</dd>
+                        </dl>
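+                        For example, to define a Kafka system under the hypothetical name <code>my-kafka</code>:
+                        <dl>
+                            <dt>Example: <code>systems.my-kafka.samza.factory=org.apache.samza.system.kafka.KafkaSystemFactory</code></dt>
+                        </dl>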
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-key-serde">systems.<span class="system">system-name</span>.<br>samza.key.serde</td>
+                    <td class="default" rowspan="2"></td>
+                    <td class="description" rowspan="2">
+                        The <a href="../container/serialization.html">serde</a> which will be used to deserialize the
+                        <em>key</em> of messages on input streams, and to serialize the <em>key</em> of messages on
+                        output streams. This property can be defined either for an individual stream, or for all
+                        streams within a system (if both are defined, the stream-level definition takes precedence).
+                        The value of this property must be a <span class="serde">serde-name</span> that is registered
+                        with <a href="#serializers-registry-class" class="property">serializers.registry.*.class</a>.
+                        If this property is not set, messages are passed unmodified between the input stream consumer,
+                        the task and the output stream producer.
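+                        For example, assuming a serde registered under the hypothetical name <code>string</code> (see <a href="#serdes">below</a>), at the system level or the stream level:
+                        <dl>
+                            <dt>Example: <code>systems.my-kafka.samza.key.serde=string</code></dt>
+                            <dt><code>systems.my-kafka.streams.PageViewEvent.samza.key.serde=string</code></dt>
+                        </dl>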
+                    </td>
+                </tr>
+                <tr>
+                    <td class="property">systems.<span class="system">system-name</span>.<br>streams.<span class="stream">stream-name</span>.<br>samza.key.serde</td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-msg-serde">systems.<span class="system">system-name</span>.<br>samza.msg.serde</td>
+                    <td class="default" rowspan="2"></td>
+                    <td class="description" rowspan="2">
+                        The <a href="../container/serialization.html">serde</a> which will be used to deserialize the
+                        <em>value</em> of messages on input streams, and to serialize the <em>value</em> of messages on
+                        output streams. This property can be defined either for an individual stream, or for all
+                        streams within a system (if both are defined, the stream-level definition takes precedence).
+                        The value of this property must be a <span class="serde">serde-name</span> that is registered
+                        with <a href="#serializers-registry-class" class="property">serializers.registry.*.class</a>.
+                        If this property is not set, messages are passed unmodified between the input stream consumer,
+                        the task and the output stream producer.
+                    </td>
+                </tr>
+                <tr>
+                    <td class="property">systems.<span class="system">system-name</span>.<br>streams.<span class="stream">stream-name</span>.<br>samza.msg.serde</td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-offset-default">systems.<span class="system">system-name</span>.<br>samza.offset.default</td>
+                    <td class="default" rowspan="2">upcoming</td>
+                    <td class="description" rowspan="2">
+                        If a container starts up without a <a href="../container/checkpointing.html">checkpoint</a>,
+                        this property determines where in the input stream we should start consuming. The value must be an
+                        <a href="../api/javadocs/org/apache/samza/system/SystemStreamMetadata.OffsetType.html">OffsetType</a>,
+                        one of the following:
+                        <dl>
+                            <dt><code>upcoming</code></dt>
+                            <dd>Start processing messages that are published after the job starts. Any messages published while
+                                the job was not running are not processed.</dd>
+                            <dt><code>oldest</code></dt>
+                            <dd>Start processing at the oldest available message in the system, and
+                                <a href="reprocessing.html">reprocess</a> the entire available message history.</dd>
+                        </dl>
+                        This property can be defined either for an individual stream, or for all streams within a system
+                        (if both are defined, the stream-level definition takes precedence).
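+                        For example, to reprocess the full message history of a system defined under the hypothetical name <code>my-kafka</code>:
+                        <dl>
+                            <dt>Example: <code>systems.my-kafka.samza.offset.default=oldest</code></dt>
+                        </dl>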
+                    </td>
+                </tr>
+                <tr>
+                    <td class="property">systems.<span class="system">system-name</span>.<br>streams.<span class="stream">stream-name</span>.<br>samza.offset.default</td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-streams-samza-reset-offset">systems.<span class="system">system-name</span>.<br>streams.<span class="stream">stream-name</span>.<br>samza.reset.offset</td>
+                    <td>false</td>
+                    <td>
+                        If set to <code>true</code>, when a Samza container starts up, it ignores any
+                        <a href="../container/checkpointing.html">checkpointed offset</a> for this particular input
+                        stream. Its behavior is thus determined by the <code>samza.offset.default</code> setting.
+                        Note that the reset takes effect <em>every time a container is started</em>, which may be
+                        every time you restart your job, or more frequently if a container fails and is restarted
+                        by the framework.
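+                        For example, using hypothetical system and stream names:
+                        <dl>
+                            <dt>Example: <code>systems.my-kafka.streams.PageViewEvent.samza.reset.offset=true</code></dt>
+                        </dl>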
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-streams-samza-priority">systems.<span class="system">system-name</span>.<br>streams.<span class="stream">stream-name</span>.<br>samza.priority</td>
+                    <td>-1</td>
+                    <td>
+                        If one or more streams have a priority set (any positive integer), they will be processed
+                        with <a href="../container/streams.html#prioritizing-input-streams">higher priority</a> than the other streams.
+                        You can set several streams to the same priority, or define multiple priority levels by
+                        assigning a higher number to the higher-priority streams. If a higher-priority stream has
+                        any messages available, they will always be processed first; messages from lower-priority
+                        streams are only processed when there are no new messages on higher-priority inputs.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-streams-samza-bootstrap">systems.<span class="system">system-name</span>.<br>streams.<span class="stream">stream-name</span>.<br>samza.bootstrap</td>
+                    <td>false</td>
+                    <td>
+                        If set to <code>true</code>, this stream will be processed as a
+                        <a href="../container/streams.html#bootstrapping">bootstrap stream</a>. This means that every time
+                        a Samza container starts up, this stream will be fully consumed before messages from any
+                        other stream are processed.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-consumer-batch-size">task.consumer.batch.size</td>
+                    <td>1</td>
+                    <td>
+                        If set to a positive integer, the task will try to consume
+                        <a href="../container/streams.html#batching">batches</a> with the given number of messages
+                        from each input stream, rather than consuming round-robin from all the input streams on
+                        each individual message. Setting this property can improve performance in some cases.
+                    </td>
+                </tr>
+
+                <tr>
+                    <th colspan="3" class="section" id="serdes"><a href="../container/serialization.html">Serializers/Deserializers (Serdes)</a></th>
+                </tr>
+
+                <tr>
+                    <td class="property" id="serializers-registry-class">serializers.registry.<br><span class="serde">serde-name</span>.class</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        Use this property to register a <a href="../container/serialization.html">serializer/deserializer</a>,
+                        which defines a way of encoding application objects as an array of bytes (used for messages
+                        in streams, and for data in persistent storage). You can give a serde any
+                        <span class="serde">serde-name</span> you want, and reference that name in properties like
+                        <a href="#systems-samza-key-serde" class="property">systems.*.samza.key.serde</a>,
+                        <a href="#systems-samza-msg-serde" class="property">systems.*.samza.msg.serde</a>,
+                        <a href="#stores-key-serde" class="property">stores.*.key.serde</a> and
+                        <a href="#stores-msg-serde" class="property">stores.*.msg.serde</a>.
+                        The value of this property is the fully-qualified name of a Java class that implements
+                        <a href="../api/javadocs/org/apache/samza/serializers/SerdeFactory.html">SerdeFactory</a>.
+                        Samza ships with several serdes:
+                        <dl>
+                            <dt><code>org.apache.samza.serializers.ByteSerdeFactory</code></dt>
+                            <dd>A no-op serde which passes through the undecoded byte array.</dd>
+                            <dt><code>org.apache.samza.serializers.IntegerSerdeFactory</code></dt>
+                            <dd>Encodes <code>java.lang.Integer</code> objects as binary (4 bytes fixed-length big-endian encoding).</dd>
+                            <dt><code>org.apache.samza.serializers.StringSerdeFactory</code></dt>
+                            <dd>Encodes <code>java.lang.String</code> objects as UTF-8.</dd>
+                            <dt><code>org.apache.samza.serializers.JsonSerdeFactory</code></dt>
+                            <dd>Encodes nested structures of <code>java.util.Map</code>, <code>java.util.List</code> etc. as JSON.</dd>
+                            <dt><code>org.apache.samza.serializers.LongSerdeFactory</code></dt>
+                            <dd>Encodes <code>java.lang.Long</code> as binary (8 bytes fixed-length big-endian encoding).</dd>
+                            <dt><code>org.apache.samza.serializers.DoubleSerdeFactory</code></dt>
+                            <dd>Encodes <code>java.lang.Double</code> as binary (8 bytes double-precision floating-point encoding).</dd>
+                            <dt><code>org.apache.samza.serializers.MetricsSnapshotSerdeFactory</code></dt>
+                            <dd>Encodes <code>org.apache.samza.metrics.reporter.MetricsSnapshot</code> objects (which are
+                                used for <a href="../container/metrics.html">reporting metrics</a>) as JSON.</dd>
+                            <dt><code>org.apache.samza.serializers.KafkaSerdeFactory</code></dt>
+                            <dd>Adapter which allows existing <code>kafka.serializer.Encoder</code> and
+                                <code>kafka.serializer.Decoder</code> implementations to be used as Samza serdes.
+                                Set serializers.registry.<span class="serde">serde-name</span>.encoder and
+                                serializers.registry.<span class="serde">serde-name</span>.decoder to the appropriate
+                                class names.</dd>
+                        </dl>
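+                        For example, a minimal sketch that registers the JSON serde under the name
+                        <span class="serde">json</span> and uses it for the messages of a system named
+                        <span class="system">kafka</span> (both names are illustrative):
+                        <pre>
+serializers.registry.json.class=org.apache.samza.serializers.JsonSerdeFactory
+systems.kafka.samza.msg.serde=json
+                        </pre>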
+                    </td>
+                </tr>
+
+                <tr>
+                    <th colspan="3" class="section" id="filesystem-checkpoints">
+                        Using the filesystem for checkpoints<br>
+                        <span class="subtitle">
+                            (This section applies if you have set
+                            <a href="#task-checkpoint-factory" class="property">task.checkpoint.factory</a>
+                            <code>= org.apache.samza.checkpoint.file.FileSystemCheckpointManagerFactory</code>)
+                        </span>
+                    </th>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-checkpoint-path">task.checkpoint.path</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        Required if you are using the filesystem for checkpoints. Set this to the path on your local filesystem
+                        where checkpoint files should be stored.
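+                        A minimal sketch (the path is illustrative):
+                        <pre>
+task.checkpoint.factory=org.apache.samza.checkpoint.file.FileSystemCheckpointManagerFactory
+task.checkpoint.path=/var/samza/checkpoints
+                        </pre>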
+                    </td>
+                </tr>
+
+                <tr>
+                    <th colspan="3" class="section" id="elasticsearch">
+                        Using <a href="https://github.com/elastic/elasticsearch">Elasticsearch</a> for output streams<br>
+                        <span class="subtitle">
+                            (This section applies if you have set
+                            <a href="#systems-samza-factory" class="property">systems.*.samza.factory</a>
+                            <code>= org.apache.samza.system.elasticsearch.ElasticsearchSystemFactory</code>)
+                        </span>
+                    </th>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-client-factory-class">systems.<span class="system">system-name</span>.<br>client.factory</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        <strong>Required:</strong> The Elasticsearch client factory used for connecting
+                        to the Elasticsearch cluster. Samza ships with the following implementations:
+                        <dl>
+                            <dt><code>org.apache.samza.system.elasticsearch.client.TransportClientFactory</code></dt>
+                            <dd>Creates a TransportClient that connects to the cluster remotely without
+                                joining it. This requires the transport host and port properties to be set.</dd>
+                            <dt><code>org.apache.samza.system.elasticsearch.client.NodeClientFactory</code></dt>
+                            <dd>Creates a Node client that connects to the cluster by joining it. By default
+                            this uses <a href="http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-zen.html">zen discovery</a> to find the cluster but other methods can be configured.</dd>
+                        </dl>
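+                        For example, a sketch of connecting remotely via the transport client (the
+                        system name, host and port are illustrative):
+                        <pre>
+systems.es.samza.factory=org.apache.samza.system.elasticsearch.ElasticsearchSystemFactory
+systems.es.client.factory=org.apache.samza.system.elasticsearch.client.TransportClientFactory
+systems.es.client.transport.host=es1.example.com
+systems.es.client.transport.port=9300
+                        </pre>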
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-index-request-factory-class">systems.<span class="system">system-name</span>.<br>index.request.factory</td>
+                    <td class="default">org.apache.samza.system</br>.elasticsearch.indexrequest.</br>DefaultIndexRequestFactory</td>
+                    <td class="description">
+                        The index request factory that converts the Samza OutgoingMessageEnvelope into the IndexRequest
+                        to be sent to Elasticsearch. The default IndexRequestFactory behaves as follows:
+                        <dl>
+                            <dt><code>Stream name</code></dt>
+                            <dd>The stream name is of the format {index-name}/{type-name}, which
+                            maps onto the Elasticsearch index and type.</dd>
+                            <dt><code>Message id</code></dt>
+                            <dd>If the message has a key this is set as the document id, otherwise Elasticsearch will generate one for each document.</dd>
+                            <dt><code>Partition id</code></dt>
+                            <dd>If the partition key is set then this is used as the Elasticsearch routing key.</dd>
+                            <dt><code>Message</code></dt>
+                            <dd>The message must be either a <code>byte[]</code>, which is passed directly to Elasticsearch,
+                            or a <code>Map</code>, which the Elasticsearch client serializes into a JSON string. Samza serdes are not currently supported.</dd>
+                        </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-client-host">systems.<span class="system">system-name</span>.<br>client.transport.host</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        <strong>Required</strong> for <code>TransportClientFactory</code>
+                        <p>The hostname that the TransportClientFactory connects to.</p>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-client-port">systems.<span class="system">system-name</span>.<br>client.transport.port</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        <strong>Required</strong> for <code>TransportClientFactory</code>
+                        <p>The port that the TransportClientFactory connects to.</p>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-client-settings">systems.<span class="system">system-name</span>.<br>client.elasticsearch.*</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        Any <a href="http://www.elastic.co/guide/en/elasticsearch/client/java-api/1.x/client.html">Elasticsearch client settings</a> can be used here. They will all be passed to both the transport and node clients.
+                        Some of the common settings you will want to provide are.
+                        <dl>
+                            <dt><code>systems.<span class="system">system-name</span>.client.elasticsearch.cluster.name</code></dt>
+                            <dd>The name of the Elasticsearch cluster the client is connecting to.</dd>
+                            <dt><code>systems.<span class="system">system-name</span>.client.elasticsearch.client.transport.sniff</code></dt>
+                            <dd>If set to <code>true</code> then the transport client will discover and keep
+                                up to date all cluster nodes. This is used for load balancing and fail-over on retries.</dd>
+                        </dl>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-bulk-flush-max-actions">systems.<span class="system">system-name</span>.<br>bulk.flush.max.actions</td>
+                    <td class="default">1000</td>
+                    <td class="description">
+                        The maximum number of messages to be buffered before flushing.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-bulk-flush-max-size-mb">systems.<span class="system">system-name</span>.<br>bulk.flush.max.size.mb</td>
+                    <td class="default">5</td>
+                    <td class="description">
+                        The maximum aggregate size, in megabytes, of buffered messages before flushing.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-bulk-flush-interval-ms">systems.<span class="system">system-name</span>.<br>bulk.flush.interval.ms</td>
+                    <td class="default">never</td>
+                    <td class="description">
+                        How often, in milliseconds, buffered messages should be flushed. By default there is
+                        no time-based flushing; buffered messages are flushed only when one of the other bulk
+                        flush thresholds is reached.
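+                        For example, the three flush settings might be combined like this (the system
+                        name and values are illustrative):
+                        <pre>
+systems.es.bulk.flush.max.actions=500
+systems.es.bulk.flush.max.size.mb=10
+systems.es.bulk.flush.interval.ms=10000
+                        </pre>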
+                    </td>
+                </tr>
+
+                <tr>
+                    <th colspan="3" class="section" id="kafka">
+                        Using <a href="http://kafka.apache.org/">Kafka</a> for input streams, output streams and checkpoints<br>
+                        <span class="subtitle">
+                            (This section applies if you have set
+                            <a href="#systems-samza-factory" class="property">systems.*.samza.factory</a>
+                            <code>= org.apache.samza.system.kafka.KafkaSystemFactory</code>)
+                        </span>
+                    </th>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-consumer-zookeeper-connect">systems.<span class="system">system-name</span>.<br>consumer.zookeeper.connect</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        The hostname and port of one or more Zookeeper nodes where information about the
+                        Kafka cluster can be found. This is given as a comma-separated list of
+                        <code>hostname:port</code> pairs, such as
+                        <code>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</code>.
+                        If the cluster information is at some sub-path of the Zookeeper namespace, you need to
+                        include the path at the end of the list of hostnames, for example:
+                        <code>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/clusters/my-kafka</code>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-consumer-auto-offset-reset">systems.<span class="system">system-name</span>.<br>consumer.auto.offset.reset</td>
+                    <td class="default">largest</td>
+                    <td class="description">
+                        This setting determines what happens if a consumer attempts to read an offset that is
+                        outside of the current valid range. This could happen if the topic does not exist, or
+                        if a checkpoint is older than the maximum message history retained by the brokers.
+                        This property is not to be confused with
+                        <a href="#systems-samza-offset-default">systems.*.samza.offset.default</a>,
+                        which determines what happens if there is no checkpoint. The following are valid
+                        values for <code>auto.offset.reset</code>:
+                        <dl>
+                            <dt><code>smallest</code></dt>
+                            <dd>Start consuming at the smallest (oldest) offset available on the broker
+                                (process as much message history as available).</dd>
+                            <dt><code>largest</code></dt>
+                            <dd>Start consuming at the largest (newest) offset available on the broker
+                                (skip any messages published while the job was not running).</dd>
+                            <dt>anything else</dt>
+                            <dd>Throw an exception and refuse to start up the job.</dd>
+                        </dl>
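+                        For example, to process as much message history as the brokers retain whenever
+                        no valid offset is available (the system name is illustrative):
+                        <pre>
+systems.kafka.consumer.auto.offset.reset=smallest
+                        </pre>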
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-consumer">systems.<span class="system">system-name</span>.<br>consumer.*</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        Any <a href="http://kafka.apache.org/documentation.html#consumerconfigs">Kafka consumer configuration</a>
+                        can be included here. For example, to change the socket timeout, you can set
+                        systems.<span class="system">system-name</span>.consumer.socket.timeout.ms.
+                        (There is no need to configure <code>group.id</code> or <code>client.id</code>,
+                        as they are automatically configured by Samza. Also, there is no need to set
+                        <code>auto.commit.enable</code> because Samza has its own checkpointing mechanism.)
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-producer-bootstrap-servers">systems.<span class="system">system-name</span>.<br>producer.bootstrap.servers</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        <b>Note</b>:
+                        <i>This property was previously named "producer.metadata.broker.list"; that name is deprecated as of this version.</i>
+                        <br />
+                        A list of network endpoints where the Kafka brokers are running. This is given as
+                        a comma-separated list of <code>hostname:port</code> pairs, for example
+                        <code>kafka1.example.com:9092,kafka2.example.com:9092,kafka3.example.com:9092</code>.
+                        It's not necessary to list every single Kafka node in the cluster: Samza uses this
+                        property in order to discover which topics and partitions are hosted on which broker.
+                        This property is needed even if you are only consuming from Kafka, and not writing
+                        to it, because Samza uses it to discover metadata about streams being consumed.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-producer">systems.<span class="system">system-name</span>.<br>producer.*</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        Any <a href="http://kafka.apache.org/documentation.html#newproducerconfigs">Kafka producer configuration</a>
+                        can be included here. For example, to change the request timeout, you can set
+                        systems.<span class="system">system-name</span>.producer.timeout.ms.
+                        (There is no need to configure <code>client.id</code> as it is automatically
+                        configured by Samza.)
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-fetch-threshold">systems.<span class="system">system-name</span>.<br>samza.fetch.threshold</td>
+                    <td class="default">50000</td>
+                    <td class="description">
+                        When consuming streams from Kafka, a Samza container maintains an in-memory buffer
+                        for incoming messages in order to increase throughput (the stream task can continue
+                        processing buffered messages while new messages are fetched from Kafka). This
+                        parameter determines the number of messages we aim to buffer across all stream
+                        partitions consumed by a container. For example, if a container consumes 50 partitions,
+                        it will try to buffer 1000 messages per partition by default. When the number of
+                        buffered messages falls below that threshold, Samza fetches more messages from the
+                        Kafka broker to replenish the buffer. Increasing this parameter can increase a job's
+                        processing throughput, but also increases the amount of memory used.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="systems-samza-fetch-threshold-bytes">systems.<span class="system">system-name</span>.<br>samza.fetch.threshold.bytes</td>
+                    <td class="default">-1</td>
+                    <td class="description">
+                        When consuming streams from Kafka, a Samza container maintains an in-memory buffer
+                        for incoming messages in order to increase throughput (the stream task can continue
+                        processing buffered messages while new messages are fetched from Kafka). This
+                        parameter determines the total size, in bytes, of messages the container aims to
+                        buffer across all stream partitions it consumes; the threshold for a single
+                        system/stream/partition is derived from it. Because messages are always fetched
+                        whole, this limit is a soft one: actual usage can reach the limit plus the size of
+                        the largest message in the partition for a given stream. If the value of this
+                        property is greater than 0, it takes precedence over
+                        systems.<span class="system">system-name</span>.samza.fetch.threshold.
+                        <p>For example, if this property is set to 100000 bytes and 50 SystemStreamPartitions
+                        are registered, the per-partition threshold is (100000 / 2) / 50 = 1000 bytes. As this
+                        is a soft limit, the actual usage can be 1000 bytes plus the size of the largest
+                        message. As soon as a SystemStreamPartition's buffered bytes drop below 1000, a fetch
+                        request is issued to get more data for it.</p>
+                        <p>Increasing this parameter decreases the latency between a queue being drained of
+                        messages and new messages being enqueued, but it also increases memory usage, since
+                        more messages are held in memory.</p>
+                        <p>The default value is -1, which disables the byte-based threshold.</p>
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-checkpoint-system">task.checkpoint.system</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        This property is required if you are using Kafka for checkpoints
+                        (<a href="#task-checkpoint-factory" class="property">task.checkpoint.factory</a>
+                        <code>= org.apache.samza.checkpoint.kafka.KafkaCheckpointManagerFactory</code>).
+                        You must set it to the <span class="system">system-name</span> of a Kafka system. The stream
+                        name (topic name) within that system is automatically determined from the job name and ID:
+                        <code>__samza_checkpoint_${<a href="#job-name" class="property">job.name</a>}_${<a href="#job-id" class="property">job.id</a>}</code>
+                        (with underscores in the job name and ID replaced by hyphens).
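+                        A minimal sketch, checkpointing to a Kafka system named
+                        <span class="system">kafka</span> (the system name is illustrative):
+                        <pre>
+task.checkpoint.factory=org.apache.samza.checkpoint.kafka.KafkaCheckpointManagerFactory
+task.checkpoint.system=kafka
+                        </pre>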
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-checkpoint-replication-factor">task.checkpoint.<br>replication.factor</td>
+                    <td class="default">3</td>
+                    <td class="description">
+                        If you are using Kafka for checkpoints, this is the number of Kafka nodes to which you want the
+                        checkpoint topic replicated for durability.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="task-checkpoint-segment-bytes">task.checkpoint.<br>segment.bytes</td>
+                    <td class="default">26214400</td>
+                    <td class="description">
+                        If you are using Kafka for checkpoints, this is the segment size to be used for the checkpoint
+                        topic's log segments. Keeping this number small is useful because it increases the frequency
+                        with which Kafka garbage-collects old checkpoints.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="store-changelog-replication-factor">stores.<span class="store">store-name</span>.changelog.<br>replication.factor</td>
+                    <td class="default">2</td>
+                    <td class="description">
+                        This property defines the number of replicas to use for the changelog stream.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="store-changelog-partitions">stores.<span class="store">store-name</span>.changelog.<br>kafka.topic-level-property</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        This property allows you to specify topic-level settings for the changelog topic to be created.
+                        For example, you can specify the cleanup policy as <code>stores.mystore.changelog.cleanup.policy=delete</code>.
+                        See the <a href="http://kafka.apache.org/documentation.html#configuration">Kafka documentation</a> for more topic-level configuration options.
+                    </td>
+                </tr>
+
+                <tr>
+                    <th colspan="3" class="section" id="regex-rewriter">
+                        Consuming all Kafka topics matching a regular expression<br>
+                        <span class="subtitle">
+                            (This section applies if you have set
+                            <a href="#job-config-rewriter-class" class="property">job.config.rewriter.*.class</a>
+                            <code>= org.apache.samza.config.RegExTopicGenerator</code>)
+                        </span>
+                    </th>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-config-rewriter-system">job.config.rewriter.<br><span class="rewriter">rewriter-name</span>.system</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        Set this property to the <span class="system">system-name</span> of the Kafka system
+                        from which you want to consume all matching topics.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-config-rewriter-regex">job.config.rewriter.<br><span class="rewriter">rewriter-name</span>.regex</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        A regular expression specifying which topics you want to consume within the Kafka system
+                        <a href="#job-config-rewriter-system" class="property">job.config.rewriter.*.system</a>.
+                        Any topics matched by this regular expression will be consumed <em>in addition to</em> any
+                        topics you specify with <a href="#task-inputs" class="property">task.inputs</a>.
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="job-config-rewriter-config">job.config.rewriter.<br><span class="rewriter">rewriter-name</span>.config.*</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        Any properties specified within this namespace are applied to the configuration of streams
+                        that match the regex in
+                        <a href="#job-config-rewriter-regex" class="property">job.config.rewriter.*.regex</a>.
+                        For example, you can set <code>job.config.rewriter.*.config.samza.msg.serde</code> to configure
+                        the deserializer for messages in the matching streams, which is equivalent to setting
+                        <a href="#systems-samza-msg-serde" class="property">systems.*.streams.*.samza.msg.serde</a>
+                        for each topic that matches the regex.
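+                        Putting the rewriter properties together, a sketch that consumes all Kafka topics
+                        starting with "pageviews-" and applies a JSON message serde to them (the rewriter
+                        name, regex, and serde name are illustrative):
+                        <pre>
+job.config.rewriters=regex-input
+job.config.rewriter.regex-input.class=org.apache.samza.config.RegExTopicGenerator
+job.config.rewriter.regex-input.system=kafka
+job.config.rewriter.regex-input.regex=pageviews-.*
+job.config.rewriter.regex-input.config.samza.msg.serde=json
+                        </pre>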
+                    </td>
+                </tr>
+
+                <tr>
+                    <th colspan="3" class="section" id="state"><a href="../container/state-management.html">Storage and State Management</a></th>
+                </tr>
+
+                <tr>
+                    <td class="property" id="stores-factory">stores.<span class="store">store-name</span>.factory</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        This property defines a store, Samza's mechanism for efficient
+                        <a href="../container/state-management.html">stateful stream processing</a>. You can give a
+                        store any <span class="store">store-name</span>, and use that name to get a reference to the
+                        store in your stream task (call
+                        <a href="../api/javadocs/org/apache/samza/task/TaskContext.html#getStore(java.lang.String)">TaskContext.getStore()</a>
+                        in your task's
+                        <a href="../api/javadocs/org/apache/samza/task/InitableTask.html#init(org.apache.samza.config.Config, org.apache.samza.task.TaskContext)">init()</a>
+                        method). The value of this property is the fully-qualified name of a Java class that implements
+                        <a href="../api/javadocs/org/apache/samza/storage/StorageEngineFactory.html">StorageEngineFactory</a>.
+                        Samza currently ships with one storage engine implementation:
+                        <dl>
+                            <dt><code>org.apache.samza.storage.kv.RocksDbKeyValueStorageEngineFactory</code></dt>
+                            <dd>An on-disk storage engine with a key-value interface, implemented using
+                                <a href="http://rocksdb.org/">RocksDB</a>. It supports fast random-access
+                                reads and writes, as well as range queries on keys. RocksDB can be configured with
+                                various <a href="#keyvalue-rocksdb">additional tuning parameters</a>.</dd>
+                        </dl>
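+                        For example, a sketch of a complete RocksDB store definition (the store, serde,
+                        system and stream names are illustrative):
+                        <pre>
+stores.my-store.factory=org.apache.samza.storage.kv.RocksDbKeyValueStorageEngineFactory
+stores.my-store.key.serde=string
+stores.my-store.msg.serde=json
+stores.my-store.changelog=kafka.my-store-changelog
+                        </pre>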
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="stores-key-serde">stores.<span class="store">store-name</span>.key.serde</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        If the storage engine expects keys in the store to be simple byte arrays, this
+                        <a href="../container/serialization.html">serde</a> allows the stream task to access the
+                        store using another object type as key. The value of this property must be a
+                        <span class="serde">serde-name</span> that is registered with
+                        <a href="#serializers-registry-class" class="property">serializers.registry.*.class</a>.
+                        If this property is not set, keys are passed unmodified to the storage engine
+                        (and the <a href="#stores-changelog">changelog stream</a>, if appropriate).
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="stores-msg-serde">stores.<span class="store">store-name</span>.msg.serde</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        If the storage engine expects values in the store to be simple byte arrays, this
+                        <a href="../container/serialization.html">serde</a> allows the stream task to access the
+                        store using another object type as value. The value of this property must be a
+                        <span class="serde">serde-name</span> that is registered with
+                        <a href="#serializers-registry-class" class="property">serializers.registry.*.class</a>.
+                        If this property is not set, values are passed unmodified to the storage engine
+                        (and the <a href="#stores-changelog">changelog stream</a>, if appropriate).
+                    </td>
+                </tr>
+
+                <tr>
+                    <td class="property" id="stores-changelog">stores.<span class="store">store-name</span>.changelog</td>
+                    <td class="default"></td>
+                    <td class="description">
+                        Samza stores are local to a container. If the container fails, the contents of the
+                        store are lost. To prevent loss of data, you need to set this property to configure
+                        a changelog stream: Samza then ensures that writes to the store are replicated to
+                        this stream, and the store is restored from this stream after a failure. The value
+                        of this property is given in the form
+                        <span class="system">system-name</span>.<span class="stream">stream-name</span>.
+                        Any output stream can be used as a changelog, but you must ensure that only one job
+                        ever writes to a given changelog stream (each instance of a job and each store
+                        needs its own changelog stream).
+                    </td>
+                </tr>
+
+                <tr>
+                    <th colspan="3" class="section" id="keyvalue-rocksdb">
+                        Using RocksDB for key-value storage<br>
+                        <span class="subtitle">

[... 537 lines stripped ...]