Posted to commits@samza.apache.org by cr...@apache.org on 2014/07/09 18:34:26 UTC

svn commit: r1609230 [7/9] - in /incubator/samza/site: ./ community/ contribute/ css/ learn/documentation/0.7.0/ learn/documentation/0.7.0/api/ learn/documentation/0.7.0/api/javadocs/ learn/documentation/0.7.0/api/javadocs/org/apache/samza/ learn/docum...

Modified: incubator/samza/site/learn/documentation/0.7.0/container/checkpointing.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/container/checkpointing.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/container/checkpointing.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/container/checkpointing.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>
@@ -144,80 +143,59 @@
 <p>This guarantee is called <em>at-least-once processing</em>: Samza ensures that your job doesn&rsquo;t miss any messages, even if containers need to be restarted. However, it is possible for your job to see the same message more than once when a container is restarted. We are planning to address this in a future version of Samza, but for now it is just something to be aware of: for example, if you are counting page views, a forcefully killed container could cause events to be slightly over-counted. You can reduce duplication by checkpointing more frequently, at a slight performance cost.</p>
 
 <p>For checkpoints to be effective, they need to be written somewhere where they will survive faults. Samza allows you to write checkpoints to the file system (using FileSystemCheckpointManager), but that doesn&rsquo;t help if the machine fails and the container needs to be restarted on another machine. The most common configuration is to use Kafka for checkpointing. You can enable this with the following job configuration:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text"># The name of your job determines the name under which checkpoints will be stored
+job.name=example-job
 
-<div class="highlight"><pre><code class="jproperties"><span class="c"># The name of your job determines the name under which checkpoints will be stored</span>
-<span class="na">job.name</span><span class="o">=</span><span class="s">example-job</span>
-
-<span class="c"># Define a system called &quot;kafka&quot; for consuming and producing to a Kafka cluster</span>
-<span class="na">systems.kafka.samza.factory</span><span class="o">=</span><span class="s">org.apache.samza.system.kafka.KafkaSystemFactory</span>
-
-<span class="c"># Declare that we want our job&#39;s checkpoints to be written to Kafka</span>
-<span class="na">task.checkpoint.factory</span><span class="o">=</span><span class="s">org.apache.samza.checkpoint.kafka.KafkaCheckpointManagerFactory</span>
-<span class="na">task.checkpoint.system</span><span class="o">=</span><span class="s">kafka</span>
-
-<span class="c"># By default, a checkpoint is written every 60 seconds. You can change this if you like.</span>
-<span class="na">task.commit.ms</span><span class="o">=</span><span class="s">60000</span></code></pre></div>
+# Define a system called &quot;kafka&quot; for consuming and producing to a Kafka cluster
+systems.kafka.samza.factory=org.apache.samza.system.kafka.KafkaSystemFactory
 
+# Declare that we want our job&#39;s checkpoints to be written to Kafka
+task.checkpoint.factory=org.apache.samza.checkpoint.kafka.KafkaCheckpointManagerFactory
+task.checkpoint.system=kafka
+
+# By default, a checkpoint is written every 60 seconds. You can change this if you like.
+task.commit.ms=60000
+</code></pre></div>
 <p>In this configuration, Samza writes checkpoints to a separate Kafka topic called __samza_checkpoint_&lt;job-name&gt;_&lt;job-id&gt; (in the example configuration above, the topic would be called __samza_checkpoint_example-job_1). Once per minute, Samza automatically sends a message to this topic, in which the current offsets of the input streams are encoded. When a Samza container starts up, it looks for the most recent offset message in this topic, and loads that checkpoint.</p>
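+<p>The &lt;job-id&gt; part of the topic name comes from the job.id property. The example above does not set it; the _1 suffix reflects what appears to be the default job ID of 1. If you run several instances of the same job, it is worth setting the ID explicitly so that each instance keeps its own checkpoint topic:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text"># Optional: distinguishes multiple instances of the same job (assumed to default to 1)
+job.id=1
+</code></pre></div>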
 
 <p>Sometimes it can be useful to use checkpoints only for some input streams, but not for others. In this case, you can tell Samza to ignore any checkpointed offsets for a particular stream name:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text"># Ignore any checkpoints for the topic &quot;my-special-topic&quot;
+systems.kafka.streams.my-special-topic.samza.reset.offset=true
 
-<div class="highlight"><pre><code class="jproperties"><span class="c"># Ignore any checkpoints for the topic &quot;my-special-topic&quot;</span>
-<span class="na">systems.kafka.streams.my-special-topic.samza.reset.offset</span><span class="o">=</span><span class="s">true</span>
-
-<span class="c"># Always start consuming &quot;my-special-topic&quot; at the oldest available offset</span>
-<span class="na">systems.kafka.streams.my-special-topic.samza.offset.default</span><span class="o">=</span><span class="s">oldest</span></code></pre></div>
-
+# Always start consuming &quot;my-special-topic&quot; at the oldest available offset
+systems.kafka.streams.my-special-topic.samza.offset.default=oldest
+</code></pre></div>
 <p>The following table explains the meaning of these configuration parameters:</p>
 
-<table class="table table-condensed table-bordered table-striped">
-  <thead>
-    <tr>
-      <th>Parameter name</th>
-      <th>Value</th>
-      <th>Meaning</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-      <td rowspan="2" class="nowrap">systems.&lt;system&gt;.<br>streams.&lt;stream&gt;.<br>samza.reset.offset</td>
-      <td>false (default)</td>
-      <td>When container starts up, resume processing from last checkpoint</td>
-    </tr>
-    <tr>
-      <td>true</td>
-      <td>Ignore checkpoint (pretend that no checkpoint is present)</td>
-    </tr>
-    <tr>
-      <td rowspan="2" class="nowrap">systems.&lt;system&gt;.<br>streams.&lt;stream&gt;.<br>samza.offset.default</td>
-      <td>upcoming (default)</td>
-      <td>When container starts and there is no checkpoint (or the checkpoint is ignored), only process messages that are published after the job is started, but no old messages</td>
-    </tr>
-    <tr>
-      <td>oldest</td>
-      <td>When container starts and there is no checkpoint (or the checkpoint is ignored), jump back to the oldest available message in the system, and consume all messages from that point onwards (most likely this means repeated processing of messages already seen previously)</td>
-    </tr>
-  </tbody>
+<table class="documentation">
+  <tr>
+    <th>Parameter name</th>
+    <th>Value</th>
+    <th>Meaning</th>
+  </tr>
+  <tr>
+    <td rowspan="2" class="nowrap">systems.&lt;system&gt;.<br>streams.&lt;stream&gt;.<br>samza.reset.offset</td>
+    <td>false (default)</td>
+    <td>When container starts up, resume processing from last checkpoint</td>
+  </tr>
+  <tr>
+    <td>true</td>
+    <td>Ignore checkpoint (pretend that no checkpoint is present)</td>
+  </tr>
+  <tr>
+    <td rowspan="2" class="nowrap">systems.&lt;system&gt;.<br>streams.&lt;stream&gt;.<br>samza.offset.default</td>
+    <td>upcoming (default)</td>
+    <td>When container starts and there is no checkpoint (or the checkpoint is ignored), only process messages that are published after the job is started, but no old messages</td>
+  </tr>
+  <tr>
+    <td>oldest</td>
+    <td>When container starts and there is no checkpoint (or the checkpoint is ignored), jump back to the oldest available message in the system, and consume all messages from that point onwards (most likely this means repeated processing of messages already seen previously)</td>
+  </tr>
 </table>
 
 <p>Note that the example configuration above causes your tasks to start consuming from the oldest offset <em>every time a container starts up</em>. This is useful in case you have some in-memory state in your tasks that you need to rebuild from source data in an input stream. If you are using streams in this way, you may also find <a href="streams.html">bootstrap streams</a> useful.</p>
 
-<h3 id="manipulating-checkpoints-manually">Manipulating Checkpoints Manually</h3>
-
-<p>If you want to make a one-off change to a job&rsquo;s consumer offsets, for example to force old messages to be <a href="../jobs/reprocessing.html">processed again</a> with a new version of your code, you can use CheckpointTool to inspect and manipulate the job&rsquo;s checkpoint. The tool is included in Samza&rsquo;s <a href="/contribute/code.html">source repository</a>.</p>
-
-<p>To inspect a job&rsquo;s latest checkpoint, you need to specify your job&rsquo;s config file, so that the tool knows which job it is dealing with:</p>
-
-<div class="highlight"><pre><code class="bash">samza-example/target/bin/checkpoint-tool.sh <span class="se">\</span>
-  --config-path<span class="o">=</span>file:///path/to/job/config.properties</code></pre></div>
-
-<p>This command prints out the latest checkpoint in a properties file format. You can save the output to a file, and edit it as you wish. For example, to jump back to the oldest possible point in time, you can set all the offsets to 0. Then you can feed that properties file back into checkpoint-tool.sh and save the modified checkpoint:</p>
-
-<div class="highlight"><pre><code class="bash">samza-example/target/bin/checkpoint-tool.sh <span class="se">\</span>
-  --config-path<span class="o">=</span>file:///path/to/job/config.properties <span class="se">\</span>
-  --new-offsets<span class="o">=</span>file:///path/to/new/offsets.properties</code></pre></div>
-
-<p>Note that Samza only reads checkpoints on container startup. In order for your checkpoint change to take effect, you need to first stop the job, then save the modified offsets, and then start the job again. If you write a checkpoint while the job is running, it will most likely have no effect.</p>
+<p>If you want to make a one-off change to a job&rsquo;s consumer offsets, for example to force old messages to be processed again with a new version of your code, you can use CheckpointTool to manipulate the job&rsquo;s checkpoint. The tool is included in Samza&rsquo;s <a href="/contribute/code.html">source repository</a> and documented in the README.</p>
 
 <h2 id="state-management-&raquo;"><a href="state-management.html">State Management &raquo;</a></h2>
 

Modified: incubator/samza/site/learn/documentation/0.7.0/container/event-loop.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/container/event-loop.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/container/event-loop.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/container/event-loop.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>
@@ -152,13 +151,12 @@
 <p>To receive notifications when such events happen, you can implement the <a href="../api/javadocs/org/apache/samza/task/TaskLifecycleListenerFactory.html">TaskLifecycleListenerFactory</a> interface. It returns a <a href="../api/javadocs/org/apache/samza/task/TaskLifecycleListener.html">TaskLifecycleListener</a>, whose methods are called by Samza at the appropriate times.</p>
 
 <p>You can then tell Samza to use your lifecycle listener with the following properties in your job configuration:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text"># Define a listener called &quot;my-listener&quot; by giving the factory class name
+task.lifecycle.listener.my-listener.class=com.example.foo.MyListenerFactory
 
-<div class="highlight"><pre><code class="jproperties"><span class="c"># Define a listener called &quot;my-listener&quot; by giving the factory class name</span>
-<span class="na">task.lifecycle.listener.my-listener.class</span><span class="o">=</span><span class="s">com.example.foo.MyListenerFactory</span>
-
-<span class="c"># Enable it in this job (multiple listeners can be separated by commas)</span>
-<span class="na">task.lifecycle.listeners</span><span class="o">=</span><span class="s">my-listener</span></code></pre></div>
-
+# Enable it in this job (multiple listeners can be separated by commas)
+task.lifecycle.listeners=my-listener
+</code></pre></div>
 <p>The Samza container creates one instance of your <a href="../api/javadocs/org/apache/samza/task/TaskLifecycleListener.html">TaskLifecycleListener</a>. If the container has multiple task instances (processing different input stream partitions), the beforeInit, afterInit, beforeClose and afterClose methods are called for each task instance. The <a href="../api/javadocs/org/apache/samza/task/TaskContext.html">TaskContext</a> argument of those methods gives you more information about the partitions.</p>
 
 <h2 id="jmx-&raquo;"><a href="jmx.html">JMX &raquo;</a></h2>

Modified: incubator/samza/site/learn/documentation/0.7.0/container/jmx.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/container/jmx.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/container/jmx.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/container/jmx.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>
@@ -126,13 +125,12 @@
 <p>Samza&rsquo;s containers and YARN ApplicationMaster enable <a href="http://docs.oracle.com/javase/tutorial/jmx/">JMX</a> by default. JMX can be used for managing the JVM; for example, you can connect to it using <a href="http://docs.oracle.com/javase/7/docs/technotes/guides/management/jconsole.html">jconsole</a>, which is included in the JDK.</p>
 
 <p>You can tell Samza to publish its internal <a href="metrics.html">metrics</a>, and any custom metrics you define, as JMX MBeans. To enable this, set the following properties in your job configuration:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text"># Define a Samza metrics reporter called &quot;jmx&quot;, which publishes to JMX
+metrics.reporter.jmx.class=org.apache.samza.metrics.reporter.JmxReporterFactory
 
-<div class="highlight"><pre><code class="jproperties"><span class="c"># Define a Samza metrics reporter called &quot;jmx&quot;, which publishes to JMX</span>
-<span class="na">metrics.reporter.jmx.class</span><span class="o">=</span><span class="s">org.apache.samza.metrics.reporter.JmxReporterFactory</span>
-
-<span class="c"># Use it (if you have multiple reporters defined, separate them with commas)</span>
-<span class="na">metrics.reporters</span><span class="o">=</span><span class="s">jmx</span></code></pre></div>
-
+# Use it (if you have multiple reporters defined, separate them with commas)
+metrics.reporters=jmx
+</code></pre></div>
 <p>JMX needs to be configured to use a specific port, but in a distributed environment, there is no way of knowing in advance which ports are available on the machines running your containers. Therefore Samza chooses the JMX port randomly. If you need to connect to it, you can find the port by looking in the container&rsquo;s logs, which report the JMX server details as follows:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">2014-06-02 21:50:17 JmxServer [INFO] According to InetAddress.getLocalHost.getHostName we are samza-grid-1234.example.com
 2014-06-02 21:50:17 JmxServer [INFO] Started JmxServer registry port=50214 server port=50215 url=service:jmx:rmi://localhost:50215/jndi/rmi://localhost:50214/jmxrmi

Modified: incubator/samza/site/learn/documentation/0.7.0/container/metrics.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/container/metrics.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/container/metrics.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/container/metrics.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>
@@ -128,71 +127,68 @@
 <p>Metrics can be reported in various ways. You can expose them via <a href="jmx.html">JMX</a>, which is useful in development. In production, a common setup is for each Samza container to periodically publish its metrics to a &ldquo;metrics&rdquo; Kafka topic, in which the metrics from all Samza jobs are aggregated. You can then consume this stream in another Samza job, and send the metrics to your favorite graphing system such as <a href="http://graphite.wikidot.com/">Graphite</a>.</p>
 
 <p>To set up your job to publish metrics to Kafka, you can use the following configuration:</p>
-
-<div class="highlight"><pre><code class="jproperties"><span class="c"># Define a metrics reporter called &quot;snapshot&quot;, which publishes metrics</span>
-<span class="c"># every 60 seconds.</span>
-<span class="na">metrics.reporters</span><span class="o">=</span><span class="s">snapshot</span>
-<span class="na">metrics.reporter.snapshot.class</span><span class="o">=</span><span class="s">org.apache.samza.metrics.reporter.MetricsSnapshotReporterFactory</span>
-
-<span class="c"># Tell the snapshot reporter to publish to a topic called &quot;metrics&quot;</span>
-<span class="c"># in the &quot;kafka&quot; system.</span>
-<span class="na">metrics.reporter.snapshot.stream</span><span class="o">=</span><span class="s">kafka.metrics</span>
-
-<span class="c"># Encode metrics data as JSON.</span>
-<span class="na">serializers.registry.metrics.class</span><span class="o">=</span><span class="s">org.apache.samza.serializers.MetricsSnapshotSerdeFactory</span>
-<span class="na">systems.kafka.streams.metrics.samza.msg.serde</span><span class="o">=</span><span class="s">metrics</span></code></pre></div>
-
+<div class="highlight"><pre><code class="language-text" data-lang="text"># Define a metrics reporter called &quot;snapshot&quot;, which publishes metrics
+# every 60 seconds.
+metrics.reporters=snapshot
+metrics.reporter.snapshot.class=org.apache.samza.metrics.reporter.MetricsSnapshotReporterFactory
+
+# Tell the snapshot reporter to publish to a topic called &quot;metrics&quot;
+# in the &quot;kafka&quot; system.
+metrics.reporter.snapshot.stream=kafka.metrics
+
+# Encode metrics data as JSON.
+serializers.registry.metrics.class=org.apache.samza.serializers.MetricsSnapshotSerdeFactory
+systems.kafka.streams.metrics.samza.msg.serde=metrics
+</code></pre></div>
 <p>With this configuration, the job automatically sends several JSON-encoded messages to the &ldquo;metrics&rdquo; topic in Kafka every 60 seconds. The messages look something like this:</p>
-
-<div class="highlight"><pre><code class="json"><span class="p">{</span>
-  <span class="nt">&quot;header&quot;</span><span class="p">:</span> <span class="p">{</span>
-    <span class="nt">&quot;container-name&quot;</span><span class="p">:</span> <span class="s2">&quot;samza-container-0&quot;</span><span class="p">,</span>
-    <span class="nt">&quot;host&quot;</span><span class="p">:</span> <span class="s2">&quot;samza-grid-1234.example.com&quot;</span><span class="p">,</span>
-    <span class="nt">&quot;job-id&quot;</span><span class="p">:</span> <span class="s2">&quot;1&quot;</span><span class="p">,</span>
-    <span class="nt">&quot;job-name&quot;</span><span class="p">:</span> <span class="s2">&quot;my-samza-job&quot;</span><span class="p">,</span>
-    <span class="nt">&quot;reset-time&quot;</span><span class="p">:</span> <span class="mi">1401729000347</span><span class="p">,</span>
-    <span class="nt">&quot;samza-version&quot;</span><span class="p">:</span> <span class="s2">&quot;0.0.1&quot;</span><span class="p">,</span>
-    <span class="nt">&quot;source&quot;</span><span class="p">:</span> <span class="s2">&quot;Partition-2&quot;</span><span class="p">,</span>
-    <span class="nt">&quot;time&quot;</span><span class="p">:</span> <span class="mi">1401729420566</span><span class="p">,</span>
-    <span class="nt">&quot;version&quot;</span><span class="p">:</span> <span class="s2">&quot;0.0.1&quot;</span>
-  <span class="p">},</span>
-  <span class="nt">&quot;metrics&quot;</span><span class="p">:</span> <span class="p">{</span>
-    <span class="nt">&quot;org.apache.samza.container.TaskInstanceMetrics&quot;</span><span class="p">:</span> <span class="p">{</span>
-      <span class="nt">&quot;commit-calls&quot;</span><span class="p">:</span> <span class="mi">7</span><span class="p">,</span>
-      <span class="nt">&quot;commit-skipped&quot;</span><span class="p">:</span> <span class="mi">77948</span><span class="p">,</span>
-      <span class="nt">&quot;kafka-input-topic-offset&quot;</span><span class="p">:</span> <span class="s2">&quot;1606&quot;</span><span class="p">,</span>
-      <span class="nt">&quot;messages-sent&quot;</span><span class="p">:</span> <span class="mi">985</span><span class="p">,</span>
-      <span class="nt">&quot;process-calls&quot;</span><span class="p">:</span> <span class="mi">1093</span><span class="p">,</span>
-      <span class="nt">&quot;send-calls&quot;</span><span class="p">:</span> <span class="mi">985</span><span class="p">,</span>
-      <span class="nt">&quot;send-skipped&quot;</span><span class="p">:</span> <span class="mi">76970</span><span class="p">,</span>
-      <span class="nt">&quot;window-calls&quot;</span><span class="p">:</span> <span class="mi">0</span><span class="p">,</span>
-      <span class="nt">&quot;window-skipped&quot;</span><span class="p">:</span> <span class="mi">77955</span>
-    <span class="p">}</span>
-  <span class="p">}</span>
-<span class="p">}</span></code></pre></div>
-
+<div class="highlight"><pre><code class="language-text" data-lang="text">{
+  &quot;header&quot;: {
+    &quot;container-name&quot;: &quot;samza-container-0&quot;,
+    &quot;host&quot;: &quot;samza-grid-1234.example.com&quot;,
+    &quot;job-id&quot;: &quot;1&quot;,
+    &quot;job-name&quot;: &quot;my-samza-job&quot;,
+    &quot;reset-time&quot;: 1401729000347,
+    &quot;samza-version&quot;: &quot;0.0.1&quot;,
+    &quot;source&quot;: &quot;Partition-2&quot;,
+    &quot;time&quot;: 1401729420566,
+    &quot;version&quot;: &quot;0.0.1&quot;
+  },
+  &quot;metrics&quot;: {
+    &quot;org.apache.samza.container.TaskInstanceMetrics&quot;: {
+      &quot;commit-calls&quot;: 7,
+      &quot;commit-skipped&quot;: 77948,
+      &quot;kafka-input-topic-offset&quot;: &quot;1606&quot;,
+      &quot;messages-sent&quot;: 985,
+      &quot;process-calls&quot;: 1093,
+      &quot;send-calls&quot;: 985,
+      &quot;send-skipped&quot;: 76970,
+      &quot;window-calls&quot;: 0,
+      &quot;window-skipped&quot;: 77955
+    }
+  }
+}
+</code></pre></div>
 <p>There is a separate message for each task instance, and the header tells you the job name, job ID and partition of the task. The metrics allow you to see how many messages have been processed and sent, the current offset in the input stream partition, and other details. There are additional messages which give you metrics about the JVM (heap size, garbage collection information, threads etc.), internal metrics of the Kafka producers and consumers, and more.</p>
 
 <p>It&rsquo;s easy to generate custom metrics in your job, if there&rsquo;s some value you want to keep an eye on. You can use Samza&rsquo;s built-in metrics framework, which is similar in design to Coda Hale&rsquo;s <a href="http://metrics.codahale.com/">metrics</a> library. </p>
 
 <p>You can register your custom metrics through a <a href="../api/javadocs/org/apache/samza/metrics/MetricsRegistry.html">MetricsRegistry</a>. Your stream task needs to implement <a href="../api/javadocs/org/apache/samza/task/InitableTask.html">InitableTask</a>, so that you can get the metrics registry from the <a href="../api/javadocs/org/apache/samza/task/TaskContext.html">TaskContext</a>. This simple example shows how to count the number of messages processed by your task:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">public class MyJavaStreamTask implements StreamTask, InitableTask {
+  private Counter messageCount;
 
-<div class="highlight"><pre><code class="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">MyJavaStreamTask</span> <span class="kd">implements</span> <span class="n">StreamTask</span><span class="o">,</span> <span class="n">InitableTask</span> <span class="o">{</span>
-  <span class="kd">private</span> <span class="n">Counter</span> <span class="n">messageCount</span><span class="o">;</span>
-
-  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">init</span><span class="o">(</span><span class="n">Config</span> <span class="n">config</span><span class="o">,</span> <span class="n">TaskContext</span> <span class="n">context</span><span class="o">)</span> <span class="o">{</span>
-    <span class="k">this</span><span class="o">.</span><span class="na">messageCount</span> <span class="o">=</span> <span class="n">context</span>
-      <span class="o">.</span><span class="na">getMetricsRegistry</span><span class="o">()</span>
-      <span class="o">.</span><span class="na">newCounter</span><span class="o">(</span><span class="n">getClass</span><span class="o">().</span><span class="na">getName</span><span class="o">(),</span> <span class="s">&quot;message-count&quot;</span><span class="o">);</span>
-  <span class="o">}</span>
-
-  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">process</span><span class="o">(</span><span class="n">IncomingMessageEnvelope</span> <span class="n">envelope</span><span class="o">,</span>
-                      <span class="n">MessageCollector</span> <span class="n">collector</span><span class="o">,</span>
-                      <span class="n">TaskCoordinator</span> <span class="n">coordinator</span><span class="o">)</span> <span class="o">{</span>
-    <span class="n">messageCount</span><span class="o">.</span><span class="na">inc</span><span class="o">();</span>
-  <span class="o">}</span>
-<span class="o">}</span></code></pre></div>
-
+  public void init(Config config, TaskContext context) {
+    this.messageCount = context
+      .getMetricsRegistry()
+      .newCounter(getClass().getName(), &quot;message-count&quot;);
+  }
+
+  public void process(IncomingMessageEnvelope envelope,
+                      MessageCollector collector,
+                      TaskCoordinator coordinator) {
+    messageCount.inc();
+  }
+}
+</code></pre></div>
 <p>Samza currently supports two kinds of metrics: <a href="../api/javadocs/org/apache/samza/metrics/Counter.html">counters</a> and <a href="../api/javadocs/org/apache/samza/metrics/Gauge.html">gauges</a>. Use a counter when you want to track how often something occurs, and a gauge when you want to report the level of something, such as the size of a buffer. Each task instance (for each input stream partition) gets its own set of metrics.</p>
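+<p>As a rough sketch of the gauge case (assuming a newGauge method on the MetricsRegistry analogous to newCounter above, and a set method on Gauge; check the linked javadocs for the exact signatures), you could report the size of an internal buffer like this:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">public class MyBufferingTask implements StreamTask, InitableTask {
+  private final Queue&lt;IncomingMessageEnvelope&gt; buffer = new LinkedList&lt;IncomingMessageEnvelope&gt;();
+  private Gauge&lt;Integer&gt; bufferSize;
+
+  public void init(Config config, TaskContext context) {
+    // Assumed signature: newGauge(group, name, initialValue)
+    this.bufferSize = context
+      .getMetricsRegistry()
+      .newGauge(getClass().getName(), &quot;buffer-size&quot;, 0);
+  }
+
+  public void process(IncomingMessageEnvelope envelope,
+                      MessageCollector collector,
+                      TaskCoordinator coordinator) {
+    buffer.add(envelope);
+    bufferSize.set(buffer.size());  // report the current level, not a count of events
+  }
+}
+</code></pre></div>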
 
 <p>If you want to report metrics in some other way, e.g. directly to a graphing system (without going via Kafka), you can implement a <a href="../api/javadocs/org/apache/samza/metrics/MetricsReporterFactory.html">MetricsReporterFactory</a> and reference it in your job configuration.</p>

Modified: incubator/samza/site/learn/documentation/0.7.0/container/samza-container.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/container/samza-container.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/container/samza-container.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/container/samza-container.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>
@@ -143,17 +142,16 @@
 <h3 id="tasks-and-partitions">Tasks and Partitions</h3>
 
 <p>When the container starts, it creates instances of the <a href="../api/overview.html">task class</a> that you&rsquo;ve written. If the task class implements the <a href="../api/javadocs/org/apache/samza/task/InitableTask.html">InitableTask</a> interface, the SamzaContainer will also call the init() method.</p>
-
-<div class="highlight"><pre><code class="java"><span class="cm">/** Implement this if you want a callback when your task starts up. */</span>
-<span class="kd">public</span> <span class="kd">interface</span> <span class="nc">InitableTask</span> <span class="o">{</span>
-  <span class="kt">void</span> <span class="nf">init</span><span class="o">(</span><span class="n">Config</span> <span class="n">config</span><span class="o">,</span> <span class="n">TaskContext</span> <span class="n">context</span><span class="o">);</span>
-<span class="o">}</span></code></pre></div>
-
+<div class="highlight"><pre><code class="language-text" data-lang="text">/** Implement this if you want a callback when your task starts up. */
+public interface InitableTask {
+  void init(Config config, TaskContext context);
+}
+</code></pre></div>
 <p>How many instances of your task class are created depends on the number of partitions in the job&rsquo;s input streams. If your Samza job has ten partitions, there will be ten instantiations of your task class: one for each partition. The first task instance will receive all messages for partition one, the second instance will receive all messages for partition two, and so on.</p>
 
 <p><img src="/img/0.7.0/learn/documentation/container/tasks-and-partitions.svg" alt="Illustration of tasks consuming partitions" class="diagram-large"></p>
 
-<p>The number of partitions in the input streams is determined by the systems from which you are consuming. For example, if your input system is Kafka, you can specify the number of partitions when you create a topic from the command line or using the num.partitions in Kafka&rsquo;s server properties file.</p>
+<p>The number of partitions in the input streams is determined by the systems from which you are consuming. For example, if your input system is Kafka, you can specify the number of partitions when you create a topic.</p>
 
 <p>If a Samza job has more than one input stream, the number of task instances for the Samza job is the maximum number of partitions across all input streams. For example, if a Samza job is reading from PageViewEvent (12 partitions), and ServiceMetricEvent (14 partitions), then the Samza job would have 14 task instances (numbered 0 through 13). Task instances 12 and 13 only receive events from ServiceMetricEvent, because there is no corresponding PageViewEvent partition.</p>
 
@@ -173,27 +171,12 @@
 
 <p>If your job has multiple input streams, Samza provides a simple but powerful mechanism for joining data from different streams: each task instance receives messages from one partition of <em>each</em> of the input streams. For example, say you have two input streams, A and B, each with four partitions. Samza creates four task instances to process them, and assigns the partitions as follows:</p>
 
-<table class="table table-condensed table-bordered table-striped">
-  <thead>
-    <tr>
-      <th>Task instance</th>
-      <th>Consumes stream partitions</th>
-    </tr>
-  </thead>
-  <tbody>
-    <tr>
-      <td>0</td><td>stream A partition 0, stream B partition 0</td>
-    </tr>
-    <tr>
-      <td>1</td><td>stream A partition 1, stream B partition 1</td>
-    </tr>
-    <tr>
-      <td>2</td><td>stream A partition 2, stream B partition 2</td>
-    </tr>
-    <tr>
-      <td>3</td><td>stream A partition 3, stream B partition 3</td>
-    </tr>
-  </tbody>
+<table class="documentation">
+<tr><th>Task instance</th><th>Consumes stream partitions</th></tr>
+<tr><td>0</td><td>stream A partition 0, stream B partition 0</td></tr>
+<tr><td>1</td><td>stream A partition 1, stream B partition 1</td></tr>
+<tr><td>2</td><td>stream A partition 2, stream B partition 2</td></tr>
+<tr><td>3</td><td>stream A partition 3, stream B partition 3</td></tr>
 </table>
 
 <p>Thus, if you want two events in different streams to be processed by the same task instance, you need to ensure they are sent to the same partition number. You can achieve this by using the same partitioning key when <a href="../api/overview.html">sending the messages</a>. Joining streams is discussed in detail in the <a href="state-management.html">state management</a> section.</p>
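+<p>A sketch of what this looks like in task code, assuming a three-argument OutgoingMessageEnvelope constructor (stream, key, message), that the envelope key is used to choose the output partition, and a hypothetical &quot;page-view-event&quot; output topic:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">public void process(IncomingMessageEnvelope envelope,
+                    MessageCollector collector,
+                    TaskCoordinator coordinator) {
+  // Hypothetical: the user ID is carried as the incoming message key
+  String userId = (String) envelope.getKey();
+
+  // Send to the (hypothetical) &quot;page-view-event&quot; topic, keyed by user ID, so that
+  // all events for the same user land in the same partition number
+  collector.send(new OutgoingMessageEnvelope(
+      new SystemStream(&quot;kafka&quot;, &quot;page-view-event&quot;),
+      userId,
+      envelope.getMessage()));
+}
+</code></pre></div>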

Modified: incubator/samza/site/learn/documentation/0.7.0/container/serialization.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/container/serialization.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/container/serialization.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/container/serialization.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>
@@ -132,31 +131,30 @@
 </ol>
 
 <p>You can use whatever makes sense for your job; Samza doesn&rsquo;t impose any particular data model or serialization scheme on you. However, the cleanest solution is usually to use Samza&rsquo;s serde layer. The following configuration example shows how to use it.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text"># Define a system called &quot;kafka&quot;
+systems.kafka.samza.factory=org.apache.samza.system.kafka.KafkaSystemFactory
 
-<div class="highlight"><pre><code class="jproperties"><span class="c"># Define a system called &quot;kafka&quot;</span>
-<span class="na">systems.kafka.samza.factory</span><span class="o">=</span><span class="s">org.apache.samza.system.kafka.KafkaSystemFactory</span>
+# The job is going to consume a topic called &quot;PageViewEvent&quot; from the &quot;kafka&quot; system
+task.inputs=kafka.PageViewEvent
 
-<span class="c"># The job is going to consume a topic called &quot;PageViewEvent&quot; from the &quot;kafka&quot; system</span>
-<span class="na">task.inputs</span><span class="o">=</span><span class="s">kafka.PageViewEvent</span>
-
-<span class="c"># Define a serde called &quot;json&quot; which parses/serializes JSON objects</span>
-<span class="na">serializers.registry.json.class</span><span class="o">=</span><span class="s">org.apache.samza.serializers.JsonSerdeFactory</span>
-
-<span class="c"># Define a serde called &quot;integer&quot; which encodes an integer as 4 binary bytes (big-endian)</span>
-<span class="na">serializers.registry.integer.class</span><span class="o">=</span><span class="s">org.apache.samza.serializers.IntegerSerdeFactory</span>
-
-<span class="c"># For messages in the &quot;PageViewEvent&quot; topic, the key (the ID of the user viewing the page)</span>
-<span class="c"># is encoded as a binary integer, and the message is encoded as JSON.</span>
-<span class="na">systems.kafka.streams.PageViewEvent.samza.key.serde</span><span class="o">=</span><span class="s">integer</span>
-<span class="na">systems.kafka.streams.PageViewEvent.samza.msg.serde</span><span class="o">=</span><span class="s">json</span>
-
-<span class="c"># Define a key-value store which stores the most recent page view for each user ID.</span>
-<span class="c"># Again, the key is an integer user ID, and the value is JSON.</span>
-<span class="na">stores.LastPageViewPerUser.factory</span><span class="o">=</span><span class="s">org.apache.samza.storage.kv.KeyValueStorageEngineFactory</span>
-<span class="na">stores.LastPageViewPerUser.changelog</span><span class="o">=</span><span class="s">kafka.last-page-view-per-user</span>
-<span class="na">stores.LastPageViewPerUser.key.serde</span><span class="o">=</span><span class="s">integer</span>
-<span class="na">stores.LastPageViewPerUser.msg.serde</span><span class="o">=</span><span class="s">json</span></code></pre></div>
+# Define a serde called &quot;json&quot; which parses/serializes JSON objects
+serializers.registry.json.class=org.apache.samza.serializers.JsonSerdeFactory
 
+# Define a serde called &quot;integer&quot; which encodes an integer as 4 binary bytes (big-endian)
+serializers.registry.integer.class=org.apache.samza.serializers.IntegerSerdeFactory
+
+# For messages in the &quot;PageViewEvent&quot; topic, the key (the ID of the user viewing the page)
+# is encoded as a binary integer, and the message is encoded as JSON.
+systems.kafka.streams.PageViewEvent.samza.key.serde=integer
+systems.kafka.streams.PageViewEvent.samza.msg.serde=json
+
+# Define a key-value store which stores the most recent page view for each user ID.
+# Again, the key is an integer user ID, and the value is JSON.
+stores.LastPageViewPerUser.factory=org.apache.samza.storage.kv.KeyValueStorageEngineFactory
+stores.LastPageViewPerUser.changelog=kafka.last-page-view-per-user
+stores.LastPageViewPerUser.key.serde=integer
+stores.LastPageViewPerUser.msg.serde=json
+</code></pre></div>
 <p>Each serde is defined with a factory class. Samza comes with several built-in serdes for UTF-8 strings, binary-encoded integers, JSON (requires the samza-serializers dependency) and more. You can also create your own serializer by implementing the <a href="../api/javadocs/org/apache/samza/serializers/SerdeFactory.html">SerdeFactory</a> interface.</p>
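+<p>A minimal sketch of a custom serde, assuming SerdeFactory exposes a single getSerde(name, config) method and that Serde declares toBytes/fromBytes (see the javadoc linked above for the exact signatures):</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">public class Utf8StringSerdeFactory implements SerdeFactory&lt;String&gt; {
+  public Serde&lt;String&gt; getSerde(String name, Config config) {
+    return new Serde&lt;String&gt;() {
+      public byte[] toBytes(String message) {
+        return message.getBytes(java.nio.charset.Charset.forName(&quot;UTF-8&quot;));
+      }
+
+      public String fromBytes(byte[] bytes) {
+        return new String(bytes, java.nio.charset.Charset.forName(&quot;UTF-8&quot;));
+      }
+    };
+  }
+}
+</code></pre></div>
+<p>You would then register it under a name of your choice, for example serializers.registry.utf8.class=com.example.samza.Utf8StringSerdeFactory, and refer to it as &quot;utf8&quot; in the stream and store settings shown above.</p>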
 
 <p>The name you give to a serde (such as &ldquo;json&rdquo; and &ldquo;integer&rdquo; in the example above) is only for convenience in your job configuration; you can choose whatever name you like. For each stream and each state store, you can use the serde name to declare how messages should be serialized and deserialized.</p>

Modified: incubator/samza/site/learn/documentation/0.7.0/container/state-management.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/container/state-management.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/container/state-management.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/container/state-management.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>
@@ -245,51 +244,74 @@
 <p>Samza includes an additional in-memory caching layer in front of LevelDB, which avoids the cost of deserialization for frequently-accessed objects and batches writes. If the same key is updated multiple times in quick succession, the batching coalesces those updates into a single write. The writes are flushed to the changelog when a task <a href="checkpointing.html">commits</a>.</p>
 
 <p>To use a key-value store in your job, add the following to your job config:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text"># Use the key-value store implementation for a store called &quot;my-store&quot;
+stores.my-store.factory=org.apache.samza.storage.kv.KeyValueStorageEngineFactory
 
-<div class="highlight"><pre><code class="jproperties"><span class="c"># Use the key-value store implementation for a store called &quot;my-store&quot;</span>
-<span class="na">stores.my-store.factory</span><span class="o">=</span><span class="s">org.apache.samza.storage.kv.KeyValueStorageEngineFactory</span>
-
-<span class="c"># Use the Kafka topic &quot;my-store-changelog&quot; as the changelog stream for this store.</span>
-<span class="c"># This enables automatic recovery of the store after a failure. If you don&#39;t</span>
-<span class="c"># configure this, no changelog stream will be generated.</span>
-<span class="na">stores.my-store.changelog</span><span class="o">=</span><span class="s">kafka.my-store-changelog</span>
-
-<span class="c"># Encode keys and values in the store as UTF-8 strings.</span>
-<span class="na">serializers.registry.string.class</span><span class="o">=</span><span class="s">org.apache.samza.serializers.StringSerdeFactory</span>
-<span class="na">stores.my-store.key.serde</span><span class="o">=</span><span class="s">string</span>
-<span class="na">stores.my-store.msg.serde</span><span class="o">=</span><span class="s">string</span></code></pre></div>
-
+# Use the Kafka topic &quot;my-store-changelog&quot; as the changelog stream for this store.
+# This enables automatic recovery of the store after a failure. If you don&#39;t
+# configure this, no changelog stream will be generated.
+stores.my-store.changelog=kafka.my-store-changelog
+
+# Encode keys and values in the store as UTF-8 strings.
+serializers.registry.string.class=org.apache.samza.serializers.StringSerdeFactory
+stores.my-store.key.serde=string
+stores.my-store.msg.serde=string
+</code></pre></div>
 <p>See the <a href="serialization.html">serialization section</a> for more information on the <em>serde</em> options.</p>
 
 <p>Here is a simple example that writes every incoming message to the store:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">public class MyStatefulTask implements StreamTask, InitableTask {
+  private KeyValueStore&lt;String, String&gt; store;
 
-<div class="highlight"><pre><code class="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">MyStatefulTask</span> <span class="kd">implements</span> <span class="n">StreamTask</span><span class="o">,</span> <span class="n">InitableTask</span> <span class="o">{</span>
-  <span class="kd">private</span> <span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">store</span><span class="o">;</span>
-
-  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">init</span><span class="o">(</span><span class="n">Config</span> <span class="n">config</span><span class="o">,</span> <span class="n">TaskContext</span> <span class="n">context</span><span class="o">)</span> <span class="o">{</span>
-    <span class="k">this</span><span class="o">.</span><span class="na">store</span> <span class="o">=</span> <span class="o">(</span><span class="n">KeyValueStore</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;)</span> <span class="n">context</span><span class="o">.</span><span class="na">getStore</span><span class="o">(</span><span class="s">&quot;my-store&quot;</span><span class="o">);</span>
-  <span class="o">}</span>
-
-  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">process</span><span class="o">(</span><span class="n">IncomingMessageEnvelope</span> <span class="n">envelope</span><span class="o">,</span>
-                      <span class="n">MessageCollector</span> <span class="n">collector</span><span class="o">,</span>
-                      <span class="n">TaskCoordinator</span> <span class="n">coordinator</span><span class="o">)</span> <span class="o">{</span>
-    <span class="n">store</span><span class="o">.</span><span class="na">put</span><span class="o">((</span><span class="n">String</span><span class="o">)</span> <span class="n">envelope</span><span class="o">.</span><span class="na">getKey</span><span class="o">(),</span> <span class="o">(</span><span class="n">String</span><span class="o">)</span> <span class="n">envelope</span><span class="o">.</span><span class="na">getMessage</span><span class="o">());</span>
-  <span class="o">}</span>
-<span class="o">}</span></code></pre></div>
-
+  public void init(Config config, TaskContext context) {
+    this.store = (KeyValueStore&lt;String, String&gt;) context.getStore(&quot;my-store&quot;);
+  }
+
+  public void process(IncomingMessageEnvelope envelope,
+                      MessageCollector collector,
+                      TaskCoordinator coordinator) {
+    store.put((String) envelope.getKey(), (String) envelope.getMessage());
+  }
+}
+</code></pre></div>
 <p>Here is the complete key-value store API:</p>
-
-<div class="highlight"><pre><code class="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">KeyValueStore</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span> <span class="n">V</span><span class="o">&gt;</span> <span class="o">{</span>
-  <span class="n">V</span> <span class="nf">get</span><span class="o">(</span><span class="n">K</span> <span class="n">key</span><span class="o">);</span>
-  <span class="kt">void</span> <span class="nf">put</span><span class="o">(</span><span class="n">K</span> <span class="n">key</span><span class="o">,</span> <span class="n">V</span> <span class="n">value</span><span class="o">);</span>
-  <span class="kt">void</span> <span class="nf">putAll</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Entry</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span><span class="n">V</span><span class="o">&gt;&gt;</span> <span class="n">entries</span><span class="o">);</span>
-  <span class="kt">void</span> <span class="nf">delete</span><span class="o">(</span><span class="n">K</span> <span class="n">key</span><span class="o">);</span>
-  <span class="n">KeyValueIterator</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span><span class="n">V</span><span class="o">&gt;</span> <span class="n">range</span><span class="o">(</span><span class="n">K</span> <span class="n">from</span><span class="o">,</span> <span class="n">K</span> <span class="n">to</span><span class="o">);</span>
-  <span class="n">KeyValueIterator</span><span class="o">&lt;</span><span class="n">K</span><span class="o">,</span><span class="n">V</span><span class="o">&gt;</span> <span class="n">all</span><span class="o">();</span>
-<span class="o">}</span></code></pre></div>
-
-<p>Additional configuration properties for the key-value store are documented in the <a href="../jobs/configuration-table.html#keyvalue">configuration reference</a>.</p>
-
+<div class="highlight"><pre><code class="language-text" data-lang="text">public interface KeyValueStore&lt;K, V&gt; {
+  V get(K key);
+  void put(K key, V value);
+  void putAll(List&lt;Entry&lt;K,V&gt;&gt; entries);
+  void delete(K key);
+  KeyValueIterator&lt;K,V&gt; range(K from, K to);
+  KeyValueIterator&lt;K,V&gt; all();
+}
+</code></pre></div>
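+<p>As an illustrative sketch (assuming KeyValueIterator follows the usual java.util.Iterator contract and must be closed when you are done with it), a range query over string keys sharing a common prefix might look like this:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">// Hypothetical: keys are UTF-8 strings, which the underlying store orders byte-lexicographically,
+// and the upper bound of range() is assumed to be exclusive. &quot;user.&quot; is the next possible
+// prefix after &quot;user-&quot; in byte order, so this scans every key starting with &quot;user-&quot;.
+KeyValueIterator&lt;String, String&gt; entries = store.range(&quot;user-&quot;, &quot;user.&quot;);
+try {
+  while (entries.hasNext()) {
+    Entry&lt;String, String&gt; entry = entries.next();
+    System.out.println(entry.getKey() + &quot; = &quot; + entry.getValue());
+  }
+} finally {
+  entries.close();  // free the underlying LevelDB iterator
+}
+</code></pre></div>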
+<p>Here is a list of additional configurations accepted by the key-value store, along with their default values:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text"># The number of writes to batch together
+stores.my-store.write.batch.size=500
+
+# The number of objects to keep in Samza&#39;s cache (in front of LevelDB).
+# This must be at least as large as write.batch.size.
+# A cache size of 0 disables all caching and batching.
+stores.my-store.object.cache.size=1000
+
+# The size of the off-heap leveldb block cache in bytes, per container.
+# If you have multiple tasks within one container, each task is given a
+# proportional share of this cache.
+stores.my-store.container.cache.size.bytes=104857600
+
+# The amount of memory leveldb uses for buffering writes before they are
+# written to disk, per container. If you have multiple tasks within one
+# container, each task is given a proportional share of this buffer.
+# This setting also determines the size of leveldb&#39;s segment files.
+stores.my-store.container.write.buffer.size.bytes=33554432
+
+# Enable block compression? (set compression=none to disable)
+stores.my-store.leveldb.compression=snappy
+
+# If compression is enabled, leveldb groups approximately this many
+# uncompressed bytes into one compressed block. You probably don&#39;t need
+# to change this unless you are a compulsive fiddler.
+stores.my-store.leveldb.block.size.bytes=4096
+</code></pre></div>
 <h3 id="implementing-common-use-cases-with-the-key-value-store">Implementing common use cases with the key-value store</h3>
 
 <p>Earlier in this section we discussed some example use cases for stateful stream processing. Let&rsquo;s look at how each of these could be implemented using a key-value storage engine such as Samza&rsquo;s LevelDB.</p>

Modified: incubator/samza/site/learn/documentation/0.7.0/container/streams.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/container/streams.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/container/streams.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/container/streams.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>
@@ -124,49 +123,48 @@
 -->
 
 <p>The <a href="samza-container.html">samza container</a> reads and writes messages using the <a href="../api/javadocs/org/apache/samza/system/SystemConsumer.html">SystemConsumer</a> and <a href="../api/javadocs/org/apache/samza/system/SystemProducer.html">SystemProducer</a> interfaces. You can integrate any message broker with Samza by implementing these two interfaces.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">public interface SystemConsumer {
+  void start();
 
-<div class="highlight"><pre><code class="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">SystemConsumer</span> <span class="o">{</span>
-  <span class="kt">void</span> <span class="nf">start</span><span class="o">();</span>
+  void stop();
 
-  <span class="kt">void</span> <span class="nf">stop</span><span class="o">();</span>
+  void register(
+      SystemStreamPartition systemStreamPartition,
+      String lastReadOffset);
 
-  <span class="kt">void</span> <span class="nf">register</span><span class="o">(</span>
-      <span class="n">SystemStreamPartition</span> <span class="n">systemStreamPartition</span><span class="o">,</span>
-      <span class="n">String</span> <span class="n">lastReadOffset</span><span class="o">);</span>
+  List&lt;IncomingMessageEnvelope&gt; poll(
+      Map&lt;SystemStreamPartition, Integer&gt; systemStreamPartitions,
+      long timeout)
+    throws InterruptedException;
+}
 
-  <span class="n">List</span><span class="o">&lt;</span><span class="n">IncomingMessageEnvelope</span><span class="o">&gt;</span> <span class="nf">poll</span><span class="o">(</span>
-      <span class="n">Map</span><span class="o">&lt;</span><span class="n">SystemStreamPartition</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">systemStreamPartitions</span><span class="o">,</span>
-      <span class="kt">long</span> <span class="n">timeout</span><span class="o">)</span>
-    <span class="kd">throws</span> <span class="n">InterruptedException</span><span class="o">;</span>
-<span class="o">}</span>
+public class IncomingMessageEnvelope {
+  public Object getMessage() { ... }
 
-<span class="kd">public</span> <span class="kd">class</span> <span class="nc">IncomingMessageEnvelope</span> <span class="o">{</span>
-  <span class="kd">public</span> <span class="n">Object</span> <span class="nf">getMessage</span><span class="o">()</span> <span class="o">{</span> <span class="o">...</span> <span class="o">}</span>
+  public Object getKey() { ... }
 
-  <span class="kd">public</span> <span class="n">Object</span> <span class="nf">getKey</span><span class="o">()</span> <span class="o">{</span> <span class="o">...</span> <span class="o">}</span>
+  public SystemStreamPartition getSystemStreamPartition() { ... }
+}
 
-  <span class="kd">public</span> <span class="n">SystemStreamPartition</span> <span class="nf">getSystemStreamPartition</span><span class="o">()</span> <span class="o">{</span> <span class="o">...</span> <span class="o">}</span>
-<span class="o">}</span>
+public interface SystemProducer {
+  void start();
 
-<span class="kd">public</span> <span class="kd">interface</span> <span class="nc">SystemProducer</span> <span class="o">{</span>
-  <span class="kt">void</span> <span class="nf">start</span><span class="o">();</span>
+  void stop();
 
-  <span class="kt">void</span> <span class="nf">stop</span><span class="o">();</span>
+  void register(String source);
 
-  <span class="kt">void</span> <span class="nf">register</span><span class="o">(</span><span class="n">String</span> <span class="n">source</span><span class="o">);</span>
+  void send(String source, OutgoingMessageEnvelope envelope);
 
-  <span class="kt">void</span> <span class="nf">send</span><span class="o">(</span><span class="n">String</span> <span class="n">source</span><span class="o">,</span> <span class="n">OutgoingMessageEnvelope</span> <span class="n">envelope</span><span class="o">);</span>
+  void flush(String source);
+}
 
-  <span class="kt">void</span> <span class="nf">flush</span><span class="o">(</span><span class="n">String</span> <span class="n">source</span><span class="o">);</span>
-<span class="o">}</span>
-
-<span class="kd">public</span> <span class="kd">class</span> <span class="nc">OutgoingMessageEnvelope</span> <span class="o">{</span>
-  <span class="o">...</span>
-  <span class="kd">public</span> <span class="n">Object</span> <span class="nf">getKey</span><span class="o">()</span> <span class="o">{</span> <span class="o">...</span> <span class="o">}</span>
-
-  <span class="kd">public</span> <span class="n">Object</span> <span class="nf">getMessage</span><span class="o">()</span> <span class="o">{</span> <span class="o">...</span> <span class="o">}</span>
-<span class="o">}</span></code></pre></div>
+public class OutgoingMessageEnvelope {
+  ...
+  public Object getKey() { ... }
 
+  public Object getMessage() { ... }
+}
+</code></pre></div>
 <p>Out of the box, Samza supports Kafka (KafkaSystemConsumer and KafkaSystemProducer). However, any message bus system can be plugged in, as long as it can provide the semantics required by Samza, as described in the <a href="../api/javadocs/org/apache/samza/system/SystemConsumer.html">javadoc</a>.</p>
 
 <p>SystemConsumers and SystemProducers may read and write messages of any data type. It&rsquo;s ok if they only support byte arrays &mdash; Samza has a separate <a href="serialization.html">serialization layer</a> which converts to and from objects that application code can use. Samza does not prescribe any particular data model or serialization format.</p>
@@ -184,18 +182,16 @@
 <p>When a Samza container has several incoming messages on different stream partitions, how does it decide which to process first? The behavior is determined by a <a href="../api/javadocs/org/apache/samza/system/chooser/MessageChooser.html">MessageChooser</a>. The default chooser is RoundRobinChooser, but you can override it by implementing a custom chooser.</p>
 
 <p>To plug in your own message chooser, you need to implement the <a href="../api/javadocs/org/apache/samza/system/chooser/MessageChooserFactory.html">MessageChooserFactory</a> interface, and set the &ldquo;task.chooser.class&rdquo; configuration to the fully-qualified class name of your implementation:</p>
-
-<div class="highlight"><pre><code class="jproperties"><span class="na">task.chooser.class</span><span class="o">=</span><span class="s">com.example.samza.YourMessageChooserFactory</span></code></pre></div>
-
+<div class="highlight"><pre><code class="language-text" data-lang="text">task.chooser.class=com.example.samza.YourMessageChooserFactory
+</code></pre></div>
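+<p>For illustration only, a chooser that always prefers messages from one particular stream could look roughly like the sketch below. The class and stream names are invented, and the method signatures follow the MessageChooser and MessageChooserFactory javadocs linked above, so double-check them against your Samza version.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">import java.util.ArrayDeque;
+import java.util.Deque;
+
+import org.apache.samza.config.Config;
+import org.apache.samza.metrics.MetricsRegistry;
+import org.apache.samza.system.IncomingMessageEnvelope;
+import org.apache.samza.system.SystemStreamPartition;
+import org.apache.samza.system.chooser.MessageChooser;
+import org.apache.samza.system.chooser.MessageChooserFactory;
+
+// Hypothetical chooser that hands out messages from the &quot;clicks&quot; stream
+// before messages from any other stream.
+class PreferClicksChooser implements MessageChooser {
+  private final Deque&lt;IncomingMessageEnvelope&gt; preferred = new ArrayDeque&lt;IncomingMessageEnvelope&gt;();
+  private final Deque&lt;IncomingMessageEnvelope&gt; others = new ArrayDeque&lt;IncomingMessageEnvelope&gt;();
+
+  public void start() {}
+
+  public void stop() {}
+
+  public void register(SystemStreamPartition systemStreamPartition, String offset) {}
+
+  public void update(IncomingMessageEnvelope envelope) {
+    if (&quot;clicks&quot;.equals(envelope.getSystemStreamPartition().getStream())) {
+      preferred.add(envelope);
+    } else {
+      others.add(envelope);
+    }
+  }
+
+  public IncomingMessageEnvelope choose() {
+    if (!preferred.isEmpty()) {
+      return preferred.poll();
+    }
+    return others.poll(); // may be null if no message is buffered
+  }
+}
+
+public class YourMessageChooserFactory implements MessageChooserFactory {
+  public MessageChooser getChooser(Config config, MetricsRegistry registry) {
+    return new PreferClicksChooser();
+  }
+}
+</code></pre></div>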
 <h4 id="prioritizing-input-streams">Prioritizing input streams</h4>
 
 <p>There are certain times when messages from one stream should be processed with higher priority than messages from another stream. For example, some Samza jobs consume two streams: one stream is fed by a real-time system and the other stream is fed by a batch system. In this case, it&rsquo;s useful to prioritize the real-time stream over the batch stream, so that the real-time processing doesn&rsquo;t slow down if there is a sudden burst of data on the batch stream.</p>
 
 <p>Samza provides a mechanism to prioritize one stream over another by setting this configuration parameter: systems.&lt;system&gt;.streams.&lt;stream&gt;.samza.priority=&lt;number&gt;. For example:</p>
-
-<div class="highlight"><pre><code class="jproperties"><span class="na">systems.kafka.streams.my-real-time-stream.samza.priority</span><span class="o">=</span><span class="s">2</span>
-<span class="na">systems.kafka.streams.my-batch-stream.samza.priority</span><span class="o">=</span><span class="s">1</span></code></pre></div>
-
+<div class="highlight"><pre><code class="language-text" data-lang="text">systems.kafka.streams.my-real-time-stream.samza.priority=2
+systems.kafka.streams.my-batch-stream.samza.priority=1
+</code></pre></div>
 <p>This declares that my-real-time-stream&rsquo;s messages should be processed with higher priority than my-batch-stream&rsquo;s messages. If my-real-time-stream has any messages available, they are processed first. Only when there are no messages currently waiting on my-real-time-stream does the Samza job continue processing my-batch-stream.</p>
 
 <p>Each priority level gets its own MessageChooser. It is valid to define two streams with the same priority. If messages are available from two streams at the same priority level, it&rsquo;s up to the MessageChooser for that priority level to decide which message should be processed first.</p>
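+<p>For example, the following configuration (with invented stream names) gives two real-time streams the same priority; it is then up to the MessageChooser at that priority level to pick between them:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">systems.kafka.streams.clicks.samza.priority=2
+systems.kafka.streams.impressions.samza.priority=2
+systems.kafka.streams.my-batch-stream.samza.priority=1
+</code></pre></div>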
@@ -211,11 +207,10 @@
 <p>Another difference between a bootstrap stream and a high-priority stream is that the bootstrap stream&rsquo;s special treatment is temporary: when it has been fully consumed (we say it has &ldquo;caught up&rdquo;), its priority drops to be the same as all the other input streams.</p>
 
 <p>To configure a stream called &ldquo;my-bootstrap-stream&rdquo; to be a fully-consumed bootstrap stream, use the following settings:</p>
-
-<div class="highlight"><pre><code class="jproperties"><span class="na">systems.kafka.streams.my-bootstrap-stream.samza.bootstrap</span><span class="o">=</span><span class="s">true</span>
-<span class="na">systems.kafka.streams.my-bootstrap-stream.samza.reset.offset</span><span class="o">=</span><span class="s">true</span>
-<span class="na">systems.kafka.streams.my-bootstrap-stream.samza.offset.default</span><span class="o">=</span><span class="s">oldest</span></code></pre></div>
-
+<div class="highlight"><pre><code class="language-text" data-lang="text">systems.kafka.streams.my-bootstrap-stream.samza.bootstrap=true
+systems.kafka.streams.my-bootstrap-stream.samza.reset.offset=true
+systems.kafka.streams.my-bootstrap-stream.samza.offset.default=oldest
+</code></pre></div>
 <p>The bootstrap=true parameter enables the bootstrap behavior (prioritization over other streams). The combination of reset.offset=true and offset.default=oldest tells Samza to always start reading the stream from the oldest offset, every time a container starts up (rather than starting to read from the most recent checkpoint).</p>
 
 <p>It is valid to define multiple bootstrap streams. In this case, the order in which they are bootstrapped is determined by the priority.</p>
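+<p>For example, the following configuration (the stream names are invented) bootstraps a &ldquo;profiles&rdquo; stream before a &ldquo;settings&rdquo; stream, because profiles has the higher priority:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">systems.kafka.streams.profiles.samza.bootstrap=true
+systems.kafka.streams.profiles.samza.reset.offset=true
+systems.kafka.streams.profiles.samza.offset.default=oldest
+systems.kafka.streams.profiles.samza.priority=2
+
+systems.kafka.streams.settings.samza.bootstrap=true
+systems.kafka.streams.settings.samza.reset.offset=true
+systems.kafka.streams.settings.samza.offset.default=oldest
+systems.kafka.streams.settings.samza.priority=1
+</code></pre></div>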
@@ -225,9 +220,8 @@
 <p>In some cases, you can improve performance by consuming several messages from the same stream partition in sequence. Samza supports this mode of operation, called <em>batching</em>.</p>
 
 <p>For example, if you want to read 100 messages in a row from each stream partition (regardless of the MessageChooser), you can use this configuration parameter:</p>
-
-<div class="highlight"><pre><code class="jproperties"><span class="na">task.consumer.batch.size</span><span class="o">=</span><span class="s">100</span></code></pre></div>
-
+<div class="highlight"><pre><code class="language-text" data-lang="text">task.consumer.batch.size=100
+</code></pre></div>
 <p>With this setting, Samza tries to read a message from the most recently used <a href="../api/javadocs/org/apache/samza/system/SystemStreamPartition.html">SystemStreamPartition</a>. This behavior continues either until no more messages are available for that SystemStreamPartition, or until the batch size has been reached. When that happens, Samza defers to the MessageChooser to determine the next message to process. It then again tries to keep consuming from the chosen message&rsquo;s SystemStreamPartition until the batch size is reached.</p>
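+<p>Conceptually (this is a simplified sketch for illustration, not Samza&rsquo;s actual implementation), the batching behavior can be thought of as a small piece of state that tracks the current partition and how many messages have been taken from it:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">import org.apache.samza.system.IncomingMessageEnvelope;
+import org.apache.samza.system.SystemStreamPartition;
+
+// Simplified illustration of the batching behavior described above.
+class BatchingSketch {
+  private final int batchSize = 100;          // task.consumer.batch.size
+  private SystemStreamPartition currentSsp;   // most recently used partition
+  private int consumedInBatch = 0;
+
+  // Record each envelope as it is handed to the task for processing.
+  void onProcessed(IncomingMessageEnvelope envelope) {
+    if (envelope.getSystemStreamPartition().equals(currentSsp)) {
+      consumedInBatch++;
+    } else {
+      currentSsp = envelope.getSystemStreamPartition();
+      consumedInBatch = 1;
+    }
+  }
+
+  // True while the container should keep reading from currentSsp
+  // instead of deferring to the MessageChooser.
+  boolean stayOnCurrentPartition(boolean currentSspHasMoreMessages) {
+    return currentSspHasMoreMessages &amp;&amp; consumedInBatch &lt; batchSize;
+  }
+}
+</code></pre></div>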
 
 <h2 id="serialization-&raquo;"><a href="serialization.html">Serialization &raquo;</a></h2>

Modified: incubator/samza/site/learn/documentation/0.7.0/container/windowing.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/container/windowing.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/container/windowing.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/container/windowing.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>
@@ -126,34 +125,32 @@
 <p>Sometimes a stream processing job needs to do something in regular time intervals, regardless of how many incoming messages the job is processing. For example, say you want to report the number of page views per minute. To do this, you increment a counter every time you see a page view event. Once per minute, you send the current counter value to an output stream and reset the counter to zero.</p>
 
 <p>Samza&rsquo;s <em>windowing</em> feature provides a way for tasks to do something in regular time intervals, for example once per minute. To enable windowing, you just need to set one property in your job configuration:</p>
-
-<div class="highlight"><pre><code class="jproperties"><span class="c"># Call the window() method every 60 seconds</span>
-<span class="na">task.window.ms</span><span class="o">=</span><span class="s">60000</span></code></pre></div>
-
+<div class="highlight"><pre><code class="language-text" data-lang="text"># Call the window() method every 60 seconds
+task.window.ms=60000
+</code></pre></div>
 <p>Next, your stream task needs to implement the <a href="../api/javadocs/org/apache/samza/task/WindowableTask.html">WindowableTask</a> interface. This interface defines a window() method which is called by Samza in the regular interval that you configured.</p>
 
 <p>For example, this is how you would implement a basic per-minute event counter:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">public class EventCounterTask implements StreamTask, WindowableTask {
 
-<div class="highlight"><pre><code class="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">EventCounterTask</span> <span class="kd">implements</span> <span class="n">StreamTask</span><span class="o">,</span> <span class="n">WindowableTask</span> <span class="o">{</span>
-
-  <span class="kd">public</span> <span class="kd">static</span> <span class="kd">final</span> <span class="n">SystemStream</span> <span class="n">OUTPUT_STREAM</span> <span class="o">=</span>
-    <span class="k">new</span> <span class="nf">SystemStream</span><span class="o">(</span><span class="s">&quot;kafka&quot;</span><span class="o">,</span> <span class="s">&quot;events-per-minute&quot;</span><span class="o">);</span>
-
-  <span class="kd">private</span> <span class="kt">int</span> <span class="n">eventsSeen</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span>
+  public static final SystemStream OUTPUT_STREAM =
+    new SystemStream(&quot;kafka&quot;, &quot;events-per-minute&quot;);
 
-  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">process</span><span class="o">(</span><span class="n">IncomingMessageEnvelope</span> <span class="n">envelope</span><span class="o">,</span>
-                      <span class="n">MessageCollector</span> <span class="n">collector</span><span class="o">,</span>
-                      <span class="n">TaskCoordinator</span> <span class="n">coordinator</span><span class="o">)</span> <span class="o">{</span>
-    <span class="n">eventsSeen</span><span class="o">++;</span>
-  <span class="o">}</span>
-
-  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">window</span><span class="o">(</span><span class="n">MessageCollector</span> <span class="n">collector</span><span class="o">,</span>
-                     <span class="n">TaskCoordinator</span> <span class="n">coordinator</span><span class="o">)</span> <span class="o">{</span>
-    <span class="n">collector</span><span class="o">.</span><span class="na">send</span><span class="o">(</span><span class="k">new</span> <span class="n">OutgoingMessageEnvelope</span><span class="o">(</span><span class="n">OUTPUT_STREAM</span><span class="o">,</span> <span class="n">eventsSeen</span><span class="o">));</span>
-    <span class="n">eventsSeen</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span>
-  <span class="o">}</span>
-<span class="o">}</span></code></pre></div>
+  private int eventsSeen = 0;
 
+  public void process(IncomingMessageEnvelope envelope,
+                      MessageCollector collector,
+                      TaskCoordinator coordinator) {
+    eventsSeen++;
+  }
+
+  public void window(MessageCollector collector,
+                     TaskCoordinator coordinator) {
+    collector.send(new OutgoingMessageEnvelope(OUTPUT_STREAM, eventsSeen));
+    eventsSeen = 0;
+  }
+}
+</code></pre></div>
 <p>If you need to send messages to output streams, you can use the <a href="../api/javadocs/org/apache/samza/task/MessageCollector.html">MessageCollector</a> object passed to the window() method. Please only use that MessageCollector object for sending messages, and don&rsquo;t use it outside of the call to window().</p>
 
 <p>Note that Samza uses <a href="event-loop.html">single-threaded execution</a>, so the window() call can never happen concurrently with a process() call. This has the advantage that you don&rsquo;t need to worry about thread safety in your code (no need to synchronize anything), but the downside that the window() call may be delayed if your process() method takes a long time to return.</p>

Modified: incubator/samza/site/learn/documentation/0.7.0/index.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/index.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/index.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/index.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>
@@ -173,7 +172,6 @@
   <li><a href="jobs/packaging.html">Packaging</a></li>
   <li><a href="jobs/yarn-jobs.html">YARN Jobs</a></li>
   <li><a href="jobs/logging.html">Logging</a></li>
-  <li><a href="jobs/reprocessing.html">Reprocessing</a></li>
 </ul>
 
 <h4>YARN</h4>

Modified: incubator/samza/site/learn/documentation/0.7.0/introduction/architecture.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/introduction/architecture.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/introduction/architecture.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/introduction/architecture.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>
@@ -202,9 +201,8 @@
 <h3 id="example">Example</h3>
 
 <p>Let&rsquo;s take a look at a real example: suppose we want to count the number of page views. In SQL, you would write something like:</p>
-
-<div class="highlight"><pre><code class="sql"><span class="k">SELECT</span> <span class="n">user_id</span><span class="p">,</span> <span class="k">COUNT</span><span class="p">(</span><span class="o">*</span><span class="p">)</span> <span class="k">FROM</span> <span class="n">PageViewEvent</span> <span class="k">GROUP</span> <span class="k">BY</span> <span class="n">user_id</span></code></pre></div>
-
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT user_id, COUNT(*) FROM PageViewEvent GROUP BY user_id
+</code></pre></div>
 <p>Although Samza doesn&rsquo;t support SQL right now, the idea is the same. Two jobs are required to calculate this query: one to group messages by user ID, and the other to do the counting.</p>
 
 <p>In the first job, the grouping is done by sending all messages with the same user ID to the same partition of an intermediate topic. You can do this by using the user ID as the key of the messages that are emitted by the first job, and this key is mapped to one of the intermediate topic&rsquo;s partitions (usually by taking a hash of the key modulo the number of partitions). The second job consumes the intermediate topic. Each task in the second job consumes one partition of the intermediate topic, i.e. all the messages for a subset of user IDs. The task has a counter for each user ID in its partition, and the appropriate counter is incremented every time the task receives a message with a particular user ID.</p>
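+<p>A sketch of such a first job is shown below. Everything here is illustrative: the stream names are invented, and it assumes the page-view event deserializes to a Map containing a &ldquo;user_id&rdquo; field.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">import java.util.Map;
+
+import org.apache.samza.system.IncomingMessageEnvelope;
+import org.apache.samza.system.OutgoingMessageEnvelope;
+import org.apache.samza.system.SystemStream;
+import org.apache.samza.task.MessageCollector;
+import org.apache.samza.task.StreamTask;
+import org.apache.samza.task.TaskCoordinator;
+
+// Hypothetical first job: re-partition page views by user ID so that the
+// second (counting) job sees all events for a given user in one partition.
+public class GroupByUserIdTask implements StreamTask {
+  private static final SystemStream INTERMEDIATE_STREAM =
+    new SystemStream(&quot;kafka&quot;, &quot;page-views-by-user&quot;);
+
+  public void process(IncomingMessageEnvelope envelope,
+                      MessageCollector collector,
+                      TaskCoordinator coordinator) {
+    Map&lt;String, Object&gt; event = (Map&lt;String, Object&gt;) envelope.getMessage();
+    Object userId = event.get(&quot;user_id&quot;);
+    // Using the user ID as the message key means it is hashed to choose the
+    // partition, so all events for one user land in the same partition.
+    collector.send(new OutgoingMessageEnvelope(INTERMEDIATE_STREAM, userId, event));
+  }
+}
+</code></pre></div>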

Modified: incubator/samza/site/learn/documentation/0.7.0/introduction/background.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/introduction/background.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/introduction/background.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/introduction/background.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>

Modified: incubator/samza/site/learn/documentation/0.7.0/introduction/concepts.html
URL: http://svn.apache.org/viewvc/incubator/samza/site/learn/documentation/0.7.0/introduction/concepts.html?rev=1609230&r1=1609229&r2=1609230&view=diff
==============================================================================
--- incubator/samza/site/learn/documentation/0.7.0/introduction/concepts.html (original)
+++ incubator/samza/site/learn/documentation/0.7.0/introduction/concepts.html Wed Jul  9 16:34:23 2014
@@ -23,7 +23,6 @@
     <link href="/css/bootstrap.min.css" rel="stylesheet"/>
     <link href="/css/font-awesome.min.css" rel="stylesheet"/>
     <link href="/css/main.css" rel="stylesheet"/>
-    <link href="/css/syntax.css" rel="stylesheet"/>
     <link rel="icon" type="image/png" href="/img/samza-icon.png">
   </head>
   <body>