Posted to commits@kafka.apache.org by bb...@apache.org on 2019/04/24 22:17:17 UTC

[kafka-site] branch asf-site updated: KAFKA-SITE DOCS 8227 - Add missing links Core Concepts duality of streams tables (#204)

This is an automated email from the ASF dual-hosted git repository.

bbejeck pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 00e2637  KAFKA-SITE DOCS 8227 - Add missing links Core Concepts duality of streams tables (#204)
00e2637 is described below

commit 00e263735fd6254230a552d53e2921405a8dea7e
Author: Victoria Bialas <lo...@users.noreply.github.com>
AuthorDate: Wed Apr 24 15:17:13 2019 -0700

    KAFKA-SITE DOCS 8227 - Add missing links Core Concepts duality of streams tables (#204)
    
    Add missing links Core Concepts duality of streams tables
    
    Reviewers: Joel Hamill <jo...@confluent.io>, Bill Bejeck <bb...@gmail.com>
---
 10/streams/core-concepts.html                 |  3 ++-
 10/streams/developer-guide/processor-api.html |  2 +-
 11/streams/core-concepts.html                 |  5 +++--
 11/streams/developer-guide/processor-api.html |  2 +-
 20/streams/core-concepts.html                 | 26 ++++++++++++++++----------
 21/streams/core-concepts.html                 | 25 ++++++++++++-------------
 22/streams/core-concepts.html                 | 27 +++++++++++++--------------
 22/streams/developer-guide/running-app.html   |  3 ++-
 8 files changed, 50 insertions(+), 43 deletions(-)

diff --git a/10/streams/core-concepts.html b/10/streams/core-concepts.html
index 81bfdf6..13a8b3e 100644
--- a/10/streams/core-concepts.html
+++ b/10/streams/core-concepts.html
@@ -57,13 +57,14 @@
     <p>
         We first summarize the key concepts of Kafka Streams.
     </p>
+    <a id="streams_processor_node" href="#streams_processor_node"></a>
 
     <h3><a id="streams_topology" href="#streams_topology">Stream Processing Topology</a></h3>
 
     <ul>
         <li>A <b>stream</b> is the most important abstraction provided by Kafka Streams: it represents an unbounded, continuously updating data set. A stream is an ordered, replayable, and fault-tolerant sequence of immutable data records, where a <b>data record</b> is defined as a key-value pair.</li>
         <li>A <b>stream processing application</b> is any program that makes use of the Kafka Streams library. It defines its computational logic through one or more <b>processor topologies</b>, where a processor topology is a graph of stream processors (nodes) that are connected by streams (edges).</li>
-        <li>A <b><a href="#streams_processor_node">stream processor</a></b> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
+        <li>A <a id="defining-a-stream-processor" href="/{{version}}/documentation/streams/developer-guide/processor-api#defining-a-stream-processor"><b>stream processor</b></a> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
     </ul>
 
     There are two special processors in the topology:
diff --git a/10/streams/developer-guide/processor-api.html b/10/streams/developer-guide/processor-api.html
index fdf6c86..1e1df65 100644
--- a/10/streams/developer-guide/processor-api.html
+++ b/10/streams/developer-guide/processor-api.html
@@ -66,7 +66,7 @@
         </div>
         <div class="section" id="defining-a-stream-processor">
             <span id="streams-developer-guide-stream-processor"></span><h2><a class="toc-backref" href="#id2">Defining a Stream Processor</a><a class="headerlink" href="#defining-a-stream-processor" title="Permalink to this headline"></a></h2>
-            <p>A <a class="reference internal" href="../concepts.html#streams-concepts"><span class="std std-ref">stream processor</span></a> is a node in the processor topology that represents a single processing step.
+            <p>A <a class="reference internal" href="../core-concepts.html#streams_processor_node"><span class="std std-ref">stream processor</span></a> is a node in the processor topology that represents a single processing step.
                 With the Processor API, you can define arbitrary stream processors that processes one received record at a time, and connect
                 these processors with their associated state stores to compose the processor topology.</p>
             <p>You can define a customized stream processor by implementing the <code class="docutils literal"><span class="pre">Processor</span></code> interface, which provides the <code class="docutils literal"><span class="pre">process()</span></code> API method.
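The hunk above links the "stream processor" concept to the Processor API docs, which describe a node that receives one input record at a time, applies its operation, and may forward output records downstream. A minimal toy sketch of that idea in plain Python (illustrative only; the real Kafka Streams `Processor` is a Java interface with a `process()` method and access to state stores):

```python
# Toy model of a stream processor node: receives one key-value record
# at a time, applies its operation, and may forward zero or more
# records to its downstream processors. Not the Kafka Streams API.

class ToyProcessor:
    def __init__(self, transform, downstream=None):
        self.transform = transform          # (key, value) -> list of (key, value)
        self.downstream = downstream or []  # child processor nodes

    def process(self, key, value):
        # Apply this node's operation to a single input record,
        # then forward each output record downstream.
        for out_key, out_value in self.transform(key, value):
            for child in self.downstream:
                child.process(out_key, out_value)

class SinkProcessor(ToyProcessor):
    """Terminal node that just collects records (like a sink topic)."""
    def __init__(self):
        super().__init__(lambda k, v: [], [])
        self.records = []

    def process(self, key, value):
        self.records.append((key, value))

# Tiny topology: source record -> uppercase processor -> sink.
sink = SinkProcessor()
upper = ToyProcessor(lambda k, v: [(k, v.upper())], [sink])
upper.process("user-1", "click")
print(sink.records)  # [('user-1', 'CLICK')]
```

Chaining `ToyProcessor` nodes by their `downstream` lists mirrors the "graph of stream processors connected by streams" wording in the patched core-concepts page.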
diff --git a/11/streams/core-concepts.html b/11/streams/core-concepts.html
index 473a268..bd930de 100644
--- a/11/streams/core-concepts.html
+++ b/11/streams/core-concepts.html
@@ -57,13 +57,14 @@
     <p>
         We first summarize the key concepts of Kafka Streams.
     </p>
+    <a id="streams_processor_node" href="#streams_processor_node"></a>
 
     <h3><a id="streams_topology" href="#streams_topology">Stream Processing Topology</a></h3>
-
+    <a id="streams_processor_node" href="#streams_processor_node"></a>
     <ul>
         <li>A <b>stream</b> is the most important abstraction provided by Kafka Streams: it represents an unbounded, continuously updating data set. A stream is an ordered, replayable, and fault-tolerant sequence of immutable data records, where a <b>data record</b> is defined as a key-value pair.</li>
         <li>A <b>stream processing application</b> is any program that makes use of the Kafka Streams library. It defines its computational logic through one or more <b>processor topologies</b>, where a processor topology is a graph of stream processors (nodes) that are connected by streams (edges).</li>
-        <li>A <b><a id="#streams_processor_node" href="#streams_processor_node">stream processor</a></b> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
+        <li>A <a id="defining-a-stream-processor" href="/{{version}}/documentation/streams/developer-guide/processor-api#defining-a-stream-processor"><b>stream processor</b></a> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
     </ul>
 
     There are two special processors in the topology:
diff --git a/11/streams/developer-guide/processor-api.html b/11/streams/developer-guide/processor-api.html
index fdf6c86..1e1df65 100644
--- a/11/streams/developer-guide/processor-api.html
+++ b/11/streams/developer-guide/processor-api.html
@@ -66,7 +66,7 @@
         </div>
         <div class="section" id="defining-a-stream-processor">
             <span id="streams-developer-guide-stream-processor"></span><h2><a class="toc-backref" href="#id2">Defining a Stream Processor</a><a class="headerlink" href="#defining-a-stream-processor" title="Permalink to this headline"></a></h2>
-            <p>A <a class="reference internal" href="../concepts.html#streams-concepts"><span class="std std-ref">stream processor</span></a> is a node in the processor topology that represents a single processing step.
+            <p>A <a class="reference internal" href="../core-concepts.html#streams_processor_node"><span class="std std-ref">stream processor</span></a> is a node in the processor topology that represents a single processing step.
                 With the Processor API, you can define arbitrary stream processors that processes one received record at a time, and connect
                 these processors with their associated state stores to compose the processor topology.</p>
             <p>You can define a customized stream processor by implementing the <code class="docutils literal"><span class="pre">Processor</span></code> interface, which provides the <code class="docutils literal"><span class="pre">process()</span></code> API method.
diff --git a/20/streams/core-concepts.html b/20/streams/core-concepts.html
index 594efaa..e522f30 100644
--- a/20/streams/core-concepts.html
+++ b/20/streams/core-concepts.html
@@ -57,13 +57,14 @@
     <p>
         We first summarize the key concepts of Kafka Streams.
     </p>
+    <a id="streams_processor_node" href="#streams_processor_node"></a>
 
     <h3><a id="streams_topology" href="#streams_topology">Stream Processing Topology</a></h3>
 
     <ul>
         <li>A <b>stream</b> is the most important abstraction provided by Kafka Streams: it represents an unbounded, continuously updating data set. A stream is an ordered, replayable, and fault-tolerant sequence of immutable data records, where a <b>data record</b> is defined as a key-value pair.</li>
         <li>A <b>stream processing application</b> is any program that makes use of the Kafka Streams library. It defines its computational logic through one or more <b>processor topologies</b>, where a processor topology is a graph of stream processors (nodes) that are connected by streams (edges).</li>
-        <li>A <b><a id="streams_processor_node" href="#streams_processor_node">stream processor</a></b> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
+        <li>A <a id="defining-a-stream-processor" href="/{{version}}/documentation/streams/developer-guide/processor-api#defining-a-stream-processor"><b>stream processor</b></a> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
     </ul>
 
     There are two special processors in the topology:
@@ -160,21 +161,26 @@
 
     <p>
         Any stream processing technology must therefore provide <strong>first-class support for streams and tables</strong>.
-        Kafka's Streams API provides such functionality through its core abstractions for
-        <code class="interpreted-text" data-role="ref">streams &lt;streams_concepts_kstream&gt;</code> and
-        <code class="interpreted-text" data-role="ref">tables &lt;streams_concepts_ktable&gt;</code>, which we will talk about in a minute.
-        Now, an interesting observation is that there is actually a <strong>close relationship between streams and tables</strong>,
-        the so-called stream-table duality.
-        And Kafka exploits this duality in many ways: for example, to make your applications
+        Kafka's Streams API provides such functionality through its core abstractions for 
+        <a id="streams_concepts_kstream" href="/{{version}}/documentation/streams/developer-guide/dsl-api#streams_concepts_kstream">streams</a>
+        <code class="interpreted-text" data-role="ref">&lt;streams_concepts_kstreams&gt;</code>
+        and <a id="streams_concepts_ktable" href="/{{version}}/documentation/streams/developer-guide/dsl-api#streams_concepts_ktable">tables</a>
+        <code class="interpreted-text" data-role="ref">&lt;streams_concepts_kstreams&gt;</code>,
+        which we will talk about in a minute. Now, an interesting observation is that there is actually a <strong>close relationship between streams and tables</strong>,
+        the so-called stream-table duality. And Kafka exploits this duality in many ways: for example, to make your applications
+        <a id="streams-developer-guide-execution-scaling" href="/{{version}}/documentation/streams/developer-guide/running-app#elastic-scaling-of-your-application">elastic</a>
         <code class="interpreted-text" data-role="ref">elastic &lt;streams_developer-guide_execution-scaling&gt;</code>,
-        to support <code class="interpreted-text" data-role="ref">fault-tolerant stateful processing &lt;streams_developer-guide_state-store_fault-tolerance&gt;</code>,
-        or to run <code class="interpreted-text" data-role="ref">interactive queries &lt;streams_concepts_interactive-queries&gt;</code>
+        to support <a id="streams_architecture_recovery" href="/{{version}}/documentation/streams/architecture#streams_architecture_recovery">fault-tolerant stateful processing</a>
+        <code class="interpreted-text" data-role="ref">&lt;streams_developer-guide_state-store_fault-tolerance&gt;</code>,
+        or to run <a id="streams-developer-guide-interactive-queries" href="/{{version}}/documentation/streams/developer-guide/interactive-queries#interactive-queries">interactive queries</a>
+        <code class="interpreted-text" data-role="ref">&lt;streams_concepts_interactive-queries&gt;</code>
         against your application's latest processing results. And, beyond its internal usage, the Kafka Streams API
         also allows developers to exploit this duality in their own applications.
     </p>
 
     <p>
-        Before we discuss concepts such as <code class="interpreted-text" data-role="ref">aggregations &lt;streams_concepts_aggregations&gt;</code>
+        Before we discuss concepts such as <a id="streams-developer-guide-dsl-aggregating" href="/{{version}}/documentation/streams/developer-guide/dsl-api#aggregating">aggregations</a>
+        <code class="interpreted-text" data-role="ref">&lt;streams_concepts_aggregations&gt;</code>,
         in Kafka Streams we must first introduce <strong>tables</strong> in more detail, and talk about the aforementioned stream-table duality.
         Essentially, this duality means that a stream can be viewed as a table, and a table can be viewed as a stream.
     </p>
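The stream-table duality that this patched paragraph links out to can be made concrete with a small sketch: replaying a changelog stream of key-value updates reconstructs a table, and a table can be viewed as a stream of its current entries. Toy Python, not the actual KStream/KTable API:

```python
# Toy illustration of the stream-table duality (not the KStream/KTable
# API): a table is a stream of upserts replayed to completion, and a
# table can be turned back into a stream by emitting its entries.

def stream_to_table(changelog):
    """Replay a stream of (key, value) updates; later records
    overwrite earlier ones with the same key (upsert semantics)."""
    table = {}
    for key, value in changelog:
        table[key] = value
    return table

def table_to_stream(table):
    """View a table as a stream of its current (key, value) entries."""
    return list(table.items())

# A changelog stream of per-user page-view counts.
changelog = [("alice", 1), ("bob", 1), ("alice", 2)]

table = stream_to_table(changelog)
print(table)                   # {'alice': 2, 'bob': 1}
print(table_to_stream(table))  # [('alice', 2), ('bob', 1)]
```

This round trip is what lets Kafka restore a state store from its changelog topic after a failure, which is the fault-tolerance point the paragraph makes.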
diff --git a/21/streams/core-concepts.html b/21/streams/core-concepts.html
index 1e1aeb7..2908022 100644
--- a/21/streams/core-concepts.html
+++ b/21/streams/core-concepts.html
@@ -57,13 +57,13 @@
     <p>
         We first summarize the key concepts of Kafka Streams.
     </p>
-
+    <a id="streams_processor_node" href="#streams_processor_node"></a>
     <h3><a id="streams_topology" href="#streams_topology">Stream Processing Topology</a></h3>
 
     <ul>
         <li>A <b>stream</b> is the most important abstraction provided by Kafka Streams: it represents an unbounded, continuously updating data set. A stream is an ordered, replayable, and fault-tolerant sequence of immutable data records, where a <b>data record</b> is defined as a key-value pair.</li>
         <li>A <b>stream processing application</b> is any program that makes use of the Kafka Streams library. It defines its computational logic through one or more <b>processor topologies</b>, where a processor topology is a graph of stream processors (nodes) that are connected by streams (edges).</li>
-        <li>A <b><a id="streams_processor_node" href="#streams_processor_node">stream processor</a></b> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
+        <li>A <a id="defining-a-stream-processor" href="/{{version}}/documentation/streams/developer-guide/processor-api#defining-a-stream-processor"><b>stream processor</b></a> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
     </ul>
 
     There are two special processors in the topology:
@@ -160,22 +160,21 @@
 
     <p>
         Any stream processing technology must therefore provide <strong>first-class support for streams and tables</strong>.
-        Kafka's Streams API provides such functionality through its core abstractions for
-        <code class="interpreted-text" data-role="ref">streams &lt;streams_concepts_kstream&gt;</code> and
-        <code class="interpreted-text" data-role="ref">tables &lt;streams_concepts_ktable&gt;</code>, which we will talk about in a minute.
-        Now, an interesting observation is that there is actually a <strong>close relationship between streams and tables</strong>,
-        the so-called stream-table duality.
-        And Kafka exploits this duality in many ways: for example, to make your applications
-        <code class="interpreted-text" data-role="ref">elastic &lt;streams_developer-guide_execution-scaling&gt;</code>,
-        to support <code class="interpreted-text" data-role="ref">fault-tolerant stateful processing &lt;streams_developer-guide_state-store_fault-tolerance&gt;</code>,
-        or to run <code class="interpreted-text" data-role="ref">interactive queries &lt;streams_concepts_interactive-queries&gt;</code>
+        Kafka's Streams API provides such functionality through its core abstractions for 
+        <a id="streams_concepts_kstream" href="/{{version}}/documentation/streams/developer-guide/dsl-api#streams_concepts_kstream">streams</a>
+        and <a id="streams_concepts_ktable" href="/{{version}}/documentation/streams/developer-guide/dsl-api#streams_concepts_ktable">tables</a>,
+        which we will talk about in a minute. Now, an interesting observation is that there is actually a <strong>close relationship between streams and tables</strong>,
+        the so-called stream-table duality. And Kafka exploits this duality in many ways: for example, to make your applications
+        <a id="streams-developer-guide-execution-scaling" href="/{{version}}/documentation/streams/developer-guide/running-app#elastic-scaling-of-your-application">elastic</a>,
+        to support <a id="streams_architecture_recovery" href="/{{version}}/documentation/streams/architecture#streams_architecture_recovery">fault-tolerant stateful processing</a>,
+        or to run <a id="streams-developer-guide-interactive-queries" href="/{{version}}/documentation/streams/developer-guide/interactive-queries#interactive-queries">interactive queries</a>
         against your application's latest processing results. And, beyond its internal usage, the Kafka Streams API
         also allows developers to exploit this duality in their own applications.
     </p>
 
     <p>
-        Before we discuss concepts such as <code class="interpreted-text" data-role="ref">aggregations &lt;streams_concepts_aggregations&gt;</code>
-        in Kafka Streams we must first introduce <strong>tables</strong> in more detail, and talk about the aforementioned stream-table duality.
+        Before we discuss concepts such as <a id="streams-developer-guide-dsl-aggregating" href="/{{version}}/documentation/streams/developer-guide/dsl-api#aggregating">aggregations</a>
+        in Kafka Streams, we must first introduce <strong>tables</strong> in more detail, and talk about the aforementioned stream-table duality.
         Essentially, this duality means that a stream can be viewed as a table, and a table can be viewed as a stream.
     </p>
 
diff --git a/22/streams/core-concepts.html b/22/streams/core-concepts.html
index 1e1aeb7..13c2a31 100644
--- a/22/streams/core-concepts.html
+++ b/22/streams/core-concepts.html
@@ -57,13 +57,13 @@
     <p>
         We first summarize the key concepts of Kafka Streams.
     </p>
-
+    <a id="streams_processor_node" href="#streams_processor_node"></a>
     <h3><a id="streams_topology" href="#streams_topology">Stream Processing Topology</a></h3>
-
     <ul>
         <li>A <b>stream</b> is the most important abstraction provided by Kafka Streams: it represents an unbounded, continuously updating data set. A stream is an ordered, replayable, and fault-tolerant sequence of immutable data records, where a <b>data record</b> is defined as a key-value pair.</li>
         <li>A <b>stream processing application</b> is any program that makes use of the Kafka Streams library. It defines its computational logic through one or more <b>processor topologies</b>, where a processor topology is a graph of stream processors (nodes) that are connected by streams (edges).</li>
-        <li>A <b><a id="streams_processor_node" href="#streams_processor_node">stream processor</a></b> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
+        <li>A <a id="defining-a-stream-processor" href="/{{version}}/documentation/streams/developer-guide/processor-api#defining-a-stream-processor"><b>stream processor</b></a> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. 
+        </li>
     </ul>
 
     There are two special processors in the topology:
@@ -160,22 +160,21 @@
 
     <p>
         Any stream processing technology must therefore provide <strong>first-class support for streams and tables</strong>.
-        Kafka's Streams API provides such functionality through its core abstractions for
-        <code class="interpreted-text" data-role="ref">streams &lt;streams_concepts_kstream&gt;</code> and
-        <code class="interpreted-text" data-role="ref">tables &lt;streams_concepts_ktable&gt;</code>, which we will talk about in a minute.
-        Now, an interesting observation is that there is actually a <strong>close relationship between streams and tables</strong>,
-        the so-called stream-table duality.
-        And Kafka exploits this duality in many ways: for example, to make your applications
-        <code class="interpreted-text" data-role="ref">elastic &lt;streams_developer-guide_execution-scaling&gt;</code>,
-        to support <code class="interpreted-text" data-role="ref">fault-tolerant stateful processing &lt;streams_developer-guide_state-store_fault-tolerance&gt;</code>,
-        or to run <code class="interpreted-text" data-role="ref">interactive queries &lt;streams_concepts_interactive-queries&gt;</code>
+        Kafka's Streams API provides such functionality through its core abstractions for 
+        <a id="streams_concepts_kstream" href="/{{version}}/documentation/streams/developer-guide/dsl-api#streams_concepts_kstream">streams</a>
+        and <a id="streams_concepts_ktable" href="/{{version}}/documentation/streams/developer-guide/dsl-api#streams_concepts_ktable">tables</a>,
+        which we will talk about in a minute. Now, an interesting observation is that there is actually a <strong>close relationship between streams and tables</strong>,
+        the so-called stream-table duality. And Kafka exploits this duality in many ways: for example, to make your applications
+        <a id="streams-developer-guide-execution-scaling" href="/{{version}}/documentation/streams/developer-guide/running-app#elastic-scaling-of-your-application">elastic</a>,
+        to support <a id="streams_architecture_recovery" href="/{{version}}/documentation/streams/architecture#streams_architecture_recovery">fault-tolerant stateful processing</a>,
+        or to run <a id="streams-developer-guide-interactive-queries" href="/{{version}}/documentation/streams/developer-guide/interactive-queries#interactive-queries">interactive queries</a>
         against your application's latest processing results. And, beyond its internal usage, the Kafka Streams API
         also allows developers to exploit this duality in their own applications.
     </p>
 
     <p>
-        Before we discuss concepts such as <code class="interpreted-text" data-role="ref">aggregations &lt;streams_concepts_aggregations&gt;</code>
-        in Kafka Streams we must first introduce <strong>tables</strong> in more detail, and talk about the aforementioned stream-table duality.
+        Before we discuss concepts such as <a id="streams-developer-guide-dsl-aggregating" href="/{{version}}/documentation/streams/developer-guide/dsl-api#aggregating">aggregations</a>
+        in Kafka Streams, we must first introduce <strong>tables</strong> in more detail, and talk about the aforementioned stream-table duality.
         Essentially, this duality means that a stream can be viewed as a table, and a table can be viewed as a stream.
     </p>
 
diff --git a/22/streams/developer-guide/running-app.html b/22/streams/developer-guide/running-app.html
index f83210d..14ba34f 100644
--- a/22/streams/developer-guide/running-app.html
+++ b/22/streams/developer-guide/running-app.html
@@ -83,7 +83,8 @@ $ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp
                   more information, see the  <a class="reference internal" href="#streams-developer-guide-execution-scaling-state-restoration"><span class="std std-ref">State restoration during workload rebalance</span></a> section).</p>
           </div>
           <div class="section" id="elastic-scaling-of-your-application">
-              <span id="streams-developer-guide-execution-scaling"></span><h2><a class="toc-backref" href="#id4">Elastic scaling of your application</a><a class="headerlink" href="#elastic-scaling-of-your-application" title="Permalink to this headline"></a></h2>
+              <span id="streams-developer-guide-execution-scaling"></span>
+              <h2><a class="toc-backref" href="#id4">Elastic scaling of your application</a><a class="headerlink" href="#elastic-scaling-of-your-application" title="Permalink to this headline"></a></h2>
               <p>Kafka Streams makes your stream processing applications elastic and scalable.  You can add and remove processing capacity
                   dynamically during application runtime without any downtime or data loss.  This makes your applications
                   resilient in the face of failures and for allows you to perform maintenance as needed (e.g. rolling upgrades).</p>
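The "elastic scaling" section amended above rests on partition-based parallelism: each input partition (and its stream task) is assigned to exactly one running application instance, and starting or stopping instances triggers a reassignment. A hypothetical round-robin helper sketches the effect (the real assignment is done by Kafka's group rebalance protocol and is sticky and state-aware, not plain round-robin):

```python
# Toy sketch of elastic scaling: spread partition ids over the
# currently running application instances. Hypothetical helper --
# Kafka Streams' actual assignment comes from the consumer group
# rebalance protocol and also considers local state.

def assign_partitions(num_partitions, instances):
    """Assign partition ids round-robin over the given instance names."""
    assignment = {instance: [] for instance in instances}
    for partition in range(num_partitions):
        owner = instances[partition % len(instances)]
        assignment[owner].append(partition)
    return assignment

# One instance owns all the work...
print(assign_partitions(4, ["app-1"]))
# {'app-1': [0, 1, 2, 3]}

# ...until a second instance with the same application.id starts and
# half the partitions (and their state) move over, without downtime.
print(assign_partitions(4, ["app-1", "app-2"]))
# {'app-1': [0, 2], 'app-2': [1, 3]}
```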