Posted to commits@kafka.apache.org by gu...@apache.org on 2018/04/15 17:08:48 UTC
[kafka-site] branch asf-site updated: MINOR: fix processor node broken link
This is an automated email from the ASF dual-hosted git repository.
guozhang pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 2912d98 MINOR: fix processor node broken link
2912d98 is described below
commit 2912d9832317a421260d4611f3e360e0e4a9a09b
Author: Guozhang Wang <wa...@gmail.com>
AuthorDate: Sun Apr 15 10:08:28 2018 -0700
MINOR: fix processor node broken link
---
10/streams/core-concepts.html | 2 +-
10/streams/developer-guide/memory-mgmt.html | 4 ++--
11/streams/core-concepts.html | 2 +-
11/streams/developer-guide/memory-mgmt.html | 4 ++--
4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/10/streams/core-concepts.html b/10/streams/core-concepts.html
index d803b3a..f2f32ad 100644
--- a/10/streams/core-concepts.html
+++ b/10/streams/core-concepts.html
@@ -63,7 +63,7 @@
<ul>
<li>A <b>stream</b> is the most important abstraction provided by Kafka Streams: it represents an unbounded, continuously updating data set. A stream is an ordered, replayable, and fault-tolerant sequence of immutable data records, where a <b>data record</b> is defined as a key-value pair.</li>
<li>A <b>stream processing application</b> is any program that makes use of the Kafka Streams library. It defines its computational logic through one or more <b>processor topologies</b>, where a processor topology is a graph of stream processors (nodes) that are connected by streams (edges).</li>
- <li>A <b>stream processor</b> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
+ <li>A <b><a href="#streams_processor_node">stream processor</a></b> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
</ul>
There are two special processors in the topology:
diff --git a/10/streams/developer-guide/memory-mgmt.html b/10/streams/developer-guide/memory-mgmt.html
index b9ee1f3..e3a1033 100644
--- a/10/streams/developer-guide/memory-mgmt.html
+++ b/10/streams/developer-guide/memory-mgmt.html
@@ -55,9 +55,9 @@
<p>For such <code class="docutils literal"><span class="pre">KTable</span></code> instances, the record cache is used for:</p>
<ul class="simple">
<li>Internal caching and compacting of output records before they are written by the underlying stateful
- <a class="reference internal" href="../concepts.html#streams-concepts-processor"><span class="std std-ref">processor node</span></a> to its internal state stores.</li>
+ <a class="reference internal" href="../core-concepts#streams_processor_node"><span class="std std-ref">processor node</span></a> to its internal state stores.</li>
<li>Internal caching and compacting of output records before they are forwarded from the underlying stateful
- <a class="reference internal" href="../concepts.html#streams-concepts-processor"><span class="std std-ref">processor node</span></a> to any of its downstream processor nodes.</li>
+ <a class="reference internal" href="../core-concepts#streams_processor_node"><span class="std std-ref">processor node</span></a> to any of its downstream processor nodes.</li>
</ul>
<p>Use the following example to understand the behaviors with and without record caching. In this example, the input is a
<code class="docutils literal"><span class="pre">KStream<String,</span> <span class="pre">Integer></span></code> with the records <code class="docutils literal"><span class="pre"><K,V>:</span> <span class="pre"><A,</span> <span class="pre">1>,</span> <span class="pre"><D,</span> <span class="pre">5>,</span> <span class="pre"><A,</span> <span class="pre">20>,</span> <span class="pre"><A,</span> <span class="pre">300></span></code>. The focus in [...]
diff --git a/11/streams/core-concepts.html b/11/streams/core-concepts.html
index 2f22be7..0b0f43b 100644
--- a/11/streams/core-concepts.html
+++ b/11/streams/core-concepts.html
@@ -63,7 +63,7 @@
<ul>
<li>A <b>stream</b> is the most important abstraction provided by Kafka Streams: it represents an unbounded, continuously updating data set. A stream is an ordered, replayable, and fault-tolerant sequence of immutable data records, where a <b>data record</b> is defined as a key-value pair.</li>
<li>A <b>stream processing application</b> is any program that makes use of the Kafka Streams library. It defines its computational logic through one or more <b>processor topologies</b>, where a processor topology is a graph of stream processors (nodes) that are connected by streams (edges).</li>
- <li>A <b>stream processor</b> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
+ <li>A <b><a href="#streams_processor_node">stream processor</a></b> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
</ul>
There are two special processors in the topology:
diff --git a/11/streams/developer-guide/memory-mgmt.html b/11/streams/developer-guide/memory-mgmt.html
index 6c6fd2f..a73a814 100644
--- a/11/streams/developer-guide/memory-mgmt.html
+++ b/11/streams/developer-guide/memory-mgmt.html
@@ -55,9 +55,9 @@
<p>For such <code class="docutils literal"><span class="pre">KTable</span></code> instances, the record cache is used for:</p>
<ul class="simple">
<li>Internal caching and compacting of output records before they are written by the underlying stateful
- <a class="reference internal" href="../concepts.html#streams-concepts-processor"><span class="std std-ref">processor node</span></a> to its internal state stores.</li>
+ <a class="reference internal" href="../core-concepts#streams_processor_node"><span class="std std-ref">processor node</span></a> to its internal state stores.</li>
<li>Internal caching and compacting of output records before they are forwarded from the underlying stateful
- <a class="reference internal" href="../concepts.html#streams-concepts-processor"><span class="std std-ref">processor node</span></a> to any of its downstream processor nodes.</li>
+ <a class="reference internal" href="../core-concepts#streams_processor_node"><span class="std std-ref">processor node</span></a> to any of its downstream processor nodes.</li>
</ul>
<p>Use the following example to understand the behaviors with and without record caching. In this example, the input is a
<code class="docutils literal"><span class="pre">KStream<String,</span> <span class="pre">Integer></span></code> with the records <code class="docutils literal"><span class="pre"><K,V>:</span> <span class="pre"><A,</span> <span class="pre">1>,</span> <span class="pre"><D,</span> <span class="pre">5>,</span> <span class="pre"><A,</span> <span class="pre">20>,</span> <span class="pre"><A,</span> <span class="pre">300></span></code>. The focus in [...]
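The memory-mgmt pages touched by this patch describe how the record cache compacts output records per key before they are written or forwarded by a stateful processor node. As a rough illustration of that compaction (not Kafka's actual implementation), the sketch below simulates a cache that keeps only the newest value per key for the example input <A, 1>, <D, 5>, <A, 20>, <A, 300>; the `process` helper is hypothetical.

```python
# Illustrative sketch of record-cache compaction as described in
# memory-mgmt.html. NOT Kafka's implementation: it only models a cache
# that retains the newest value per key and forwards compacted records
# downstream on flush.

def process(records, cache_enabled):
    """Return the records forwarded downstream for the given input."""
    if not cache_enabled:
        # Without caching, every input record is forwarded as-is.
        return list(records)
    cache = {}  # key -> latest value; later records overwrite earlier ones
    for key, value in records:
        cache[key] = value
    # On flush, only one (compacted) record per key is forwarded.
    return list(cache.items())

if __name__ == "__main__":
    stream = [("A", 1), ("D", 5), ("A", 20), ("A", 300)]
    print(process(stream, cache_enabled=False))  # all four records forwarded
    print(process(stream, cache_enabled=True))   # one compacted record per key
```

With caching enabled, the three records for key A collapse into a single downstream record carrying the latest value, which is the reduction in downstream traffic the documentation attributes to the record cache.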
--
To stop receiving notification emails like this one, please contact
guozhang@apache.org.