Posted to jira@kafka.apache.org by GitBox <gi...@apache.org> on 2021/05/07 16:36:21 UTC

[GitHub] [kafka] jlprat opened a new pull request #10651: MINOR: Kafka Streams code samples formating unification

jlprat opened a new pull request #10651:
URL: https://github.com/apache/kafka/pull/10651


   Code samples are now correctly formatted.
   Samples under Streams now consistently use the Prism library for display.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [kafka] cadonna commented on a change in pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
cadonna commented on a change in pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#discussion_r634203662



##########
File path: docs/streams/developer-guide/app-reset-tool.html
##########
@@ -78,18 +78,17 @@
             <h2>Step 1: Run the application reset tool<a class="headerlink" href="#step-1-run-the-application-reset-tool" title="Permalink to this headline"></a></h2>
             <p>Invoke the application reset tool from the command line</p>
             <p>Warning! This tool makes irreversible changes to your application. It is strongly recommended that you run this once with <code class="docutils literal"><span class="pre">--dry-run</span></code> to preview your changes before making them.</p>
-            <div class="highlight-bash"><div class="highlight"><pre><span></span><code>&lt;path-to-kafka&gt;/bin/kafka-streams-application-reset</code></pre></div>
-            </div>
+            <pre class="line-numbers"><code class="language-bash">&lt;path-to-kafka&gt;/bin/kafka-streams-application-reset</code></pre>
             <p>The tool accepts the following parameters:</p>
-            <div class="highlight-bash"><div class="highlight"><pre><span>Option</span><code> <span class="o">(</span>* <span class="o">=</span> required<span class="o">)</span>                 Description
+            <pre class="line-numbers"><code class="language-bash">Option (* = required)                 Description

Review comment:
       Class `language-bash` renders the text with strange colors, e.g., it highlights "file" as a keyword, which is not a keyword here. Since this is actually not bash script code, I think we should leave it as plain monospaced text.
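
       For illustration only, a minimal sketch of the plain monospaced alternative, assuming the `language-text` class that testing.html already uses (the full parameter listing is omitted here):

       <pre class="line-numbers"><code class="language-text">Option (* = required)                 Description</code></pre>

       With `language-text` there is no grammar to highlight, so the text stays plain while the shared `line-numbers` styling is kept.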

##########
File path: docs/streams/developer-guide/dsl-api.html
##########
@@ -2542,21 +2501,22 @@ <h5><a class="toc-backref" href="#id34">KTable-KTable Foreign-Key
                               <p class="first">Performs a foreign-key LEFT JOIN of this
                                 table with another table. <a class="reference external"
                 href="/%7B%7Bversion%7D%7D/javadoc/org/apache/kafka/streams/kstream/KTable.html#leftJoin-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
-                              <div class="highlight-java">
-                                <div class="highlight">
-                                  <pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> Long<span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
-                <span class="n">KTable</span><span class="o">&lt;Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;<br>//This </span><span class="o"><span class="o"><span class="n">foreignKeyExtractor</span></span> simply uses the left-value to map to the right-key.<br></span><span class="o"><span class="n">Function</span><span class="o">&lt;Long</span><span class="o">,</span> Long<span class="n"></span><span class="o">&gt;</span> <span class="n">foreignKeyExtractor</span> <span class="o">=</span> <span class="o">(x) -&gt; x;</span><br><br></span><span class="c1">// Java 8+ example, using lambda expressions</span>
-                <span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span><br>    <span class="o"><span class="n">foreignKeyExtractor,</span></span>
-                    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">"left="</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">", right="</span> <span class="o">+</span> <span class="n">rightValue</span> <span class="cm">/* ValueJoiner */</span>
-                  <span class="o">);</span></code></pre>
-                                </div>
-                              </div>
+                                <pre class="line-numbers"><code class="language-java">KTable&lt;String, Long&gt; left = ...;
+                KTable&lt;Long, Double&gt; right = ...;
+//This foreignKeyExtractor simply uses the left-value to map to the right-key.
+Function&lt;Long, Long&gt; foreignKeyExtractor = (x) -&gt; x;
+
+// Java 8+ example, using lambda expressions
+                KTable&lt;String, String&gt; joined = left.join(right,
+    foreignKeyExtractor,

Review comment:
       Could you please put `foreignKeyExtractor` on the previous line or indent it so that it is at the same column as `right`?
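
       For illustration, a sketch of the snippet with `foreignKeyExtractor` moved onto the previous line, i.e. the suggestion applied to the code above (not the committed change):

       <pre class="line-numbers"><code class="language-java">KTable&lt;String, Long&gt; left = ...;
KTable&lt;Long, Double&gt; right = ...;
// This foreignKeyExtractor simply uses the left-value to map to the right-key.
Function&lt;Long, Long&gt; foreignKeyExtractor = (x) -&gt; x;

// Java 8+ example, using lambda expressions
KTable&lt;String, String&gt; joined = left.join(right, foreignKeyExtractor,
    (leftValue, rightValue) -&gt; "left=" + leftValue + ", right=" + rightValue /* ValueJoiner */
  );</code></pre>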

##########
File path: docs/streams/developer-guide/dsl-api.html
##########
@@ -3207,15 +3161,14 @@ <h5><a class="toc-backref" href="#id34">KTable-KTable Foreign-Key
                                 terminology in academic literature, where the semantics of sliding windows are different to those of hopping windows.</p>
                         </div>
                         <p>The following code defines a hopping window with a size of 5 minutes and an advance interval of 1 minute:</p>
-                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.time.Duration</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.kstream.TimeWindows</span><span class="o">;</span>
-
-<span class="c1">// A hopping time window with a size of 5 minutes and an advance interval of 1 minute.</span>
-<span class="c1">// The window&#39;s name -- the string parameter -- is used to e.g. name the backing state store.</span>
-<span class="kt">Duration</span> <span class="n">windowSizeMs</span> <span class="o">=</span> <span class="n">Duration</span><span class="o">.</span><span class="na">ofMinutes</span><span class="o">(</span><span class="mi">5</span><span class="o">);</span>
-<span class="kt">Duration</span> <span class="n">advanceMs</span> <span class="o">=</span>    <span class="n">Duration</span><span class="o">.</span><span class="na">ofMinutes</span><span class="o">(</span><span class="mi">1</span><span class="o">);</span>
-<span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">).</span><span class="na">advanceBy</span><span class="o">(</span><span class="n">advanceMs</span><span class="o">);</span></code></pre></div>
-                        </div>
+                        <pre class="line-numbers"><code class="language-java">import java.time.Duration;
+import org.apache.kafka.streams.kstream.TimeWindows;
+
+// A hopping time window with a size of 5 minutes and an advance interval of 1 minute.
+// The window&#39;s name -- the string parameter -- is used to e.g. name the backing state store.
+Duration windowSizeMs = Duration.ofMinutes(5);
+Duration advanceMs =    Duration.ofMinutes(1);

Review comment:
       Could you fix the indentation here to `Duration advanceMs = Duration.ofMinutes(1);`?

##########
File path: docs/streams/developer-guide/dsl-api.html
##########
@@ -2482,21 +2440,22 @@ <h5><a class="toc-backref" href="#id34">KTable-KTable Foreign-Key
                                 KTable that represents the &quot;current&quot; result of the join.
                                 <a class="reference external"
                 href="/%7B%7Bversion%7D%7D/javadoc/org/apache/kafka/streams/kstream/KTable.html#join-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
-                              <div class="highlight-java">
-                                <div class="highlight">
-                                  <pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> Long<span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
-                <span class="n">KTable</span><span class="o">&lt;Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;<br>//This </span><span class="o"><span class="o"><span class="n">foreignKeyExtractor</span></span> simply uses the left-value to map to the right-key.<br></span><span class="o"><span class="n">Function</span><span class="o">&lt;Long</span><span class="o">,</span> Long<span class="n"></span><span class="o">&gt;</span> <span class="n">foreignKeyExtractor</span> <span class="o">=</span> <span class="o">(x) -&gt; x;</span><br><br></span><span class="c1">// Java 8+ example, using lambda expressions</span>
-                <span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span><br>    <span class="o"><span class="n">foreignKeyExtractor,</span></span>
-                    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">"left="</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">", right="</span> <span class="o">+</span> <span class="n">rightValue</span> <span class="cm">/* ValueJoiner */</span>
-                  <span class="o">);</span></code></pre>
-                                </div>
-                              </div>
+                                <pre class="line-numbers"><code class="language-java">KTable&lt;String, Long&gt; left = ...;
+                KTable&lt;Long, Double&gt; right = ...;
+//This foreignKeyExtractor simply uses the left-value to map to the right-key.
+Function&lt;Long, Long&gt; foreignKeyExtractor = (x) -&gt; x;
+
+// Java 8+ example, using lambda expressions
+                KTable&lt;String, String&gt; joined = left.join(right,
+    foreignKeyExtractor,

Review comment:
       Could you please put `foreignKeyExtractor` on the previous line or indent it so that it is at the same column as `right`?







[GitHub] [kafka] jlprat commented on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat commented on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-838123810


   Most of the changes are purely cosmetic:
   - using the right tags for embedding a code snippet
   - escaping `<` and `>` characters to HTML-encoded entities so they are properly rendered (see the sketch below)
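
   As a small illustration of the escaping point, using a generic type taken from the dsl-api.html samples in this PR:

   <pre class="line-numbers"><code class="language-java">KTable&lt;String, Long&gt; left = ...;</code></pre>

   The browser renders the `&lt;`/`&gt;` entities as `<` and `>`, so readers see `KTable<String, Long> left = ...;`. Left unescaped, `<String, Long>` would be parsed as an HTML tag and dropped from the rendered page.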





[GitHub] [kafka] cadonna commented on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
cadonna commented on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-838119727


   Do not worry about the test failures. First of all, your PR does not touch any code, so there is no reason a test would fail due to your PR. Additionally, the failing tests are all known to be flaky. You can find known flaky tests in the JIRA by searching for the test name. 





[GitHub] [kafka] jlprat commented on a change in pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat commented on a change in pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#discussion_r636993622



##########
File path: docs/streams/developer-guide/testing.html
##########
@@ -275,21 +275,21 @@ <h2>
             <b>Construction</b>
             <p>
                 To begin with, instantiate your processor and initialize it with the mock context:
-            <pre class="line-numbers"><code class="language-text">final Processor processorUnderTest = ...;
+            <pre class="line-numbers"><code class="language-java">final Processor processorUnderTest = ...;
 final MockProcessorContext context = new MockProcessorContext();
 processorUnderTest.init(context);</code></pre>
             If you need to pass configuration to your processor or set the default serdes, you can create the mock with
             config:
-            <pre class="line-numbers"><code class="language-text">final Properties props = new Properties();
+            <pre class="line-numbers"><code class="language-java">final Properties props = new Properties();
 props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
 props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());
-props.put("some.other.config", "some config value");
+props.put(&quot;some.other.config&quot;, &quot;some config value&quot;);
 final MockProcessorContext context = new MockProcessorContext(props);</code></pre>
             </p>
             <b>Captured data</b>
             <p>
                 The mock will capture any values that your processor forwards. You can make assertions on them:
-            <pre class="line-numbers"><code class="language-text">processorUnderTest.process("key", "value");
+            <pre class="line-numbers"><code class="language-java">processorUnderTest.process("key", "value");

Review comment:
       Done!







[GitHub] [kafka] cadonna edited a comment on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
cadonna edited a comment on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-838118603


   Thanks for the PR!
   
   For next time, I would recommend opening smaller PRs. Almost 2000 additions is a lot, even for docs. In my experience, smaller PRs tend to be merged faster.





[GitHub] [kafka] jlprat commented on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat commented on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-845796365


   Thanks @cadonna, that one slipped through. I modified the code and brought back the footnotes with the hyperlink.





[GitHub] [kafka] jlprat commented on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat commented on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-846038234


   Thanks for the review, I know it was long and tricky!





[GitHub] [kafka] jlprat commented on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat commented on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-843207783


   Thanks @cadonna, all good points. I'll address the feedback later.





[GitHub] [kafka] cadonna merged pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
cadonna merged pull request #10651:
URL: https://github.com/apache/kafka/pull/10651


   





[GitHub] [kafka] cadonna commented on a change in pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
cadonna commented on a change in pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#discussion_r636916251



##########
File path: docs/streams/developer-guide/testing.html
##########
@@ -275,21 +275,21 @@ <h2>
             <b>Construction</b>
             <p>
                 To begin with, instantiate your processor and initialize it with the mock context:
-            <pre class="line-numbers"><code class="language-text">final Processor processorUnderTest = ...;
+            <pre class="line-numbers"><code class="language-java">final Processor processorUnderTest = ...;
 final MockProcessorContext context = new MockProcessorContext();
 processorUnderTest.init(context);</code></pre>
             If you need to pass configuration to your processor or set the default serdes, you can create the mock with
             config:
-            <pre class="line-numbers"><code class="language-text">final Properties props = new Properties();
+            <pre class="line-numbers"><code class="language-java">final Properties props = new Properties();
 props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
 props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());
-props.put("some.other.config", "some config value");
+props.put(&quot;some.other.config&quot;, &quot;some config value&quot;);
 final MockProcessorContext context = new MockProcessorContext(props);</code></pre>
             </p>
             <b>Captured data</b>
             <p>
                 The mock will capture any values that your processor forwards. You can make assertions on them:
-            <pre class="line-numbers"><code class="language-text">processorUnderTest.process("key", "value");
+            <pre class="line-numbers"><code class="language-java">processorUnderTest.process("key", "value");

Review comment:
       On lines 304 and 306 there are two code snippets that slipped through.







[GitHub] [kafka] jlprat commented on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat commented on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-834676212


   The test failures in this PR should be unrelated to the changes introduced (docs). Is there anything I should do to move the PR forward?





[GitHub] [kafka] cadonna commented on a change in pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
cadonna commented on a change in pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#discussion_r636733013



##########
File path: docs/streams/developer-guide/memory-mgmt.html
##########
@@ -171,44 +158,42 @@ <h2><a class="toc-backref" href="#id3">RocksDB</a><a class="headerlink" href="#r
         <code class="docutils literal"><span class="pre">rocksdb.config.setter</span></code> configuration.</p>
       <p>Also, we recommend changing RocksDB's default memory allocator, because the default allocator may lead to increased memory consumption.
         To change the memory allocator to <code>jemalloc</code>, you need to set the environment variable <code>LD_PRELOAD</code>before you start your Kafka Streams application:</p>
-      <pre>
-# example: install jemalloc (on Debian)
+      <pre class="line-numbers"><code class="language-bash"># example: install jemalloc (on Debian)
 $ apt install -y libjemalloc-dev
 # set LD_PRELOAD before you start your Kafka Streams application
 $ export LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libjemalloc.so”
-      </pre>
+      </code></pre>
       <p> As of 2.3.0 the memory usage across all instances can be bounded, limiting the total off-heap memory of your Kafka Streams application. To do so you must configure RocksDB to cache the index and filter blocks in the block cache, limit the memtable memory through a shared <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager">WriteBufferManager</a> and count its memory against the block cache, and then pass the same Cache object to each instance. See <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB">RocksDB Memory Usage</a> for details. An example RocksDBConfigSetter implementing this is shown below:</p>
+      <pre class="line-numbers"><code class="language-java">public static class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {
 
-      <div class="highlight-java"><div class="highlight"><pre><span></span>    <span class="kd">public</span> <span class="kd">static</span> <span class="kd">class</span> <span class="nc">BoundedMemoryRocksDBConfig</span> <span class="kd">implements</span> <span class="n">RocksDBConfigSetter</span> <span class="o">{</span>
-
-       <span class="kd">private</span> <span class="kt">static</span> <span class="n">org.rocksdb.Cache</span> <span class="n">cache</span> <span class="o">=</span> <span class="k">new</span> <span class="n">org</span><span class="o">.</span><span class="na">rocksdb</span><span class="o">.</span><span class="na">LRUCache</span><span class="o">(</span><span class="mi">TOTAL_OFF_HEAP_MEMORY</span><span class="o">,</span> <span class="n">-1</span><span class="o">,</span> <span class="n">false</span><span class="o">,</span> <span class="n">INDEX_FILTER_BLOCK_RATIO</span><span class="o">);</span><sup><a href="#fn1" id="ref1">1</a></sup>
-       <span class="kd">private</span> <span class="kt">static</span> <span class="n">org.rocksdb.WriteBufferManager</span> <span class="n">writeBufferManager</span> <span class="o">=</span> <span class="k">new</span> <span class="n">org</span><span class="o">.</span><span class="na">rocksdb</span><span class="o">.</span><span class="na">WriteBufferManager</span><span class="o">(</span><span class="mi">TOTAL_MEMTABLE_MEMORY</span><span class="o">,</span> cache<span class="o">);</span>
+   private static org.rocksdb.Cache cache = new org.rocksdb.LRUCache(TOTAL_OFF_HEAP_MEMORY, -1, false, INDEX_FILTER_BLOCK_RATIO);1

Review comment:
       The `1` at the end should be a superscript reference to a footnote. Is there a way to get a superscript here again? If not, what would be the alternatives?
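
       One hedged option, sketched from the original markup above, is to keep the `<sup>` anchor inside the highlighted block. This assumes the highlighter leaves embedded markup intact (for Prism that would require something like the keep-markup plugin, which is not verified in this PR):

       <pre class="line-numbers"><code class="language-java">private static org.rocksdb.Cache cache = new org.rocksdb.LRUCache(TOTAL_OFF_HEAP_MEMORY, -1, false, INDEX_FILTER_BLOCK_RATIO);<sup><a href="#fn1" id="ref1">1</a></sup></code></pre>

       The alternative would be to place the footnote references just outside the `<code>` element, next to the block.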

##########
File path: docs/streams/developer-guide/memory-mgmt.html
##########
@@ -171,44 +158,42 @@ <h2><a class="toc-backref" href="#id3">RocksDB</a><a class="headerlink" href="#r
         <code class="docutils literal"><span class="pre">rocksdb.config.setter</span></code> configuration.</p>
       <p>Also, we recommend changing RocksDB's default memory allocator, because the default allocator may lead to increased memory consumption.
         To change the memory allocator to <code>jemalloc</code>, you need to set the environment variable <code>LD_PRELOAD</code>before you start your Kafka Streams application:</p>
-      <pre>
-# example: install jemalloc (on Debian)
+      <pre class="line-numbers"><code class="language-bash"># example: install jemalloc (on Debian)
 $ apt install -y libjemalloc-dev
 # set LD_PRELOAD before you start your Kafka Streams application
 $ export LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libjemalloc.so”
-      </pre>
+      </code></pre>
       <p> As of 2.3.0 the memory usage across all instances can be bounded, limiting the total off-heap memory of your Kafka Streams application. To do so you must configure RocksDB to cache the index and filter blocks in the block cache, limit the memtable memory through a shared <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager">WriteBufferManager</a> and count its memory against the block cache, and then pass the same Cache object to each instance. See <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB">RocksDB Memory Usage</a> for details. An example RocksDBConfigSetter implementing this is shown below:</p>
+      <pre class="line-numbers"><code class="language-java">public static class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {
 
-      <div class="highlight-java"><div class="highlight"><pre><span></span>    <span class="kd">public</span> <span class="kd">static</span> <span class="kd">class</span> <span class="nc">BoundedMemoryRocksDBConfig</span> <span class="kd">implements</span> <span class="n">RocksDBConfigSetter</span> <span class="o">{</span>
-
-       <span class="kd">private</span> <span class="kt">static</span> <span class="n">org.rocksdb.Cache</span> <span class="n">cache</span> <span class="o">=</span> <span class="k">new</span> <span class="n">org</span><span class="o">.</span><span class="na">rocksdb</span><span class="o">.</span><span class="na">LRUCache</span><span class="o">(</span><span class="mi">TOTAL_OFF_HEAP_MEMORY</span><span class="o">,</span> <span class="n">-1</span><span class="o">,</span> <span class="n">false</span><span class="o">,</span> <span class="n">INDEX_FILTER_BLOCK_RATIO</span><span class="o">);</span><sup><a href="#fn1" id="ref1">1</a></sup>
-       <span class="kd">private</span> <span class="kt">static</span> <span class="n">org.rocksdb.WriteBufferManager</span> <span class="n">writeBufferManager</span> <span class="o">=</span> <span class="k">new</span> <span class="n">org</span><span class="o">.</span><span class="na">rocksdb</span><span class="o">.</span><span class="na">WriteBufferManager</span><span class="o">(</span><span class="mi">TOTAL_MEMTABLE_MEMORY</span><span class="o">,</span> cache<span class="o">);</span>
+   private static org.rocksdb.Cache cache = new org.rocksdb.LRUCache(TOTAL_OFF_HEAP_MEMORY, -1, false, INDEX_FILTER_BLOCK_RATIO);1
+   private static org.rocksdb.WriteBufferManager writeBufferManager = new org.rocksdb.WriteBufferManager(TOTAL_MEMTABLE_MEMORY, cache);
 
-       <span class="nd">@Override</span>
-       <span class="kd">public</span> <span class="kt">void</span> <span class="nf">setConfig</span><span class="o">(</span><span class="kd">final</span> <span class="n">String</span> <span class="n">storeName</span><span class="o">,</span> <span class="kd">final</span> <span class="n">Options</span> <span class="n">options</span><span class="o">,</span> <span class="kd">final</span> <span class="n">Map</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Object</span><span class="o">&gt;</span> <span class="n">configs</span><span class="o">)</span> <span class="o">{</span>
+   @Override
+   public void setConfig(final String storeName, final Options options, final Map&lt;String, Object&gt; configs) {
 
-         <span class="n">BlockBasedTableConfig</span> <span class="n">tableConfig</span> <span class="o">=</span> <span class="k">(BlockBasedTableConfig)</span> <span class="n">options</span><span><span class="o">.</span><span class="na">tableFormatConfig</span><span class="o">();</span>
+     BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
 
-         <span class="c1"> // These three options in combination will limit the memory used by RocksDB to the size passed to the block cache (TOTAL_OFF_HEAP_MEMORY)</span>
-         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setBlockCache</span><span class="o">(</span><span class="mi">cache</span><span class="o">);</span>
-         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setCacheIndexAndFilterBlocks</span><span class="o">(</span><span class="kc">true</span><span class="o">);</span>
-         <span class="n">options</span><span class="o">.</span><span class="na">setWriteBufferManager</span><span class="o">(</span><span class="mi">writeBufferManager</span><span class="o">);</span>
+      // These three options in combination will limit the memory used by RocksDB to the size passed to the block cache (TOTAL_OFF_HEAP_MEMORY)
+     tableConfig.setBlockCache(cache);
+     tableConfig.setCacheIndexAndFilterBlocks(true);
+     options.setWriteBufferManager(writeBufferManager);
 
-         <span class="c1"> // These options are recommended to be set when bounding the total memory</span>
-         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setCacheIndexAndFilterBlocksWithHighPriority</span><span class="o">(</span><span class="mi">true</span><span class="o">);</span><sup><a href="#fn2" id="ref2">2</a></sup>
-         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setPinTopLevelIndexAndFilter</span><span class="o">(</span><span class="mi">true</span><span class="o">);</span>
-         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setBlockSize</span><span class="o">(</span><span class="mi">BLOCK_SIZE</span><span class="o">);</span><sup><a href="#fn3" id="ref3">3</a></sup>
-         <span class="n">options</span><span class="o">.</span><span class="na">setMaxWriteBufferNumber</span><span class="o">(</span><span class="mi">N_MEMTABLES</span><span class="o">);</span>
-         <span class="n">options</span><span class="o">.</span><span class="na">setWriteBufferSize</span><span class="o">(</span><span class="mi">MEMTABLE_SIZE</span><span class="o">);</span>
+      // These options are recommended to be set when bounding the total memory
+     tableConfig.setCacheIndexAndFilterBlocksWithHighPriority(true);2

Review comment:
       See my comment above for the `2` at the end.

##########
File path: docs/streams/developer-guide/memory-mgmt.html
##########
@@ -171,44 +158,42 @@ <h2><a class="toc-backref" href="#id3">RocksDB</a><a class="headerlink" href="#r
         <code class="docutils literal"><span class="pre">rocksdb.config.setter</span></code> configuration.</p>
       <p>Also, we recommend changing RocksDB's default memory allocator, because the default allocator may lead to increased memory consumption.
         To change the memory allocator to <code>jemalloc</code>, you need to set the environment variable <code>LD_PRELOAD</code>before you start your Kafka Streams application:</p>
-      <pre>
-# example: install jemalloc (on Debian)
+      <pre class="line-numbers"><code class="language-bash"># example: install jemalloc (on Debian)
 $ apt install -y libjemalloc-dev
 # set LD_PRELOAD before you start your Kafka Streams application
 $ export LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libjemalloc.so”
-      </pre>
+      </code></pre>
       <p> As of 2.3.0 the memory usage across all instances can be bounded, limiting the total off-heap memory of your Kafka Streams application. To do so you must configure RocksDB to cache the index and filter blocks in the block cache, limit the memtable memory through a shared <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager">WriteBufferManager</a> and count its memory against the block cache, and then pass the same Cache object to each instance. See <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB">RocksDB Memory Usage</a> for details. An example RocksDBConfigSetter implementing this is shown below:</p>
+      <pre class="line-numbers"><code class="language-java">public static class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {
 
-      <div class="highlight-java"><div class="highlight"><pre><span></span>    <span class="kd">public</span> <span class="kd">static</span> <span class="kd">class</span> <span class="nc">BoundedMemoryRocksDBConfig</span> <span class="kd">implements</span> <span class="n">RocksDBConfigSetter</span> <span class="o">{</span>
-
-       <span class="kd">private</span> <span class="kt">static</span> <span class="n">org.rocksdb.Cache</span> <span class="n">cache</span> <span class="o">=</span> <span class="k">new</span> <span class="n">org</span><span class="o">.</span><span class="na">rocksdb</span><span class="o">.</span><span class="na">LRUCache</span><span class="o">(</span><span class="mi">TOTAL_OFF_HEAP_MEMORY</span><span class="o">,</span> <span class="n">-1</span><span class="o">,</span> <span class="n">false</span><span class="o">,</span> <span class="n">INDEX_FILTER_BLOCK_RATIO</span><span class="o">);</span><sup><a href="#fn1" id="ref1">1</a></sup>
-       <span class="kd">private</span> <span class="kt">static</span> <span class="n">org.rocksdb.WriteBufferManager</span> <span class="n">writeBufferManager</span> <span class="o">=</span> <span class="k">new</span> <span class="n">org</span><span class="o">.</span><span class="na">rocksdb</span><span class="o">.</span><span class="na">WriteBufferManager</span><span class="o">(</span><span class="mi">TOTAL_MEMTABLE_MEMORY</span><span class="o">,</span> cache<span class="o">);</span>
+   private static org.rocksdb.Cache cache = new org.rocksdb.LRUCache(TOTAL_OFF_HEAP_MEMORY, -1, false, INDEX_FILTER_BLOCK_RATIO);1
+   private static org.rocksdb.WriteBufferManager writeBufferManager = new org.rocksdb.WriteBufferManager(TOTAL_MEMTABLE_MEMORY, cache);
 
-       <span class="nd">@Override</span>
-       <span class="kd">public</span> <span class="kt">void</span> <span class="nf">setConfig</span><span class="o">(</span><span class="kd">final</span> <span class="n">String</span> <span class="n">storeName</span><span class="o">,</span> <span class="kd">final</span> <span class="n">Options</span> <span class="n">options</span><span class="o">,</span> <span class="kd">final</span> <span class="n">Map</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Object</span><span class="o">&gt;</span> <span class="n">configs</span><span class="o">)</span> <span class="o">{</span>
+   @Override
+   public void setConfig(final String storeName, final Options options, final Map&lt;String, Object&gt; configs) {
 
-         <span class="n">BlockBasedTableConfig</span> <span class="n">tableConfig</span> <span class="o">=</span> <span class="k">(BlockBasedTableConfig)</span> <span class="n">options</span><span><span class="o">.</span><span class="na">tableFormatConfig</span><span class="o">();</span>
+     BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
 
-         <span class="c1"> // These three options in combination will limit the memory used by RocksDB to the size passed to the block cache (TOTAL_OFF_HEAP_MEMORY)</span>
-         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setBlockCache</span><span class="o">(</span><span class="mi">cache</span><span class="o">);</span>
-         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setCacheIndexAndFilterBlocks</span><span class="o">(</span><span class="kc">true</span><span class="o">);</span>
-         <span class="n">options</span><span class="o">.</span><span class="na">setWriteBufferManager</span><span class="o">(</span><span class="mi">writeBufferManager</span><span class="o">);</span>
+      // These three options in combination will limit the memory used by RocksDB to the size passed to the block cache (TOTAL_OFF_HEAP_MEMORY)
+     tableConfig.setBlockCache(cache);
+     tableConfig.setCacheIndexAndFilterBlocks(true);
+     options.setWriteBufferManager(writeBufferManager);
 
-         <span class="c1"> // These options are recommended to be set when bounding the total memory</span>
-         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setCacheIndexAndFilterBlocksWithHighPriority</span><span class="o">(</span><span class="mi">true</span><span class="o">);</span><sup><a href="#fn2" id="ref2">2</a></sup>
-         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setPinTopLevelIndexAndFilter</span><span class="o">(</span><span class="mi">true</span><span class="o">);</span>
-         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setBlockSize</span><span class="o">(</span><span class="mi">BLOCK_SIZE</span><span class="o">);</span><sup><a href="#fn3" id="ref3">3</a></sup>
-         <span class="n">options</span><span class="o">.</span><span class="na">setMaxWriteBufferNumber</span><span class="o">(</span><span class="mi">N_MEMTABLES</span><span class="o">);</span>
-         <span class="n">options</span><span class="o">.</span><span class="na">setWriteBufferSize</span><span class="o">(</span><span class="mi">MEMTABLE_SIZE</span><span class="o">);</span>
+      // These options are recommended to be set when bounding the total memory
+     tableConfig.setCacheIndexAndFilterBlocksWithHighPriority(true);2
+     tableConfig.setPinTopLevelIndexAndFilter(true);
+     tableConfig.setBlockSize(BLOCK_SIZE);3

Review comment:
       See my comment above for the `3` at the end.







[GitHub] [kafka] jlprat commented on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat commented on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-845954556


   Hi @cadonna, I'll fix that in a couple of hours. Thanks for your thorough review!





[GitHub] [kafka] jlprat commented on a change in pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat commented on a change in pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#discussion_r634513734



##########
File path: docs/streams/developer-guide/app-reset-tool.html
##########
@@ -78,18 +78,17 @@
             <h2>Step 1: Run the application reset tool<a class="headerlink" href="#step-1-run-the-application-reset-tool" title="Permalink to this headline"></a></h2>
             <p>Invoke the application reset tool from the command line</p>
             <p>Warning! This tool makes irreversible changes to your application. It is strongly recommended that you run this once with <code class="docutils literal"><span class="pre">--dry-run</span></code> to preview your changes before making them.</p>
-            <div class="highlight-bash"><div class="highlight"><pre><span></span><code>&lt;path-to-kafka&gt;/bin/kafka-streams-application-reset</code></pre></div>
-            </div>
+            <pre class="line-numbers"><code class="language-bash">&lt;path-to-kafka&gt;/bin/kafka-streams-application-reset</code></pre>
             <p>The tool accepts the following parameters:</p>
-            <div class="highlight-bash"><div class="highlight"><pre><span>Option</span><code> <span class="o">(</span>* <span class="o">=</span> required<span class="o">)</span>                 Description
+            <pre class="line-numbers"><code class="language-bash">Option (* = required)                 Description

Review comment:
       Fixed, and also aligned the spaces so it fits the two-column printing.

##########
File path: docs/streams/developer-guide/dsl-api.html
##########
@@ -2482,21 +2440,22 @@ <h5><a class="toc-backref" href="#id34">KTable-KTable Foreign-Key
                                 KTable that represents the &quot;current&quot; result of the join.
                                 <a class="reference external"
                 href="/%7B%7Bversion%7D%7D/javadoc/org/apache/kafka/streams/kstream/KTable.html#join-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
-                              <div class="highlight-java">
-                                <div class="highlight">
-                                  <pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> Long<span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
-                <span class="n">KTable</span><span class="o">&lt;Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;<br>//This </span><span class="o"><span class="o"><span class="n">foreignKeyExtractor</span></span> simply uses the left-value to map to the right-key.<br></span><span class="o"><span class="n">Function</span><span class="o">&lt;Long</span><span class="o">,</span> Long<span class="n"></span><span class="o">&gt;</span> <span class="n">foreignKeyExtractor</span> <span class="o">=</span> <span class="o">(x) -&gt; x;</span><br><br></span><span class="c1">// Java 8+ example, using lambda expressions</span>
-                <span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span><br>    <span class="o"><span class="n">foreignKeyExtractor,</span></span>
-                    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">"left="</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">", right="</span> <span class="o">+</span> <span class="n">rightValue</span> <span class="cm">/* ValueJoiner */</span>
-                  <span class="o">);</span></code></pre>
-                                </div>
-                              </div>
+                                <pre class="line-numbers"><code class="language-java">KTable&lt;String, Long&gt; left = ...;
+                KTable&lt;Long, Double&gt; right = ...;
+//This foreignKeyExtractor simply uses the left-value to map to the right-key.
+Function&lt;Long, Long&gt; foreignKeyExtractor = (x) -&gt; x;
+
+// Java 8+ example, using lambda expressions
+                KTable&lt;String, String&gt; joined = left.join(right,
+    foreignKeyExtractor,

Review comment:
       Fixed

##########
File path: docs/streams/developer-guide/dsl-api.html
##########
@@ -2542,21 +2501,22 @@ <h5><a class="toc-backref" href="#id34">KTable-KTable Foreign-Key
                               <p class="first">Performs a foreign-key LEFT JOIN of this
                                 table with another table. <a class="reference external"
                 href="/%7B%7Bversion%7D%7D/javadoc/org/apache/kafka/streams/kstream/KTable.html#leftJoin-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
-                              <div class="highlight-java">
-                                <div class="highlight">
-                                  <pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> Long<span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
-                <span class="n">KTable</span><span class="o">&lt;Long</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;<br>//This </span><span class="o"><span class="o"><span class="n">foreignKeyExtractor</span></span> simply uses the left-value to map to the right-key.<br></span><span class="o"><span class="n">Function</span><span class="o">&lt;Long</span><span class="o">,</span> Long<span class="n"></span><span class="o">&gt;</span> <span class="n">foreignKeyExtractor</span> <span class="o">=</span> <span class="o">(x) -&gt; x;</span><br><br></span><span class="c1">// Java 8+ example, using lambda expressions</span>
-                <span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">joined</span> <span class="o">=</span> <span class="n">left</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">right</span><span class="o">,</span><br>    <span class="o"><span class="n">foreignKeyExtractor,</span></span>
-                    <span class="o">(</span><span class="n">leftValue</span><span class="o">,</span> <span class="n">rightValue</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="s">"left="</span> <span class="o">+</span> <span class="n">leftValue</span> <span class="o">+</span> <span class="s">", right="</span> <span class="o">+</span> <span class="n">rightValue</span> <span class="cm">/* ValueJoiner */</span>
-                  <span class="o">);</span></code></pre>
-                                </div>
-                              </div>
+                                <pre class="line-numbers"><code class="language-java">KTable&lt;String, Long&gt; left = ...;
+                KTable&lt;Long, Double&gt; right = ...;
+//This foreignKeyExtractor simply uses the left-value to map to the right-key.
+Function&lt;Long, Long&gt; foreignKeyExtractor = (x) -&gt; x;
+
+// Java 8+ example, using lambda expressions
+                KTable&lt;String, String&gt; joined = left.join(right,
+    foreignKeyExtractor,

Review comment:
       Fixed

##########
File path: docs/streams/developer-guide/dsl-api.html
##########
@@ -3207,15 +3161,14 @@ <h5><a class="toc-backref" href="#id34">KTable-KTable Foreign-Key
                                 terminology in academic literature, where the semantics of sliding windows are different to those of hopping windows.</p>
                         </div>
                         <p>The following code defines a hopping window with a size of 5 minutes and an advance interval of 1 minute:</p>
-                        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.time.Duration</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.kstream.TimeWindows</span><span class="o">;</span>
-
-<span class="c1">// A hopping time window with a size of 5 minutes and an advance interval of 1 minute.</span>
-<span class="c1">// The window&#39;s name -- the string parameter -- is used to e.g. name the backing state store.</span>
-<span class="kt">Duration</span> <span class="n">windowSizeMs</span> <span class="o">=</span> <span class="n">Duration</span><span class="o">.</span><span class="na">ofMinutes</span><span class="o">(</span><span class="mi">5</span><span class="o">);</span>
-<span class="kt">Duration</span> <span class="n">advanceMs</span> <span class="o">=</span>    <span class="n">Duration</span><span class="o">.</span><span class="na">ofMinutes</span><span class="o">(</span><span class="mi">1</span><span class="o">);</span>
-<span class="n">TimeWindows</span><span class="o">.</span><span class="na">of</span><span class="o">(</span><span class="n">windowSizeMs</span><span class="o">).</span><span class="na">advanceBy</span><span class="o">(</span><span class="n">advanceMs</span><span class="o">);</span></code></pre></div>
-                        </div>
+                        <pre class="line-numbers"><code class="language-java">import java.time.Duration;
+import org.apache.kafka.streams.kstream.TimeWindows;
+
+// A hopping time window with a size of 5 minutes and an advance interval of 1 minute.
+// The window&#39;s name -- the string parameter -- is used to e.g. name the backing state store.
+Duration windowSizeMs = Duration.ofMinutes(5);
+Duration advanceMs =    Duration.ofMinutes(1);

Review comment:
       Fixed







[GitHub] [kafka] cadonna commented on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
cadonna commented on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-838118603


   For next time, I would recommend opening smaller PRs. Almost 2000 additions is a lot, even for docs. In my experience, smaller PRs tend to be merged faster.





[GitHub] [kafka] jlprat commented on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat commented on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-838120807


   @cadonna I'll do it next time. I was debating between providing a PR per file or a PR per folder (I ended up doing a PR for the Streams folder).





[GitHub] [kafka] jlprat commented on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat commented on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-841165081


   @cadonna Would you like me to split this PR into several ones? One per file, for example?





[GitHub] [kafka] jlprat edited a comment on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat edited a comment on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-838123810


   Most of the changes are purely cosmetic:
   - using the right tags for embedding a code snippet
   - escaping `<` and `>` characters to HTML-encoded entities so they are properly rendered
   - removing extra indentation





[GitHub] [kafka] jlprat commented on pull request #10651: MINOR: Kafka Streams code samples formating unification

Posted by GitBox <gi...@apache.org>.
jlprat commented on pull request #10651:
URL: https://github.com/apache/kafka/pull/10651#issuecomment-836338082


   For the record:
   The contribution is my original work and I license the work to the project under the project's open source license.

