Posted to commits@kafka.apache.org by st...@apache.org on 2024/02/22 23:11:45 UTC

(kafka-site) branch 37sitedocs created (now 1de46838)

This is an automated email from the ASF dual-hosted git repository.

stanislavkozlovski pushed a change to branch 37sitedocs
in repository https://gitbox.apache.org/repos/asf/kafka-site.git


      at 1de46838 37: Add latest apache/kafka/3.7 site-docs

This branch includes the following new commits:

     new 1de46838 37: Add latest apache/kafka/3.7 site-docs

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email. The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



(kafka-site) 01/01: 37: Add latest apache/kafka/3.7 site-docs

Posted by st...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

stanislavkozlovski pushed a commit to branch 37sitedocs
in repository https://gitbox.apache.org/repos/asf/kafka-site.git

commit 1de4683885134995bc56a4b3c0ccd4483a6fbda5
Author: Stanislav <st...@confluent.io>
AuthorDate: Fri Feb 23 00:11:27 2024 +0100

    37: Add latest apache/kafka/3.7 site-docs
---
 37/configuration.html                          |   2 +-
 37/connect.html                                |  13 +-
 37/design.html                                 |  41 +-
 37/documentation.html                          |   7 +-
 37/generated/admin_client_config.html          |  44 +-
 37/generated/connect_config.html               |  22 +-
 37/generated/connect_metrics.html              |   4 +-
 37/generated/connect_rest.yaml                 |  25 +-
 37/generated/consumer_config.html              |  52 +-
 37/generated/kafka_config.html                 | 183 +------
 37/generated/mirror_connector_config.html      |  12 +-
 37/generated/producer_config.html              |  32 +-
 37/generated/protocol_api_keys.html            |   8 -
 37/generated/protocol_errors.html              |   6 -
 37/generated/protocol_messages.html            | 717 +++----------------------
 37/generated/remote_log_manager_config.html    |  10 -
 37/generated/sink_connector_config.html        |   6 +-
 37/generated/source_connector_config.html      |   6 +-
 37/generated/streams_config.html               |  36 +-
 37/generated/topic_config.html                 |   4 +-
 37/js/templateData.js                          |   6 +-
 37/ops.html                                    | 217 +++-----
 37/quickstart.html                             |  20 +-
 37/security.html                               | 204 +++----
 37/streams/developer-guide/config-streams.html |  18 -
 37/streams/developer-guide/dsl-api.html        |  16 +-
 37/streams/tutorial.html                       |   4 +-
 37/streams/upgrade-guide.html                  |  92 ----
 37/toc.html                                    |  15 +-
 37/upgrade.html                                |  70 ++-
 37/uses.html                                   |   6 +-
 31 files changed, 410 insertions(+), 1488 deletions(-)

diff --git a/37/configuration.html b/37/configuration.html
index 7bcb097b..03038223 100644
--- a/37/configuration.html
+++ b/37/configuration.html
@@ -16,7 +16,7 @@
 -->
 
 <script id="configuration-template" type="text/x-handlebars-template">
-  Kafka uses key-value pairs in the <a href="https://en.wikipedia.org/wiki/.properties">property file format</a> for configuration. These values can be supplied either from a file or programmatically.
+  Kafka uses key-value pairs in the <a href="http://en.wikipedia.org/wiki/.properties">property file format</a> for configuration. These values can be supplied either from a file or programmatically.
 
   <h3 class="anchor-heading"><a id="brokerconfigs" class="anchor-link"></a><a href="#brokerconfigs">3.1 Broker Configs</a></h3>
 
diff --git a/37/connect.html b/37/connect.html
index aa8cf9e3..2deb8901 100644
--- a/37/connect.html
+++ b/37/connect.html
@@ -41,7 +41,7 @@
     <p>In standalone mode all work is performed in a single process. This configuration is simpler to setup and get started with and may be useful in situations where only one worker makes sense (e.g. collecting log files), but it does not benefit from some of the features of Kafka Connect such as fault tolerance. You can start a standalone process with the following command:</p>
 
     <pre class="brush: bash;">
-&gt; bin/connect-standalone.sh config/connect-standalone.properties [connector1.properties connector2.json ...]</pre>
+&gt; bin/connect-standalone.sh config/connect-standalone.properties [connector1.properties connector2.properties ...]</pre>
 
     <p>The first parameter is the configuration for the worker. This includes settings such as the Kafka connection parameters, serialization format, and how frequently to commit offsets. The provided example should work well with a local cluster running with the default configuration provided by <code>config/server.properties</code>. It will require tweaking to use with a different configuration or production deployment. All workers (both standalone and distributed) require a few configs:</p>
     <ul>
@@ -60,7 +60,7 @@
     
     <p>Starting with 2.3.0, client configuration overrides can be configured individually per connector by using the prefixes <code>producer.override.</code> and <code>consumer.override.</code> for Kafka sources or Kafka sinks respectively. These overrides are included with the rest of the connector's configuration properties.</p>
 
-    <p>The remaining parameters are connector configuration files. Each file may either be a Java Properties file or a JSON file containing an object with the same structure as the request body of either the <code>POST /connectors</code> endpoint or the <code>PUT /connectors/{name}/config</code> endpoint (see the <a href="/{{version}}/generated/connect_rest.yaml">OpenAPI documentation</a>). You may include as many as you want, but all will execute within the same process (on different th [...]
+    <p>The remaining parameters are connector configuration files. You may include as many as you want, but all will execute within the same process (on different threads). You can also choose not to specify any connector configuration files on the command line, and instead use the REST API to create connectors at runtime after your standalone worker starts.</p>
 
     <p>Distributed mode handles automatic balancing of work, allows you to scale up (or down) dynamically, and offers fault tolerance both in the active tasks and for configuration and offset commit data. Execution is very similar to standalone mode:</p>
 
@@ -293,13 +293,12 @@ listeners=http://localhost:8080,https://localhost:8443</pre>
 
     <ul>
         <li><code>GET /connectors</code> - return a list of active connectors</li>
-        <li><code>POST /connectors</code> - create a new connector; the request body should be a JSON object containing a string <code>name</code> field and an object <code>config</code> field with the connector configuration parameters. The JSON object may also optionally contain a string <code>initial_state</code> field which can take the following values - <code>STOPPED</code>, <code>PAUSED</code> or <code>RUNNING</code> (the default value)</li>
+        <li><code>POST /connectors</code> - create a new connector; the request body should be a JSON object containing a string <code>name</code> field and an object <code>config</code> field with the connector configuration parameters</li>
         <li><code>GET /connectors/{name}</code> - get information about a specific connector</li>
         <li><code>GET /connectors/{name}/config</code> - get the configuration parameters for a specific connector</li>
         <li><code>PUT /connectors/{name}/config</code> - update the configuration parameters for a specific connector</li>
         <li><code>GET /connectors/{name}/status</code> - get current status of the connector, including if it is running, failed, paused, etc., which worker it is assigned to, error information if it has failed, and the state of all its tasks</li>
-        <li><code>GET /connectors/{name}/tasks</code> - get a list of tasks currently running for a connector along with their configurations</li>
-        <li><code>GET /connectors/{name}/tasks-config</code> - get the configuration of all tasks for a specific connector. This endpoint is deprecated and will be removed in the next major release. Please use the <code>GET /connectors/{name}/tasks</code> endpoint instead. Note that the response structures of the two endpoints differ slightly, please refer to the <a href="/{{version}}/generated/connect_rest.yaml">OpenAPI documentation</a> for more details</li>
+        <li><code>GET /connectors/{name}/tasks</code> - get a list of tasks currently running for a connector</li>
         <li><code>GET /connectors/{name}/tasks/{taskid}/status</code> - get current status of the task, including if it is running, failed, paused, etc., which worker it is assigned to, and error information if it has failed</li>
         <li><code>PUT /connectors/{name}/pause</code> - pause the connector and its tasks, which stops message processing until the connector is resumed. Any resources claimed by its tasks are left allocated, which allows the connector to begin processing data quickly once it is resumed.</li>
         <li id="connect_stopconnector"><code>PUT /connectors/{name}/stop</code> - stop the connector and shut down its tasks, deallocating any resources claimed by its tasks. This is more efficient from a resource usage standpoint than pausing the connector, but can cause it to take longer to begin processing data once resumed. Note that the offsets for a connector can be only modified via the offsets management endpoints if it is in the stopped state</li>
@@ -318,7 +317,7 @@ listeners=http://localhost:8080,https://localhost:8443</pre>
             <ul>
                 <li><code>GET /connectors/{name}/offsets</code> - get the current offsets for a connector</li>
                 <li><code>DELETE /connectors/{name}/offsets</code> - reset the offsets for a connector. The connector must exist and must be in the stopped state (see <a href="#connect_stopconnector"><code>PUT /connectors/{name}/stop</code></a>)</li>
-                <li><code>PATCH /connectors/{name}/offsets</code> - alter the offsets for a connector. The connector must exist and must be in the stopped state (see <a href="#connect_stopconnector"><code>PUT /connectors/{name}/stop</code></a>). The request body should be a JSON object containing a JSON array <code>offsets</code> field, similar to the response body of the <code>GET /connectors/{name}/offsets</code> endpoint.
+                <li><code>PATCH /connectors/{name}/offsets</code> - alter the offsets for a connector. The connector must exist and must be in the stopped state (see <a href="#connect_stopconnector"><code>PUT /connectors/{name}/stop</code></a>). The request body should be a JSON object containing a JSON array <code>offsets</code> field, similar to the response body of the <code>GET /connectors/{name}/offsets</code> endpoint</li>
                 An example request body for the <code>FileStreamSourceConnector</code>:
                 <pre class="line-numbers"><code class="json">
 {
@@ -357,7 +356,7 @@ listeners=http://localhost:8080,https://localhost:8443</pre>
   ]
 }
                 </code></pre>
-                The "offset" field may be null to reset the offset for a specific partition (applicable to both source and sink connectors). Note that the request body format depends on the connector implementation in the case of source connectors, whereas there is a common format across all sink connectors.</li>
+                The "offset" field may be null to reset the offset for a specific partition (applicable to both source and sink connectors). Note that the request body format depends on the connector implementation in the case of source connectors, whereas there is a common format across all sink connectors.
             </ul>
         </li>
     </ul>
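
As a quick illustration of the standalone worker and REST endpoints shown in the connect.html hunks above, here is a minimal Java sketch that lists active connectors and creates one via POST /connectors. It assumes a hypothetical Connect worker listening on localhost:8083 and uses placeholder connector settings (the FileStreamSource quickstart connector reading a hypothetical /tmp/test.txt into a connect-test topic); it is a sketch against those assumptions, not part of the committed documentation.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectRestExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET /connectors - list the connectors currently running on the worker
        HttpRequest list = HttpRequest.newBuilder(URI.create("http://localhost:8083/connectors"))
                .GET()
                .build();
        System.out.println(client.send(list, HttpResponse.BodyHandlers.ofString()).body());

        // POST /connectors - the body carries a string "name" field and an object "config" field
        String body = "{\"name\":\"local-file-source\","
                + "\"config\":{\"connector.class\":\"FileStreamSource\","
                + "\"tasks.max\":\"1\",\"file\":\"/tmp/test.txt\",\"topic\":\"connect-test\"}}";
        HttpRequest create = HttpRequest.newBuilder(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(client.send(create, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}
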
diff --git a/37/design.html b/37/design.html
index 18f78d04..139eb0c1 100644
--- a/37/design.html
+++ b/37/design.html
@@ -38,11 +38,11 @@
     Kafka relies heavily on the filesystem for storing and caching messages. There is a general perception that "disks are slow" which makes people skeptical that a persistent structure can offer competitive performance.
     In fact disks are both much slower and much faster than people expect depending on how they are used; and a properly designed disk structure can often be as fast as the network.
     <p>
-    The key fact about disk performance is that the throughput of hard drives has been diverging from the latency of a disk seek for the last decade. As a result the performance of linear writes on a <a href="https://en.wikipedia.org/wiki/Non-RAID_drive_architectures">JBOD</a>
+    The key fact about disk performance is that the throughput of hard drives has been diverging from the latency of a disk seek for the last decade. As a result the performance of linear writes on a <a href="http://en.wikipedia.org/wiki/Non-RAID_drive_architectures">JBOD</a>
     configuration with six 7200rpm SATA RAID-5 array is about 600MB/sec but the performance of random writes is only about 100k/sec&mdash;a difference of over 6000X. These linear reads and writes are the most
     predictable of all usage patterns, and are heavily optimized by the operating system. A modern operating system provides read-ahead and write-behind techniques that prefetch data in large block multiples and
-    group smaller logical writes into large physical writes. A further discussion of this issue can be found in this <a href="https://queue.acm.org/detail.cfm?id=1563874">ACM Queue article</a>; they actually find that
-    <a href="https://deliveryimages.acm.org/10.1145/1570000/1563874/jacobs3.jpg">sequential disk access can in some cases be faster than random memory access!</a>
+    group smaller logical writes into large physical writes. A further discussion of this issue can be found in this <a href="http://queue.acm.org/detail.cfm?id=1563874">ACM Queue article</a>; they actually find that
+    <a href="http://deliveryimages.acm.org/10.1145/1570000/1563874/jacobs3.jpg">sequential disk access can in some cases be faster than random memory access!</a>
     <p>
     To compensate for this performance divergence, modern operating systems have become increasingly aggressive in their use of main memory for disk caching. A modern OS will happily divert <i>all</i> free memory to
     disk caching with little performance penalty when the memory is reclaimed. All disk reads and writes will go through this unified cache. This feature cannot easily be turned off without using direct I/O, so even
@@ -64,7 +64,7 @@
     This suggests a design which is very simple: rather than maintain as much as possible in-memory and flush it all out to the filesystem in a panic when we run out of space, we invert that. All data is immediately
     written to a persistent log on the filesystem without necessarily flushing to disk. In effect this just means that it is transferred into the kernel's pagecache.
     <p>
-    This style of pagecache-centric design is described in an <a href="https://varnish-cache.org/wiki/ArchitectNotes">article</a> on the design of Varnish here (along with a healthy dose of arrogance).
+    This style of pagecache-centric design is described in an <a href="http://varnish-cache.org/wiki/ArchitectNotes">article</a> on the design of Varnish here (along with a healthy dose of arrogance).
 
     <h4 class="anchor-heading"><a id="design_constanttime" class="anchor-link"></a><a href="#design_constanttime">Constant Time Suffices</a></h4>
     <p>
@@ -107,7 +107,7 @@
     <p>
     The message log maintained by the broker is itself just a directory of files, each populated by a sequence of message sets that have been written to disk in the same format used by the producer and consumer.
     Maintaining this common format allows optimization of the most important operation: network transfer of persistent log chunks. Modern unix operating systems offer a highly optimized code path for transferring data
-    out of pagecache to a socket; in Linux this is done with the <a href="https://man7.org/linux/man-pages/man2/sendfile.2.html">sendfile system call</a>.
+    out of pagecache to a socket; in Linux this is done with the <a href="http://man7.org/linux/man-pages/man2/sendfile.2.html">sendfile system call</a>.
     <p>
     To understand the impact of sendfile, it is important to understand the common data path for transfer of data from file to socket:
     <ol>
@@ -136,9 +136,8 @@
     the user can always compress its messages one at a time without any support needed from Kafka, but this can lead to very poor compression ratios as much of the redundancy is due to repetition between messages of
     the same type (e.g. field names in JSON or user agents in web logs or common string values). Efficient compression requires compressing multiple messages together rather than compressing each message individually.
     <p>
-    Kafka supports this with an efficient batching format. A batch of messages can be grouped together, compressed, and sent to the server in this form. The broker decompresses the batch in order to validate it. For
-    example, it validates that the number of records in the batch is same as what batch header states. This batch of messages is then written to disk in compressed form. The batch will remain compressed in the log and it will also be transmitted to the 
-    consumer in compressed form. The consumer decompresses any compressed data that it receives.
+    Kafka supports this with an efficient batching format. A batch of messages can be clumped together compressed and sent to the server in this form. This batch of messages will be written in compressed form and will
+    remain compressed in the log and will only be decompressed by the consumer.
     <p>
     Kafka supports GZIP, Snappy, LZ4 and ZStandard compression protocols. More details on compression can be found <a href="https://cwiki.apache.org/confluence/display/KAFKA/Compression">here</a>.
 
@@ -160,7 +159,7 @@
     to accumulate no more than a fixed number of messages and to wait no longer than some fixed latency bound (say 64k or 10 ms). This allows the accumulation of more bytes to send, and few larger I/O operations on the
     servers. This buffering is configurable and gives a mechanism to trade off a small amount of additional latency for better throughput.
     <p>
-    Details on <a href="#producerconfigs">configuration</a> and the <a href="https://kafka.apache.org/082/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html">api</a> for the producer can be found
+    Details on <a href="#producerconfigs">configuration</a> and the <a href="http://kafka.apache.org/082/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html">api</a> for the producer can be found
     elsewhere in the documentation.
 
     <h3 class="anchor-heading"><a id="theconsumer" class="anchor-link"></a><a href="#theconsumer">4.5 The Consumer</a></h3>
@@ -171,8 +170,8 @@
     <h4 class="anchor-heading"><a id="design_pull" class="anchor-link"></a><a href="#design_pull">Push vs. pull</a></h4>
     <p>
     An initial question we considered is whether consumers should pull data from brokers or brokers should push data to the consumer. In this respect Kafka follows a more traditional design, shared by most messaging
-    systems, where data is pushed to the broker from the producer and pulled from the broker by the consumer. Some logging-centric systems, such as <a href="https://github.com/facebook/scribe">Scribe</a> and
-    <a href="https://flume.apache.org/">Apache Flume</a>, follow a very different push-based path where data is pushed downstream. There are pros and cons to both approaches. However, a push-based system has difficulty
+    systems, where data is pushed to the broker from the producer and pulled from the broker by the consumer. Some logging-centric systems, such as <a href="http://github.com/facebook/scribe">Scribe</a> and
+    <a href="http://flume.apache.org/">Apache Flume</a>, follow a very different push-based path where data is pushed downstream. There are pros and cons to both approaches. However, a push-based system has difficulty
     dealing with diverse consumers as the broker controls the rate at which data is transferred. The goal is generally for the consumer to be able to consume at the maximum possible rate; unfortunately, in a push
     system this means the consumer tends to be overwhelmed when its rate of consumption falls below the rate of production (a denial of service attack, in essence). A pull-based system has the nicer property that
     the consumer simply falls behind and catches up when it can. This can be mitigated with some kind of backoff protocol by which the consumer can indicate it is overwhelmed, but getting the rate of transfer to
@@ -364,7 +363,7 @@
     <h4><a id="design_replicatedlog" href="#design_replicatedlog">Replicated Logs: Quorums, ISRs, and State Machines (Oh my!)</a></h4>
 
     At its heart a Kafka partition is a replicated log. The replicated log is one of the most basic primitives in distributed data systems, and there are many approaches for implementing one. A replicated log can be
-    used by other systems as a primitive for implementing other distributed systems in the <a href="https://en.wikipedia.org/wiki/State_machine_replication">state-machine style</a>.
+    used by other systems as a primitive for implementing other distributed systems in the <a href="http://en.wikipedia.org/wiki/State_machine_replication">state-machine style</a>.
     <p>
     A replicated log models the process of coming into consensus on the order of a series of values (generally numbering the log entries 0, 1, 2, ...). There are many ways to implement this, but the simplest and fastest
     is with a leader who chooses the ordering of values provided to it. As long as the leader remains alive, all followers need to only copy the values and ordering the leader chooses.
@@ -384,16 +383,16 @@
     This majority vote approach has a very nice property: the latency is dependent on only the fastest servers. That is, if the replication factor is three, the latency is determined by the faster follower not the slower one.
     <p>
     There are a rich variety of algorithms in this family including ZooKeeper's
-    <a href="https://web.archive.org/web/20140602093727/https://www.stanford.edu/class/cs347/reading/zab.pdf">Zab</a>,
+    <a href="http://web.archive.org/web/20140602093727/http://www.stanford.edu/class/cs347/reading/zab.pdf">Zab</a>,
     <a href="https://www.usenix.org/system/files/conference/atc14/atc14-paper-ongaro.pdf">Raft</a>,
-    and <a href="https://pmg.csail.mit.edu/papers/vr-revisited.pdf">Viewstamped Replication</a>.
+    and <a href="http://pmg.csail.mit.edu/papers/vr-revisited.pdf">Viewstamped Replication</a>.
     The most similar academic publication we are aware of to Kafka's actual implementation is
-    <a href="https://research.microsoft.com/apps/pubs/default.aspx?id=66814">PacificA</a> from Microsoft.
+    <a href="http://research.microsoft.com/apps/pubs/default.aspx?id=66814">PacificA</a> from Microsoft.
     <p>
     The downside of majority vote is that it doesn't take many failures to leave you with no electable leaders. To tolerate one failure requires three copies of the data, and to tolerate two failures requires five copies
     of the data. In our experience having only enough redundancy to tolerate a single failure is not enough for a practical system, but doing every write five times, with 5x the disk space requirements and 1/5th the
     throughput, is not very practical for large volume data problems. This is likely why quorum algorithms more commonly appear for shared cluster configuration such as ZooKeeper but are less common for primary data
-    storage. For example in HDFS the namenode's high-availability feature is built on a <a href="https://blog.cloudera.com/blog/2012/10/quorum-based-journaling-in-cdh4-1">majority-vote-based journal</a>, but this more
+    storage. For example in HDFS the namenode's high-availability feature is built on a <a href="http://blog.cloudera.com/blog/2012/10/quorum-based-journaling-in-cdh4-1">majority-vote-based journal</a>, but this more
     expensive approach is not used for the data itself.
     <p>
     Kafka takes a slightly different approach to choosing its quorum set. Instead of majority vote, Kafka dynamically maintains a set of in-sync replicas (ISR) that are caught-up to the leader. Only members of this set
@@ -494,12 +493,12 @@
     <li><i>Event sourcing</i>. This is a style of application design which co-locates query processing with application design and uses a log of changes as the primary store for the application.
     <li><i>Journaling for high-availability</i>. A process that does local computation can be made fault-tolerant by logging out changes that it makes to its local state so another process can reload these changes and
     carry on if it should fail. A concrete example of this is handling counts, aggregations, and other "group by"-like processing in a stream query system. Samza, a real-time stream-processing framework,
-    <a href="https://samza.apache.org/learn/documentation/0.7.0/container/state-management.html">uses this feature</a> for exactly this purpose.
+    <a href="http://samza.apache.org/learn/documentation/0.7.0/container/state-management.html">uses this feature</a> for exactly this purpose.
     </ol>
     In each of these cases one needs primarily to handle the real-time feed of changes, but occasionally, when a machine crashes or data needs to be re-loaded or re-processed, one needs to do a full load.
     Log compaction allows feeding both of these use cases off the same backing topic.
 
-    This style of usage of a log is described in more detail in <a href="https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying">this blog post</a>.
+    This style of usage of a log is described in more detail in <a href="http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying">this blog post</a>.
     <p>
     The general idea is quite simple. If we had infinite log retention, and we logged each change in the above cases, then we would have captured the state of the system at each time from when it first began.
     Using this complete log, we could restore to any point in time by replaying the first N records in the log. This hypothetical complete log is not very practical for systems that update a single record many times
@@ -651,9 +650,9 @@
     <h4 class="anchor-heading"><a id="design_quotascpu" class="anchor-link"></a><a href="#design_quotascpu">Request Rate Quotas</a></h4>
     <p>
         Request rate quotas are defined as the percentage of time a client can utilize on request handler I/O
-        threads and network threads of each broker within a quota window. A quota of <code>n%</code> represents
-        <code>n%</code> of one thread, so the quota is out of a total capacity of <code>((num.io.threads + num.network.threads) * 100)%</code>.
-        Each group of clients may use a total percentage of upto <code>n%</code> across all I/O and network threads in a quota
+        threads and network threads of each broker within a quota window. A quota of <tt>n%</tt> represents
+        <tt>n%</tt> of one thread, so the quota is out of a total capacity of <tt>((num.io.threads + num.network.threads) * 100)%</tt>.
+        Each group of clients may use a total percentage of upto <tt>n%</tt> across all I/O and network threads in a quota
         window before being throttled. Since the number of threads allocated for I/O and network threads are typically based
         on the number of cores available on the broker host, request rate quotas represent the total percentage of CPU
         that may be used by each group of clients sharing the quota.
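
The batching, compression, and producer buffering discussed in the design.html hunks above (compressing whole record batches, accumulating "say 64k or 10 ms" before sending) are driven by ordinary producer configuration. Below is a minimal Java sketch using the standard Java client; the broker address and topic name are placeholders and the chosen values are only illustrative.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Compress whole record batches; gzip, snappy, lz4 and zstd are supported
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        // Accumulate up to ~64 KB or 10 ms per batch before sending, trading a little latency for throughput
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "65536");
        props.put(ProducerConfig.LINGER_MS_CONFIG, "10");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value")); // placeholder topic
        } // close() flushes any buffered batches
    }
}
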
diff --git a/37/documentation.html b/37/documentation.html
index cd6373e7..3589c446 100644
--- a/37/documentation.html
+++ b/37/documentation.html
@@ -33,7 +33,7 @@
     <!--//#include virtual="../includes/_docs_banner.htm" -->
     
     <h1>Documentation</h1>
-    <h3>Kafka 3.7 Documentation</h3>
+    <h3>Kafka 3.4 Documentation</h3>
     Prior releases: <a href="/07/documentation.html">0.7.x</a>, 
                     <a href="/08/documentation.html">0.8.0</a>, 
                     <a href="/081/documentation.html">0.8.1.X</a>, 
@@ -58,9 +58,6 @@
                     <a href="/31/documentation.html">3.1.X</a>.
                     <a href="/32/documentation.html">3.2.X</a>.
                     <a href="/33/documentation.html">3.3.X</a>.
-                    <a href="/34/documentation.html">3.4.X</a>.
-                    <a href="/35/documentation.html">3.5.X</a>.
-                    <a href="/36/documentation.html">3.6.X</a>.
 
    <h2 class="anchor-heading"><a id="gettingStarted" class="anchor-link"></a><a href="#gettingStarted">1. Getting Started</a></h2>
       <h3 class="anchor-heading"><a id="introduction" class="anchor-link"></a><a href="#introduction">1.1 Introduction</a></h3>
@@ -73,8 +70,6 @@
       <!--#include virtual="ecosystem.html" -->
       <h3 class="anchor-heading"><a id="upgrade" class="anchor-link"></a><a href="#upgrade">1.5 Upgrading From Previous Versions</a></h3>
       <!--#include virtual="upgrade.html" -->
-      <h3 class="anchor-heading"><a id="docker" class="anchor-link"></a><a href="#docker">1.6 Docker</a></h3>
-      <!--#include virtual="docker.html" -->
 
     <h2 class="anchor-heading"><a id="api" class="anchor-link"></a><a href="#api">2. APIs</a></h2>
 
diff --git a/37/generated/admin_client_config.html b/37/generated/admin_client_config.html
index 853d5350..f33a93e5 100644
--- a/37/generated/admin_client_config.html
+++ b/37/generated/admin_client_config.html
@@ -1,20 +1,10 @@
 <ul class="config-list">
 <li>
-<h4><a id="bootstrap.controllers"></a><a id="adminclientconfigs_bootstrap.controllers" href="#adminclientconfigs_bootstrap.controllers">bootstrap.controllers</a></h4>
-<p>A list of host/port pairs to use for establishing the initial connection to the KRaft controller quorum. This list should be in the form <code>host1:port1,host2:port2,...</code>.</p>
-<table><tbody>
-<tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>""</td></tr>
-<tr><th>Valid Values:</th><td></td></tr>
-<tr><th>Importance:</th><td>high</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="bootstrap.servers"></a><a id="adminclientconfigs_bootstrap.servers" href="#adminclientconfigs_bootstrap.servers">bootstrap.servers</a></h4>
 <p>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynam [...]
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>""</td></tr>
+<tr><th>Default:</th><td></td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>high</td></tr>
 </tbody></table>
@@ -271,7 +261,7 @@
 </li>
 <li>
 <h4><a id="socket.connection.setup.timeout.ms"></a><a id="adminclientconfigs_socket.connection.setup.timeout.ms" href="#adminclientconfigs_socket.connection.setup.timeout.ms">socket.connection.setup.timeout.ms</a></h4>
-<p>The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the <code>socket.connection.setup.timeout.max.ms</code> value.</p>
+<p>The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>10000 (10 seconds)</td></tr>
@@ -284,7 +274,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -304,7 +294,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -340,16 +330,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="enable.metrics.push"></a><a id="adminclientconfigs_enable.metrics.push" href="#adminclientconfigs_enable.metrics.push">enable.metrics.push</a></h4>
-<p>Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client.</p>
-<table><tbody>
-<tr><th>Type:</th><td>boolean</td></tr>
-<tr><th>Default:</th><td>true</td></tr>
-<tr><th>Valid Values:</th><td></td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="metadata.max.age.ms"></a><a id="adminclientconfigs_metadata.max.age.ms" href="#adminclientconfigs_metadata.max.age.ms">metadata.max.age.ms</a></h4>
 <p>The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.</p>
 <table><tbody>
@@ -411,7 +391,7 @@
 </li>
 <li>
 <h4><a id="reconnect.backoff.ms"></a><a id="adminclientconfigs_reconnect.backoff.ms" href="#adminclientconfigs_reconnect.backoff.ms">reconnect.backoff.ms</a></h4>
-<p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the <code>reconnect.backoff.max.ms</code> value.</p>
+<p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>50</td></tr>
@@ -430,18 +410,8 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="retry.backoff.max.ms"></a><a id="adminclientconfigs_retry.backoff.max.ms" href="#adminclientconfigs_retry.backoff.max.ms">retry.backoff.max.ms</a></h4>
-<p>The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If <code>retry.backoff.ms</code [...]
-<table><tbody>
-<tr><th>Type:</th><td>long</td></tr>
-<tr><th>Default:</th><td>1000 (1 second)</td></tr>
-<tr><th>Valid Values:</th><td>[0,...]</td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="retry.backoff.ms"></a><a id="adminclientconfigs_retry.backoff.ms" href="#adminclientconfigs_retry.backoff.ms">retry.backoff.ms</a></h4>
-<p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the <code>retry.backoff.max.ms</code> value.</p>
+<p>The amount of time to wait before attempting to retry a failed request. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>100</td></tr>
@@ -681,7 +651,7 @@
 </li>
 <li>
 <h4><a id="ssl.engine.factory.class"></a><a id="adminclientconfigs_ssl.engine.factory.class" href="#adminclientconfigs_ssl.engine.factory.class">ssl.engine.factory.class</a></h4>
-<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connection [...]
+<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
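
The admin client settings whose descriptions change in the hunk above (bootstrap.servers, retry.backoff.ms, socket.connection.setup.timeout.ms) are supplied as plain properties when the client is constructed. A minimal, hedged Java sketch follows; the broker address is a placeholder and the values shown are simply the defaults from the table above.

import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class AdminClientExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Backoff and connection-setup settings described in the generated config table above
        props.put(AdminClientConfig.RETRY_BACKOFF_MS_CONFIG, "100");
        props.put("socket.connection.setup.timeout.ms", "10000");

        try (Admin admin = Admin.create(props)) {
            // List topic names as a simple connectivity check
            admin.listTopics().names().get().forEach(System.out::println);
        }
    }
}
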
diff --git a/37/generated/connect_config.html b/37/generated/connect_config.html
index 26eba82a..fffbeb33 100644
--- a/37/generated/connect_config.html
+++ b/37/generated/connect_config.html
@@ -344,7 +344,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -364,7 +364,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -681,7 +681,7 @@
 </li>
 <li>
 <h4><a id="reconnect.backoff.ms"></a><a id="connectconfigs_reconnect.backoff.ms" href="#connectconfigs_reconnect.backoff.ms">reconnect.backoff.ms</a></h4>
-<p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the <code>reconnect.backoff.max.ms</code> value.</p>
+<p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>50</td></tr>
@@ -740,18 +740,8 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="retry.backoff.max.ms"></a><a id="connectconfigs_retry.backoff.max.ms" href="#connectconfigs_retry.backoff.max.ms">retry.backoff.max.ms</a></h4>
-<p>The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If <code>retry.backoff.ms</code [...]
-<table><tbody>
-<tr><th>Type:</th><td>long</td></tr>
-<tr><th>Default:</th><td>1000 (1 second)</td></tr>
-<tr><th>Valid Values:</th><td>[0,...]</td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="retry.backoff.ms"></a><a id="connectconfigs_retry.backoff.ms" href="#connectconfigs_retry.backoff.ms">retry.backoff.ms</a></h4>
-<p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the <code>retry.backoff.max.ms</code> value.</p>
+<p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>100</td></tr>
@@ -981,7 +971,7 @@
 </li>
 <li>
 <h4><a id="socket.connection.setup.timeout.ms"></a><a id="connectconfigs_socket.connection.setup.timeout.ms" href="#connectconfigs_socket.connection.setup.timeout.ms">socket.connection.setup.timeout.ms</a></h4>
-<p>The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the <code>socket.connection.setup.timeout.max.ms</code> value.</p>
+<p>The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>10000 (10 seconds)</td></tr>
@@ -1021,7 +1011,7 @@
 </li>
 <li>
 <h4><a id="ssl.engine.factory.class"></a><a id="connectconfigs_ssl.engine.factory.class" href="#connectconfigs_ssl.engine.factory.class">ssl.engine.factory.class</a></h4>
-<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connection [...]
+<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
diff --git a/37/generated/connect_metrics.html b/37/generated/connect_metrics.html
index 328cd19f..addf60ce 100644
--- a/37/generated/connect_metrics.html
+++ b/37/generated/connect_metrics.html
@@ -1,5 +1,5 @@
-[2024-01-08 16:06:18,550] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:694)
-[2024-01-08 16:06:18,554] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:704)
+[2024-02-22 11:02:50,169] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:693)
+[2024-02-22 11:02:50,170] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:703)
 <table class="data-table"><tbody>
 <tr>
 <td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.connect:type=connect-worker-metrics</td></tr>
diff --git a/37/generated/connect_rest.yaml b/37/generated/connect_rest.yaml
index 3ab39df9..03d51602 100644
--- a/37/generated/connect_rest.yaml
+++ b/37/generated/connect_rest.yaml
@@ -8,7 +8,7 @@ info:
     name: Apache 2.0
     url: https://www.apache.org/licenses/LICENSE-2.0.html
   title: Kafka Connect REST API
-  version: 3.7.0
+  version: 3.6.2-SNAPSHOT
 paths:
   /:
     get:
@@ -55,13 +55,6 @@ paths:
         required: true
         schema:
           type: string
-      - description: "The scope for the logging modification (single-worker, cluster-wide,\
-          \ etc.)"
-        in: query
-        name: scope
-        schema:
-          type: string
-          default: worker
       requestBody:
         content:
           application/json:
@@ -395,10 +388,9 @@ paths:
                 items:
                   $ref: '#/components/schemas/TaskInfo'
           description: default response
-      summary: List all tasks and their configurations for the specified connector
+      summary: List all tasks for the specified connector
   /connectors/{connector}/tasks-config:
     get:
-      deprecated: true
       operationId: getTasksConfig
       parameters:
       - in: path
@@ -647,19 +639,6 @@ components:
           type: object
           additionalProperties:
             type: string
-        initialState:
-          type: string
-          enum:
-          - RUNNING
-          - PAUSED
-          - STOPPED
-        initial_state:
-          type: string
-          enum:
-          - RUNNING
-          - PAUSED
-          - STOPPED
-          writeOnly: true
         name:
           type: string
     PluginInfo:
diff --git a/37/generated/consumer_config.html b/37/generated/consumer_config.html
index eec8081b..856feb16 100644
--- a/37/generated/consumer_config.html
+++ b/37/generated/consumer_config.html
@@ -50,16 +50,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="group.protocol"></a><a id="consumerconfigs_group.protocol" href="#consumerconfigs_group.protocol">group.protocol</a></h4>
-<p>The group protocol consumer should use. We currently support "classic" or "consumer". If "consumer" is specified, then the consumer group protocol will be used. Otherwise, the classic group protocol will be used.</p>
-<table><tbody>
-<tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>classic</td></tr>
-<tr><th>Valid Values:</th><td>(case insensitive) [CONSUMER, CLASSIC]</td></tr>
-<tr><th>Importance:</th><td>high</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="heartbeat.interval.ms"></a><a id="consumerconfigs_heartbeat.interval.ms" href="#consumerconfigs_heartbeat.interval.ms">heartbeat.interval.ms</a></h4>
 <p>The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than <code>session.timeout.ms</code>, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.</p>
 <table><tbody>
@@ -260,16 +250,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="group.remote.assignor"></a><a id="consumerconfigs_group.remote.assignor" href="#consumerconfigs_group.remote.assignor">group.remote.assignor</a></h4>
-<p>The server-side assignor to use. If no assignor is specified, the group coordinator will pick one. This configuration is applied only if <code>group.protocol</code> is set to "consumer".</p>
-<table><tbody>
-<tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td></td></tr>
-<tr><th>Importance:</th><td>medium</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="isolation.level"></a><a id="consumerconfigs_isolation.level" href="#consumerconfigs_isolation.level">isolation.level</a></h4>
 <p>Controls how to read messages written transactionally. If set to <code>read_committed</code>, consumer.poll() will only return transactional messages which have been committed. If set to <code>read_uncommitted</code> (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. <p>Messages will always be returned in offset order. Hence, in  <code>read_committed</ [...]
 <table><tbody>
@@ -441,7 +421,7 @@
 </li>
 <li>
 <h4><a id="socket.connection.setup.timeout.ms"></a><a id="consumerconfigs_socket.connection.setup.timeout.ms" href="#consumerconfigs_socket.connection.setup.timeout.ms">socket.connection.setup.timeout.ms</a></h4>
-<p>The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the <code>socket.connection.setup.timeout.max.ms</code> value.</p>
+<p>The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>10000 (10 seconds)</td></tr>
@@ -454,7 +434,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -474,7 +454,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -550,16 +530,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="enable.metrics.push"></a><a id="consumerconfigs_enable.metrics.push" href="#consumerconfigs_enable.metrics.push">enable.metrics.push</a></h4>
-<p>Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client.</p>
-<table><tbody>
-<tr><th>Type:</th><td>boolean</td></tr>
-<tr><th>Default:</th><td>true</td></tr>
-<tr><th>Valid Values:</th><td></td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="fetch.max.wait.ms"></a><a id="consumerconfigs_fetch.max.wait.ms" href="#consumerconfigs_fetch.max.wait.ms">fetch.max.wait.ms</a></h4>
 <p>The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.</p>
 <table><tbody>
@@ -641,7 +611,7 @@
 </li>
 <li>
 <h4><a id="reconnect.backoff.ms"></a><a id="consumerconfigs_reconnect.backoff.ms" href="#consumerconfigs_reconnect.backoff.ms">reconnect.backoff.ms</a></h4>
-<p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the <code>reconnect.backoff.max.ms</code> value.</p>
+<p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>50</td></tr>
@@ -650,18 +620,8 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="retry.backoff.max.ms"></a><a id="consumerconfigs_retry.backoff.max.ms" href="#consumerconfigs_retry.backoff.max.ms">retry.backoff.max.ms</a></h4>
-<p>The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If <code>retry.backoff.ms</code [...]
-<table><tbody>
-<tr><th>Type:</th><td>long</td></tr>
-<tr><th>Default:</th><td>1000 (1 second)</td></tr>
-<tr><th>Valid Values:</th><td>[0,...]</td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="retry.backoff.ms"></a><a id="consumerconfigs_retry.backoff.ms" href="#consumerconfigs_retry.backoff.ms">retry.backoff.ms</a></h4>
-<p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the <code>retry.backoff.max.ms</code> value.</p>
+<p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>100</td></tr>
@@ -901,7 +861,7 @@
 </li>
 <li>
 <h4><a id="ssl.engine.factory.class"></a><a id="consumerconfigs_ssl.engine.factory.class" href="#consumerconfigs_ssl.engine.factory.class">ssl.engine.factory.class</a></h4>
-<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connection [...]
+<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
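
A minimal sketch of how the consumer backoff settings above are wired into a client. The broker address, group id and topic name are assumptions for illustration, not values taken from these docs:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ConsumerBackoffSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // assumed local broker
            props.put("group.id", "backoff-demo");               // hypothetical group
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("reconnect.backoff.ms", "50");   // initial wait before reconnecting to a broker
            props.put("retry.backoff.ms", "100");      // initial wait before retrying a failed request
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("demo-topic")); // hypothetical topic
                consumer.poll(Duration.ofMillis(100));
            }
        }
    }
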
diff --git a/37/generated/kafka_config.html b/37/generated/kafka_config.html
index 6594def5..e6ecacc5 100644
--- a/37/generated/kafka_config.html
+++ b/37/generated/kafka_config.html
@@ -111,7 +111,7 @@
 </li>
 <li>
 <h4><a id="controller.quorum.fetch.timeout.ms"></a><a id="brokerconfigs_controller.quorum.fetch.timeout.ms" href="#brokerconfigs_controller.quorum.fetch.timeout.ms">controller.quorum.fetch.timeout.ms</a></h4>
-<p>Maximum time without a successful fetch from the current leader before becoming a candidate and triggering an election for voters; Maximum time a leader can go without receiving valid fetch or fetchSnapshot request from a majority of the quorum before resigning.</p>
+<p>Maximum time without a successful fetch from the current leader before becoming a candidate and triggering an election for voters; maximum time without receiving a fetch from a majority of the quorum before checking whether a newer leader epoch exists.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
 <tr><th>Default:</th><td>2000 (2 seconds)</td></tr>
@@ -154,17 +154,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="eligible.leader.replicas.enable"></a><a id="brokerconfigs_eligible.leader.replicas.enable" href="#brokerconfigs_eligible.leader.replicas.enable">eligible.leader.replicas.enable</a></h4>
-<p>Enable the Eligible leader replicas</p>
-<table><tbody>
-<tr><th>Type:</th><td>boolean</td></tr>
-<tr><th>Default:</th><td>false</td></tr>
-<tr><th>Valid Values:</th><td></td></tr>
-<tr><th>Importance:</th><td>high</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="leader.imbalance.check.interval.seconds"></a><a id="brokerconfigs_leader.imbalance.check.interval.seconds" href="#brokerconfigs_leader.imbalance.check.interval.seconds">leader.imbalance.check.interval.seconds</a></h4>
 <p>The frequency with which the partition rebalance check is triggered by the controller</p>
 <table><tbody>
@@ -1177,116 +1166,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="group.consumer.assignors"></a><a id="brokerconfigs_group.consumer.assignors" href="#brokerconfigs_group.consumer.assignors">group.consumer.assignors</a></h4>
-<p>The server side assignors as a list of full class names. The first one in the list is considered as the default assignor to be used in the case where the consumer does not specify an assignor.</p>
-<table><tbody>
-<tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>org.apache.kafka.coordinator.group.assignor.UniformAssignor,org.apache.kafka.coordinator.group.assignor.RangeAssignor</td></tr>
-<tr><th>Valid Values:</th><td></td></tr>
-<tr><th>Importance:</th><td>medium</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
-<h4><a id="group.consumer.heartbeat.interval.ms"></a><a id="brokerconfigs_group.consumer.heartbeat.interval.ms" href="#brokerconfigs_group.consumer.heartbeat.interval.ms">group.consumer.heartbeat.interval.ms</a></h4>
-<p>The heartbeat interval given to the members of a consumer group.</p>
-<table><tbody>
-<tr><th>Type:</th><td>int</td></tr>
-<tr><th>Default:</th><td>5000 (5 seconds)</td></tr>
-<tr><th>Valid Values:</th><td>[1,...]</td></tr>
-<tr><th>Importance:</th><td>medium</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
-<h4><a id="group.consumer.max.heartbeat.interval.ms"></a><a id="brokerconfigs_group.consumer.max.heartbeat.interval.ms" href="#brokerconfigs_group.consumer.max.heartbeat.interval.ms">group.consumer.max.heartbeat.interval.ms</a></h4>
-<p>The maximum heartbeat interval for registered consumers.</p>
-<table><tbody>
-<tr><th>Type:</th><td>int</td></tr>
-<tr><th>Default:</th><td>15000 (15 seconds)</td></tr>
-<tr><th>Valid Values:</th><td>[1,...]</td></tr>
-<tr><th>Importance:</th><td>medium</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
-<h4><a id="group.consumer.max.session.timeout.ms"></a><a id="brokerconfigs_group.consumer.max.session.timeout.ms" href="#brokerconfigs_group.consumer.max.session.timeout.ms">group.consumer.max.session.timeout.ms</a></h4>
-<p>The maximum allowed session timeout for registered consumers.</p>
-<table><tbody>
-<tr><th>Type:</th><td>int</td></tr>
-<tr><th>Default:</th><td>60000 (1 minute)</td></tr>
-<tr><th>Valid Values:</th><td>[1,...]</td></tr>
-<tr><th>Importance:</th><td>medium</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
-<h4><a id="group.consumer.max.size"></a><a id="brokerconfigs_group.consumer.max.size" href="#brokerconfigs_group.consumer.max.size">group.consumer.max.size</a></h4>
-<p>The maximum number of consumers that a single consumer group can accommodate.</p>
-<table><tbody>
-<tr><th>Type:</th><td>int</td></tr>
-<tr><th>Default:</th><td>2147483647</td></tr>
-<tr><th>Valid Values:</th><td>[1,...]</td></tr>
-<tr><th>Importance:</th><td>medium</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
-<h4><a id="group.consumer.min.heartbeat.interval.ms"></a><a id="brokerconfigs_group.consumer.min.heartbeat.interval.ms" href="#brokerconfigs_group.consumer.min.heartbeat.interval.ms">group.consumer.min.heartbeat.interval.ms</a></h4>
-<p>The minimum heartbeat interval for registered consumers.</p>
-<table><tbody>
-<tr><th>Type:</th><td>int</td></tr>
-<tr><th>Default:</th><td>5000 (5 seconds)</td></tr>
-<tr><th>Valid Values:</th><td>[1,...]</td></tr>
-<tr><th>Importance:</th><td>medium</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
-<h4><a id="group.consumer.min.session.timeout.ms"></a><a id="brokerconfigs_group.consumer.min.session.timeout.ms" href="#brokerconfigs_group.consumer.min.session.timeout.ms">group.consumer.min.session.timeout.ms</a></h4>
-<p>The minimum allowed session timeout for registered consumers.</p>
-<table><tbody>
-<tr><th>Type:</th><td>int</td></tr>
-<tr><th>Default:</th><td>45000 (45 seconds)</td></tr>
-<tr><th>Valid Values:</th><td>[1,...]</td></tr>
-<tr><th>Importance:</th><td>medium</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
-<h4><a id="group.consumer.session.timeout.ms"></a><a id="brokerconfigs_group.consumer.session.timeout.ms" href="#brokerconfigs_group.consumer.session.timeout.ms">group.consumer.session.timeout.ms</a></h4>
-<p>The timeout to detect client failures when using the consumer group protocol.</p>
-<table><tbody>
-<tr><th>Type:</th><td>int</td></tr>
-<tr><th>Default:</th><td>45000 (45 seconds)</td></tr>
-<tr><th>Valid Values:</th><td>[1,...]</td></tr>
-<tr><th>Importance:</th><td>medium</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
-<h4><a id="group.coordinator.rebalance.protocols"></a><a id="brokerconfigs_group.coordinator.rebalance.protocols" href="#brokerconfigs_group.coordinator.rebalance.protocols">group.coordinator.rebalance.protocols</a></h4>
-<p>The list of enabled rebalance protocols. Supported protocols: consumer,classic. The consumer rebalance protocol is in early access and therefore must not be used in production.</p>
-<table><tbody>
-<tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>classic</td></tr>
-<tr><th>Valid Values:</th><td>[consumer, classic]</td></tr>
-<tr><th>Importance:</th><td>medium</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
-<h4><a id="group.coordinator.threads"></a><a id="brokerconfigs_group.coordinator.threads" href="#brokerconfigs_group.coordinator.threads">group.coordinator.threads</a></h4>
-<p>The number of threads used by the group coordinator.</p>
-<table><tbody>
-<tr><th>Type:</th><td>int</td></tr>
-<tr><th>Default:</th><td>1</td></tr>
-<tr><th>Valid Values:</th><td>[1,...]</td></tr>
-<tr><th>Importance:</th><td>medium</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="group.initial.rebalance.delay.ms"></a><a id="brokerconfigs_group.initial.rebalance.delay.ms" href="#brokerconfigs_group.initial.rebalance.delay.ms">group.initial.rebalance.delay.ms</a></h4>
 <p>The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins.</p>
 <table><tbody>
@@ -1357,8 +1236,8 @@
 <p>Specify which version of the inter-broker protocol will be used.<br> This is typically bumped after all brokers were upgraded to a new version.<br> Examples of valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1. Check MetadataVersion for the full list.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>3.8-IV0</td></tr>
-<tr><th>Valid Values:</th><td>[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3 [...]
+<tr><th>Default:</th><td>3.6-IV2</td></tr>
+<tr><th>Valid Values:</th><td>[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2]</td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 <tr><th>Update Mode:</th><td>read-only</td></tr>
 </tbody></table>
@@ -1545,7 +1424,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
 <tr><th>Default:</th><td>3.0-IV1</td></tr>
-<tr><th>Valid Values:</th><td>[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3 [...]
+<tr><th>Valid Values:</th><td>[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2]</td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 <tr><th>Update Mode:</th><td>read-only</td></tr>
 </tbody></table>
@@ -2157,7 +2036,7 @@
 </li>
 <li>
 <h4><a id="socket.connection.setup.timeout.ms"></a><a id="brokerconfigs_socket.connection.setup.timeout.ms" href="#brokerconfigs_socket.connection.setup.timeout.ms">socket.connection.setup.timeout.ms</a></h4>
-<p>The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the <code>socket.connection.setup.timeout.max.ms</code> value.</p>
+<p>The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>10000 (10 seconds)</td></tr>
@@ -2204,7 +2083,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 <tr><th>Update Mode:</th><td>per-broker</td></tr>
@@ -2292,7 +2171,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 <tr><th>Update Mode:</th><td>per-broker</td></tr>
@@ -2531,7 +2410,7 @@
 </li>
 <li>
 <h4><a id="controller.quorum.retry.backoff.ms"></a><a id="brokerconfigs_controller.quorum.retry.backoff.ms" href="#brokerconfigs_controller.quorum.retry.backoff.ms">controller.quorum.retry.backoff.ms</a></h4>
-<p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the <code>retry.backoff.max.ms</code> value.</p>
+<p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
 <tr><th>Default:</th><td>20</td></tr>
@@ -2761,17 +2640,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="remote.log.index.file.cache.total.size.bytes"></a><a id="brokerconfigs_remote.log.index.file.cache.total.size.bytes" href="#brokerconfigs_remote.log.index.file.cache.total.size.bytes">remote.log.index.file.cache.total.size.bytes</a></h4>
-<p>The total size of the space allocated to store index files fetched from remote storage in the local storage.</p>
-<table><tbody>
-<tr><th>Type:</th><td>long</td></tr>
-<tr><th>Default:</th><td>1073741824 (1 gibibyte)</td></tr>
-<tr><th>Valid Values:</th><td>[1,...]</td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-<tr><th>Update Mode:</th><td>cluster-wide</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="remote.log.manager.task.interval.ms"></a><a id="brokerconfigs_remote.log.manager.task.interval.ms" href="#brokerconfigs_remote.log.manager.task.interval.ms">remote.log.manager.task.interval.ms</a></h4>
 <p>Interval at which remote log manager runs the scheduled tasks like copy segments, and clean up remote log segments.</p>
 <table><tbody>
@@ -2959,28 +2827,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.allow.dn.changes"></a><a id="brokerconfigs_ssl.allow.dn.changes" href="#brokerconfigs_ssl.allow.dn.changes">ssl.allow.dn.changes</a></h4>
-<p>Indicates whether changes to the certificate distinguished name should be allowed during a dynamic reconfiguration of certificates or not.</p>
-<table><tbody>
-<tr><th>Type:</th><td>boolean</td></tr>
-<tr><th>Default:</th><td>false</td></tr>
-<tr><th>Valid Values:</th><td></td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
-<h4><a id="ssl.allow.san.changes"></a><a id="brokerconfigs_ssl.allow.san.changes" href="#brokerconfigs_ssl.allow.san.changes">ssl.allow.san.changes</a></h4>
-<p>Indicates whether changes to the certificate subject alternative names should be allowed during a dynamic reconfiguration of certificates or not.</p>
-<table><tbody>
-<tr><th>Type:</th><td>boolean</td></tr>
-<tr><th>Default:</th><td>false</td></tr>
-<tr><th>Valid Values:</th><td></td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="ssl.endpoint.identification.algorithm"></a><a id="brokerconfigs_ssl.endpoint.identification.algorithm" href="#brokerconfigs_ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a></h4>
 <p>The endpoint identification algorithm to validate server hostname using server certificate. </p>
 <table><tbody>
@@ -2993,7 +2839,7 @@
 </li>
 <li>
 <h4><a id="ssl.engine.factory.class"></a><a id="brokerconfigs_ssl.engine.factory.class" href="#brokerconfigs_ssl.engine.factory.class">ssl.engine.factory.class</a></h4>
-<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connection [...]
+<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
@@ -3025,17 +2871,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="telemetry.max.bytes"></a><a id="brokerconfigs_telemetry.max.bytes" href="#brokerconfigs_telemetry.max.bytes">telemetry.max.bytes</a></h4>
-<p>The maximum size (after compression if compression is used) of telemetry metrics pushed from a client to the broker. The default value is 1048576 (1 MB).</p>
-<table><tbody>
-<tr><th>Type:</th><td>int</td></tr>
-<tr><th>Default:</th><td>1048576 (1 mebibyte)</td></tr>
-<tr><th>Valid Values:</th><td>[1,...]</td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-<tr><th>Update Mode:</th><td>read-only</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="transaction.abort.timed.out.transaction.cleanup.interval.ms"></a><a id="brokerconfigs_transaction.abort.timed.out.transaction.cleanup.interval.ms" href="#brokerconfigs_transaction.abort.timed.out.transaction.cleanup.interval.ms">transaction.abort.timed.out.transaction.cleanup.interval.ms</a></h4>
 <p>The interval at which to rollback transactions that have timed out</p>
 <table><tbody>
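
To inspect the effective values of broker settings like the ones above (for example ssl.enabled.protocols or inter.broker.protocol.version) on a running broker, one option is the Admin client's describeConfigs call. A minimal sketch, assuming a broker with id 0 reachable at localhost:9092:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class DescribeBrokerConfigSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // assumed local broker
            try (Admin admin = Admin.create(props)) {
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0"); // assumed broker id
                Config config = admin.describeConfigs(Collections.singleton(broker))
                                     .all().get().get(broker);
                System.out.println(config.get("ssl.enabled.protocols").value());
                System.out.println(config.get("inter.broker.protocol.version").value());
            }
        }
    }
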
diff --git a/37/generated/mirror_connector_config.html b/37/generated/mirror_connector_config.html
index 761a7326..37eb2142 100644
--- a/37/generated/mirror_connector_config.html
+++ b/37/generated/mirror_connector_config.html
@@ -194,7 +194,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -214,7 +214,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -541,7 +541,7 @@
 </li>
 <li>
 <h4><a id="ssl.engine.factory.class"></a><a id="mirror_connector_ssl.engine.factory.class" href="#mirror_connector_ssl.engine.factory.class">ssl.engine.factory.class</a></h4>
-<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connection [...]
+<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
@@ -615,7 +615,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor</td></tr>
+<tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
@@ -625,7 +625,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor</td></tr>
+<tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
@@ -635,7 +635,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>A concrete subclass of org.apache.kafka.connect.storage.HeaderConverter, A class with a public, no-argument constructor</td></tr>
+<tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
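
The converter entries above (key.converter, value.converter, header.converter) take the fully qualified name of a converter class with a public no-argument constructor. A minimal sketch of a MirrorMaker 2 source connector configuration that sets them explicitly, expressed as the key/value pairs one would hand to a Connect worker; the cluster aliases, bootstrap addresses and topics pattern are placeholders:

    import java.util.HashMap;
    import java.util.Map;

    public class MirrorConnectorConfigSketch {
        public static void main(String[] args) {
            Map<String, String> props = new HashMap<>();
            props.put("connector.class", "org.apache.kafka.connect.mirror.MirrorSourceConnector");
            props.put("source.cluster.alias", "primary");                    // placeholder alias
            props.put("target.cluster.alias", "backup");                     // placeholder alias
            props.put("source.cluster.bootstrap.servers", "primary:9092");   // placeholder address
            props.put("target.cluster.bootstrap.servers", "backup:9092");    // placeholder address
            props.put("topics", ".*");                                       // replicate all topics
            // Concrete converter implementations with public no-arg constructors.
            props.put("key.converter", "org.apache.kafka.connect.converters.ByteArrayConverter");
            props.put("value.converter", "org.apache.kafka.connect.converters.ByteArrayConverter");
            props.put("header.converter", "org.apache.kafka.connect.storage.SimpleHeaderConverter");
            props.forEach((k, v) -> System.out.println(k + "=" + v));
        }
    }
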
diff --git a/37/generated/producer_config.html b/37/generated/producer_config.html
index f39f2f74..6b3fd448 100644
--- a/37/generated/producer_config.html
+++ b/37/generated/producer_config.html
@@ -371,7 +371,7 @@
 </li>
 <li>
 <h4><a id="socket.connection.setup.timeout.ms"></a><a id="producerconfigs_socket.connection.setup.timeout.ms" href="#producerconfigs_socket.connection.setup.timeout.ms">socket.connection.setup.timeout.ms</a></h4>
-<p>The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the <code>socket.connection.setup.timeout.max.ms</code> value.</p>
+<p>The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>10000 (10 seconds)</td></tr>
@@ -384,7 +384,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -404,7 +404,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -460,16 +460,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="enable.metrics.push"></a><a id="producerconfigs_enable.metrics.push" href="#producerconfigs_enable.metrics.push">enable.metrics.push</a></h4>
-<p>Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client.</p>
-<table><tbody>
-<tr><th>Type:</th><td>boolean</td></tr>
-<tr><th>Default:</th><td>true</td></tr>
-<tr><th>Valid Values:</th><td></td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="interceptor.classes"></a><a id="producerconfigs_interceptor.classes" href="#producerconfigs_interceptor.classes">interceptor.classes</a></h4>
 <p>A list of classes to use as interceptors. Implementing the <code>org.apache.kafka.clients.producer.ProducerInterceptor</code> interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors.</p>
 <table><tbody>
@@ -581,7 +571,7 @@
 </li>
 <li>
 <h4><a id="reconnect.backoff.ms"></a><a id="producerconfigs_reconnect.backoff.ms" href="#producerconfigs_reconnect.backoff.ms">reconnect.backoff.ms</a></h4>
-<p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the <code>reconnect.backoff.max.ms</code> value.</p>
+<p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>50</td></tr>
@@ -590,18 +580,8 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="retry.backoff.max.ms"></a><a id="producerconfigs_retry.backoff.max.ms" href="#producerconfigs_retry.backoff.max.ms">retry.backoff.max.ms</a></h4>
-<p>The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If <code>retry.backoff.ms</code [...]
-<table><tbody>
-<tr><th>Type:</th><td>long</td></tr>
-<tr><th>Default:</th><td>1000 (1 second)</td></tr>
-<tr><th>Valid Values:</th><td>[0,...]</td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="retry.backoff.ms"></a><a id="producerconfigs_retry.backoff.ms" href="#producerconfigs_retry.backoff.ms">retry.backoff.ms</a></h4>
-<p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the <code>retry.backoff.max.ms</code> value.</p>
+<p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>100</td></tr>
@@ -841,7 +821,7 @@
 </li>
 <li>
 <h4><a id="ssl.engine.factory.class"></a><a id="producerconfigs_ssl.engine.factory.class" href="#producerconfigs_ssl.engine.factory.class">ssl.engine.factory.class</a></h4>
-<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connection [...]
+<p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
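
The producer shares the reconnect.backoff.ms, retry.backoff.ms and socket.connection.setup.timeout.ms semantics described above. A minimal sketch with those values set explicitly; the broker address and topic name are assumptions:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ProducerBackoffSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // assumed local broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("reconnect.backoff.ms", "50");                    // initial reconnect backoff
            props.put("retry.backoff.ms", "100");                       // initial retry backoff
            props.put("socket.connection.setup.timeout.ms", "10000");   // connection setup timeout
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("demo-topic", "key", "value")).get(); // hypothetical topic
            }
        }
    }
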
diff --git a/37/generated/protocol_api_keys.html b/37/generated/protocol_api_keys.html
index 0a5a3cbf..aaa7f201 100644
--- a/37/generated/protocol_api_keys.html
+++ b/37/generated/protocol_api_keys.html
@@ -127,13 +127,5 @@
 <td><a href="#The_Messages_AllocateProducerIds">AllocateProducerIds</a></td><td>67</td></tr>
 <tr>
 <td><a href="#The_Messages_ConsumerGroupHeartbeat">ConsumerGroupHeartbeat</a></td><td>68</td></tr>
-<tr>
-<td><a href="#The_Messages_ConsumerGroupDescribe">ConsumerGroupDescribe</a></td><td>69</td></tr>
-<tr>
-<td><a href="#The_Messages_GetTelemetrySubscriptions">GetTelemetrySubscriptions</a></td><td>71</td></tr>
-<tr>
-<td><a href="#The_Messages_PushTelemetry">PushTelemetry</a></td><td>72</td></tr>
-<tr>
-<td><a href="#The_Messages_ListClientMetricsResources">ListClientMetricsResources</a></td><td>74</td></tr>
 </tbody></table>
 
diff --git a/37/generated/protocol_errors.html b/37/generated/protocol_errors.html
index 33ad1e12..4a2f284a 100644
--- a/37/generated/protocol_errors.html
+++ b/37/generated/protocol_errors.html
@@ -119,11 +119,5 @@
 <tr><td>UNRELEASED_INSTANCE_ID</td><td>111</td><td>False</td><td>The instance ID is still used by another member in the consumer group. That member must leave first.</td></tr>
 <tr><td>UNSUPPORTED_ASSIGNOR</td><td>112</td><td>False</td><td>The assignor or its version range is not supported by the consumer group.</td></tr>
 <tr><td>STALE_MEMBER_EPOCH</td><td>113</td><td>False</td><td>The member epoch is stale. The member must retry after receiving its updated member epoch via the ConsumerGroupHeartbeat API.</td></tr>
-<tr><td>MISMATCHED_ENDPOINT_TYPE</td><td>114</td><td>False</td><td>The request was sent to an endpoint of the wrong type.</td></tr>
-<tr><td>UNSUPPORTED_ENDPOINT_TYPE</td><td>115</td><td>False</td><td>This endpoint type is not supported yet.</td></tr>
-<tr><td>UNKNOWN_CONTROLLER_ID</td><td>116</td><td>False</td><td>This controller ID is not known.</td></tr>
-<tr><td>UNKNOWN_SUBSCRIPTION_ID</td><td>117</td><td>False</td><td>Client sent a push telemetry request with an invalid or outdated subscription ID.</td></tr>
-<tr><td>TELEMETRY_TOO_LARGE</td><td>118</td><td>False</td><td>Client sent a push telemetry request larger than the maximum size the broker will accept.</td></tr>
-<tr><td>INVALID_REGISTRATION</td><td>119</td><td>False</td><td>The controller has considered the broker registration to be invalid.</td></tr>
 </tbody></table>
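
On the Java clients, the error codes in the table above surface as typed exceptions under org.apache.kafka.common.errors, and the retriable ones extend RetriableException. A minimal sketch of distinguishing the two when waiting on a produce result; the broker address and topic are assumptions:

    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.RetriableException;

    public class ErrorHandlingSketch {
        public static void main(String[] args) throws InterruptedException {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // assumed local broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("demo-topic", "key", "value")).get();
            } catch (ExecutionException e) {
                // Retriable broker errors extend RetriableException; others are treated as fatal here.
                if (e.getCause() instanceof RetriableException) {
                    System.err.println("Transient error, safe to retry: " + e.getCause());
                } else {
                    System.err.println("Non-retriable error: " + e.getCause());
                }
            }
        }
    }
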
 
diff --git a/37/generated/protocol_messages.html b/37/generated/protocol_messages.html
index 0035a2ea..4a2a40a0 100644
--- a/37/generated/protocol_messages.html
+++ b/37/generated/protocol_messages.html
@@ -372,42 +372,6 @@
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 </tbody></table>
 </div>
-<div><pre>Produce Request (Version: 10) => transactional_id acks timeout_ms [topic_data] TAG_BUFFER 
-  transactional_id => COMPACT_NULLABLE_STRING
-  acks => INT16
-  timeout_ms => INT32
-  topic_data => name [partition_data] TAG_BUFFER 
-    name => COMPACT_STRING
-    partition_data => index records TAG_BUFFER 
-      index => INT32
-      records => COMPACT_RECORDS
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>transactional_id</td><td>The transactional ID, or null if the producer is not transactional.</td></tr>
-<tr>
-<td>acks</td><td>The number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments, 1 for only the leader and -1 for the full ISR.</td></tr>
-<tr>
-<td>timeout_ms</td><td>The timeout to await a response in milliseconds.</td></tr>
-<tr>
-<td>topic_data</td><td>Each topic to produce to.</td></tr>
-<tr>
-<td>name</td><td>The topic name.</td></tr>
-<tr>
-<td>partition_data</td><td>Each partition to produce to.</td></tr>
-<tr>
-<td>index</td><td>The partition index.</td></tr>
-<tr>
-<td>records</td><td>The record data to be produced.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
 <b>Responses:</b><br>
 <div><pre>Produce Response (Version: 0) => [responses] 
   responses => name [partition_responses] 
@@ -747,59 +711,6 @@
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 </tbody></table>
 </div>
-<div><pre>Produce Response (Version: 10) => [responses] throttle_time_ms TAG_BUFFER 
-  responses => name [partition_responses] TAG_BUFFER 
-    name => COMPACT_STRING
-    partition_responses => index error_code base_offset log_append_time_ms log_start_offset [record_errors] error_message TAG_BUFFER 
-      index => INT32
-      error_code => INT16
-      base_offset => INT64
-      log_append_time_ms => INT64
-      log_start_offset => INT64
-      record_errors => batch_index batch_index_error_message TAG_BUFFER 
-        batch_index => INT32
-        batch_index_error_message => COMPACT_NULLABLE_STRING
-      error_message => COMPACT_NULLABLE_STRING
-  throttle_time_ms => INT32
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>responses</td><td>Each produce response</td></tr>
-<tr>
-<td>name</td><td>The topic name</td></tr>
-<tr>
-<td>partition_responses</td><td>Each partition that we produced to within the topic.</td></tr>
-<tr>
-<td>index</td><td>The partition index.</td></tr>
-<tr>
-<td>error_code</td><td>The error code, or 0 if there was no error.</td></tr>
-<tr>
-<td>base_offset</td><td>The base offset.</td></tr>
-<tr>
-<td>log_append_time_ms</td><td>The timestamp returned by broker after appending the messages. If CreateTime is used for the topic, the timestamp will be -1.  If LogAppendTime is used for the topic, the timestamp will be the broker local time when the messages are appended.</td></tr>
-<tr>
-<td>log_start_offset</td><td>The log start offset.</td></tr>
-<tr>
-<td>record_errors</td><td>The batch indices of records that caused the batch to be dropped</td></tr>
-<tr>
-<td>batch_index</td><td>The batch index of the record that cause the batch to be dropped</td></tr>
-<tr>
-<td>batch_index_error_message</td><td>The error message of the record that caused the batch to be dropped</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>error_message</td><td>The global error message summarizing the common root cause of the records that caused the batch to be dropped</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>throttle_time_ms</td><td>The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
 <h5><a name="The_Messages_Fetch">Fetch API (Key: 1):</a></h5>
 
 <b>Requests:</b><br>
@@ -1651,77 +1562,6 @@
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 </tbody></table>
 </div>
-<div><pre>Fetch Request (Version: 16) => max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] rack_id TAG_BUFFER 
-  max_wait_ms => INT32
-  min_bytes => INT32
-  max_bytes => INT32
-  isolation_level => INT8
-  session_id => INT32
-  session_epoch => INT32
-  topics => topic_id [partitions] TAG_BUFFER 
-    topic_id => UUID
-    partitions => partition current_leader_epoch fetch_offset last_fetched_epoch log_start_offset partition_max_bytes TAG_BUFFER 
-      partition => INT32
-      current_leader_epoch => INT32
-      fetch_offset => INT64
-      last_fetched_epoch => INT32
-      log_start_offset => INT64
-      partition_max_bytes => INT32
-  forgotten_topics_data => topic_id [partitions] TAG_BUFFER 
-    topic_id => UUID
-    partitions => INT32
-  rack_id => COMPACT_STRING
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>max_wait_ms</td><td>The maximum time in milliseconds to wait for the response.</td></tr>
-<tr>
-<td>min_bytes</td><td>The minimum bytes to accumulate in the response.</td></tr>
-<tr>
-<td>max_bytes</td><td>The maximum bytes to fetch.  See KIP-74 for cases where this limit may not be honored.</td></tr>
-<tr>
-<td>isolation_level</td><td>This setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to [...]
-<tr>
-<td>session_id</td><td>The fetch session ID.</td></tr>
-<tr>
-<td>session_epoch</td><td>The fetch session epoch, which is used for ordering requests in a session.</td></tr>
-<tr>
-<td>topics</td><td>The topics to fetch.</td></tr>
-<tr>
-<td>topic_id</td><td>The unique topic ID</td></tr>
-<tr>
-<td>partitions</td><td>The partitions to fetch.</td></tr>
-<tr>
-<td>partition</td><td>The partition index.</td></tr>
-<tr>
-<td>current_leader_epoch</td><td>The current leader epoch of the partition.</td></tr>
-<tr>
-<td>fetch_offset</td><td>The message offset.</td></tr>
-<tr>
-<td>last_fetched_epoch</td><td>The epoch of the last fetched record or -1 if there is none</td></tr>
-<tr>
-<td>log_start_offset</td><td>The earliest available offset of the follower replica.  The field is only used when the request is sent by the follower.</td></tr>
-<tr>
-<td>partition_max_bytes</td><td>The maximum bytes to fetch from this partition.  See KIP-74 for cases where this limit may not be honored.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>forgotten_topics_data</td><td>In an incremental fetch request, the partitions to remove.</td></tr>
-<tr>
-<td>topic_id</td><td>The unique topic ID</td></tr>
-<tr>
-<td>partitions</td><td>The partitions indexes to forget.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>rack_id</td><td>Rack ID of the consumer making this request</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
 <b>Responses:</b><br>
 <div><pre>Fetch Response (Version: 0) => [responses] 
   responses => topic [partitions] 
@@ -2478,68 +2318,6 @@
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 </tbody></table>
 </div>
-<div><pre>Fetch Response (Version: 16) => throttle_time_ms error_code session_id [responses] TAG_BUFFER 
-  throttle_time_ms => INT32
-  error_code => INT16
-  session_id => INT32
-  responses => topic_id [partitions] TAG_BUFFER 
-    topic_id => UUID
-    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] preferred_read_replica records TAG_BUFFER 
-      partition_index => INT32
-      error_code => INT16
-      high_watermark => INT64
-      last_stable_offset => INT64
-      log_start_offset => INT64
-      aborted_transactions => producer_id first_offset TAG_BUFFER 
-        producer_id => INT64
-        first_offset => INT64
-      preferred_read_replica => INT32
-      records => COMPACT_RECORDS
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>throttle_time_ms</td><td>The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.</td></tr>
-<tr>
-<td>error_code</td><td>The top level response error code.</td></tr>
-<tr>
-<td>session_id</td><td>The fetch session ID, or 0 if this is not part of a fetch session.</td></tr>
-<tr>
-<td>responses</td><td>The response topics.</td></tr>
-<tr>
-<td>topic_id</td><td>The unique topic ID</td></tr>
-<tr>
-<td>partitions</td><td>The topic partitions.</td></tr>
-<tr>
-<td>partition_index</td><td>The partition index.</td></tr>
-<tr>
-<td>error_code</td><td>The error code, or 0 if there was no fetch error.</td></tr>
-<tr>
-<td>high_watermark</td><td>The current high water mark.</td></tr>
-<tr>
-<td>last_stable_offset</td><td>The last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset have been decided (ABORTED or COMMITTED)</td></tr>
-<tr>
-<td>log_start_offset</td><td>The current log start offset.</td></tr>
-<tr>
-<td>aborted_transactions</td><td>The aborted transactions.</td></tr>
-<tr>
-<td>producer_id</td><td>The producer id associated with the aborted transaction.</td></tr>
-<tr>
-<td>first_offset</td><td>The first offset in the aborted transaction.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>preferred_read_replica</td><td>The preferred read replica for the consumer to use on its next fetch request</td></tr>
-<tr>
-<td>records</td><td>The record data.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
 <h5><a name="The_Messages_ListOffsets">ListOffsets API (Key: 2):</a></h5>
 
 <b>Requests:</b><br>
@@ -6137,7 +5915,7 @@
 </tr><tr>
 <td>group_id</td><td>The unique group identifier.</td></tr>
 <tr>
-<td>generation_id_or_member_epoch</td><td>The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.</td></tr>
+<td>generation_id_or_member_epoch</td><td>The generation of the group if using the generic group protocol or the member epoch if using the consumer protocol.</td></tr>
 <tr>
 <td>member_id</td><td>The member ID assigned by the group coordinator.</td></tr>
 <tr>
@@ -6173,7 +5951,7 @@
 </tr><tr>
 <td>group_id</td><td>The unique group identifier.</td></tr>
 <tr>
-<td>generation_id_or_member_epoch</td><td>The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.</td></tr>
+<td>generation_id_or_member_epoch</td><td>The generation of the group if using the generic group protocol or the member epoch if using the consumer protocol.</td></tr>
 <tr>
 <td>member_id</td><td>The member ID assigned by the group coordinator.</td></tr>
 <tr>
@@ -6209,7 +5987,7 @@
 </tr><tr>
 <td>group_id</td><td>The unique group identifier.</td></tr>
 <tr>
-<td>generation_id_or_member_epoch</td><td>The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.</td></tr>
+<td>generation_id_or_member_epoch</td><td>The generation of the group if using the generic group protocol or the member epoch if using the consumer protocol.</td></tr>
 <tr>
 <td>member_id</td><td>The member ID assigned by the group coordinator.</td></tr>
 <tr>
@@ -6245,7 +6023,7 @@
 </tr><tr>
 <td>group_id</td><td>The unique group identifier.</td></tr>
 <tr>
-<td>generation_id_or_member_epoch</td><td>The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.</td></tr>
+<td>generation_id_or_member_epoch</td><td>The generation of the group if using the generic group protocol or the member epoch if using the consumer protocol.</td></tr>
 <tr>
 <td>member_id</td><td>The member ID assigned by the group coordinator.</td></tr>
 <tr>
@@ -6280,7 +6058,7 @@
 </tr><tr>
 <td>group_id</td><td>The unique group identifier.</td></tr>
 <tr>
-<td>generation_id_or_member_epoch</td><td>The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.</td></tr>
+<td>generation_id_or_member_epoch</td><td>The generation of the group if using the generic group protocol or the member epoch if using the consumer protocol.</td></tr>
 <tr>
 <td>member_id</td><td>The member ID assigned by the group coordinator.</td></tr>
 <tr>
@@ -6314,7 +6092,7 @@
 </tr><tr>
 <td>group_id</td><td>The unique group identifier.</td></tr>
 <tr>
-<td>generation_id_or_member_epoch</td><td>The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.</td></tr>
+<td>generation_id_or_member_epoch</td><td>The generation of the group if using the generic group protocol or the member epoch if using the consumer protocol.</td></tr>
 <tr>
 <td>member_id</td><td>The member ID assigned by the group coordinator.</td></tr>
 <tr>
@@ -6351,7 +6129,7 @@
 </tr><tr>
 <td>group_id</td><td>The unique group identifier.</td></tr>
 <tr>
-<td>generation_id_or_member_epoch</td><td>The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.</td></tr>
+<td>generation_id_or_member_epoch</td><td>The generation of the group if using the generic group protocol or the member epoch if using the consumer protocol.</td></tr>
 <tr>
 <td>member_id</td><td>The member ID assigned by the group coordinator.</td></tr>
 <tr>
@@ -6390,7 +6168,7 @@
 </tr><tr>
 <td>group_id</td><td>The unique group identifier.</td></tr>
 <tr>
-<td>generation_id_or_member_epoch</td><td>The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.</td></tr>
+<td>generation_id_or_member_epoch</td><td>The generation of the group if using the generic group protocol or the member epoch if using the consumer protocol.</td></tr>
 <tr>
 <td>member_id</td><td>The member ID assigned by the group coordinator.</td></tr>
 <tr>
@@ -6435,7 +6213,7 @@
 </tr><tr>
 <td>group_id</td><td>The unique group identifier.</td></tr>
 <tr>
-<td>generation_id_or_member_epoch</td><td>The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.</td></tr>
+<td>generation_id_or_member_epoch</td><td>The generation of the group if using the generic group protocol or the member epoch if using the consumer protocol.</td></tr>
 <tr>
 <td>member_id</td><td>The member ID assigned by the group coordinator.</td></tr>
 <tr>
@@ -6894,42 +6672,6 @@
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 </tbody></table>
 </div>
-<div><pre>OffsetFetch Request (Version: 9) => [groups] require_stable TAG_BUFFER 
-  groups => group_id member_id member_epoch [topics] TAG_BUFFER 
-    group_id => COMPACT_STRING
-    member_id => COMPACT_NULLABLE_STRING
-    member_epoch => INT32
-    topics => name [partition_indexes] TAG_BUFFER 
-      name => COMPACT_STRING
-      partition_indexes => INT32
-  require_stable => BOOLEAN
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>groups</td><td>Each group we would like to fetch offsets for</td></tr>
-<tr>
-<td>group_id</td><td>The group ID.</td></tr>
-<tr>
-<td>member_id</td><td>The member ID assigned by the group coordinator if using the new consumer protocol (KIP-848).</td></tr>
-<tr>
-<td>member_epoch</td><td>The member epoch if using the new consumer protocol (KIP-848).</td></tr>
-<tr>
-<td>topics</td><td>Each topic we would like to fetch offsets for, or null to fetch offsets for all topics.</td></tr>
-<tr>
-<td>name</td><td>The topic name.</td></tr>
-<tr>
-<td>partition_indexes</td><td>The partition indexes we would like to fetch offsets for.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>require_stable</td><td>Whether broker should hold on returning unstable offsets but set a retriable error code for the partitions.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
 <b>Responses:</b><br>
 <div><pre>OffsetFetch Response (Version: 0) => [topics] 
   topics => name [partitions] 
@@ -7156,74 +6898,26 @@
 <tr>
 <td>error_code</td><td>The top-level error code, or 0 if there was no error.</td></tr>
 <tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
-<div><pre>OffsetFetch Response (Version: 7) => throttle_time_ms [topics] error_code TAG_BUFFER 
-  throttle_time_ms => INT32
-  topics => name [partitions] TAG_BUFFER 
-    name => COMPACT_STRING
-    partitions => partition_index committed_offset committed_leader_epoch metadata error_code TAG_BUFFER 
-      partition_index => INT32
-      committed_offset => INT64
-      committed_leader_epoch => INT32
-      metadata => COMPACT_NULLABLE_STRING
-      error_code => INT16
-  error_code => INT16
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>throttle_time_ms</td><td>The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.</td></tr>
-<tr>
-<td>topics</td><td>The responses per topic.</td></tr>
-<tr>
-<td>name</td><td>The topic name.</td></tr>
-<tr>
-<td>partitions</td><td>The responses per partition</td></tr>
-<tr>
-<td>partition_index</td><td>The partition index.</td></tr>
-<tr>
-<td>committed_offset</td><td>The committed message offset.</td></tr>
-<tr>
-<td>committed_leader_epoch</td><td>The leader epoch.</td></tr>
-<tr>
-<td>metadata</td><td>The partition metadata.</td></tr>
-<tr>
-<td>error_code</td><td>The error code, or 0 if there was no error.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>error_code</td><td>The top-level error code, or 0 if there was no error.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
-<div><pre>OffsetFetch Response (Version: 8) => throttle_time_ms [groups] TAG_BUFFER 
-  throttle_time_ms => INT32
-  groups => group_id [topics] error_code TAG_BUFFER 
-    group_id => COMPACT_STRING
-    topics => name [partitions] TAG_BUFFER 
-      name => COMPACT_STRING
-      partitions => partition_index committed_offset committed_leader_epoch metadata error_code TAG_BUFFER 
-        partition_index => INT32
-        committed_offset => INT64
-        committed_leader_epoch => INT32
-        metadata => COMPACT_NULLABLE_STRING
-        error_code => INT16
-    error_code => INT16
+<td>_tagged_fields</td><td>The tagged fields</td></tr>
+</tbody></table>
+</div>
+<div><pre>OffsetFetch Response (Version: 7) => throttle_time_ms [topics] error_code TAG_BUFFER 
+  throttle_time_ms => INT32
+  topics => name [partitions] TAG_BUFFER 
+    name => COMPACT_STRING
+    partitions => partition_index committed_offset committed_leader_epoch metadata error_code TAG_BUFFER 
+      partition_index => INT32
+      committed_offset => INT64
+      committed_leader_epoch => INT32
+      metadata => COMPACT_NULLABLE_STRING
+      error_code => INT16
+  error_code => INT16
 </pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
 </tr><tr>
 <td>throttle_time_ms</td><td>The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.</td></tr>
 <tr>
-<td>groups</td><td>The responses per group id.</td></tr>
-<tr>
-<td>group_id</td><td>The group ID.</td></tr>
-<tr>
 <td>topics</td><td>The responses per topic.</td></tr>
 <tr>
 <td>name</td><td>The topic name.</td></tr>
@@ -7238,20 +6932,18 @@
 <tr>
 <td>metadata</td><td>The partition metadata.</td></tr>
 <tr>
-<td>error_code</td><td>The partition-level error code, or 0 if there was no error.</td></tr>
+<td>error_code</td><td>The error code, or 0 if there was no error.</td></tr>
 <tr>
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 <tr>
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 <tr>
-<td>error_code</td><td>The group-level error code, or 0 if there was no error.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
+<td>error_code</td><td>The top-level error code, or 0 if there was no error.</td></tr>
 <tr>
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 </tbody></table>
 </div>
-<div><pre>OffsetFetch Response (Version: 9) => throttle_time_ms [groups] TAG_BUFFER 
+<div><pre>OffsetFetch Response (Version: 8) => throttle_time_ms [groups] TAG_BUFFER 
   throttle_time_ms => INT32
   groups => group_id [topics] error_code TAG_BUFFER 
     group_id => COMPACT_STRING
@@ -16458,20 +16150,6 @@
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 </tbody></table>
 </div>
-<div><pre>DescribeCluster Request (Version: 1) => include_cluster_authorized_operations endpoint_type TAG_BUFFER 
-  include_cluster_authorized_operations => BOOLEAN
-  endpoint_type => INT8
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>include_cluster_authorized_operations</td><td>Whether to include cluster authorized operations.</td></tr>
-<tr>
-<td>endpoint_type</td><td>The endpoint type to describe. 1=brokers, 2=controllers.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
 <b>Responses:</b><br>
 <div><pre>DescribeCluster Response (Version: 0) => throttle_time_ms error_code error_message cluster_id controller_id [brokers] cluster_authorized_operations TAG_BUFFER 
   throttle_time_ms => INT32
@@ -16516,52 +16194,6 @@
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 </tbody></table>
 </div>
-<div><pre>DescribeCluster Response (Version: 1) => throttle_time_ms error_code error_message endpoint_type cluster_id controller_id [brokers] cluster_authorized_operations TAG_BUFFER 
-  throttle_time_ms => INT32
-  error_code => INT16
-  error_message => COMPACT_NULLABLE_STRING
-  endpoint_type => INT8
-  cluster_id => COMPACT_STRING
-  controller_id => INT32
-  brokers => broker_id host port rack TAG_BUFFER 
-    broker_id => INT32
-    host => COMPACT_STRING
-    port => INT32
-    rack => COMPACT_NULLABLE_STRING
-  cluster_authorized_operations => INT32
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>throttle_time_ms</td><td>The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.</td></tr>
-<tr>
-<td>error_code</td><td>The top-level error code, or 0 if there was no error</td></tr>
-<tr>
-<td>error_message</td><td>The top-level error message, or null if there was no error.</td></tr>
-<tr>
-<td>endpoint_type</td><td>The endpoint type that was described. 1=brokers, 2=controllers.</td></tr>
-<tr>
-<td>cluster_id</td><td>The cluster ID that responding broker belongs to.</td></tr>
-<tr>
-<td>controller_id</td><td>The ID of the controller broker.</td></tr>
-<tr>
-<td>brokers</td><td>Each broker in the response.</td></tr>
-<tr>
-<td>broker_id</td><td>The broker ID.</td></tr>
-<tr>
-<td>host</td><td>The broker hostname.</td></tr>
-<tr>
-<td>port</td><td>The broker port.</td></tr>
-<tr>
-<td>rack</td><td>The rack of the broker, or null if it has not been assigned to a rack.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>cluster_authorized_operations</td><td>32-bit bitfield to represent authorized operations for this cluster.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
 <h5><a name="The_Messages_DescribeProducers">DescribeProducers API (Key: 61):</a></h5>
 
 <b>Requests:</b><br>
@@ -16826,7 +16458,7 @@
 <h5><a name="The_Messages_ConsumerGroupHeartbeat">ConsumerGroupHeartbeat API (Key: 68):</a></h5>
 
 <b>Requests:</b><br>
-<div><pre>ConsumerGroupHeartbeat Request (Version: 0) => group_id member_id member_epoch instance_id rack_id rebalance_timeout_ms [subscribed_topic_names] server_assignor [topic_partitions] TAG_BUFFER 
+<div><pre>ConsumerGroupHeartbeat Request (Version: 0) => group_id member_id member_epoch instance_id rack_id rebalance_timeout_ms [subscribed_topic_names] subscribed_topic_regex server_assignor [client_assignors] [topic_partitions] TAG_BUFFER 
   group_id => COMPACT_STRING
   member_id => COMPACT_STRING
   member_epoch => INT32
@@ -16834,7 +16466,15 @@
   rack_id => COMPACT_NULLABLE_STRING
   rebalance_timeout_ms => INT32
   subscribed_topic_names => COMPACT_STRING
+  subscribed_topic_regex => COMPACT_NULLABLE_STRING
   server_assignor => COMPACT_NULLABLE_STRING
+  client_assignors => name minimum_version maximum_version reason metadata_version metadata_bytes TAG_BUFFER 
+    name => COMPACT_STRING
+    minimum_version => INT16
+    maximum_version => INT16
+    reason => INT8
+    metadata_version => INT16
+    metadata_bytes => COMPACT_BYTES
   topic_partitions => topic_id [partitions] TAG_BUFFER 
     topic_id => UUID
     partitions => INT32
@@ -16852,12 +16492,30 @@
 <tr>
 <td>rack_id</td><td>null if not provided or if it didn't change since the last heartbeat; the rack ID of consumer otherwise.</td></tr>
 <tr>
-<td>rebalance_timeout_ms</td><td>-1 if it didn't change since the last heartbeat; the maximum time in milliseconds that the coordinator will wait on the member to revoke its partitions otherwise.</td></tr>
+<td>rebalance_timeout_ms</td><td>-1 if it didn't chance since the last heartbeat; the maximum time in milliseconds that the coordinator will wait on the member to revoke its partitions otherwise.</td></tr>
 <tr>
 <td>subscribed_topic_names</td><td>null if it didn't change since the last heartbeat; the subscribed topic names otherwise.</td></tr>
 <tr>
+<td>subscribed_topic_regex</td><td>null if it didn't change since the last heartbeat; the subscribed topic regex otherwise</td></tr>
+<tr>
 <td>server_assignor</td><td>null if not used or if it didn't change since the last heartbeat; the server side assignor to use otherwise.</td></tr>
 <tr>
+<td>client_assignors</td><td>null if not used or if it didn't change since the last heartbeat; the list of client-side assignors otherwise.</td></tr>
+<tr>
+<td>name</td><td>The name of the assignor.</td></tr>
+<tr>
+<td>minimum_version</td><td>The minimum supported version for the metadata.</td></tr>
+<tr>
+<td>maximum_version</td><td>The maximum supported version for the metadata.</td></tr>
+<tr>
+<td>reason</td><td>The reason of the metadata update.</td></tr>
+<tr>
+<td>metadata_version</td><td>The version of the metadata.</td></tr>
+<tr>
+<td>metadata_bytes</td><td>The metadata.</td></tr>
+<tr>
+<td>_tagged_fields</td><td>The tagged fields</td></tr>
+<tr>
 <td>topic_partitions</td><td>null if it didn't change since the last heartbeat; the partitions owned by the member.</td></tr>
 <tr>
 <td>topic_id</td><td>The topic ID.</td></tr>
@@ -16870,17 +16528,24 @@
 </tbody></table>
 </div>
 <b>Responses:</b><br>
-<div><pre>ConsumerGroupHeartbeat Response (Version: 0) => throttle_time_ms error_code error_message member_id member_epoch heartbeat_interval_ms assignment TAG_BUFFER 
+<div><pre>ConsumerGroupHeartbeat Response (Version: 0) => throttle_time_ms error_code error_message member_id member_epoch should_compute_assignment heartbeat_interval_ms assignment TAG_BUFFER 
   throttle_time_ms => INT32
   error_code => INT16
   error_message => COMPACT_NULLABLE_STRING
   member_id => COMPACT_NULLABLE_STRING
   member_epoch => INT32
+  should_compute_assignment => BOOLEAN
   heartbeat_interval_ms => INT32
-  assignment => [topic_partitions] TAG_BUFFER 
-    topic_partitions => topic_id [partitions] TAG_BUFFER 
+  assignment => error [assigned_topic_partitions] [pending_topic_partitions] metadata_version metadata_bytes TAG_BUFFER 
+    error => INT8
+    assigned_topic_partitions => topic_id [partitions] TAG_BUFFER 
+      topic_id => UUID
+      partitions => INT32
+    pending_topic_partitions => topic_id [partitions] TAG_BUFFER 
       topic_id => UUID
       partitions => INT32
+    metadata_version => INT16
+    metadata_bytes => COMPACT_BYTES
 </pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -16895,267 +16560,27 @@
 <tr>
 <td>member_epoch</td><td>The member epoch.</td></tr>
 <tr>
+<td>should_compute_assignment</td><td>True if the member should compute the assignment for the group.</td></tr>
+<tr>
 <td>heartbeat_interval_ms</td><td>The heartbeat interval in milliseconds.</td></tr>
 <tr>
 <td>assignment</td><td>null if not provided; the assignment otherwise.</td></tr>
 <tr>
-<td>topic_partitions</td><td>The partitions assigned to the member that can be used immediately.</td></tr>
-<tr>
-<td>topic_id</td><td>The topic ID.</td></tr>
-<tr>
-<td>partitions</td><td>The partitions.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
-<h5><a name="The_Messages_ConsumerGroupDescribe">ConsumerGroupDescribe API (Key: 69):</a></h5>
-
-<b>Requests:</b><br>
-<div><pre>ConsumerGroupDescribe Request (Version: 0) => [group_ids] include_authorized_operations TAG_BUFFER 
-  group_ids => COMPACT_STRING
-  include_authorized_operations => BOOLEAN
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>group_ids</td><td>The ids of the groups to describe</td></tr>
-<tr>
-<td>include_authorized_operations</td><td>Whether to include authorized operations.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
-<b>Responses:</b><br>
-<div><pre>ConsumerGroupDescribe Response (Version: 0) => throttle_time_ms [groups] TAG_BUFFER 
-  throttle_time_ms => INT32
-  groups => error_code error_message group_id group_state group_epoch assignment_epoch assignor_name [members] authorized_operations TAG_BUFFER 
-    error_code => INT16
-    error_message => COMPACT_NULLABLE_STRING
-    group_id => COMPACT_STRING
-    group_state => COMPACT_STRING
-    group_epoch => INT32
-    assignment_epoch => INT32
-    assignor_name => COMPACT_STRING
-    members => member_id instance_id rack_id member_epoch client_id client_host [subscribed_topic_names] subscribed_topic_regex assignment target_assignment TAG_BUFFER 
-      member_id => COMPACT_STRING
-      instance_id => COMPACT_NULLABLE_STRING
-      rack_id => COMPACT_NULLABLE_STRING
-      member_epoch => INT32
-      client_id => COMPACT_STRING
-      client_host => COMPACT_STRING
-      subscribed_topic_names => COMPACT_STRING
-      subscribed_topic_regex => COMPACT_NULLABLE_STRING
-      assignment => [topic_partitions] error metadata_version metadata_bytes TAG_BUFFER 
-        topic_partitions => topic_id topic_name [partitions] TAG_BUFFER 
-          topic_id => UUID
-          topic_name => COMPACT_STRING
-          partitions => INT32
-        error => INT8
-        metadata_version => INT32
-        metadata_bytes => COMPACT_BYTES
-      target_assignment => [topic_partitions] error metadata_version metadata_bytes TAG_BUFFER 
-        topic_partitions => topic_id topic_name [partitions] TAG_BUFFER 
-          topic_id => UUID
-          topic_name => COMPACT_STRING
-          partitions => INT32
-        error => INT8
-        metadata_version => INT32
-        metadata_bytes => COMPACT_BYTES
-    authorized_operations => INT32
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>throttle_time_ms</td><td>The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.</td></tr>
-<tr>
-<td>groups</td><td>Each described group.</td></tr>
-<tr>
-<td>error_code</td><td>The describe error, or 0 if there was no error.</td></tr>
-<tr>
-<td>error_message</td><td>The top-level error message, or null if there was no error.</td></tr>
-<tr>
-<td>group_id</td><td>The group ID string.</td></tr>
-<tr>
-<td>group_state</td><td>The group state string, or the empty string.</td></tr>
-<tr>
-<td>group_epoch</td><td>The group epoch.</td></tr>
-<tr>
-<td>assignment_epoch</td><td>The assignment epoch.</td></tr>
-<tr>
-<td>assignor_name</td><td>The selected assignor.</td></tr>
-<tr>
-<td>members</td><td>The members.</td></tr>
-<tr>
-<td>member_id</td><td>The member ID.</td></tr>
-<tr>
-<td>instance_id</td><td>The member instance ID.</td></tr>
-<tr>
-<td>rack_id</td><td>The member rack ID.</td></tr>
-<tr>
-<td>member_epoch</td><td>The current member epoch.</td></tr>
-<tr>
-<td>client_id</td><td>The client ID.</td></tr>
-<tr>
-<td>client_host</td><td>The client host.</td></tr>
-<tr>
-<td>subscribed_topic_names</td><td>The subscribed topic names.</td></tr>
-<tr>
-<td>subscribed_topic_regex</td><td>the subscribed topic regex otherwise or null of not provided.</td></tr>
-<tr>
-<td>assignment</td><td>The current assignment.</td></tr>
+<td>error</td><td>The assigned error.</td></tr>
 <tr>
-<td>topic_partitions</td><td>The assigned topic-partitions to the member.</td></tr>
+<td>assigned_topic_partitions</td><td>The partitions assigned to the member that can be used immediately.</td></tr>
 <tr>
 <td>topic_id</td><td>The topic ID.</td></tr>
 <tr>
-<td>topic_name</td><td>The topic name.</td></tr>
-<tr>
 <td>partitions</td><td>The partitions.</td></tr>
 <tr>
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 <tr>
-<td>error</td><td>The assigned error.</td></tr>
-<tr>
-<td>metadata_version</td><td>The assignor metadata version.</td></tr>
-<tr>
-<td>metadata_bytes</td><td>The assignor metadata bytes.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>target_assignment</td><td>The target assignment.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>authorized_operations</td><td>32-bit bitfield to represent authorized operations for this group.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
-<h5><a name="The_Messages_GetTelemetrySubscriptions">GetTelemetrySubscriptions API (Key: 71):</a></h5>
-
-<b>Requests:</b><br>
-<div><pre>GetTelemetrySubscriptions Request (Version: 0) => client_instance_id TAG_BUFFER 
-  client_instance_id => UUID
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>client_instance_id</td><td>Unique id for this client instance, must be set to 0 on the first request.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
-<b>Responses:</b><br>
-<div><pre>GetTelemetrySubscriptions Response (Version: 0) => throttle_time_ms error_code client_instance_id subscription_id [accepted_compression_types] push_interval_ms telemetry_max_bytes delta_temporality [requested_metrics] TAG_BUFFER 
-  throttle_time_ms => INT32
-  error_code => INT16
-  client_instance_id => UUID
-  subscription_id => INT32
-  accepted_compression_types => INT8
-  push_interval_ms => INT32
-  telemetry_max_bytes => INT32
-  delta_temporality => BOOLEAN
-  requested_metrics => COMPACT_STRING
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>throttle_time_ms</td><td>The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.</td></tr>
-<tr>
-<td>error_code</td><td>The error code, or 0 if there was no error.</td></tr>
-<tr>
-<td>client_instance_id</td><td>Assigned client instance id if ClientInstanceId was 0 in the request, else 0.</td></tr>
-<tr>
-<td>subscription_id</td><td>Unique identifier for the current subscription set for this client instance.</td></tr>
-<tr>
-<td>accepted_compression_types</td><td>Compression types that broker accepts for the PushTelemetryRequest.</td></tr>
-<tr>
-<td>push_interval_ms</td><td>Configured push interval, which is the lowest configured interval in the current subscription set.</td></tr>
-<tr>
-<td>telemetry_max_bytes</td><td>The maximum bytes of binary data the broker accepts in PushTelemetryRequest.</td></tr>
-<tr>
-<td>delta_temporality</td><td>Flag to indicate monotonic/counter metrics are to be emitted as deltas or cumulative values</td></tr>
-<tr>
-<td>requested_metrics</td><td>Requested metrics prefix string match. Empty array: No metrics subscribed, Array[0] empty string: All metrics subscribed.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
-<h5><a name="The_Messages_PushTelemetry">PushTelemetry API (Key: 72):</a></h5>
-
-<b>Requests:</b><br>
-<div><pre>PushTelemetry Request (Version: 0) => client_instance_id subscription_id terminating compression_type metrics TAG_BUFFER 
-  client_instance_id => UUID
-  subscription_id => INT32
-  terminating => BOOLEAN
-  compression_type => INT8
-  metrics => COMPACT_BYTES
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>client_instance_id</td><td>Unique id for this client instance.</td></tr>
-<tr>
-<td>subscription_id</td><td>Unique identifier for the current subscription.</td></tr>
-<tr>
-<td>terminating</td><td>Client is terminating the connection.</td></tr>
-<tr>
-<td>compression_type</td><td>Compression codec used to compress the metrics.</td></tr>
-<tr>
-<td>metrics</td><td>Metrics encoded in OpenTelemetry MetricsData v1 protobuf format.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
-<b>Responses:</b><br>
-<div><pre>PushTelemetry Response (Version: 0) => throttle_time_ms error_code TAG_BUFFER 
-  throttle_time_ms => INT32
-  error_code => INT16
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>throttle_time_ms</td><td>The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.</td></tr>
-<tr>
-<td>error_code</td><td>The error code, or 0 if there was no error.</td></tr>
-<tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
-<h5><a name="The_Messages_ListClientMetricsResources">ListClientMetricsResources API (Key: 74):</a></h5>
-
-<b>Requests:</b><br>
-<div><pre>ListClientMetricsResources Request (Version: 0) => TAG_BUFFER 
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>_tagged_fields</td><td>The tagged fields</td></tr>
-</tbody></table>
-</div>
-<b>Responses:</b><br>
-<div><pre>ListClientMetricsResources Response (Version: 0) => throttle_time_ms error_code [client_metrics_resources] TAG_BUFFER 
-  throttle_time_ms => INT32
-  error_code => INT16
-  client_metrics_resources => name TAG_BUFFER 
-    name => COMPACT_STRING
-</pre><table class="data-table"><tbody>
-<tr><th>Field</th>
-<th>Description</th>
-</tr><tr>
-<td>throttle_time_ms</td><td>The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.</td></tr>
-<tr>
-<td>error_code</td><td></td></tr>
+<td>pending_topic_partitions</td><td>The partitions assigned to the member that cannot be used because they are not released by their former owners yet.</td></tr>
 <tr>
-<td>client_metrics_resources</td><td></td></tr>
+<td>metadata_version</td><td>The version of the metadata.</td></tr>
 <tr>
-<td>name</td><td></td></tr>
+<td>metadata_bytes</td><td>The assigned metadata.</td></tr>
 <tr>
 <td>_tagged_fields</td><td>The tagged fields</td></tr>
 <tr>
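
The protocol hunks above add and remove whole request/response versions (OffsetFetch, DescribeCluster v1, ConsumerGroupHeartbeat, the telemetry and client-metrics APIs). A quick way to check which API keys and version ranges a running broker actually advertises is the kafka-broker-api-versions.sh tool that ships with Kafka; a minimal sketch, assuming a broker listening on localhost:9092:

    $ bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092
    # prints one line per API key with its supported version range, e.g.
    #   OffsetFetch(9): 0 to 8 [usable: 8]

The exact ranges in the output depend on the broker release, so treat the sample line as illustrative only.
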
diff --git a/37/generated/remote_log_manager_config.html b/37/generated/remote_log_manager_config.html
index 139b3c6c..dc11de6e 100644
--- a/37/generated/remote_log_manager_config.html
+++ b/37/generated/remote_log_manager_config.html
@@ -130,16 +130,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="remote.log.index.file.cache.total.size.bytes"></a><a id="remote_log_manager_remote.log.index.file.cache.total.size.bytes" href="#remote_log_manager_remote.log.index.file.cache.total.size.bytes">remote.log.index.file.cache.total.size.bytes</a></h4>
-<p>The total size of the space allocated to store index files fetched from remote storage in the local storage.</p>
-<table><tbody>
-<tr><th>Type:</th><td>long</td></tr>
-<tr><th>Default:</th><td>1073741824 (1 gibibyte)</td></tr>
-<tr><th>Valid Values:</th><td>[1,...]</td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="remote.log.manager.task.interval.ms"></a><a id="remote_log_manager_remote.log.manager.task.interval.ms" href="#remote_log_manager_remote.log.manager.task.interval.ms">remote.log.manager.task.interval.ms</a></h4>
 <p>Interval at which remote log manager runs the scheduled tasks like copy segments, and clean up remote log segments.</p>
 <table><tbody>
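
Since this hunk drops the remote.log.index.file.cache.total.size.bytes entry from the rendered table, the quickest way to see which remote-log settings (and defaults) a particular broker build actually exposes is to describe the broker configuration directly; a sketch, assuming broker id 0 on localhost:9092 and a kafka-configs.sh version that supports --all:

    $ bin/kafka-configs.sh --bootstrap-server localhost:9092 \
        --entity-type brokers --entity-name 0 --describe --all | grep 'remote.log'
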
diff --git a/37/generated/sink_connector_config.html b/37/generated/sink_connector_config.html
index 28622dee..f495784e 100644
--- a/37/generated/sink_connector_config.html
+++ b/37/generated/sink_connector_config.html
@@ -55,7 +55,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor</td></tr>
+<tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
@@ -65,7 +65,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor</td></tr>
+<tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
@@ -75,7 +75,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>A concrete subclass of org.apache.kafka.connect.storage.HeaderConverter, A class with a public, no-argument constructor</td></tr>
+<tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
diff --git a/37/generated/source_connector_config.html b/37/generated/source_connector_config.html
index e0158b4f..718dea32 100644
--- a/37/generated/source_connector_config.html
+++ b/37/generated/source_connector_config.html
@@ -35,7 +35,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor</td></tr>
+<tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
@@ -45,7 +45,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor</td></tr>
+<tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
@@ -55,7 +55,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>A concrete subclass of org.apache.kafka.connect.storage.HeaderConverter, A class with a public, no-argument constructor</td></tr>
+<tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
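
The sink and source connector hunks above only change the rendered "Valid Values" text for key.converter, value.converter and header.converter; the way per-connector converter overrides are supplied is unchanged. As a sketch of such an override via the Connect REST API (the connector name, topic and output file are made up, localhost:8083 assumes the default REST port, and the bundled file connector is assumed to be on the worker's plugin path):

    $ curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
        "name": "file-sink-example",
        "config": {
          "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
          "topics": "test-topic",
          "file": "/tmp/file-sink-example.out",
          "key.converter": "org.apache.kafka.connect.storage.StringConverter",
          "value.converter": "org.apache.kafka.connect.json.JsonConverter",
          "value.converter.schemas.enable": "false"
        }
      }'
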
diff --git a/37/generated/streams_config.html b/37/generated/streams_config.html
index 801e2673..bc329f99 100644
--- a/37/generated/streams_config.html
+++ b/37/generated/streams_config.html
@@ -61,7 +61,7 @@
 </li>
 <li>
 <h4><a id="client.id"></a><a id="streamsconfigs_client.id" href="#streamsconfigs_client.id">client.id</a></h4>
-<p>An ID prefix string used for the client IDs of internal [main-|restore-|global-]consumer, producer, and admin clients with pattern <code>&lt;client.id&gt;-[Global]StreamThread[-&lt;threadSequenceNumber$gt;]-&lt;consumer|producer|restore-consumer|global-consumer&gt;</code>.</p>
+<p>An ID prefix string used for the client IDs of internal consumer, producer and restore-consumer, with pattern <code>&lt;client.id&gt;-StreamThread-&lt;threadSequenceNumber$gt;-&lt;consumer|producer|restore-consumer&gt;</code>.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
 <tr><th>Default:</th><td>""</td></tr>
@@ -211,11 +211,11 @@
 </li>
 <li>
 <h4><a id="rack.aware.assignment.strategy"></a><a id="streamsconfigs_rack.aware.assignment.strategy" href="#streamsconfigs_rack.aware.assignment.strategy">rack.aware.assignment.strategy</a></h4>
-<p>The strategy we use for rack aware assignment. Rack aware assignment will take <code>client.rack</code> and <code>racks</code> of <code>TopicPartition</code> into account when assigning tasks to minimize cross rack traffic. Valid settings are : <code>none</code> (default), which will disable rack aware assignment; <code>min_traffic</code>, which will compute minimum cross rack traffic assignment; <code>balance_subtopology</code>, which will compute minimum cross rack traffic and try t [...]
+<p>The strategy we use for rack aware assignment. Rack aware assignment will take <code>client.rack</code> and <code>racks</code> of <code>TopicPartition</code> into account when assigning tasks to minimize cross rack traffic. Valid settings are : <code>none</code> (default), which will disable rack aware assignment; <code>min_traffic</code>, which will compute minimum cross rack traffic assignment.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
 <tr><th>Default:</th><td>none</td></tr>
-<tr><th>Valid Values:</th><td>[none, min_traffic, balance_subtopology]</td></tr>
+<tr><th>Valid Values:</th><td>[none, min_traffic]</td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
 </li>
@@ -285,7 +285,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
 <tr><th>Default:</th><td>none</td></tr>
-<tr><th>Valid Values:</th><td>org.apache.kafka.streams.StreamsConfig$$Lambda$8/83954662@4b85612c</td></tr>
+<tr><th>Valid Values:</th><td>org.apache.kafka.streams.StreamsConfig$$Lambda$21/0x0000000800084000@59ec2012</td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
 </li>
@@ -370,26 +370,6 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="dsl.store.suppliers.class"></a><a id="streamsconfigs_dsl.store.suppliers.class" href="#streamsconfigs_dsl.store.suppliers.class">dsl.store.suppliers.class</a></h4>
-<p>Defines which store implementations to plug in to DSL operators. Must implement the <code>org.apache.kafka.streams.state.DslStoreSuppliers</code> interface.</p>
-<table><tbody>
-<tr><th>Type:</th><td>class</td></tr>
-<tr><th>Default:</th><td>org.apache.kafka.streams.state.BuiltInDslStoreSuppliers$RocksDBDslStoreSuppliers</td></tr>
-<tr><th>Valid Values:</th><td></td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-</tbody></table>
-</li>
-<li>
-<h4><a id="enable.metrics.push"></a><a id="streamsconfigs_enable.metrics.push" href="#streamsconfigs_enable.metrics.push">enable.metrics.push</a></h4>
-<p>Whether to enable pushing of internal [main-|restore-|global]consumer, producer, and admin client metrics to the cluster, if the cluster has a client metrics subscription which matches a client.</p>
-<table><tbody>
-<tr><th>Type:</th><td>boolean</td></tr>
-<tr><th>Default:</th><td>true</td></tr>
-<tr><th>Valid Values:</th><td></td></tr>
-<tr><th>Importance:</th><td>low</td></tr>
-</tbody></table>
-</li>
-<li>
 <h4><a id="metadata.max.age.ms"></a><a id="streamsconfigs_metadata.max.age.ms" href="#streamsconfigs_metadata.max.age.ms">metadata.max.age.ms</a></h4>
 <p>The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.</p>
 <table><tbody>
@@ -481,7 +461,7 @@
 </li>
 <li>
 <h4><a id="reconnect.backoff.ms"></a><a id="streamsconfigs_reconnect.backoff.ms" href="#streamsconfigs_reconnect.backoff.ms">reconnect.backoff.ms</a></h4>
-<p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the <code>reconnect.backoff.max.ms</code> value.</p>
+<p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>50</td></tr>
@@ -521,7 +501,7 @@
 </li>
 <li>
 <h4><a id="retry.backoff.ms"></a><a id="streamsconfigs_retry.backoff.ms" href="#streamsconfigs_retry.backoff.ms">retry.backoff.ms</a></h4>
-<p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the <code>retry.backoff.max.ms</code> value.</p>
+<p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>100</td></tr>
@@ -561,11 +541,11 @@
 </li>
 <li>
 <h4><a id="upgrade.from"></a><a id="streamsconfigs_upgrade.from" href="#streamsconfigs_upgrade.from">upgrade.from</a></h4>
-<p>Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 3.3 to a newer version it is not required to specify this config. Default is `null`. Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "2.7", "2.8", "3.0", "3.1", "3.2", "3.3", "3.4" (for upgrading from the corresponding old version).</p>
+<p>Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 3.3 to a newer version it is not required to specify this config. Default is `null`. Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "2.7", "2.8", "3.0", "3.1", "3.2", "3.3", "3.4", "3.5(for upgrading from the corresponding old version).</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>[null, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3, 3.4]</td></tr>
+<tr><th>Valid Values:</th><td>[null, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5]</td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
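
Kafka Streams configuration is normally assembled in application code rather than in a file, but purely as a sketch of how the keys discussed in this hunk fit together (the application id and client id prefix are invented, and the upgrade.from value only applies when bouncing from that older release):

    $ cat > streams.properties <<'EOF'
    application.id = my-streams-app
    bootstrap.servers = localhost:9092
    # used as the prefix for the internal consumer/producer client ids
    client.id = my-streams-app
    # only needed while rolling from 2.3 or older, per the upgrade.from entry above
    upgrade.from = 2.3
    EOF
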
diff --git a/37/generated/topic_config.html b/37/generated/topic_config.html
index 2f33598a..2f7a62c2 100644
--- a/37/generated/topic_config.html
+++ b/37/generated/topic_config.html
@@ -111,7 +111,7 @@
 </li>
 <li>
 <h4><a id="local.retention.ms"></a><a id="topicconfigs_local.retention.ms" href="#topicconfigs_local.retention.ms">local.retention.ms</a></h4>
-<p>The number of milliseconds to keep the local log segment before it gets deleted. Default value is -2, it represents `retention.ms` value is to be used. The effective value should always be less than or equal to `retention.ms` value.</p>
+<p>The number of milli seconds to keep the local log segment before it gets deleted. Default value is -2, it represents `retention.ms` value is to be used. The effective value should always be less than or equal to `retention.ms` value.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
 <tr><th>Default:</th><td>-2</td></tr>
@@ -148,7 +148,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
 <tr><th>Default:</th><td>3.0-IV1</td></tr>
-<tr><th>Valid Values:</th><td>[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3 [...]
+<tr><th>Valid Values:</th><td>[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2]</td></tr>
 <tr><th>Server Default Property:</th><td>log.message.format.version</td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
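
local.retention.ms, as described in this hunk, only matters for topics with remote storage enabled, and its effective value must stay at or below retention.ms. A sketch of setting both on an existing topic (topic name and values are illustrative):

    $ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
        --entity-type topics --entity-name my-tiered-topic \
        --add-config retention.ms=604800000,local.retention.ms=3600000
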
diff --git a/37/js/templateData.js b/37/js/templateData.js
index bfe5598b..a391696d 100644
--- a/37/js/templateData.js
+++ b/37/js/templateData.js
@@ -17,8 +17,8 @@ limitations under the License.
 
 // Define variables for doc templates
 var context={
-    "version": "37",
-    "dotVersion": "3.7",
-    "fullDotVersion": "3.7.0",
+    "version": "36",
+    "dotVersion": "3.6",
+    "fullDotVersion": "3.6.2-SNAPSHOT",
     "scalaVersion": "2.13"
 };
diff --git a/37/ops.html b/37/ops.html
index c4be91cb..b6a30aa2 100644
--- a/37/ops.html
+++ b/37/ops.html
@@ -15,9 +15,7 @@
  limitations under the License.
 -->
 
-<script id="ops-template" type="text/x-handlebars-template">
-
-  <p>Here is some information on actually running Kafka as a production system. Please send us any additional tips you know of.</p>
+  Here is some information on actually running Kafka as a production system based on usage and experience at LinkedIn. Please send us any additional tips you know of.
 
   <h3 class="anchor-heading"><a id="basic_ops" class="anchor-link"></a><a href="#basic_ops">6.1 Basic Kafka Operations</a></h3>
 
@@ -370,13 +368,13 @@
   There are two interfaces that can be used to engage a throttle. The simplest, and safest, is to apply a throttle when invoking the kafka-reassign-partitions.sh, but kafka-configs.sh can also be used to view and alter the throttle values directly.
   <p></p>
   So for example, if you were to execute a rebalance, with the below command, it would move partitions at no more than 50MB/s.
-  <pre class="language-bash">$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --execute --reassignment-json-file bigger-cluster.json --throttle 50000000</pre>
+  <pre class="language-bash">$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --execute --reassignment-json-file bigger-cluster.json --throttle 50000000</code></pre>
   When you execute this script you will see the throttle engage:
   <pre class="line-numbers"><code class="language-bash">  The inter-broker throttle limit was set to 50000000 B/s
   Successfully started partition reassignment for foo1-0</code></pre>
   <p>Should you wish to alter the throttle, during a rebalance, say to increase the throughput so it completes quicker, you can do this by re-running the execute command with the --additional option passing the same reassignment-json-file:</p>
   <pre class="language-bash">$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092  --additional --execute --reassignment-json-file bigger-cluster.json --throttle 700000000
-  The inter-broker throttle limit was set to 700000000 B/s</pre>
+  The inter-broker throttle limit was set to 700000000 B/s</code></pre>
 
   <p>Once the rebalance completes the administrator can check the status of the rebalance using the --verify option.
       If the rebalance has completed, the throttle will be removed via the --verify command. It is important that
@@ -448,12 +446,12 @@
   <p><i>(2) Ensuring Progress:</i></p>
   <p>If the throttle is set too low, in comparison to the incoming write rate, it is possible for replication to not
       make progress. This occurs when:</p>
-  <pre>max(BytesInPerSec) > throttle</pre>
+  <pre>max(BytesInPerSec) > throttle</code></pre>
   <p>
       Where BytesInPerSec is the metric that monitors the write throughput of producers into each broker. </p>
   <p>The administrator can monitor whether replication is making progress, during the rebalance, using the metric:</p>
 
-  <pre>kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)</pre>
+  <pre>kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)</code></pre>
 
   <p>The lag should constantly decrease during replication. If the metric does not decrease the administrator should
       increase the
@@ -520,7 +518,7 @@
   <p>
   This is not the only possible deployment pattern. It is possible to read from or write to a remote Kafka cluster over the WAN, though obviously this will add whatever latency is required to get the cluster.
   <p>
-  Kafka naturally batches data in both the producer and consumer so it can achieve high-throughput even over a high-latency connection. To allow this though it may be necessary to increase the TCP socket buffer sizes for the producer, consumer, and broker using the <code>socket.send.buffer.bytes</code> and <code>socket.receive.buffer.bytes</code> configurations. The appropriate way to set this is documented <a href="https://en.wikipedia.org/wiki/Bandwidth-delay_product">here</a>.
+  Kafka naturally batches data in both the producer and consumer so it can achieve high-throughput even over a high-latency connection. To allow this though it may be necessary to increase the TCP socket buffer sizes for the producer, consumer, and broker using the <code>socket.send.buffer.bytes</code> and <code>socket.receive.buffer.bytes</code> configurations. The appropriate way to set this is documented <a href="http://en.wikipedia.org/wiki/Bandwidth-delay_product">here</a>.
   <p>
   It is generally <i>not</i> advisable to run a <i>single</i> Kafka cluster that spans multiple datacenters over a high-latency link. This will incur very high replication latency both for Kafka writes and ZooKeeper writes, and neither Kafka nor ZooKeeper will remain available in all locations if the network between locations is unavailable.
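
As a sketch of the buffer tuning the paragraph above refers to, on the broker side (the 1 MiB value is only illustrative; size it to the link's bandwidth-delay product):

    $ cat >> config/server.properties <<'EOF'
    # larger TCP buffers for a high bandwidth-delay-product link
    socket.send.buffer.bytes=1048576
    socket.receive.buffer.bytes=1048576
    EOF
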
 
@@ -702,14 +700,14 @@ us-east.admin.bootstrap.servers = broker8-secondary:9092
     Exactly-once semantics are supported for dedicated MirrorMaker clusters as of version 3.5.0.</p>
   
   <p>
-    For new MirrorMaker clusters, set the <code>exactly.once.source.support</code> property to enabled for all targeted Kafka clusters that should be written to with exactly-once semantics. For example, to enable exactly-once for writes to cluster <code>us-east</code>, the following configuration can be used:
+    For new MirrorMaker clusters, set the <code>exactly.once.source.support</code> property to enabled for all targeted Kafka clusters that should be written to with exactly-once semantics. For example, to enable exactly-once for writes to cluster </code>us-east</code>, the following configuration can be used:
   </p>
 
 <pre class="line-numbers"><code class="language-text">us-east.exactly.once.source.support = enabled
 </code></pre>
   
   <p>
-    For existing MirrorMaker clusters, a two-step upgrade is necessary. Instead of immediately setting the <code>exactly.once.source.support</code> property to enabled, first set it to <code>preparing</code> on all nodes in the cluster. Once this is complete, it can be set to <code>enabled</code> on all nodes in the cluster, in a second round of restarts.
+    For existing MirrorMaker clusters, a two-step upgrade is necessary. Instead of immediately setting the <code>exactly.once.source.support</code> property to enabled, first set it to <code>preparing</code> on all nodes in the cluster. Once this is complete, it can be set to </code>enabled</code> on all nodes in the cluster, in a second round of restarts.
   </p>
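
As a sketch of the two-step enablement described above for an existing dedicated MirrorMaker cluster (config/connect-mirror-maker.properties is the stock file name; adjust to wherever your MirrorMaker configuration actually lives):

    $ cat >> config/connect-mirror-maker.properties <<'EOF'
    # step 1 of 2: roll every node with "preparing" first
    us-east.exactly.once.source.support = preparing
    EOF
    # step 2 of 2: once all nodes run with "preparing", change the value to
    # "enabled" and perform a second rolling restart.
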
   
   <p>
@@ -1326,8 +1324,8 @@ $ bin/kafka-acls.sh \
   It is unlikely to require much OS-level tuning, but there are three potentially important OS-level configurations:
   <ul>
       <li>File descriptor limits: Kafka uses file descriptors for log segments and open connections. If a broker hosts many partitions, consider that the broker needs at least (number_of_partitions)*(partition_size/segment_size) to track all log segments in addition to the number of connections the broker makes. We recommend at least 100000 allowed file descriptors for the broker processes as a starting point. Note: The mmap() function adds an extra reference to the file associated with  [...]
-      <li>Max socket buffer size: can be increased to enable high-performance data transfer between data centers as <a href="https://www.psc.edu/index.php/networking/641-tcp-tune">described here</a>.
-      <li>Maximum number of memory map areas a process may have (aka vm.max_map_count). <a href="https://kernel.org/doc/Documentation/sysctl/vm.txt">See the Linux kernel documentation</a>. You should keep an eye at this OS-level property when considering the maximum number of partitions a broker may have. By default, on a number of Linux systems, the value of vm.max_map_count is somewhere around 65535. Each log segment, allocated per partition, requires a pair of index/timeindex files, a [...]
+      <li>Max socket buffer size: can be increased to enable high-performance data transfer between data centers as <a href="http://www.psc.edu/index.php/networking/641-tcp-tune">described here</a>.
+      <li>Maximum number of memory map areas a process may have (aka vm.max_map_count). <a href="http://kernel.org/doc/Documentation/sysctl/vm.txt">See the Linux kernel documentation</a>. You should keep an eye at this OS-level property when considering the maximum number of partitions a broker may have. By default, on a number of Linux systems, the value of vm.max_map_count is somewhere around 65535. Each log segment, allocated per partition, requires a pair of index/timeindex files, an [...]
   </ul>
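
As a sketch of applying the limits mentioned in the list above (the 100000 figure comes from that text; the vm.max_map_count value is only an example and should be derived from the expected partition and segment counts):

    # raise the open-file limit for the shell/user that runs the broker
    $ ulimit -n 100000
    # raise the memory-map area limit; persist it via /etc/sysctl.conf if needed
    $ sudo sysctl -w vm.max_map_count=262144
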
   <p>
 
@@ -1355,14 +1353,14 @@ $ bin/kafka-acls.sh \
 
   <h4 class="anchor-heading"><a id="linuxflush" class="anchor-link"></a><a href="#linuxflush">Understanding Linux OS Flush Behavior</a></h4>
 
-  In Linux, data written to the filesystem is maintained in <a href="https://en.wikipedia.org/wiki/Page_cache">pagecache</a> until it must be written out to disk (due to an application-level fsync or the OS's own flush policy). The flushing of data is done by a set of background threads called pdflush (or in post 2.6.32 kernels "flusher threads").
+  In Linux, data written to the filesystem is maintained in <a href="http://en.wikipedia.org/wiki/Page_cache">pagecache</a> until it must be written out to disk (due to an application-level fsync or the OS's own flush policy). The flushing of data is done by a set of background threads called pdflush (or in post 2.6.32 kernels "flusher threads").
   <p>
   Pdflush has a configurable policy that controls how much dirty data can be maintained in cache and for how long before it must be written back to disk.
-  This policy is described <a href="https://web.archive.org/web/20160518040713/http://www.westnet.com/~gsmith/content/linux-pdflush.htm">here</a>.
+  This policy is described <a href="http://web.archive.org/web/20160518040713/http://www.westnet.com/~gsmith/content/linux-pdflush.htm">here</a>.
   When Pdflush cannot keep up with the rate of data being written it will eventually cause the writing process to block incurring latency in the writes to slow down the accumulation of data.
   <p>
   You can see the current state of OS memory usage by doing
-  <pre class="language-bash"> &gt; cat /proc/meminfo</pre>
+  <pre class="language-bash"> &gt; cat /proc/meminfo </code></pre>
   The meaning of these values are described in the link above.
   <p>
   Using pagecache has several advantages over an in-process cache for storing data that will be written out to disk:
@@ -1455,8 +1453,8 @@ $ bin/kafka-acls.sh \
       </tr>
       <tr>
         <td>Byte in rate from other brokers</td>
-        <td>kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesInPerSec</td>
-        <td>Byte in (from the other brokers) rate across all topics.</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesInPerSec,topic=([-.\w]+)</td>
+        <td>Byte in (from the other brokers) rate per topic. Omitting 'topic=(...)' will yield the all-topic rate.</td>
       </tr>
       <tr>
         <td>Controller Request rate from Broker</td>
@@ -1539,8 +1537,8 @@ $ bin/kafka-acls.sh \
       </tr>
       <tr>
         <td>Byte out rate to other brokers</td>
-        <td>kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesOutPerSec</td>
-        <td>Byte out (to the other brokers) rate across all topics</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesOutPerSec,topic=([-.\w]+)</td>
+        <td>Byte out (to the other brokers) rate per topic. Omitting 'topic=(...)' will yield the all-topic rate.</td>
       </tr>
       <tr>
         <td>Rejected byte rate</td>
@@ -2958,6 +2956,26 @@ active-process-ratio metrics which have a recording level of <code>info</code>:
         <td>The total number of processed records across all source processor nodes of this task.</td>
         <td>kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)</td>
       </tr>
+      <tr>
+        <td>commit-latency-avg</td>
+        <td>The average execution time in ns, for committing.</td>
+        <td>kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>commit-latency-max</td>
+        <td>The maximum execution time in ns, for committing.</td>
+        <td>kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>commit-rate</td>
+        <td>The average number of commit calls per second.</td>
+        <td>kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>commit-total</td>
+        <td>The total number of commit calls.</td>
+        <td>kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)</td>
+      </tr>
       <tr>
         <td>record-lateness-avg</td>
         <td>The average observed lateness of records (stream time - record timestamp).</td>
@@ -3572,7 +3590,7 @@ for built-in state stores, currently we have:
   <h3 class="anchor-heading"><a id="zk" class="anchor-link"></a><a href="#zk">6.9 ZooKeeper</a></h3>
 
   <h4 class="anchor-heading"><a id="zkversion" class="anchor-link"></a><a href="#zkversion">Stable version</a></h4>
-  The current stable branch is 3.8. Kafka is regularly updated to include the latest release in the 3.8 series.
+  The current stable branch is 3.5. Kafka is regularly updated to include the latest release in the 3.5 series.
   
   <h4 class="anchor-heading"><a id="zk_depr" class="anchor-link"></a><a href="#zk_depr">ZooKeeper Deprecation</a></h4>
   <p>With the release of Apache Kafka 3.5, Zookeeper is now marked deprecated. Removal of ZooKeeper is planned in the next major release of Apache Kafka (version 4.0),
@@ -3581,7 +3599,7 @@ for built-in state stores, currently we have:
      see <a href="#kraft_missing">current missing features</a> for more information.</p>
 
     <h5 class="anchor-heading"><a id="zk_depr_migration" class="anchor-link"></a><a href="#zk_drep_migration">Migration</a></h5>
-    <p>Users are recommended to begin planning for migration to KRaft and also begin testing to provide any feedback. Refer to <a href="#kraft_zk_migration">ZooKeeper to KRaft Migration</a> for details on how to perform a live migration from ZooKeeper to KRaft and current limitations.</p>
+    <p>Migration of an existing ZooKeeper based Kafka cluster to KRaft is currently Preview and we expect it to be ready for production usage in version 3.6. Users are recommended to begin planning for migration to KRaft and also begin testing to provide any feedback. Refer to <a href="#kraft_zk_migration">ZooKeeper to KRaft Migration</a> for details on how to perform a live migration from ZooKeeper to KRaft and current limitations.</p>
 	
     <h5 class="anchor-heading"><a id="zk_depr_3xsupport" class="anchor-link"></a><a href="#zk_depr_3xsupport">3.x and ZooKeeper Support</a></h5>
     <p>The final 3.x minor release, that supports ZooKeeper mode, will receive critical bug fixes and security fixes for 12 months after its release.</p>
@@ -3712,48 +3730,44 @@ foo
   <ul>
     <li>Supporting JBOD configurations with multiple storage directories</li>
     <li>Modifying certain dynamic configurations on the standalone KRaft controller</li>
+    <li>Delegation tokens</li>
   </ul>
 
   <h4 class="anchor-heading"><a id="kraft_zk_migration" class="anchor-link"></a><a href="#kraft_zk_migration">ZooKeeper to KRaft Migration</a></h4>
 
   <p>
     <b>ZooKeeper to KRaft migration is considered an Early Access feature and is not recommended for production clusters.</b>
-    Please report issues with ZooKeeper to KRaft migration using the
-    <a href="https://issues.apache.org/jira/projects/KAFKA" target="_blank">project JIRA</a> and the "kraft" component.
   </p>
 
-  <h3>Terminology</h3>
+  <p>The following features are not yet supported for ZK to KRaft migrations:</p>
+
   <ul>
-    <li>Brokers that are in <b>ZK mode</b> store their metadata in Apache ZooKepeer. This is the old mode of handling metadata.</li>
-    <li>Brokers that are in <b>KRaft mode</b> store their metadata in a KRaft quorum. This is the new and improved mode of handling metadata.</li>
-    <li><b>Migration</b> is the process of moving cluster metadata from ZooKeeper into a KRaft quorum.</li>
+    <li>Downgrading to ZooKeeper mode during or after the migration</li>
+    <li>Other features <a href="#kraft_missing">not yet supported in KRaft</a></li>
   </ul>
 
-  <h3>Migration Phases</h3>
-  In general, the migration process passes through several phases.
+  <p>
+    Please report issues with ZooKeeper to KRaft migration using the
+    <a href="https://issues.apache.org/jira/projects/KAFKA" target="_blank">project JIRA</a> and the "kraft" component.
+  </p>
 
-  <ul>
-    <li>In the <b>initial phase</b>, all the brokers are in ZK mode, and there is a ZK-based controller.</li>
-    <li>During the <b>initial metadata load</b>, a KRaft quorum loads the metadata from ZooKeeper,</li>
-    <li>In <b>hybrid phase</b>, some brokers are in ZK mode, but there is a KRaft controller.</li>
-    <li>In <b>dual-write phase</b>, all brokers are KRaft, but the KRaft controller is continuing to write to ZK.</li>
-    <li>When the migration has been <b>finalized</b>, we no longer write metadata to ZooKeeper.</li>
-  </ul>
+  <h3>Terminology</h3>
+  <p>
+    We use the term "migration" here to refer to the process of changing a Kafka cluster's metadata
+    system from ZooKeeper to KRaft and migrating existing metadata. An "upgrade" refers to installing a newer version of Kafka. It is not recommended to
+    upgrade the software at the same time as performing a metadata migration.
+  </p>
 
-  <h3>Limitations</h3>
-  <ul>
-    <li>While a cluster is being migrated from ZK mode to KRaft mode, we do not support changing the <i>metadata
-      version</i> (also known as the <i>inter.broker.protocol</i> version.) Please do not attempt to do this during
-      a migration, or you may break the cluster.</li>
-    <li>After the migration has been finalized, it is not possible to revert back to ZooKeeper mode.</li>
-    <li><a href="#kraft_missing">As noted above</a>, some features are not fully implemented in KRaft mode. If you are
-      using one of those features, you will not be able to migrate to KRaft yet.</li>
-  </ul>
+  <p>
+    We also use the term "ZK mode" to refer to Kafka brokers which are using ZooKeeper as their metadata
+    system. "KRaft mode" refers Kafka brokers which are using a KRaft controller quorum as their metadata system.
+  </p>
 
   <h3>Preparing for migration</h3>
   <p>
-    Before beginning the migration, the Kafka brokers must be upgraded to software version {{fullDotVersion}} and have the
-    "inter.broker.protocol.version" configuration set to "{{dotVersion}}".
+    Before beginning the migration, the Kafka brokers must be upgraded to software version 3.5.0 and have the
+    "inter.broker.protocol.version" configuration set to "3.5". See <a href="#upgrade_3_5_0">Upgrading to 3.5.0</a> for
+    upgrade instructions.
   </p>
 
   <p>
@@ -3834,7 +3848,7 @@ advertised.listeners=PLAINTEXT://localhost:9092
 listener.security.protocol.map=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
 
 # Set the IBP
-inter.broker.protocol.version={{dotVersion}}
+inter.broker.protocol.version=3.5
 
 # Enable the migration
 zookeeper.metadata.migration.enable=true
@@ -3876,7 +3890,7 @@ advertised.listeners=PLAINTEXT://localhost:9092
 listener.security.protocol.map=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
 
 # Don't set the IBP, KRaft uses "metadata.version" feature flag
-# inter.broker.protocol.version={{dotVersion}}
+# inter.broker.protocol.version=3.5
 
 # Remove the migration enabled flag
 # zookeeper.metadata.migration.enable=true
@@ -3892,23 +3906,6 @@ controller.listener.names=CONTROLLER</pre>
     Each broker is restarted with a KRaft configuration until the entire cluster is running in KRaft mode.
   </p>
 
-  <h3>Reverting to ZooKeeper mode During the Migration</h3>
-    While the cluster is still in migration mode, it is possible to revert to ZK mode. In order to do this:
-    <ol>
-      <li>
-        For each KRaft broker:
-        <ul>
-          <li>Stop the broker.</li>
-          <li>Remove the __cluster_metadata directory on the broker.</li>
-          <li>Remove the <code>zookeeper.metadata.migration.enable</code> configuration and the KRaft controllers related configurations like <code>controller.quorum.voters</code>
-            and <code>controller.listener.names</code> from the broker configuration properties file.</li>
-          <li>Restart the broker in ZooKeeper mode.</li>
-        </ul>
-      </li>
-      <li>Take down the KRaft quorum.</li>
-      <li>Using ZooKeeper shell, delete the controller node using <code>rmr /controller</code>, so that a ZooKeeper-based broker can become the next controller.</li>
-    </ol>
-
   <h3>Finalizing the migration</h3>
   <p>
     Once all brokers have been restarted in KRaft mode, the last step to finalize the migration is to take the
@@ -3983,96 +3980,31 @@ listeners=CONTROLLER://:9093
   <li><code>retention.bytes</code></li>
 </ul>
 
-  <p>The configuration prefixed with <code>local</code> are to specify the time/size the "local" log file can accept before moving to remote storage, and then get deleted.
-  If unset, The value in <code>retention.ms</code> and <code>retention.bytes</code> will be used.</p>
-
-<h4 class="anchor-heading"><a id="tiered_storage_config_ex" class="anchor-link"></a><a href="#tiered_storage_config_ex">Quick Start Example</a></h4>
-
-<p>Apache Kafka doesn't provide an out-of-the-box RemoteStorageManager implementation. To have a preview of the tiered storage
-  feature, the <a href="https://github.com/apache/kafka/blob/trunk/storage/src/test/java/org/apache/kafka/server/log/remote/storage/LocalTieredStorage.java">LocalTieredStorage</a>
-  implemented for integration test can be used, which will create a temporary directory in local storage to simulate the remote storage.
+  The configuration prefixed with <code>local</code> are to specify the time/size the "local" log file can accept before moving to remote storage, and then get deleted.
+  If unset, The value in <code>retention.ms</code> and <code>retention.bytes</code> will be used.
 </p>
 
-<p>To adopt the `LocalTieredStorage`, the test library needs to be built locally</p>
-<pre># please checkout to the specific version tag you're using before building it
-# ex: `git checkout {{fullDotVersion}}`
-./gradlew clean :storage:testJar</pre>
-<p>After build successfully, there should be a `kafka-storage-x.x.x-test.jar` file under `storage/build/libs`.
-Next, setting configurations in the broker side to enable tiered storage feature.</p>
+<h4 class="anchor-heading"><a id="tiered_storage_config_ex" class="anchor-link"></a><a href="#tiered_storage_config_ex">Configurations Example</a></h4>
 
+<p>Here is a sample configuration to enable tiered storage feature in broker side:
 <pre>
 # Sample Zookeeper/Kraft broker server.properties listening on PLAINTEXT://:9092
 remote.log.storage.system.enable=true
-
-# Setting the listener for the clients in RemoteLogMetadataManager to talk to the brokers.
+# Please provide the implementation for remoteStorageManager. This is the mandatory configuration for tiered storage.
+# remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.NoOpRemoteStorageManager
+# Using the "PLAINTEXT" listener for the clients in RemoteLogMetadataManager to talk to the brokers.
 remote.log.metadata.manager.listener.name=PLAINTEXT
-
-# Please provide the implementation info for remoteStorageManager.
-# This is the mandatory configuration for tiered storage.
-# Here, we use the `LocalTieredStorage` built above.
-remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.LocalTieredStorage
-remote.log.storage.manager.class.path=/PATH/TO/kafka-storage-{{fullDotVersion}}-test.jar
-
-# These 2 prefix are default values, but customizable
-remote.log.storage.manager.impl.prefix=rsm.config.
-remote.log.metadata.manager.impl.prefix=rlmm.config.
-
-# Configure the directory used for `LocalTieredStorage`
-# Note, please make sure the brokers need to have access to this directory
-rsm.config.dir=/tmp/kafka-remote-storage
-
-# This needs to be changed if number of brokers in the cluster is more than 1
-rlmm.config.remote.log.metadata.topic.replication.factor=1
-
-# Try to speed up the log retention check interval for testing
-log.retention.check.interval.ms=1000
-</pre>
-
-<p>Following <a href="#quickstart_startserver">quick start guide</a> to start up the kafka environment.
-  Then, create a topic with tiered storage enabled with configs:
-
-<pre>
-# remote.storage.enable=true -> enables tiered storage on the topic
-# local.retention.ms=1000 -> The number of milliseconds to keep the local log segment before it gets deleted.
-  Note that a local log segment is eligible for deletion only after it gets uploaded to remote.
-# retention.ms=3600000 -> when segments exceed this time, the segments in remote storage will be deleted
-# segment.bytes=1048576 -> for test only, to speed up the log segment rolling interval
-# file.delete.delay.ms=10000 -> for test only, to speed up the local-log segment file delete delay
-
-bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server localhost:9092 \
---config remote.storage.enable=true --config local.retention.ms=1000 --config retention.ms=3600000 \
---config segment.bytes=1048576 --config file.delete.delay.ms=1000
 </pre>
+</p>
 
-<p>Try to send messages to the `tieredTopic` topic to roll the log segment:</p>
-
-<pre>
-bin/kafka-producer-perf-test.sh --topic tieredTopic --num-records 1200 --record-size 1024 --throughput -1 --producer-props bootstrap.servers=localhost:9092
+<p>After the broker is started, create a topic with tiered storage enabled and a small local log retention time to try out this feature:
+<pre>bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server localhost:9092 --config remote.storage.enable=true --config local.retention.ms=1000
 </pre>
+</p>
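+
+<p>To roll the active segment quickly and exercise the feature, one can (for example) produce some test data with the perf-test producer that ships with Kafka; the record count and size below are illustrative only:
+<pre>bin/kafka-producer-perf-test.sh --topic tieredTopic --num-records 1200 --record-size 1024 --throughput -1 --producer-props bootstrap.servers=localhost:9092
+</pre>
+</p>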
 
 <p>Then, after the active segment is rolled, the old segment should be moved to the remote storage and get deleted.
-  This can be verified by checking the remote log directory configured above. For example:
 </p>
 
-<pre> > ls /tmp/kafka-remote-storage/kafka-tiered-storage/tieredTopic-0-jF8s79t9SrG_PNqlwv7bAA
-00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.index
-00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.snapshot
-00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.leader_epoch_checkpoint
-00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.timeindex
-00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.log
-</pre>
-
-<p>Lastly, we can try to consume some data from the beginning and print offset number, to make sure it will successfully fetch offset 0 from the remote storage.</p>
-
-<pre>bin/kafka-console-consumer.sh --topic tieredTopic --from-beginning --max-messages 1 --bootstrap-server localhost:9092 --property print.offset=true</pre>
-
-<p>Please note, if you want to disable tiered storage at the cluster level, you should delete the tiered storage enabled topics explicitly.
-  Attempting to disable tiered storage at the cluster level without deleting the topics using tiered storage will result in an exception during startup.</p>
-
-<pre>bin/kafka-topics.sh --delete --topic tieredTopic --bootstrap-server localhost:9092</pre>
-
-<p>After topics are deleted, you're safe to set <code>remote.log.storage.system.enable=false</code> in the broker configuration.</p>
-
 <h4 class="anchor-heading"><a id="tiered_storage_limitation" class="anchor-link"></a><a href="#tiered_storage_limitation">Limitations</a></h4>
 
 <p>While the early access release of Tiered Storage offers the opportunity to try out this new feature, it is important to be aware of the following limitations:
@@ -4083,6 +4015,7 @@ bin/kafka-producer-perf-test.sh --topic tieredTopic --num-records 1200 --record-
   <li>Deleting tiered storage enabled topics is required before disabling tiered storage at the broker level</li>
   <li>Admin actions related to tiered storage feature are only supported on clients from version 3.0 onwards</li>
 </ul>
+</p>
 
 <p>For more information, please check <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes">Tiered Storage Early Access Release Note</a>.
 </p>
diff --git a/37/quickstart.html b/37/quickstart.html
index 94f278b2..f396be4f 100644
--- a/37/quickstart.html
+++ b/37/quickstart.html
@@ -76,10 +76,6 @@ $ bin/kafka-server-start.sh config/server.properties</code></pre>
             Kafka with KRaft
         </h5>
 
-        <p>Kafka can be run using KRaft mode using local scripts and downloaded files or the docker image. Follow one of the sections below but not both to start the kafka server.</p>
-
-        <h5>Using downloaded files</h5>
-
         <p>
             Generate a Cluster UUID
         </p>
@@ -98,20 +94,6 @@ $ bin/kafka-server-start.sh config/server.properties</code></pre>
 
         <pre class="line-numbers"><code class="language-bash">$ bin/kafka-server-start.sh config/kraft/server.properties</code></pre>
 
-        <h5>Using docker image</h5>
-
-        <p>
-            Get the docker image
-        </p>
-
-        <pre class="line-numbers"><code class="language-bash">$ docker pull apache/kafka:{{fullDotVersion}}</code></pre>
-
-        <p>
-            Start the kafka docker container
-        </p>
-
-        <pre class="line-numbers"><code class="language-bash">$ docker run -p 9092:9092 apache/kafka:{{fullDotVersion}}</code></pre>
-
         <p>
             Once the Kafka server has successfully launched, you will have a basic Kafka environment running and ready to use.
         </p>
@@ -373,7 +355,7 @@ wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.L
             <a href="#quickstart_kafkacongrats">Congratulations!</a>
         </h4>
 
-        <p>You have successfully finished the Apache Kafka quickstart.</p>
+        <p>You have successfully finished the Apache Kafka quickstart.<div>
 
         <p>To learn more, we suggest the following next steps:</p>
 
diff --git a/37/security.html b/37/security.html
index 895f2b0b..63ff3bb6 100644
--- a/37/security.html
+++ b/37/security.html
@@ -54,7 +54,7 @@
 	    
     <p>The <code>LISTENER_NAME</code> is usually a descriptive name which defines the purpose of
       the listener. For example, many configurations use a separate listener for client traffic,
-      so they might refer to the corresponding listener as <code>CLIENT</code> in the configuration:</p>
+      so they might refer to the corresponding listener as <code>CLIENT</code> in the configuration:</p
       
     <pre class="line-numbers"><code class="language-text">listeners=CLIENT://localhost:9092</code></pre>
       
@@ -471,7 +471,7 @@ ssl.truststore.password=test1234</code></pre>
             <pre class="line-numbers"><code class="language-text">security.inter.broker.protocol=SSL</code></pre>
 
             <p>
-            Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the <a href="https://www.oracle.com/technetwork/java/javase/downloads/index.html">JCE Unlimited Strength Jurisdiction Policy Files</a> must be obtained and installed in the JDK/JRE. See the
+            Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the <a href="http://www.oracle.com/technetwork/java/javase/downloads/index.html">JCE Unlimited Strength Jurisdiction Policy Files</a> must be obtained and installed in the JDK/JRE. See the
             <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html">JCA Providers Documentation</a> for more information.
             </p>
 
@@ -535,25 +535,25 @@ ssl.key.password=test1234</code></pre>
                 <li><h5><a id="security_jaas_broker"
                     href="#security_jaas_broker">JAAS configuration for Kafka brokers</a></h5>
 
-                    <p><code>KafkaServer</code> is the section name in the JAAS file used by each
+                    <p><tt>KafkaServer</tt> is the section name in the JAAS file used by each
                     KafkaServer/Broker. This section provides SASL configuration options
                     for the broker including any SASL client connections made by the broker
                     for inter-broker communication. If multiple listeners are configured to use
                     SASL, the section name may be prefixed with the listener name in lower-case
-                    followed by a period, e.g. <code>sasl_ssl.KafkaServer</code>.</p>
+                    followed by a period, e.g. <tt>sasl_ssl.KafkaServer</tt>.</p>
 
-                    <p><code>Client</code> section is used to authenticate a SASL connection with
+                    <p><tt>Client</tt> section is used to authenticate a SASL connection with
                     zookeeper. It also allows the brokers to set SASL ACL on zookeeper
                     nodes which locks these nodes down so that only the brokers can
                     modify it. It is necessary to have the same principal name across all
                     brokers. If you want to use a section name other than Client, set the
-                    system property <code>zookeeper.sasl.clientconfig</code> to the appropriate
-                    name (<i>e.g.</i>, <code>-Dzookeeper.sasl.clientconfig=ZkClient</code>).</p>
+                    system property <tt>zookeeper.sasl.clientconfig</tt> to the appropriate
+                    name (<i>e.g.</i>, <tt>-Dzookeeper.sasl.clientconfig=ZkClient</tt>).</p>
 
                     <p>ZooKeeper uses "zookeeper" as the service name by default. If you
                     want to change this, set the system property
-                    <code>zookeeper.sasl.client.username</code> to the appropriate name
-                    (<i>e.g.</i>, <code>-Dzookeeper.sasl.client.username=zk</code>).</p>
+                    <tt>zookeeper.sasl.client.username</tt> to the appropriate name
+                    (<i>e.g.</i>, <tt>-Dzookeeper.sasl.client.username=zk</tt>).</p>
 
                     <p>Brokers may also configure JAAS using the broker configuration property <code>sasl.jaas.config</code>.
                     The property name must be prefixed with the listener prefix including the SASL mechanism,
@@ -609,8 +609,8 @@ listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.p
                         <li><h6 class="anchor-heading"><a id="security_client_staticjaas" class="anchor-link"></a><a href="#security_client_staticjaas">JAAS configuration using static config file</a></h6>
                             To configure SASL authentication on the clients using static JAAS config file:
                             <ol>
-                                <li>Add a JAAS config file with a client login section named <code>KafkaClient</code>. Configure
-                                    a login module in <code>KafkaClient</code> for the selected mechanism as described in the examples
+                                <li>Add a JAAS config file with a client login section named <tt>KafkaClient</tt>. Configure
+                                    a login module in <tt>KafkaClient</tt> for the selected mechanism as described in the examples
                                     for setting up <a href="#security_sasl_kerberos_clientconfig">GSSAPI (Kerberos)</a>,
                                     <a href="#security_sasl_plain_clientconfig">PLAIN</a>,
                                     <a href="#security_sasl_scram_clientconfig">SCRAM</a> or
@@ -719,7 +719,7 @@ Client {
     principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
 };</code></pre>
 
-                            <code>KafkaServer</code> section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It
+                            <tt>KafkaServer</tt> section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It
                             allows the broker to login using the keytab specified in this section. See <a href="#security_jaas_broker">notes</a> for more details on Zookeeper SASL configuration.
                         </li>
                         <li>Pass the JAAS and optionally the krb5 file locations as JVM parameters to each Kafka broker (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a> for more details):
@@ -760,7 +760,7 @@ sasl.enabled.mechanisms=GSSAPI</code></pre>
 
                             JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
                             as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
-                            <code>KafkaClient</code>. This option allows only one user for all client connections from a JVM.</li>
+                            <tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</li>
                         <li>Make sure the keytabs configured in the JAAS configuration are readable by the operating system user who is starting kafka client.</li>
                         <li>Optionally pass the krb5 file locations as JVM parameters to each client JVM (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a> for more details):
                             <pre class="line-numbers"><code class="language-bash">-Djava.security.krb5.conf=/etc/kafka/krb5.conf</code></pre></li>
@@ -788,9 +788,9 @@ sasl.kerberos.service.name=kafka</code></pre></li>
     user_admin="admin-secret"
     user_alice="alice-secret";
 };</code></pre>
-                            This configuration defines two users (<i>admin</i> and <i>alice</i>). The properties <code>username</code> and <code>password</code>
-                            in the <code>KafkaServer</code> section are used by the broker to initiate connections to other brokers. In this example,
-                            <i>admin</i> is the user for inter-broker communication. The set of properties <code>user_<i>userName</i></code> defines
+                            This configuration defines two users (<i>admin</i> and <i>alice</i>). The properties <tt>username</tt> and <tt>password</tt>
+                            in the <tt>KafkaServer</tt> section are used by the broker to initiate connections to other brokers. In this example,
+                            <i>admin</i> is the user for inter-broker communication. The set of properties <tt>user_<i>userName</i></tt> defines
                             the passwords for all users that connect to the broker and the broker validates all client connections including
                             those from other brokers using these properties.</li>
                         <li>Pass the JAAS config file location as JVM parameter to each Kafka broker:
@@ -812,14 +812,14 @@ sasl.enabled.mechanisms=PLAIN</code></pre></li>
                             <pre class="line-numbers"><code class="language-text">sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
     username="alice" \
     password="alice-secret";</code></pre>
-                            <p>The options <code>username</code> and <code>password</code> are used by clients to configure
+                            <p>The options <tt>username</tt> and <tt>password</tt> are used by clients to configure
                                 the user for client connections. In this example, clients connect to the broker as user <i>alice</i>.
                                 Different clients within a JVM may connect as different users by specifying different user names
                                 and passwords in <code>sasl.jaas.config</code>.</p>
 
                             <p>JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
                                 as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
-                                <code>KafkaClient</code>. This option allows only one user for all client connections from a JVM.</p></li>
+                                <tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</p></li>
                         <li>Configure the following properties in producer.properties or consumer.properties:
                             <pre class="line-numbers"><code class="language-text">security.protocol=SASL_SSL
 sasl.mechanism=PLAIN</code></pre></li>
@@ -853,7 +853,7 @@ sasl.mechanism=PLAIN</code></pre></li>
             <ol>
                 <li><h5 class="anchor-heading"><a id="security_sasl_scram_credentials" class="anchor-link"></a><a href="#security_sasl_scram_credentials">Creating SCRAM Credentials</a></h5>
                     <p>The SCRAM implementation in Kafka uses Zookeeper as credential store. Credentials can be created in
-                        Zookeeper using <code>kafka-configs.sh</code>. For each SCRAM mechanism enabled, credentials must be created
+                        Zookeeper using <tt>kafka-configs.sh</tt>. For each SCRAM mechanism enabled, credentials must be created
                         by adding a config with the mechanism name. Credentials for inter-broker communication must be created
                         before Kafka brokers are started. Client credentials may be created and updated dynamically and updated
                         credentials will be used to authenticate new connections.</p>
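+                    <p>For example (the user name and password are illustrative), a SCRAM-SHA-256 credential for user <i>alice</i> might be created as follows:</p>
+                    <pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret]' --entity-type users --entity-name alice</code></pre>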
@@ -877,7 +877,7 @@ sasl.mechanism=PLAIN</code></pre></li>
     username="admin"
     password="admin-secret";
 };</code></pre>
-                            The properties <code>username</code> and <code>password</code> in the <code>KafkaServer</code> section are used by
+                            The properties <tt>username</tt> and <tt>password</tt> in the <tt>KafkaServer</tt> section are used by
                             the broker to initiate connections to other brokers. In this example, <i>admin</i> is the user for
                             inter-broker communication.</li>
                         <li>Pass the JAAS config file location as JVM parameter to each Kafka broker:
@@ -900,14 +900,14 @@ sasl.enabled.mechanisms=SCRAM-SHA-256 (or SCRAM-SHA-512)</code></pre></li>
     username="alice" \
     password="alice-secret";</code></pre>
 
-                            <p>The options <code>username</code> and <code>password</code> are used by clients to configure
+                            <p>The options <tt>username</tt> and <tt>password</tt> are used by clients to configure
                                 the user for client connections. In this example, clients connect to the broker as user <i>alice</i>.
                                 Different clients within a JVM may connect as different users by specifying different user names
                                 and passwords in <code>sasl.jaas.config</code>.</p>
 
                             <p>JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
                                 as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
-                                <code>KafkaClient</code>. This option allows only one user for all client connections from a JVM.</p></li>
+                                <tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</p></li>
                         <li>Configure the following properties in producer.properties or consumer.properties:
                             <pre class="line-numbers"><code class="language-text">security.protocol=SASL_SSL
 sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)</code></pre></li>
@@ -948,9 +948,9 @@ sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)</code></pre></li>
     org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
     unsecuredLoginStringClaim_sub="admin";
 };</code></pre>
-                            The property <code>unsecuredLoginStringClaim_sub</code> in the <code>KafkaServer</code> section is used by
+                            The property <tt>unsecuredLoginStringClaim_sub</tt> in the <tt>KafkaServer</tt> section is used by
                             the broker when it initiates connections to other brokers. In this example, <i>admin</i> will appear in the
-                            subject (<code>sub</code>) claim and will be the user for inter-broker communication.</li>
+                            subject (<tt>sub</tt>) claim and will be the user for inter-broker communication.</li>
                         <li>Pass the JAAS config file location as JVM parameter to each Kafka broker:
                             <pre class="line-numbers"><code class="language-bash">-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</code></pre></li>
                         <li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>. For example:
@@ -970,15 +970,15 @@ sasl.enabled.mechanisms=OAUTHBEARER</code></pre></li>
                             <pre class="line-numbers"><code class="language-text">sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
     unsecuredLoginStringClaim_sub="alice";</code></pre>
 
-                            <p>The option <code>unsecuredLoginStringClaim_sub</code> is used by clients to configure
-                                the subject (<code>sub</code>) claim, which determines the user for client connections.
+                            <p>The option <tt>unsecuredLoginStringClaim_sub</tt> is used by clients to configure
+                                the subject (<tt>sub</tt>) claim, which determines the user for client connections.
                                 In this example, clients connect to the broker as user <i>alice</i>.
-                                Different clients within a JVM may connect as different users by specifying different subject (<code>sub</code>)
+                                Different clients within a JVM may connect as different users by specifying different subject (<tt>sub</tt>)
                                 claims in <code>sasl.jaas.config</code>.</p>
 
                             <p>JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
                                 as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
-                                <code>KafkaClient</code>. This option allows only one user for all client connections from a JVM.</p></li>
+                                <tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</p></li>
                         <li>Configure the following properties in producer.properties or consumer.properties:
                             <pre class="line-numbers"><code class="language-text">security.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
 sasl.mechanism=OAUTHBEARER</code></pre></li>
@@ -997,48 +997,48 @@ sasl.mechanism=OAUTHBEARER</code></pre></li>
                                     <th>Documentation</th>
                                 </tr>
                                 <tr>
-                                    <td><code>unsecuredLoginStringClaim_&lt;claimname&gt;="value"</code></td>
-                                    <td>Creates a <code>String</code> claim with the given name and value. Any valid
-                                        claim name can be specified except '<code>iat</code>' and '<code>exp</code>' (these are
+                                    <td><tt>unsecuredLoginStringClaim_&lt;claimname&gt;="value"</tt></td>
+                                    <td>Creates a <tt>String</tt> claim with the given name and value. Any valid
+                                        claim name can be specified except '<tt>iat</tt>' and '<tt>exp</tt>' (these are
                                         automatically generated).</td>
                                 </tr>
                                 <tr>
-                                    <td><code>unsecuredLoginNumberClaim_&lt;claimname&gt;="value"</code></td>
-                                    <td>Creates a <code>Number</code> claim with the given name and value. Any valid
-                                        claim name can be specified except '<code>iat</code>' and '<code>exp</code>' (these are
+                                    <td><tt>unsecuredLoginNumberClaim_&lt;claimname&gt;="value"</tt></td>
+                                    <td>Creates a <tt>Number</tt> claim with the given name and value. Any valid
+                                        claim name can be specified except '<tt>iat</tt>' and '<tt>exp</tt>' (these are
                                         automatically generated).</td>
                                 </tr>
                                 <tr>
-                                    <td><code>unsecuredLoginListClaim_&lt;claimname&gt;="value"</code></td>
-                                    <td>Creates a <code>String List</code> claim with the given name and values parsed
+                                    <td><tt>unsecuredLoginListClaim_&lt;claimname&gt;="value"</tt></td>
+                                    <td>Creates a <tt>String List</tt> claim with the given name and values parsed
                                         from the given value where the first character is taken as the delimiter. For
-                                        example: <code>unsecuredLoginListClaim_fubar="|value1|value2"</code>. Any valid
-                                        claim name can be specified except '<code>iat</code>' and '<code>exp</code>' (these are
+                                        example: <tt>unsecuredLoginListClaim_fubar="|value1|value2"</tt>. Any valid
+                                        claim name can be specified except '<tt>iat</tt>' and '<tt>exp</tt>' (these are
                                         automatically generated).</td>
                                 </tr>
                                 <tr>
-                                    <td><code>unsecuredLoginExtension_&lt;extensionname&gt;="value"</code></td>
-                                    <td>Creates a <code>String</code> extension with the given name and value.
-                                        For example: <code>unsecuredLoginExtension_traceId="123"</code>. A valid extension name
+                                    <td><tt>unsecuredLoginExtension_&lt;extensionname&gt;="value"</tt></td>
+                                    <td>Creates a <tt>String</tt> extension with the given name and value.
+                                        For example: <tt>unsecuredLoginExtension_traceId="123"</tt>. A valid extension name
                                         is any sequence of lowercase or uppercase alphabet characters. In addition, the "auth" extension name is reserved.
                                         A valid extension value is any combination of characters with ASCII codes 1-127.
                                 </tr>
                                 <tr>
-                                    <td><code>unsecuredLoginPrincipalClaimName</code></td>
-                                    <td>Set to a custom claim name if you wish the name of the <code>String</code>
-                                        claim holding the principal name to be something other than '<code>sub</code>'.</td>
+                                    <td><tt>unsecuredLoginPrincipalClaimName</tt></td>
+                                    <td>Set to a custom claim name if you wish the name of the <tt>String</tt>
+                                        claim holding the principal name to be something other than '<tt>sub</tt>'.</td>
                                 </tr>
                                 <tr>
-                                    <td><code>unsecuredLoginLifetimeSeconds</code></td>
+                                    <td><tt>unsecuredLoginLifetimeSeconds</tt></td>
                                     <td>Set to an integer value if the token expiration is to be set to something
                                         other than the default value of 3600 seconds (which is 1 hour). The
-                                        '<code>exp</code>' claim will be set to reflect the expiration time.</td>
+                                        '<tt>exp</tt>' claim will be set to reflect the expiration time.</td>
                                 </tr>
                                 <tr>
-                                    <td><code>unsecuredLoginScopeClaimName</code></td>
-                                    <td>Set to a custom claim name if you wish the name of the <code>String</code> or
-                                        <code>String List</code> claim holding any token scope to be something other than
-                                        '<code>scope</code>'.</td>
+                                    <td><tt>unsecuredLoginScopeClaimName</tt></td>
+                                    <td>Set to a custom claim name if you wish the name of the <tt>String</tt> or
+                                        <tt>String List</tt> claim holding any token scope to be something other than
+                                        '<tt>scope</tt>'.</td>
                                 </tr>
                             </table>
                         </li>
@@ -1053,25 +1053,25 @@ sasl.mechanism=OAUTHBEARER</code></pre></li>
                                     <th>Documentation</th>
                                 </tr>
                                 <tr>
-                                    <td><code>unsecuredValidatorPrincipalClaimName="value"</code></td>
-                                    <td>Set to a non-empty value if you wish a particular <code>String</code> claim
+                                    <td><tt>unsecuredValidatorPrincipalClaimName="value"</tt></td>
+                                    <td>Set to a non-empty value if you wish a particular <tt>String</tt> claim
                                         holding a principal name to be checked for existence; the default is to check
-                                        for the existence of the '<code>sub</code>' claim.</td>
+                                        for the existence of the '<tt>sub</tt>' claim.</td>
                                 </tr>
                                 <tr>
-                                    <td><code>unsecuredValidatorScopeClaimName="value"</code></td>
-                                    <td>Set to a custom claim name if you wish the name of the <code>String</code> or
-                                        <code>String List</code> claim holding any token scope to be something other than
-                                        '<code>scope</code>'.</td>
+                                    <td><tt>unsecuredValidatorScopeClaimName="value"</tt></td>
+                                    <td>Set to a custom claim name if you wish the name of the <tt>String</tt> or
+                                        <tt>String List</tt> claim holding any token scope to be something other than
+                                        '<tt>scope</tt>'.</td>
                                 </tr>
                                 <tr>
-                                    <td><code>unsecuredValidatorRequiredScope="value"</code></td>
+                                    <td><tt>unsecuredValidatorRequiredScope="value"</tt></td>
                                     <td>Set to a space-delimited list of scope values if you wish the
-                                        <code>String/String List</code> claim holding the token scope to be checked to
+                                        <tt>String/String List</tt> claim holding the token scope to be checked to
                                         make sure it contains certain values.</td>
                                 </tr>
                                 <tr>
-                                    <td><code>unsecuredValidatorAllowableClockSkewMs="value"</code></td>
+                                    <td><tt>unsecuredValidatorAllowableClockSkewMs="value"</tt></td>
                                     <td>Set to a positive integer value if you wish to allow up to some number of
                                         positive milliseconds of clock skew (the default is 0).</td>
                                 </tr>
@@ -1094,33 +1094,33 @@ sasl.mechanism=OAUTHBEARER</code></pre></li>
                             <th>Producer/Consumer/Broker Configuration Property</th>
                         </tr>
                         <tr>
-                            <td><code>sasl.login.refresh.window.factor</code></td>
+                            <td><tt>sasl.login.refresh.window.factor</tt></td>
                         </tr>
                         <tr>
-                            <td><code>sasl.login.refresh.window.jitter</code></td>
+                            <td><tt>sasl.login.refresh.window.jitter</tt></td>
                         </tr>
                         <tr>
-                            <td><code>sasl.login.refresh.min.period.seconds</code></td>
+                            <td><tt>sasl.login.refresh.min.period.seconds</tt></td>
                         </tr>
                         <tr>
-                            <td><code>sasl.login.refresh.min.buffer.seconds</code></td>
+                            <td><tt>sasl.login.refresh.min.buffer.seconds</tt></td>
                         </tr>
                     </table>
                 </li>
                 <li><h5><a id="security_sasl_oauthbearer_prod" href="#security_sasl_oauthbearer_prod">Secure/Production Use of SASL/OAUTHBEARER</a></h5>
                     Production use cases will require writing an implementation of
-                    <code>org.apache.kafka.common.security.auth.AuthenticateCallbackHandler</code> that can handle an instance of
-                    <code>org.apache.kafka.common.security.oauthbearer.OAuthBearerTokenCallback</code> and declaring it via either the
-                    <code>sasl.login.callback.handler.class</code> configuration option for a
+                    <tt>org.apache.kafka.common.security.auth.AuthenticateCallbackHandler</tt> that can handle an instance of
+                    <tt>org.apache.kafka.common.security.oauthbearer.OAuthBearerTokenCallback</tt> and declaring it via either the
+                    <tt>sasl.login.callback.handler.class</tt> configuration option for a
                     non-broker client or via the
-                    <code>listener.name.sasl_ssl.oauthbearer.sasl.login.callback.handler.class</code>
+                    <tt>listener.name.sasl_ssl.oauthbearer.sasl.login.callback.handler.class</tt>
                     configuration option for brokers (when SASL/OAUTHBEARER is the inter-broker
                     protocol).
                     <p>
                         Production use cases will also require writing an implementation of
-                        <code>org.apache.kafka.common.security.auth.AuthenticateCallbackHandler</code> that can handle an instance of
-                        <code>org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallback</code> and declaring it via the
-                        <code>listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class</code>
+                        <tt>org.apache.kafka.common.security.auth.AuthenticateCallbackHandler</tt> that can handle an instance of
+                        <tt>org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallback</tt> and declaring it via the
+                        <tt>listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class</tt>
                         broker configuration option.
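+                    <p>As an illustrative sketch (the handler class names are hypothetical placeholders for your own implementations), the resulting configuration might look like:</p>
+                    <pre class="line-numbers"><code class="language-text"># non-broker client
+sasl.login.callback.handler.class=com.example.MyOAuthBearerLoginCallbackHandler
+# broker, when SASL/OAUTHBEARER is the inter-broker protocol
+listener.name.sasl_ssl.oauthbearer.sasl.login.callback.handler.class=com.example.MyOAuthBearerLoginCallbackHandler
+listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class=com.example.MyOAuthBearerValidatorCallbackHandler</code></pre>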
                 </li>
                 <li><h5><a id="security_sasl_oauthbearer_security" href="#security_sasl_oauthbearer_security">Security Considerations for SASL/OAUTHBEARER</a></h5>
@@ -1138,7 +1138,7 @@ sasl.mechanism=OAUTHBEARER</code></pre></li>
 
         <li><h4 class="anchor-heading"><a id="security_sasl_multimechanism" class="anchor-link"></a><a href="#security_sasl_multimechanism">Enabling multiple SASL mechanisms in a broker</a></h4>
             <ol>
-                <li>Specify configuration for the login modules of all enabled mechanisms in the <code>KafkaServer</code> section of the JAAS config file. For example:
+                <li>Specify configuration for the login modules of all enabled mechanisms in the <tt>KafkaServer</tt> section of the JAAS config file. For example:
                     <pre class="line-numbers"><code class="language-text">KafkaServer {
     com.sun.security.auth.module.Krb5LoginModule required
     useKeyTab=true
@@ -1165,12 +1165,12 @@ sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechani
         <li><h4 class="anchor-heading"><a id="saslmechanism_rolling_upgrade" class="anchor-link"></a><a href="#saslmechanism_rolling_upgrade">Modifying SASL mechanism in a Running Cluster</a></h4>
             <p>SASL mechanism can be modified in a running cluster using the following sequence:</p>
             <ol>
-                <li>Enable new SASL mechanism by adding the mechanism to <code>sasl.enabled.mechanisms</code> in server.properties for each broker. Update JAAS config file to include both
+                <li>Enable new SASL mechanism by adding the mechanism to <tt>sasl.enabled.mechanisms</tt> in server.properties for each broker. Update JAAS config file to include both
                     mechanisms as described <a href="#security_sasl_multimechanism">here</a>. Incrementally bounce the cluster nodes.</li>
                 <li>Restart clients using the new mechanism.</li>
-                <li>To change the mechanism of inter-broker communication (if this is required), set <code>sasl.mechanism.inter.broker.protocol</code> in server.properties to the new mechanism and
+                <li>To change the mechanism of inter-broker communication (if this is required), set <tt>sasl.mechanism.inter.broker.protocol</tt> in server.properties to the new mechanism and
                     incrementally bounce the cluster again.</li>
-                <li>To remove old mechanism (if this is required), remove the old mechanism from <code>sasl.enabled.mechanisms</code> in server.properties and remove the entries for the
+                <li>To remove old mechanism (if this is required), remove the old mechanism from <tt>sasl.enabled.mechanisms</tt> in server.properties and remove the entries for the
                     old mechanism from JAAS config file. Incrementally bounce the cluster again.</li>
             </ol>
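+            <p>As an illustrative sketch, while both the old and the new mechanism are enabled (step 1 above), each broker's server.properties might contain:</p>
+            <pre class="line-numbers"><code class="language-text">sasl.enabled.mechanisms=GSSAPI,SCRAM-SHA-256
+sasl.mechanism.inter.broker.protocol=GSSAPI</code></pre>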
         </li>
@@ -1186,7 +1186,7 @@ sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechani
             <p>Typical steps for delegation token usage are:</p>
             <ol>
                 <li>User authenticates with the Kafka cluster via SASL or SSL, and obtains a delegation token. This can be done
-                    using Admin APIs or using <code>kafka-delegation-tokens.sh</code> script.</li>
+                    using Admin APIs or using <tt>kafka-delegation-tokens.sh</tt> script.</li>
                 <li>User securely passes the delegation token to Kafka clients for authenticating with the Kafka cluster.</li>
                 <li>Token owner/renewer can renew/expire the delegation tokens.</li>
             </ol>
@@ -1194,7 +1194,7 @@ sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechani
             <ol>
                 <li><h5 class="anchor-heading"><a id="security_token_management" class="anchor-link"></a><a href="#security_token_management">Token Management</a></h5>
                     <p> A secret is used to generate and verify delegation tokens. This is supplied using config
-                        option <code>delegation.token.secret.key</code>. The same secret key must be configured across all the brokers.
+                        option <tt>delegation.token.secret.key</tt>. The same secret key must be configured across all the brokers.
                         If using Kafka with KRaft the controllers must also be configured with the secret using the same config option.
                         If the secret is not set or set to empty string, delegation token authentication and API operations will fail.</p>
 
@@ -1206,21 +1206,21 @@ sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechani
                         We intend to make these configurable in a future Kafka release.</p>
 
                     <p>A token has a current life, and a maximum renewable life. By default, tokens must be renewed once every 24 hours
-                        for up to 7 days. These can be configured using <code>delegation.token.expiry.time.ms</code>
-                        and <code>delegation.token.max.lifetime.ms</code> config options.</p>
+                        for up to 7 days. These can be configured using <tt>delegation.token.expiry.time.ms</tt>
+                        and <tt>delegation.token.max.lifetime.ms</tt> config options.</p>
 
                     <p>Tokens can also be cancelled explicitly.  If a token is not renewed by the token’s expiration time or if token
                         is beyond the max life time, it will be deleted from all broker caches as well as from zookeeper.</p>
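+                    <p>As a sketch (the secret is a placeholder and the values shown correspond to the documented defaults of 24 hours and 7 days), the relevant broker settings are:</p>
+                    <pre class="line-numbers"><code class="language-text">delegation.token.secret.key=a-long-random-secret-shared-by-all-brokers
+delegation.token.expiry.time.ms=86400000
+delegation.token.max.lifetime.ms=604800000</code></pre>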
                 </li>
 
                 <li><h5 class="anchor-heading"><a id="security_sasl_create_tokens" class="anchor-link"></a><a href="#security_sasl_create_tokens">Creating Delegation Tokens</a></h5>
-                    <p>Tokens can be created by using Admin APIs or using <code>kafka-delegation-tokens.sh</code> script.
+                    <p>Tokens can be created by using Admin APIs or using <tt>kafka-delegation-tokens.sh</tt> script.
                         Delegation token requests (create/renew/expire/describe) should be issued only on SASL or SSL authenticated channels.
                         Tokens can not be requests if the initial authentication is done through delegation token.
-                        A token can be created by the user for that user or others as well by specifying the <code>--owner-principal</code> parameter.
+                        A token can be created by the user for that user or others as well by specifying the <tt>--owner-principal</tt> parameter.
                         Owner/Renewers can renew or expire tokens. Owner/renewers can always describe their own tokens.
                         To describe other tokens, a DESCRIBE_TOKEN permission needs to be added on the User resource representing the owner of the token.
-                        <code>kafka-delegation-tokens.sh</code> script examples are given below.</p>
+                        <tt>kafka-delegation-tokens.sh</tt> script examples are given below.</p>
                     <p>Create a delegation token:
                     <pre class="line-numbers"><code class="language-bash">&gt; bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --create   --max-life-time-period -1 --command-config client.properties --renewer-principal User:user1</code></pre>
                     <p>Create a delegation token for a different owner:
@@ -1246,14 +1246,14 @@ sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechani
     password="lAYYSFmLs4bTjf+lTZ1LCHR/ZZFNA==" \
     tokenauth="true";</code></pre>
 
-                            <p>The options <code>username</code> and <code>password</code> are used by clients to configure the token id and
-                                token HMAC. And the option <code>tokenauth</code> is used to indicate the server about token authentication.
+                            <p>The options <tt>username</tt> and <tt>password</tt> are used by clients to configure the token id and
+                                token HMAC. And the option <tt>tokenauth</tt> is used to indicate the server about token authentication.
                                 In this example, clients connect to the broker using token id: <i>tokenID123</i>. Different clients within a
                                 JVM may connect using different tokens by specifying different token details in <code>sasl.jaas.config</code>.</p>
 
                             <p>JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
                                 as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
-                                <code>KafkaClient</code>. This option allows only one user for all client connections from a JVM.</p></li>
+                                <tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</p></li>
                     </ol>
                 </li>
 
@@ -1273,7 +1273,7 @@ sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechani
     </ol>
 
     <h3 class="anchor-heading"><a id="security_authz" class="anchor-link"></a><a href="#security_authz">7.5 Authorization and ACLs</a></h3>
-    Kafka ships with a pluggable authorization framework, which is configured with the <code>authorizer.class.name</code> property in the server configuration.
+    Kafka ships with a pluggable authorization framework, which is configured with the <tt>authorizer.class.name</tt> property in the server configuration.
     Configured implementations must extend <code>org.apache.kafka.server.authorizer.Authorizer</code>.
     Kafka provides default implementations which store ACLs in the cluster metadata (either Zookeeper or the KRaft metadata log).
 
@@ -1332,7 +1332,7 @@ DEFAULT</code></pre>
     <h5 class="anchor-heading"><a id="security_authz_sasl" class="anchor-link"></a><a href="#security_authz_sasl">Customizing SASL User Name</a></h5>
 
     By default, the SASL user name will be the primary part of the Kerberos principal. One can change that by setting <code>sasl.kerberos.principal.to.local.rules</code> to a customized rule in server.properties.
-    The format of <code>sasl.kerberos.principal.to.local.rules</code> is a list where each rule works in the same way as the auth_to_local in <a href="https://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html">Kerberos configuration file (krb5.conf)</a>. This also support additional lowercase/uppercase rule, to force the translated result to be all lowercase/uppercase. This is done by adding a "/L" or "/U" to the end of the rule. check below formats for syntax.
+    The format of <code>sasl.kerberos.principal.to.local.rules</code> is a list where each rule works in the same way as the auth_to_local in <a href="http://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html">Kerberos configuration file (krb5.conf)</a>. This also support additional lowercase/uppercase rule, to force the translated result to be all lowercase/uppercase. This is done by adding a "/L" or "/U" to the end of the rule. check below formats for syntax.
     Each rules starts with RULE: and contains an expression as the following formats. See the kerberos documentation for more details.
     <pre class="line-numbers"><code class="language-text">RULE:[n:string](regexp)s/pattern/replacement/
 RULE:[n:string](regexp)s/pattern/replacement/g
@@ -2308,19 +2308,19 @@ security.inter.broker.protocol=SSL</code></pre>
         Use the broker properties file to set TLS configs for brokers as described below.
     </p>
     <p>
-        Use the <code>--zk-tls-config-file &lt;file&gt;</code> option to set TLS configs in the Zookeeper Security Migration Tool.
-        The <code>kafka-acls.sh</code> and <code>kafka-configs.sh</code> CLI tools also support the <code>--zk-tls-config-file &lt;file&gt;</code> option.
+        Use the <tt>--zk-tls-config-file &lt;file&gt;</tt> option to set TLS configs in the Zookeeper Security Migration Tool.
+        The <tt>kafka-acls.sh</tt> and <tt>kafka-configs.sh</tt> CLI tools also support the <tt>--zk-tls-config-file &lt;file&gt;</tt> option.
     </p>
     <p>
-        Use the <code>-zk-tls-config-file &lt;file&gt;</code> option (note the single-dash rather than double-dash)
-        to set TLS configs for the <code>zookeeper-shell.sh</code> CLI tool.
+        Use the <tt>-zk-tls-config-file &lt;file&gt;</tt> option (note the single-dash rather than double-dash)
+        to set TLS configs for the <tt>zookeeper-shell.sh</tt> CLI tool.
     </p>
     <h4 class="anchor-heading"><a id="zk_authz_new" class="anchor-link"></a><a href="#zk_authz_new">7.7.1 New clusters</a></h4>
     <h5 class="anchor-heading"><a id="zk_authz_new_sasl" class="anchor-link"></a><a href="#zk_authz_new_sasl">7.7.1.1 ZooKeeper SASL Authentication</a></h5>
     To enable ZooKeeper SASL authentication on brokers, there are two necessary steps:
     <ol>
         <li> Create a JAAS login file and set the appropriate system property to point to it as described above</li>
-        <li> Set the configuration property <code>zookeeper.set.acl</code> in each broker to true</li>
+        <li> Set the configuration property <tt>zookeeper.set.acl</tt> in each broker to true</li>
     </ol>
 
     The metadata stored in ZooKeeper for the Kafka cluster is world-readable, but can only be modified by the brokers. The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of that data can cause cluster disruption. We also recommend limiting the access to ZooKeeper via network segmentation (only brokers and some admin tools need access to ZooKeeper).
@@ -2333,10 +2333,10 @@ security.inter.broker.protocol=SSL</code></pre>
     hostname verification of the brokers and any CLI tool by ZooKeeper will succeed.
     <p>
         It is possible to use something other than the DN for the identity of mTLS clients by writing a class that
-        extends <code>org.apache.zookeeper.server.auth.X509AuthenticationProvider</code> and overrides the method
-        <code>protected String getClientId(X509Certificate clientCert)</code>.
-        Choose a scheme name and set <code>authProvider.[scheme]</code> in ZooKeeper to be the fully-qualified class name
-        of the custom implementation; then set <code>ssl.authProvider=[scheme]</code> to use it.
+        extends <tt>org.apache.zookeeper.server.auth.X509AuthenticationProvider</tt> and overrides the method
+        <tt>protected String getClientId(X509Certificate clientCert)</tt>.
+        Choose a scheme name and set <tt>authProvider.[scheme]</tt> in ZooKeeper to be the fully-qualified class name
+        of the custom implementation; then set <tt>ssl.authProvider=[scheme]</tt> to use it.
     </p>
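+    <p>
+        As a sketch (the scheme name and class are hypothetical placeholders for a custom implementation), the corresponding
+        ZooKeeper configuration might look like:
+    </p>
+    <pre class="line-numbers"><code class="language-text">authProvider.myScheme=com.example.MyX509AuthenticationProvider
+ssl.authProvider=myScheme</code></pre>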
     Here is a sample (partial) ZooKeeper configuration for enabling TLS authentication.
     These configurations are described in the
@@ -2387,13 +2387,13 @@ ssl.trustStore.password=zk-ts-passwd</code></pre>
         </li>
         <li>Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations (including connecting to the TLS-enabled ZooKeeper port) as required, which enables brokers to authenticate to ZooKeeper. At the end of the rolling restart, brokers are able to manipulate znodes with strict ACLs, but they will not create znodes with those ACLs</li>
         <li>If you enabled mTLS, disable the non-TLS port in ZooKeeper</li>
-        <li>Perform a second rolling restart of brokers, this time setting the configuration parameter <code>zookeeper.set.acl</code> to true, which enables the use of secure ACLs when creating znodes</li>
-        <li>Execute the ZkSecurityMigrator tool. To execute the tool, there is this script: <code>bin/zookeeper-security-migration.sh</code> with <code>zookeeper.acl</code> set to secure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file &lt;file&gt;</code> option if you enable mTLS.</li>
+        <li>Perform a second rolling restart of brokers, this time setting the configuration parameter <tt>zookeeper.set.acl</tt> to true, which enables the use of secure ACLs when creating znodes</li>
+        <li>Execute the ZkSecurityMigrator tool. To execute the tool, there is this script: <tt>bin/zookeeper-security-migration.sh</tt> with <tt>zookeeper.acl</tt> set to secure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file &lt;file&gt;</code> option if you enable mTLS.</li>
     </ol>
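+    <p>As a sketch (the ZooKeeper connect string is illustrative), the migration step above might be run as:</p>
+    <pre class="line-numbers"><code class="language-bash">&gt; bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181</code></pre>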
     <p>It is also possible to turn off authentication in a secure cluster. To do it, follow these steps:</p>
     <ol>
-        <li>Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations, which enables brokers to authenticate, but setting <code>zookeeper.set.acl</code> to false. At the end of the rolling restart, brokers stop creating znodes with secure ACLs, but are still able to authenticate and manipulate all znodes</li>
-        <li>Execute the ZkSecurityMigrator tool. To execute the tool, run this script <code>bin/zookeeper-security-migration.sh</code> with <code>zookeeper.acl</code> set to unsecure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file &lt;file&gt;</code> option if you need to set TLS configuration.</li>
+        <li>Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations, which enables brokers to authenticate, but setting <tt>zookeeper.set.acl</tt> to false. At the end of the rolling restart, brokers stop creating znodes with secure ACLs, but are still able to authenticate and manipulate all znodes</li>
+        <li>Execute the ZkSecurityMigrator tool. To execute the tool, run this script <tt>bin/zookeeper-security-migration.sh</tt> with <tt>zookeeper.acl</tt> set to unsecure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file &lt;file&gt;</code> option if you need to set TLS configuration.</li>
         <li>If you are disabling mTLS, enable the non-TLS port in ZooKeeper</li>
         <li>Perform a second rolling restart of brokers, this time omitting the system property that sets the JAAS login file and/or removing ZooKeeper mutual TLS configuration (including connecting to the non-TLS-enabled ZooKeeper port) as required</li>
         <li>If you are disabling mTLS, disable the TLS port in ZooKeeper</li>
@@ -2415,8 +2415,8 @@ ssl.trustStore.password=zk-ts-passwd</code></pre>
     <h3 class="anchor-heading"><a id="zk_encryption" class="anchor-link"></a><a href="#zk_encryption">7.8 ZooKeeper Encryption</a></h3>
     ZooKeeper connections that use mutual TLS are encrypted.
     Beginning with ZooKeeper version 3.5.7 (the version shipped with Kafka version 2.5) ZooKeeper supports a sever-side config
-    <code>ssl.clientAuth</code> (case-insensitively: <code>want</code>/<code>need</code>/<code>none</code> are the valid options, the default is <code>need</code>),
-    and setting this value to <code>none</code> in ZooKeeper allows clients to connect via a TLS-encrypted connection
+    <tt>ssl.clientAuth</tt> (case-insensitively: <tt>want</tt>/<tt>need</tt>/<tt>none</tt> are the valid options, the default is <tt>need</tt>),
+    and setting this value to <tt>none</tt> in ZooKeeper allows clients to connect via a TLS-encrypted connection
     without presenting their own certificate.  Here is a sample (partial) Kafka Broker configuration for connecting to ZooKeeper with just TLS encryption.
     These configurations are described above in <a href="#brokerconfigs">Broker Configs</a>.
     <pre class="line-numbers"><code class="language-text"># connect to the ZooKeeper port configured for TLS
diff --git a/37/streams/developer-guide/config-streams.html b/37/streams/developer-guide/config-streams.html
index 70dd5ff0..8846f352 100644
--- a/37/streams/developer-guide/config-streams.html
+++ b/37/streams/developer-guide/config-streams.html
@@ -310,23 +310,6 @@ streamsSettings.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);</code></pre>
             <td colspan="2">Default serializer/deserializer for the inner class of windowed values, implementing the <code class="docutils literal"><span class="pre">Serde</span></code> interface.</td>
             <td>null</td>
           </tr>
-          <tr class="row-even"><td>default.dsl.store</td>
-            <td>Low</td>
-            <td colspan="2">
-              [DEPRECATED] The default state store type used by DSL operators. Deprecated in
-              favor of <code>dsl.store.suppliers.class</code>
-              </td>
-            <td><code>ROCKS_DB</code></td>
-          </tr>
-          <tr class="row-odd"><td>dsl.store.suppliers.class</td>
-            <td>Low</td>
-            <td colspan="2">
-              Defines a default state store implementation to be used by any stateful DSL operator
-              that has not explicitly configured the store implementation type. Must implement
-              the <code>org.apache.kafka.streams.state.DslStoreSuppliers</code> interface.
-            </td>
-            <td><code>BuiltInDslStoreSuppliers.RocksDBDslStoreSuppliers</code></td>
-          </tr>
           <tr class="row-even"><td>max.task.idle.ms</td>
             <td>Medium</td>
             <td colspan="2">
@@ -730,7 +713,6 @@ streamsConfiguration.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
             <ul class="simple">
               <li><code class="docutils literal"><span class="pre">none</span></code>. This is the default value which means rack aware task assignment will be disabled.</li>
               <li><code class="docutils literal"><span class="pre">min_traffic</span></code>. This settings means that the rack aware task assigner will compute an assignment which tries to minimize cross rack traffic.</li>
-              <li><code class="docutils literal"><span class="pre">balance_subtopology</span></code>. This settings means that the rack aware task assigner will compute an assignment which will try to balance tasks from same subtopology to different clients and minimize cross rack traffic on top of that.</li>
             </ul>
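            <p>As a minimal configuration sketch (not part of the reference table itself), the strategy can be set through the regular Streams properties. The application id, bootstrap servers, and rack value below are placeholders, and rack information is assumed to be supplied through the embedded consumer's <code>client.rack</code> setting:</p>
            <pre><code class="language-java">import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class RackAwareAssignmentConfigSketch {
    public static Properties streamsProperties() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "rack-aware-example");    // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // placeholder
        // Rack of this instance, forwarded to the embedded consumers (placeholder value)
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.CLIENT_RACK_CONFIG), "rack-1");
        // Enable the rack-aware task assigner described in this table
        props.put("rack.aware.assignment.strategy", "min_traffic");
        return props;
    }
}</code></pre>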
             <p>
               This config can be used together with <a class="reference internal" href="#rack-aware-assignment-non-overlap-cost">rack.aware.assignment.non_overlap_cost</a> and
diff --git a/37/streams/developer-guide/dsl-api.html b/37/streams/developer-guide/dsl-api.html
index acd40ad8..ed2afb58 100644
--- a/37/streams/developer-guide/dsl-api.html
+++ b/37/streams/developer-guide/dsl-api.html
@@ -1699,7 +1699,7 @@ KTable&lt;String, Integer&gt; aggregated = groupedTable.aggregate(
                         <p>For equi-joins, input data must be co-partitioned when joining. This ensures that input records with the same key from both sides of the
                             join, are delivered to the same stream task during processing.
                             <strong>It is your responsibility to ensure data co-partitioning when joining</strong>.
-                            Co-partitioning is not required when performing <a class="reference internal" href="#ktable-ktable-fk-join"><span class="std std-ref">KTable-KTable Foreign-Key joins</span></a> and <a class="reference internal" href="#streams_concepts_globalktable"><span class="std std-ref">Global KTable joins</span></a>.
+                            Co-partitioning is not required when performing <a class="reference internal" href="#streams-developer-guide-dsl-joins-ktable-ktable-fk-join"><span class="std std-ref">KTable-KTable Foreign-Key joins</span></a> and <a class="reference internal" href="#streams_concepts_globalktable"><span class="std std-ref">Global KTable joins</span></a>.
                             </p>
                         <p>The requirements for data co-partitioning are:</p>
                         <ul class="simple">
@@ -1724,7 +1724,7 @@ KTable&lt;String, Integer&gt; aggregated = groupedTable.aggregate(
                             not required because <em>all</em> partitions of the <code class="docutils literal"><span class="pre">GlobalKTable</span></code>&#8216;s underlying changelog stream are made available to
                              each <code class="docutils literal"><span class="pre">KafkaStreams</span></code> instance. That is, each instance has a full copy of the changelog stream.  Further, a
                             <code class="docutils literal"><span class="pre">KeyValueMapper</span></code> allows for non-key based joins from the <code class="docutils literal"><span class="pre">KStream</span></code> to the <code class="docutils literal"><span class="pre">GlobalKTable</span></code>.
-                            <a class="reference internal" href="#ktable-ktable-fk-join"><span class="std std-ref">KTable-KTable Foreign-Key joins</span></a> also do not require co-partitioning. Kafka Streams internally ensures co-partitioning for Foreign-Key joins.
+                            <a class="reference internal" href="#streams-developer-guide-dsl-joins-ktable-ktable-fk-join"><span class="std std-ref">KTable-KTable Foreign-Key joins</span></a> also do not require co-partitioning. Kafka Streams internally ensures co-partitioning for Foreign-Key joins.
                             </p>
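                            <p>As an illustrative sketch only (the topic names and the comma-separated String value format are assumptions, not taken from this guide), such a non-key join can look as follows:</p>
                            <pre><code class="language-java">import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

public class GlobalTableJoinSketch {
    public static void buildTopology(final StreamsBuilder builder) {
        // Placeholder topics: orders keyed by order id, customers keyed by customer id
        final KStream&lt;String, String&gt; orders =
            builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()));
        final GlobalKTable&lt;String, String&gt; customers =
            builder.globalTable("customers", Consumed.with(Serdes.String(), Serdes.String()));

        // The KeyValueMapper derives the lookup key (assumed here to be the first
        // comma-separated field of the order value), so the join key does not have
        // to match the stream's record key.
        final KStream&lt;String, String&gt; enriched = orders.join(
            customers,
            (orderId, orderValue) -> orderValue.split(",")[0],
            (orderValue, customerValue) -> orderValue + " / " + customerValue);

        enriched.to("enriched-orders");
    }
}</code></pre>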
 
                         <div class="admonition note">
@@ -1893,7 +1893,7 @@ KStream&lt;String, String&gt; joined = left.leftJoin(right,
                                             join output records.</p>
                                             <blockquote>
                                                 <div><ul class="simple">
-                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
+                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> key or a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
                                                 </ul>
                                                 </div></blockquote>
                                         </li>
@@ -1954,7 +1954,7 @@ KStream&lt;String, String&gt; joined = left.outerJoin(right,
                                             join output records.</p>
                                             <blockquote>
                                                 <div><ul class="simple">
-                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
+                                                    <li>Input records with a <code class="docutils literal"><span class="pre">null</span></code> key or a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
                                                 </ul>
                                                 </div></blockquote>
                                         </li>
@@ -2542,6 +2542,10 @@ Function&lt;Long, Long&gt; foreignKeyExtractor = (x) -&gt; x;
                                   <blockquote>
                                     <div>
                                       <ul class="simple">
+                                      <li>
+                                            Records for which the <code class="docutils literal"><span class="pre">foreignKeyExtractor</span></code> produces <code class="docutils literal"><span class="pre">null</span></code> are ignored and do not trigger a join.
+                                            If you want to join with <code class="docutils literal"><span class="pre">null</span></code> foreign keys, use a suitable sentinel value to do so (e.g., <code class="docutils literal"><span class="pre">"NULL"</span></code> for a String field, or <code class="docutils literal"><span class="pre">-1</span></code> for an auto-incrementing integer field).
+                                        </li>
                                         <li>Input records with a <code class="docutils
                                             literal"><span class="pre">null</span></code>
                                           value are interpreted as <em>tombstones</em>
@@ -2900,7 +2904,7 @@ KStream&lt;String, String&gt; joined = left.leftJoin(right,
                                             <blockquote>
                                                 <div><ul class="simple">
                                                     <li>Only input records for the left side (stream) trigger the join.  Input records for the right side (table) update only the internal right-side join state.</li>
-                                                    <li>Input records for the stream with a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
+                                                    <li>Input records for the stream with a <code class="docutils literal"><span class="pre">null</span></code> key or a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
                                                     <li>Input records for the table with a <code class="docutils literal"><span class="pre">null</span></code> value are interpreted as <em>tombstones</em> for the corresponding key, which indicate the deletion of the key from the table.
                                                         Tombstones do not trigger the join.</li>
                                                 </ul>
@@ -3177,7 +3181,7 @@ KStream&lt;String, String&gt; joined = left.leftJoin(right,
                                             <blockquote>
                                                 <div><ul class="simple">
                                                     <li>Only input records for the left side (stream) trigger the join.  Input records for the right side (table) update only the internal right-side join state.</li>
-                                                    <li>Input records for the stream with a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
+                                                    <li>Input records for the stream with a <code class="docutils literal"><span class="pre">null</span></code> key or a <code class="docutils literal"><span class="pre">null</span></code> value are ignored and do not trigger the join.</li>
                                                     <li>Input records for the table with a <code class="docutils literal"><span class="pre">null</span></code> value are interpreted as <em>tombstones</em>, which indicate the deletion of a record key from the table.  Tombstones do not trigger the
                                                         join.</li>
                                                 </ul>
diff --git a/37/streams/tutorial.html b/37/streams/tutorial.html
index efa6eba6..017d7796 100644
--- a/37/streams/tutorial.html
+++ b/37/streams/tutorial.html
@@ -47,7 +47,7 @@
     -DarchetypeArtifactId=streams-quickstart-java \
     -DarchetypeVersion={{fullDotVersion}} \
     -DgroupId=streams.examples \
-    -DartifactId=streams-quickstart\
+    -DartifactId=streams.examples \
     -Dversion=0.1 \
     -Dpackage=myapps</code></pre>
     <p>
@@ -55,7 +55,7 @@
         Assuming the above parameter values are used, this command will create a project structure that looks like this:
     </p>
 
-    <pre class="line-numbers"><code class="language-bash">&gt; tree streams-quickstart
+    <pre class="line-numbers"><code class="language-bash">&gt; tree streams.examples
    streams.examples
     |-- pom.xml
     |-- src
diff --git a/37/streams/upgrade-guide.html b/37/streams/upgrade-guide.html
index 80f53b85..6f40747d 100644
--- a/37/streams/upgrade-guide.html
+++ b/37/streams/upgrade-guide.html
@@ -133,98 +133,6 @@
         More details about the new config <code>StreamsConfig#TOPOLOGY_OPTIMIZATION</code> can be found in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-295%3A+Add+Streams+Configuration+Allowing+for+Optional+Topology+Optimization">KIP-295</a>.
     </p>
 
-    <h3><a id="streams_api_changes_370" href="#streams_api_changes_370">Streams API changes in 3.7.0</a></h3>
-    <p>
-        We added a new method to <code>KafkaStreams</code>, namely <code>KafkaStreams#setStandbyUpdateListener()</code> in
-        <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-988%3A+Streams+Standby+Update+Listener">KIP-988</a>,
-        in which users can provide their customized implementation of the newly added <code>StandbyUpdateListener</code> interface to continuously monitor changes to standby tasks.
-    </p>
-
-    <p>
-        IQv2 supports <code>RangeQuery</code>, which allows specifying unbounded, bounded, or half-open key ranges that return data in unordered (byte[]-lexicographical) order (per partition).
-        <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-985%3A+Add+reverseRange+and+reverseAll+query+over+kv-store+in+IQv2">KIP-985</a> extends this functionality by adding <code>.withDescendingKeys()</code> and <code>.withAscendingKeys()</code> to allow users to receive data in descending or ascending order.
-    </p>
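    <p>
        A rough sketch of issuing such a descending range query through IQv2 follows; the store name, key/value types, and range bounds are placeholders:
    </p>
    <pre><code class="language-java">import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.query.RangeQuery;
import org.apache.kafka.streams.query.StateQueryRequest;
import org.apache.kafka.streams.query.StateQueryResult;
import org.apache.kafka.streams.state.KeyValueIterator;

public class DescendingRangeQuerySketch {
    public static StateQueryResult&lt;KeyValueIterator&lt;String, Long&gt;&gt; query(final KafkaStreams streams) {
        // Bounded key range, returned per partition in descending key order
        final RangeQuery&lt;String, Long&gt; rangeQuery =
            RangeQuery.&lt;String, Long&gt;withRange("a", "f").withDescendingKeys();

        // "counts-store" is a placeholder store name
        final StateQueryRequest&lt;KeyValueIterator&lt;String, Long&gt;&gt; request =
            StateQueryRequest.inStore("counts-store").withQuery(rangeQuery);

        return streams.query(request);
    }
}</code></pre>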
-    <p>
-        <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-992%3A+Proposal+to+introduce+IQv2+Query+Types%3A+TimestampedKeyQuery+and+TimestampedRangeQuery">KIP-992</a> adds two new query types,
-        namely <code>TimestampedKeyQuery</code> and <code>TimestampedRangeQuery</code>. Both should be used to query a timestamped key-value store, to retrieve a <code>ValueAndTimestamp</code> result.
-        The existing <code>KeyQuery</code> and <code>RangeQuery</code> are changed to always return the value only for timestamped key-value stores.
-    </p>
-
-    <p>
-        IQv2 adds support for <code>MultiVersionedKeyQuery</code> (introduced in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-968%3A+Support+single-key_multi-timestamp+interactive+queries+%28IQv2%29+for+versioned+state+stores">KIP-968</a>)
-        that allows retrieving a set of records from a versioned state store for a given key and a specified time range.
-        Users have to use <code>fromTime(Instant)</code> and/or <code>toTime(Instant)</code> to specify a half or a complete time range.
-    </p>
-
-    <p>
-        IQv2 adds support for <code>VersionedKeyQuery</code> (introduced in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-960%3A+Support+single-key_single-timestamp+interactive+queries+%28IQv2%29+for+versioned+state+stores">KIP-960</a>)
-        that allows retrieving a single record from a versioned state store based on its key and timestamp.
-        Users have to use the <code>asOf(Instant)</code> method to define a query that returns the record's version for the specified timestamp.
-        To be more precise, the key query returns the record with the greatest timestamp <code>&lt;= Instant</code>.
-    </p>
-
-    <p>
-        The non-null key requirements for Kafka Streams join operators were relaxed as part of <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-962%3A+Relax+non-null+key+requirement+in+Kafka+Streams">KIP-962</a>.
-        The behavior of the following operators changed.
-    <ul>
-        <li>left join KStream-KStream: left records with a null key are no longer dropped; instead, ValueJoiner is called with 'null' for the right value.</li>
-        <li>outer join KStream-KStream: left/right records with a null key are no longer dropped; instead, ValueJoiner is called with 'null' for the right/left value.</li>
-        <li>left-foreign-key join KTable-KTable: left records for which the ForeignKeyExtractor returns a null foreign key are no longer dropped; instead, ValueJoiner is called with 'null' for the right value.</li>
-        <li>left join KStream-KTable: left records with a null key are no longer dropped; instead, ValueJoiner is called with 'null' for the right value.</li>
-        <li>left join KStream-GlobalTable: records for which the KeyValueMapper returns 'null' are no longer dropped; instead, ValueJoiner is called with 'null' for the right value.</li>
-    </ul>
-    Stream-DSL users who want to keep the old behavior can prepend a .filter() operator to the aforementioned operators and filter accordingly.
-    The following snippets illustrate how to keep the old behavior.
-    <pre>
-    <code class="java">
-            //left join KStream-KStream
-            leftStream
-            .filter((key, value) -> key != null)
-            .leftJoin(rightStream, (leftValue, rightValue) -> join(leftValue, rightValue), windows);
-
-            //outer join KStream-KStream
-            rightStream
-            .filter((key, value) -> key != null);
-            leftStream
-            .filter((key, value) -> key != null)
-            .outerJoin(rightStream, (leftValue, rightValue) -> join(leftValue, rightValue), windows);
-
-            //left-foreign-key join KTable-KTable
-            Function&lt;String, String&gt; foreignKeyExtractor = leftValue -> ...
-            leftTable
-            .filter((key, value) -> foreignKeyExtractor.apply(value) != null)
-            .leftJoin(rightTable, foreignKeyExtractor, (leftValue, rightValue) -> join(leftValue, rightValue), Named.as("left-foreign-key-table-join"));
-
-            //left join KStream-KTable
-            leftStream
-            .filter((key, value) -> key != null)
-            .leftJoin(kTable, (k, leftValue, rightValue) -> join(leftValue, rightValue));
-
-            //left join KStream-GlobalTable
-            KeyValueMapper&lt;String, String, String&gt; keyValueMapper = (key, value) -> ...;
-            leftStream
-            .filter((key, value) -> keyValueMapper.apply(key,value) != null)
-            .leftJoin(globalTable, keyValueMapper, (leftValue, rightValue) -> join(leftValue, rightValue));
-    </code>
-    </pre>
-    </p>
-
-
-    <p>
-        The <code>default.dsl.store</code> config was deprecated in favor of the new
-        <code>dsl.store.suppliers.class</code> config to allow for custom state store
-        implementations to be configured as the default.
-
-        If you currently specify <code>default.dsl.store=ROCKS_DB</code> or <code>default.dsl.store=IN_MEMORY</code>, replace those
-        configurations with <code>dsl.store.suppliers.class=BuiltInDslStoreSuppliers.RocksDBDslStoreSuppliers.class</code> and
-        <code>dsl.store.suppliers.class=BuiltInDslStoreSuppliers.InMemoryDslStoreSuppliers.class</code>, respectively.
-    </p>
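    <p>
        A minimal sketch of the replacement configuration follows; the package location of <code>BuiltInDslStoreSuppliers</code> (assumed here to be <code>org.apache.kafka.streams.state</code>) as well as the application id and bootstrap servers are assumptions or placeholders:
    </p>
    <pre><code class="language-java">import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.state.BuiltInDslStoreSuppliers;

public class DslStoreSuppliersConfigSketch {
    public static Properties streamsProperties() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dsl-store-example");   // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
        // Replacement for the deprecated default.dsl.store=IN_MEMORY setting
        props.put("dsl.store.suppliers.class",
                  BuiltInDslStoreSuppliers.InMemoryDslStoreSuppliers.class);
        return props;
    }
}</code></pre>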
-
-    <p>
-      A new configuration option <code>balance_subtopology</code> for <code>rack.aware.assignment.strategy</code> was introduced in the 3.7 release.
-      For more information, including how it can be enabled and further configured, see the <a href="/{{version}}/documentation/streams/developer-guide/config-streams.html#rack-aware-assignment-strategy"><b>Kafka Streams Developer Guide</b></a>.
-    </p>
-
     <h3><a id="streams_api_changes_360" href="#streams_api_changes_360">Streams API changes in 3.6.0</a></h3>
     <p>
       Rack aware task assignment was introduced in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-925%3A+Rack+aware+task+assignment+in+Kafka+Streams">KIP-925</a>.
diff --git a/37/toc.html b/37/toc.html
index 24134911..737ef887 100644
--- a/37/toc.html
+++ b/37/toc.html
@@ -27,7 +27,6 @@
                 <li><a href="#quickstart">1.3 Quick Start</a>
                 <li><a href="#ecosystem">1.4 Ecosystem</a>
                 <li><a href="#upgrade">1.5 Upgrading</a>
-                <li><a href="#docker">1.6 Docker</a>
             </ul>
         </li>
         <li><a href="#api">2. APIs</a>
@@ -99,7 +98,7 @@
                     </ul>
                 </li>
                 <li><a href="#datacenters">6.2 Datacenters</a></li>
-                <li><a href="#georeplication">6.3 Geo-Replication (Cross-Cluster Data Mirroring)</a>
+                <li><a href="#georeplication">6.3 Geo-Replication (Cross-Cluster Data Mirroring)</a></li>
                     <ul>
                         <li><a href="#georeplication-overview">Geo-Replication Overview</a></li>
                         <li><a href="#georeplication-flows">What Are Replication Flows</a></li>
@@ -110,7 +109,7 @@
                         <li><a href="#georeplication-monitoring">Monitoring Geo-Replication</a></li>
                     </ul>
                 </li>
-                <li><a href="#multitenancy">6.4 Multi-Tenancy</a>
+                <li><a href="#multitenancy">6.4 Multi-Tenancy</a></li>
                     <ul>
                         <li><a href="#multitenancy-overview">Multi-Tenancy Overview</a></li>
                         <li><a href="#multitenancy-topic-naming">Creating User Spaces (Namespaces)</a></li>
@@ -174,7 +173,7 @@
                     <ul>
                         <li><a href="#tiered_storage_overview">Tiered Storage Overview</a></li>
                         <li><a href="#tiered_storage_config">Configuration</a></li>
-                        <li><a href="#tiered_storage_config_ex">Quick Start Example</a></li>
+                        <li><a href="#tiered_storage_config_ex">Configurations Example</a></li>
                         <li><a href="#tiered_storage_limitation">Limitations</a></li>
                     </ul>
                 </li>
@@ -188,9 +187,9 @@
                 <li><a href="#security_sasl">7.4 Authentication using SASL</a></li>
                 <li><a href="#security_authz">7.5 Authorization and ACLs</a></li>
                 <li><a href="#security_rolling_upgrade">7.6 Incorporating Security Features in a Running Cluster</a></li>
-                <li><a href="#zk_authz">7.7 ZooKeeper Authentication</a>
+                <li><a href="#zk_authz">7.7 ZooKeeper Authentication</a></li>
                 <ul>
-                    <li><a href="#zk_authz_new">New Clusters</a>
+                    <li><a href="#zk_authz_new">New Clusters</a></li>
                     <ul>
                         <li><a href="#zk_authz_new_sasl">ZooKeeper SASL Authentication</a></li>
                         <li><a href="#zk_authz_new_mtls">ZooKeeper Mutual TLS Authentication</a></li>
@@ -205,7 +204,7 @@
         <li><a href="#connect">8. Kafka Connect</a>
             <ul>
                 <li><a href="#connect_overview">8.1 Overview</a></li>
-                <li><a href="#connect_user">8.2 User Guide</a>
+                <li><a href="#connect_user">8.2 User Guide</a></li>
                 <ul>
                     <li><a href="#connect_running">Running Kafka Connect</a></li>
                     <li><a href="#connect_configuring">Configuring Connectors</a></li>
@@ -215,7 +214,7 @@
                     <li><a href="#connect_exactlyonce">Exactly-once support</a></li>
                     <li><a href="#connect_plugindiscovery">Plugin Discovery</a></li>
                 </ul>
-                <li><a href="#connect_development">8.3 Connector Development Guide</a>
+                <li><a href="#connect_development">8.3 Connector Development Guide</a></li>
                 <ul>
                     <li><a href="#connect_concepts">Core Concepts and APIs</a></li>
                     <li><a href="#connect_developing">Developing a Simple Connector</a></li>
diff --git a/37/upgrade.html b/37/upgrade.html
index d3713263..e9a98507 100644
--- a/37/upgrade.html
+++ b/37/upgrade.html
@@ -19,9 +19,10 @@
 
 <script id="upgrade-template" type="text/x-handlebars-template">
 
-<h4><a id="upgrade_3_6_0" href="#upgrade_3_6_0">Upgrading to 3.6.0 from any version 0.8.x through 3.5.x</a></h4>
 
-    <h5><a id="upgrade_360_zk" href="#upgrade_360_zk">Upgrading ZooKeeper-based clusters</a></h5>
+<h4><a id="upgrade_3_6_1" href="#upgrade_3_6_1">Upgrading to 3.6.1 from any version 0.8.x through 3.5.x</a></h4>
+
+    <h5><a id="upgrade_361_zk" href="#upgrade_361_zk">Upgrading ZooKeeper-based clusters</a></h5>
     <p><b>If you are upgrading from a version prior to 2.1.x, please see the note in step 5 below about the change to the schema used to store consumer offsets.
         Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.</b></p>
 
@@ -62,7 +63,7 @@
         </li>
     </ol>
 
-    <h5><a id="upgrade_360_kraft" href="#upgrade_360_kraft">Upgrading KRaft-based clusters</a></h5>
+    <h5><a id="upgrade_361_kraft" href="#upgrade_361_kraft">Upgrading KRaft-based clusters</a></h5>
     <p><b>If you are upgrading from a version prior to 3.3.0, please see the note in step 3 below. Once you have changed the metadata.version to the latest version, it will not be possible to downgrade to a version prior to 3.3-IV0.</b></p>
 
     <p><b>For a rolling upgrade:</b></p>
@@ -117,13 +118,45 @@
             <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes">Tiered Storage Early Access Release Note</a>.
         </li>
         <li>Transaction partition verification (<a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense">KIP-890</a>)
-            has been added to data partitions to prevent hanging transactions. Workloads with compression can experience InvalidRecordExceptions and UnknownServerExceptions.
-            This feature can be disabled by setting <code>transaction.partition.verification.enable</code> to false. Note that the default for 3.6 is true.
-            The configuration can also be updated dynamically and is applied to the broker.
-            This will be fixed in 3.6.1. See <a href="https://issues.apache.org/jira/browse/KAFKA-15653">KAFKA-15653</a> for more details.
+            has been added to data partitions to prevent hanging transactions. This feature is enabled by default and can be disabled by setting <code>transaction.partition.verification.enable</code> to false.
+            The configuration can also be updated dynamically and is applied to the broker. Workloads running on version 3.6.0 with compression can experience
+            InvalidRecordExceptions and UnknownServerExceptions. Upgrading to 3.6.1 or newer, or disabling the feature, fixes the issue (a sketch of updating the configuration dynamically follows this list).
         </li>
     </ul>
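    <p>
        For illustration, a sketch of disabling the verification cluster-wide with the Java <code>Admin</code> client; the bootstrap address is a placeholder, and the same change can be made with the <code>kafka-configs.sh</code> tool:
    </p>
    <pre><code class="language-java">import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DisablePartitionVerificationSketch {
    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder

        try (Admin admin = Admin.create(props)) {
            // An empty resource name targets the cluster-wide dynamic default for all brokers
            final ConfigResource cluster = new ConfigResource(ConfigResource.Type.BROKER, "");
            final AlterConfigOp disable = new AlterConfigOp(
                new ConfigEntry("transaction.partition.verification.enable", "false"),
                AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(cluster, List.of(disable))).all().get();
        }
    }
}</code></pre>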
 
+<h4><a id="upgrade_3_5_2" href="#upgrade_3_5_2">Upgrading to 3.5.2 from any version 0.8.x through 3.4.x</a></h4>
+    All upgrade steps remain the same as for <a href="#upgrade_3_5_0">upgrading to 3.5.0</a>.
+    <h5><a id="upgrade_352_notable" href="#upgrade_352_notable">Notable changes in 3.5.2</a></h5>
+    <ul>
+    <li>
+        When migrating producer ID blocks from ZK to KRaft, duplicate producer IDs could be handed out to
+        transactional or idempotent producers. This can cause long-term problems, since producer IDs are
+        persisted and reused for a long time.
+        See <a href="https://issues.apache.org/jira/browse/KAFKA-15552">KAFKA-15552</a> for more details.
+    </li>
+    <li>
+        In 3.5.0 and 3.5.1, an empty ISR could be returned from the controller after an AlterPartition request
+        during a rolling upgrade. This issue can impact the availability of the topic partition.
+        See <a href="https://issues.apache.org/jira/browse/KAFKA-15353">KAFKA-15353</a> for more details.
+    </li>
+</ul>
+
+<h4><a id="upgrade_3_5_1" href="#upgrade_3_5_1">Upgrading to 3.5.1 from any version 0.8.x through 3.4.x</a></h4>
+    All upgrade steps remain the same as for <a href="#upgrade_3_5_0">upgrading to 3.5.0</a>.
+    <h5><a id="upgrade_351_notable" href="#upgrade_351_notable">Notable changes in 3.5.1</a></h5>
+    <ul>
+    <li>
+        Upgraded the dependency, snappy-java, to a version which is not vulnerable to
+        <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-34455">CVE-2023-34455.</a>
+        You can find more information about the CVE at <a href="https://kafka.apache.org/cve-list#CVE-2023-34455">Kafka CVE list.</a>
+    </li>
+    <li>
+        Fixed a regression introduced in 3.3.0, which caused <code>security.protocol</code> configuration values to be restricted to
+        upper case only. After the fix, <code>security.protocol</code> values are case-insensitive.
+        See <a href="https://issues.apache.org/jira/browse/KAFKA-15053">KAFKA-15053</a> for details; a brief configuration sketch follows this list.
+    </li>
+</ul>
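    <p>
        For illustration, after this fix a client configuration such as the following (placeholder values) is accepted regardless of the case used for the protocol name:
    </p>
    <pre><code class="language-java">import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;

public class SecurityProtocolCaseSketch {
    public static Properties clientProperties() {
        final Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093");   // placeholder
        // Before the fix only the upper-case form "SASL_SSL" was accepted; lower or mixed case now works too
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "sasl_ssl");
        return props;
    }
}</code></pre>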
+
 <h4><a id="upgrade_3_5_0" href="#upgrade_3_5_0">Upgrading to 3.5.0 from any version 0.8.x through 3.4.x</a></h4>
 
     <h5><a id="upgrade_350_zk" href="#upgrade_350_zk">Upgrading ZooKeeper-based clusters</a></h5>
@@ -181,10 +214,8 @@
                 ./bin/kafka-features.sh upgrade --metadata 3.5
             </code>
         </li>
-        <li>Note that cluster metadata downgrade is not supported in this version since it has metadata changes.
-            Every <a href="https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java">MetadataVersion</a>
-            after 3.2.x has a boolean parameter that indicates if there are metadata changes (i.e. <code>IBP_3_3_IV3(7, "3.3", "IV3", true)</code> means this version has metadata changes).
-            Given your current and target versions, a downgrade is only possible if there are no metadata changes in the versions between.</li>
+        <li>Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded.
+            However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.</li>
     </ol>
 
     <h5><a id="upgrade_350_notable" href="#upgrade_350_notable">Notable changes in 3.5.0</a></h5>
@@ -274,10 +305,8 @@
                 ./bin/kafka-features.sh upgrade --metadata 3.4
             </code>
         </li>
-        <li>Note that cluster metadata downgrade is not supported in this version since it has metadata changes.
-            Every <a href="https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java">MetadataVersion</a>
-            after 3.2.x has a boolean parameter that indicates if there are metadata changes (i.e. <code>IBP_3_3_IV3(7, "3.3", "IV3", true)</code> means this version has metadata changes).
-            Given your current and target versions, a downgrade is only possible if there are no metadata changes in the versions between.</li>
+        <li>Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded.
+            However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.</li>
     </ol>
 
 <h5><a id="upgrade_340_notable" href="#upgrade_340_notable">Notable changes in 3.4.0</a></h5>
@@ -344,10 +373,7 @@
         ./bin/kafka-features.sh upgrade --metadata 3.3
         </code>
     </li>
-    <li>Note that cluster metadata downgrade is not supported in this version since it has metadata changes.
-        Every <a href="https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java">MetadataVersion</a>
-        after 3.2.x has a boolean parameter that indicates if there are metadata changes (i.e. <code>IBP_3_3_IV3(7, "3.3", "IV3", true)</code> means this version has metadata changes).
-        Given your current and target versions, a downgrade is only possible if there are no metadata changes in the versions between.</li>
+    <li>Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded. However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.</li>
 </ol>
 
 <h5><a id="upgrade_331_notable" href="#upgrade_331_notable">Notable changes in 3.3.1</a></h5>
@@ -434,7 +460,7 @@
             <a href="https://www.slf4j.org/codes.html#no_tlm">possible compatibility issues originating from the logging framework</a>.</li>
         <li>The example connectors, <code>FileStreamSourceConnector</code> and <code>FileStreamSinkConnector</code>, have been
             removed from the default classpath. To use them in Kafka Connect standalone or distributed mode they need to be
-            explicitly added, for example <code>CLASSPATH=./libs/connect-file-3.2.0.jar ./bin/connect-distributed.sh</code>.</li>
+            explicitly added, for example <code>CLASSPATH=./lib/connect-file-3.2.0.jar ./bin/connect-distributed.sh</code>.</li>
     </ul>
 
 <h4><a id="upgrade_3_1_0" href="#upgrade_3_1_0">Upgrading to 3.1.0 from any version 0.8.x through 3.0.x</a></h4>
@@ -1067,11 +1093,11 @@
         if there are no snapshot files in 3.4 data directory. For more details about the workaround please refer to <a href="https://cwiki.apache.org/confluence/display/ZOOKEEPER/Upgrade+FAQ">ZooKeeper Upgrade FAQ</a>.
     </li>
     <li>
-        An embedded Jetty based <a href="https://zookeeper.apache.org/doc/r3.5.6/zookeeperAdmin.html#sc_adminserver">AdminServer</a> added in ZooKeeper 3.5.
+        An embedded Jetty based <a href="http://zookeeper.apache.org/doc/r3.5.6/zookeeperAdmin.html#sc_adminserver">AdminServer</a> added in ZooKeeper 3.5.
         AdminServer is enabled by default in ZooKeeper and is started on port 8080.
         AdminServer is disabled by default in the ZooKeeper config (<code>zookeeper.properties</code>) provided by the Apache Kafka distribution.
         Make sure to update your local <code>zookeeper.properties</code> file with <code>admin.enableServer=false</code> if you wish to disable the AdminServer.
-        Please refer <a href="https://zookeeper.apache.org/doc/r3.5.6/zookeeperAdmin.html#sc_adminserver">AdminServer config</a> to configure the AdminServer.
+        Please refer <a href="http://zookeeper.apache.org/doc/r3.5.6/zookeeperAdmin.html#sc_adminserver">AdminServer config</a> to configure the AdminServer.
     </li>
 </ol>
 
diff --git a/37/uses.html b/37/uses.html
index 94d22b47..51f16a87 100644
--- a/37/uses.html
+++ b/37/uses.html
@@ -28,7 +28,7 @@ solution for large scale message processing applications.
 In our experience messaging uses are often comparatively low-throughput, but may require low end-to-end latency and often depend on the strong
 durability guarantees Kafka provides.
 <p>
-In this domain Kafka is comparable to traditional messaging systems such as <a href="https://activemq.apache.org">ActiveMQ</a> or
+In this domain Kafka is comparable to traditional messaging systems such as <a href="http://activemq.apache.org">ActiveMQ</a> or
 <a href="https://www.rabbitmq.com">RabbitMQ</a>.
 
 <h4 class="anchor-heading"><a id="uses_website" class="anchor-link"></a><a href="#uses_website">Website Activity Tracking</a></h4>
@@ -66,11 +66,11 @@ Such processing pipelines create graphs of real-time data flows based on the ind
 Starting in 0.10.0.0, a light-weight but powerful stream processing library called <a href="/documentation/streams">Kafka Streams</a>
 is available in Apache Kafka to perform such data processing as described above.
 Apart from Kafka Streams, alternative open source stream processing tools include <a href="https://storm.apache.org/">Apache Storm</a> and
-<a href="https://samza.apache.org/">Apache Samza</a>.
+<a href="http://samza.apache.org/">Apache Samza</a>.
 
 <h4 class="anchor-heading"><a id="uses_eventsourcing" class="anchor-link"></a><a href="#uses_eventsourcing">Event Sourcing</a></h4>
 
-<a href="https://martinfowler.com/eaaDev/EventSourcing.html">Event sourcing</a> is a style of application design where state changes are logged as a
+<a href="http://martinfowler.com/eaaDev/EventSourcing.html">Event sourcing</a> is a style of application design where state changes are logged as a
 time-ordered sequence of records. Kafka's support for very large stored log data makes it an excellent backend for an application built in this style.
 
 <h4 class="anchor-heading"><a id="uses_commitlog" class="anchor-link"></a><a href="#uses_commitlog">Commit Log</a></h4>