Posted to commits@kafka.apache.org by st...@apache.org on 2024/02/24 09:50:55 UTC

(kafka-site) branch asf-site updated: 37: Add latest apache/kafka/3.7 site-docs (#587)

This is an automated email from the ASF dual-hosted git repository.

stanislavkozlovski pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 12e5cf74 37: Add latest apache/kafka/3.7 site-docs (#587)
12e5cf74 is described below

commit 12e5cf7470bb2679cd3173e3797d21012f54b374
Author: Stanislav Kozlovski <st...@outlook.com>
AuthorDate: Sat Feb 24 10:50:49 2024 +0100

    37: Add latest apache/kafka/3.7 site-docs (#587)
    
    This patch adds the latest apache kafka site-docs to the kafka-site repo
---
 37/generated/admin_client_config.html     |   4 +-
 37/generated/connect_config.html          |   4 +-
 37/generated/connect_metrics.html         |   4 +-
 37/generated/consumer_config.html         |   4 +-
 37/generated/kafka_config.html            |   6 +-
 37/generated/mirror_connector_config.html |   4 +-
 37/generated/producer_config.html         |   4 +-
 37/generated/streams_config.html          |   6 +-
 37/ops.html                               | 241 ++++++++++++++++++++++++++----
 37/streams/upgrade-guide.html             |   4 +-
 37/upgrade.html                           | 134 ++++++++++++++++-
 11 files changed, 363 insertions(+), 52 deletions(-)

diff --git a/37/generated/admin_client_config.html b/37/generated/admin_client_config.html
index 853d5350..7697af59 100644
--- a/37/generated/admin_client_config.html
+++ b/37/generated/admin_client_config.html
@@ -284,7 +284,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -304,7 +304,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
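[Editor's note: given the corrected Java 11+ defaults above, a client that must behave identically on Java 8 and Java 11+ can pin these values explicitly instead of relying on the JVM-dependent defaults. This is a hypothetical configuration fragment for illustration, not part of the commit.]

```properties
# Hypothetical admin-client.properties fragment: pin the TLS settings so
# behavior does not depend on which JVM the client runs on.
ssl.enabled.protocols=TLSv1.2,TLSv1.3
ssl.protocol=TLSv1.3
```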
diff --git a/37/generated/connect_config.html b/37/generated/connect_config.html
index 26eba82a..9ae7d712 100644
--- a/37/generated/connect_config.html
+++ b/37/generated/connect_config.html
@@ -344,7 +344,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -364,7 +364,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
diff --git a/37/generated/connect_metrics.html b/37/generated/connect_metrics.html
index 328cd19f..c6077384 100644
--- a/37/generated/connect_metrics.html
+++ b/37/generated/connect_metrics.html
@@ -1,5 +1,5 @@
-[2024-01-08 16:06:18,550] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:694)
-[2024-01-08 16:06:18,554] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:704)
+[2024-02-23 00:02:00,837] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:694)
+[2024-02-23 00:02:00,838] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:704)
 <table class="data-table"><tbody>
 <tr>
 <td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.connect:type=connect-worker-metrics</td></tr>
diff --git a/37/generated/consumer_config.html b/37/generated/consumer_config.html
index eec8081b..b88bacf6 100644
--- a/37/generated/consumer_config.html
+++ b/37/generated/consumer_config.html
@@ -454,7 +454,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -474,7 +474,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
diff --git a/37/generated/kafka_config.html b/37/generated/kafka_config.html
index 6594def5..f50a687f 100644
--- a/37/generated/kafka_config.html
+++ b/37/generated/kafka_config.html
@@ -1357,7 +1357,7 @@
 <p>Specify which version of the inter-broker protocol will be used.<br> This is typically bumped after all brokers have been upgraded to a new version.<br> Examples of valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1. Check MetadataVersion for the full list.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>3.8-IV0</td></tr>
+<tr><th>Default:</th><td>3.7-IV4</td></tr>
 <tr><th>Valid Values:</th><td>[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3 [...]
 <tr><th>Importance:</th><td>medium</td></tr>
 <tr><th>Update Mode:</th><td>read-only</td></tr>
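[Editor's note: the corrected default (3.7-IV4) is what a freshly configured 3.7 broker uses. During a rolling upgrade of a ZooKeeper-based cluster this value is typically pinned at the old version first; the fragment below is a hypothetical illustration, not part of the commit.]

```properties
# Hypothetical server.properties fragment during a rolling upgrade to 3.7:
# keep the inter-broker protocol at the previous version until every broker
# is running the 3.7 binaries, then bump it and restart brokers one by one.
inter.broker.protocol.version=3.6
```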
@@ -2204,7 +2204,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 <tr><th>Update Mode:</th><td>per-broker</td></tr>
@@ -2292,7 +2292,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 <tr><th>Update Mode:</th><td>per-broker</td></tr>
diff --git a/37/generated/mirror_connector_config.html b/37/generated/mirror_connector_config.html
index 761a7326..cea9b7bd 100644
--- a/37/generated/mirror_connector_config.html
+++ b/37/generated/mirror_connector_config.html
@@ -194,7 +194,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -214,7 +214,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
diff --git a/37/generated/producer_config.html b/37/generated/producer_config.html
index f39f2f74..37662232 100644
--- a/37/generated/producer_config.html
+++ b/37/generated/producer_config.html
@@ -384,7 +384,7 @@
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.2,TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
@@ -404,7 +404,7 @@
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' i [...]
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>TLSv1.2</td></tr>
+<tr><th>Default:</th><td>TLSv1.3</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
diff --git a/37/generated/streams_config.html b/37/generated/streams_config.html
index 801e2673..0fd8f100 100644
--- a/37/generated/streams_config.html
+++ b/37/generated/streams_config.html
@@ -285,7 +285,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
 <tr><th>Default:</th><td>none</td></tr>
-<tr><th>Valid Values:</th><td>org.apache.kafka.streams.StreamsConfig$$Lambda$8/83954662@4b85612c</td></tr>
+<tr><th>Valid Values:</th><td>org.apache.kafka.streams.StreamsConfig$$Lambda$21/0x00000008000841f0@5bcab519</td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
 </li>
@@ -561,11 +561,11 @@
 </li>
 <li>
 <h4><a id="upgrade.from"></a><a id="streamsconfigs_upgrade.from" href="#streamsconfigs_upgrade.from">upgrade.from</a></h4>
-<p>Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 3.3 to a newer version it is not required to specify this config. Default is `null`. Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "2.7", "2.8", "3.0", "3.1", "3.2", "3.3", "3.4" (for upgrading from the corresponding old version).</p>
+<p>Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 3.3 to a newer version it is not required to specify this config. Default is `null`. Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "2.7", "2.8", "3.0", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6" (for upgrading from the corresponding old version).</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>[null, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3, 3.4]</td></tr>
+<tr><th>Valid Values:</th><td>[null, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6]</td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
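[Editor's note: as the `upgrade.from` description above states, the config is needed, for example, when upgrading from [2.0, 2.3] to 2.4+. A hypothetical Streams configuration for that case, not part of the commit:]

```properties
# Hypothetical Kafka Streams config for the first rolling bounce when
# upgrading an application from 2.3 to a newer version.
upgrade.from=2.3
# Remove this setting again before the second rolling bounce.
```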
diff --git a/37/ops.html b/37/ops.html
index c4be91cb..52f2f377 100644
--- a/37/ops.html
+++ b/37/ops.html
@@ -1289,12 +1289,16 @@ $ bin/kafka-acls.sh \
 
   <h3 class="anchor-heading"><a id="java" class="anchor-link"></a><a href="#java">6.6 Java Version</a></h3>
 
-  Java 8, Java 11, and Java 17 are supported. Note that Java 8 support has been deprecated since Apache Kafka 3.0 and will be removed in Apache Kafka 4.0.
+  Java 8, Java 11, and Java 17 are supported.
+  <p>
+  Note that support for Java 8 has been deprecated project-wide since Apache Kafka 3.0, and Java 11 support for the broker
+  and tools has been deprecated since Apache Kafka 3.7. Both will be removed in Apache Kafka 4.0.
+  <p>
   Java 11 and later versions perform significantly better if TLS is enabled, so they are highly recommended (they also include a number of other
   performance improvements: G1GC, CRC32C, Compact Strings, Thread-Local Handshakes and more).
-
+  <p>
   From a security perspective, we recommend the latest released patch version as older freely available versions have disclosed security vulnerabilities.
-
+  <p>
   Typical arguments for running Kafka with OpenJDK-based Java implementations (including Oracle JDK) are:
 
   <pre class="line-numbers"><code class="language-text">  -Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC
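[Editor's note: the Kafka start scripts read JVM arguments from environment variables; a sketch of wiring the flags above into a deployment. The flag values are examples from the text, and the commented start command assumes a standard Kafka installation layout.]

```shell
# Sketch: pass the recommended JVM flags to the Kafka start script via the
# environment variables the scripts read (KAFKA_HEAP_OPTS for heap sizing,
# KAFKA_JVM_PERFORMANCE_OPTS for GC and other tuning).
export KAFKA_HEAP_OPTS="-Xmx6g -Xms6g -XX:MetaspaceSize=96m"
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35"
# bin/kafka-server-start.sh config/server.properties
```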
@@ -1892,6 +1896,66 @@ $ bin/kafka-acls.sh \
         <td>Rate of write errors from remote storage per topic. Omitting 'topic=(...)' will yield the all-topic rate</td>
         <td>kafka.server:type=BrokerTopicMetrics,name=RemoteCopyErrorsPerSec,topic=([-.\w]+)</td>
       </tr>
+      <tr>
+        <td>Remote Copy Lag Bytes</td>
+        <td>Bytes which are eligible for tiering, but are not in remote storage yet. Omitting 'topic=(...)' will yield the all-topic sum</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=RemoteCopyLagBytes,topic=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>Remote Copy Lag Segments</td>
+        <td>Segments which are eligible for tiering, but are not in remote storage yet. Omitting 'topic=(...)' will yield the all-topic count</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=RemoteCopyLagSegments,topic=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>Remote Delete Requests Per Sec</td>
+        <td>Rate of delete requests to remote storage per topic. Omitting 'topic=(...)' will yield the all-topic rate</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=RemoteDeleteRequestsPerSec,topic=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>Remote Delete Errors Per Sec</td>
+        <td>Rate of delete errors from remote storage per topic. Omitting 'topic=(...)' will yield the all-topic rate</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=RemoteDeleteErrorsPerSec,topic=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>Remote Delete Lag Bytes</td>
+        <td>Tiered bytes which are eligible for deletion, but have not been deleted yet. Omitting 'topic=(...)' will yield the all-topic sum</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=RemoteDeleteLagBytes,topic=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>Remote Delete Lag Segments</td>
+        <td>Tiered segments which are eligible for deletion, but have not been deleted yet. Omitting 'topic=(...)' will yield the all-topic count</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=RemoteDeleteLagSegments,topic=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>Build Remote Log Aux State Requests Per Sec</td>
+        <td>Rate of requests for rebuilding the auxiliary state from remote storage per topic. Omitting 'topic=(...)' will yield the all-topic rate</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=BuildRemoteLogAuxStateRequestsPerSec,topic=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>Build Remote Log Aux State Errors Per Sec</td>
+        <td>Rate of errors for rebuilding the auxiliary state from remote storage per topic. Omitting 'topic=(...)' will yield the all-topic rate</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=BuildRemoteLogAuxStateErrorsPerSec,topic=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>Remote Log Size Computation Time</td>
+        <td>The amount of time needed to compute the size of the remote log. Omitting 'topic=(...)' will yield the all-topic time</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=RemoteLogSizeComputationTime,topic=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>Remote Log Size Bytes</td>
+        <td>The total size of a remote log in bytes. Omitting 'topic=(...)' will yield the all-topic sum</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=RemoteLogSizeBytes,topic=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>Remote Log Metadata Count</td>
+        <td>The total number of metadata entries for remote storage. Omitting 'topic=(...)' will yield the all-topic count</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=RemoteLogMetadataCount,topic=([-.\w]+)</td>
+      </tr>
+      <tr>
+        <td>Delayed Remote Fetch Expires Per Sec</td>
+        <td>The number of expired remote fetches per second. Omitting 'topic=(...)' will yield the all-topic rate</td>
+        <td>kafka.server:type=DelayedRemoteFetchMetrics,name=ExpiresPerSec,topic=([-.\w]+)</td>
+      </tr>
       <tr>
         <td>RemoteLogReader Task Queue Size</td>
         <td>Size of the queue holding remote storage read tasks</td>
@@ -3710,7 +3774,7 @@ foo
   <p>The following features are not fully implemented in KRaft mode:</p>
 
   <ul>
-    <li>Supporting JBOD configurations with multiple storage directories</li>
+    <li>Supporting JBOD configurations with multiple storage directories. An Early Access release is available in 3.7 as per <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft">KIP-858</a>, but it is not yet recommended for use in production environments. Please refer to the <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+JBOD+in+KRaft+Early+Access+Release+Notes">release notes</a> to help us test [...]
     <li>Modifying certain dynamic configurations on the standalone KRaft controller</li>
   </ul>
 
@@ -3805,12 +3869,15 @@ zookeeper.metadata.migration.enable=true
 # ZooKeeper client configuration
 zookeeper.connect=localhost:2181
 
+# The inter-broker listener name, which the KRaft controller uses to send RPCs to the brokers
+inter.broker.listener.name=PLAINTEXT
+
 # Other configs ...</pre>
 
   <p><em>Note: The KRaft cluster <code>node.id</code> values must be different from any existing ZK broker <code>broker.id</code>.
   In KRaft-mode, the brokers and controllers share the same Node ID namespace.</em></p>
 
-  <h3>Enabling the migration on the brokers</h3>
+  <h3>Enter Migration Mode on the Brokers</h3>
   <p>
     Once the KRaft controller quorum has been started, the brokers will need to be reconfigured and restarted. Brokers
     may be restarted in a rolling fashion to avoid impacting cluster availability. Each broker requires the
@@ -3855,9 +3922,10 @@ controller.listener.names=CONTROLLER</pre>
 
   <h3>Migrating brokers to KRaft</h3>
   <p>
-    Once the KRaft controller completes the metadata migration, the brokers will still be running in ZK mode. While the
-    KRaft controller is in migration mode, it will continue sending controller RPCs to the ZK mode brokers. This includes
-    RPCs like UpdateMetadata and LeaderAndIsr.
+    Once the KRaft controller completes the metadata migration, the brokers will still be running
+    in ZooKeeper mode. While the KRaft controller is in migration mode, it will continue sending
+    controller RPCs to the ZooKeeper mode brokers. This includes RPCs like UpdateMetadata and
+    LeaderAndIsr.
   </p>
 
   <p>
@@ -3867,6 +3935,12 @@ controller.listener.names=CONTROLLER</pre>
     The zookeeper configurations should be removed at this point.
   </p>
 
+  <p>
+    If your broker has authorization configured via the <code>authorizer.class.name</code> property
+    using <code>kafka.security.authorizer.AclAuthorizer</code>, this is also the time to change it
+    to use <code>org.apache.kafka.metadata.authorizer.StandardAuthorizer</code> instead.
+  </p>
+
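+  [Editor's note: the authorizer switch described above amounts to a one-line configuration
+  change; a hypothetical before/after fragment for illustration, not part of the commit.]
+
+```properties
+# ZooKeeper mode (before):
+# authorizer.class.name=kafka.security.authorizer.AclAuthorizer
+# KRaft mode (after):
+authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
+```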
   <pre>
 # Sample KRaft broker server.properties listening on 9092
 process.roles=broker
@@ -3892,29 +3966,16 @@ controller.listener.names=CONTROLLER</pre>
     Each broker is restarted with a KRaft configuration until the entire cluster is running in KRaft mode.
   </p>
 
-  <h3>Reverting to ZooKeeper mode During the Migration</h3>
-    While the cluster is still in migration mode, it is possible to revert to ZK mode. In order to do this:
-    <ol>
-      <li>
-        For each KRaft broker:
-        <ul>
-          <li>Stop the broker.</li>
-          <li>Remove the __cluster_metadata directory on the broker.</li>
-          <li>Remove the <code>zookeeper.metadata.migration.enable</code> configuration and the KRaft controllers related configurations like <code>controller.quorum.voters</code>
-            and <code>controller.listener.names</code> from the broker configuration properties file.</li>
-          <li>Restart the broker in ZooKeeper mode.</li>
-        </ul>
-      </li>
-      <li>Take down the KRaft quorum.</li>
-      <li>Using ZooKeeper shell, delete the controller node using <code>rmr /controller</code>, so that a ZooKeeper-based broker can become the next controller.</li>
-    </ol>
-
   <h3>Finalizing the migration</h3>
   <p>
     Once all brokers have been restarted in KRaft mode, the last step to finalize the migration is to take the
     KRaft controllers out of migration mode. This is done by removing the "zookeeper.metadata.migration.enable"
     property from each of their configs and restarting them one at a time.
   </p>
+  <p>
+    Once the migration has been finalized, you can safely deprovision your ZooKeeper cluster, assuming you are
+    not using it for anything else. After this point, it is no longer possible to revert to ZooKeeper mode.
+  </p>
 
   <pre>
 # Sample KRaft cluster controller.properties listening on 9093
@@ -3932,6 +3993,136 @@ listeners=CONTROLLER://:9093
 
 # Other configs ...</pre>
 
+  <h3>Reverting to ZooKeeper mode During the Migration</h3>
+  <p>
+    While the cluster is still in migration mode, it is possible to revert to ZooKeeper mode.  The process
+    to follow depends on how far the migration has progressed. In order to find out how to revert,
+    select the <b>final</b> migration step that you have <b>completed</b> in this table.
+  </p>
+  <p>
+    Note that the directions given here assume that each step was fully completed, and they were
+    done in order. So, for example, we assume that if "Enter Migration Mode on the Brokers" was
+    completed, "Provisioning the KRaft controller quorum" was also fully completed previously.
+  </p>
+  <p>
+    If you did not fully complete any step, back out whatever you have done and then follow the
+    revert directions for the last fully completed step.
+  </p>
+
+  <table class="data-table">
+      <tbody>
+      <tr>
+        <th>Final Migration Section Completed</th>
+        <th>Directions for Reverting</th>
+        <th>Notes</th>
+      </tr>
+      <tr>
+        <td>Preparing for migration</td>
+        <td>
+          The preparation section does not involve leaving ZooKeeper mode. So there is nothing to do in the
+          case of a revert.
+        </td>
+        <td>
+        </td>
+      </tr>
+      <tr>
+        <td>Provisioning the KRaft controller quorum</td>
+        <td>
+          <ul>
+            <li>
+              Deprovision the KRaft controller quorum.
+            </li>
+            <li>
+              Then you are done.
+            </li>
+          </ul>
+        </td>
+        <td>
+        </td>
+      </tr>
+      <tr>
+        <td>Enter Migration Mode on the brokers</td>
+        <td>
+          <ul>
+            <li>
+              Deprovision the KRaft controller quorum.
+            </li>
+            <li>
+              Using <code>zookeeper-shell.sh</code>, run <code>rmr /controller</code> so that one
+              of the brokers can become the new old-style controller.
+            </li>
+            <li>
+              On each broker, remove the <code>zookeeper.metadata.migration.enable</code>,
+              <code>controller.listener.names</code>, and <code>controller.quorum.voters</code>
+              configurations, and replace <code>node.id</code> with <code>broker.id</code>.
+              Then perform a rolling restart of all brokers.
+            </li>
+            <li>
+              Then you are done.
+            </li>
+          </ul>
+        </td>
+        <td>
+          It is important to perform the <code>zookeeper-shell.sh</code> step quickly, to minimize the amount of
+          time that the cluster lacks a controller.
+        </td>
+      </tr>
+      <tr>
+        <td>Migrating brokers to KRaft</td>
+        <td>
+          <ul>
+            <li>
+              On each broker, remove the <code>process.roles</code> configuration, and
+              restore the <code>zookeeper.connect</code> configuration to its previous value.
+              If your cluster requires other ZooKeeper configurations for brokers, such as
+              <code>zookeeper.ssl.protocol</code>, re-add those configurations as well.
+              Then perform a rolling restart of all brokers.
+            </li>
+            <li>
+              Deprovision the KRaft controller quorum.
+            </li>
+            <li>
+              Using <code>zookeeper-shell.sh</code>, run <code>rmr /controller</code> so that one
+              of the brokers can become the new old-style controller.
+            </li>
+            <li>
+              On each broker, remove the <code>zookeeper.metadata.migration.enable</code>,
+              <code>controller.listener.names</code>, and <code>controller.quorum.voters</code>
+              configurations. Replace <code>node.id</code> with <code>broker.id</code>.
+              Then perform a second rolling restart of all brokers.
+            </li>
+            <li>
+              Then you are done.
+            </li>
+          </ul>
+        </td>
+        <td>
+          <ul>
+            <li>
+              It is important to perform the <code>zookeeper-shell.sh</code> step <b>quickly</b>, to minimize the amount of
+              time that the cluster lacks a controller.
+            </li>
+            <li>
+              Make sure that on the first cluster roll, <code>zookeeper.metadata.migration.enable</code> remains set to
+              <code>true</code>. <b>Do not set it to false until the second cluster roll.</b>
+            </li>
+          </ul>
+        </td>
+      </tr>
+      <tr>
+        <td>Finalizing the migration</td>
+        <td>
+          If you have finalized the ZK migration, then you cannot revert.
+        </td>
+        <td>
+          Some users prefer to wait for a week or two before finalizing the migration. While this
+          requires you to keep the ZooKeeper cluster running for a while longer, it may be helpful
+          in validating KRaft mode in your cluster.
+        </td>
+      </tr>
+    </tbody>
+ </table>
+
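[Editor's note: several revert paths in the table above include the same `zookeeper-shell.sh` step; a sketch follows. The ZooKeeper address is an assumption for this example, and the actual command is left commented out since it requires a running cluster.]

```shell
# Sketch: delete the /controller znode so a ZooKeeper-mode broker can win
# the next controller election. The address below is an example only.
ZK="localhost:2181"
# bin/zookeeper-shell.sh "$ZK" <<'EOF'
# rmr /controller
# EOF
```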
 
<h3 class="anchor-heading"><a id="tiered_storage" class="anchor-link"></a><a href="#tiered_storage">6.11 Tiered Storage</a></h3>
 
diff --git a/37/streams/upgrade-guide.html b/37/streams/upgrade-guide.html
index 80f53b85..14544077 100644
--- a/37/streams/upgrade-guide.html
+++ b/37/streams/upgrade-guide.html
@@ -1425,7 +1425,7 @@
             <td>Kafka Streams API (rows)</td>
             <td>0.10.0.x</td>
             <td>0.10.1.x and 0.10.2.x</td>
-            <td>0.11.0.x and<br>1.0.x and<br>1.1.x and<br>2.0.x and<br>2.1.x and<br>2.2.x and<br>2.3.x and<br>2.4.x and<br>2.5.x and<br>2.6.x and<br>2.7.x and<br>2.8.x and<br>3.0.x and<br>3.1.x and<br>3.2.x and<br>3.3.x and<br>3.4.x and<br>3.5.x and<br>3.6.x</td>
+            <td>0.11.0.x and<br>1.0.x and<br>1.1.x and<br>2.0.x and<br>2.1.x and<br>2.2.x and<br>2.3.x and<br>2.4.x and<br>2.5.x and<br>2.6.x and<br>2.7.x and<br>2.8.x and<br>3.0.x and<br>3.1.x and<br>3.2.x and<br>3.3.x and<br>3.4.x and<br>3.5.x and<br>3.6.x and<br>3.7.x</td>
           </tr>
           <tr>
             <td>0.10.0.x</td>
@@ -1452,7 +1452,7 @@
             <td>compatible; requires message format 0.10 or higher;<br>if message headers are used, message format 0.11<br>or higher required</td>
           </tr>
           <tr>
-            <td>2.2.1 and<br>2.3.x and<br>2.4.x and<br>2.5.x and<br>2.6.x and<br>2.7.x and<br>2.8.x and<br>3.0.x and<br>3.1.x and<br>3.2.x and<br>3.3.x and<br>3.4.x and<br>3.5.x and<br>3.6.x</td>
+            <td>2.2.1 and<br>2.3.x and<br>2.4.x and<br>2.5.x and<br>2.6.x and<br>2.7.x and<br>2.8.x and<br>3.0.x and<br>3.1.x and<br>3.2.x and<br>3.3.x and<br>3.4.x and<br>3.5.x and<br>3.6.x and<br>3.7.x</td>
             <td></td>
             <td></td>
             <td>compatible; requires message format 0.11 or higher;<br>enabling exactly-once v2 requires 2.4.x or higher</td>
diff --git a/37/upgrade.html b/37/upgrade.html
index d3713263..0b3c92a6 100644
--- a/37/upgrade.html
+++ b/37/upgrade.html
@@ -19,9 +19,97 @@
 
 <script id="upgrade-template" type="text/x-handlebars-template">
 
-<h4><a id="upgrade_3_6_0" href="#upgrade_3_6_0">Upgrading to 3.6.0 from any version 0.8.x through 3.5.x</a></h4>
+<h4><a id="upgrade_3_7_0" href="#upgrade_3_7_0">Upgrading to 3.7.0 from any version 0.8.x through 3.6.x</a></h4>
 
-    <h5><a id="upgrade_360_zk" href="#upgrade_360_zk">Upgrading ZooKeeper-based clusters</a></h5>
+
+    <h5><a id="upgrade_370_zk" href="#upgrade_370_zk">Upgrading ZooKeeper-based clusters</a></h5>
+    <p><b>If you are upgrading from a version prior to 2.1.x, please see the note in step 5 below about the change to the schema used to store consumer offsets.
+        Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.</b></p>
+
+    <p><b>For a rolling upgrade:</b></p>
+
+    <ol>
+        <li>Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
+            are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
+            overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
+            to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
+            <ul>
+                <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. <code>3.6</code>, <code>3.5</code>, etc.)</li>
+                <li>log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION  (See <a href="#upgrade_10_performance_impact">potential performance impact
+                    following the upgrade</a> for the details on what this configuration does.)</li>
+            </ul>
+            If you are upgrading from version 0.11.0.x or above, and you have not overridden the message format, then you only need to override
+            the inter-broker protocol version.
+            <ul>
+                <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. <code>3.6</code>, <code>3.5</code>, etc.)</li>
+            </ul>
+        </li>
+        <li>Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
+            brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
+            It is still possible to downgrade at this point if there are any problems.
+        </li>
+        <li>Once the cluster's behavior and performance has been verified, bump the protocol version by editing
+            <code>inter.broker.protocol.version</code> and setting it to <code>3.7</code>.
+        </li>
+        <li>Restart the brokers one by one for the new protocol version to take effect. Once the brokers begin using the latest
+            protocol version, it will no longer be possible to downgrade the cluster to an older version.
+        </li>
+        <li>If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
+            upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
+            change log.message.format.version to 3.7 on each broker and restart them one by one. Note that the older Scala clients,
+            which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs
+            (or to take advantage of <a href="#upgrade_11_exactly_once_semantics">exactly once semantics</a>),
+            the newer Java clients must be used.
+        </li>
+    </ol>
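As a concrete illustration of step 1 above, a broker upgrading from 3.6 with an overridden message format might carry overrides like the following in <code>server.properties</code> (the versions shown are examples; substitute the ones you are actually upgrading from):

```properties
# Example server.properties overrides during a rolling upgrade from 3.6.
# Keep these pinned at the CURRENT versions until every broker runs the
# new code; only then bump inter.broker.protocol.version to 3.7.
inter.broker.protocol.version=3.6
log.message.format.version=3.6
```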
+
+    <h5><a id="upgrade_370_kraft" href="#upgrade_370_kraft">Upgrading KRaft-based clusters</a></h5>
+    <p><b>If you are upgrading from a version prior to 3.3.0, please see the note in step 3 below. Once you have changed the metadata.version to the latest version, it will not be possible to downgrade to a version prior to 3.3-IV0.</b></p>
+
+    <p><b>For a rolling upgrade:</b></p>
+
+    <ol>
+        <li>Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the
+            brokers will be running the latest version and you can verify that the cluster's behavior and performance meets expectations.
+        </li>
+        <li>Once the cluster's behavior and performance has been verified, bump the metadata.version by running
+            <code>
+                ./bin/kafka-features.sh upgrade --metadata 3.7
+            </code>
+        </li>
+        <li>Note that cluster metadata downgrade is not supported in this version since it has metadata changes.
+            Every <a href="https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java">MetadataVersion</a>
+            after 3.2.x has a boolean parameter that indicates if there are metadata changes (i.e. <code>IBP_3_3_IV3(7, "3.3", "IV3", true)</code> means this version has metadata changes).
+            Given your current and target versions, a downgrade is only possible if there are no metadata changes in the versions between.</li>
+    </ol>
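The downgrade rule in step 3 can be sketched in Python. This is an illustrative model, not the real <code>MetadataVersion</code> class, and the version names and "has metadata changes" flags below are hypothetical examples:

```python
# Hypothetical (name, has_metadata_changes) pairs in release order, modeled
# on entries like IBP_3_3_IV3(7, "3.3", "IV3", true) in MetadataVersion.java.
VERSIONS = [
    ("3.4-IV0", True),
    ("3.5-IV0", False),
    ("3.5-IV1", False),
    ("3.6-IV0", True),
]

def downgrade_possible(versions, current, target):
    """A downgrade is possible only if no version after `target`, up to and
    including `current`, introduced metadata changes."""
    names = [name for name, _ in versions]
    lo, hi = names.index(target), names.index(current)
    return not any(changed for _, changed in versions[lo + 1 : hi + 1])

# No metadata changes between 3.5-IV0 and 3.5-IV1, so this downgrade is fine;
# 3.6-IV0 introduced metadata changes, so going back past it is not.
print(downgrade_possible(VERSIONS, "3.5-IV1", "3.5-IV0"))
print(downgrade_possible(VERSIONS, "3.6-IV0", "3.5-IV1"))
```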
+
+    <h5><a id="upgrade_370_notable" href="#upgrade_370_notable">Notable changes in 3.7.0</a></h5>
+    <ul>
+        <li>Java 11 support for the broker and tools has been deprecated and will be removed in Apache Kafka 4.0. This complements
+            the previous deprecation of Java 8 for all components. Please refer to
+            <a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510">KIP-1013</a> for more details.
+        </li>
+        <li>Client APIs released prior to Apache Kafka 2.1 are now marked deprecated in 3.7 and will be removed in Apache Kafka 4.0. See <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-896%3A+Remove+old+client+protocol+API+versions+in+Kafka+4.0">KIP-896</a> for details and RPC versions that are now deprecated.
+        </li>
+        <li>Early access to the new simplified Consumer Rebalance Protocol is available, but it is not recommended for use in production environments.
+            You are encouraged to test it and provide feedback!
+            For more information about the early access feature, please check <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol">KIP-848</a> and the <a href="https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes">Early Access Release Notes</a>.
+        </li>
+        <li>More metrics related to Tiered Storage have been introduced. They should improve the operational experience
+            of running Tiered Storage in production.
+            For more detailed information, please refer to <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-963%3A+Additional+metrics+in+Tiered+Storage">KIP-963</a>.
+        </li>
+        <li>Kafka Streams ships multiple KIPs for IQv2 support.
+            See the <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_370">Kafka Streams upgrade section</a> for more details.
+        </li>
+        <li>All the notable changes are present in the <a href="https://kafka.apache.org/blog#apache_kafka_370_release_announcement">blog post announcing the 3.7.0 release.</a>
+        </li>
+    </ul>
+
+
+<h4><a id="upgrade_3_6_1" href="#upgrade_3_6_1">Upgrading to 3.6.1 from any version 0.8.x through 3.5.x</a></h4>
+
+    <h5><a id="upgrade_361_zk" href="#upgrade_361_zk">Upgrading ZooKeeper-based clusters</a></h5>
     <p><b>If you are upgrading from a version prior to 2.1.x, please see the note in step 5 below about the change to the schema used to store consumer offsets.
         Once you have changed the inter.broker.protocol.version to the latest version, it will not be possible to downgrade to a version prior to 2.1.</b></p>
 
@@ -62,7 +150,7 @@
         </li>
     </ol>
 
-    <h5><a id="upgrade_360_kraft" href="#upgrade_360_kraft">Upgrading KRaft-based clusters</a></h5>
+    <h5><a id="upgrade_361_kraft" href="#upgrade_361_kraft">Upgrading KRaft-based clusters</a></h5>
     <p><b>If you are upgrading from a version prior to 3.3.0, please see the note in step 3 below. Once you have changed the metadata.version to the latest version, it will not be possible to downgrade to a version prior to 3.3-IV0.</b></p>
 
     <p><b>For a rolling upgrade:</b></p>
@@ -117,13 +205,45 @@
             <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes">Tiered Storage Early Access Release Note</a>.
         </li>
         <li>Transaction partition verification (<a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense">KIP-890</a>)
-            has been added to data partitions to prevent hanging transactions. Workloads with compression can experience InvalidRecordExceptions and UnknownServerExceptions.
-            This feature can be disabled by setting <code>transaction.partition.verification.enable</code> to false. Note that the default for 3.6 is true.
-            The configuration can also be updated dynamically and is applied to the broker.
-            This will be fixed in 3.6.1. See <a href="https://issues.apache.org/jira/browse/KAFKA-15653">KAFKA-15653</a> for more details.
+            has been added to data partitions to prevent hanging transactions. This feature is enabled by default and can be disabled by setting <code>transaction.partition.verification.enable</code> to false.
+            The configuration can also be updated dynamically and is applied to the broker. Workloads running on version 3.6.0 with compression can experience
+            InvalidRecordExceptions and UnknownServerExceptions. Upgrading to 3.6.1 or newer or disabling the feature fixes the issue.
         </li>
     </ul>
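If you hit this issue on 3.6.0 and cannot upgrade immediately, the verification feature can be turned off with a broker setting like the following (the static form is shown here; as noted above, the configuration can also be updated dynamically):

```properties
# Workaround for KAFKA-15653 on 3.6.0: disable transaction partition
# verification (the default in 3.6 is true).
transaction.partition.verification.enable=false
```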
 
+<h4><a id="upgrade_3_5_2" href="#upgrade_3_5_2">Upgrading to 3.5.2 from any version 0.8.x through 3.4.x</a></h4>
+    All upgrade steps remain the same as for <a href="#upgrade_3_5_0">upgrading to 3.5.0</a>.
+    <h5><a id="upgrade_352_notable" href="#upgrade_352_notable">Notable changes in 3.5.2</a></h5>
+    <ul>
+    <li>
+        When migrating producer ID blocks from ZK to KRaft, duplicate producer IDs could be handed out to
+        transactional or idempotent producers. This can cause long-term problems, since producer IDs are
+        persisted and reused for a long time.
+        See <a href="https://issues.apache.org/jira/browse/KAFKA-15552">KAFKA-15552</a> for more details.
+    </li>
+    <li>
+        In 3.5.0 and 3.5.1, the controller could return an empty ISR in response to an AlterPartition request
+        during a rolling upgrade, which impacts the availability of the topic partition.
+        See <a href="https://issues.apache.org/jira/browse/KAFKA-15353">KAFKA-15353</a> for more details.
+    </li>
+</ul>
+
+<h4><a id="upgrade_3_5_1" href="#upgrade_3_5_1">Upgrading to 3.5.1 from any version 0.8.x through 3.4.x</a></h4>
+    All upgrade steps remain the same as for <a href="#upgrade_3_5_0">upgrading to 3.5.0</a>.
+    <h5><a id="upgrade_351_notable" href="#upgrade_351_notable">Notable changes in 3.5.1</a></h5>
+    <ul>
+    <li>
+        Upgraded the snappy-java dependency to a version that is not vulnerable to
+        <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-34455">CVE-2023-34455</a>.
+        You can find more information about the CVE in the <a href="https://kafka.apache.org/cve-list#CVE-2023-34455">Kafka CVE list</a>.
+    </li>
+    <li>
+        Fixed a regression introduced in 3.3.0, which caused <code>security.protocol</code> configuration values to be restricted to
+        upper case only. After the fix, <code>security.protocol</code> values are case-insensitive.
+        See <a href="https://issues.apache.org/jira/browse/KAFKA-15053">KAFKA-15053</a> for details.
+    </li>
+</ul>
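For example, after the KAFKA-15053 fix a client configuration may spell the protocol value in any case (illustrative fragment):

```properties
# Accepted in 3.5.1 and later regardless of case; 3.3.0 through 3.5.0
# rejected anything that was not upper case.
security.protocol=sasl_ssl
```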
+
 <h4><a id="upgrade_3_5_0" href="#upgrade_3_5_0">Upgrading to 3.5.0 from any version 0.8.x through 3.4.x</a></h4>
 
     <h5><a id="upgrade_350_zk" href="#upgrade_350_zk">Upgrading ZooKeeper-based clusters</a></h5>