Posted to commits@kafka.apache.org by da...@apache.org on 2022/09/28 20:57:27 UTC

[kafka-site] 01/01: Update and publish docs for 3.3

This is an automated email from the ASF dual-hosted git repository.

davidarthur pushed a commit to branch 3.3-doc-publish
in repository https://gitbox.apache.org/repos/asf/kafka-site.git

commit 70aafe9b4629a4e669b4f58f7c55fdc6e9e8dfdf
Author: David Arthur <mu...@gmail.com>
AuthorDate: Wed Sep 28 16:57:00 2022 -0400

    Update and publish docs for 3.3
---
 33/design.html                    |  36 +++++---
 33/generated/connect_metrics.html |   4 +-
 33/ops.html                       | 119 ++++++++++++++++++++++++++-
 33/security.html                  | 167 ++++++++++++++++++++++++++++++++++----
 documentation.html                |   2 +-
 downloads.html                    |  51 ++++++++++--
 6 files changed, 340 insertions(+), 39 deletions(-)

diff --git a/33/design.html b/33/design.html
index 6e32b2d7..9485ab9c 100644
--- a/33/design.html
+++ b/33/design.html
@@ -322,18 +322,33 @@
     Followers consume messages from the leader just as a normal Kafka consumer would and apply them to their own log. Having the followers pull from the leader has the nice property of allowing the follower to naturally
     batch together log entries they are applying to their log.
     <p>
-    As with most distributed systems automatically handling failures requires having a precise definition of what it means for a node to be "alive". For Kafka node liveness has two conditions
+    As with most distributed systems, automatically handling failures requires a precise definition of what it means for a node to be "alive." In Kafka, a special node
+    known as the "controller" is responsible for managing the registration of brokers in the cluster. Broker liveness has two conditions:
     <ol>
-        <li>A node must be able to maintain its session with ZooKeeper (via ZooKeeper's heartbeat mechanism)
-        <li>If it is a follower it must replicate the writes happening on the leader and not fall "too far" behind
+      <li>Brokers must maintain an active session with the controller in order to receive regular metadata updates.</li>
+      <li>Brokers acting as followers must replicate the writes from the leader and not fall "too far" behind.</li>
     </ol>
-    We refer to nodes satisfying these two conditions as being "in sync" to avoid the vagueness of "alive" or "failed". The leader keeps track of the set of "in sync" nodes. If a follower dies, gets stuck, or falls
-    behind, the leader will remove it from the list of in sync replicas. The determination of stuck and lagging replicas is controlled by the replica.lag.time.max.ms configuration.
+    <p>
+    What is meant by an "active session" depends on the cluster configuration. For KRaft clusters, an active session is maintained by 
+    sending periodic heartbeats to the controller. If the controller fails to receive a heartbeat before the timeout configured by 
+    <code>broker.session.timeout.ms</code> expires, then the node is considered offline.
+    <p>
+    For clusters using ZooKeeper, liveness is determined indirectly through the existence of an ephemeral node which is created by the broker on
+    initialization of its ZooKeeper session. If the broker loses its session after failing to send heartbeats to ZooKeeper before expiration of
+    <code>zookeeper.session.timeout.ms</code>, then the ephemeral node is deleted. The controller then notices the node deletion through a ZooKeeper watch
+    and marks the broker offline.
+    <p>
+    We refer to nodes satisfying these two conditions as being "in sync" to avoid the vagueness of "alive" or "failed". The leader keeps track of the set of "in sync" replicas,
+    which is known as the ISR. If either of these conditions fails to be satisfied, then the broker will be removed from the ISR. For example,
+    if a follower dies, then the controller will notice the failure through the loss of its session, and will remove the broker from the ISR.
+    On the other hand, if the follower lags too far behind the leader but still has an active session, then the leader can also remove it from the ISR.
+    The determination of lagging replicas is controlled through the <code>replica.lag.time.max.ms</code> configuration. 
+    Replicas that cannot catch up to the end of the log on the leader within the max time set by this configuration are removed from the ISR.
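+    <p>
+    As a rough sketch (the values shown are illustrative, not recommendations), the liveness and lag settings described above are broker configuration properties:
+    <pre class="line-numbers"><code class="language-text"># KRaft mode: how long the controller waits for a broker heartbeat before marking the broker offline
+broker.session.timeout.ms=9000
+# ZooKeeper mode: session timeout for the broker's ZooKeeper session
+zookeeper.session.timeout.ms=18000
+# Followers that have not caught up to the leader's log end within this window are removed from the ISR
+replica.lag.time.max.ms=30000</code></pre>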
     <p>
     In distributed systems terminology we only attempt to handle a "fail/recover" model of failures where nodes suddenly cease working and then later recover (perhaps without knowing that they have died). Kafka does not
     handle so-called "Byzantine" failures in which nodes produce arbitrary or malicious responses (perhaps due to bugs or foul play).
     <p>
-    We can now more precisely define that a message is considered committed when all in sync replicas for that partition have applied it to their log.
+    We can now more precisely define that a message is considered committed when all replicas in the ISR for that partition have applied it to their log.
     Only committed messages are ever given out to the consumer. This means that the consumer need not worry about potentially seeing a message that could be lost if the leader fails. Producers, on the other hand,
     have the option of either waiting for the message to be committed or not, depending on their preference for tradeoff between latency and durability. This preference is controlled by the acks setting that the
     producer uses.
@@ -381,7 +396,7 @@
     expensive approach is not used for the data itself.
     <p>
     Kafka takes a slightly different approach to choosing its quorum set. Instead of majority vote, Kafka dynamically maintains a set of in-sync replicas (ISR) that are caught-up to the leader. Only members of this set
-    are eligible for election as leader. A write to a Kafka partition is not considered committed until <i>all</i> in-sync replicas have received the write. This ISR set is persisted to ZooKeeper whenever it changes.
+    are eligible for election as leader. A write to a Kafka partition is not considered committed until <i>all</i> in-sync replicas have received the write. This ISR set is persisted in the cluster metadata whenever it changes.
     Because of this, any replica in the ISR is eligible to be elected leader. This is an important factor for Kafka's usage model where there are many partitions and ensuring leadership balance is important.
     With this ISR model and <i>f+1</i> replicas, a Kafka topic can tolerate <i>f</i> failures without losing committed messages.
     <p>
@@ -442,9 +457,10 @@
     share of its partitions.
     <p>
     It is also important to optimize the leadership election process as that is the critical window of unavailability. A naive implementation of leader election would end up running an election per partition for all
-    partitions a node hosted when that node failed. Instead, we elect one of the brokers as the "controller". This controller detects failures at the broker level and is responsible for changing the leader of all
-    affected partitions in a failed broker. The result is that we are able to batch together many of the required leadership change notifications which makes the election process far cheaper and faster for a large number
-    of partitions. If the controller fails, one of the surviving brokers will become the new controller.
+    partitions a node hosted when that node failed. As discussed above in the section on <a href="#replication">replication</a>, Kafka clusters have a special role known as the "controller" which is
+    responsible for managing the registration of brokers. If the controller detects the failure of a broker, it is responsible for electing one of the remaining members of the ISR to serve as the new leader.
+    The result is that we are able to batch together many of the required leadership change notifications which makes the election process far cheaper and faster for a large number
+    of partitions. If the controller itself fails, then another controller will be elected.
 
     <h3 class="anchor-heading"><a id="compaction" class="anchor-link"></a><a href="#compaction">4.8 Log Compaction</a></h3>
 
diff --git a/33/generated/connect_metrics.html b/33/generated/connect_metrics.html
index 5ef15b2b..8bbd957e 100644
--- a/33/generated/connect_metrics.html
+++ b/33/generated/connect_metrics.html
@@ -1,5 +1,5 @@
-[2022-09-26 10:18:26,810] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:693)
-[2022-09-26 10:18:26,812] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:703)
+[2022-09-28 16:37:02,145] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:693)
+[2022-09-28 16:37:02,148] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:703)
 <table class="data-table"><tbody>
 <tr>
 <td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.connect:type=connect-worker-metrics</td></tr>
diff --git a/33/ops.html b/33/ops.html
index 0b25384e..9ce05131 100644
--- a/33/ops.html
+++ b/33/ops.html
@@ -1269,11 +1269,11 @@ $ bin/kafka-acls.sh \
   Java 8, Java 11, and Java 17 are supported. Note that Java 8 support has been deprecated since Apache Kafka 3.0 and will be removed in Apache Kafka 4.0.
   Java 11 and later versions perform significantly better if TLS is enabled, so they are highly recommended (they also include a number of other
   performance improvements: G1GC, CRC32C, Compact Strings, Thread-Local Handshakes and more).
-  
+
   From a security perspective, we recommend the latest released patch version as older freely available versions have disclosed security vulnerabilities.
 
   Typical arguments for running Kafka with OpenJDK-based Java implementations (including Oracle JDK) are:
-  
+
   <pre class="line-numbers"><code class="language-text">  -Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC
   -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M
   -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -XX:+ExplicitGCInvokesConcurrent</code></pre>
@@ -3401,6 +3401,121 @@ for built-in state stores, currently we have:
     <li>Don't overbuild the cluster: large clusters, especially in a write heavy usage pattern, means a lot of intracluster communication (quorums on the writes and subsequent cluster member updates), but don't underbuild it (and risk swamping the cluster). Having more servers adds to your read capacity.</li>
   </ul>
   Overall, we try to keep the ZooKeeper system as small as will handle the load (plus standard growth capacity planning) and as simple as possible. We try not to do anything fancy with the configuration or application layout as compared to the official release as well as keep it as self contained as possible. For these reasons, we tend to skip the OS packaged versions, since it has a tendency to try to put things in the OS standard hierarchy, which can be 'messy', for want of a better wa [...]
+
+  <h3 class="anchor-heading"><a id="kraft" class="anchor-link"></a><a href="#kraft">6.10 KRaft</a></h3>
+
+  <h4 class="anchor-heading"><a id="kraft_config" class="anchor-link"></a><a href="#kraft_config">Configuration</a></h4>
+
+  <h5 class="anchor-heading"><a id="kraft_role" class="anchor-link"></a><a href="#kraft_role">Process Roles</a></h5>
+
+  <p>In KRaft mode each Kafka server can be configured as a controller, a broker, or both using the <code>process.roles</code> property. This property can have the following values:</p>
+
+  <ul>
+    <li>If <code>process.roles</code> is set to <code>broker</code>, the server acts as a broker.</li>
+    <li>If <code>process.roles</code> is set to <code>controller</code>, the server acts as a controller.</li>
+    <li>If <code>process.roles</code> is set to <code>broker,controller</code>, the server acts as both a broker and a controller.</li>
+    <li>If <code>process.roles</code> is not set at all, the server is assumed to be in ZooKeeper mode.</li>
+  </ul>
+
+  <p>Kafka servers that act as both brokers and controllers are referred to as "combined" servers. Combined servers are simpler to operate for small use cases like a development environment. The key disadvantage is that the controller will be less isolated from the rest of the system. For example, it is not possible to roll or scale the controllers separately from the brokers in combined mode. Combined mode is not recommended in critical deployment environments.</p>
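+  <p>As a minimal sketch (the node ID, host name, and ports here are illustrative), a combined server might be configured as follows:</p>
+
+  <pre class="line-numbers"><code class="language-text">process.roles=broker,controller
+node.id=1
+listeners=PLAINTEXT://server1.example.com:9092,CONTROLLER://server1.example.com:9093
+inter.broker.listener.name=PLAINTEXT
+controller.listener.names=CONTROLLER
+controller.quorum.voters=1@server1.example.com:9093
+listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT</code></pre>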
+
+
+  <h5 class="anchor-heading"><a id="kraft_voter" class="anchor-link"></a><a href="#kraft_voter">Controllers</a></h5>
+
+  <p>In KRaft mode, specific Kafka servers are selected to be controllers (unlike the ZooKeeper-based mode, where any server can become the Controller). The servers selected to be controllers will participate in the metadata quorum. Each controller is either active or a hot standby for the current active controller.</p>
+
+  <p>A Kafka admin will typically select 3 or 5 servers for this role, depending on factors like cost and the number of concurrent failures your system should withstand without availability impact. A majority of the controllers must be alive in order to maintain availability. With 3 controllers, the cluster can tolerate 1 controller failure; with 5 controllers, the cluster can tolerate 2 controller failures.</p>
+
+  <p>All of the servers in a Kafka cluster discover the quorum voters using the <code>controller.quorum.voters</code> property. This identifies the quorum controller servers that should be used. All the controllers must be enumerated. Each controller is identified with its <code>id</code>, <code>host</code> and <code>port</code> information. For example:</p>
+
+  <pre class="line-numbers"><code class="language-bash">controller.quorum.voters=id1@host1:port1,id2@host2:port2,id3@host3:port3</code></pre>
+
+  <p>If a Kafka cluster has 3 controllers named controller1, controller2 and controller3, then controller1 may have the following configuration:</p>
+
+  <pre class="line-numbers"><code class="language-bash">
+process.roles=controller
+node.id=1
+listeners=CONTROLLER://controller1.example.com:9093
+controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093</code></pre>
+
+  <p>Every broker and controller must set the <code>controller.quorum.voters</code> property. The node ID supplied in the <code>controller.quorum.voters</code> property must match the corresponding id on the controller servers. For example, on controller1, node.id must be set to 1, and so forth. Each node ID must be unique across all the servers in a particular cluster. No two servers can have the same node ID regardless of their <code>process.roles</code> values.</p>
+
+  <h4 class="anchor-heading"><a id="kraft_storage" class="anchor-link"></a><a href="#kraft_storage">Storage Tool</a></h4>
+  <p>The <code>kafka-storage.sh random-uuid</code> command can be used to generate a cluster ID for your new cluster. This cluster ID must be used when formatting each server in the cluster with the <code>kafka-storage.sh format</code> command.</p>
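+
+  <p>As a sketch of a typical workflow (the configuration file path below is illustrative), a new cluster might be prepared like this:</p>
+
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-storage.sh random-uuid
+  &gt; bin/kafka-storage.sh format -t &lt;uuid-from-previous-step&gt; -c config/kraft/server.properties</code></pre>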
+
+  <p>This is different from how Kafka has operated in the past. Previously, Kafka would format blank storage directories automatically, and also generate a new cluster ID automatically. One reason for the change is that auto-formatting can sometimes obscure an error condition. This is particularly important for the metadata log maintained by the controller and broker servers. If a majority of the controllers were able to start with an empty log directory, a leader might be able to be ele [...]
+
+  <h4 class="anchor-heading"><a id="kraft_debug" class="anchor-link"></a><a href="#kraft_debug">Debugging</a></h4>
+
+  <h5 class="anchor-heading"><a id="kraft_metadata_tool" class="anchor-link"></a><a href="#kraft_metadata_tool">Metadata Quorum Tool</a></h5>
+
+  <p>The <code>kafka-metadata-quorum</code> tool can be used to describe the runtime state of the cluster metadata partition. For example, the following command displays a summary of the metadata quorum:</p>
+
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-metadata-quorum.sh --bootstrap-server  broker_host:port describe --status
+ClusterId:              fMCL8kv1SWm87L_Md-I2hg
+LeaderId:               3002
+LeaderEpoch:            2
+HighWatermark:          10
+MaxFollowerLag:         0
+MaxFollowerLagTimeMs:   -1
+CurrentVoters:          [3000,3001,3002]
+CurrentObservers:       [0,1,2]</code></pre>
+
+  <h5 class="anchor-heading"><a id="kraft_dump_log" class="anchor-link"></a><a href="#kraft_dump_log">Dump Log Tool</a></h5>
+
+  <p>The <code>kafka-dump-log</code> tool can be used to debug the log segments and snapshots for the cluster metadata directory. The tool will scan the provided files and decode the metadata records. For example, this command decodes and prints the records in the first log segment:</p>
+
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-dump-log.sh --cluster-metadata-decoder --files metadata_log_dir/__cluster_metadata-0/00000000000000000000.log</code></pre>
+
+  <p>This command decodes and prints the records in a cluster metadata snapshot:</p>
+
+  <pre class="line-numbers"><code class="language-bash">  &gt; bin/kafka-dump-log.sh --cluster-metadata-decoder --files metadata_log_dir/__cluster_metadata-0/00000000000000000100-0000000001.checkpoint</code></pre>
+
+  <h5 class="anchor-heading"><a id="kraft_shell_tool" class="anchor-link"></a><a href="#kraft_shell_tool">Metadata Shell</a></h5>
+
+  <p>The <code>kafka-metadata-shell</code> tool can be used to interactively inspect the state of the cluster metadata partition:</p>
+
+  <pre class="line-numbers"><code class="language-bash">
+  &gt; bin/kafka-metadata-shell.sh  --snapshot metadata_log_dir/__cluster_metadata-0/00000000000000000000.log
+&gt;&gt; ls /
+brokers  local  metadataQuorum  topicIds  topics
+&gt;&gt; ls /topics
+foo
+&gt;&gt; cat /topics/foo/0/data
+{
+  "partitionId" : 0,
+  "topicId" : "5zoAlv-xEh9xRANKXt1Lbg",
+  "replicas" : [ 1 ],
+  "isr" : [ 1 ],
+  "removingReplicas" : null,
+  "addingReplicas" : null,
+  "leader" : 1,
+  "leaderEpoch" : 0,
+  "partitionEpoch" : 0
+}
+&gt;&gt; exit
+  </code></pre>
+
+  <h4 class="anchor-heading"><a id="kraft_deployment" class="anchor-link"></a><a href="#kraft_deployment">Deploying Considerations</a></h4>
+
+  <ul>
+    <li>Kafka server's <code>process.roles</code> should be set to either <code>broker</code> or <code>controller</code> but not both. Combined mode can be used in development environments, but it should be avoided in critical deployment environments.</li>
+    <li>For redundancy, a Kafka cluster should use 3 controllers. More than 3 controllers is not recommended in critical environments. In the rare case of a partial network failure it is possible for the cluster metadata quorum to become unavailable. This limitation will be addressed in a future release of Kafka.</li>
+    <li>The Kafka controllers store all of the metadata for the cluster in memory and on disk. We believe that for a typical Kafka cluster 5GB of main memory and 5GB of disk space on the metadata log directory is sufficient.</li>
+  </ul>
+
+  <h4 class="anchor-heading"><a id="kraft_missing" class="anchor-link"></a><a href="#kraft_missing">Missing Features</a></h4>
+
+  <p>The following features are not fully implemented in KRaft mode:</p>
+
+  <ul>
+    <li>Configuring SCRAM users via the administrative API</li>
+    <li>Supporting JBOD configurations with multiple storage directories</li>
+    <li>Modifying certain dynamic configurations on the standalone KRaft controller</li>
+    <li>Delegation tokens</li>
+    <li>Upgrade from ZooKeeper mode</li>
+  </ul>
+
 </script>
 
 <div class="p-ops"></div>
diff --git a/33/security.html b/33/security.html
index d9b26f5d..f401c1c1 100644
--- a/33/security.html
+++ b/33/security.html
@@ -36,7 +36,136 @@
 
     The guides below explain how to configure and use the security features in both clients and brokers.
 
-    <h3 class="anchor-heading"><a id="security_ssl" class="anchor-link"></a><a href="#security_ssl">7.2 Encryption and Authentication using SSL</a></h3>
+    <h3 class="anchor-heading"><a id="listener_configuration" class="anchor-link"></a><a href="#listener_configuration">7.2 Listener Configuration</a></h3>
+
+    <p>In order to secure a Kafka cluster, it is necessary to secure the channels that are used to
+      communicate with the servers. Each server must define the set of listeners that are used to
+      receive requests from clients as well as other servers. Each listener may be configured
+      to authenticate clients using various mechanisms and to ensure traffic between the
+      server and the client is encrypted. This section provides a primer for the configuration
+      of listeners.</p>
+
+    <p>Kafka servers support listening for connections on multiple ports. This is configured through
+      the <code>listeners</code> property in the server configuration, which accepts a comma-separated
+      list of the listeners to enable. At least one listener must be defined on each server. The format
+      of each listener defined in <code>listeners</code> is given below:</p>
+	  
+    <pre class="line-numbers"><code class="language-text">{LISTENER_NAME}://{hostname}:{port}</code></pre>
+	    
+    <p>The <code>LISTENER_NAME</code> is usually a descriptive name which defines the purpose of
+      the listener. For example, many configurations use a separate listener for client traffic,
+      so they might refer to the corresponding listener as <code>CLIENT</code> in the configuration:</p>
+      
+    <pre class="line-numbers"><code class="language-text">listeners=CLIENT://localhost:9092</code></pre>
+      
+    <p>The security protocol of each listener is defined in a separate configuration:
+      <code>listener.security.protocol.map</code>. The value is a comma-separated list
+      of each listener mapped to its security protocol. For example, the following configuration
+      specifies that the <code>CLIENT</code> listener will use SSL while the
+      <code>BROKER</code> listener will use plaintext.</p>
+    
+    <pre class="line-numbers"><code class="language-text">listener.security.protocol.map=CLIENT:SSL,BROKER:PLAINTEXT</code></pre>
+	    
+    <p>Possible options for the security protocol are given below:</p>
+    <ol>
+      <li>PLAINTEXT</li>
+      <li>SSL</li>
+      <li>SASL_PLAINTEXT</li>
+      <li>SASL_SSL</li>
+    </ol>
+
+    <p>The plaintext protocol provides no security and does not require any additional configuration.
+      In the following sections, this document covers how to configure the remaining protocols.</p>
+
+    <p>If each required listener uses a separate security protocol, it is also possible to use the
+      security protocol name as the listener name in <code>listeners</code>. Using the example above,
+      we could skip the definition of the <code>CLIENT</code> and <code>BROKER</code> listeners
+      using the following definition:</p>
+    
+    <pre class="line-numbers"><code class="language-text">listeners=SSL://localhost:9092,PLAINTEXT://localhost:9093</code></pre>
+      
+    <p>However, we recommend that users provide explicit names for the listeners since it
+      makes the intended usage of each listener clearer.</p>
+
+    <p>Among the listeners in this list, it is possible to declare the listener to be used for
+      inter-broker communication by setting the <code>inter.broker.listener.name</code> configuration
+      to the name of the listener. The primary purpose of the inter-broker listener is
+      partition replication. If not defined, then the inter-broker listener is determined
+      by the security protocol defined by <code>security.inter.broker.protocol</code>, which
+      defaults to <code>PLAINTEXT</code>.</p>
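+
+    <p>For example (the listener names, host, and ports here are illustrative), a broker might expose a client
+      listener alongside a dedicated inter-broker listener like so:</p>
+
+    <pre class="line-numbers"><code class="language-text">listeners=CLIENT://broker1.example.com:9092,BROKER://broker1.example.com:9091
+listener.security.protocol.map=CLIENT:SASL_SSL,BROKER:SSL
+inter.broker.listener.name=BROKER</code></pre>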
+    
+    <p>For legacy clusters which rely on ZooKeeper to store cluster metadata, it is possible to
+      declare a separate listener to be used for metadata propagation from the active controller
+      to the brokers. This is defined by <code>control.plane.listener.name</code>. The active controller
+      will use this listener when it needs to push metadata updates to the brokers in the cluster.
+      The benefit of using a control plane listener is that it uses a separate processing thread,
+      which makes it less likely for application traffic to impede timely propagation of metadata changes
+      (such as partition leader and ISR updates). Note that the default value is null, which
+      means that the controller will use the same listener defined by <code>inter.broker.listener.name</code>.</p>
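+
+    <p>As an illustrative sketch (the listener names and ports are not prescriptive), a ZooKeeper-based broker with a
+      dedicated control plane listener might use:</p>
+
+    <pre class="line-numbers"><code class="language-text">listeners=CLIENT://broker1.example.com:9092,BROKER://broker1.example.com:9091,CONTROL_PLANE://broker1.example.com:9090
+listener.security.protocol.map=CLIENT:SASL_SSL,BROKER:SSL,CONTROL_PLANE:SSL
+inter.broker.listener.name=BROKER
+control.plane.listener.name=CONTROL_PLANE</code></pre>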
+    
+    <p>In a KRaft cluster, a broker is any server which has the <code>broker</code> role enabled
+      in <code>process.roles</code> and a controller is any server which has the <code>controller</code>
+      role enabled. Listener configuration depends on the role. The listener defined by
+      <code>inter.broker.listener.name</code> is used exclusively for requests between brokers.
+      Controllers, on the other hand, must use separate listener which is defined by the
+      <code>controller.listener.names</code> configuration. This cannot be set to the same
+      value as the inter-broker listener.</p>
+
+    <p>Controllers receive requests both from other controllers and from brokers. For
+      this reason, even if a server does not have the <code>controller</code> role enabled
+      (i.e. it is just a broker), it must still define the controller listener along with
+      any security properties that are needed to configure it. For example, we might
+      use the following configuration on a standalone broker:</p>
+      
+    <pre class="line-numbers"><code class="language-text">process.roles=broker
+listeners=BROKER://localhost:9092
+inter.broker.listener.name=BROKER
+controller.quorum.voters=0@localhost:9093
+controller.listener.names=CONTROLLER
+listener.security.protocol.map=BROKER:SASL_SSL,CONTROLLER:SASL_SSL</code></pre>
+
+    <p>The controller listener is still configured in this example to use the <code>SASL_SSL</code>
+      security protocol, but it is not included in <code>listeners</code> since the broker
+      does not expose the controller listener itself. The port that will be used in this case
+      comes from the <code>controller.quorum.voters</code> configuration, which defines
+      the complete list of controllers.</p>
+
+    <p>For KRaft servers which have both the broker and controller role enabled, the configuration
+      is similar. The only difference is that the controller listener must be included in
+      <code>listeners</code>:</p>
+    
+    <pre class="line-numbers"><code class="language-text">process.roles=broker,controller
+listeners=BROKER://localhost:9092,CONTROLLER://localhost:9093
+inter.broker.listener.name=BROKER
+controller.quorum.voters=0@localhost:9093
+controller.listener.names=CONTROLLER
+listener.security.protocol.map=BROKER:SASL_SSL,CONTROLLER:SASL_SSL</code></pre>
+
+    <p>It is a requirement for the port defined in <code>controller.quorum.voters</code> to
+      exactly match one of the exposed controller listeners. For example, here the
+      <code>CONTROLLER</code> listener is bound to port 9093. The connection string
+      defined by <code>controller.quorum.voters</code> must then also use port 9093,
+      as it does here.</p>
+
+    <p>The controller will accept requests on all listeners defined by <code>controller.listener.names</code>.
+      Typically there would be just one controller listener, but it is possible to have more.
+      For example, this provides a way to change the active listener from one port or security
+      protocol to another through a roll of the cluster (one roll to expose the new listener,
+      and one roll to remove the old listener). When multiple controller listeners are defined,
+      the first one in the list will be used for outbound requests.</p>
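+
+    <p>As a rough sketch of what the intermediate configuration on a controller might look like during such a
+      roll (the listener names and ports are illustrative), with the new listener listed first so that it is used
+      for outbound requests:</p>
+
+    <pre class="line-numbers"><code class="language-text">controller.listener.names=CONTROLLER_NEW,CONTROLLER
+listeners=CONTROLLER_NEW://controller1.example.com:9094,CONTROLLER://controller1.example.com:9093
+listener.security.protocol.map=CONTROLLER_NEW:SASL_SSL,CONTROLLER:PLAINTEXT</code></pre>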
+
+    <p>It is conventional in Kafka to use a separate listener for clients. This allows the
+      inter-cluster listeners to be isolated at the network level. In the case of the controller
+      listener in KRaft, the listener should be isolated since clients do not work with it
+      anyway. Clients are expected to connect to any other listener configured on a broker.
+      Any requests that are bound for the controller will be forwarded as described
+      <a href="#kraft_principal_forwarding">below</a></p>
+    
+    <p>In the following <a href="#security_ssl">section</a>, this document covers how to enable SSL
+      on a listener for encryption as well as authentication. The subsequent <a href="#security_sasl">section</a> will then
+      cover additional authentication mechanisms using SASL.</p>
+    
+    <h3 class="anchor-heading"><a id="security_ssl" class="anchor-link"></a><a href="#security_ssl">7.3 Encryption and Authentication using SSL</a></h3>
     Apache Kafka allows clients to use SSL for encryption of traffic as well as authentication. By default, SSL is disabled but can be turned on if needed.
     The following paragraphs explain in detail how to set up your own PKI infrastructure, use it to create certificates and configure Kafka to use these.
 
@@ -314,10 +443,8 @@ keyUsage               = digitalSignature, keyEncipherment</code></pre>
                 </li>
             </ol>
         </li>
+
         <li><h4 class="anchor-heading"><a id="security_configbroker" class="anchor-link"></a><a href="#security_configbroker">Configuring Kafka Brokers</a></h4>
-            Kafka Brokers support listening for connections on multiple ports.
-            We need to configure the following property in server.properties, which must have one or more comma-separated values:
-            <pre><code class="language-text">listeners</code></pre>
 
             If SSL is not enabled for inter-broker communication (see below for how to enable it), both PLAINTEXT and SSL ports will be necessary.
             <pre class="line-numbers"><code class="language-text">listeners=PLAINTEXT://host.name:port,SSL://host.name:port</code></pre>
@@ -397,7 +524,7 @@ ssl.key.password=test1234</code></pre>
 &gt; kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties</code></pre>
         </li>
     </ol>
-    <h3 class="anchor-heading"><a id="security_sasl" class="anchor-link"></a><a href="#security_sasl">7.3 Authentication using SASL</a></h3>
+    <h3 class="anchor-heading"><a id="security_sasl" class="anchor-link"></a><a href="#security_sasl">7.4 Authentication using SASL</a></h3>
 
     <ol>
         <li><h4 class="anchor-heading"><a id="security_sasl_jaasconfig" class="anchor-link"></a><a href="#security_sasl_jaasconfig">JAAS configuration</a></h4>
@@ -1135,8 +1262,12 @@ sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechani
         </li>
     </ol>
 
-    <h3 class="anchor-heading"><a id="security_authz" class="anchor-link"></a><a href="#security_authz">7.4 Authorization and ACLs</a></h3>
-    Kafka ships with a pluggable Authorizer and an out-of-box authorizer implementation that uses zookeeper to store all the acls. The Authorizer is configured by setting <tt>authorizer.class.name</tt> in server.properties. To enable the out of the box implementation use:
+    <h3 class="anchor-heading"><a id="security_authz" class="anchor-link"></a><a href="#security_authz">7.5 Authorization and ACLs</a></h3>
+    Kafka ships with a pluggable authorization framework, which is configured with the <tt>authorizer.class.name</tt> property in the server configuration.
+    Configured implementations must extend <code>org.apache.kafka.server.authorizer.Authorizer</code>.
+    Kafka provides default implementations which store ACLs in the cluster metadata (either ZooKeeper or the KRaft metadata log).
+
+    For ZooKeeper-based clusters, the provided implementation is configured as follows:
     <pre class="line-numbers"><code class="language-text">authorizer.class.name=kafka.security.authorizer.AclAuthorizer</code></pre>
     Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H on any Resource R matching ResourcePattern RP". You can read more about the acl structure in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface">KIP-11</a> and resource patterns in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-290%3A+Support+for+Prefixed+ACLs">KIP-290</a>. In order to add, remove or list acls you can use th [...]
     <pre class="line-numbers"><code class="language-text">allow.everyone.if.no.acl.found=true</code></pre>
@@ -1934,7 +2065,7 @@ bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminc
         </tbody>
     </table>
 
-    <h3 class="anchor-heading"><a id="security_rolling_upgrade" class="anchor-link"></a><a href="#security_rolling_upgrade">7.5 Incorporating Security Features in a Running Cluster</a></h3>
+    <h3 class="anchor-heading"><a id="security_rolling_upgrade" class="anchor-link"></a><a href="#security_rolling_upgrade">7.6 Incorporating Security Features in a Running Cluster</a></h3>
     You can secure a running cluster via one or more of the supported protocols discussed previously. This is done in phases:
     <p></p>
     <ul>
@@ -1944,7 +2075,7 @@ bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminc
         <li>A final incremental bounce to close the PLAINTEXT port.</li>
     </ul>
     <p></p>
-    The specific steps for configuring SSL and SASL are described in sections <a href="#security_ssl">7.2</a> and <a href="#security_sasl">7.3</a>.
+    The specific steps for configuring SSL and SASL are described in sections <a href="#security_ssl">7.3</a> and <a href="#security_sasl">7.4</a>.
     Follow these steps to enable security for your desired protocol(s).
     <p></p>
     The security implementation lets you configure different protocols for both broker-client and broker-broker communication.
@@ -1992,10 +2123,10 @@ security.inter.broker.protocol=SSL</code></pre>
     <pre class="line-numbers"><code class="language-text">listeners=SSL://broker1:9092,SASL_SSL://broker1:9093
 security.inter.broker.protocol=SSL</code></pre>
 
-    ZooKeeper can be secured independently of the Kafka cluster. The steps for doing this are covered in section <a href="#zk_authz_migration">7.6.2</a>.
+    ZooKeeper can be secured independently of the Kafka cluster. The steps for doing this are covered in section <a href="#zk_authz_migration">7.7.2</a>.
 
 
-    <h3 class="anchor-heading"><a id="zk_authz" class="anchor-link"></a><a href="#zk_authz">7.6 ZooKeeper Authentication</a></h3>
+    <h3 class="anchor-heading"><a id="zk_authz" class="anchor-link"></a><a href="#zk_authz">7.7 ZooKeeper Authentication</a></h3>
     ZooKeeper supports mutual TLS (mTLS) authentication beginning with the 3.5.x versions.
     Kafka supports authenticating to ZooKeeper with SASL and mTLS -- either individually or both together --
     beginning with version 2.5. See
@@ -2027,8 +2158,8 @@ security.inter.broker.protocol=SSL</code></pre>
         Use the <tt>-zk-tls-config-file &lt;file&gt;</tt> option (note the single-dash rather than double-dash)
         to set TLS configs for the <tt>zookeeper-shell.sh</tt> CLI tool.
     </p>
-    <h4 class="anchor-heading"><a id="zk_authz_new" class="anchor-link"></a><a href="#zk_authz_new">7.6.1 New clusters</a></h4>
-    <h5 class="anchor-heading"><a id="zk_authz_new_sasl" class="anchor-link"></a><a href="#zk_authz_new_sasl">7.6.1.1 ZooKeeper SASL Authentication</a></h5>
+    <h4 class="anchor-heading"><a id="zk_authz_new" class="anchor-link"></a><a href="#zk_authz_new">7.7.1 New clusters</a></h4>
+    <h5 class="anchor-heading"><a id="zk_authz_new_sasl" class="anchor-link"></a><a href="#zk_authz_new_sasl">7.7.1.1 ZooKeeper SASL Authentication</a></h5>
     To enable ZooKeeper SASL authentication on brokers, there are two necessary steps:
     <ol>
         <li> Create a JAAS login file and set the appropriate system property to point to it as described above</li>
@@ -2037,7 +2168,7 @@ security.inter.broker.protocol=SSL</code></pre>
 
     The metadata stored in ZooKeeper for the Kafka cluster is world-readable, but can only be modified by the brokers. The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of that data can cause cluster disruption. We also recommend limiting the access to ZooKeeper via network segmentation (only brokers and some admin tools need access to ZooKeeper).
 
-    <h5 class="anchor-heading"><a id="zk_authz_new_mtls" class="anchor-link"></a><a href="#zk_authz_new_mtls">7.6.1.2 ZooKeeper Mutual TLS Authentication</a></h5>
+    <h5 class="anchor-heading"><a id="zk_authz_new_mtls" class="anchor-link"></a><a href="#zk_authz_new_mtls">7.7.1.2 ZooKeeper Mutual TLS Authentication</a></h5>
     ZooKeeper mTLS authentication can be enabled with or without SASL authentication.  As mentioned above,
     when using mTLS alone, every broker and any CLI tools (such as the <a href="#zk_authz_migration">ZooKeeper Security Migration Tool</a>)
     must generally identify itself with the same Distinguished Name (DN) because it is the DN that is ACL'ed, which means
@@ -2084,7 +2215,7 @@ zookeeper.set.acl=true</code></pre>
     to a value different from the keystore password itself.
     Be sure to set the key password to be the same as the keystore password.
 
-    <h4 class="anchor-heading"><a id="zk_authz_migration" class="anchor-link"></a><a href="#zk_authz_migration">7.6.2 Migrating clusters</a></h4>
+    <h4 class="anchor-heading"><a id="zk_authz_migration" class="anchor-link"></a><a href="#zk_authz_migration">7.7.2 Migrating clusters</a></h4>
     If you are running a version of Kafka that does not support security or simply with security disabled, and you want to make the cluster secure, then you need to execute the following steps to enable ZooKeeper authentication with minimal disruption to your operations:
     <ol>
         <li>Enable SASL and/or mTLS authentication on ZooKeeper.  If enabling mTLS, you would now have both a non-TLS port and a TLS port, like this:
@@ -2114,17 +2245,17 @@ ssl.trustStore.password=zk-ts-passwd</code></pre>
     <pre class="line-numbers"><code class="language-bash">&gt; bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181</code></pre>
     <p>Run this to see the full list of parameters:</p>
     <pre class="line-numbers"><code class="language-bash">&gt; bin/zookeeper-security-migration.sh --help</code></pre>
-    <h4 class="anchor-heading"><a id="zk_authz_ensemble" class="anchor-link"></a><a href="#zk_authz_ensemble">7.6.3 Migrating the ZooKeeper ensemble</a></h4>
+    <h4 class="anchor-heading"><a id="zk_authz_ensemble" class="anchor-link"></a><a href="#zk_authz_ensemble">7.7.3 Migrating the ZooKeeper ensemble</a></h4>
     It is also necessary to enable SASL and/or mTLS authentication on the ZooKeeper ensemble. To do it, we need to perform a rolling restart of the server and set a few properties. See above for mTLS information.  Please refer to the ZooKeeper documentation for more detail:
     <ol>
         <li><a href="https://zookeeper.apache.org/doc/r3.5.7/zookeeperProgrammers.html#sc_ZooKeeperAccessControl">Apache ZooKeeper documentation</a></li>
         <li><a href="https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL">Apache ZooKeeper wiki</a></li>
     </ol>
-    <h4 class="anchor-heading"><a id="zk_authz_quorum" class="anchor-link"></a><a href="#zk_authz_quorum">7.6.4 ZooKeeper Quorum Mutual TLS Authentication</a></h4>
+    <h4 class="anchor-heading"><a id="zk_authz_quorum" class="anchor-link"></a><a href="#zk_authz_quorum">7.7.4 ZooKeeper Quorum Mutual TLS Authentication</a></h4>
     It is possible to enable mTLS authentication between the ZooKeeper servers themselves.
     Please refer to the <a href="https://zookeeper.apache.org/doc/r3.5.7/zookeeperAdmin.html#Quorum+TLS">ZooKeeper documentation</a> for more detail.
 
-    <h3 class="anchor-heading"><a id="zk_encryption" class="anchor-link"></a><a href="#zk_encryption">7.7 ZooKeeper Encryption</a></h3>
+    <h3 class="anchor-heading"><a id="zk_encryption" class="anchor-link"></a><a href="#zk_encryption">7.8 ZooKeeper Encryption</a></h3>
     ZooKeeper connections that use mutual TLS are encrypted.
    Beginning with ZooKeeper version 3.5.7 (the version shipped with Kafka version 2.5) ZooKeeper supports a server-side config
     <tt>ssl.clientAuth</tt> (case-insensitively: <tt>want</tt>/<tt>need</tt>/<tt>none</tt> are the valid options, the default is <tt>need</tt>),
diff --git a/documentation.html b/documentation.html
index 1eea2c2d..de631f9a 100644
--- a/documentation.html
+++ b/documentation.html
@@ -1,2 +1,2 @@
 <!-- should always link the latest release's documentation -->
-<!--#include virtual="32/documentation.html" -->
+<!--#include virtual="33/documentation.html" -->
diff --git a/downloads.html b/downloads.html
index d4539d11..453f7945 100644
--- a/downloads.html
+++ b/downloads.html
@@ -6,7 +6,46 @@
 	<div class="right">
     <h1>Download</h1>
 
-    <p>3.2.3 is the latest release. The current stable version is 3.2.3.</p>
+    <p>3.3.0 is the latest release. The current stable version is 3.3.0.</p>
+
+    <p>
+    You can verify your download by following these <a href="https://www.apache.org/info/verification.html">procedures</a> and using these <a href="https://downloads.apache.org/kafka/KEYS">KEYS</a>.
+    </p>
+
+    <span id="3.3.0"></span>
+    <h3 class="download-version">3.3.0<a href="#3.3.0"><i class="fas fa-link " style="color:#053ce2"></i></a></h3>
+    <ul>
+        <li>
+            Released Sept 28, 2022
+        </li>
+        <li>
+            <a href="https://downloads.apache.org/kafka/3.3.0/RELEASE_NOTES.html">Release Notes</a>
+        </li>
+        <li>
+            Source download: <a href="https://downloads.apache.org/kafka/3.3.0/kafka-3.3.0-src.tgz">kafka-3.3.0-src.tgz</a> (<a href="https://downloads.apache.org/kafka/3.3.0/kafka-3.3.0-src.tgz.asc">asc</a>, <a href="https://downloads.apache.org/kafka/3.3.0/kafka-3.3.0-src.tgz.sha512">sha512</a>)
+        </li>
+        <li>
+            Binary downloads:
+            <ul>
+                <li>Scala 2.12 &nbsp;- <a href="https://downloads.apache.org/kafka/3.3.0/kafka_2.12-3.3.0.tgz">kafka_2.12-3.3.0.tgz</a> (<a href="https://downloads.apache.org/kafka/3.3.0/kafka_2.12-3.3.0.tgz.asc">asc</a>, <a href="https://downloads.apache.org/kafka/3.3.0/kafka_2.12-3.3.0.tgz.sha512">sha512</a>)</li>
+                <li>Scala 2.13 &nbsp;- <a href="https://downloads.apache.org/kafka/3.3.0/kafka_2.13-3.3.0.tgz">kafka_2.13-3.3.0.tgz</a> (<a href="https://downloads.apache.org/kafka/3.3.0/kafka_2.13-3.3.0.tgz.asc">asc</a>, <a href="https://downloads.apache.org/kafka/3.3.0/kafka_2.13-3.3.0.tgz.sha512">sha512</a>)</li>
+            </ul>
+            We build for multiple versions of Scala. This only matters if you are using Scala and you want a version
+            built for the same Scala version you use. Otherwise any version should work (2.13 is recommended).
+        </li>
+    </ul>
+
+    <p>
+        Kafka 3.3.0 includes a number of significant new features. Here is a summary of some notable changes:
+    </p>
+
+    <ul>
+        <li>TODO</li>
+    </ul>
+
+    <p>
+        For more information, please read the detailed <a href="https://downloads.apache.org/kafka/3.3.0/RELEASE_NOTES.html">Release Notes</a>.
+    </p>
 
     <p>
     You can verify your download by following these <a href="https://www.apache.org/info/verification.html">procedures</a> and using these <a href="https://downloads.apache.org/kafka/KEYS">KEYS</a>.
@@ -19,16 +58,16 @@
             Released Sept 19, 2022
         </li>
         <li>
-            <a href="https://downloads.apache.org/kafka/3.2.3/RELEASE_NOTES.html">Release Notes</a>
+            <a href="https://archive.apache.org/dist/kafka/3.2.3/RELEASE_NOTES.html">Release Notes</a>
         </li>
         <li>
-            Source download: <a href="https://downloads.apache.org/kafka/3.2.3/kafka-3.2.3-src.tgz">kafka-3.2.3-src.tgz</a> (<a href="https://downloads.apache.org/kafka/3.2.3/kafka-3.2.3-src.tgz.asc">asc</a>, <a href="https://downloads.apache.org/kafka/3.2.3/kafka-3.2.3-src.tgz.sha512">sha512</a>)
+            Source download: <a href="https://archive.apache.org/dist/kafka/3.2.3/kafka-3.2.3-src.tgz">kafka-3.2.3-src.tgz</a> (<a href="https://archive.apache.org/dist/kafka/3.2.3/kafka-3.2.3-src.tgz.asc">asc</a>, <a href="https://archive.apache.org/dist/kafka/3.2.3/kafka-3.2.3-src.tgz.sha512">sha512</a>)
         </li>
         <li>
             Binary downloads:
             <ul>
-                <li>Scala 2.12 &nbsp;- <a href="https://downloads.apache.org/kafka/3.2.3/kafka_2.12-3.2.3.tgz">kafka_2.12-3.2.3.tgz</a> (<a href="https://downloads.apache.org/kafka/3.2.3/kafka_2.12-3.2.3.tgz.asc">asc</a>, <a href="https://downloads.apache.org/kafka/3.2.3/kafka_2.12-3.2.3.tgz.sha512">sha512</a>)</li>
-                <li>Scala 2.13 &nbsp;- <a href="https://downloads.apache.org/kafka/3.2.3/kafka_2.13-3.2.3.tgz">kafka_2.13-3.2.3.tgz</a> (<a href="https://downloads.apache.org/kafka/3.2.3/kafka_2.13-3.2.3.tgz.asc">asc</a>, <a href="https://downloads.apache.org/kafka/3.2.3/kafka_2.13-3.2.3.tgz.sha512">sha512</a>)</li>
+                <li>Scala 2.12 &nbsp;- <a href="https://archive.apache.org/dist/kafka/3.2.3/kafka_2.12-3.2.3.tgz">kafka_2.12-3.2.3.tgz</a> (<a href="https://archive.apache.org/dist/kafka/3.2.3/kafka_2.12-3.2.3.tgz.asc">asc</a>, <a href="https://archive.apache.org/dist/kafka/3.2.3/kafka_2.12-3.2.3.tgz.sha512">sha512</a>)</li>
+                <li>Scala 2.13 &nbsp;- <a href="https://archive.apache.org/dist/kafka/3.2.3/kafka_2.13-3.2.3.tgz">kafka_2.13-3.2.3.tgz</a> (<a href="https://archive.apache.org/dist/kafka/3.2.3/kafka_2.13-3.2.3.tgz.asc">asc</a>, <a href="https://archive.apache.org/dist/kafka/3.2.3/kafka_2.13-3.2.3.tgz.sha512">sha512</a>)</li>
             </ul>
             We build for multiple versions of Scala. This only matters if you are using Scala and you want a version
             built for the same Scala version you use. Otherwise any version should work (2.13 is recommended).
@@ -37,7 +76,7 @@
 
     <p>
         Kafka 3.2.3 fixes <a href="cve-list#CVE-2022-34917">CVE-2022-34917</a> and 7 other issues since the 3.2.1 release.
-        For more information, please read the detailed <a href="https://downloads.apache.org/kafka/3.2.3/RELEASE_NOTES.html">Release Notes</a>.
+        For more information, please read the detailed <a href="https://archive.apache.org/dist/kafka/3.2.3/RELEASE_NOTES.html">Release Notes</a>.
     </p>
 
     <span id="3.2.2"></span>