Posted to commits@kafka.apache.org by gu...@apache.org on 2018/05/16 03:26:30 UTC

[kafka] branch trunk updated: MINOR: doc change for deprecate removal (#5006)

This is an automated email from the ASF dual-hosted git repository.

guozhang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
     new c9161af  MINOR: doc change for deprecate removal (#5006)
c9161af is described below

commit c9161afda998e42d8ee9ebbfe5e647ff337bd19b
Author: Guozhang Wang <wa...@gmail.com>
AuthorDate: Tue May 15 20:26:19 2018 -0700

    MINOR: doc change for deprecate removal (#5006)
    
    Reviewers: John Roesler <jo...@confluent.io>, Bill Bejeck <bi...@confluent.io>, Matthias J. Sax <ma...@confluent.io>
---
 docs/streams/upgrade-guide.html | 100 +++++++++++++++++++---------------------
 docs/upgrade.html               |  69 ++++-----------------------
 2 files changed, 58 insertions(+), 111 deletions(-)

diff --git a/docs/streams/upgrade-guide.html b/docs/streams/upgrade-guide.html
index 646908d..f6b237f 100644
--- a/docs/streams/upgrade-guide.html
+++ b/docs/streams/upgrade-guide.html
@@ -34,70 +34,46 @@
     </div>
 
     <p>
-        If you want to upgrade from 1.1.x to 2.0.0 and you have customized window store implementations on the <code>ReadOnlyWindowStore</code> interface
-        you'd need to update your code to incorporate the newly added public APIs; otherwise you don't need to make any code changes.
-        See <a href="#streams_api_changes_200">below</a> for a complete list of 2.0.0 API and semantic changes that allow you to advance your application and/or simplify your code base.
+        Upgrading from any older version to 2.0.0 is possible: (1) you need to update your code accordingly, because there are some minor incompatible API changes compared to older
+        releases (the code changes are expected to be minimal; please see below for the details), and
+        (2) upgrading to 2.0.0 online requires two rolling bounces.
+        For (2), in the first rolling bounce phase users need to set the config <code>upgrade.from="older version"</code> (possible values are <code>"0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", and "1.1"</code>)
+        (cf. <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-268%3A+Simplify+Kafka+Streams+Rebalance+Metadata+Upgrade">KIP-268</a>):
     </p>
+    <ul>
+        <li> prepare your application instances for a rolling bounce and make sure that the config <code>upgrade.from</code> is set to the version you are upgrading from to the new version 2.0.0 (see the configuration sketch after this list)</li>
+        <li> bounce each instance of your application once </li>
+        <li> prepare your newly deployed 2.0.0 application instances for a second round of rolling bounces; make sure to remove the value for config <code>upgrade.from</code> </li>
+        <li> bounce each instance of your application once more to complete the upgrade </li>
+    </ul>
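
For illustration only, here is a minimal sketch of what the two-bounce configuration could look like in code. The application id, bootstrap servers, and topology below are placeholders rather than part of the documented procedure; only the upgrade.from handling mirrors the steps above.

    import java.util.Properties;

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class UpgradeBounceSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // placeholder app id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder broker

            // First rolling bounce: tell the new 2.0.0 code which version it is upgrading from,
            // e.g. "1.1" (other accepted values are listed above).
            props.put(StreamsConfig.UPGRADE_FROM_CONFIG, "1.1");
            // Second rolling bounce: remove the upgrade.from setting (delete the line above)
            // and bounce every instance once more.

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input-topic").to("output-topic");                   // placeholder topology

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
        }
    }
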
+    <p> As an alternative, an offline upgrade is also possible. Upgrading from any version as old as 0.10.0.x to 2.0.0 in offline mode requires the following steps: </p>
+    <ul>
+        <li> stop all old (e.g., 0.10.0.x) application instances </li>
+        <li> update your code and swap old code and jar file with new code and new jar file </li>
+        <li> restart all new (2.0.0) application instances </li>
+    </ul>
 
     <p>
-        If you want to upgrade from 1.0.x to 2.0.0 and you have customized window store implementations on the <code>ReadOnlyWindowStore</code> interface
-        you'd need to update your code to incorporate the newly added public APIs.
-        Otherwise, if you are using Java 7 you don't need to make any code changes as the public API is fully backward compatible;
-        but if you are using Java 8 method references in your Kafka Streams code you might need to update your code to resolve method ambiguities.
-        Hot-swaping the jar-file only might not work for this case.
-        See below a complete list of <a href="#streams_api_changes_200">2.0.0</a> and <a href="#streams_api_changes_110">1.1.0</a>
-        API and semantic changes that allow you to advance your application and/or simplify your code base.
+        Note that brokers must be on version 0.10.1 or higher to run a Kafka Streams application version 0.10.1 or higher;
+        the on-disk message format must be 0.10 or higher to run a Kafka Streams application version 1.0 or higher.
+        For Kafka Streams 0.10.0, broker version 0.10.0 or higher is required.
     </p>
 
     <p>
-        If you want to upgrade from 0.10.2.x or 0.11.0.x to 2.0.x and you have customized window store implementations on the <code>ReadOnlyWindowStore</code> interface
-        you'd need to update your code to incorporate the newly added public APIs.
-        Otherwise, if you are using Java 7 you don't need to do any code changes as the public API is fully backward compatible;
-        but if you are using Java 8 method references in your Kafka Streams code you might need to update your code to resolve method ambiguities.
-        However, some public APIs were deprecated and thus it is recommended to update your code eventually to allow for future upgrades.
-        See below a complete list of <a href="#streams_api_changes_200">2.0</a>, <a href="#streams_api_changes_110">1.1</a>,
-        <a href="#streams_api_changes_100">1.0</a>, and <a href="#streams_api_changes_0110">0.11.0</a> API
-        and semantic changes that allow you to advance your application and/or simplify your code base, including the usage of new features.
-        Additionally, Streams API 1.1.x requires broker on-disk message format version 0.10 or higher; thus, you need to make sure that the message
-        format is configured correctly before you upgrade your Kafka Streams application.
+        In 2.0.0 we have added a few new APIs on the <code>ReadOnlyWindowStore</code> interface (for details please read <a href="#streams_api_changes_200">Streams API changes</a> below).
+        If you have customized window store implementations that extend the <code>ReadOnlyWindowStore</code> interface you will need to update your code to implement the newly added methods.
     </p>
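
As a rough illustration of the interface this paragraph refers to, here is a hedged sketch of querying a window store via interactive queries. The store name, types, and timestamps are placeholders, and the single-record fetch(key, time) overload is assumed to be among the methods added in 2.0.0.

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyWindowStore;
    import org.apache.kafka.streams.state.WindowStoreIterator;

    public class WindowStoreQuerySketch {
        public static void queryCounts(KafkaStreams streams, long windowStart, long from, long to) {
            // "counts-store" is a placeholder queryable store name.
            ReadOnlyWindowStore<String, Long> store =
                    streams.store("counts-store", QueryableStoreTypes.<String, Long>windowStore());

            Long single = store.fetch("alice", windowStart);                       // single-window lookup
            try (WindowStoreIterator<Long> range = store.fetch("alice", from, to)) {  // pre-existing range lookup
                range.forEachRemaining(kv -> System.out.println(kv.key + " -> " + kv.value));
            }
            System.out.println("value at window start: " + single);
        }
    }
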
 
     <p>
-        If you want to upgrade from 0.10.1.x to 2.0.x see the Upgrade Sections for <a href="/{{version}}/documentation/#upgrade_1020_streams"><b>0.10.2</b></a>,
-        <a href="/{{version}}/documentation/#upgrade_1100_streams"><b>0.11.0</b></a>,
-        <a href="/{{version}}/documentation/#upgrade_100_streams"><b>1.0</b></a>,
-        <a href="/{{version}}/documentation/#upgrade_110_streams"><b>1.1</b></a>, and
-        <a href="/{{version}}/documentation/#upgrade_200_streams"><b>2.0</b></a>.
-        Note, that a brokers on-disk message format must be on version 0.10 or higher to run a Kafka Streams application version 2.0 or higher.
-        See below a complete list of <a href="#streams_api_changes_0102">0.10.2</a>, <a href="#streams_api_changes_0110">0.11.0</a>,
-        <a href="#streams_api_changes_100">1.0</a>, <a href="#streams_api_changes_110">1.1</a>, and <a href="#streams_api_changes_200">2.0</a>
-        API and semantical changes that allow you to advance your application and/or simplify your code base, including the usage of new features.
+        In 2.0.0 we have also removed some public APIs that were deprecated prior to 1.0.x.
+        See below for a detailed list of the removed APIs.
     </p>
-
     <p>
-        Upgrading from 0.10.0.x to 2.0.0 directly is also possible.
-        Note, that a brokers must be on version 0.10.1 or higher and on-disk message format must be on version 0.10 or higher
-        to run a Kafka Streams application version 2.0 or higher.
-        See <a href="#streams_api_changes_0101">Streams API changes in 0.10.1</a>, <a href="#streams_api_changes_0102">Streams API changes in 0.10.2</a>,
-        <a href="#streams_api_changes_0110">Streams API changes in 0.11.0</a>, <a href="#streams_api_changes_100">Streams API changes in 1.0</a>, and
-        <a href="#streams_api_changes_110">Streams API changes in 1.1</a>, and <a href="#streams_api_changes_200">Streams API changes in 2.0</a>
-        for a complete list of API changes.
-        Upgrading to 2.0.0 requires two rolling bounces with config <code>upgrade.from="0.10.0"</code> set for first upgrade phase
-        (cf. <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-268%3A+Simplify+Kafka+Streams+Rebalance+Metadata+Upgrade">KIP-268</a>).
-        As an alternative, an offline upgrade is also possible.
+        In addition, if you are using Java 8 method references in your Kafka Streams code you might need to update your code to resolve method ambiguities.
+        Hot-swapping the jar file alone might not work in this case.
+        See below for a complete list of <a href="#streams_api_changes_200">2.0.0</a>
+        API and semantic changes that allow you to advance your application and/or simplify your code base.
     </p>
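
To make the method-reference note concrete, here is a self-contained, hypothetical illustration (not actual Kafka classes) of the kind of ambiguity a newly added overload can introduce, and how an explicit lambda resolves it.

    import java.util.function.BiFunction;
    import java.util.function.Function;

    public class MethodRefAmbiguitySketch {
        // Two overloads of the same method, analogous to a DSL method that gained a new
        // functional-interface overload in a later release.
        static String run(Function<String, String> f) { return f.apply("value"); }
        static String run(BiFunction<String, String, String> f) { return f.apply("key", "value"); }

        // Two overloads of the referenced helper; a method reference to fmt matches both run overloads.
        static String fmt(String v) { return v; }
        static String fmt(String k, String v) { return k + "=" + v; }

        public static void main(String[] args) {
            // run(MethodRefAmbiguitySketch::fmt);     // no longer compiles: ambiguous overload
            String s = run((String v) -> fmt(v));      // an explicit one-argument lambda fixes the arity
            System.out.println(s);                     // prints "value"
        }
    }
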
-    <ul>
-        <li> prepare your application instances for a rolling bounce and make sure that config <code>upgrade.from</code> is set to <code>"0.10.0"</code> for new version 2.0.0</li>
-        <li> bounce each instance of your application once </li>
-        <li> prepare your newly deployed 2.0.0 application instances for a second round of rolling bounces; make sure to remove the value for config <code>upgrade.mode</code> </li>
-        <li> bounce each instance of your application once more to complete the upgrade </li>
-    </ul>
-    <p> Upgrading from 0.10.0.x to 2.0.0 in offline mode: </p>
-    <ul>
-        <li> stop all old (0.10.0.x) application instances </li>
-        <li> update your code and swap old code and jar file with new code and new jar file </li>
-        <li> restart all new (2.0.0) application instances </li>
-    </ul>
 
     <h3><a id="streams_api_changes_200" href="#streams_api_changes_200">Streams API changes in 2.0.0</a></h3>
     <p>
@@ -169,6 +145,26 @@
         <a href="/{{version}}/documentation/streams/developer-guide/dsl-api.html#scala-dsl">Kafka Streams DSL for Scala documentation</a> and
         <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-270+-+A+Scala+Wrapper+Library+for+Kafka+Streams">KIP-270</a>.
     </p>
+    <p>
+        We have removed the following deprecated APIs (a brief migration sketch follows this list):
+    </p>
+    <ul>
+        <li><code>KafkaStreams#toString</code> no longer returns the topology and runtime metadata; to get topology metadata users can call <code>Topology#describe()</code> and to get thread runtime metadata users can call <code>KafkaStreams#localThreadsMetadata</code> (they are deprecated since 1.0.0).
+            For detailed guidance on how to update your code please read <a href="#streams_api_changes_100">here</a></li>
+        <li><code>TopologyBuilder</code> and <code>KStreamBuilder</code> are removed and replaced by <code>Topology</code> and <code>StreamsBuilder</code> respectively (they are deprecated since 1.0.0).
+            For detailed guidance on how to update your code please read <a href="#streams_api_changes_100">here</a></li>
+        <li><code>StateStoreSupplier</code> is removed and replaced with <code>StoreBuilder</code> (it is deprecated since 1.0.0);
+            and the corresponding <code>Stores#create</code> and <code>KStream, KTable, KGroupedStream</code> overloaded functions that use it have also been removed.
+            For detailed guidance on how to update your code please read <a href="#streams_api_changes_100">here</a></li>
+        <li><code>KStream, KTable, KGroupedStream</code> overloaded functions that require serdes and other specifications explicitly are removed and replaced with simpler overloaded functions that use <code>Consumed, Produced, Serialized, Materialized, Joined</code> (they are deprecated since 1.0.0).
+            For detailed guidance on how to update your code please read <a href="#streams_api_changes_100">here</a></li>
+        <li><code>Processor#punctuate</code>, <code>ValueTransformer#punctuate</code>, and <code>ProcessorContext#schedule(long)</code> are removed and replaced by <code>ProcessorContext#schedule(long, PunctuationType, Punctuator)</code> (they are deprecated since 1.0.0). </li>
+        <li>The second, <code>boolean</code>-typed parameter <code>loggingEnabled</code> in <code>ProcessorContext#register</code> has been removed; users can now use <code>StoreBuilder#withLoggingEnabled</code> and <code>withLoggingDisabled</code> to specify the behavior when they create the state store. </li>
+        <li><code>KTable#writeAsText, print, foreach, to, through</code> are removed; users can call <code>KTable#toStream()</code> followed by the corresponding <code>KStream</code> methods instead for the same purpose (they are deprecated since 0.11.0.0).
+            For detailed list of removed APIs please read <a href="#streams_api_changes_0110">here</a></li>
+        <li><code>StreamsConfig#KEY_SERDE_CLASS_CONFIG, VALUE_SERDE_CLASS_CONFIG, TIMESTAMP_EXTRACTOR_CLASS_CONFIG</code> are removed and replaced with <code>StreamsConfig#DEFAULT_KEY_SERDE_CLASS_CONFIG, DEFAULT_VALUE_SERDE_CLASS_CONFIG, DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG</code> respectively (they are deprecated since 0.11.0.0). </li>
+        <li><code>StreamsConfig#ZOOKEEPER_CONNECT_CONFIG</code> is removed as we no longer need the ZooKeeper dependency in Streams (it is deprecated since 0.10.2.0). </li>
+    </ul>
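
As a rough before-and-after sketch of a few of the migrations listed above: the topic names, store name, and serdes are placeholders, and the removed calls appear only in comments since they no longer compile against 2.0.0.

    import java.util.HashMap;
    import java.util.Properties;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.Consumed;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.kstream.Produced;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;

    public class DeprecationMigrationSketch {
        public static Topology buildTopology() {
            // Old: new KStreamBuilder() / new TopologyBuilder()  ->  New: StreamsBuilder / Topology
            StreamsBuilder builder = new StreamsBuilder();

            // Old: builder.stream(Serdes.String(), Serdes.String(), "input-topic")
            // New: explicit serde overloads are replaced by Consumed / Produced
            builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
                   .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

            // Old punctuation: Processor#punctuate(long) driven by context.schedule(1000L)
            // New (inside Processor#init): context.schedule(1000L, PunctuationType.WALL_CLOCK_TIME, ts -> { /* ... */ });

            return builder.build();   // Topology#describe() replaces the old KafkaStreams#toString() topology dump
        }

        public static StoreBuilder<KeyValueStore<String, Long>> storeBuilder() {
            // Old: StateStoreSupplier via Stores#create  ->  New: StoreBuilder, with logging configured on the builder
            return Stores.keyValueStoreBuilder(
                            Stores.persistentKeyValueStore("counts"),   // placeholder store name
                            Serdes.String(), Serdes.Long())
                         .withLoggingEnabled(new HashMap<>());          // replaces the boolean flag on ProcessorContext#register
        }

        public static Properties config() {
            Properties props = new Properties();
            // Old: StreamsConfig.KEY_SERDE_CLASS_CONFIG / VALUE_SERDE_CLASS_CONFIG
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            return props;
        }
    }
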
 
     <h3><a id="streams_api_changes_110" href="#streams_api_changes_110">Streams API changes in 1.1.0</a></h3>
     <p>
diff --git a/docs/upgrade.html b/docs/upgrade.html
index f15191d..00f7ffe 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -19,7 +19,7 @@
 
 <script id="upgrade-template" type="text/x-handlebars-template">
 
-<h4><a id="upgrade_2_0_0" href="#upgrade_2_0_0">Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, 1.0.x, 1.1.x, or 1.2.x to 2.0.0</a></h4>
+<h4><a id="upgrade_2_0_0" href="#upgrade_2_0_0">Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, 1.0.x, or 1.1.x to 2.0.0</a></h4>
 <p>Kafka 2.0.0 introduces wire protocol changes. By following the recommended rolling upgrade plan below,
     you guarantee no downtime during the upgrade. However, please review the <a href="#upgrade_200_notable">notable changes in 2.0.0</a> before upgrading.
 </p>
@@ -36,7 +36,7 @@
             <li>log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION  (See <a href="#upgrade_10_performance_impact">potential performance impact
                 following the upgrade</a> for the details on what this configuration does.)</li>
         </ul>
-        If you are upgrading from 0.11.0.x, 1.0.x, 1.1.x or 1.2.x and you have not overridden the message format, then you only need to override
+        If you are upgrading from 0.11.0.x, 1.0.x, 1.1.x, or 1.2.x and you have not overridden the message format, then you only need to override
         the inter-broker protocol format.
         <ul>
             <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (0.11.0, 1.0, 1.1, 1.2).</li>
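
For example, during the first phase of the rolling upgrade the server.properties overrides on each broker might look like the following sketch, assuming the cluster currently runs 1.1 and the message format was never overridden (the versions shown are illustrative).

    # Illustrative overrides for phase one of the rolling upgrade (cluster currently on 1.1,
    # message format never overridden, so only the inter-broker protocol needs pinning).
    inter.broker.protocol.version=1.1
    # log.message.format.version=1.1   # only needed if the message format was previously overridden
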
@@ -65,59 +65,6 @@
 
 <h5><a id="upgrade_200_notable" href="#upgrade_200_notable">Notable changes in 2.0.0</a></h5>
 <ul>
-</ul>
-
-<h5><a id="upgrade_200_new_protocols" href="#upgrade_200_new_protocols">New Protocol Versions</a></h5>
-<ul>
-    <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-279%3A+Fix+log+divergence+between+leader+and+follower+after+fast+leader+fail+over">KIP-279</a>: OffsetsForLeaderEpochResponse v1 introduces a partition-level <code>leader_epoch</code> field. </li>
-</ul>
-
-<h4><a id="upgrade_2_0_0" href="#upgrade_2_0_0">Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, 1.0.x, or 1.1.x to 2.0.x</a></h4>
-<p>Kafka 2.0.0 introduces wire protocol changes. By following the recommended rolling upgrade plan below,
-    you guarantee no downtime during the upgrade. However, please review the <a href="#upgrade_200_notable">notable changes in 2.0.0</a> before upgrading.
-</p>
-
-<p><b>For a rolling upgrade:</b></p>
-
-<ol>
-    <li> Update server.properties on all brokers and add the following properties. CURRENT_KAFKA_VERSION refers to the version you
-        are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
-        overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
-        to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
-        <ul>
-            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1).</li>
-            <li>log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION  (See <a href="#upgrade_10_performance_impact">potential performance impact
-                following the upgrade</a> for the details on what this configuration does.)</li>
-        </ul>
-        If you are upgrading from 0.11.0.x, 1.0.x, or 1.1.x and you have not overridden the message format, then you only need to override
-        the inter-broker protocol format.
-        <ul>
-            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (0.11.0, 1.0, 1.1).</li>
-        </ul>
-    </li>
-    <li> Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. </li>
-    <li> Once the entire cluster is upgraded, bump the protocol version by editing <code>inter.broker.protocol.version</code> and setting it to 1.1.
-    <li> Restart the brokers one by one for the new protocol version to take effect.</li>
-    <li> If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
-        upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
-        change log.message.format.version to 2.0 on each broker and restart them one by one. Note that the older Scala consumer
-        does not support the new message format introduced in 0.11, so to avoid the performance cost of down-conversion (or to
-        take advantage of <a href="#upgrade_11_exactly_once_semantics">exactly once semantics</a>), the newer Java consumer must be used.</li>
-</ol>
-
-<p><b>Additional Upgrade Notes:</b></p>
-
-<ol>
-    <li>If you are willing to accept downtime, you can simply take all the brokers down, update the code and start them back up. They will start
-        with the new protocol by default.</li>
-    <li>Bumping the protocol version and restarting can be done any time after the brokers are upgraded. It does not have to be immediately after.
-        Similarly for the message format version.</li>
-    <li>If you are using Java8 method references in your Kafka Streams code you might need to update your code to resolve method ambiguties.
-        Hot-swapping the jar-file only might not work.</li>
-</ol>
-
-<h5><a id="upgrade_200_notable" href="#upgrade_200_notable">Notable changes in 2.0.0</a></h5>
-<ul>
     <li><a href="https://cwiki.apache.org/confluence/x/oYtjB">KIP-186</a> increases the default offset retention time from 1 day to 7 days. This makes it less likely to "lose" offsets in an application that commits infrequently. It also increases the active set of offsets and therefore can increase memory usage on the broker. Note that the console consumer currently enables offset commit by default and can be the source of a large number of offsets which this change will now preserve for [...]
     <li><a href="https://issues.apache.org/jira/browse/KAFKA-5674">KAFKA-5674</a> extends the lower interval of <code>max.connections.per.ip minimum</code> to zero and therefore allows IP-based filtering of inbound connections.</li>
     <li><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric">KIP-272</a>
@@ -132,16 +79,20 @@
 </ul>
 
 <h5><a id="upgrade_200_new_protocols" href="#upgrade_200_new_protocols">New Protocol Versions</a></h5>
-<ul></ul>
+<ul>
+    <li> <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-279%3A+Fix+log+divergence+between+leader+and+follower+after+fast+leader+fail+over">KIP-279</a>: OffsetsForLeaderEpochResponse v1 introduces a partition-level <code>leader_epoch</code> field. </li>
+</ul>
+
 
 <h5><a id="upgrade_200_streams" href="#upgrade_200_streams">Upgrading a 2.0.0 Kafka Streams Application</a></h5>
 <ul>
     <li> Upgrading your Streams application from 1.1.0 to 2.0.0 does not require a broker upgrade.
          A Kafka Streams 2.0.0 application can connect to 2.0, 1.1, 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though). </li>
-    <li> See <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_200">Streams API changes in 2.0.0</a> for more details. </li>
+    <li> Note that in 2.0 we have removed the public APIs that were deprecated prior to 1.0; users relying on those deprecated APIs need to make code changes accordingly.
+         See <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_200">Streams API changes in 2.0.0</a> for more details. </li>
 </ul>
 
-<h4><a id="upgrade_1_1_0" href="#upgrade_1_1_0">Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x or 1.0.x to 1.1.x</a></h4>
+<h4><a id="upgrade_1_1_0" href="#upgrade_1_1_0">Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x, or 1.0.x to 1.1.x</a></h4>
 <p>Kafka 1.1.0 introduces wire protocol changes. By following the recommended rolling upgrade plan below,
     you guarantee no downtime during the upgrade. However, please review the <a href="#upgrade_110_notable">notable changes in 1.1.0</a> before upgrading.
 </p>
@@ -938,7 +889,7 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9
     <li> The new consumer API has been marked stable. </li>
 </ul>
 
-<h4><a id="upgrade_9" href="#upgrade_9">Upgrading from 0.8.0, 0.8.1.X or 0.8.2.X to 0.9.0.0</a></h4>
+<h4><a id="upgrade_9" href="#upgrade_9">Upgrading from 0.8.0, 0.8.1.X, or 0.8.2.X to 0.9.0.0</a></h4>
 
 0.9.0.0 has <a href="#upgrade_9_breaking">potential breaking changes</a> (please review before upgrading) and an inter-broker protocol change from previous versions. This means that upgraded brokers and clients may not be compatible with older versions. It is important that you upgrade your Kafka cluster before upgrading your clients. If you are using MirrorMaker downstream clusters should be upgraded first as well.
 

-- 
To stop receiving notification emails like this one, please contact
guozhang@apache.org.