Posted to commits@kafka.apache.org by gw...@apache.org on 2016/04/30 01:50:26 UTC

[1/2] kafka-site git commit: Docs for 0.10.0.0, release candidate 3

Repository: kafka-site
Updated Branches:
  refs/heads/asf-site 87f504b46 -> 35b3bbb22


Docs for 0.10.0.0, release candidate 3


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/fb7c900a
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/fb7c900a
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/fb7c900a

Branch: refs/heads/asf-site
Commit: fb7c900a17524e3065702127e8b6ef03eac06009
Parents: 46fccbe
Author: Gwen Shapira <cs...@gmail.com>
Authored: Fri Apr 29 16:49:17 2016 -0700
Committer: Gwen Shapira <cs...@gmail.com>
Committed: Fri Apr 29 16:49:17 2016 -0700

----------------------------------------------------------------------
 0100/api.html                                   |   2 +-
 0100/configuration.html                         |   2 +-
 0100/connect.html                               |  60 +++++----
 0100/design.html                                |   2 +-
 0100/generated/connect_config.html              |   6 +-
 0100/generated/consumer_config.html             |   4 +-
 0100/generated/kafka_config.html                |   8 +-
 0100/generated/producer_config.html             |   4 +-
 0100/generated/protocol_api_keys.html           |   4 +
 0100/generated/protocol_errors.html             |   3 +
 0100/generated/protocol_messages.html           | 134 ++++++++++++++++++-
 0100/implementation.html                        |   4 +-
 0100/javadoc/allclasses-frame.html              |   4 +-
 0100/javadoc/allclasses-noframe.html            |   4 +-
 0100/javadoc/constant-values.html               |   4 +-
 0100/javadoc/deprecated-list.html               |   4 +-
 0100/javadoc/help-doc.html                      |   4 +-
 0100/javadoc/index-all.html                     |   4 +-
 0100/javadoc/index.html                         |   2 +-
 .../javaapi/consumer/ConsumerConnector.html     |   4 +-
 .../consumer/ConsumerRebalanceListener.html     |   4 +-
 .../kafka/javaapi/consumer/package-frame.html   |   4 +-
 .../kafka/javaapi/consumer/package-summary.html |   4 +-
 .../kafka/javaapi/consumer/package-tree.html    |   4 +-
 0100/javadoc/overview-tree.html                 |   4 +-
 0100/migration.html                             |   2 +-
 0100/ops.html                                   |  18 +--
 0100/streams.html                               |   2 +-
 0100/upgrade.html                               |   6 +-
 29 files changed, 238 insertions(+), 73 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/api.html
----------------------------------------------------------------------
diff --git a/0100/api.html b/0100/api.html
index d303244..8d5be9b 100644
--- a/0100/api.html
+++ b/0100/api.html
@@ -15,7 +15,7 @@
  limitations under the License.
 -->
 
-Apache Kafka includes new java clients (in the org.apache.kafka.clients package). These are meant to supplant the older Scala clients, but for compatability they will co-exist for some time. These clients are available in a separate jar with minimal dependencies, while the old Scala clients remain packaged with the server.
+Apache Kafka includes new java clients (in the org.apache.kafka.clients package). These are meant to supplant the older Scala clients, but for compatibility they will co-exist for some time. These clients are available in a separate jar with minimal dependencies, while the old Scala clients remain packaged with the server.
 
 <h3><a id="producerapi" href="#producerapi">2.1 Producer API</a></h3>
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/configuration.html
----------------------------------------------------------------------
diff --git a/0100/configuration.html b/0100/configuration.html
index e5280a5..f9bd1e4 100644
--- a/0100/configuration.html
+++ b/0100/configuration.html
@@ -207,7 +207,7 @@ The essential old consumer configurations are the following:
     <tr>
       <td>fetch.message.max.bytes</td>
       <td nowrap>1024 * 1024</td>
-      <td>The number of byes of messages to attempt to fetch for each topic-partition in each fetch request. These bytes will be read into memory for each partition, so this helps control the memory used by the consumer. The fetch request size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch.</td>
+      <td>The number of bytes of messages to attempt to fetch for each topic-partition in each fetch request. These bytes will be read into memory for each partition, so this helps control the memory used by the consumer. The fetch request size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch.</td>
     </tr>
      <tr>
       <td>num.consumer.fetchers</td>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/connect.html
----------------------------------------------------------------------
diff --git a/0100/connect.html b/0100/connect.html
index dc6ad6e..5cd4130 100644
--- a/0100/connect.html
+++ b/0100/connect.html
@@ -25,7 +25,7 @@ Kafka Connect features include:
     <li><b>Distributed and standalone modes</b> - scale up to a large, centrally managed service supporting an entire organization or scale down to development, testing, and small production deployments</li>
     <li><b>REST interface</b> - submit and manage connectors to your Kafka Connect cluster via an easy to use REST API</li>
     <li><b>Automatic offset management</b> - with just a little information from connectors, Kafka Connect can manage the offset commit process automatically so connector developers do not need to worry about this error prone part of connector development</li>
-    <li><b>Distributed and scalable by default</b> - Kafka Connect builds on the existing </li>
+    <li><b>Distributed and scalable by default</b> - Kafka Connect builds on the existing group management protocol. More workers can be added to scale up a Kafka Connect cluster.</li>
     <li><b>Streaming/batch integration</b> - leveraging Kafka's existing capabilities, Kafka Connect is an ideal solution for bridging streaming and batch data systems</li>
 </ul>
 
@@ -76,6 +76,8 @@ Most configurations are connector dependent, so they can't be outlined here. How
     <li><code>tasks.max</code> - The maximum number of tasks that should be created for this connector. The connector may create fewer tasks if it cannot achieve this level of parallelism.</li>
 </ul>
 
+The <code>connector.class</code> config supports several formats: the fully-qualified name of the connector class or an alias for it. For example, if the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify the full name or use FileStreamSink or FileStreamSinkConnector to make the configuration a bit shorter.
+
 Sink connectors also have one additional option to control their input:
 <ul>
     <li><code>topics</code> - A list of topics to use as input for this connector</li>
@@ -83,10 +85,9 @@ Sink connectors also have one additional option to control their input:
 
 For any other options, you should consult the documentation for the connector.
 
-
 <h4><a id="connect_rest" href="#connect_rest">REST API</a></h4>
 
-Since Kafka Connect is intended to be run as a service, it also supports a REST API for managing connectors. By default this service runs on port 8083. The following are the currently supported endpoints:
+Since Kafka Connect is intended to be run as a service, it also provides a REST API for managing connectors. By default this service runs on port 8083. The following are the currently supported endpoints:
 
 <ul>
     <li><code>GET /connectors</code> - return a list of active connectors</li>
@@ -98,6 +99,13 @@ Since Kafka Connect is intended to be run as a service, it also supports a REST
     <li><code>DELETE /connectors/{name}</code> - delete a connector, halting all tasks and deleting its configuration</li>
 </ul>
 
+Kafka Connect also provides a REST API for getting information about connector plugins:
+
+<ul>
+    <li><code>GET /connector-plugins</code> - return a list of connector plugins installed in the Kafka Connect cluster. Note that the API only checks for connectors on the worker that handles the request, which means you may see inconsistent results, especially during a rolling upgrade if you add new connector jars</li>
+    <li><code>PUT /connector-plugins/{connector-type}/config/validate</code> - validate the provided configuration values against the configuration definition. This API performs per-config validation, returning suggested values and error messages during validation.</li>
+</ul>
+
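For illustration, here is a minimal plain-Java sketch of calling the new plugin-listing endpoint. Only the endpoint path and the default port 8083 come from the docs above; the localhost host name and the use of java.net.HttpURLConnection are assumptions, not part of this commit.

<pre>
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ListConnectorPlugins {
    public static void main(String[] args) throws Exception {
        // Kafka Connect's REST API listens on port 8083 by default; the host is assumed.
        URL url = new URL("http://localhost:8083/connector-plugins");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null)
                System.out.println(line);  // JSON listing of the plugins installed on this worker
        }
        conn.disconnect();
    }
}
</pre>

As noted above, the response only reflects the plugins known to the worker that served the request.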
 <h3><a id="connect_development" href="#connect_development">8.3 Connector Development Guide</a></h3>
 
 This guide describes how developers can write new connectors for Kafka Connect to move data between Kafka and other systems. It briefly reviews a few key concepts and then describes how to create a simple connector.
@@ -108,7 +116,7 @@ This guide describes how developers can write new connectors for Kafka Connect t
 
 To copy data between Kafka and another system, users create a <code>Connector</code> for the system they want to pull data from or push data to. Connectors come in two flavors: <code>SourceConnectors</code> import data from another system (e.g. <code>JDBCSourceConnector</code> would import a relational database into Kafka) and <code>SinkConnectors</code> export data (e.g. <code>HDFSSinkConnector</code> would export the contents of a Kafka topic to an HDFS file).
 
-<code>Connectors</code> do not perform any data copying themselves: their configuration describes the data to be copied, and the <code>Connector</code> is responsible for breaking that job into a set of <code>Tasks</code> that can be distributed to workers. These <code>Tasks</code> also come in two corresponding flavors: <code>SourceTask</code>and <code>SinkTask</code>.
+<code>Connectors</code> do not perform any data copying themselves: their configuration describes the data to be copied, and the <code>Connector</code> is responsible for breaking that job into a set of <code>Tasks</code> that can be distributed to workers. These <code>Tasks</code> also come in two corresponding flavors: <code>SourceTask</code> and <code>SinkTask</code>.
 
 With an assignment in hand, each <code>Task</code> must copy its subset of the data to or from Kafka. In Kafka Connect, it should always be possible to frame these assignments as a set of input and output streams consisting of records with consistent schemas. Sometimes this mapping is obvious: each file in a set of log files can be considered a stream with each parsed line forming a record using the same schema and offsets stored as byte offsets in the file. In other cases it may require more effort to map to this model: a JDBC connector can map each table to a stream, but the offset is less clear. One possible mapping uses a timestamp column to generate queries incrementally returning new data, and the last queried timestamp can be used as the offset.
 
@@ -183,6 +191,9 @@ public List&lt;Map&lt;String, String&gt;&gt; getTaskConfigs(int maxTasks) {
 }
 </pre>
 
+Although not used in the example, <code>SourceTask</code> also provides two APIs to commit offsets in the source system: <code>commit</code> and <code>commitRecord</code>. The APIs are provided for source systems which have an acknowledgement mechanism for messages. Overriding these methods allows the source connector to acknowledge messages in the source system, either in bulk or individually, once they have been written to Kafka.
+The <code>commit</code> API stores the offsets in the source system, up to the offsets that have been returned by <code>poll</code>. The implementation of this API should block until the commit is complete. The <code>commitRecord</code> API saves the offset in the source system for each <code>SourceRecord</code> after it is written to Kafka. As Kafka Connect will record offsets automatically, <code>SourceTask</code>s are not required to implement these APIs. In cases where a connector does need to acknowledge messages in the source system, only one of the two is typically required.
+
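For illustration, a minimal sketch of what overriding these hooks might look like for an acknowledgement-based source system; <code>sourceClient</code> and its <code>ack</code> methods are hypothetical placeholders for whatever client the source system provides, not part of the Connect API.

<pre>
@Override
public void commit() throws InterruptedException {
    // Acknowledge, in bulk, everything returned by poll() so far;
    // block until the source system confirms the acknowledgement.
    sourceClient.ackAllDelivered();  // hypothetical client call
}

@Override
public void commitRecord(SourceRecord record) throws InterruptedException {
    // Acknowledge a single record once Kafka Connect has written it to Kafka.
    sourceClient.ack(record.sourceOffset());  // hypothetical client call
}
</pre>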
 Even with multiple tasks, this method implementation is usually pretty simple. It just has to determine the number of input tasks, which may require contacting the remote service it is pulling data from, and then divvy them up. Because some patterns for splitting work among tasks are so common, some utilities are provided in <code>ConnectorUtils</code> to simplify these cases.
 
 Note that this simple example does not include dynamic input. See the discussion in the next section for how to trigger updates to task configs.
@@ -242,11 +253,11 @@ public List&lt;SourceRecord&gt; poll() throws InterruptedException {
 
 Again, we've omitted some details, but we can see the important steps: the <code>poll()</code> method is going to be called repeatedly, and for each call it will loop trying to read records from the file. For each line it reads, it also tracks the file offset. It uses this information to create an output <code>SourceRecord</code> with four pieces of information: the source partition (there is only one, the single file being read), source offset (byte offset in the file), output topic name, and output value (the line, and we include a schema indicating this value will always be a string). Other variants of the <code>SourceRecord</code> constructor can also include a specific output partition and a key.
 
-Note that this implementation uses the normal Java <code>InputStream</code>interface and may sleep if data is not available. This is acceptable because Kafka Connect provides each task with a dedicated thread. While task implementations have to conform to the basic <code>poll()</code>interface, they have a lot of flexibility in how they are implemented. In this case, an NIO-based implementation would be more efficient, but this simple approach works, is quick to implement, and is compatible with older versions of Java.
+Note that this implementation uses the normal Java <code>InputStream</code> interface and may sleep if data is not available. This is acceptable because Kafka Connect provides each task with a dedicated thread. While task implementations have to conform to the basic <code>poll()</code> interface, they have a lot of flexibility in how they are implemented. In this case, an NIO-based implementation would be more efficient, but this simple approach works, is quick to implement, and is compatible with older versions of Java.
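For illustration, the <code>SourceRecord</code> described above could be assembled roughly as follows. This is only an indicative fragment; <code>filename</code>, <code>streamOffset</code>, <code>topic</code>, <code>line</code> and <code>records</code> stand in for the task's own state.

<pre>
Map&lt;String, String&gt; sourcePartition = Collections.singletonMap("filename", filename);
Map&lt;String, Long&gt; sourceOffset = Collections.singletonMap("position", streamOffset);
records.add(new SourceRecord(sourcePartition, sourceOffset, topic,
        Schema.STRING_SCHEMA, line));
</pre>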
 
 <h5><a id="connect_sinktasks" href="#connect_sinktasks">Sink Tasks</a></h5>
 
-The previous section described how to implement a simple <code>SourceTask</code>. Unlike <code>SourceConnector</code>and <code>SinkConnector</code>, <code>SourceTask</code>and <code>SinkTask</code>have very different interfaces because <code>SourceTask</code>uses a pull interface and <code>SinkTask</code>uses a push interface. Both share the common lifecycle methods, but the <code>SinkTask</code>interface is quite different:
+The previous section described how to implement a simple <code>SourceTask</code>. Unlike <code>SourceConnector</code> and <code>SinkConnector</code>, <code>SourceTask</code> and <code>SinkTask</code> have very different interfaces because <code>SourceTask</code> uses a pull interface and <code>SinkTask</code> uses a push interface. Both share the common lifecycle methods, but the <code>SinkTask</code> interface is quite different:
 
 <pre>
 public abstract class SinkTask implements Task {
@@ -257,17 +268,17 @@ public abstract void put(Collection&lt;SinkRecord&gt; records);
 public abstract void flush(Map&lt;TopicPartition, Long&gt; offsets);
 </pre>
 
-The <code>SinkTask</code> documentation contains full details, but this interface is nearly as simple as the the <code>SourceTask</code>. The <code>put()</code>method should contain most of the implementation, accepting sets of <code>SinkRecords</code>, performing any required translation, and storing them in the destination system. This method does not need to ensure the data has been fully written to the destination system before returning. In fact, in many cases internal buffering will be useful so an entire batch of records can be sent at once, reducing the overhead of inserting events into the downstream data store. The <code>SinkRecords</code>contain essentially the same information as <code>SourceRecords</code>: Kafka topic, partition, offset and the event key and value.
+The <code>SinkTask</code> documentation contains full details, but this interface is nearly as simple as the <code>SourceTask</code>. The <code>put()</code> method should contain most of the implementation, accepting sets of <code>SinkRecords</code>, performing any required translation, and storing them in the destination system. This method does not need to ensure the data has been fully written to the destination system before returning. In fact, in many cases internal buffering will be useful so an entire batch of records can be sent at once, reducing the overhead of inserting events into the downstream data store. The <code>SinkRecords</code> contain essentially the same information as <code>SourceRecords</code>: Kafka topic, partition, offset and the event key and value.
 
-The <code>flush()</code>method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The <code>offsets</code>parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once
-delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the <code>flush()</code>operation atomically commits the data and offsets to a final location in HDFS.
+The <code>flush()</code> method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The <code>offsets</code> parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once
+delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the <code>flush()</code> operation atomically commits the data and offsets to a final location in HDFS.
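For illustration, a minimal sketch of the <code>put()</code>/<code>flush()</code> contract described above, assuming a hypothetical buffering <code>destinationClient</code>; this is not taken from any shipping connector.

<pre>
@Override
public void put(Collection&lt;SinkRecord&gt; records) {
    // Buffer records; writing them to the destination immediately is not required.
    buffer.addAll(records);
}

@Override
public void flush(Map&lt;TopicPartition, Long&gt; offsets) {
    // Push everything buffered so far and block until the destination system
    // acknowledges the write. The offsets parameter can be ignored unless
    // offsets are also stored in the destination system.
    destinationClient.writeAll(buffer);  // hypothetical client call
    buffer.clear();
}
</pre>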
 
 
 <h5><a id="connect_resuming" href="#connect_resuming">Resuming from Previous Offsets</a></h5>
 
-The <code>SourceTask</code>implementation included a stream ID (the input filename) and offset (position in the file) with each record. The framework uses this to commit offsets periodically so that in the case of a failure, the task can recover and minimize the number of events that are reprocessed and possibly duplicated (or to resume from the most recent offset if Kafka Connect was stopped gracefully, e.g. in standalone mode or due to a job reconfiguration). This commit process is completely automated by the framework, but only the connector knows how to seek back to the right position in the input stream to resume from that location.
+The <code>SourceTask</code> implementation included a stream ID (the input filename) and offset (position in the file) with each record. The framework uses this to commit offsets periodically so that in the case of a failure, the task can recover and minimize the number of events that are reprocessed and possibly duplicated (or to resume from the most recent offset if Kafka Connect was stopped gracefully, e.g. in standalone mode or due to a job reconfiguration). This commit process is completely automated by the framework, but only the connector knows how to seek back to the right position in the input stream to resume from that location.
 
-To correctly resume upon startup, the task can use the <code>SourceContext</code>passed into its <code>initialize()</code>method to access the offset data. In <code>initialize()</code>, we would add a bit more code to read the offset (if it exists) and seek to that position:
+To correctly resume upon startup, the task can use the <code>SourceContext</code> passed into its <code>initialize()</code> method to access the offset data. In <code>initialize()</code>, we would add a bit more code to read the offset (if it exists) and seek to that position:
 
 <pre>
     stream = new FileInputStream(filename);
@@ -285,19 +296,18 @@ Of course, you might need to read many keys for each of the input streams. The <
 
 Kafka Connect is intended to define bulk data copying jobs, such as copying an entire database rather than creating many jobs to copy each table individually. One consequence of this design is that the set of input or output streams for a connector can vary over time.
 
-Source connectors need to monitor the source system for changes, e.g. table additions/deletions in a database. When they pick up changes, they should notify the framework via the <code>ConnectorContext</code>object that reconfiguration is necessary. For example, in a <code>SourceConnector</code>:
-
+Source connectors need to monitor the source system for changes, e.g. table additions/deletions in a database. When they pick up changes, they should notify the framework via the <code>ConnectorContext</code> object that reconfiguration is necessary. For example, in a <code>SourceConnector</code>:
 
 <pre>
 if (inputsChanged())
     this.context.requestTaskReconfiguration();
 </pre>
 
-The framework will promptly request new configuration information and update the tasks, allowing them to gracefully commit their progress before reconfiguring them. Note that in the <code>SourceConnector</code>this monitoring is currently left up to the connector implementation. If an extra thread is required to perform this monitoring, the connector must allocate it itself.
+The framework will promptly request new configuration information and update the tasks, allowing them to gracefully commit their progress before reconfiguring them. Note that in the <code>SourceConnector</code> this monitoring is currently left up to the connector implementation. If an extra thread is required to perform this monitoring, the connector must allocate it itself.
 
-Ideally this code for monitoring changes would be isolated to the <code>Connector</code>and tasks would not need to worry about them. However, changes can also affect tasks, most commonly when one of their input streams is destroyed in the input system, e.g. if a table is dropped from a database. If the <code>Task</code>encounters the issue before the <code>Connector</code>, which will be common if the <code>Connector</code>needs to poll for changes, the <code>Task</code>will need to handle the subsequent error. Thankfully, this can usually be handled simply by catching and handling the appropriate exception.
+Ideally this code for monitoring changes would be isolated to the <code>Connector</code> and tasks would not need to worry about them. However, changes can also affect tasks, most commonly when one of their input streams is destroyed in the input system, e.g. if a table is dropped from a database. If the <code>Task</code> encounters the issue before the <code>Connector</code>, which will be common if the <code>Connector</code> needs to poll for changes, the <code>Task</code> will need to handle the subsequent error. Thankfully, this can usually be handled simply by catching and handling the appropriate exception.
 
-<code>SinkConnectors</code> usually only have to handle the addition of streams, which may translate to new entries in their outputs (e.g., a new database table). The framework manages any changes to the Kafka input, such as when the set of input topics changes because of a regex subscription. <code>SinkTasks</code>should expect new input streams, which may require creating new resources in the downstream system, such as a new table in a database. The trickiest situation to handle in these cases may be conflicts between multiple <code>SinkTasks</code>seeing a new input stream for the first time and simultaneoulsy trying to create the new resource. <code>SinkConnectors</code>, on the other hand, will generally require no special code for handling a dynamic set of streams.
+<code>SinkConnectors</code> usually only have to handle the addition of streams, which may translate to new entries in their outputs (e.g., a new database table). The framework manages any changes to the Kafka input, such as when the set of input topics changes because of a regex subscription. <code>SinkTasks</code> should expect new input streams, which may require creating new resources in the downstream system, such as a new table in a database. The trickiest situation to handle in these cases may be conflicts between multiple <code>SinkTasks</code> seeing a new input stream for the first time and simultaneously trying to create the new resource. <code>SinkConnectors</code>, on the other hand, will generally require no special code for handling a dynamic set of streams.
 
 <h4><a id="connect_schemas" href="#connect_schemas">Working with Schemas</a></h4>
 
@@ -305,24 +315,24 @@ The FileStream connectors are good examples because they are simple, but they al
 
 To create more complex data, you'll need to work with the Kafka Connect <code>data</code> API. Most structured records will need to interact with two classes in addition to primitive types: <code>Schema</code> and <code>Struct</code>.
 
-The API documentation provides a complete reference, but here is a simple example creating a <code>Schema</code>and <code>Struct</code>:
+The API documentation provides a complete reference, but here is a simple example creating a <code>Schema</code> and <code>Struct</code>:
 
 <pre>
 Schema schema = SchemaBuilder.struct().name(NAME)
-                    .field("name", Schema.STRING_SCHEMA)
-                    .field("age", Schema.INT_SCHEMA)
-                    .field("admin", new SchemaBuilder.boolean().defaultValue(false).build())
-                    .build();
+    .field("name", Schema.STRING_SCHEMA)
+    .field("age", Schema.INT_SCHEMA)
+    .field("admin", new SchemaBuilder.boolean().defaultValue(false).build())
+    .build();
 
 Struct struct = new Struct(schema)
-                           .put("name", "Barbara Liskov")
-                           .put("age", 75)
-                           .build();
+    .put("name", "Barbara Liskov")
+    .put("age", 75)
+    .build();
 </pre>
 
 If you are implementing a source connector, you'll need to decide when and how to create schemas. Where possible, you should avoid recomputing them as much as possible. For example, if your connector is guaranteed to have a fixed schema, create it statically and reuse a single instance.
 
-However, many connectors will have dynamic schemas. One simple example of this is a database connector. Considering even just a single table, the schema will not be predefined for the entire connector (as it varies from table to table). But it also may not be fixed for a single table over the lifetime of the connector since the user may execute an <code>ALTER TABLE</code>command. The connector must be able to detect these changes and react appropriately.
+However, many connectors will have dynamic schemas. One simple example of this is a database connector. Considering even just a single table, the schema will not be predefined for the entire connector (as it varies from table to table). But it also may not be fixed for a single table over the lifetime of the connector since the user may execute an <code>ALTER TABLE</code> command. The connector must be able to detect these changes and react appropriately.
 
 Sink connectors are usually simpler because they are consuming data and therefore do not need to create schemas. However, they should take just as much care to validate that the schemas they receive have the expected format. When the schema does not match -- usually indicating the upstream producer is generating invalid data that cannot be correctly translated to the destination system -- sink connectors should throw an exception to indicate this error to the system.
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/design.html
----------------------------------------------------------------------
diff --git a/0100/design.html b/0100/design.html
index ad40431..a97a0ad 100644
--- a/0100/design.html
+++ b/0100/design.html
@@ -300,7 +300,7 @@ Log compaction is a mechanism to give finer-grained per-record retention, rather
 <p>
 This retention policy can be set per-topic, so a single cluster can have some topics where retention is enforced by size or time and other topics where retention is enforced by compaction.
 <p>
-This functionality is inspired by one of LinkedIn's oldest and most successful pieces of infrastructure&mdash;a database changelog caching service called <a href="https://github.com/linkedin/databus">Databus</a>. Unlike most log-structured storage systems Kafka is built for subscription and organizes data for fast linear reads and writes. Unlike Databus, Kafka acts a source-of-truth store so it is useful even in situations where the upstream data source would not otherwise be replayable.
+This functionality is inspired by one of LinkedIn's oldest and most successful pieces of infrastructure&mdash;a database changelog caching service called <a href="https://github.com/linkedin/databus">Databus</a>. Unlike most log-structured storage systems Kafka is built for subscription and organizes data for fast linear reads and writes. Unlike Databus, Kafka acts as a source-of-truth store so it is useful even in situations where the upstream data source would not otherwise be replayable.
 
 <h4><a id="design_compactionbasics" href="#design_compactionbasics">Log Compaction Basics</a></h4>
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/generated/connect_config.html
----------------------------------------------------------------------
diff --git a/0100/generated/connect_config.html b/0100/generated/connect_config.html
index 6a8e91b..e17301e 100644
--- a/0100/generated/connect_config.html
+++ b/0100/generated/connect_config.html
@@ -50,6 +50,8 @@
 <tr>
 <td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
+<td>sasl.mechanism</td><td>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</td><td>string</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
 <td>security.protocol</td><td>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
 <tr>
 <td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to use when sending data.</td><td>int</td><td>131072</td><td>[0,...]</td><td>medium</td></tr>
@@ -68,6 +70,8 @@
 <tr>
 <td>worker.unsync.backoff.ms</td><td>When the worker is out of sync with other workers and  fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining.</td><td>int</td><td>300000</td><td></td><td>medium</td></tr>
 <tr>
+<td>access.control.allow.methods</td><td>Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
 <td>access.control.allow.origin</td><td>Value to set the Access-Control-Allow-Origin header to for REST API requests.To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
 <tr>
 <td>client.id</td><td>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
@@ -94,7 +98,7 @@
 <tr>
 <td>rest.port</td><td>Port for the REST API to listen on.</td><td>int</td><td>8083</td><td></td><td>low</td></tr>
 <tr>
-<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to retry a failed fetch request to a given topic partition. This avoids repeated fetching-and-failing in a tight loop.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
 <tr>
 <td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
 <tr>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/generated/consumer_config.html
----------------------------------------------------------------------
diff --git a/0100/generated/consumer_config.html b/0100/generated/consumer_config.html
index b8d9f75..fe15645 100644
--- a/0100/generated/consumer_config.html
+++ b/0100/generated/consumer_config.html
@@ -52,6 +52,8 @@
 <tr>
 <td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
+<td>sasl.mechanism</td><td>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</td><td>string</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
 <td>security.protocol</td><td>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
 <tr>
 <td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to use when sending data.</td><td>int</td><td>131072</td><td>[0,...]</td><td>medium</td></tr>
@@ -86,7 +88,7 @@
 <tr>
 <td>reconnect.backoff.ms</td><td>The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
 <tr>
-<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to retry a failed fetch request to a given topic partition. This avoids repeated fetching-and-failing in a tight loop.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
 <tr>
 <td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
 <tr>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/generated/kafka_config.html
----------------------------------------------------------------------
diff --git a/0100/generated/kafka_config.html b/0100/generated/kafka_config.html
index 3858203..9caf73b 100644
--- a/0100/generated/kafka_config.html
+++ b/0100/generated/kafka_config.html
@@ -118,7 +118,7 @@ the port to listen and accept connections on</td><td>int</td><td>9092</td><td></
 <tr>
 <td>quota.producer.default</td><td>Any producer distinguished by clientId will get throttled if it produces more bytes than this value per-second</td><td>long</td><td>9223372036854775807</td><td>[1,...]</td><td>high</td></tr>
 <tr>
-<td>replica.fetch.max.bytes</td><td>The number of byes of messages to attempt to fetch</td><td>int</td><td>1048576</td><td></td><td>high</td></tr>
+<td>replica.fetch.max.bytes</td><td>The number of bytes of messages to attempt to fetch</td><td>int</td><td>1048576</td><td></td><td>high</td></tr>
 <tr>
 <td>replica.fetch.min.bytes</td><td>Minimum bytes expected for each fetch response. If not enough bytes, wait up to replicaMaxWaitTimeMs</td><td>int</td><td>1</td><td></td><td>high</td></tr>
 <tr>
@@ -166,7 +166,7 @@ the port to listen and accept connections on</td><td>int</td><td>9092</td><td></
 <tr>
 <td>fetch.purgatory.purge.interval.requests</td><td>The purge interval (in number of requests) of the fetch request purgatory</td><td>int</td><td>1000</td><td></td><td>medium</td></tr>
 <tr>
-<td>group.max.session.timeout.ms</td><td>The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.</td><td>int</td><td>30000</td><td></td><td>medium</td></tr>
+<td>group.max.session.timeout.ms</td><td>The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.</td><td>int</td><td>300000</td><td></td><td>medium</td></tr>
 <tr>
 <td>group.min.session.timeout.ms</td><td>The minimum allowed session timeout for registered consumers. Shorter timeouts leader to quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.</td><td>int</td><td>6000</td><td></td><td>medium</td></tr>
 <tr>
@@ -222,6 +222,8 @@ the port to listen and accept connections on</td><td>int</td><td>9092</td><td></
 <tr>
 <td>reserved.broker.max.id</td><td>Max number that can be used for a broker.id</td><td>int</td><td>1000</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
+<td>sasl.enabled.mechanisms</td><td>The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default.</td><td>list</td><td>[GSSAPI]</td><td></td><td>medium</td></tr>
+<tr>
 <td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>medium</td></tr>
 <tr>
 <td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time between refresh attempts.</td><td>long</td><td>60000</td><td></td><td>medium</td></tr>
@@ -234,6 +236,8 @@ the port to listen and accept connections on</td><td>int</td><td>9092</td><td></
 <tr>
 <td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</td><td>double</td><td>0.8</td><td></td><td>medium</td></tr>
 <tr>
+<td>sasl.mechanism.inter.broker.protocol</td><td>SASL mechanism used for inter-broker communication. Default is GSSAPI.</td><td>string</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
 <td>security.inter.broker.protocol</td><td>Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
 <tr>
 <td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.By default all the available cipher suites are supported.</td><td>list</td><td>null</td><td></td><td>medium</td></tr>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/generated/producer_config.html
----------------------------------------------------------------------
diff --git a/0100/generated/producer_config.html b/0100/generated/producer_config.html
index 9735813..2a19fa7 100644
--- a/0100/generated/producer_config.html
+++ b/0100/generated/producer_config.html
@@ -52,6 +52,8 @@
 <tr>
 <td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
 <tr>
+<td>sasl.mechanism</td><td>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</td><td>string</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
 <td>security.protocol</td><td>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
 <tr>
 <td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to use when sending data.</td><td>int</td><td>131072</td><td>[0,...]</td><td>medium</td></tr>
@@ -86,7 +88,7 @@
 <tr>
 <td>reconnect.backoff.ms</td><td>The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
 <tr>
-<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to retry a failed fetch request to a given topic partition. This avoids repeated fetching-and-failing in a tight loop.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
 <tr>
 <td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
 <tr>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/generated/protocol_api_keys.html
----------------------------------------------------------------------
diff --git a/0100/generated/protocol_api_keys.html b/0100/generated/protocol_api_keys.html
index 6d4d827..4be5b40 100644
--- a/0100/generated/protocol_api_keys.html
+++ b/0100/generated/protocol_api_keys.html
@@ -35,5 +35,9 @@
 <td>DescribeGroups</td><td>15</td></tr>
 <tr>
 <td>ListGroups</td><td>16</td></tr>
+<tr>
+<td>SaslHandshake</td><td>17</td></tr>
+<tr>
+<td>ApiVersions</td><td>18</td></tr>
 </table>
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/generated/protocol_errors.html
----------------------------------------------------------------------
diff --git a/0100/generated/protocol_errors.html b/0100/generated/protocol_errors.html
index f02670d..61c6f84 100644
--- a/0100/generated/protocol_errors.html
+++ b/0100/generated/protocol_errors.html
@@ -38,5 +38,8 @@
 <tr><td>GROUP_AUTHORIZATION_FAILED</td><td>30</td><td>False</td><td>Not authorized to access group: Group authorization failed.</td></tr>
 <tr><td>CLUSTER_AUTHORIZATION_FAILED</td><td>31</td><td>False</td><td>Cluster authorization failed.</td></tr>
 <tr><td>INVALID_TIMESTAMP</td><td>32</td><td>False</td><td>The timestamp of the message is out of acceptable range.</td></tr>
+<tr><td>UNSUPPORTED_SASL_MECHANISM</td><td>33</td><td>False</td><td>The broker does not support the requested SASL mechanism.</td></tr>
+<tr><td>ILLEGAL_SASL_STATE</td><td>34</td><td>False</td><td>Request is not valid given the current SASL state.</td></tr>
+<tr><td>UNSUPPORTED_VERSION</td><td>35</td><td>False</td><td>The version of the API is not supported.</td></tr>
 </table>
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/generated/protocol_messages.html
----------------------------------------------------------------------
diff --git a/0100/generated/protocol_messages.html b/0100/generated/protocol_messages.html
index df9baa3..166fa2e 100644
--- a/0100/generated/protocol_messages.html
+++ b/0100/generated/protocol_messages.html
@@ -419,6 +419,7 @@
     partition_responses => partition error_code [offsets] 
       partition => INT32
       error_code => INT16
+      offsets => INT64
 </pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -440,6 +441,7 @@
 
 <b>Requests:</b><br>
 <p><pre>Metadata Request (Version: 0) => [topics] 
+  topics => STRING
 </pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -447,6 +449,15 @@
 <td>topics</td><td>An array of topics to fetch metadata for. If no topics are specified fetch metadata for all topics.</td></tr>
 </table>
 </p>
+<p><pre>Metadata Request (Version: 1) => [topics] 
+  topics => STRING
+</pre><table class="data-table"><tbody>
+<tr><th>Field</th>
+<th>Description</th>
+</tr><tr>
+<td>topics</td><td>An array of topics to fetch metadata for. If the topics array is null fetch metadata for all topics.</td></tr>
+</table>
+</p>
 <b>Responses:</b><br>
 <p><pre>Metadata Response (Version: 0) => [brokers] [topic_metadata] 
   brokers => node_id host port 
@@ -460,6 +471,8 @@
       partition_error_code => INT16
       partition_id => INT32
       leader => INT32
+      replicas => INT32
+      isr => INT32
 </pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -491,6 +504,60 @@
 <td>isr</td><td>The set of nodes that are in sync with the leader for this partition.</td></tr>
 </table>
 </p>
+<p><pre>Metadata Response (Version: 1) => [brokers] controller_id [topic_metadata] 
+  brokers => node_id host port rack 
+    node_id => INT32
+    host => STRING
+    port => INT32
+    rack => NULLABLE_STRING
+  controller_id => INT32
+  topic_metadata => topic_error_code topic is_internal [partition_metadata] 
+    topic_error_code => INT16
+    topic => STRING
+    is_internal => BOOLEAN
+    partition_metadata => partition_error_code partition_id leader [replicas] [isr] 
+      partition_error_code => INT16
+      partition_id => INT32
+      leader => INT32
+      replicas => INT32
+      isr => INT32
+</pre><table class="data-table"><tbody>
+<tr><th>Field</th>
+<th>Description</th>
+</tr><tr>
+<td>brokers</td><td>Host and port information for all brokers.</td></tr>
+<tr>
+<td>node_id</td><td>The broker id.</td></tr>
+<tr>
+<td>host</td><td>The hostname of the broker.</td></tr>
+<tr>
+<td>port</td><td>The port on which the broker accepts requests.</td></tr>
+<tr>
+<td>rack</td><td>The rack of the broker.</td></tr>
+<tr>
+<td>controller_id</td><td>The broker id of the controller broker.</td></tr>
+<tr>
+<td>topic_metadata</td><td></td></tr>
+<tr>
+<td>topic_error_code</td><td>The error code for the given topic.</td></tr>
+<tr>
+<td>topic</td><td>The name of the topic</td></tr>
+<tr>
+<td>is_internal</td><td>Indicates if the topic is considered a Kafka internal topic</td></tr>
+<tr>
+<td>partition_metadata</td><td>Metadata for each partition of the topic.</td></tr>
+<tr>
+<td>partition_error_code</td><td>The error code for the partition, if any.</td></tr>
+<tr>
+<td>partition_id</td><td>The id of the partition.</td></tr>
+<tr>
+<td>leader</td><td>The id of the broker acting as leader for this partition.</td></tr>
+<tr>
+<td>replicas</td><td>The set of all nodes that host this partition.</td></tr>
+<tr>
+<td>isr</td><td>The set of nodes that are in sync with the leader for this partition.</td></tr>
+</table>
+</p>
 <h5>LeaderAndIsr API (Key: 4):</h5>
 
 <b>Requests:</b><br>
@@ -503,7 +570,9 @@
     controller_epoch => INT32
     leader => INT32
     leader_epoch => INT32
+    isr => INT32
     zk_version => INT32
+    replicas => INT32
   live_leaders => id host port 
     id => INT32
     host => STRING
@@ -626,7 +695,9 @@
     controller_epoch => INT32
     leader => INT32
     leader_epoch => INT32
+    isr => INT32
     zk_version => INT32
+    replicas => INT32
   live_brokers => id host port 
     id => INT32
     host => STRING
@@ -675,7 +746,9 @@
     controller_epoch => INT32
     leader => INT32
     leader_epoch => INT32
+    isr => INT32
     zk_version => INT32
+    replicas => INT32
   live_brokers => id [end_points] 
     id => INT32
     end_points => port host security_protocol_type 
@@ -730,7 +803,9 @@
     controller_epoch => INT32
     leader => INT32
     leader_epoch => INT32
+    isr => INT32
     zk_version => INT32
+    replicas => INT32
   live_brokers => id [end_points] rack 
     id => INT32
     end_points => port host security_protocol_type 
@@ -1298,6 +1373,7 @@
 
 <b>Requests:</b><br>
 <p><pre>DescribeGroups Request (Version: 0) => [group_ids] 
+  group_ids => STRING
 </pre><table class="data-table"><tbody>
 <tr><th>Field</th>
 <th>Description</th>
@@ -1331,7 +1407,7 @@
 <tr>
 <td>state</td><td>The current state of the group (one of: Dead, Stable, AwaitingSync, or PreparingRebalance, or empty if there is no active group)</td></tr>
 <tr>
-<td>protocol_type</td><td>The current group protocol type (will be empty if the there is no active group)</td></tr>
+<td>protocol_type</td><td>The current group protocol type (will be empty if there is no active group)</td></tr>
 <tr>
 <td>protocol</td><td>The current group protocol (only provided if the group is Stable)</td></tr>
 <tr>
@@ -1376,4 +1452,60 @@
 <td>protocol_type</td><td></td></tr>
 </table>
 </p>
+<h5>SaslHandshake API (Key: 17):</h5>
+
+<b>Requests:</b><br>
+<p><pre>SaslHandshake Request (Version: 0) => mechanism 
+  mechanism => STRING
+</pre><table class="data-table"><tbody>
+<tr><th>Field</th>
+<th>Description</th>
+</tr><tr>
+<td>mechanism</td><td>SASL Mechanism chosen by the client.</td></tr>
+</table>
+</p>
+<b>Responses:</b><br>
+<p><pre>SaslHandshake Response (Version: 0) => error_code [enabled_mechanisms] 
+  error_code => INT16
+  enabled_mechanisms => STRING
+</pre><table class="data-table"><tbody>
+<tr><th>Field</th>
+<th>Description</th>
+</tr><tr>
+<td>error_code</td><td></td></tr>
+<tr>
+<td>enabled_mechanisms</td><td>Array of mechanisms enabled in the server.</td></tr>
+</table>
+</p>
+<h5>ApiVersions API (Key: 18):</h5>
+
+<b>Requests:</b><br>
+<p><pre>ApiVersions Request (Version: 0) => 
+</pre><table class="data-table"><tbody>
+<tr><th>Field</th>
+<th>Description</th>
+</tr></table>
+</p>
+<b>Responses:</b><br>
+<p><pre>ApiVersions Response (Version: 0) => error_code [api_versions] 
+  error_code => INT16
+  api_versions => api_key min_version max_version 
+    api_key => INT16
+    min_version => INT16
+    max_version => INT16
+</pre><table class="data-table"><tbody>
+<tr><th>Field</th>
+<th>Description</th>
+</tr><tr>
+<td>error_code</td><td>Error code.</td></tr>
+<tr>
+<td>api_versions</td><td>API versions supported by the broker.</td></tr>
+<tr>
+<td>api_key</td><td>API key.</td></tr>
+<tr>
+<td>min_version</td><td>Minimum supported version.</td></tr>
+<tr>
+<td>max_version</td><td>Maximum supported version.</td></tr>
+</table>
+</p>
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/implementation.html
----------------------------------------------------------------------
diff --git a/0100/implementation.html b/0100/implementation.html
index ecd99e7..be81227 100644
--- a/0100/implementation.html
+++ b/0100/implementation.html
@@ -90,7 +90,7 @@ class SimpleConsumer {
    * Get a list of valid offsets (up to maxSize) before the given time.
    * The result is a list of offsets, in descending order.
    * @param time: time in millisecs,
-   *              if set to OffsetRequest$.MODULE$.LATIEST_TIME(), get from the latest offset available.
+   *              if set to OffsetRequest$.MODULE$.LATEST_TIME(), get from the latest offset available.
    *              if set to OffsetRequest$.MODULE$.EARLIEST_TIME(), get from the earliest offset available.
    */
   public long[] getOffsetsBefore(String topic, int partition, long time, int maxNumOffsets);
@@ -292,7 +292,7 @@ Since the broker registers itself in ZooKeeper using ephemeral znodes, this regi
 </p>
 <h4><a id="impl_zktopic" href="#impl_zktopic">Broker Topic Registry</a></h4>
 <pre>
-/brokers/topics/[topic]/[0...N] --> nPartions (ephemeral node)
+/brokers/topics/[topic]/[0...N] --> nPartitions (ephemeral node)
 </pre>
 
 <p>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/allclasses-frame.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/allclasses-frame.html b/0100/javadoc/allclasses-frame.html
index a35164d..57bbe7f 100644
--- a/0100/javadoc/allclasses-frame.html
+++ b/0100/javadoc/allclasses-frame.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>All Classes (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/allclasses-noframe.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/allclasses-noframe.html b/0100/javadoc/allclasses-noframe.html
index 73432c5..81e6038 100644
--- a/0100/javadoc/allclasses-noframe.html
+++ b/0100/javadoc/allclasses-noframe.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>All Classes (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/constant-values.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/constant-values.html b/0100/javadoc/constant-values.html
index 526f48d..3dc0a9c 100644
--- a/0100/javadoc/constant-values.html
+++ b/0100/javadoc/constant-values.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>Constant Field Values (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/deprecated-list.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/deprecated-list.html b/0100/javadoc/deprecated-list.html
index d7a8140..3a13a71 100644
--- a/0100/javadoc/deprecated-list.html
+++ b/0100/javadoc/deprecated-list.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>Deprecated List (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/help-doc.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/help-doc.html b/0100/javadoc/help-doc.html
index 3c44cf7..88ffc11 100644
--- a/0100/javadoc/help-doc.html
+++ b/0100/javadoc/help-doc.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>API Help (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/index-all.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/index-all.html b/0100/javadoc/index-all.html
index 5847195..d57397c 100644
--- a/0100/javadoc/index-all.html
+++ b/0100/javadoc/index-all.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>Index (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="./stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/index.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/index.html b/0100/javadoc/index.html
index 1ce32aa..0ad29b6 100644
--- a/0100/javadoc/index.html
+++ b/0100/javadoc/index.html
@@ -2,7 +2,7 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>core 0.10.0.0 API</title>
 <script type="text/javascript">
     targetPage = "" + window.location.search;

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/kafka/javaapi/consumer/ConsumerConnector.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/kafka/javaapi/consumer/ConsumerConnector.html b/0100/javadoc/kafka/javaapi/consumer/ConsumerConnector.html
index bfb155d..dbf5ff3 100644
--- a/0100/javadoc/kafka/javaapi/consumer/ConsumerConnector.html
+++ b/0100/javadoc/kafka/javaapi/consumer/ConsumerConnector.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>ConsumerConnector (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="../../../stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/kafka/javaapi/consumer/ConsumerRebalanceListener.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/kafka/javaapi/consumer/ConsumerRebalanceListener.html b/0100/javadoc/kafka/javaapi/consumer/ConsumerRebalanceListener.html
index 3117924..9bd27e6 100644
--- a/0100/javadoc/kafka/javaapi/consumer/ConsumerRebalanceListener.html
+++ b/0100/javadoc/kafka/javaapi/consumer/ConsumerRebalanceListener.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>ConsumerRebalanceListener (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="../../../stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/kafka/javaapi/consumer/package-frame.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/kafka/javaapi/consumer/package-frame.html b/0100/javadoc/kafka/javaapi/consumer/package-frame.html
index 232a00d..2bde005 100644
--- a/0100/javadoc/kafka/javaapi/consumer/package-frame.html
+++ b/0100/javadoc/kafka/javaapi/consumer/package-frame.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>kafka.javaapi.consumer (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="../../../stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/kafka/javaapi/consumer/package-summary.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/kafka/javaapi/consumer/package-summary.html b/0100/javadoc/kafka/javaapi/consumer/package-summary.html
index 4b5539a..6f665b5 100644
--- a/0100/javadoc/kafka/javaapi/consumer/package-summary.html
+++ b/0100/javadoc/kafka/javaapi/consumer/package-summary.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>kafka.javaapi.consumer (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="../../../stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/kafka/javaapi/consumer/package-tree.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/kafka/javaapi/consumer/package-tree.html b/0100/javadoc/kafka/javaapi/consumer/package-tree.html
index c1642a0..36f919a 100644
--- a/0100/javadoc/kafka/javaapi/consumer/package-tree.html
+++ b/0100/javadoc/kafka/javaapi/consumer/package-tree.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>kafka.javaapi.consumer Class Hierarchy (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="../../../stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/javadoc/overview-tree.html
----------------------------------------------------------------------
diff --git a/0100/javadoc/overview-tree.html b/0100/javadoc/overview-tree.html
index 762a6e6..5e842ac 100644
--- a/0100/javadoc/overview-tree.html
+++ b/0100/javadoc/overview-tree.html
@@ -2,9 +2,9 @@
 <!-- NewPage -->
 <html lang="en">
 <head>
-<!-- Generated by javadoc (version 1.7.0_79) on Mon Mar 28 13:06:16 PDT 2016 -->
+<!-- Generated by javadoc (version 1.7.0_79) on Fri Apr 29 16:30:04 PDT 2016 -->
 <title>Class Hierarchy (core 0.10.0.0 API)</title>
-<meta name="date" content="2016-03-28">
+<meta name="date" content="2016-04-29">
 <link rel="stylesheet" type="text/css" href="stylesheet.css" title="Style">
 </head>
 <body>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/migration.html
----------------------------------------------------------------------
diff --git a/0100/migration.html b/0100/migration.html
index 2da6a7e..5240d86 100644
--- a/0100/migration.html
+++ b/0100/migration.html
@@ -27,7 +27,7 @@
     <li>Use the 0.7 to 0.8 <a href="tools.html">migration tool</a> to mirror data from the 0.7 cluster into the 0.8 cluster.
     <li>When the 0.8 cluster is fully caught up, redeploy all data <i>consumers</i> running the 0.8 client and reading from the 0.8 cluster.
     <li>Finally migrate all 0.7 producers to 0.8 client publishing data to the 0.8 cluster.
-    <li>Decomission the 0.7 cluster.
+    <li>Decommission the 0.7 cluster.
     <li>Drink.
 </ol>
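
[Editor's note] For the mirroring step in the list above, the 0.7-to-0.8 migration tool is typically invoked along these lines. This is only a sketch: the flag names and jar/file names are recalled from the 0.8 tools documentation and should be treated as assumptions to verify against the linked tools page.

  > bin/kafka-run-class.sh kafka.tools.KafkaMigrationTool \
      --kafka.07.jar kafka-0.7.2.jar --zkclient.01.jar zkclient-0.1.jar \
      --num.producers 16 \
      --consumer.config sourceCluster07Consumer.properties \
      --producer.config targetCluster08Producer.properties \
      --whitelist '.*'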
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/ops.html
----------------------------------------------------------------------
diff --git a/0100/ops.html b/0100/ops.html
index 541a01d..8b1cc23 100644
--- a/0100/ops.html
+++ b/0100/ops.html
@@ -70,7 +70,7 @@ Instructions for changing the replication factor of a topic can be found <a href
 
 <h4><a id="basic_ops_restarting" href="#basic_ops_restarting">Graceful shutdown</a></h4>
 
-The Kafka cluster will automatically detect any broker shutdown or failure and elect new leaders for the partitions on that machine. This will occur whether a server fails or it is brought down intentionally for maintenance or configuration changes. For the latter cases Kafka supports a more graceful mechanism for stoping a server than just killing it.
+The Kafka cluster will automatically detect any broker shutdown or failure and elect new leaders for the partitions on that machine. This will occur whether a server fails or it is brought down intentionally for maintenance or configuration changes. For the latter cases Kafka supports a more graceful mechanism for stopping a server than just killing it.
 
 When a server is stopped gracefully it has two optimizations it will take advantage of:
 <ol>
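
[Editor's note] As a concrete hook for the graceful-shutdown behaviour described above, controlled shutdown is governed by broker settings such as the following. A minimal server.properties sketch; the values shown are the usual defaults, not a tuning recommendation.

  controlled.shutdown.enable=true            # migrate partition leaders off the broker before it exits
  controlled.shutdown.max.retries=3          # how many times to retry the controlled shutdown
  controlled.shutdown.retry.backoff.ms=5000  # pause between retries, in milliseconds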
@@ -138,7 +138,7 @@ Note, however, after 0.9.0, the kafka.tools.ConsumerOffsetChecker tool is deprec
 
 <h4><a id="basic_ops_consumer_group" href="#basic_ops_consumer_group">Managing Consumer Groups</a></h4>
 
-With the ConumserGroupCommand tool, we can list, delete, or describe consumer groups. For example, to list all consumer groups across all topics:
+With the ConsumerGroupCommand tool, we can list, delete, or describe consumer groups. For example, to list all consumer groups across all topics:
 
 <pre>
  &gt; bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
@@ -156,7 +156,7 @@ test-consumer-group            test-foo                       0          1
 </pre>
 
 
-When youre using the <a href="https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design">new consumer-groups API</a> where the broker handles coordination of partition handling and rebalance, you can manage the groups with the "--new-consumer" flags:
+When you're using the <a href="https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design">new consumer-groups API</a> where the broker handles coordination of partition handling and rebalance, you can manage the groups with the "--new-consumer" flags:
 
 <pre>
  &gt; bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server broker1:9092 --list
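
[Editor's note] A companion to the listing command above: describing a single group with the new-consumer flag looks like the following (the group name is illustrative).

  > bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server broker1:9092 --describe --group test-consumer-group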
@@ -934,17 +934,17 @@ The final alerting we do is on the correctness of the data delivery. We audit th
 <h3><a id="zk" href="#zk">6.7 ZooKeeper</a></h3>
 
 <h4><a id="zkversion" href="#zkversion">Stable version</a></h4>
-At LinkedIn, we are running ZooKeeper 3.3.*. Version 3.3.3 has known serious issues regarding ephemeral node deletion and session expirations. After running into those issues in production, we upgraded to 3.3.4 and have been running that smoothly for over a year now.
+The current stable branch is 3.4 and the latest release of that branch is 3.4.6, which is the one ZkClient 0.7 uses. ZkClient is the client layer Kafka uses to interact with ZooKeeper.
 
 <h4><a id="zkops" href="#zkops">Operationalizing ZooKeeper</a></h4>
 Operationally, we do the following for a healthy ZooKeeper installation:
 <ul>
-  <li>Redundancy in the physical/hardware/network layout: try not to put them all in the same rack, decent (but don't go nuts) hardware, try to keep redundant power and network paths, etc.</li>
-  <li>I/O segregation: if you do a lot of write type traffic you'll almost definitely want the transaction logs on a different disk group than application logs and snapshots (the write to the ZooKeeper service has a synchronous write to disk, which can be slow).</li>
+  <li>Redundancy in the physical/hardware/network layout: try not to put them all in the same rack, decent (but don't go nuts) hardware, try to keep redundant power and network paths, etc. A typical ZooKeeper ensemble has 5 or 7 servers, which tolerates 2 and 3 servers down, respectively. If you have a small deployment, then using 3 servers is acceptable, but keep in mind that you'll only be able to tolerate 1 server down in this case. </li>
+  <li>I/O segregation: if you do a lot of write-type traffic you'll almost definitely want the transaction logs on a dedicated disk group. Writes to the transaction log are synchronous (but batched for performance), and consequently, concurrent writes can significantly affect performance. ZooKeeper snapshots can be one such source of concurrent writes, and ideally should be written on a disk group separate from the transaction log. Snapshots are written to disk asynchronously, so it is typically ok to share with the operating system and message log files. You can configure a server to use a separate disk group with the dataLogDir parameter (see the configuration sketch below).</li>
   <li>Application segregation: Unless you really understand the application patterns of other apps that you want to install on the same box, it can be a good idea to run ZooKeeper in isolation (though this can be a balancing act with the capabilities of the hardware).</li>
   <li>Use care with virtualization: It can work, depending on your cluster layout and read/write patterns and SLAs, but the tiny overheads introduced by the virtualization layer can add up and throw off ZooKeeper, as it can be very time sensitive</li>
-  <li>ZooKeeper configuration and monitoring: It's java, make sure you give it 'enough' heap space (We usually run them with 3-5G, but that's mostly due to the data set size we have here). Unfortunately we don't have a good formula for it. As far as monitoring, both JMX and the 4 letter words (4lw) commands are very useful, they do overlap in some cases (and in those cases we prefer the 4 letter commands, they seem more predictable, or at the very least, they work better with the LI monitoring infrastructure)</li>
-  <li>Don't overbuild the cluster: large clusters, especially in a write heavy usage pattern, means a lot of intracluster communication (quorums on the writes and subsequent cluster member updates), but don't underbuild it (and risk swamping the cluster).</li>
-  <li>Try to run on a 3-5 node cluster: ZooKeeper writes use quorums and inherently that means having an odd number of machines in a cluster. Remember that a 5 node cluster will cause writes to slow down compared to a 3 node cluster, but will allow more fault tolerance.</li>
+  <li>ZooKeeper configuration: It's Java, so make sure you give it 'enough' heap space (we usually run them with 3-5G, but that's mostly due to the data set size we have here). Unfortunately we don't have a good formula for it, but keep in mind that allowing for more ZooKeeper state means that snapshots can become large, and large snapshots affect recovery time. In fact, if the snapshot becomes too large (a few gigabytes), then you may need to increase the initLimit parameter to give the servers enough time to recover and join the ensemble.</li>
+  <li>Monitoring: Both JMX and the 4 letter words (4lw) commands are very useful; they do overlap in some cases (and in those cases we prefer the 4 letter commands, as they seem more predictable, or at the very least work better with the LI monitoring infrastructure).</li>
+  <li>Don't overbuild the cluster: large clusters, especially in a write-heavy usage pattern, mean a lot of intracluster communication (quorums on the writes and subsequent cluster member updates), but don't underbuild it (and risk swamping the cluster). Having more servers adds to your read capacity.</li>
 </ul>
 Overall, we try to keep the ZooKeeper system as small as will handle the load (plus standard growth capacity planning) and as simple as possible. We try not to do anything fancy with the configuration or application layout as compared to the official release as well as keep it as self contained as possible. For these reasons, we tend to skip the OS packaged versions, since it has a tendency to try to put things in the OS standard hierarchy, which can be 'messy', for want of a better way to word it.
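
[Editor's note] The dataLogDir and initLimit parameters mentioned in the list above live in the ZooKeeper configuration file. A minimal zoo.cfg sketch; all paths, hostnames, and sizes are illustrative assumptions.

  # zoo.cfg (illustrative values)
  tickTime=2000
  initLimit=10                               # increase if large snapshots slow recovery, as noted above
  syncLimit=5
  dataDir=/var/lib/zookeeper/snapshots       # snapshot directory (assumed path)
  dataLogDir=/var/lib/zookeeper/txlog        # transaction log on a dedicated disk group
  server.1=zk1:2888:3888
  server.2=zk2:2888:3888
  server.3=zk3:2888:3888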

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/streams.html
----------------------------------------------------------------------
diff --git a/0100/streams.html b/0100/streams.html
index 9b94bb3..91fda36 100644
--- a/0100/streams.html
+++ b/0100/streams.html
@@ -64,7 +64,7 @@ developers define and connect custom processors as well as to interact with <a h
 <h5><a id="streams_time" href="#streams_time">Time</a></h5>
 
 <p>
-A critical aspect in stream processing is the the notion of <b>time</b>, and how it is modeled and integrated.
+A critical aspect in stream processing is the notion of <b>time</b>, and how it is modeled and integrated.
 For example, some operations such as <b>windowing</b> are defined based on time boundaries.
 </p>
 <p>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/fb7c900a/0100/upgrade.html
----------------------------------------------------------------------
diff --git a/0100/upgrade.html b/0100/upgrade.html
index 060c3de..b9c4bec 100644
--- a/0100/upgrade.html
+++ b/0100/upgrade.html
@@ -79,12 +79,16 @@ work with 0.10.0.x brokers. Therefore, 0.9.0.0 clients should be upgraded to 0.9
     <li> MessageReader's package was changed from <code>kafka.tools</code> to <code>kafka.common</code> </li>
     <li> MirrorMakerMessageHandler no longer exposes the <code>handle(record: MessageAndMetadata[Array[Byte], Array[Byte]])</code> method as it was never called. </li>
     <li> The 0.7 KafkaMigrationTool is no longer packaged with Kafka. If you need to migrate from 0.7 to 0.10.0, please migrate to 0.8 first and then follow the documented upgrade process to upgrade from 0.8 to 0.10.0. </li>
+    <li> The new consumer has standardized its APIs to accept <code>java.util.Collection</code> as the sequence type for method parameters. Existing code may have to be updated to work with the 0.10.0 client library. </li>
 </ul>
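
[Editor's note] To illustrate the java.util.Collection point in the list above, a minimal sketch against the 0.10.0 new-consumer API; the broker address, group id, and topic name are placeholders.

  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.KafkaConsumer;

  public class SubscribeExample {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "broker1:9092");   // placeholder broker address
          props.put("group.id", "test-consumer-group");     // placeholder group id
          props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
          props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
              // In 0.10.0 subscribe() accepts java.util.Collection<String>,
              // so a single topic is wrapped rather than passed as a List-specific type.
              consumer.subscribe(Collections.singletonList("my-topic"));
          }
      }
  }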
 
 <h5><a id="upgrade_10_notable" href="#upgrade_10_notable">Notable changes in 0.10.0.0</a></h5>
 
 <ul>
-    <li> The default value of the configuration parameter <code>receive.buffer.bytes</code> is now 64K for the new consumer </li>
+    <li> The default value of the configuration parameter <code>receive.buffer.bytes</code> is now 64K for the new consumer.</li>
+    <li> The new consumer now exposes the configuration parameter <code>exclude.internal.topics</code> to restrict internal topics (such as the consumer offsets topic) from accidentally being included in regular expression subscriptions. By default, it is enabled.</li>
+    <li> The old Scala producer has been deprecated. Users should migrate their code to the Java producer included in the kafka-clients JAR as soon as possible. </li>
+    <li> The new consumer API has been marked stable. </li>
 </ul>
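
[Editor's note] For reference, the two consumer settings called out above map to configuration entries like the following; the values shown are the documented 0.10.0 defaults, and the file layout is illustrative.

  # consumer configuration (illustrative)
  receive.buffer.bytes=65536        # 64K default for the new consumer in 0.10.0
  exclude.internal.topics=true      # enabled by default; keeps internal topics out of regex subscriptions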
 
 <h4><a id="upgrade_9" href="#upgrade_9">Upgrading from 0.8.0, 0.8.1.X or 0.8.2.X to 0.9.0.0</a></h4>


[2/2] kafka-site git commit: Merge branch 'asf-site' of https://git-wip-us.apache.org/repos/asf/kafka-site into asf-site

Posted by gw...@apache.org.
Merge branch 'asf-site' of https://git-wip-us.apache.org/repos/asf/kafka-site into asf-site


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/35b3bbb2
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/35b3bbb2
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/35b3bbb2

Branch: refs/heads/asf-site
Commit: 35b3bbb226bf58445ced6c9f9f2dcb15cc07a0de
Parents: fb7c900 87f504b
Author: Gwen Shapira <cs...@gmail.com>
Authored: Fri Apr 29 16:49:56 2016 -0700
Committer: Gwen Shapira <cs...@gmail.com>
Committed: Fri Apr 29 16:49:56 2016 -0700

----------------------------------------------------------------------
 committers.html  |   9 +++++++++
 images/ijuma.jpg | Bin 0 -> 45125 bytes
 2 files changed, 9 insertions(+)
----------------------------------------------------------------------