Posted to commits@kafka.apache.org by gu...@apache.org on 2017/11/01 22:41:07 UTC

[45/51] [partial] kafka-site git commit: MINOR: Follow-up Update on 1.0.0 release

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/generated/streams_config.html
----------------------------------------------------------------------
diff --git a/10/generated/streams_config.html b/10/generated/streams_config.html
new file mode 100644
index 0000000..be6e3c1
--- /dev/null
+++ b/10/generated/streams_config.html
@@ -0,0 +1,86 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>application.id</td><td>An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-id prefix, 2) the group-id for membership management, 3) the changelog topic prefix.</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>bootstrap.servers</td><td>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</td><td>list</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>replication.factor</td><td>The replication factor for change log topics and repartition topics created by the stream processing application.</td><td>int</td><td>1</td><td></td><td>high</td></tr>
+<tr>
+<td>state.dir</td><td>Directory location for state store.</td><td>string</td><td>/tmp/kafka-streams</td><td></td><td>high</td></tr>
+<tr>
+<td>cache.max.bytes.buffering</td><td>Maximum number of memory bytes to be used for buffering across all threads</td><td>long</td><td>10485760</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>client.id</td><td>An ID prefix string used for the client IDs of internal consumer, producer and restore-consumer, with pattern '&lt;client.id&gt;-StreamThread-&lt;threadSequenceNumber&gt;-&lt;consumer|producer|restore-consumer&gt;'.</td><td>string</td><td>""</td><td></td><td>medium</td></tr>
+<tr>
+<td>default.deserialization.exception.handler</td><td>Exception handling class that implements the <code>org.apache.kafka.streams.errors.DeserializationExceptionHandler</code> interface.</td><td>class</td><td>org.apache.kafka.streams.errors.LogAndFailExceptionHandler</td><td></td><td>medium</td></tr>
+<tr>
+<td>default.key.serde</td><td> Default serializer / deserializer class for key that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface.</td><td>class</td><td>org.apache.kafka.common.serialization.Serdes$ByteArraySerde</td><td></td><td>medium</td></tr>
+<tr>
+<td>default.timestamp.extractor</td><td>Default timestamp extractor class that implements the <code>org.apache.kafka.streams.processor.TimestampExtractor</code> interface.</td><td>class</td><td>org.apache.kafka.streams.processor.FailOnInvalidTimestamp</td><td></td><td>medium</td></tr>
+<tr>
+<td>default.value.serde</td><td>Default serializer / deserializer class for value that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface.</td><td>class</td><td>org.apache.kafka.common.serialization.Serdes$ByteArraySerde</td><td></td><td>medium</td></tr>
+<tr>
+<td>num.standby.replicas</td><td>The number of standby replicas for each task.</td><td>int</td><td>0</td><td></td><td>medium</td></tr>
+<tr>
+<td>num.stream.threads</td><td>The number of threads to execute stream processing.</td><td>int</td><td>1</td><td></td><td>medium</td></tr>
+<tr>
+<td>processing.guarantee</td><td>The processing guarantee that should be used. Possible values are <code>at_least_once</code> (default) and <code>exactly_once</code>.</td><td>string</td><td>at_least_once</td><td>[at_least_once, exactly_once]</td><td>medium</td></tr>
+<tr>
+<td>security.protocol</td><td>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<tr>
+<td>application.server</td><td>A host:port pair pointing to an embedded user defined endpoint that can be used for discovering the locations of state stores within a single KafkaStreams application</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>buffered.records.per.partition</td><td>The maximum number of records to buffer per partition.</td><td>int</td><td>1000</td><td></td><td>low</td></tr>
+<tr>
+<td>commit.interval.ms</td><td>The frequency with which to save the position of the processor. (Note: if 'processing.guarantee' is set to 'exactly_once', the default value is 100, otherwise the default value is 30000.)</td><td>long</td><td>30000</td><td></td><td>low</td></tr>
+<tr>
+<td>connections.max.idle.ms</td><td>Close idle connections after the number of milliseconds specified by this config.</td><td>long</td><td>540000</td><td></td><td>low</td></tr>
+<tr>
+<td>key.serde</td><td>Serializer / deserializer class for key that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface. This config is deprecated, use <code>default.key.serde</code> instead</td><td>class</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>metadata.max.age.ms</td><td>The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.</td><td>long</td><td>300000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>metric.reporters</td><td>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</td><td>list</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.num.samples</td><td>The number of samples maintained to compute metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metrics.recording.level</td><td>The highest recording level for metrics.</td><td>string</td><td>INFO</td><td>[INFO, DEBUG]</td><td>low</td></tr>
+<tr>
+<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is computed over.</td><td>long</td><td>30000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>partition.grouper</td><td>Partition grouper class that implements the <code>org.apache.kafka.streams.processor.PartitionGrouper</code> interface.</td><td>class</td><td>org.apache.kafka.streams.processor.DefaultPartitionGrouper</td><td></td><td>low</td></tr>
+<tr>
+<td>poll.ms</td><td>The amount of time in milliseconds to block waiting for input.</td><td>long</td><td>100</td><td></td><td>low</td></tr>
+<tr>
+<td>receive.buffer.bytes</td><td>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</td><td>int</td><td>32768</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.max.ms</td><td>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</td><td>long</td><td>1000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.ms</td><td>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>request.timeout.ms</td><td>The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.</td><td>int</td><td>40000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>rocksdb.config.setter</td><td>A RocksDB config setter class or class name that implements the <code>org.apache.kafka.streams.state.RocksDBConfigSetter</code> interface.</td><td>class</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</td><td>int</td><td>131072</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>state.cleanup.delay.ms</td><td>The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have not been modified for at least state.cleanup.delay.ms will be removed.</td><td>long</td><td>600000</td><td></td><td>low</td></tr>
+<tr>
+<td>timestamp.extractor</td><td>Timestamp extractor class that implements the <code>org.apache.kafka.streams.processor.TimestampExtractor</code> interface. This config is deprecated, use <code>default.timestamp.extractor</code> instead</td><td>class</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>value.serde</td><td>Serializer / deserializer class for value that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface. This config is deprecated, use <code>default.value.serde</code> instead</td><td>class</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>windowstore.changelog.additional.retention.ms</td><td>Added to a window's maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day.</td><td>long</td><td>86400000</td><td></td><td>low</td></tr>
+<tr>
+<td>zookeeper.connect</td><td>Zookeeper connect string for Kafka topics management. This config is deprecated and will be ignored as Streams API does not use Zookeeper anymore.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+</tbody></table>
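+<p>As a rough illustration (topic names and the broker address below are placeholders, and any of the settings above can be added the same way), the configs in this table are passed to a Kafka Streams application through a <code>Properties</code> object:</p>
+<pre class="brush: java;">
+import java.util.Properties;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.streams.KafkaStreams;
+import org.apache.kafka.streams.StreamsBuilder;
+import org.apache.kafka.streams.StreamsConfig;
+
+public class StreamsConfigExample {
+    public static void main(String[] args) {
+        Properties props = new Properties();
+        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // required, no default
+        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");   // required, no default
+        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
+
+        // A trivial topology that copies one topic to another, just to have something to run.
+        StreamsBuilder builder = new StreamsBuilder();
+        builder.stream("input-topic").to("output-topic");
+
+        KafkaStreams streams = new KafkaStreams(builder.build(), props);
+        streams.start();
+    }
+}
+</pre>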

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/generated/topic_config.html
----------------------------------------------------------------------
diff --git a/10/generated/topic_config.html b/10/generated/topic_config.html
new file mode 100644
index 0000000..67a8e39
--- /dev/null
+++ b/10/generated/topic_config.html
@@ -0,0 +1,59 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Server Default Property</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>cleanup.policy</td><td>A string that is either "delete" or "compact". This string designates the retention policy to use on old log segments. The default policy ("delete") will discard old segments when their retention time or size limit has been reached. The "compact" setting will enable <a href="#compaction">log compaction</a> on the topic.</td><td>list</td><td>delete</td><td>[compact, delete]</td><td>log.cleanup.policy</td><td>medium</td></tr>
+<tr>
+<td>compression.type</td><td>Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.</td><td>string</td><td>producer</td><td>[uncompressed, snappy, lz4, gzip, producer]</td><td>compression.type</td><td>medium</td></tr>
+<tr>
+<td>delete.retention.ms</td><td>The amount of time to retain delete tombstone markers for <a href="#compaction">log compacted</a> topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final state (otherwise delete tombstones may be collected before they complete their scan).</td><td>long</td><td>86400000</td><td>[0,...]</td><td>log.cleaner.delete.retention.ms</td><td>medium</td></tr>
+<tr>
+<td>file.delete.delay.ms</td><td>The time to wait before deleting a file from the filesystem</td><td>long</td><td>60000</td><td>[0,...]</td><td>log.segment.delete.delay.ms</td><td>medium</td></tr>
+<tr>
+<td>flush.messages</td><td>This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. This setting can be overridden on a per-topic basis (see <a href="#topicconfigs">the per-topic configuration section</a>).</td><td>long</td><td>9223372036854775807</td><td>[0,...]</td><td>log.flush.interval.messages</td><td>medium</td></tr>
+<tr>
+<td>flush.ms</td><td>This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient.</td><td>long</td><td>9223372036854775807</td><td>[0,...]</td><td>log.flush.interval.ms</td><td>medium</td></tr>
+<tr>
+<td>follower.replication.throttled.replicas</td><td>A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.</td><td>list</td><td>""</td><td>[partitionId],[brokerId]:[partitionId],[brokerId]:...</td><td>follower.replication.throttled.replicas</td><td>medium</td></tr>
+<tr>
+<td>index.interval.bytes</td><td>This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this.</td><td>int</td><td>4096</td><td>[0,...]</td><td>log.index.interval.bytes</td><td>medium</td></tr>
+<tr>
+<td>leader.replication.throttled.replicas</td><td>A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.</td><td>list</td><td>""</td><td>[partitionId],[brokerId]:[partitionId],[brokerId]:...</td><td>leader.replication.throttled.replicas</td><td>medium</td></tr>
+<tr>
+<td>max.message.bytes</td><td><p>The largest record batch size allowed by Kafka. If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large.</p><p>In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.</p></td><td>int</td><td>1000012</td><td>[0,...]</td><td>message.max.bytes</td><td>medium</td></tr>
+<tr>
+<td>message.format.version</td><td>Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.</td><td>string</td><td>1.0-IV0</td><td></td><td>log.message.format.version</td><td>medium</td></tr>
+<tr>
+<td>message.timestamp.difference.max.ms</td><td>The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if message.timestamp.type=LogAppendTime.</td><td>long</td><td>9223372036854775807</td><td>[0,...]</td><td>log.message.timestamp.difference.max.ms</td><td>medium</td></tr>
+<tr>
+<td>message.timestamp.type</td><td>Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime`</td><td>string</td><td>CreateTime</td><td></td><td>log.message.timestamp.type</td><td>medium</td></tr>
+<tr>
+<td>min.cleanable.dirty.ratio</td><td>This configuration controls how frequently the log compactor will attempt to clean the log (assuming <a href="#compaction">log compaction</a> is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50%, at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log.</td><td>double</td><td>0.5</td><td>[0,...,1]</td><td>log.cleaner.min.cleanable.ratio</td><td>medium</td></tr>
+<tr>
+<td>min.compaction.lag.ms</td><td>The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.</td><td>long</td><td>0</td><td>[0,...]</td><td>log.cleaner.min.compaction.lag.ms</td><td>medium</td></tr>
+<tr>
+<td>min.insync.replicas</td><td>When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).<br>When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.</td><td>int</td><td>1</td><td>[1,...]</td><td>min.insync.replicas</td><td>medium</td></tr>
+<tr>
+<td>preallocate</td><td>True if we should preallocate the file on disk when creating a new log segment.</td><td>boolean</td><td>false</td><td></td><td>log.preallocate</td><td>medium</td></tr>
+<tr>
+<td>retention.bytes</td><td>This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit, only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes.</td><td>long</td><td>-1</td><td></td><td>log.retention.bytes</td><td>medium</td></tr>
+<tr>
+<td>retention.ms</td><td>This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data.</td><td>long</td><td>604800000</td><td></td><td>log.retention.ms</td><td>medium</td></tr>
+<tr>
+<td>segment.bytes</td><td>This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention.</td><td>int</td><td>1073741824</td><td>[14,...]</td><td>log.segment.bytes</td><td>medium</td></tr>
+<tr>
+<td>segment.index.bytes</td><td>This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting.</td><td>int</td><td>10485760</td><td>[0,...]</td><td>log.index.size.max.bytes</td><td>medium</td></tr>
+<tr>
+<td>segment.jitter.ms</td><td>The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling</td><td>long</td><td>0</td><td>[0,...]</td><td>log.roll.jitter.ms</td><td>medium</td></tr>
+<tr>
+<td>segment.ms</td><td>This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data.</td><td>long</td><td>604800000</td><td>[0,...]</td><td>log.roll.ms</td><td>medium</td></tr>
+<tr>
+<td>unclean.leader.election.enable</td><td>Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.</td><td>boolean</td><td>false</td><td></td><td>unclean.leader.election.enable</td><td>medium</td></tr>
+</tbody></table>
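+<p>As a rough illustration (topic name, partition and replica counts, and the chosen override are placeholders), per-topic overrides from this table can be supplied at creation time, for example through the Java <code>AdminClient</code> or the <code>--config</code> option of <code>kafka-topics.sh</code>:</p>
+<pre class="brush: java;">
+import java.util.Collections;
+import java.util.Properties;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.AdminClientConfig;
+import org.apache.kafka.clients.admin.NewTopic;
+
+public class TopicConfigExample {
+    public static void main(String[] args) throws Exception {
+        Properties props = new Properties();
+        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");  // placeholder
+        try (AdminClient admin = AdminClient.create(props)) {
+            // Per-topic overrides are passed as string key/value pairs, using the names in the table above.
+            NewTopic topic = new NewTopic("my-compacted-topic", 3, (short) 3)
+                    .configs(Collections.singletonMap("cleanup.policy", "compact"));
+            admin.createTopics(Collections.singletonList(topic)).all().get();
+        }
+    }
+}
+</pre>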

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/consumer-groups.png
----------------------------------------------------------------------
diff --git a/10/images/consumer-groups.png b/10/images/consumer-groups.png
new file mode 100644
index 0000000..16fe293
Binary files /dev/null and b/10/images/consumer-groups.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/icons/NYT.jpg
----------------------------------------------------------------------
diff --git a/10/images/icons/NYT.jpg b/10/images/icons/NYT.jpg
new file mode 100644
index 0000000..f4a7e8f
Binary files /dev/null and b/10/images/icons/NYT.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/icons/architecture--white.png
----------------------------------------------------------------------
diff --git a/10/images/icons/architecture--white.png b/10/images/icons/architecture--white.png
new file mode 100644
index 0000000..98b1b03
Binary files /dev/null and b/10/images/icons/architecture--white.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/icons/architecture.png
----------------------------------------------------------------------
diff --git a/10/images/icons/architecture.png b/10/images/icons/architecture.png
new file mode 100644
index 0000000..6f9fd40
Binary files /dev/null and b/10/images/icons/architecture.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/icons/documentation--white.png
----------------------------------------------------------------------
diff --git a/10/images/icons/documentation--white.png b/10/images/icons/documentation--white.png
new file mode 100644
index 0000000..1e8fd97
Binary files /dev/null and b/10/images/icons/documentation--white.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/icons/documentation.png
----------------------------------------------------------------------
diff --git a/10/images/icons/documentation.png b/10/images/icons/documentation.png
new file mode 100644
index 0000000..8d9da19
Binary files /dev/null and b/10/images/icons/documentation.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/icons/line.png
----------------------------------------------------------------------
diff --git a/10/images/icons/line.png b/10/images/icons/line.png
new file mode 100755
index 0000000..4587d21
Binary files /dev/null and b/10/images/icons/line.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/icons/new-york.png
----------------------------------------------------------------------
diff --git a/10/images/icons/new-york.png b/10/images/icons/new-york.png
new file mode 100755
index 0000000..42a4b0b
Binary files /dev/null and b/10/images/icons/new-york.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/icons/rabobank.png
----------------------------------------------------------------------
diff --git a/10/images/icons/rabobank.png b/10/images/icons/rabobank.png
new file mode 100755
index 0000000..ddad710
Binary files /dev/null and b/10/images/icons/rabobank.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/icons/tutorials--white.png
----------------------------------------------------------------------
diff --git a/10/images/icons/tutorials--white.png b/10/images/icons/tutorials--white.png
new file mode 100644
index 0000000..97a0c04
Binary files /dev/null and b/10/images/icons/tutorials--white.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/icons/tutorials.png
----------------------------------------------------------------------
diff --git a/10/images/icons/tutorials.png b/10/images/icons/tutorials.png
new file mode 100644
index 0000000..983da6c
Binary files /dev/null and b/10/images/icons/tutorials.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/icons/zalando.png
----------------------------------------------------------------------
diff --git a/10/images/icons/zalando.png b/10/images/icons/zalando.png
new file mode 100755
index 0000000..719a7dc
Binary files /dev/null and b/10/images/icons/zalando.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/kafka-apis.png
----------------------------------------------------------------------
diff --git a/10/images/kafka-apis.png b/10/images/kafka-apis.png
new file mode 100644
index 0000000..db6053c
Binary files /dev/null and b/10/images/kafka-apis.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/kafka_log.png
----------------------------------------------------------------------
diff --git a/10/images/kafka_log.png b/10/images/kafka_log.png
new file mode 100644
index 0000000..75abd96
Binary files /dev/null and b/10/images/kafka_log.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/kafka_multidc.png
----------------------------------------------------------------------
diff --git a/10/images/kafka_multidc.png b/10/images/kafka_multidc.png
new file mode 100644
index 0000000..7bc56f4
Binary files /dev/null and b/10/images/kafka_multidc.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/kafka_multidc_complex.png
----------------------------------------------------------------------
diff --git a/10/images/kafka_multidc_complex.png b/10/images/kafka_multidc_complex.png
new file mode 100644
index 0000000..ab88deb
Binary files /dev/null and b/10/images/kafka_multidc_complex.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/log_anatomy.png
----------------------------------------------------------------------
diff --git a/10/images/log_anatomy.png b/10/images/log_anatomy.png
new file mode 100644
index 0000000..a649499
Binary files /dev/null and b/10/images/log_anatomy.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/log_cleaner_anatomy.png
----------------------------------------------------------------------
diff --git a/10/images/log_cleaner_anatomy.png b/10/images/log_cleaner_anatomy.png
new file mode 100644
index 0000000..fb425b0
Binary files /dev/null and b/10/images/log_cleaner_anatomy.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/log_compaction.png
----------------------------------------------------------------------
diff --git a/10/images/log_compaction.png b/10/images/log_compaction.png
new file mode 100644
index 0000000..4e4a833
Binary files /dev/null and b/10/images/log_compaction.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/log_consumer.png
----------------------------------------------------------------------
diff --git a/10/images/log_consumer.png b/10/images/log_consumer.png
new file mode 100644
index 0000000..fbc45f2
Binary files /dev/null and b/10/images/log_consumer.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/mirror-maker.png
----------------------------------------------------------------------
diff --git a/10/images/mirror-maker.png b/10/images/mirror-maker.png
new file mode 100644
index 0000000..8f76b1f
Binary files /dev/null and b/10/images/mirror-maker.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/producer_consumer.png
----------------------------------------------------------------------
diff --git a/10/images/producer_consumer.png b/10/images/producer_consumer.png
new file mode 100644
index 0000000..4b10cc9
Binary files /dev/null and b/10/images/producer_consumer.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-architecture-overview.jpg
----------------------------------------------------------------------
diff --git a/10/images/streams-architecture-overview.jpg b/10/images/streams-architecture-overview.jpg
new file mode 100644
index 0000000..9222079
Binary files /dev/null and b/10/images/streams-architecture-overview.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-architecture-states.jpg
----------------------------------------------------------------------
diff --git a/10/images/streams-architecture-states.jpg b/10/images/streams-architecture-states.jpg
new file mode 100644
index 0000000..fde12db
Binary files /dev/null and b/10/images/streams-architecture-states.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-architecture-tasks.jpg
----------------------------------------------------------------------
diff --git a/10/images/streams-architecture-tasks.jpg b/10/images/streams-architecture-tasks.jpg
new file mode 100644
index 0000000..2e957f9
Binary files /dev/null and b/10/images/streams-architecture-tasks.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-architecture-threads.jpg
----------------------------------------------------------------------
diff --git a/10/images/streams-architecture-threads.jpg b/10/images/streams-architecture-threads.jpg
new file mode 100644
index 0000000..d5f10db
Binary files /dev/null and b/10/images/streams-architecture-threads.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-architecture-topology.jpg
----------------------------------------------------------------------
diff --git a/10/images/streams-architecture-topology.jpg b/10/images/streams-architecture-topology.jpg
new file mode 100644
index 0000000..f42e8cd
Binary files /dev/null and b/10/images/streams-architecture-topology.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-cache-and-commit-interval.png
----------------------------------------------------------------------
diff --git a/10/images/streams-cache-and-commit-interval.png b/10/images/streams-cache-and-commit-interval.png
new file mode 100644
index 0000000..a663bc6
Binary files /dev/null and b/10/images/streams-cache-and-commit-interval.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-concepts-topology.jpg
----------------------------------------------------------------------
diff --git a/10/images/streams-concepts-topology.jpg b/10/images/streams-concepts-topology.jpg
new file mode 100644
index 0000000..832f6d4
Binary files /dev/null and b/10/images/streams-concepts-topology.jpg differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-interactive-queries-01.png
----------------------------------------------------------------------
diff --git a/10/images/streams-interactive-queries-01.png b/10/images/streams-interactive-queries-01.png
new file mode 100644
index 0000000..d5d5031
Binary files /dev/null and b/10/images/streams-interactive-queries-01.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-interactive-queries-02.png
----------------------------------------------------------------------
diff --git a/10/images/streams-interactive-queries-02.png b/10/images/streams-interactive-queries-02.png
new file mode 100644
index 0000000..ea894b6
Binary files /dev/null and b/10/images/streams-interactive-queries-02.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-interactive-queries-03.png
----------------------------------------------------------------------
diff --git a/10/images/streams-interactive-queries-03.png b/10/images/streams-interactive-queries-03.png
new file mode 100644
index 0000000..403e3ae
Binary files /dev/null and b/10/images/streams-interactive-queries-03.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-interactive-queries-api-01.png
----------------------------------------------------------------------
diff --git a/10/images/streams-interactive-queries-api-01.png b/10/images/streams-interactive-queries-api-01.png
new file mode 100644
index 0000000..2b4aaed
Binary files /dev/null and b/10/images/streams-interactive-queries-api-01.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-interactive-queries-api-02.png
----------------------------------------------------------------------
diff --git a/10/images/streams-interactive-queries-api-02.png b/10/images/streams-interactive-queries-api-02.png
new file mode 100644
index 0000000..e5e7527
Binary files /dev/null and b/10/images/streams-interactive-queries-api-02.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-stateful_operations.png
----------------------------------------------------------------------
diff --git a/10/images/streams-stateful_operations.png b/10/images/streams-stateful_operations.png
new file mode 100644
index 0000000..b0fe3de
Binary files /dev/null and b/10/images/streams-stateful_operations.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-table-duality-01.png
----------------------------------------------------------------------
diff --git a/10/images/streams-table-duality-01.png b/10/images/streams-table-duality-01.png
new file mode 100644
index 0000000..4fa4d1b
Binary files /dev/null and b/10/images/streams-table-duality-01.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-table-duality-02.png
----------------------------------------------------------------------
diff --git a/10/images/streams-table-duality-02.png b/10/images/streams-table-duality-02.png
new file mode 100644
index 0000000..4e805c1
Binary files /dev/null and b/10/images/streams-table-duality-02.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-table-duality-03.png
----------------------------------------------------------------------
diff --git a/10/images/streams-table-duality-03.png b/10/images/streams-table-duality-03.png
new file mode 100644
index 0000000..b0b04f5
Binary files /dev/null and b/10/images/streams-table-duality-03.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-table-updates-01.png
----------------------------------------------------------------------
diff --git a/10/images/streams-table-updates-01.png b/10/images/streams-table-updates-01.png
new file mode 100644
index 0000000..3a2c35e
Binary files /dev/null and b/10/images/streams-table-updates-01.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-table-updates-02.png
----------------------------------------------------------------------
diff --git a/10/images/streams-table-updates-02.png b/10/images/streams-table-updates-02.png
new file mode 100644
index 0000000..a0a5b1f
Binary files /dev/null and b/10/images/streams-table-updates-02.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/streams-welcome.png
----------------------------------------------------------------------
diff --git a/10/images/streams-welcome.png b/10/images/streams-welcome.png
new file mode 100644
index 0000000..63918c4
Binary files /dev/null and b/10/images/streams-welcome.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/images/tracking_high_level.png
----------------------------------------------------------------------
diff --git a/10/images/tracking_high_level.png b/10/images/tracking_high_level.png
new file mode 100644
index 0000000..b643230
Binary files /dev/null and b/10/images/tracking_high_level.png differ

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/implementation.html
----------------------------------------------------------------------
diff --git a/10/implementation.html b/10/implementation.html
new file mode 100644
index 0000000..af234ea
--- /dev/null
+++ b/10/implementation.html
@@ -0,0 +1,319 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script id="implementation-template" type="text/x-handlebars-template">
+    <h3><a id="networklayer" href="#networklayer">5.1 Network Layer</a></h3>
+    <p>
+    The network layer is a fairly straight-forward NIO server, and will not be described in great detail. The sendfile implementation is done by giving the <code>MessageSet</code> interface a <code>writeTo</code> method. This allows the file-backed message set to use the more efficient <code>transferTo</code> implementation instead of an in-process buffered write. The threading model is a single acceptor thread and <i>N</i> processor threads which handle a fixed number of connections each. This design has been pretty thoroughly tested <a href="http://sna-projects.com/blog/2009/08/introducing-the-nio-socketserver-implementation">elsewhere</a> and found to be simple to implement and fast. The protocol is kept quite simple to allow for future implementation of clients in other languages.
+    </p>
+    <h3><a id="messages" href="#messages">5.2 Messages</a></h3>
+    <p>
+    Messages consist of a variable-length header, a variable length opaque key byte array and a variable length opaque value byte array. The format of the header is described in the following section.
+    Leaving the key and value opaque is the right decision: there is a great deal of progress being made on serialization libraries right now, and any particular choice is unlikely to be right for all uses. Needless to say a particular application using Kafka would likely mandate a particular serialization type as part of its usage. The <code>RecordBatch</code> interface is simply an iterator over messages with specialized methods for bulk reading and writing to an NIO <code>Channel</code>.</p>
+
+    <h3><a id="messageformat" href="#messageformat">5.3 Message Format</a></h3>
+    <p>
+    Messages (aka Records) are always written in batches. The technical term for a batch of messages is a record batch, and a record batch contains one or more records. In the degenerate case, we could have a record batch containing a single record.
+    Record batches and records have their own headers. The format of each is described below for Kafka version 0.11.0 and later (message format version v2, or magic=2). <a href="https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-Messagesets">Click here</a> for details about message formats 0 and 1.</p>
+
+    <h4><a id="recordbatch" href="#recordbatch">5.3.1 Record Batch</a></h4>
+	<p> The following is the on-disk format of a RecordBatch. </p>
+	<p><pre class="brush: java;">
+		baseOffset: int64
+		batchLength: int32
+		partitionLeaderEpoch: int32
+		magic: int8 (current magic value is 2)
+		crc: int32
+		attributes: int16
+			bit 0~2:
+				0: no compression
+				1: gzip
+				2: snappy
+				3: lz4
+			bit 3: timestampType
+			bit 4: isTransactional (0 means not transactional)
+			bit 5: isControlBatch (0 means not a control batch)
+			bit 6~15: unused
+		lastOffsetDelta: int32
+		firstTimestamp: int64
+		maxTimestamp: int64
+		producerId: int64
+		producerEpoch: int16
+		baseSequence: int32
+		records: [Record]
+	</pre></p>
+    <p> Note that when compression is enabled, the compressed record data is serialized directly following the count of the number of records. </p>
+
+    <p>The CRC covers the data from the attributes to the end of the batch (i.e. all the bytes that follow the CRC). It is located after the magic byte, which
+    means that clients must parse the magic byte before deciding how to interpret the bytes between the batch length and the magic byte. The partition leader
+    epoch field is not included in the CRC computation to avoid the need to recompute the CRC when this field is assigned for every batch that is received by
+    the broker. The CRC-32C (Castagnoli) polynomial is used for the computation.</p>
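+
+    <p>As a rough sketch of the layout above (not Kafka's own parsing code), the fixed header fields can be read from a <code>ByteBuffer</code> positioned at the start of a batch and the CRC re-computed over the covered range. It assumes batchLength counts the bytes that follow the batchLength field, so the batch ends at offset 12 + batchLength, and it uses <code>java.util.zip.CRC32C</code> (Java 9+).</p>
+    <pre class="brush: java;">
+import java.nio.ByteBuffer;
+import java.util.zip.CRC32C;                          // Java 9+
+
+public class RecordBatchHeaderSketch {
+    static boolean crcMatches(ByteBuffer batch) {
+        int start = batch.position();
+        long baseOffset = batch.getLong(start);        // bytes 0-7
+        int batchLength = batch.getInt(start + 8);     // bytes 8-11, assumed to count the bytes that follow
+        int partitionLeaderEpoch = batch.getInt(start + 12);
+        byte magic = batch.get(start + 16);            // 2 for this format
+        int storedCrc = batch.getInt(start + 17);
+        short attributes = batch.getShort(start + 21);
+        int compressionType = attributes & 0x07;              // bits 0-2
+        boolean isTransactional = (attributes & 0x10) != 0;   // bit 4
+        boolean isControlBatch = (attributes & 0x20) != 0;    // bit 5
+
+        // The CRC covers everything from the attributes field to the end of the batch,
+        // so the partition leader epoch and the magic byte are deliberately excluded.
+        ByteBuffer covered = batch.duplicate();
+        covered.limit(start + 12 + batchLength);
+        covered.position(start + 21);
+        CRC32C crc = new CRC32C();
+        crc.update(covered);
+        return ((int) crc.getValue()) == storedCrc;
+    }
+}
+    </pre>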
+
+    <p>On compaction: unlike the older message formats, magic v2 and above preserves the first and last offset/sequence numbers from the original batch when the log is cleaned. This is required in order to be able to restore the
+    producer's state when the log is reloaded. If we did not retain the last sequence number, for example, then after a partition leader failure, the producer might see an OutOfSequence error. The base sequence number must
+    be preserved for duplicate checking (the broker checks incoming Produce requests for duplicates by verifying that the first and last sequence numbers of the incoming batch match the last from that producer). As a result,
+    it is possible to have empty batches in the log when all the records in the batch are cleaned but the batch is still retained in order to preserve a producer's last sequence number. One oddity here is that the baseTimestamp
+    field is not preserved during compaction, so it will change if the first record in the batch is compacted away.</p>
+
+    <h5><a id="controlbatch" href="#controlbatch">5.3.1.1 Control Batches</a></h5>
+    <p>A control batch contains a single record called the control record. Control records should not be passed on to applications. Instead, they are used by consumers to filter out aborted transactional messages.</p>
+    <p> The key of a control record conforms to the following schema: </p>
+    <p><pre class="brush: java">
+       version: int16 (current version is 0)
+       type: int16 (0 indicates an abort marker, 1 indicates a commit)
+    </pre></p>
+    <p>The schema for the value of a control record is dependent on the type. The value is opaque to clients.</p>
+
+	<h4><a id="record" href="#record">5.3.2 Record</a></h4>
+	<p>Record level headers were introduced in Kafka 0.11.0. The on-disk format of a record with Headers is delineated below. </p>
+	<p><pre class="brush: java;">
+		length: varint
+		attributes: int8
+			bit 0~7: unused
+		timestampDelta: varint
+		offsetDelta: varint
+		keyLength: varint
+		key: byte[]
+		valueLen: varint
+		value: byte[]
+		Headers => [Header]
+	</pre></p>
+	<h5><a id="recordheader" href="#recordheader">5.3.2.1 Record Header</a></h5>
+	<p><pre class="brush: java;">
+		headerKeyLength: varint
+		headerKey: String
+		headerValueLength: varint
+		Value: byte[]
+	</pre></p>
+    <p>We use the same varint encoding as Protobuf. More information on the latter can be found <a href="https://developers.google.com/protocol-buffers/docs/encoding#varints">here</a>. The count of headers in a record
+    is also encoded as a varint.</p>
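+
+    <p>As a rough sketch (not Kafka's internal utility), a varint can be written as follows, assuming the signed fields use the ZigZag mapping that Protobuf applies to its sint types:</p>
+    <pre class="brush: java;">
+import java.io.ByteArrayOutputStream;
+
+public class VarintSketch {
+    // ZigZag-map the signed value so small negative numbers stay small, then emit
+    // 7 bits per byte (least significant group first), setting the high bit on
+    // every byte except the last.
+    static void writeVarint(int value, ByteArrayOutputStream out) {
+        int v = (value * 2) ^ (value >> 31);  // ZigZag; value * 2 is the left shift by one bit
+        while ((v & 0xFFFFFF80) != 0) {
+            out.write((v & 0x7F) | 0x80);     // continuation bit set: more bytes follow
+            v >>>= 7;
+        }
+        out.write(v & 0x7F);                  // final byte, high bit clear
+    }
+
+    public static void main(String[] args) {
+        ByteArrayOutputStream out = new ByteArrayOutputStream();
+        writeVarint(-1, out);                 // encodes to the single byte 0x01
+        writeVarint(300, out);                // encodes to 0xD8 0x04
+    }
+}
+    </pre>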
+
+    <h3><a id="log" href="#log">5.4 Log</a></h3>
+    <p>
+    A log for a topic named "my_topic" with two partitions consists of two directories (namely <code>my_topic_0</code> and <code>my_topic_1</code>) populated with data files containing the messages for that topic. The format of the log files is a sequence of "log entries"; each log entry is a 4-byte integer <i>N</i> storing the message length which is followed by the <i>N</i> message bytes. Each message is uniquely identified by a 64-bit integer <i>offset</i> giving the byte position of the start of this message in the stream of all messages ever sent to that topic on that partition. The on-disk format of each message is given below. Each log file is named with the offset of the first message it contains. So the first file created will be 00000000000.kafka, and each additional file will have an integer name roughly <i>S</i> bytes from the previous file where <i>S</i> is the max log file size given in the configuration.
+    </p>
+    <p>
+    The exact binary format for records is versioned and maintained as a standard interface so record batches can be transferred between producer, broker, and client without recopying or conversion when desirable. The previous section included details about the on-disk format of records.
+    </p>
+   <p>
+    The use of the message offset as the message id is unusual. Our original idea was to use a GUID generated by the producer, and maintain a mapping from GUID to offset on each broker. But since a consumer must maintain an ID for each server, the global uniqueness of the GUID provides no value. Furthermore, the complexity of maintaining the mapping from a random id to an offset requires a heavy weight index structure which must be synchronized with disk, essentially requiring a full persistent random-access data structure. Thus to simplify the lookup structure we decided to use a simple per-partition atomic counter which could be coupled with the partition id and node id to uniquely identify a message; this makes the lookup structure simpler, though multiple seeks per consumer request are still likely. However once we settled on a counter, the jump to directly using the offset seemed natural&mdash;both after all are monotonically increasing integers unique to a partition. Since the offset is hidden from the consumer API this decision is ultimately an implementation detail and we went with the more efficient approach.
+    </p>
+    <img class="centered" src="/{{version}}/images/kafka_log.png">
+    <h4><a id="impl_writes" href="#impl_writes">Writes</a></h4>
+    <p>
+    The log allows serial appends which always go to the last file. This file is rolled over to a fresh file when it reaches a configurable size (say 1GB). The log takes two configuration parameters: <i>M</i>, which gives the number of messages to write before forcing the OS to flush the file to disk, and <i>S</i>, which gives a number of seconds after which a flush is forced. This gives a durability guarantee of losing at most <i>M</i> messages or <i>S</i> seconds of data in the event of a system crash.
+    </p>
+    <h4><a id="impl_reads" href="#impl_reads">Reads</a></h4>
+    <p>
+    Reads are done by giving the 64-bit logical offset of a message and an <i>S</i>-byte max chunk size. This will return an iterator over the messages contained in the <i>S</i>-byte buffer. <i>S</i> is intended to be larger than any single message, but in the event of an abnormally large message, the read can be retried multiple times, each time doubling the buffer size, until the message is read successfully. A maximum message and buffer size can be specified to make the server reject messages larger than some size, and to give a bound to the client on the maximum it needs to ever read to get a complete message. It is likely that the read buffer ends with a partial message; this is easily detected by the size delimiting.
+    </p>
+    <p>
+    The actual process of reading from an offset requires first locating the log segment file in which the data is stored, calculating the file-specific offset from the global offset value, and then reading from that file offset. The search is done as a simple binary search variation against an in-memory range maintained for each file.
+    </p>
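+    <p>A simplified illustration of that binary search (not Kafka's actual code): keep the base offsets parsed from the segment file names in a sorted array and pick the largest base offset that is not greater than the requested offset.</p>
+    <pre class="brush: java;">
+import java.util.Arrays;
+
+public class SegmentLookupSketch {
+    // Return the base offset of the segment that contains targetOffset.
+    static long findBaseOffset(long[] sortedBaseOffsets, long targetOffset) {
+        int idx = Arrays.binarySearch(sortedBaseOffsets, targetOffset);
+        if (idx >= 0) {
+            return sortedBaseOffsets[idx];            // exact match: the segment starts at the target
+        }
+        int insertionPoint = -idx - 1;                // index of the first base offset greater than the target
+        if (insertionPoint == 0) {
+            throw new IllegalArgumentException("offset precedes the earliest segment");
+        }
+        return sortedBaseOffsets[insertionPoint - 1]; // segment whose range covers the target
+    }
+
+    public static void main(String[] args) {
+        long[] baseOffsets = {0L, 368769L, 737337L};  // taken from the segment file names
+        System.out.println(findBaseOffset(baseOffsets, 500000L));  // prints 368769
+    }
+}
+    </pre>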
+    <p>
+    The log provides the capability of getting the most recently written message to allow clients to start subscribing as of "right now". This is also useful in the case the consumer fails to consume its data within its SLA-specified number of days. In this case when the client attempts to consume a non-existent offset it is given an OutOfRangeException and can either reset itself or fail as appropriate to the use case.
+    </p>
+
+    <p> The following is the format of the results sent to the consumer.</p>
+
+    <pre class="brush: text;">
+    MessageSetSend (fetch result)
+
+    total length     : 4 bytes
+    error code       : 2 bytes
+    message 1        : x bytes
+    ...
+    message n        : x bytes
+    </pre>
+
+    <pre class="brush: text;">
+    MultiMessageSetSend (multiFetch result)
+
+    total length       : 4 bytes
+    error code         : 2 bytes
+    messageSetSend 1
+    ...
+    messageSetSend n
+    </pre>
+    <h4><a id="impl_deletes" href="#impl_deletes">Deletes</a></h4>
+    <p>
+    Data is deleted one log segment at a time. The log manager allows pluggable delete policies to choose which files are eligible for deletion. The current policy deletes any log with a modification time of more than <i>N</i> days ago, though a policy which retained the last <i>N</i> GB could also be useful. To avoid locking reads while still allowing deletes that modify the segment list we use a copy-on-write style segment list implementation that provides consistent views to allow a binary search to proceed on an immutable static snapshot view of the log segments while deletes are progressing.
+    </p>
+    <h4><a id="impl_guarantees" href="#impl_guarantees">Guarantees</a></h4>
+    <p>
+    The log provides a configuration parameter <i>M</i> which controls the maximum number of messages that are written before forcing a flush to disk. On startup a log recovery process is run that iterates over all messages in the newest log segment and verifies that each message entry is valid. A message entry is valid if the sum of its size and offset is less than the length of the file AND the CRC32 of the message payload matches the CRC stored with the message. In the event corruption is detected the log is truncated to the last valid offset.
+    </p>
+    <p>
+    Note that two kinds of corruption must be handled: truncation in which an unwritten block is lost due to a crash, and corruption in which a nonsense block is ADDED to the file. The reason for this is that in general the OS makes no guarantee of the write order between the file inode and the actual block data so in addition to losing written data the file can gain nonsense data if the inode is updated with a new size but a crash occurs before the block containing that data is written. The CRC detects this corner case, and prevents it from corrupting the log (though the unwritten messages are, of course, lost).
+    </p>
+
+    <h3><a id="distributionimpl" href="#distributionimpl">5.5 Distribution</a></h3>
+    <h4><a id="impl_offsettracking" href="#impl_offsettracking">Consumer Offset Tracking</a></h4>
+    <p>
+    The high-level consumer tracks the maximum offset it has consumed in each partition and periodically commits its offset vector so that it can resume from those offsets in the event of a restart. Kafka provides the option to store all the offsets for a given consumer group in a designated broker (for that group) called the <i>offset manager</i>. i.e., any consumer instance in that consumer group should send its offset commits and fetches to that offset manager (broker). The high-level consumer handles this automatically. If you use the simple consumer you will need to manage offsets manually. This is currently unsupported in the Java simple consumer which can only commit or fetch offsets in ZooKeeper. If you use the Scala simple consumer you can discover the offset manager and explicitly commit or fetch offsets to the offset manager. A consumer can look up its offset manager by issuing a GroupCoordinatorRequest to any Kafka broker and reading the GroupCoordinatorResponse which will contain the offset manager. The consumer can then proceed to commit or fetch offsets from the offset manager broker. In case the offset manager moves, the consumer will need to rediscover the offset manager. If you wish to manage your offsets manually, you can take a look at these <a href="https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka">code samples that explain how to issue OffsetCommitRequest and OffsetFetchRequest</a>.
+    </p>
+
+    <p>
+    When the offset manager receives an OffsetCommitRequest, it appends the request to a special <a href="#compaction">compacted</a> Kafka topic named <i>__consumer_offsets</i>. The offset manager sends a successful offset commit response to the consumer only after all the replicas of the offsets topic receive the offsets. In case the offsets fail to replicate within a configurable timeout, the offset commit will fail and the consumer may retry the commit after backing off. (This is done automatically by the high-level consumer.) The brokers periodically compact the offsets topic since it only needs to maintain the most recent offset commit per partition. The offset manager also caches the offsets in an in-memory table in order to serve offset fetches quickly.
+    </p>
+
+    <p>
+    When the offset manager receives an offset fetch request, it simply returns the last committed offset vector from the offsets cache. In case the offset manager was just started or if it just became the offset manager for a new set of consumer groups (by becoming a leader for a partition of the offsets topic), it may need to load the offsets topic partition into the cache. In this case, the offset fetch will fail with an OffsetsLoadInProgress exception and the consumer may retry the OffsetFetchRequest after backing off. (This is done automatically by the high-level consumer.)
+    </p>
+
+    <h5><a id="offsetmigration" href="#offsetmigration">Migrating offsets from ZooKeeper to Kafka</a></h5>
+    <p>
+    Kafka consumers in earlier releases store their offsets by default in ZooKeeper. It is possible to migrate these consumers to commit offsets into Kafka by following these steps:
+    <ol>
+    <li>Set <code>offsets.storage=kafka</code> and <code>dual.commit.enabled=true</code> in your consumer config.
+    </li>
+    <li>Do a rolling bounce of your consumers and then verify that your consumers are healthy.
+    </li>
+    <li>Set <code>dual.commit.enabled=false</code> in your consumer config.
+    </li>
+    <li>Do a rolling bounce of your consumers and then verify that your consumers are healthy.
+    </li>
+    </ol>
+    A roll-back (i.e., migrating from Kafka back to ZooKeeper) can also be performed using the above steps if you set <code>offsets.storage=zookeeper</code>.
+    </p>
+
+    <h4><a id="impl_zookeeper" href="#impl_zookeeper">ZooKeeper Directories</a></h4>
+    <p>
+    The following gives the ZooKeeper structures and algorithms used for co-ordination between consumers and brokers.
+    </p>
+
+    <h4><a id="impl_zknotation" href="#impl_zknotation">Notation</a></h4>
+    <p>
+    When an element in a path is denoted [xyz], that means that the value of xyz is not fixed and there is in fact a ZooKeeper znode for each possible value of xyz. For example /topics/[topic] would be a directory named /topics containing a sub-directory for each topic name. Numerical ranges are also given such as [0...5] to indicate the subdirectories 0, 1, 2, 3, 4, 5. An arrow -> is used to indicate the contents of a znode. For example /hello -> world would indicate a znode /hello containing the value "world".
+    </p>
+
+    <h4><a id="impl_zkbroker" href="#impl_zkbroker">Broker Node Registry</a></h4>
+    <pre class="brush: json;">
+    /brokers/ids/[0...N] --> {"jmx_port":...,"timestamp":...,"endpoints":[...],"host":...,"version":...,"port":...} (ephemeral node)
+    </pre>
+    <p>
+    This is a list of all present broker nodes, each of which provides a unique logical broker id that identifies it to consumers (the id must be given as part of the broker's configuration). On startup, a broker node registers itself by creating a znode with the logical broker id under /brokers/ids. The purpose of the logical broker id is to allow a broker to be moved to a different physical machine without affecting consumers. An attempt to register a broker id that is already in use (say because two servers are configured with the same broker id) results in an error.
+    </p>
+    <p>
+    Since the broker registers itself in ZooKeeper using ephemeral znodes, this registration is dynamic and will disappear if the broker is shut down or dies (thus notifying consumers that it is no longer available).
+    </p>
+    <h4><a id="impl_zktopic" href="#impl_zktopic">Broker Topic Registry</a></h4>
+    <pre class="brush: json;">
+    /brokers/topics/[topic]/partitions/[0...N]/state --> {"controller_epoch":...,"leader":...,"version":...,"leader_epoch":...,"isr":[...]} (ephemeral node)
+    </pre>
+
+    <p>
+    Each broker registers itself under the topics it maintains and stores the number of partitions for that topic.
+    </p>
+
+    <h4><a id="impl_zkconsumers" href="#impl_zkconsumers">Consumers and Consumer Groups</a></h4>
+    <p>
+    Consumers of topics also register themselves in ZooKeeper, in order to coordinate with each other and balance the consumption of data. Consumers can also store their offsets in ZooKeeper by setting <code>offsets.storage=zookeeper</code>. However, this offset storage mechanism will be deprecated in a future release. Therefore, it is recommended to <a href="#offsetmigration">migrate offsets storage to Kafka</a>.
+    </p>
+
+    <p>
+    Multiple consumers can form a group and jointly consume a single topic. Each consumer in the same group is given a shared group_id.
+    For example if one consumer is your foobar process, which is run across three machines, then you might assign this group of consumers the id "foobar". This group id is provided in the configuration of the consumer, and is your way to tell the consumer which group it belongs to.
+    </p>
+
+    <p>
+    The consumers in a group divide up the partitions as fairly as possible; each partition is consumed by exactly one consumer in a consumer group.
+    </p>
+
+    <h4><a id="impl_zkconsumerid" href="#impl_zkconsumerid">Consumer Id Registry</a></h4>
+    <p>
+    In addition to the group_id which is shared by all consumers in a group, each consumer is given a transient, unique consumer_id (of the form hostname:uuid) for identification purposes. Consumer ids are registered in the following directory.
+    <pre class="brush: json;">
+    /consumers/[group_id]/ids/[consumer_id] --> {"version":...,"subscription":{...:...},"pattern":...,"timestamp":...} (ephemeral node)
+    </pre>
+    Each of the consumers in the group registers under its group and creates a znode with its consumer_id. The value of the znode contains a map of &lt;topic, #streams&gt;. This id is simply used to identify each consumer that is currently active within the group. This is an ephemeral node so it will disappear if the consumer process dies.
+    </p>
+
+    <h4><a id="impl_zkconsumeroffsets" href="#impl_zkconsumeroffsets">Consumer Offsets</a></h4>
+    <p>
+    Consumers track the maximum offset they have consumed in each partition. This value is stored in a ZooKeeper directory if <code>offsets.storage=zookeeper</code>.
+    </p>
+    <pre class="brush: json;">
+    /consumers/[group_id]/offsets/[topic]/[partition_id] --> offset_counter_value (persistent node)
+    </pre>
+
+    <h4><a id="impl_zkowner" href="#impl_zkowner">Partition Owner registry</a></h4>
+
+    <p>
+    Each broker partition is consumed by a single consumer within a given consumer group. The consumer must establish its ownership of a given partition before any consumption can begin. To establish its ownership, a consumer writes its own id in an ephemeral node under the particular broker partition it is claiming.
+    </p>
+
+    <pre class="brush: json;">
+    /consumers/[group_id]/owners/[topic]/[partition_id] --> consumer_node_id (ephemeral node)
+    </pre>
+
+    <h4><a id="impl_clusterid" href="#impl_clusterid">Cluster Id</a></h4>
+
+    <p>
+        The cluster id is a unique and immutable identifier assigned to a Kafka cluster. The cluster id can have a maximum of 22 characters and the allowed characters are defined by the regular expression [a-zA-Z0-9_\-]+, which corresponds to the characters used by the URL-safe Base64 variant with no padding. Conceptually, it is auto-generated when a cluster is started for the first time.
+    </p>
+    <p>
+        Implementation-wise, it is generated when a broker with version 0.10.1 or later is successfully started for the first time. The broker tries to get the cluster id from the <code>/cluster/id</code> znode during startup. If the znode does not exist, the broker generates a new cluster id and creates the znode with this cluster id.
+    </p>
+
+    <h4><a id="impl_brokerregistration" href="#impl_brokerregistration">Broker node registration</a></h4>
+
+    <p>
+    The broker nodes are basically independent, so they only publish information about what they have. When a broker joins, it registers itself under the broker node registry directory and writes information about its host name and port. The broker also registers the list of existing topics and their logical partitions in the broker topic registry. New topics are registered dynamically when they are created on the broker.
+    </p>
+
+    <h4><a id="impl_consumerregistration" href="#impl_consumerregistration">Consumer registration algorithm</a></h4>
+
+    <p>
+    When a consumer starts, it does the following:
+    <ol>
+    <li> Register itself in the consumer id registry under its group.
+    </li>
+    <li> Register a watch on changes (new consumers joining or any existing consumers leaving) under the consumer id registry. (Each change triggers rebalancing among all consumers within the group to which the changed consumer belongs.)
+    </li>
+    <li> Register a watch on changes (new brokers joining or any existing brokers leaving) under the broker id registry. (Each change triggers rebalancing among all consumers in all consumer groups.) </li>
+    <li> If the consumer creates a message stream using a topic filter, it also registers a watch on changes (new topics being added) under the broker topic registry. (Each change will trigger re-evaluation of the available topics to determine which topics are allowed by the topic filter. A new allowed topic will trigger rebalancing among all consumers within the consumer group.)</li>
+    <li> Force itself to rebalance within its consumer group.
+    </li>
+    </ol>
+    </p>
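+
+    <p>
+    The high-level consumer performs these steps internally, so this is not something applications normally write. Purely to illustrate the znodes and watches involved, a ZooKeeper-level sketch of steps 1-3 might look like the following; the connect string, group id and consumer id are placeholders, and the parent znodes are assumed to already exist:
+    </p>
+    <pre class="brush: java;">
+    import java.util.List;
+    import org.apache.zookeeper.CreateMode;
+    import org.apache.zookeeper.ZooDefs;
+    import org.apache.zookeeper.ZooKeeper;
+
+    public class ConsumerRegistrationSketch {
+      public static void main(String[] args) throws Exception {
+        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
+        String group = "foobar";
+        String consumerId = "host1:uuid1";
+
+        // Step 1: register an ephemeral znode in the consumer id registry, so the
+        // registration disappears automatically if this process dies.
+        zk.create("/consumers/" + group + "/ids/" + consumerId,
+                  "{\"version\":1,\"subscription\":{\"my-topic\":1}}".getBytes("UTF-8"),
+                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+
+        // Steps 2 and 3: set child watches on the consumer id registry and the
+        // broker id registry; a change to either child list triggers a rebalance.
+        List&lt;String&gt; members = zk.getChildren("/consumers/" + group + "/ids", true);
+        List&lt;String&gt; brokers = zk.getChildren("/brokers/ids", true);
+        System.out.println("group members=" + members + ", brokers=" + brokers);
+      }
+    }
+    </pre>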
+
+    <h4><a id="impl_consumerrebalance" href="#impl_consumerrebalance">Consumer rebalancing algorithm</a></h4>
+    <p>
+    The consumer rebalancing algorithm allows all the consumers in a group to reach consensus on which consumer is consuming which partitions. Consumer rebalancing is triggered on each addition or removal of broker nodes as well as of other consumers within the same group. For a given topic and a given consumer group, broker partitions are divided evenly among the consumers within the group. A partition is always consumed by a single consumer. This design simplifies the implementation: had we allowed a partition to be concurrently consumed by multiple consumers, there would be contention on the partition and some kind of locking would be required. If there are more consumers than partitions, some consumers won't get any data at all. During rebalancing, we try to assign partitions to consumers in a way that reduces the number of broker nodes each consumer has to connect to.
+    </p>
+    <p>
+    Each consumer does the following during rebalancing:
+    </p>
+    <pre class="brush: text;">
+    1. For each topic T that C<sub>i</sub> subscribes to
+    2.   let P<sub>T</sub> be all partitions of topic T
+    3.   let C<sub>G</sub> be all consumers in the same group as C<sub>i</sub> that consume topic T
+    4.   sort P<sub>T</sub> (so partitions on the same broker are clustered together)
+    5.   sort C<sub>G</sub>
+    6.   let i be the index position of C<sub>i</sub> in C<sub>G</sub> and let N = size(P<sub>T</sub>)/size(C<sub>G</sub>)
+    7.   assign partitions from i*N to (i+1)*N - 1 to consumer C<sub>i</sub>
+    8.   remove current entries owned by C<sub>i</sub> from the partition owner registry
+    9.   add newly assigned partitions to the partition owner registry
+            (we may need to re-try this until the original partition owner releases its ownership)
+    </pre>
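+    <p>
+    A direct Java transcription of steps 4-7, for illustration only; the built-in range assignor additionally spreads any leftover partitions over the first consumers in the sorted list:
+    </p>
+    <pre class="brush: java;">
+    import java.util.ArrayList;
+    import java.util.Arrays;
+    import java.util.Collections;
+    import java.util.List;
+
+    public class RangeAssignmentSketch {
+      // Steps 4-7: sort P_T and C_G, then give consumer C_i the contiguous
+      // range of partitions [i*N, (i+1)*N - 1].
+      static List&lt;Integer&gt; assignedTo(String consumerId, List&lt;Integer&gt; partitions, List&lt;String&gt; consumers) {
+        List&lt;Integer&gt; pt = new ArrayList&lt;&gt;(partitions);
+        List&lt;String&gt; cg = new ArrayList&lt;&gt;(consumers);
+        Collections.sort(pt);                  // sort P_T
+        Collections.sort(cg);                  // sort C_G
+        int i = cg.indexOf(consumerId);        // position of C_i in C_G
+        int n = pt.size() / cg.size();         // N = size(P_T) / size(C_G)
+        // Leftover partitions (size(P_T) % size(C_G)) are ignored in this sketch.
+        return new ArrayList&lt;&gt;(pt.subList(i * n, Math.min((i + 1) * n, pt.size())));
+      }
+
+      public static void main(String[] args) {
+        List&lt;Integer&gt; partitions = Arrays.asList(0, 1, 2, 3, 4, 5);
+        List&lt;String&gt; consumers = Arrays.asList("consumer-a", "consumer-b", "consumer-c");
+        for (String c : consumers) {
+          System.out.println(c + " -> " + assignedTo(c, partitions, consumers));
+        }
+        // consumer-a -> [0, 1], consumer-b -> [2, 3], consumer-c -> [4, 5]
+      }
+    }
+    </pre>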
+    <p>
+    When rebalancing is triggered at one consumer, rebalancing should be triggered in the other consumers within the same group at about the same time.
+    </p>
+</script>
+
+<div class="p-implementation"></div>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/introduction.html
----------------------------------------------------------------------
diff --git a/10/introduction.html b/10/introduction.html
new file mode 100644
index 0000000..5b3bb4a
--- /dev/null
+++ b/10/introduction.html
@@ -0,0 +1,209 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="js/templateData.js" --></script>
+
+<script id="introduction-template" type="text/x-handlebars-template">
+  <h3> Apache Kafka&reg; is <i>a distributed streaming platform</i>. What exactly does that mean?</h3>
+  <p>We think of a streaming platform as having three key capabilities:</p>
+  <ol>
+    <li>It lets you publish and subscribe to streams of records. In this respect it is similar to a message queue or enterprise messaging system.
+    <li>It lets you store streams of records in a fault-tolerant way.
+    <li>It lets you process streams of records as they occur.
+  </ol>
+  <p>What is Kafka good for?</p>
+  <p>It gets used for two broad classes of application:</p>
+  <ol>
+    <li>Building real-time streaming data pipelines that reliably get data between systems or applications
+    <li>Building real-time streaming applications that transform or react to the streams of data
+  </ol>
+  <p>To understand how Kafka does these things, let's dive in and explore Kafka's capabilities from the bottom up.</p>
+  <p>First a few concepts:</p>
+  <ul>
+    <li>Kafka is run as a cluster on one or more servers.
+    <li>The Kafka cluster stores streams of <i>records</i> in categories called <i>topics</i>.
+    <li>Each record consists of a key, a value, and a timestamp.
+  </ul>
+  <p>Kafka has four core APIs:</p>
+  <div style="overflow: hidden;">
+      <ul style="float: left; width: 40%;">
+      <li>The <a href="/documentation.html#producerapi">Producer API</a> allows an application to publish a stream of records to one or more Kafka topics.
+      <li>The <a href="/documentation.html#consumerapi">Consumer API</a> allows an application to subscribe to one or more topics and process the stream of records produced to them.
+    <li>The <a href="/documentation/streams">Streams API</a> allows an application to act as a <i>stream processor</i>, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.
+    <li>The <a href="/documentation.html#connect">Connector API</a> allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.
+  </ul>
+      <img src="/{{version}}/images/kafka-apis.png" style="float: right; width: 50%;">
+      </div>
+  <p>
+  In Kafka the communication between the clients and the servers is done with a simple, high-performance, language-agnostic <a href="https://kafka.apache.org/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.</p>
+
+  <h4><a id="intro_topics" href="#intro_topics">Topics and Logs</a></h4>
+  <p>Let's first dive into the core abstraction Kafka provides for a stream of records&mdash;the topic.</p>
+  <p>A topic is a category or feed name to which records are published. Topics in Kafka are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe to the data written to it.</p>
+  <p> For each topic, the Kafka cluster maintains a partitioned log that looks like this: </p>
+  <img class="centered" src="/{{version}}/images/log_anatomy.png">
+
+  <p> Each partition is an ordered, immutable sequence of records that is continually appended to&mdash;a structured commit log. The records in the partitions are each assigned a sequential id number called the <i>offset</i> that uniquely identifies each record within the partition.
+  </p>
+  <p>
+  The Kafka cluster retains all published records&mdash;whether or not they have been consumed&mdash;using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka's performance is effectively constant with respect to data size so storing data for a long time is not a problem.
+  </p>
+  <img class="centered" src="/{{version}}/images/log_consumer.png" style="width:400px">
+  <p>
+  In fact, the only metadata retained on a per-consumer basis is the offset or position of that consumer in the log. This offset is controlled by the consumer: normally a consumer will advance its offset linearly as it reads records, but, in fact, since the position is controlled by the consumer it can consume records in any order it likes. For example a consumer can reset to an older offset to reprocess data from the past or skip ahead to the most recent record and start consuming from "now".
+  </p>
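+  <p>
+  With the Java consumer, taking control of the position looks roughly like this sketch; the bootstrap server, topic, partition and offset are placeholders:
+  </p>
+  <pre class="brush: java;">
+  import java.util.Collections;
+  import java.util.Properties;
+  import org.apache.kafka.clients.consumer.KafkaConsumer;
+  import org.apache.kafka.common.TopicPartition;
+
+  public class SeekSketch {
+    public static void main(String[] args) {
+      Properties props = new Properties();
+      props.put("bootstrap.servers", "localhost:9092");
+      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+
+      KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props);
+      TopicPartition tp = new TopicPartition("my-topic", 0);
+      consumer.assign(Collections.singletonList(tp));
+
+      consumer.seek(tp, 42L);                                       // rewind to an older offset
+      // consumer.seekToBeginning(Collections.singletonList(tp));   // or reprocess everything
+      // consumer.seekToEnd(Collections.singletonList(tp));         // or start consuming from "now"
+
+      System.out.println(consumer.poll(1000).count() + " records fetched after the seek");
+      consumer.close();
+    }
+  }
+  </pre>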
+  <p>
+  This combination of features means that Kafka consumers are very cheap&mdash;they can come and go without much impact on the cluster or on other consumers. For example, you can use our command line tools to "tail" the contents of any topic without changing what is consumed by any existing consumers.
+  </p>
+  <p>
+  The partitions in the log serve several purposes. First, they allow the log to scale beyond a size that will fit on a single server. Each individual partition must fit on the servers that host it, but a topic may have many partitions so it can handle an arbitrary amount of data. Second, they act as the unit of parallelism&mdash;more on that in a bit.
+  </p>
+
+  <h4><a id="intro_distribution" href="#intro_distribution">Distribution</a></h4>
+
+  <p>
+  The partitions of the log are distributed over the servers in the Kafka cluster with each server handling data and requests for a share of the partitions. Each partition is replicated across a configurable number of servers for fault tolerance.
+  </p>
+  <p>
+  Each partition has one server which acts as the "leader" and zero or more servers which act as "followers". The leader handles all read and write requests for the partition while the followers passively replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so load is well balanced within the cluster.
+  </p>
+
+  <h4><a id="intro_producers" href="#intro_producers">Producers</a></h4>
+  <p>
+  Producers publish data to the topics of their choice. The producer is responsible for choosing which record to assign to which partition within the topic. This can be done in a round-robin fashion simply to balance load or it can be done according to some semantic partition function (say based on some key in the record). More on the use of partitioning in a second!
+  </p>
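+  <p>
+  A minimal sketch of both options with the Java producer; the bootstrap server, topic name and key are placeholders:
+  </p>
+  <pre class="brush: java;">
+  import java.util.Properties;
+  import org.apache.kafka.clients.producer.KafkaProducer;
+  import org.apache.kafka.clients.producer.ProducerRecord;
+
+  public class ProducerSketch {
+    public static void main(String[] args) {
+      Properties props = new Properties();
+      props.put("bootstrap.servers", "localhost:9092");
+      props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+      props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+
+      KafkaProducer&lt;String, String&gt; producer = new KafkaProducer&lt;&gt;(props);
+
+      // No key: records are spread across partitions to balance load.
+      producer.send(new ProducerRecord&lt;&gt;("my-topic", "a value"));
+
+      // With a key: the default partitioner hashes the key, so every record
+      // for "user-42" lands in the same partition and stays in order.
+      producer.send(new ProducerRecord&lt;&gt;("my-topic", "user-42", "another value"));
+
+      producer.close();
+    }
+  }
+  </pre>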
+
+  <h4><a id="intro_consumers" href="#intro_consumers">Consumers</a></h4>
+
+  <p>
+  Consumers label themselves with a <i>consumer group</i> name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.
+  </p>
+  <p>
+  If all the consumer instances have the same consumer group, then the records will effectively be load balanced over the consumer instances.</p>
+  <p>
+  If all the consumer instances have different consumer groups, then each record will be broadcast to all the consumer processes.
+  </p>
+  <img class="centered" src="/{{version}}/images/consumer-groups.png">
+  <p>
+    A two server Kafka cluster hosting four partitions (P0-P3) with two consumer groups. Consumer group A has two consumer instances and group B has four.
+  </p>
+
+  <p>
+  More commonly, however, we have found that topics have a small number of consumer groups, one for each "logical subscriber". Each group is composed of many consumer instances for scalability and fault tolerance. This is nothing more than publish-subscribe semantics where the subscriber is a cluster of consumers instead of a single process.
+  </p>
+  <p>
+  The way consumption is implemented in Kafka is by dividing up the partitions in the log over the consumer instances so that each instance is the exclusive consumer of a "fair share" of partitions at any point in time. This process of maintaining membership in the group is handled by the Kafka protocol dynamically. If new instances join the group they will take over some partitions from other members of the group; if an instance dies, its partitions will be distributed to the remaining instances.
+  </p>
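+  <p>
+  A minimal sketch of one such group member using the Java consumer; the bootstrap server, group and topic names are placeholders. Starting several copies of this process with the same <code>group.id</code> splits the topic's partitions among them:
+  </p>
+  <pre class="brush: java;">
+  import java.util.Collections;
+  import java.util.Properties;
+  import org.apache.kafka.clients.consumer.ConsumerRecord;
+  import org.apache.kafka.clients.consumer.ConsumerRecords;
+  import org.apache.kafka.clients.consumer.KafkaConsumer;
+
+  public class GroupConsumerSketch {
+    public static void main(String[] args) {
+      Properties props = new Properties();
+      props.put("bootstrap.servers", "localhost:9092");
+      props.put("group.id", "logical-subscriber-a");
+      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+
+      KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props);
+      // All instances sharing this group.id divide the topic's partitions;
+      // Kafka rebalances them as instances join or leave.
+      consumer.subscribe(Collections.singletonList("my-topic"));
+
+      while (true) {
+        ConsumerRecords&lt;String, String&gt; records = consumer.poll(100);
+        for (ConsumerRecord&lt;String, String&gt; record : records) {
+          System.out.println(record.partition() + ":" + record.offset() + " " + record.value());
+        }
+      }
+    }
+  }
+  </pre>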
+  <p>
+  Kafka only provides a total order over records <i>within</i> a partition, not between different partitions in a topic. Per-partition ordering combined with the ability to partition data by key is sufficient for most applications. However, if you require a total order over records this can be achieved with a topic that has only one partition, though this will mean only one consumer process per consumer group.
+  </p>
+  <h4><a id="intro_guarantees" href="#intro_guarantees">Guarantees</a></h4>
+  <p>
+  At a high level, Kafka gives the following guarantees:
+  </p>
+  <ul>
+    <li>Messages sent by a producer to a particular topic partition will be appended in the order they are sent. That is, if a record M1 is sent by the same producer as a record M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the log.
+    <li>A consumer instance sees records in the order they are stored in the log.
+    <li>For a topic with replication factor N, we will tolerate up to N-1 server failures without losing any records committed to the log.
+  </ul>
+  <p>
+  More details on these guarantees are given in the design section of the documentation.
+  </p>
+  <h4><a id="kafka_mq" href="#kafka_mq">Kafka as a Messaging System</a></h4>
+  <p>
+  How does Kafka's notion of streams compare to a traditional enterprise messaging system?
+  </p>
+  <p>
+  Messaging traditionally has two models: <a href="http://en.wikipedia.org/wiki/Message_queue">queuing</a> and <a href="http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern">publish-subscribe</a>. In a queue, a pool of consumers may read from a server and each record goes to one of them; in publish-subscribe the record is broadcast to all consumers. Each of these two models has a strength and a weakness. The strength of queuing is that it allows you to divide up the processing of data over multiple consumer instances, which lets you scale your processing. Unfortunately, queues aren't multi-subscriber&mdash;once one process reads the data it's gone. Publish-subscribe allows you to broadcast data to multiple processes, but has no way of scaling processing since every message goes to every subscriber.
+  </p>
+  <p>
+  The consumer group concept in Kafka generalizes these two concepts. As with a queue the consumer group allows you to divide up processing over a collection of processes (the members of the consumer group). As with publish-subscribe, Kafka allows you to broadcast messages to multiple consumer groups.
+  </p>
+  <p>
+  The advantage of Kafka's model is that every topic has both these properties&mdash;it can scale processing and is also multi-subscriber&mdash;there is no need to choose one or the other.
+  </p>
+  <p>
+  Kafka has stronger ordering guarantees than a traditional messaging system, too.
+  </p>
+  <p>
+  A traditional queue retains records in-order on the server, and if multiple consumers consume from the queue then the server hands out records in the order they are stored. However, although the server hands out records in order, the records are delivered asynchronously to consumers, so they may arrive out of order on different consumers. This effectively means the ordering of the records is lost in the presence of parallel consumption. Messaging systems often work around this by having a notion of "exclusive consumer" that allows only one process to consume from a queue, but of course this means that there is no parallelism in processing.
+  </p>
+  <p>
+  Kafka does it better. By having a notion of parallelism&mdash;the partition&mdash;within the topics, Kafka is able to provide both ordering guarantees and load balancing over a pool of consumer processes. This is achieved by assigning the partitions in the topic to the consumers in the consumer group so that each partition is consumed by exactly one consumer in the group. By doing this we ensure that the consumer is the only reader of that partition and consumes the data in order. Since there are many partitions this still balances the load over many consumer instances. Note however that there cannot be more consumer instances in a consumer group than partitions.
+  </p>
+
+  <h4 id="kafka_storage">Kafka as a Storage System</h4>
+
+  <p>
+  Any message queue that allows publishing messages decoupled from consuming them is effectively acting as a storage system for the in-flight messages. What is different about Kafka is that it is a very good storage system.
+  </p>
+  <p>
+  Data written to Kafka is written to disk and replicated for fault-tolerance. Kafka allows producers to wait on acknowledgement so that a write isn't considered complete until it is fully replicated and guaranteed to persist even if the server written to fails.
+  </p>
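+  <p>
+  On the producer side, waiting for a fully replicated write corresponds roughly to setting <code>acks=all</code> and, if you want to block, waiting on the returned future; the bootstrap server, topic, key and value below are placeholders:
+  </p>
+  <pre class="brush: java;">
+  import java.util.Properties;
+  import org.apache.kafka.clients.producer.KafkaProducer;
+  import org.apache.kafka.clients.producer.ProducerRecord;
+  import org.apache.kafka.clients.producer.RecordMetadata;
+
+  public class DurableWriteSketch {
+    public static void main(String[] args) throws Exception {
+      Properties props = new Properties();
+      props.put("bootstrap.servers", "localhost:9092");
+      props.put("acks", "all");   // acknowledge only once the in-sync replicas have the write
+      props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+      props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+
+      KafkaProducer&lt;String, String&gt; producer = new KafkaProducer&lt;&gt;(props);
+      // get() blocks until the write is acknowledged as fully replicated.
+      RecordMetadata meta = producer.send(new ProducerRecord&lt;&gt;("my-topic", "key", "value")).get();
+      System.out.println("persisted at offset " + meta.offset());
+      producer.close();
+    }
+  }
+  </pre>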
+  <p>
+  The disk structures Kafka uses scale well&mdash;Kafka will perform the same whether you have 50 KB or 50 TB of persistent data on the server.
+  </p>
+  <p>
+  As a result of taking storage seriously and allowing the clients to control their read position, you can think of Kafka as a kind of special purpose distributed filesystem dedicated to high-performance, low-latency commit log storage, replication, and propagation.
+  </p>
+  <p>
+  For details about Kafka's commit log storage and replication design, please read <a href="https://kafka.apache.org/documentation/#design">this</a> page.
+  </p>
+  <h4>Kafka for Stream Processing</h4>
+  <p>
+  It isn't enough to just read, write, and store streams of data; the purpose is to enable real-time processing of streams.
+  </p>
+  <p>
+  In Kafka a stream processor is anything that takes continual streams of data from input topics, performs some processing on this input, and produces continual streams of data to output topics.
+  </p>
+  <p>
+  For example, a retail application might take in input streams of sales and shipments, and output a stream of reorders and price adjustments computed off this data.
+  </p>
+  <p>
+  It is possible to do simple processing directly using the producer and consumer APIs. However, for more complex transformations Kafka provides a fully integrated <a href="/documentation/streams">Streams API</a>. This allows building applications that do non-trivial processing, such as computing aggregations over streams or joining streams together.
+  </p>
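+  <p>
+  A minimal sketch of a Streams application that copies one topic to another while upper-casing the values; the application id, bootstrap server and topic names are placeholders:
+  </p>
+  <pre class="brush: java;">
+  import java.util.Properties;
+  import org.apache.kafka.common.serialization.Serdes;
+  import org.apache.kafka.streams.KafkaStreams;
+  import org.apache.kafka.streams.StreamsBuilder;
+  import org.apache.kafka.streams.StreamsConfig;
+  import org.apache.kafka.streams.kstream.KStream;
+
+  public class StreamsSketch {
+    public static void main(String[] args) {
+      Properties props = new Properties();
+      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-sketch");
+      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
+      props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+      props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
+
+      StreamsBuilder builder = new StreamsBuilder();
+      KStream&lt;String, String&gt; input = builder.stream("input-topic");
+      input.mapValues(value -> value.toUpperCase()).to("output-topic");
+
+      // The same group mechanism described earlier spreads the processing
+      // work across all running instances of this application.
+      new KafkaStreams(builder.build(), props).start();
+    }
+  }
+  </pre>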
+  <p>
+  This facility helps solve the hard problems this type of application faces: handling out-of-order data, reprocessing input as code changes, performing stateful computations, etc.
+  </p>
+  <p>
+  The streams API builds on the core primitives Kafka provides: it uses the producer and consumer APIs for input, uses Kafka for stateful storage, and uses the same group mechanism for fault tolerance among the stream processor instances.
+  </p>
+  <h4>Putting the Pieces Together</h4>
+  <p>
+  This combination of messaging, storage, and stream processing may seem unusual but it is essential to Kafka's role as a streaming platform.
+  </p>
+  <p>
+  A distributed file system like HDFS allows storing static files for batch processing. Effectively a system like this allows storing and processing <i>historical</i> data from the past.
+  </p>
+  <p>
+  A traditional enterprise messaging system allows processing future messages that will arrive after you subscribe. Applications built in this way process future data as it arrives.
+  </p>
+  <p>
+  Kafka combines both of these capabilities, and the combination is critical both for Kafka usage as a platform for streaming applications as well as for streaming data pipelines.
+  </p>
+  <p>
+  By combining storage and low-latency subscriptions, streaming applications can treat both past and future data the same way. That is, a single application can process historical, stored data, but rather than ending when it reaches the last record, it can keep processing as future data arrives. This is a generalized notion of stream processing that subsumes batch processing as well as message-driven applications.
+  </p>
+  <p>
+  Likewise, for streaming data pipelines, the combination of subscription to real-time events makes it possible to use Kafka for very low-latency pipelines; but the ability to store data reliably makes it possible to use it for critical data where the delivery of data must be guaranteed, or for integration with offline systems that load data only periodically or may go down for extended periods of time for maintenance. The stream processing facilities make it possible to transform data as it arrives.
+  </p>
+  <p>
+  For more information on the guarantees, APIs, and capabilities Kafka provides see the rest of the <a href="/documentation.html">documentation</a>.
+  </p>
+</script>
+
+<div class="p-introduction"></div>