Posted to commits@kafka.apache.org by ij...@apache.org on 2017/06/28 09:53:40 UTC

kafka-site git commit: Updates to config documentation from 0.11.0 branch

Repository: kafka-site
Updated Branches:
  refs/heads/asf-site 6699e22d4 -> 0f672e80e


Updates to config documentation from 0.11.0 branch


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/0f672e80
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/0f672e80
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/0f672e80

Branch: refs/heads/asf-site
Commit: 0f672e80e4085f91fd906a59511d004ff2f08a72
Parents: 6699e22
Author: Ismael Juma <is...@juma.me.uk>
Authored: Wed Jun 28 10:52:45 2017 +0100
Committer: Ismael Juma <is...@juma.me.uk>
Committed: Wed Jun 28 10:52:45 2017 +0100

----------------------------------------------------------------------
 0110/generated/consumer_config.html | 4 ++--
 0110/generated/kafka_config.html    | 6 +++---
 0110/generated/producer_config.html | 4 ++--
 0110/generated/streams_config.html  | 2 +-
 0110/generated/topic_config.html    | 6 +++---
 5 files changed, 11 insertions(+), 11 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/0f672e80/0110/generated/consumer_config.html
----------------------------------------------------------------------
diff --git a/0110/generated/consumer_config.html b/0110/generated/consumer_config.html
index 83e7b44..20cbce8 100644
--- a/0110/generated/consumer_config.html
+++ b/0110/generated/consumer_config.html
@@ -20,7 +20,7 @@
 <tr>
 <td>heartbeat.interval.ms</td><td>The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than <code>session.timeout.ms</code>, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.</td><td>int</td><td>3000</td><td></td><td>high</td></tr>
 <tr>
-<td>max.partition.fetch.bytes</td><td>The maximum amount of data per-partition the server will return. If the first message in the first non-empty partition of the fetch is larger than this limit, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). See fetch.max.bytes for limiting the consumer request size.</td><td>int</td><td>1048576</td><td>[0,...]</td><td>high</td></tr>
+<td>max.partition.fetch.bytes</td><td>The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). See fetch.max.bytes for limiting the consumer request size.</td><td>int</td><td>1048576</td><td>[0,...]</td><td>high</td></tr>
 <tr>
 <td>session.timeout.ms</td><td>The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this consumer from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by <code>group.min.session.timeout.ms</code> and <code>group.max.session.timeout.ms</code>.</td><td>int</td><td>10000</td><td></td><td>high</td></tr>
 <tr>
@@ -42,7 +42,7 @@
 <tr>
<td>exclude.internal.topics</td><td>Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to <code>true</code>, the only way to receive records from an internal topic is by subscribing to it.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
 <tr>
-<td>fetch.max.bytes</td><td>The maximum amount of data the server should return for a fetch request. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). Note that the consumer performs multiple fetches in parallel.</td><td>int</td><td>52428800</td><td>[0,...]</td><td>medium</td></tr>
+<td>fetch.max.bytes</td><td>The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). Note that the consumer performs multiple fetches in parallel.</td><td>int</td><td>52428800</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
 <td>isolation.level</td><td><p>Controls how to read messages written transactionally. If set to <code>read_committed</code>, consumer.poll() will only return transactional messages which have been committed. If set to <code>read_uncommitted</code> (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode.</p> <p>Messages will always be returned in offset order. Hence, in <code>read_committed</code> mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular, any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, <code>read_committed</code> consumers will not be able to read up to the high watermark when there are in-flight transactions.</p><p>Further, when in <code>read_committed</code> mode, the seekToEnd method will return the LSO.</p></td><td>string</td><td>read_uncommitted</td><td>[read_committed, read_uncommitted]</td><td>medium</td></tr>
 <tr>

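For reference, the consumer settings above go straight into the client's Properties. A minimal sketch in Java, assuming a hypothetical broker address, group id, and topic; the sizes shown are the defaults from the table:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ConsumerConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
            props.put("group.id", "example-group");           // hypothetical group id
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("max.partition.fetch.bytes", "1048576"); // per-partition soft limit (default)
            props.put("fetch.max.bytes", "52428800");          // whole-fetch soft limit (default)
            props.put("isolation.level", "read_committed");    // only read committed transactional records

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic")); // hypothetical topic
                ConsumerRecords<String, String> records = consumer.poll(1000);  // 0.11 API: timeout in ms
                System.out.println("fetched " + records.count() + " records");
            }
        }
    }
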
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/0f672e80/0110/generated/kafka_config.html
----------------------------------------------------------------------
diff --git a/0110/generated/kafka_config.html b/0110/generated/kafka_config.html
index 7046ca3..8ea1021 100644
--- a/0110/generated/kafka_config.html
+++ b/0110/generated/kafka_config.html
@@ -79,7 +79,7 @@ hostname of broker. If this is set, it will only bind to this address. If this i
 <tr>
 <td>log.segment.delete.delay.ms</td><td>The amount of time to wait before deleting a file from the filesystem</td><td>long</td><td>60000</td><td>[0,...]</td><td>high</td></tr>
 <tr>
-<td>message.max.bytes</td><td>The maximum message size that the server can receive. Note that this limit also applies to the total size of a compressed batch of messages (when compression is enabled). Additionally, in versions 0.11 and later, all messages are written as batches and this setting applies to the total size of the batch.</td><td>int</td><td>1000012</td><td>[0,...]</td><td>high</td></tr>
+<td>message.max.bytes</td><td><p>The largest record batch size allowed by Kafka. If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large.</p><p>In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches, and this limit only applies to a single record in that case.</p><p>This can be set per topic with the topic level <code>max.message.bytes</code> config.</p></td><td>int</td><td>1000012</td><td>[0,...]</td><td>high</td></tr>
 <tr>
 <td>min.insync.replicas</td><td>When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).<br>When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.</td><td>int</td><td>1</td><td>[1,...]</td><td>high</td></tr>
 <tr>
@@ -242,9 +242,9 @@ the port to listen and accept connections on</td><td>int</td><td>9092</td><td></
 <tr>
 <td>replica.fetch.backoff.ms</td><td>The amount of time to sleep when a fetch partition error occurs.</td><td>int</td><td>1000</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
-<td>replica.fetch.max.bytes</td><td>The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that progress can be made. The maximum message size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config).</td><td>int</td><td>1048576</td><td>[0,...]</td><td>medium</td></tr>
+<td>replica.fetch.max.bytes</td><td>The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum; if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config).</td><td>int</td><td>1048576</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
-<td>replica.fetch.response.max.bytes</td><td>Maximum bytes expected for the entire fetch response. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that progress can be made. The maximum message size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config).</td><td>int</td><td>10485760</td><td>[0,...]</td><td>medium</td></tr>
+<td>replica.fetch.response.max.bytes</td><td>Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config).</td><td>int</td><td>10485760</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
 <td>reserved.broker.max.id</td><td>Max number that can be used for a broker.id</td><td>int</td><td>1000</td><td>[0,...]</td><td>medium</td></tr>
 <tr>

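To make the interplay of the three batch-size settings above concrete, here is an illustrative Java sketch of the equivalent server.properties entries, expressed via java.util.Properties (the format the broker parses); the values are the documented defaults, not recommendations:

    import java.util.Properties;

    public class BrokerBatchSizeSketch {
        public static void main(String[] args) {
            Properties broker = new Properties();
            // Hard limit: the largest record batch the broker will accept at all.
            broker.setProperty("message.max.bytes", "1000012");
            // Soft limit per partition: a larger first batch is still returned so
            // that replication can make progress.
            broker.setProperty("replica.fetch.max.bytes", "1048576");
            // Soft limit for the entire replica fetch response.
            broker.setProperty("replica.fetch.response.max.bytes", "10485760");
            broker.list(System.out); // print the fragment
        }
    }
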
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/0f672e80/0110/generated/producer_config.html
----------------------------------------------------------------------
diff --git a/0110/generated/producer_config.html b/0110/generated/producer_config.html
index 215d1c0..819dfbc 100644
--- a/0110/generated/producer_config.html
+++ b/0110/generated/producer_config.html
@@ -42,7 +42,7 @@
 <tr>
 <td>max.block.ms</td><td>The configuration controls how long <code>KafkaProducer.send()</code> and <code>KafkaProducer.partitionsFor()</code> will block. These methods can be blocked either because the buffer is full or metadata is unavailable. Blocking in the user-supplied serializers or partitioner will not be counted against this timeout.</td><td>long</td><td>60000</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
-<td>max.request.size</td><td>The maximum size of a request in bytes. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.</td><td>int</td><td>1048576</td><td>[0,...]</td><td>medium</td></tr>
+<td>max.request.size</td><td>The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum record batch size. Note that the server has its own cap on record batch size which may be different from this.</td><td>int</td><td>1048576</td><td>[0,...]</td><td>medium</td></tr>
 <tr>
 <td>partitioner.class</td><td>Partitioner class that implements the <code>Partitioner</code> interface.</td><td>class</td><td>org.apache.kafka.clients.producer.internals.DefaultPartitioner</td><td></td><td>medium</td></tr>
 <tr>
@@ -112,5 +112,5 @@
 <tr>
 <td>transaction.timeout.ms</td><td>The maximum amount of time in ms that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction. If this value is larger than the max.transaction.timeout.ms setting in the broker, the request will fail with an `InvalidTransactionTimeout` error.</td><td>int</td><td>60000</td><td></td><td>low</td></tr>
 <tr>
-<td>transactional.id</td><td>The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. Note that enable.idempotence must be enabled if a TransactionalId is configured. The default is empty, which means transactions cannot be used.</td><td>string</td><td>null</td><td>org.apache.kafka.common.config.ConfigDef$NonEmptyString@6dffeaea</td><td>low</td></tr>
+<td>transactional.id</td><td>The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. Note that enable.idempotence must be enabled if a TransactionalId is configured. The default is empty, which means transactions cannot be used.</td><td>string</td><td>null</td><td>org.apache.kafka.common.config.ConfigDef$NonEmptyString@4fca772d</td><td>low</td></tr>
 </tbody></table>

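The transactional.id setting above pairs with the transactional producer API introduced in 0.11. A minimal sketch, assuming a hypothetical broker address, transactional id, and topic:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TransactionalProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("enable.idempotence", "true");          // required when transactional.id is set
            props.put("transactional.id", "example-txn-id");  // hypothetical id spanning producer sessions

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.initTransactions(); // fences any older session with the same id
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("example-topic", "key", "value")); // hypothetical topic
                producer.commitTransaction(); // or abortTransaction() on failure
            }
        }
    }
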
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/0f672e80/0110/generated/streams_config.html
----------------------------------------------------------------------
diff --git a/0110/generated/streams_config.html b/0110/generated/streams_config.html
index 53e229b..dad8110 100644
--- a/0110/generated/streams_config.html
+++ b/0110/generated/streams_config.html
@@ -80,5 +80,5 @@
 <tr>
 <td>windowstore.changelog.additional.retention.ms</td><td>Added to a window's maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day.</td><td>long</td><td>86400000</td><td></td><td>low</td></tr>
 <tr>
-<td>zookeeper.connect</td><td>Zookeeper connect string for Kafka topics management.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<td>zookeeper.connect</td><td>Zookeeper connect string for Kafka topics management. This config is deprecated and will be ignored, as the Streams API no longer uses Zookeeper.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
 </tbody></table>

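As the zookeeper.connect entry above notes, a 0.11 Streams application needs only bootstrap.servers. A minimal configuration sketch, assuming a hypothetical application id and broker address:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");       // hypothetical app id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical brokers
            // Note: no zookeeper.connect; it is deprecated and ignored in 0.11.
            new StreamsConfig(props); // validates the configuration
        }
    }
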
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/0f672e80/0110/generated/topic_config.html
----------------------------------------------------------------------
diff --git a/0110/generated/topic_config.html b/0110/generated/topic_config.html
index 1653db7..ebf03a1 100644
--- a/0110/generated/topic_config.html
+++ b/0110/generated/topic_config.html
@@ -21,13 +21,13 @@
 <tr>
 <td>flush.ms</td><td>This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example, if this was set to 1000, we would fsync after 1000 ms had passed. In general, we recommend not setting this and instead using replication for durability and allowing the operating system's background flush capabilities, as they are more efficient.</td><td>long</td><td>9223372036854775807</td><td>[0,...]</td><td>log.flush.interval.ms</td><td>medium</td></tr>
 <tr>
-<td>follower.replication.throttled.replicas</td><td>A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.</td><td>list</td><td>""</td><td>kafka.server.ThrottledReplicaListValidator$@10e76514</td><td>follower.replication.throttled.replicas</td><td>medium</td></tr>
+<td>follower.replication.throttled.replicas</td><td>A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.</td><td>list</td><td>""</td><td>kafka.server.ThrottledReplicaListValidator$@52feb982</td><td>follower.replication.throttled.replicas</td><td>medium</td></tr>
 <tr>
 <td>index.interval.bytes</td><td>This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this.</td><td>int</td><td>4096</td><td>[0,...]</td><td>log.index.interval.bytes</td><td>medium</td></tr>
 <tr>
-<td>leader.replication.throttled.replicas</td><td>A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.</td><td>list</td><td>""</td><td>kafka.server.ThrottledReplicaListValidator$@10e76514</td><td>leader.replication.throttled.replicas</td><td>medium</td></tr>
+<td>leader.replication.throttled.replicas</td><td>A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.</td><td>list</td><td>""</td><td>kafka.server.ThrottledReplicaListValidator$@52feb982</td><td>leader.replication.throttled.replicas</td><td>medium</td></tr>
 <tr>
-<td>max.message.bytes</td><td>This is largest message size Kafka will allow to be appended. Note that if you increase this size you must also increase your consumer's fetch size so they can fetch messages this large.</td><td>int</td><td>1000012</td><td>[0,...]</td><td>message.max.bytes</td><td>medium</td></tr>
+<td>max.message.bytes</td><td><p>The largest record batch size allowed by Kafka. If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large.</p><p>In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches, and this limit only applies to a single record in that case.</p></td><td>int</td><td>1000012</td><td>[0,...]</td><td>message.max.bytes</td><td>medium</td></tr>
 <tr>
 <td>message.format.version</td><td>Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0; check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.</td><td>string</td><td>0.11.0-IV2</td><td></td><td>log.message.format.version</td><td>medium</td></tr>
 <tr>
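
Topic overrides such as max.message.bytes above can also be supplied when creating a topic with the AdminClient added in 0.11. A minimal sketch, assuming a hypothetical topic name, partition and replica counts, override size, and broker address:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class TopicConfigSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address

            NewTopic topic = new NewTopic("example-topic", 3, (short) 2) // hypothetical name, partitions, replication
                    .configs(Collections.singletonMap("max.message.bytes", "2097152")); // example 2 MiB override

            try (AdminClient admin = AdminClient.create(props)) {
                admin.createTopics(Collections.singleton(topic)).all().get(); // block until created
            }
        }
    }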