Posted to commits@kafka.apache.org by gu...@apache.org on 2017/11/01 22:41:10 UTC

[48/51] [partial] kafka-site git commit: MINOR: Follow-up Update on 1.0.0 release

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/generated/admin_client_config.html
----------------------------------------------------------------------
diff --git a/10/generated/admin_client_config.html b/10/generated/admin_client_config.html
new file mode 100644
index 0000000..5ce83e2
--- /dev/null
+++ b/10/generated/admin_client_config.html
@@ -0,0 +1,86 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>bootstrap.servers</td><td>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</td><td>list</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.key.password</td><td>The password of the private key in the key store file. This is optional for the client.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.location</td><td>The location of the key store file. This is optional for the client and can be used for two-way authentication for the client.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.password</td><td>The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.location</td><td>The location of the trust store file. </td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.password</td><td>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>client.id</td><td>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</td><td>string</td><td>""</td><td></td><td>medium</td></tr>
+<tr>
+<td>connections.max.idle.ms</td><td>Close idle connections after the number of milliseconds specified by this config.</td><td>long</td><td>300000</td><td></td><td>medium</td></tr>
+<tr>
+<td>receive.buffer.bytes</td><td>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</td><td>int</td><td>65536</td><td>[-1,...]</td><td>medium</td></tr>
+<tr>
+<td>request.timeout.ms</td><td>This configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary, or fail the request if retries are exhausted.</td><td>int</td><td>120000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>sasl.jaas.config</td><td>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '&lt;loginModuleClass&gt; &lt;controlFlag&gt; (&lt;optionName&gt;=&lt;optionValue&gt;)*;'</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.mechanism</td><td>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</td><td>string</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
+<td>security.protocol</td><td>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<tr>
+<td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</td><td>int</td><td>131072</td><td>[-1,...]</td><td>medium</td></tr>
+<tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL connections.</td><td>list</td><td>TLSv1.2,TLSv1.1,TLSv1</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This is optional for the client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.provider</td><td>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>metadata.max.age.ms</td><td>The period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any partition leadership changes, in order to proactively discover any new brokers or partitions.</td><td>long</td><td>300000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>metric.reporters</td><td>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</td><td>list</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.num.samples</td><td>The number of samples maintained to compute metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metrics.recording.level</td><td>The highest recording level for metrics.</td><td>string</td><td>INFO</td><td>[INFO, DEBUG]</td><td>low</td></tr>
+<tr>
+<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is computed over.</td><td>long</td><td>30000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.max.ms</td><td>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</td><td>long</td><td>1000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.ms</td><td>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>retries</td><td>The maximum number of times to retry a call before failing it.</td><td>int</td><td>5</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to retry a failed request. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time between refresh attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter added to the renewal time.</td><td>double</td><td>0.05</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</td><td>double</td><td>0.8</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification algorithm used to validate the server hostname against the server certificate.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</td><td>string</td><td>SunX509</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.secure.random.implementation</td><td>The SecureRandom PRNG implementation to use for SSL cryptography operations. </td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</td><td>string</td><td>PKIX</td><td></td><td>low</td></tr>
+</tbody></table>
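
The keys above are exactly what the Java AdminClient accepts. A minimal sketch (the broker addresses and client id below are hypothetical placeholders, and the timeout/retry values simply restate the defaults from the table):

    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;

    public class AdminClientExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // hypothetical hosts
            props.put("client.id", "admin-example");   // appears in server-side request logs
            props.put("request.timeout.ms", "120000"); // default from the table above
            props.put("retries", "5");                 // default from the table above
            try (AdminClient admin = AdminClient.create(props)) {
                // List the topics visible to this client; names() returns a KafkaFuture.
                System.out.println(admin.listTopics().names().get());
            }
        }
    }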

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/generated/connect_config.html
----------------------------------------------------------------------
diff --git a/10/generated/connect_config.html b/10/generated/connect_config.html
new file mode 100644
index 0000000..bf124c1
--- /dev/null
+++ b/10/generated/connect_config.html
@@ -0,0 +1,145 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>config.storage.topic</td><td>The name of the Kafka topic where connector configurations are stored</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>group.id</td><td>A unique string that identifies the Connect cluster group this worker belongs to.</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>key.converter</td><td>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>offset.storage.topic</td><td>The name of the Kafka topic where connector offsets are stored</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>status.storage.topic</td><td>The name of the Kafka topic where connector and task status are stored</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>value.converter</td><td>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>internal.key.converter</td><td>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation.</td><td>class</td><td></td><td></td><td>low</td></tr>
+<tr>
+<td>internal.value.converter</td><td>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation.</td><td>class</td><td></td><td></td><td>low</td></tr>
+<tr>
+<td>bootstrap.servers</td><td>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</td><td>list</td><td>localhost:9092</td><td></td><td>high</td></tr>
+<tr>
+<td>heartbeat.interval.ms</td><td>The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than <code>session.timeout.ms</code>, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.</td><td>int</td><td>3000</td><td></td><td>high</td></tr>
+<tr>
+<td>rebalance.timeout.ms</td><td>The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures.</td><td>int</td><td>60000</td><td></td><td>high</td></tr>
+<tr>
+<td>session.timeout.ms</td><td>The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by <code>group.min.session.timeout.ms</code> and <code>group.max.session.timeout.ms</code>.</td><td>int</td><td>10000</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.key.password</td><td>The password of the private key in the key store file. This is optional for the client.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.location</td><td>The location of the key store file. This is optional for the client and can be used for two-way authentication for the client.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.password</td><td>The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.location</td><td>The location of the trust store file. </td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.password</td><td>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>connections.max.idle.ms</td><td>Close idle connections after the number of milliseconds specified by this config.</td><td>long</td><td>540000</td><td></td><td>medium</td></tr>
+<tr>
+<td>receive.buffer.bytes</td><td>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</td><td>int</td><td>32768</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>request.timeout.ms</td><td>This configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary, or fail the request if retries are exhausted.</td><td>int</td><td>40000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>sasl.jaas.config</td><td>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '&lt;loginModuleClass&gt; &lt;controlFlag&gt; (&lt;optionName&gt;=&lt;optionValue&gt;)*;'</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.mechanism</td><td>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</td><td>string</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
+<td>security.protocol</td><td>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<tr>
+<td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</td><td>int</td><td>131072</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL connections.</td><td>list</td><td>TLSv1.2,TLSv1.1,TLSv1</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This is optional for the client.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.provider</td><td>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>worker.sync.timeout.ms</td><td>When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining.</td><td>int</td><td>3000</td><td></td><td>medium</td></tr>
+<tr>
+<td>worker.unsync.backoff.ms</td><td>When the worker is out of sync with other workers and fails to catch up within <code>worker.sync.timeout.ms</code>, leave the Connect cluster for this long before rejoining.</td><td>int</td><td>300000</td><td></td><td>medium</td></tr>
+<tr>
+<td>access.control.allow.methods</td><td>Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>access.control.allow.origin</td><td>Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>client.id</td><td>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>config.storage.replication.factor</td><td>Replication factor used when creating the configuration storage topic</td><td>short</td><td>3</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metadata.max.age.ms</td><td>The period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any partition leadership changes, in order to proactively discover any new brokers or partitions.</td><td>long</td><td>300000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>metric.reporters</td><td>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</td><td>list</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.num.samples</td><td>The number of samples maintained to compute metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metrics.recording.level</td><td>The highest recording level for metrics.</td><td>string</td><td>INFO</td><td>[INFO, DEBUG]</td><td>low</td></tr>
+<tr>
+<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is computed over.</td><td>long</td><td>30000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>offset.flush.interval.ms</td><td>Interval at which to try committing offsets for tasks.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
+<tr>
+<td>offset.flush.timeout.ms</td><td>Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt.</td><td>long</td><td>5000</td><td></td><td>low</td></tr>
+<tr>
+<td>offset.storage.partitions</td><td>The number of partitions used when creating the offset storage topic</td><td>int</td><td>25</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>offset.storage.replication.factor</td><td>Replication factor used when creating the offset storage topic</td><td>short</td><td>3</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>plugin.path</td><td>List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top level directories that include any combination of: 
+a) directories immediately containing jars with plugins and their dependencies
+b) uber-jars with plugins and their dependencies
+c) directories immediately containing the package directory structure of classes of plugins and their dependencies
+Note: symlinks will be followed to discover dependencies or plugins.
+Examples: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors</td><td>list</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.max.ms</td><td>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</td><td>long</td><td>1000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.ms</td><td>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>rest.advertised.host.name</td><td>If set, this hostname will be given out to other workers to connect to.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>rest.advertised.port</td><td>If set, this port will be given out to other workers to connect to.</td><td>int</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>rest.host.name</td><td>Hostname for the REST API. If this is set, it will only bind to this interface.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>rest.port</td><td>Port for the REST API to listen on.</td><td>int</td><td>8083</td><td></td><td>low</td></tr>
+<tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time between refresh attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter added to the renewal time.</td><td>double</td><td>0.05</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</td><td>double</td><td>0.8</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification algorithm used to validate the server hostname against the server certificate.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</td><td>string</td><td>SunX509</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.secure.random.implementation</td><td>The SecureRandom PRNG implementation to use for SSL cryptography operations. </td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</td><td>string</td><td>PKIX</td><td></td><td>low</td></tr>
+<tr>
+<td>status.storage.partitions</td><td>The number of partitions used when creating the status storage topic</td><td>int</td><td>5</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>status.storage.replication.factor</td><td>Replication factor used when creating the status storage topic</td><td>short</td><td>3</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>task.shutdown.graceful.timeout.ms</td><td>Amount of time to wait for tasks to shut down gracefully. This is the total amount of time, not per task. All tasks have shutdown triggered; then they are waited on sequentially.</td><td>long</td><td>5000</td><td></td><td>low</td></tr>
+</tbody></table>
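
For reference, the settings above with no default (group.id, config.storage.topic, offset.storage.topic, status.storage.topic, plus the converters) are what a distributed worker minimally needs. A sketch in Java, assuming hypothetical topic and group names (in practice these entries live in a .properties file passed to bin/connect-distributed.sh):

    import java.util.Properties;

    public class WorkerConfigExample {
        // Minimal distributed-worker configuration; topic and group names are placeholders.
        public static Properties workerProps() {
            Properties p = new Properties();
            p.put("bootstrap.servers", "broker1:9092");        // hypothetical broker
            p.put("group.id", "connect-cluster");              // identifies the Connect cluster
            p.put("config.storage.topic", "connect-configs");  // required; no default
            p.put("offset.storage.topic", "connect-offsets");  // required; no default
            p.put("status.storage.topic", "connect-status");   // required; no default
            p.put("key.converter", "org.apache.kafka.connect.json.JsonConverter");
            p.put("value.converter", "org.apache.kafka.connect.json.JsonConverter");
            return p;
        }
    }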

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/generated/connect_metrics.html
----------------------------------------------------------------------
diff --git a/10/generated/connect_metrics.html b/10/generated/connect_metrics.html
new file mode 100644
index 0000000..e1c4fb3
--- /dev/null
+++ b/10/generated/connect_metrics.html
@@ -0,0 +1,158 @@
+<table class="data-table"><tbody>
+<tr>
+<td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.connect:type=connect-worker-metrics</td></tr>
+<tr>
+<th style="width: 90px"></th>
+<th>Attribute name</th>
+<th>Description</th>
+</tr>
+<tr>
+<td></td><td>connector-count</td><td>The number of connectors run in this worker.</td></tr>
+<tr>
+<td></td><td>connector-startup-attempts-total</td><td>The total number of connector startups that this worker has attempted.</td></tr>
+<tr>
+<td></td><td>connector-startup-failure-percentage</td><td>The average percentage of this worker's connector starts that failed.</td></tr>
+<tr>
+<td></td><td>connector-startup-failure-total</td><td>The total number of connector starts that failed.</td></tr>
+<tr>
+<td></td><td>connector-startup-success-percentage</td><td>The average percentage of this worker's connector starts that succeeded.</td></tr>
+<tr>
+<td></td><td>connector-startup-success-total</td><td>The total number of connector starts that succeeded.</td></tr>
+<tr>
+<td></td><td>task-count</td><td>The number of tasks run in this worker.</td></tr>
+<tr>
+<td></td><td>task-startup-attempts-total</td><td>The total number of task startups that this worker has attempted.</td></tr>
+<tr>
+<td></td><td>task-startup-failure-percentage</td><td>The average percentage of this worker's task starts that failed.</td></tr>
+<tr>
+<td></td><td>task-startup-failure-total</td><td>The total number of task starts that failed.</td></tr>
+<tr>
+<td></td><td>task-startup-success-percentage</td><td>The average percentage of this worker's task starts that succeeded.</td></tr>
+<tr>
+<td></td><td>task-startup-success-total</td><td>The total number of task starts that succeeded.</td></tr>
+<tr>
+<td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.connect:type=connect-worker-rebalance-metrics</td></tr>
+<tr>
+<th style="width: 90px"></th>
+<th>Attribute name</th>
+<th>Description</th>
+</tr>
+<tr>
+<td></td><td>completed-rebalances-total</td><td>The total number of rebalances completed by this worker.</td></tr>
+<tr>
+<td></td><td>epoch</td><td>The epoch or generation number of this worker.</td></tr>
+<tr>
+<td></td><td>leader-name</td><td>The name of the group leader.</td></tr>
+<tr>
+<td></td><td>rebalance-avg-time-ms</td><td>The average time in milliseconds spent by this worker to rebalance.</td></tr>
+<tr>
+<td></td><td>rebalance-max-time-ms</td><td>The maximum time in milliseconds spent by this worker to rebalance.</td></tr>
+<tr>
+<td></td><td>rebalancing</td><td>Whether this worker is currently rebalancing.</td></tr>
+<tr>
+<td></td><td>time-since-last-rebalance-ms</td><td>The time in milliseconds since this worker completed the most recent rebalance.</td></tr>
+<tr>
+<td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.connect:type=connector-metrics,connector="{connector}"</td></tr>
+<tr>
+<th style="width: 90px"></th>
+<th>Attribute name</th>
+<th>Description</th>
+</tr>
+<tr>
+<td></td><td>connector-class</td><td>The name of the connector class.</td></tr>
+<tr>
+<td></td><td>connector-type</td><td>The type of the connector. One of 'source' or 'sink'.</td></tr>
+<tr>
+<td></td><td>connector-version</td><td>The version of the connector class, as reported by the connector.</td></tr>
+<tr>
+<td></td><td>status</td><td>The status of the connector. One of 'unassigned', 'running', 'paused', 'failed', or 'destroyed'.</td></tr>
+<tr>
+<td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}"</td></tr>
+<tr>
+<th style="width: 90px"></th>
+<th>Attribute name</th>
+<th>Description</th>
+</tr>
+<tr>
+<td></td><td>batch-size-avg</td><td>The average size of the batches processed by the connector.</td></tr>
+<tr>
+<td></td><td>batch-size-max</td><td>The maximum size of the batches processed by the connector.</td></tr>
+<tr>
+<td></td><td>offset-commit-avg-time-ms</td><td>The average time in milliseconds taken by this task to commit offsets.</td></tr>
+<tr>
+<td></td><td>offset-commit-failure-percentage</td><td>The average percentage of this task's offset commit attempts that failed.</td></tr>
+<tr>
+<td></td><td>offset-commit-max-time-ms</td><td>The maximum time in milliseconds taken by this task to commit offsets.</td></tr>
+<tr>
+<td></td><td>offset-commit-success-percentage</td><td>The average percentage of this task's offset commit attempts that succeeded.</td></tr>
+<tr>
+<td></td><td>pause-ratio</td><td>The fraction of time this task has spent in the pause state.</td></tr>
+<tr>
+<td></td><td>running-ratio</td><td>The fraction of time this task has spent in the running state.</td></tr>
+<tr>
+<td></td><td>status</td><td>The status of the connector task. One of 'unassigned', 'running', 'paused', 'failed', or 'destroyed'.</td></tr>
+<tr>
+<td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"</td></tr>
+<tr>
+<th style="width: 90px"></th>
+<th>Attribute name</th>
+<th>Description</th>
+</tr>
+<tr>
+<td></td><td>offset-commit-completion-rate</td><td>The average per-second number of offset commits that completed successfully.</td></tr>
+<tr>
+<td></td><td>offset-commit-completion-total</td><td>The total number of offset commits that completed successfully.</td></tr>
+<tr>
+<td></td><td>offset-commit-seq-no</td><td>The current sequence number for offset commits.</td></tr>
+<tr>
+<td></td><td>offset-commit-skip-rate</td><td>The average per-second number of offset commit completions that were received too late and skipped/ignored.</td></tr>
+<tr>
+<td></td><td>offset-commit-skip-total</td><td>The total number of offset commit completions that were received too late and skipped/ignored.</td></tr>
+<tr>
+<td></td><td>partition-count</td><td>The number of topic partitions assigned to this task belonging to the named sink connector in this worker.</td></tr>
+<tr>
+<td></td><td>put-batch-avg-time-ms</td><td>The average time taken by this task to put a batch of sink records.</td></tr>
+<tr>
+<td></td><td>put-batch-max-time-ms</td><td>The maximum time taken by this task to put a batch of sink records.</td></tr>
+<tr>
+<td></td><td>sink-record-active-count</td><td>The number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task.</td></tr>
+<tr>
+<td></td><td>sink-record-active-count-avg</td><td>The average number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task.</td></tr>
+<tr>
+<td></td><td>sink-record-active-count-max</td><td>The maximum number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task.</td></tr>
+<tr>
+<td></td><td>sink-record-lag-max</td><td>The maximum lag, in number of records, by which the sink task is behind the consumer's position, across all topic partitions.</td></tr>
+<tr>
+<td></td><td>sink-record-read-rate</td><td>The average per-second number of records read from Kafka for this task belonging to the named sink connector in this worker. This is before transformations are applied.</td></tr>
+<tr>
+<td></td><td>sink-record-read-total</td><td>The total number of records read from Kafka by this task belonging to the named sink connector in this worker, since the task was last restarted.</td></tr>
+<tr>
+<td></td><td>sink-record-send-rate</td><td>The average per-second number of records output from the transformations and sent/put to this task belonging to the named sink connector in this worker. This is after transformations are applied and excludes any records filtered out by the transformations.</td></tr>
+<tr>
+<td></td><td>sink-record-send-total</td><td>The total number of records output from the transformations and sent/put to this task belonging to the named sink connector in this worker, since the task was last restarted.</td></tr>
+<tr>
+<td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"</td></tr>
+<tr>
+<th style="width: 90px"></th>
+<th>Attribute name</th>
+<th>Description</th>
+</tr>
+<tr>
+<td></td><td>poll-batch-avg-time-ms</td><td>The average time in milliseconds taken by this task to poll for a batch of source records.</td></tr>
+<tr>
+<td></td><td>poll-batch-max-time-ms</td><td>The maximum time in milliseconds taken by this task to poll for a batch of source records.</td></tr>
+<tr>
+<td></td><td>source-record-active-count</td><td>The number of records that have been produced by this task but not yet completely written to Kafka.</td></tr>
+<tr>
+<td></td><td>source-record-active-count-avg</td><td>The average number of records that have been produced by this task but not yet completely written to Kafka.</td></tr>
+<tr>
+<td></td><td>source-record-active-count-max</td><td>The maximum number of records that have been produced by this task but not yet completely written to Kafka.</td></tr>
+<tr>
+<td></td><td>source-record-poll-rate</td><td>The average per-second number of records produced/polled (before transformation) by this task belonging to the named source connector in this worker.</td></tr>
+<tr>
+<td></td><td>source-record-poll-total</td><td>The total number of records produced/polled (before transformation) by this task belonging to the named source connector in this worker.</td></tr>
+<tr>
+<td></td><td>source-record-write-rate</td><td>The average per-second number of records output from the transformations and written to Kafka for this task belonging to the named source connector in this worker. This is after transformations are applied and excludes any records filtered out by the transformations.</td></tr>
+<tr>
+<td></td><td>source-record-write-total</td><td>The total number of records output from the transformations and written to Kafka for this task belonging to the named source connector in this worker, since the task was last restarted.</td></tr>
+</tbody></table>
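
All of the above are plain JMX attributes, so they can be read with the standard javax.management API. A minimal sketch that polls two worker-level attributes from inside the worker JVM (a remote reader would go through a JMXConnector instead):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class ConnectMetricsExample {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // MBean name taken from the first table above.
            ObjectName worker = new ObjectName("kafka.connect:type=connect-worker-metrics");
            System.out.println("connectors: " + server.getAttribute(worker, "connector-count"));
            System.out.println("tasks:      " + server.getAttribute(worker, "task-count"));
        }
    }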

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/generated/connect_transforms.html
----------------------------------------------------------------------
diff --git a/10/generated/connect_transforms.html b/10/generated/connect_transforms.html
new file mode 100644
index 0000000..b56232a
--- /dev/null
+++ b/10/generated/connect_transforms.html
@@ -0,0 +1,228 @@
+<div id="org.apache.kafka.connect.transforms.InsertField">
+<h5>org.apache.kafka.connect.transforms.InsertField</h5>
+Insert field(s) using attributes from the record metadata or a configured static value.<p/>Use the concrete transformation type designed for the record key (<code>org.apache.kafka.connect.transforms.InsertField$Key</code>) or value (<code>org.apache.kafka.connect.transforms.InsertField$Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>offset.field</td><td>Field name for Kafka offset - only applicable to sink connectors.<br/>Suffix with <code>!</code> to make this a required field, or <code>?</code> to keep it optional (the default).</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>partition.field</td><td>Field name for Kafka partition. Suffix with <code>!</code> to make this a required field, or <code>?</code> to keep it optional (the default).</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>static.field</td><td>Field name for static data field. Suffix with <code>!</code> to make this a required field, or <code>?</code> to keep it optional (the default).</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>static.value</td><td>Static field value, if field name configured.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>timestamp.field</td><td>Field name for record timestamp. Suffix with <code>!</code> to make this a required field, or <code>?</code> to keep it optional (the default).</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>topic.field</td><td>Field name for Kafka topic. Suffix with <code>!</code> to make this a required field, or <code>?</code> to keep it optional (the default).</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+</tbody></table>
+</div>
+<div id="org.apache.kafka.connect.transforms.ReplaceField">
+<h5>org.apache.kafka.connect.transforms.ReplaceField</h5>
+Filter or rename fields.<p/>Use the concrete transformation type designed for the record key (<code>org.apache.kafka.connect.transforms.ReplaceField$Key</code>) or value (<code>org.apache.kafka.connect.transforms.ReplaceField$Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>blacklist</td><td>Fields to exclude. This takes precedence over the whitelist.</td><td>list</td><td>""</td><td></td><td>medium</td></tr>
+<tr>
+<td>renames</td><td>Field rename mappings.</td><td>list</td><td>""</td><td>list of colon-delimited pairs, e.g. <code>foo:bar,abc:xyz</code></td><td>medium</td></tr>
+<tr>
+<td>whitelist</td><td>Fields to include. If specified, only these fields will be used.</td><td>list</td><td>""</td><td></td><td>medium</td></tr>
+</tbody></table>
+</div>
+<div id="org.apache.kafka.connect.transforms.MaskField">
+<h5>org.apache.kafka.connect.transforms.MaskField</h5>
+Mask specified fields with a valid null value for the field type (i.e. 0, false, empty string, and so on).<p/>Use the concrete transformation type designed for the record key (<code>org.apache.kafka.connect.transforms.MaskField$Key</code>) or value (<code>org.apache.kafka.connect.transforms.MaskField$Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>fields</td><td>Names of fields to mask.</td><td>list</td><td></td><td>non-empty list</td><td>high</td></tr>
+</tbody></table>
+</div>
+<div id="org.apache.kafka.connect.transforms.ValueToKey">
+<h5>org.apache.kafka.connect.transforms.ValueToKey</h5>
+Replace the record key with a new key formed from a subset of fields in the record value.
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>fields</td><td>Field names on the record value to extract as the record key.</td><td>list</td><td></td><td>non-empty list</td><td>high</td></tr>
+</tbody></table>
+</div>
+<div id="org.apache.kafka.connect.transforms.HoistField">
+<h5>org.apache.kafka.connect.transforms.HoistField</h5>
+Wrap data using the specified field name in a Struct when schema present, or a Map in the case of schemaless data.<p/>Use the concrete transformation type designed for the record key (<code>org.apache.kafka.connect.transforms.HoistField$Key</code>) or value (<code>org.apache.kafka.connect.transforms.HoistField$Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>field</td><td>Field name for the single field that will be created in the resulting Struct or Map.</td><td>string</td><td></td><td></td><td>medium</td></tr>
+</tbody></table>
+</div>
+<div id="org.apache.kafka.connect.transforms.ExtractField">
+<h5>org.apache.kafka.connect.transforms.ExtractField</h5>
+Extract the specified field from a Struct when schema present, or a Map in the case of schemaless data. Any null values are passed through unmodified.<p/>Use the concrete transformation type designed for the record key (<code>org.apache.kafka.connect.transforms.ExtractField$Key</code>) or value (<code>org.apache.kafka.connect.transforms.ExtractField$Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>field</td><td>Field name to extract.</td><td>string</td><td></td><td></td><td>medium</td></tr>
+</tbody></table>
+</div>
+<div id="org.apache.kafka.connect.transforms.SetSchemaMetadata">
+<h5>org.apache.kafka.connect.transforms.SetSchemaMetadata</h5>
+Set the schema name, version or both on the record's key (<code>org.apache.kafka.connect.transforms.SetSchemaMetadata$Key</code>) or value (<code>org.apache.kafka.connect.transforms.SetSchemaMetadata$Value</code>) schema.
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>schema.name</td><td>Schema name to set.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>schema.version</td><td>Schema version to set.</td><td>int</td><td>null</td><td></td><td>high</td></tr>
+</tbody></table>
+</div>
+<div id="org.apache.kafka.connect.transforms.TimestampRouter">
+<h5>org.apache.kafka.connect.transforms.TimestampRouter</h5>
+Update the record's topic field as a function of the original topic value and the record timestamp.<p/>This is mainly useful for sink connectors, since the topic field is often used to determine the equivalent entity name in the destination system (e.g. database table or search index name).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>timestamp.format</td><td>Format string for the timestamp that is compatible with <code>java.text.SimpleDateFormat</code>.</td><td>string</td><td>yyyyMMdd</td><td></td><td>high</td></tr>
+<tr>
+<td>topic.format</td><td>Format string which can contain <code>${topic}</code> and <code>${timestamp}</code> as placeholders for the topic and timestamp, respectively.</td><td>string</td><td>${topic}-${timestamp}</td><td></td><td>high</td></tr>
+</tbody></table>
+</div>
+<div id="org.apache.kafka.connect.transforms.RegexRouter">
+<h5>org.apache.kafka.connect.transforms.RegexRouter</h5>
+Update the record topic using the configured regular expression and replacement string.<p/>Under the hood, the regex is compiled to a <code>java.util.regex.Pattern</code>. If the pattern matches the input topic, <code>java.util.regex.Matcher#replaceFirst()</code> is used with the replacement string to obtain the new topic.
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>regex</td><td>Regular expression to use for matching.</td><td>string</td><td></td><td>valid regex</td><td>high</td></tr>
+<tr>
+<td>replacement</td><td>Replacement string.</td><td>string</td><td></td><td></td><td>high</td></tr>
+</tbody></table>
+</div>
+<div id="org.apache.kafka.connect.transforms.Flatten">
+<h5>org.apache.kafka.connect.transforms.Flatten</h5>
+Flatten a nested data structure, generating names for each field by concatenating the field names at each level with a configurable delimiter character. Applies to Struct when schema present, or a Map in the case of schemaless data. The default delimiter is '.'.<p/>Use the concrete transformation type designed for the record key (<code>org.apache.kafka.connect.transforms.Flatten$Key</code>) or value (<code>org.apache.kafka.connect.transforms.Flatten$Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>delimiter</td><td>Delimiter to insert between field names from the input record when generating field names for the output record</td><td>string</td><td>.</td><td></td><td>medium</td></tr>
+</tbody></table>
+</div>
+<div id="org.apache.kafka.connect.transforms.Cast">
+<h5>org.apache.kafka.connect.transforms.Cast</h5>
+Cast fields or the entire key or value to a specific type, e.g. to force an integer field to a smaller width. Only simple primitive types are supported -- integers, floats, boolean, and string. <p/>Use the concrete transformation type designed for the record key (<code>org.apache.kafka.connect.transforms.Cast$Key</code>) or value (<code>org.apache.kafka.connect.transforms.Cast$Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>spec</td><td>List of fields and the type to cast them to, of the form <code>field1:type,field2:type</code>, to cast fields of Maps or Structs; or a single type to cast the entire value. Valid types are int8, int16, int32, int64, float32, float64, boolean, and string.</td><td>list</td><td></td><td>list of colon-delimited pairs, e.g. <code>foo:bar,abc:xyz</code></td><td>high</td></tr>
+</tbody></table>
+</div>
+<div id="org.apache.kafka.connect.transforms.TimestampConverter">
+<h5>org.apache.kafka.connect.transforms.TimestampConverter</h5>
+Convert timestamps between different formats such as Unix epoch, strings, and Connect Date/Timestamp types. Applies to individual fields or to the entire value.<p/>Use the concrete transformation type designed for the record key (<code>org.apache.kafka.connect.transforms.TimestampConverter$Key</code>) or value (<code>org.apache.kafka.connect.transforms.TimestampConverter$Value</code>).
+<p/>
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>target.type</td><td>The desired timestamp representation: string, unix, Date, Time, or Timestamp</td><td>string</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>field</td><td>The field containing the timestamp, or empty if the entire value is a timestamp</td><td>string</td><td>""</td><td></td><td>high</td></tr>
+<tr>
+<td>format</td><td>A SimpleDateFormat-compatible format for the timestamp. Used to generate the output when type=string or used to parse the input if the input is a string.</td><td>string</td><td>""</td><td></td><td>medium</td></tr>
+</tbody></table>
+</div>
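
Transformations are wired into a connector through the connector's own configuration: a transforms list of aliases, plus transforms.<alias>.* keys matching the tables above. A sketch of such a config as a Java map (connector name, class, and topic are hypothetical; in practice this map is the JSON body POSTed to the Connect REST API):

    import java.util.HashMap;
    import java.util.Map;

    public class TransformChainExample {
        public static Map<String, String> connectorConfig() {
            Map<String, String> config = new HashMap<>();
            config.put("name", "example-sink");                       // hypothetical
            config.put("connector.class", "org.example.ExampleSink"); // hypothetical
            config.put("topics", "events");                           // hypothetical
            // One alias per transformation, applied in order.
            config.put("transforms", "route");
            config.put("transforms.route.type",
                    "org.apache.kafka.connect.transforms.TimestampRouter");
            config.put("transforms.route.topic.format", "${topic}-${timestamp}");
            config.put("transforms.route.timestamp.format", "yyyyMMdd");
            return config;
        }
    }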

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/generated/consumer_config.html
----------------------------------------------------------------------
diff --git a/10/generated/consumer_config.html b/10/generated/consumer_config.html
new file mode 100644
index 0000000..b5b63f8
--- /dev/null
+++ b/10/generated/consumer_config.html
@@ -0,0 +1,122 @@
+<table class="data-table"><tbody>
+<tr>
+<th>Name</th>
+<th>Description</th>
+<th>Type</th>
+<th>Default</th>
+<th>Valid Values</th>
+<th>Importance</th>
+</tr>
+<tr>
+<td>bootstrap.servers</td><td>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</td><td>list</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>key.deserializer</td><td>Deserializer class for key that implements the <code>org.apache.kafka.common.serialization.Deserializer</code> interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>value.deserializer</td><td>Deserializer class for value that implements the <code>org.apache.kafka.common.serialization.Deserializer</code> interface.</td><td>class</td><td></td><td></td><td>high</td></tr>
+<tr>
+<td>fetch.min.bytes</td><td>The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request will wait for that much data to accumulate before answering. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate, which can improve server throughput a bit at the cost of some additional latency.</td><td>int</td><td>1</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>group.id</td><td>A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using <code>subscribe(topic)</code> or the Kafka-based offset management strategy.</td><td>string</td><td>""</td><td></td><td>high</td></tr>
+<tr>
+<td>heartbeat.interval.ms</td><td>The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than <code>session.timeout.ms</code>, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.</td><td>int</td><td>3000</td><td></td><td>high</td></tr>
+<tr>
+<td>max.partition.fetch.bytes</td><td>The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). See fetch.max.bytes for limiting the consumer request size.</td><td>int</td><td>1048576</td><td>[0,...]</td><td>high</td></tr>
+<tr>
+<td>session.timeout.ms</td><td>The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this consumer from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by <code>group.min.session.timeout.ms</code> and <code>group.max.session.timeout.ms</code>.</td><td>int</td><td>10000</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.key.password</td><td>The password of the private key in the key store file. This is optional for the client.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.location</td><td>The location of the key store file. This is optional for the client and can be used for two-way authentication for the client.</td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.keystore.password</td><td>The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. </td><td>password</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.location</td><td>The location of the trust store file. </td><td>string</td><td>null</td><td></td><td>high</td></tr>
+<tr>
+<td>ssl.truststore.password</td><td>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</td><td>password</td><td>null</td><td></td><td>high</td></tr>
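+<tr>
+<td colspan=6>A sketch of the SSL settings above for an encrypted connection, with the key store lines only needed when the broker requires two-way authentication (all paths and passwords are placeholders):<pre>
+props.put("security.protocol", "SSL");
+props.put("ssl.truststore.location", "/path/to/client.truststore.jks"); // placeholder path
+props.put("ssl.truststore.password", "truststore-password");            // placeholder
+// Only when the broker requires client authentication:
+props.put("ssl.keystore.location", "/path/to/client.keystore.jks");     // placeholder path
+props.put("ssl.keystore.password", "keystore-password");                // placeholder
+props.put("ssl.key.password", "key-password");                          // placeholder
+</pre></td></tr>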
+<tr>
+<td>auto.offset.reset</td><td>What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted): <ul><li>earliest: automatically reset the offset to the earliest offset</li><li>latest: automatically reset the offset to the latest offset</li><li>none: throw exception to the consumer if no previous offset is found for the consumer's group</li><li>anything else: throw exception to the consumer.</li></ul></td><td>string</td><td>latest</td><td>[latest, earliest, none]</td><td>medium</td></tr>
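+<tr>
+<td colspan=6>For example, a consumer that should start from the beginning of each partition whenever its group has no committed offset can be sketched as:<pre>
+props.put("auto.offset.reset", "earliest"); // instead of the default "latest"
+</pre></td></tr>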
+<tr>
+<td>connections.max.idle.ms</td><td>Close idle connections after the number of milliseconds specified by this config.</td><td>long</td><td>540000</td><td></td><td>medium</td></tr>
+<tr>
+<td>enable.auto.commit</td><td>If true the consumer's offset will be periodically committed in the background.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
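+<tr>
+<td colspan=6>A sketch of disabling auto-commit and committing manually once records have been processed, continuing the configuration sketches above (<code>process()</code> stands in for application logic, and the topic name is a placeholder):<pre>
+props.put("enable.auto.commit", "false");
+KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props);
+consumer.subscribe(java.util.Collections.singletonList("example-topic"));
+while (true) {
+    ConsumerRecords&lt;String, String&gt; records = consumer.poll(100);
+    for (ConsumerRecord&lt;String, String&gt; record : records) {
+        process(record); // placeholder for application logic
+    }
+    consumer.commitSync(); // commit only after processing succeeded
+}
+</pre></td></tr>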
+<tr>
+<td>exclude.internal.topics</td><td>Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to <code>true</code>, the only way to receive records from an internal topic is to subscribe to it.</td><td>boolean</td><td>true</td><td></td><td>medium</td></tr>
+<tr>
+<td>fetch.max.bytes</td><td>The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). Note that the consumer performs multiple fetches in parallel.</td><td>int</td><td>52428800</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>isolation.level</td><td><p>Controls how to read messages written transactionally. If set to <code>read_committed</code>, consumer.poll() will only return transactional messages which have been committed. If set to <code>read_uncommitted</code> (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode.</p> <p>Messages will always be returned in offset order. Hence, in <code>read_committed</code> mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular, any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, <code>read_committed</code> consumers will not be able to read up to the high watermark when there are in-flight transactions.</p> <p>Further, when in <code>read_committed</code> mode, the seekToEnd method will return the LSO.</p></td><td>string</td><td>read_uncommitted</td><td>[read_committed, read_uncommitted]</td><td>medium</td></tr>
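+<tr>
+<td colspan=6>For example, a consumer of a transactional application that must only see committed messages can be sketched as:<pre>
+props.put("isolation.level", "read_committed"); // aborted transactional messages are filtered out
+</pre></td></tr>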
+<tr>
+<td>max.poll.interval.ms</td><td>The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. </td><td>int</td><td>300000</td><td>[1,...]</td><td>medium</td></tr>
+<tr>
+<td>max.poll.records</td><td>The maximum number of records returned in a single call to poll().</td><td>int</td><td>500</td><td>[1,...]</td><td>medium</td></tr>
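+<tr>
+<td colspan=6>A sketch of bounding the work done per poll so that processing reliably finishes within max.poll.interval.ms (the values are illustrative):<pre>
+// With at most 100 records per poll and about one second of work per
+// record, each loop iteration stays well below the 300 s poll interval.
+props.put("max.poll.records", "100");
+props.put("max.poll.interval.ms", "300000");
+</pre></td></tr>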
+<tr>
+<td>partition.assignment.strategy</td><td>The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used.</td><td>list</td><td>org.apache.kafka.clients.consumer.RangeAssignor</td><td></td><td>medium</td></tr>
+<tr>
+<td>receive.buffer.bytes</td><td>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</td><td>int</td><td>65536</td><td>[-1,...]</td><td>medium</td></tr>
+<tr>
+<td>request.timeout.ms</td><td>This configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted.</td><td>int</td><td>305000</td><td>[0,...]</td><td>medium</td></tr>
+<tr>
+<td>sasl.jaas.config</td><td>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '&lt;loginModuleClass&gt; &lt;controlFlag&gt; (&lt;optionName&gt;=&lt;optionValue&gt;)*;'</td><td>password</td><td>null</td><td></td><td>medium</td></tr>
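+<tr>
+<td colspan=6>For example, a sketch of a SASL/PLAIN login combining this property with security.protocol and sasl.mechanism (the credentials are placeholders):<pre>
+props.put("security.protocol", "SASL_SSL");
+props.put("sasl.mechanism", "PLAIN");
+props.put("sasl.jaas.config",
+    "org.apache.kafka.common.security.plain.PlainLoginModule required " +
+    "username=\"alice\" password=\"alice-secret\";"); // placeholder credentials
+</pre></td></tr>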
+<tr>
+<td>sasl.kerberos.service.name</td><td>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>sasl.mechanism</td><td>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</td><td>string</td><td>GSSAPI</td><td></td><td>medium</td></tr>
+<tr>
+<td>security.protocol</td><td>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</td><td>string</td><td>PLAINTEXT</td><td></td><td>medium</td></tr>
+<tr>
+<td>send.buffer.bytes</td><td>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</td><td>int</td><td>131072</td><td>[-1,...]</td><td>medium</td></tr>
+<tr>
+<td>ssl.enabled.protocols</td><td>The list of protocols enabled for SSL connections.</td><td>list</td><td>TLSv1.2,TLSv1.1,TLSv1</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.keystore.type</td><td>The file format of the key store file. This is optional for clients.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.protocol</td><td>The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.</td><td>string</td><td>TLS</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.provider</td><td>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</td><td>string</td><td>null</td><td></td><td>medium</td></tr>
+<tr>
+<td>ssl.truststore.type</td><td>The file format of the trust store file.</td><td>string</td><td>JKS</td><td></td><td>medium</td></tr>
+<tr>
+<td>auto.commit.interval.ms</td><td>The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if <code>enable.auto.commit</code> is set to <code>true</code>.</td><td>int</td><td>5000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>check.crcs</td><td>Automatically check the CRC32 of the records consumed. This ensures that no on-the-wire or on-disk corruption of the messages has occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.</td><td>boolean</td><td>true</td><td></td><td>low</td></tr>
+<tr>
+<td>client.id</td><td>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</td><td>string</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>fetch.max.wait.ms</td><td>The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.</td><td>int</td><td>500</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>interceptor.classes</td><td>A list of classes to use as interceptors. Implementing the <code>org.apache.kafka.clients.consumer.ConsumerInterceptor</code> interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
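+<tr>
+<td colspan=6>A minimal sketch of a consumer interceptor that merely counts records; the class name is hypothetical and would be listed in interceptor.classes:<pre>
+import java.util.Map;
+import org.apache.kafka.clients.consumer.ConsumerInterceptor;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.OffsetAndMetadata;
+import org.apache.kafka.common.TopicPartition;
+
+public class CountingInterceptor implements ConsumerInterceptor&lt;String, String&gt; {
+    private long seen = 0;
+
+    public ConsumerRecords&lt;String, String&gt; onConsume(ConsumerRecords&lt;String, String&gt; records) {
+        seen += records.count(); // observe (and optionally mutate) records before the application sees them
+        return records;
+    }
+    public void onCommit(Map&lt;TopicPartition, OffsetAndMetadata&gt; offsets) { }
+    public void close() { }
+    public void configure(Map&lt;String, ?&gt; configs) { }
+}
+</pre></td></tr>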
+<tr>
+<td>metadata.max.age.ms</td><td>The period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any partition leadership changes, in order to proactively discover any new brokers or partitions.</td><td>long</td><td>300000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>metric.reporters</td><td>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</td><td>list</td><td>""</td><td></td><td>low</td></tr>
+<tr>
+<td>metrics.num.samples</td><td>The number of samples maintained to compute metrics.</td><td>int</td><td>2</td><td>[1,...]</td><td>low</td></tr>
+<tr>
+<td>metrics.recording.level</td><td>The highest recording level for metrics.</td><td>string</td><td>INFO</td><td>[INFO, DEBUG]</td><td>low</td></tr>
+<tr>
+<td>metrics.sample.window.ms</td><td>The window of time a metrics sample is computed over.</td><td>long</td><td>30000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.max.ms</td><td>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</td><td>long</td><td>1000</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>reconnect.backoff.ms</td><td>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</td><td>long</td><td>50</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>retry.backoff.ms</td><td>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</td><td>long</td><td>100</td><td>[0,...]</td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.kinit.cmd</td><td>Kerberos kinit command path.</td><td>string</td><td>/usr/bin/kinit</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.min.time.before.relogin</td><td>Login thread sleep time between refresh attempts.</td><td>long</td><td>60000</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.jitter</td><td>Percentage of random jitter added to the renewal time.</td><td>double</td><td>0.05</td><td></td><td>low</td></tr>
+<tr>
+<td>sasl.kerberos.ticket.renew.window.factor</td><td>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</td><td>double</td><td>0.8</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.cipher.suites</td><td>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</td><td>list</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.endpoint.identification.algorithm</td><td>The endpoint identification algorithm used to validate the server hostname against the server certificate.</td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.keymanager.algorithm</td><td>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</td><td>string</td><td>SunX509</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.secure.random.implementation</td><td>The SecureRandom PRNG implementation to use for SSL cryptography operations. </td><td>string</td><td>null</td><td></td><td>low</td></tr>
+<tr>
+<td>ssl.trustmanager.algorithm</td><td>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</td><td>string</td><td>PKIX</td><td></td><td>low</td></tr>
+</tbody></table>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/2e200cfc/10/generated/consumer_metrics.html
----------------------------------------------------------------------
diff --git a/10/generated/consumer_metrics.html b/10/generated/consumer_metrics.html
new file mode 100644
index 0000000..5ebe1bf
--- /dev/null
+++ b/10/generated/consumer_metrics.html
@@ -0,0 +1,64 @@
+<table class="data-table"><tbody>
+<tr>
+<td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"</td></tr>
+<tr>
+<th style="width: 90px"></th>
+<th>Attribute name</th>
+<th>Description</th>
+</tr>
+<tr>
+<td></td><td>bytes-consumed-rate</td><td>The average number of bytes consumed per second</td></tr>
+<tr>
+<td></td><td>bytes-consumed-total</td><td>The total number of bytes consumed</td></tr>
+<tr>
+<td></td><td>fetch-latency-avg</td><td>The average time taken for a fetch request</td></tr>
+<tr>
+<td></td><td>fetch-latency-max</td><td>The max time taken for any fetch request</td></tr>
+<tr>
+<td></td><td>fetch-rate</td><td>The number of fetch requests per second</td></tr>
+<tr>
+<td></td><td>fetch-size-avg</td><td>The average number of bytes fetched per request</td></tr>
+<tr>
+<td></td><td>fetch-size-max</td><td>The maximum number of bytes fetched per request</td></tr>
+<tr>
+<td></td><td>fetch-throttle-time-avg</td><td>The average throttle time in ms</td></tr>
+<tr>
+<td></td><td>fetch-throttle-time-max</td><td>The maximum throttle time in ms</td></tr>
+<tr>
+<td></td><td>fetch-total</td><td>The total number of fetch requests</td></tr>
+<tr>
+<td></td><td>records-consumed-rate</td><td>The average number of records consumed per second</td></tr>
+<tr>
+<td></td><td>records-consumed-total</td><td>The total number of records consumed</td></tr>
+<tr>
+<td></td><td>records-lag-max</td><td>The maximum lag in terms of number of records for any partition in this window</td></tr>
+<tr>
+<td></td><td>records-per-request-avg</td><td>The average number of records in each request</td></tr>
+<tr>
+<td></td><td>{topic}-{partition}.records-lag</td><td>The latest lag of the partition</td></tr>
+<tr>
+<td></td><td>{topic}-{partition}.records-lag-avg</td><td>The average lag of the partition</td></tr>
+<tr>
+<td></td><td>{topic}-{partition}.records-lag-max</td><td>The max lag of the partition</td></tr>
+<tr>
+<td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}",topic="{topic}"</td></tr>
+<tr>
+<th style="width: 90px"></th>
+<th>Attribute name</th>
+<th>Description</th>
+</tr>
+<tr>
+<td></td><td>bytes-consumed-rate</td><td>The average number of bytes consumed per second for a topic</td></tr>
+<tr>
+<td></td><td>bytes-consumed-total</td><td>The total number of bytes consumed for a topic</td></tr>
+<tr>
+<td></td><td>fetch-size-avg</td><td>The average number of bytes fetched per request for a topic</td></tr>
+<tr>
+<td></td><td>fetch-size-max</td><td>The maximum number of bytes fetched per request for a topic</td></tr>
+<tr>
+<td></td><td>records-consumed-rate</td><td>The average number of records consumed per second for a topic</td></tr>
+<tr>
+<td></td><td>records-consumed-total</td><td>The total number of records consumed for a topic</td></tr>
+<tr>
+<td></td><td>records-per-request-avg</td><td>The average number of records in each request for a topic</td></tr>
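+<tr>
+<td colspan=3>Besides JMX, these attributes can also be read programmatically from a live client; a sketch, assuming <code>consumer</code> is a running KafkaConsumer:<pre>
+import java.util.Map;
+import org.apache.kafka.common.Metric;
+import org.apache.kafka.common.MetricName;
+
+for (Map.Entry&lt;MetricName, ? extends Metric&gt; entry : consumer.metrics().entrySet()) {
+    MetricName name = entry.getKey();
+    if ("consumer-fetch-manager-metrics".equals(name.group())) {
+        System.out.println(name.name() + " = " + entry.getValue().value());
+    }
+}
+</pre></td></tr>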
+</tbody></table>