Posted to commits@kafka.apache.org by ju...@apache.org on 2015/11/13 22:34:27 UTC

kafka-site git commit: minor 0.9.0 doc changes

Repository: kafka-site
Updated Branches:
  refs/heads/asf-site e047c4b24 -> 1e91e258e


minor 0.9.0 doc changes


Project: http://git-wip-us.apache.org/repos/asf/kafka-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka-site/commit/1e91e258
Tree: http://git-wip-us.apache.org/repos/asf/kafka-site/tree/1e91e258
Diff: http://git-wip-us.apache.org/repos/asf/kafka-site/diff/1e91e258

Branch: refs/heads/asf-site
Commit: 1e91e258e4986c15407e3c9bfaf77aa7e8816340
Parents: e047c4b
Author: Jun Rao <ju...@gmail.com>
Authored: Fri Nov 13 13:34:12 2015 -0800
Committer: Jun Rao <ju...@gmail.com>
Committed: Fri Nov 13 13:34:12 2015 -0800

----------------------------------------------------------------------
 090/api.html           |  2 +-
 090/documentation.html |  2 +-
 090/security.html      | 12 +++++++-----
 3 files changed, 9 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1e91e258/090/api.html
----------------------------------------------------------------------
diff --git a/090/api.html b/090/api.html
index 3aad872..9b739da 100644
--- a/090/api.html
+++ b/090/api.html
@@ -155,4 +155,4 @@ As of the 0.9.0 release we have added a replacement for our existing simple and
 </pre>
 
 Examples showing how to use the producer are given in the
-<a href="http://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/producer/KafkaConsumer.html" title="Kafka 0.9.0 Javadoc">javadocs</a>.
+<a href="http://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html" title="Kafka 0.9.0 Javadoc">javadocs</a>.
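
For quick orientation, here is a minimal sketch of the new 0.9.0 consumer API that the corrected javadoc link above points to. The class name, broker address, group id and topic name are placeholders, and the javadocs remain the authoritative source of examples.

<pre>
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerSketch {                     // hypothetical class name
    public static void main(String[] args) {
        // Placeholder connection settings; adjust for your cluster.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("my-topic"));   // hypothetical topic name
        while (true) {
            // Poll for records and print the offset, key and value of each one.
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n",
                                  record.offset(), record.key(), record.value());
        }
    }
}
</pre>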

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1e91e258/090/documentation.html
----------------------------------------------------------------------
diff --git a/090/documentation.html b/090/documentation.html
index 69b9ba5..29376e0 100644
--- a/090/documentation.html
+++ b/090/documentation.html
@@ -116,7 +116,7 @@ Prior releases: <a href="/07/documentation.html">0.7.x</a>, <a href="/08/documen
             <li><a href="#security_authz">7.4 Authorization and ACLs</a></li>
             <li><a href="#zk_authz">7.5 ZooKeeper Authentication</a></li>
             <ul>
-                <li><a href="zk_authz_new"</li>
+                <li><a href="zk_authz_new">New Clusters</a></li>
                 <li><a href="zk_authz_migration">Migrating Clusters</a></li>
                 <li><a href="zk_authz_ensemble">Migrating the ZooKeeper Ensemble</a></li>
             </ul>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1e91e258/090/security.html
----------------------------------------------------------------------
diff --git a/090/security.html b/090/security.html
index f4c8668..210eefe 100644
--- a/090/security.html
+++ b/090/security.html
@@ -20,7 +20,7 @@ In release 0.9.0.0, the Kafka community added a number of features that, used ei
 <ol>
     <li>Authenticating client (producer and consumer) connections to brokers, using either SSL or SASL (Kerberos)</li>
     <li>Authorizing read / write operations by clients</li>
-    <li>Encryption of data sent between brokers and clients, or between brokers, using SSL</li>
+    <li>Encryption of data sent between brokers and clients, or between brokers, using SSL (Note there is performance degradation in the clients when SSL is enabled. The magnitude of the degradation depends on the CPU type.)</li>
     <li>Authenticate brokers connecting to ZooKeeper</li>
     <li>Security is optional - non-secured clusters are supported, as well as a mix of authenticated, unauthenticated, encrypted and non-encrypted clients.</li>
     <li>Authorization is pluggable and supports integration with external authorization services</li>
@@ -54,7 +54,7 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled but
         The next step is to add the generated CA to the <b>clients’ truststore</b> so that the clients can trust this CA:
         <pre>keytool -keystore server.truststore.jks -alias CARoot <b>-import</b> -file ca-cert</pre>
 
-        <b>Note:</b> If you configure Kafka brokers to require client authentication by setting ssl.client.auth to be "requested" or "required" on <a href="#config_broker">Kafka broker config</a> then you must provide a truststore for kafka broker as well and it should have all the CA certificates that clients keys signed by.
+        <b>Note:</b> If you configure Kafka brokers to require client authentication by setting ssl.client.auth to be "requested" or "required" in the <a href="#config_broker">Kafka broker config</a>, then you must provide a truststore for the Kafka brokers as well, and it should have all the CA certificates that clients' keys were signed by.
         <pre>keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert</pre>
 
         In contrast to the keystore in step 1 that stores each machine’s own identity, the truststore of a client stores all the certificates that the client should trust. Importing a certificate into one’s truststore also means trusting all certificates that are signed by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying SSL on a large Kafka cluster. You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. That way all machines can authenticate all other machines.</li>
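
To make the ssl.client.auth note above concrete, the sketch below shows the broker-side properties it refers to, assuming a broker that only accepts SSL connections. The listener host/port, file paths and passwords are placeholders; the full list of SSL settings is in the broker configuration section linked above.

<pre>
# server.properties sketch (placeholder host, paths and passwords)
listeners=SSL://broker1.example.com:9093
ssl.keystore.location=/var/private/ssl/server.keystore.jks
ssl.keystore.password=keystore-secret
ssl.key.password=key-secret
ssl.truststore.location=/var/private/ssl/server.truststore.jks
ssl.truststore.password=truststore-secret
# Require clients to present certificates signed by a CA present in the truststore above.
ssl.client.auth=required
</pre>
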
@@ -261,7 +261,10 @@ Apache Kafka allows clients to connect over SSL. By default SSL is disabled but
 </ol>
 
 <h3><a id="security_authz">7.4 Authorization and ACLs</a></h3>
-Kafka ships with a pluggable Authorizer and an out-of-box authorizer implementation that uses zookeeper to store all the acls. Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H On Resource R". You can read more about the acl structure on KIP-11. In order to add, remove or list acls you can use the Kafka authorizer CLI.
+Kafka ships with a pluggable Authorizer and an out-of-the-box authorizer implementation that uses zookeeper to store all the acls. Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H On Resource R". You can read more about the acl structure in KIP-11. In order to add, remove or list acls you can use the Kafka authorizer CLI. By default, if a Resource R has no associated acl, no one is allowed to access R. If you want to change that behavior, you can include the following in broker.properties.
+<pre>allow.everyone.if.no.acl.found=true</pre>
+One can also add super users in broker.properties like the following.
+<pre>super.users=User:Bob,User:Alice</pre>
 <h4>Command Line Interface</h4>
 The Kafka authorization management CLI can be found under the bin directory with all the other CLIs. The CLI script is called <b>kafka-acls.sh</b>. The following lists all the options that the script supports:
 <p></p>
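
As a companion to the ACL description above, here are two hypothetical kafka-acls.sh invocations. The principal, topic name and ZooKeeper address are made up, and the option spellings may vary between releases, so check them against the option table that follows in security.html (or the script's own help output).

<pre>
# Grant Bob read/write access to a topic (hypothetical names; verify option spellings).
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:Bob --operations Read,Write --topic test-topic

# List the acls currently attached to the same topic.
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --list --topic test-topic
</pre>
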
@@ -428,5 +431,4 @@ It is also necessary to enable authentication on the ZooKeeper ensemble. To do i
 <ol>
 	<li><a href="http://zookeeper.apache.org/doc/r3.4.6/zookeeperProgrammers.html#sc_ZooKeeperAccessControl">Apache ZooKeeper documentation</a></li>
 	<li><a href="https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL">Apache ZooKeeper wiki</a></li>
-	<li><a href="http://www.cloudera.com/content/www/en-us/documentation/cdh/5-1-x/CDH5-Security-Guide/cdh5sg_zookeeper_security.html">Cloudera ZooKeeper security configuration</a></li>
-</ol>
\ No newline at end of file
+</ol>
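
As a rough illustration of the ZooKeeper authentication steps this last hunk touches, the sketch below shows one common SASL wiring using ZooKeeper's DigestLoginModule; the account names and passwords are placeholders, and the ZooKeeper references above (together with the Kafka security section itself) remain the authoritative guide. On each ZooKeeper server, enable the SASL authentication provider in zoo.cfg:
<pre>authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider</pre>
and declare the accepted accounts in the JAAS file passed to ZooKeeper via -Djava.security.auth.login.config:
<pre>Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_super="super-secret"
    user_kafka="kafka-secret";
};</pre>
Each Kafka broker then authenticates through the Client section of its own JAAS file and, with zookeeper.set.acl=true in server.properties, creates its ZooKeeper nodes with ACLs:
<pre>Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka-secret";
};</pre>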