Posted to commits@zookeeper.apache.org by bu...@apache.org on 2016/07/20 21:55:53 UTC

svn commit: r993236 - in /websites/staging/zookeeper/trunk/content: ./ doc/trunk/ doc/trunk/skin/images/

Author: buildbot
Date: Wed Jul 20 21:55:53 2016
New Revision: 993236

Log:
Staging update by buildbot for zookeeper

Modified:
    websites/staging/zookeeper/trunk/content/   (props changed)
    websites/staging/zookeeper/trunk/content/credits.html
    websites/staging/zookeeper/trunk/content/doc/trunk/index.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/javaExample.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/linkmap.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/recipes.html
    websites/staging/zookeeper/trunk/content/doc/trunk/recipes.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/releasenotes.html
    websites/staging/zookeeper/trunk/content/doc/trunk/releasenotes.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-b-l-15-1body-2menu-3menu.png
    websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-b-r-15-1body-2menu-3menu.png
    websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
    websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
    websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
    websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
    websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-r-15-1body-2menu-3menu.png
    websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
    websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
    websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperAdmin.html
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperAdmin.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperHierarchicalQuorums.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperInternals.html
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperInternals.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperJMX.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperObservers.html
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperObservers.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperOver.html
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperOver.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperProgrammers.html
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperProgrammers.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperQuotas.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperReconfig.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperStarted.html
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperStarted.pdf
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperTutorial.html
    websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperTutorial.pdf

Propchange: websites/staging/zookeeper/trunk/content/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Wed Jul 20 21:55:53 2016
@@ -1 +1 @@
-1731484
+1753616

Modified: websites/staging/zookeeper/trunk/content/credits.html
==============================================================================
--- websites/staging/zookeeper/trunk/content/credits.html (original)
+++ websites/staging/zookeeper/trunk/content/credits.html Wed Jul 20 21:55:53 2016
@@ -65,13 +65,13 @@
 
 <p>ZooKeeper's active <span class="caps">PMC </span>members are</p>
 
-<table><tr><td>username</td><td>name</td><td>organization</td><td>timezone</td></tr><tr><td>tdunning</td><td>Ted Dunning</td><td>MapR Technologies</td><td>-8</td></tr><tr><td>camille</td><td>Camille Fournier</td><td>RentTheRunway</td><td>-5</td></tr><tr><td>phunt</td><td>Patrick Hunt</td><td>Cloudera Inc.</td><td>-8</td></tr><tr><td>fpj</td><td>Flavio Junqueira</td><td>Confluent</td><td>+1</td></tr><tr><td>mahadev</td><td>Mahadev Konar</td><td>Hortonworks Inc.</td><td>-8</td></tr><tr><td>breed</td><td>Benjamin Reed</td><td>Facebook</td><td>-8</td></tr><tr><td>henry</td><td>Henry Robinson</td><td>Cloudera Inc.</td><td>-8</td></tr><tr><td>ivank</td><td>Ivan Kelly</td><td>Midokura</td><td>+2</td></tr><tr><td>michim</td><td>Michi Mutsuzaki</td><td>Nicira</td><td>-8</td></tr></table>
+<table><tr><td>username</td><td>name</td><td>organization</td><td>timezone</td></tr><tr><td>tdunning</td><td>Ted Dunning</td><td>MapR Technologies</td><td>-8</td></tr><tr><td>camille</td><td>Camille Fournier</td><td>RentTheRunway</td><td>-5</td></tr><tr><td>phunt</td><td>Patrick Hunt</td><td>Cloudera Inc.</td><td>-8</td></tr><tr><td>fpj</td><td>Flavio Junqueira</td><td>Confluent</td><td>+1</td></tr><tr><td>mahadev</td><td>Mahadev Konar</td><td>Hortonworks Inc.</td><td>-8</td></tr><tr><td>breed</td><td>Benjamin Reed</td><td>Facebook</td><td>-8</td></tr><tr><td>henry</td><td>Henry Robinson</td><td>Cloudera Inc.</td><td>-8</td></tr><tr><td>ivank</td><td>Ivan Kelly</td><td>Midokura</td><td>+2</td></tr><tr><td>michim</td><td>Michi Mutsuzaki</td><td>Nicira</td><td>-8</td></tr><tr><td>rgs</td><td>Raul Gutierrez Segales</td><td>Pinterest</td><td>-8</td></tr></table>
 
 <h2 id="committers">Committers</h2>
 
 <p>ZooKeeper's active committers are</p>
 
-<table><tr><td>username</td><td>name</td><td>organization</td><td>timezone</td></tr><tr><td>camille</td><td>Camille Fournier</td><td>RentTheRunway</td><td>-5</td></tr><tr><td>phunt</td><td>Patrick Hunt</td><td colspan="2">Cloudera Inc.</td></tr><tr><td>fpj</td><td>Flavio Junqueira</td><td>Confluent</td><td>+1</td></tr><tr><td>cnauroth</td><td>Chris Nauroth</td><td>Hortonworks Inc.</td><td>-8</td></tr><tr><td>mahadev</td><td>Mahadev Konar</td><td>Hortonworks Inc.</td><td>-8</td></tr><tr><td>gkesavan</td><td>Giridharan Kesavan</td><td>Hortonworks Inc.</td><td>-8</td></tr><tr><td>akornev</td><td colspan="3">Andrew Kornev</td></tr><tr><td>michim</td><td>Michi Mutsuzaki</td><td>Nicira</td><td>-8</td></tr><tr><td>breed</td><td>Benjamin Reed</td><td>Facebook</td><td>-8</td></tr><tr><td>henry</td><td>Henry Robinson</td><td>Cloudera Inc.</td><td>-8</td></tr><tr><td>shralex</td><td>Alex Shraer</td><td>Google</td><td>-8</td></tr><tr><td>thawan</td><td>Thawan Kooburat</td><td>Facebook</td><td>-
 8</td></tr><tr><td>rakeshr</td><td>Rakesh R</td><td>Intel</td><td>+5:30</td></tr><tr><td>hdeng</td><td>Hongchao Deng</td><td>Cloudera Inc.</td><td>-8</td></tr><tr><td>rgs</td><td>Raul Gutierrez Segales</td><td>Twitter</td><td>-8</td></tr></table>
+<table><tr><td>username</td><td>name</td><td>organization</td><td>timezone</td></tr><tr><td>camille</td><td>Camille Fournier</td><td>RentTheRunway</td><td>-5</td></tr><tr><td>phunt</td><td>Patrick Hunt</td><td colspan="2">Cloudera Inc.</td></tr><tr><td>fpj</td><td>Flavio Junqueira</td><td>Confluent</td><td>+1</td></tr><tr><td>cnauroth</td><td>Chris Nauroth</td><td>Hortonworks Inc.</td><td>-8</td></tr><tr><td>mahadev</td><td>Mahadev Konar</td><td>Hortonworks Inc.</td><td>-8</td></tr><tr><td>gkesavan</td><td>Giridharan Kesavan</td><td>Hortonworks Inc.</td><td>-8</td></tr><tr><td>akornev</td><td colspan="3">Andrew Kornev</td></tr><tr><td>michim</td><td>Michi Mutsuzaki</td><td>Nicira</td><td>-8</td></tr><tr><td>breed</td><td>Benjamin Reed</td><td>Facebook</td><td>-8</td></tr><tr><td>henry</td><td>Henry Robinson</td><td>Cloudera Inc.</td><td>-8</td></tr><tr><td>shralex</td><td>Alex Shraer</td><td>Google</td><td>-8</td></tr><tr><td>thawan</td><td>Thawan Kooburat</td><td>Facebook</td><td>-
 8</td></tr><tr><td>rakeshr</td><td>Rakesh R</td><td>Intel</td><td>+5:30</td></tr><tr><td>hdeng</td><td>Hongchao Deng</td><td>CoreOS</td><td>-8</td></tr><tr><td>rgs</td><td>Raul Gutierrez Segales</td><td>Pinterest</td><td>-8</td></tr></table>
 
 <h2 id="contributors">Contributors</h2>
 

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/index.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/index.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/index.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/javaExample.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/javaExample.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/javaExample.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/linkmap.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/linkmap.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/linkmap.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/recipes.html
==============================================================================
--- websites/staging/zookeeper/trunk/content/doc/trunk/recipes.html (original)
+++ websites/staging/zookeeper/trunk/content/doc/trunk/recipes.html Wed Jul 20 21:55:53 2016
@@ -904,7 +904,7 @@ document.write("Last Published: " + docu
     processes watching upon the current smallest znode, and checking if they
     are the new leader when the smallest znode goes away (note that the
     smallest znode will go away if the leader fails because the node is
-    ephemeral). But this causes a herd effect: upon of failure of the current
+    ephemeral). But this causes a herd effect: upon a failure of the current
     leader, all other processes receive a notification, and execute
     getChildren on "/election" to obtain the current list of children of
     "/election". If the number of clients is large, it causes a spike on the

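The recipes.html hunk above describes the herd effect in the naive leader-election recipe, where every client watches the smallest znode. The standard way to avoid it is for each client to watch only the znode that immediately precedes its own sequential znode. A minimal Java sketch under that assumption follows; the path "/election" matches the recipe text, but the class and method names are illustrative, not part of this commit.

    import java.util.Collections;
    import java.util.List;
    import org.apache.zookeeper.*;
    import org.apache.zookeeper.data.Stat;

    public class LeaderElection {
        private static final String ELECTION_ROOT = "/election";   // path from the recipe
        private final ZooKeeper zk;
        private String myNode;                                      // e.g. "n_0000000042"

        public LeaderElection(ZooKeeper zk) { this.zk = zk; }

        public void volunteer() throws KeeperException, InterruptedException {
            // Ephemeral + sequential: the znode disappears if this process dies.
            String path = zk.create(ELECTION_ROOT + "/n_", new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
            myNode = path.substring(path.lastIndexOf('/') + 1);
            checkLeadership();
        }

        private void checkLeadership() throws KeeperException, InterruptedException {
            List<String> children = zk.getChildren(ELECTION_ROOT, false);
            Collections.sort(children);
            int idx = children.indexOf(myNode);
            if (idx == 0) {
                System.out.println("I am the leader: " + myNode);
                return;
            }
            // Watch only the znode just below ours; when the leader fails, only
            // one client is notified, so there is no herd effect.
            String predecessor = ELECTION_ROOT + "/" + children.get(idx - 1);
            Stat stat = zk.exists(predecessor, event -> {
                if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                    try { checkLeadership(); } catch (Exception ignored) { }
                }
            });
            if (stat == null) {
                checkLeadership();   // predecessor vanished between calls; re-evaluate
            }
        }
    }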
Modified: websites/staging/zookeeper/trunk/content/doc/trunk/recipes.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/recipes.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/recipes.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/releasenotes.html
==============================================================================
--- websites/staging/zookeeper/trunk/content/doc/trunk/releasenotes.html (original)
+++ websites/staging/zookeeper/trunk/content/doc/trunk/releasenotes.html Wed Jul 20 21:55:53 2016
@@ -304,7 +304,7 @@ This meant that developers had to track
 In this release the client library tracks watches that a client has registered and reregisters the watches when a connection is made to a new server.
 Applications that still manually reregister interest should continue working properly as long as they are able to handle unsolicited watches.
 For example, an old application may register a watch for /foo and /goo, lose the connection, and reregister only /goo.
-As long as the application is able to recieve a notification for /foo, (probably ignoring it) the applications does not to be changes.
+As long as the application is able to receive a notification for /foo (probably ignoring it), it does not need to be changed.
 One caveat to the watch management: it is possible to miss an event for the creation and deletion of a znode if watching for creation and both the create and delete happens while the client is disconnected from ZooKeeper.
 </p>
 <p>

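The release note above says the client library now reregisters watches after a reconnect, so an application can receive notifications for paths it no longer tracks. A small defensive watcher along those lines is sketched below; the path set is illustrative, matching the /foo and /goo example in the note.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;

    // Tolerates unsolicited notifications (e.g. a stale watch on /foo that the
    // client library reregistered after a reconnect) by acting only on paths
    // the application still cares about.
    public class TolerantWatcher implements Watcher {
        private final java.util.Set<String> interesting =
                java.util.Collections.singleton("/goo");   // illustrative
        @Override
        public void process(WatchedEvent event) {
            if (event.getPath() == null || !interesting.contains(event.getPath())) {
                return;                                    // ignore, e.g. a late event for /foo
            }
            // handle the event for a path we still track
        }
    }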
Modified: websites/staging/zookeeper/trunk/content/doc/trunk/releasenotes.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/releasenotes.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/releasenotes.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-b-l-15-1body-2menu-3menu.png
==============================================================================
Binary files - no diff available.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-b-r-15-1body-2menu-3menu.png
==============================================================================
Binary files - no diff available.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
==============================================================================
Binary files - no diff available.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
==============================================================================
Binary files - no diff available.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
==============================================================================
Binary files - no diff available.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
==============================================================================
Binary files - no diff available.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-r-15-1body-2menu-3menu.png
==============================================================================
Binary files - no diff available.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
==============================================================================
Binary files - no diff available.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
==============================================================================
Binary files - no diff available.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
==============================================================================
Binary files - no diff available.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperAdmin.html
==============================================================================
--- websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperAdmin.html (original)
+++ websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperAdmin.html Wed Jul 20 21:55:53 2016
@@ -276,7 +276,7 @@ document.write("Last Published: " + docu
 <a href="#sc_clusterOptions">Cluster Options</a>
 </li>
 <li>
-<a href="#sc_authOptions">Authentication &amp; Authorization Options</a>
+<a href="#sc_authOptions">Encryption, Authentication, Authorization Options</a>
 </li>
 <li>
 <a href="#Experimental+Options%2FFeatures">Experimental Options/Features</a>
@@ -384,54 +384,124 @@ document.write("Last Published: " + docu
 <h3 class="h4">System Requirements</h3>
 <a name="sc_supportedPlatforms"></a>
 <h4>Supported Platforms</h4>
+<p>ZooKeeper consists of multiple components.  Some components are
+        supported broadly, and other components are supported only on a smaller
+        set of platforms.</p>
 <ul>
           
 <li>
             
-<p>GNU/Linux is supported as a development and production
-              platform for both server and client.</p>
+<p>
+<strong>Client</strong> is the Java client
+            library, used by applications to connect to a ZooKeeper ensemble.
+            </p>
           
 </li>
           
 <li>
             
-<p>Sun Solaris is supported as a development and production
-              platform for both server and client.</p>
+<p>
+<strong>Server</strong> is the Java server
+            that runs on the ZooKeeper ensemble nodes.</p>
           
 </li>
           
 <li>
             
-<p>FreeBSD is supported as a development and production
-              platform for both server and client.</p>
+<p>
+<strong>Native Client</strong> is a client
+            implemented in C, similar to the Java client, used by applications
+            to connect to a ZooKeeper ensemble.</p>
           
 </li>
           
 <li>
             
-<p>Win32 is supported as a <em>development
-            platform</em> only for both server and client.</p>
+<p>
+<strong>Contrib</strong> refers to multiple
+            optional add-on components.</p>
           
 </li>
+        
+</ul>
+<p>The following matrix describes the level of support committed for
+        running each component on different operating system platforms.</p>
+<table class="ForrestTable" cellspacing="1" cellpadding="4">
+<caption>Support Matrix</caption>
           
-<li>
-            
-<p>Win64 is supported as a <em>development
-            platform</em> only for both server and client.</p>
+<title>Support Matrix</title>
           
-</li>
-          
-<li>
+              
+<tr>
+                
+<th>Operating System</th>
+                <th>Client</th>
+                <th>Server</th>
+                <th>Native Client</th>
+                <th>Contrib</th>
+              
+</tr>
+            
+              
+<tr>
+                
+<td>GNU/Linux</td>
+                <td>Development and Production</td>
+                <td>Development and Production</td>
+                <td>Development and Production</td>
+                <td>Development and Production</td>
+              
+</tr>
+              
+<tr>
+                
+<td>Solaris</td>
+                <td>Development and Production</td>
+                <td>Development and Production</td>
+                <td>Not Supported</td>
+                <td>Not Supported</td>
+              
+</tr>
+              
+<tr>
+                
+<td>FreeBSD</td>
+                <td>Development and Production</td>
+                <td>Development and Production</td>
+                <td>Not Supported</td>
+                <td>Not Supported</td>
+              
+</tr>
+              
+<tr>
+                
+<td>Windows</td>
+                <td>Development and Production</td>
+                <td>Development and Production</td>
+                <td>Not Supported</td>
+                <td>Not Supported</td>
+              
+</tr>
+              
+<tr>
+                
+<td>Mac OS X</td>
+                <td>Development Only</td>
+                <td>Development Only</td>
+                <td>Not Supported</td>
+                <td>Not Supported</td>
+              
+</tr>
             
-<p>MacOSX is supported as a <em>development
-            platform</em> only for both server and client.</p>
-          
-</li>
         
-</ul>
+</table>
+<p>For any operating system not explicitly mentioned as supported in
+        the matrix, components may or may not work.  The ZooKeeper community
+        will fix obvious bugs that are reported for other platforms, but there
+        is no full support.</p>
 <a name="sc_requiredSoftware"></a>
 <h4>Required Software </h4>
-<p>ZooKeeper runs in Java, release 1.6 or greater (JDK 6 or
+<p>ZooKeeper runs in Java, release 1.7 or greater (JDK 7 or
         greater, FreeBSD support requires openjdk7).  It runs as an
         <em>ensemble</em> of ZooKeeper servers. Three
         ZooKeeper servers is the minimum recommended size for an
@@ -449,27 +519,35 @@ document.write("Last Published: " + docu
       only handle the failure of a single machine; if two machines fail, the
       remaining two machines do not constitute a majority. However, with five
       machines ZooKeeper can handle the failure of two machines. </p>
-
 <div class="note">
 <div class="label">Note</div>
 <div class="content">
-<p>As mentioned in the Getting Started guide, a minimum of three servers are
-      required for a fault tolerant clustered setup, and it is strongly
-      recommended that you have an odd number of servers.</p>
-<p>Usually three servers is more than enough for a production install, but
-      for maximum reliability during maintenance, you may wish to install
-      five servers.  With three servers, if you perform maintenance on
-      one of them, you are vulnerable to a failure on one of the other
-      two servers during that maintenance.  If you have five of them
-      running, you can take one down for maintenance, and know that
-      you're still OK if one of the other four suddenly fails.</p>
-<p>Your redundancy considerations should include all aspects of your
-      environment.  If you have three zookeeper servers, but their
-      network cables are all plugged into the same network switch, then
-      the failure of that switch will take down your entire ensemble.</p>
+         
+<p>
+            As mentioned in the
+            <a href="zookeeperStarted.html">ZooKeeper Getting Started Guide</a>
+            , a minimum of three servers are required for a fault tolerant
+            clustered setup, and it is strongly recommended that you have an
+            odd number of servers.
+         </p>
+         
+<p>Usually three servers is more than enough for a production
+            install, but for maximum reliability during maintenance, you may
+            wish to install five servers. With three servers, if you perform
+            maintenance on one of them, you are vulnerable to a failure on one
+            of the other two servers during that maintenance. If you have five
+            of them running, you can take one down for maintenance, and know
+            that you're still OK if one of the other four suddenly fails.
+         </p>
+         
+<p>Your redundancy considerations should include all aspects of
+            your environment. If you have three ZooKeeper servers, but their
+            network cables are all plugged into the same network switch, then
+            the failure of that switch will take down your entire ensemble.
+         </p>
+      
 </div>
 </div>
-
 <p>Here are the steps to setting a server that will be part of an
       ensemble. These steps should be performed on every host in the
       ensemble:</p>
@@ -566,7 +644,7 @@ server.3=zoo3:2888:3888</pre>
 
           
 <p>
-<span class="codefrag computeroutput">$ java -cp zookeeper.jar:lib/slf4j-api-1.7.5.jar:lib/slf4j-log4j12-1.7.5.jar:lib/log4j-1.2.16.jar:conf \
+<span class="codefrag computeroutput">$ java -cp zookeeper.jar:lib/slf4j-api-1.7.5.jar:lib/slf4j-log4j12-1.7.5.jar:lib/log4j-1.2.17.jar:conf \
               org.apache.zookeeper.server.quorum.QuorumPeerMain zoo.cfg
           </span>
 </p>
@@ -603,7 +681,7 @@ server.3=zoo3:2888:3888</pre>
 
               
 <p>
-<span class="codefrag computeroutput">$ java -cp zookeeper.jar:lib/slf4j-api-1.7.5.jar:lib/slf4j-log4j12-1.7.5.jar:lib/log4j-1.2.16.jar:conf:src/java/lib/jline-2.11.jar \
+<span class="codefrag computeroutput">$ java -cp zookeeper.jar:lib/slf4j-api-1.7.5.jar:lib/slf4j-log4j12-1.7.5.jar:lib/log4j-1.2.17.jar:conf:src/java/lib/jline-2.11.jar \
       org.apache.zookeeper.ZooKeeperMain -server 127.0.0.1:2181</span>
 </p>
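The two hunks above update the log4j jar in the manual "java -cp ..." invocations for starting a quorum peer and for connecting the Java command-line client. In a release tarball the shipped wrapper scripts assemble the same classpath, so the equivalent commands are roughly the following (paths relative to the unpacked release):

    bin/zkServer.sh start                    # start the quorum peer using conf/zoo.cfg
    bin/zkCli.sh -server 127.0.0.1:2181      # connect the Java command-line client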
             
@@ -846,7 +924,7 @@ server.3=zoo3:2888:3888</pre>
 <a name="Single+Machine+Requirements"></a>
 <h4>Single Machine Requirements</h4>
 <p>If ZooKeeper has to contend with other applications for
-        access to resourses like storage media, CPU, network, or
+        access to resources like storage media, CPU, network, or
         memory, its performance will suffer markedly.  ZooKeeper has
         strong durability guarantees, which means it uses storage
         media to log changes before the operation responsible for the
@@ -927,7 +1005,7 @@ server.3=zoo3:2888:3888</pre>
         in the unlikely event a recent log has become corrupted). This
         can be run as a cron job on the ZooKeeper server machines to
         clean up the logs daily.</p>
-<pre class="code"> java -cp zookeeper.jar:lib/slf4j-api-1.7.5.jar:lib/slf4j-log4j12-1.7.5.jar:lib/log4j-1.2.16.jar:conf org.apache.zookeeper.server.PurgeTxnLog &lt;dataDir&gt; &lt;snapDir&gt; -n &lt;count&gt;</pre>
+<pre class="code"> java -cp zookeeper.jar:lib/slf4j-api-1.7.5.jar:lib/slf4j-log4j12-1.7.5.jar:lib/log4j-1.2.17.jar:conf org.apache.zookeeper.server.PurgeTxnLog &lt;dataDir&gt; &lt;snapDir&gt; -n &lt;count&gt;</pre>
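As the surrounding text suggests, this purge command is typically wrapped in a daily cron entry; an illustrative /etc/cron.d entry is shown below. The installation directory, data directories, and retention count are placeholders, and the classpath is the one from the hunk above. The autopurge.* parameters discussed next in the document are the built-in alternative to this approach.

    # /etc/cron.d/zookeeper-purge (illustrative)
    0 3 * * * zookeeper cd /opt/zookeeper && java -cp zookeeper.jar:lib/slf4j-api-1.7.5.jar:lib/slf4j-log4j12-1.7.5.jar:lib/log4j-1.2.17.jar:conf org.apache.zookeeper.server.PurgeTxnLog /var/lib/zookeeper /var/lib/zookeeper -n 3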
 <p>Automatic purging of the snapshots and corresponding
         transaction logs was introduced in version 3.4.0 and can be
         enabled via the following configuration parameters <strong>autopurge.snapRetainCount</strong> and <strong>autopurge.purgeInterval</strong>. For more on
@@ -960,6 +1038,16 @@ server.3=zoo3:2888:3888</pre>
       examples) managing your ZooKeeper server ensures that if the
       process does exit abnormally it will automatically be restarted
       and will quickly rejoin the cluster.</p>
+<p>It is also recommended to configure the ZooKeeper server process to
+      terminate and dump its heap if an
+      <span class="codefrag computeroutput">OutOfMemoryError</span> occurs.  This is achieved
+      by launching the JVM with the following arguments on Linux and Windows
+      respectively.  The <span class="codefrag filename">zkServer.sh</span> and
+      <span class="codefrag filename">zkServer.cmd</span> scripts that ship with ZooKeeper set
+      these options.
+      </p>
+<pre class="code">-XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError='kill -9 %p'</pre>
+<pre class="code">"-XX:+HeapDumpOnOutOfMemoryError" "-XX:OnOutOfMemoryError=cmd /c taskkill /pid %%%%p /t /f"</pre>
 <a name="sc_monitoring"></a>
 <h3 class="h4">Monitoring</h3>
 <p>The ZooKeeper service can be monitored in one of two
@@ -967,12 +1055,22 @@ server.3=zoo3:2888:3888</pre>
       your environment/requirements.</p>
 <a name="sc_logging"></a>
 <h3 class="h4">Logging</h3>
-<p>ZooKeeper uses <strong>log4j</strong> version 1.2 as 
-      its logging infrastructure. The  ZooKeeper default <span class="codefrag filename">log4j.properties</span> 
-      file resides in the <span class="codefrag filename">conf</span> directory. Log4j requires that 
-      <span class="codefrag filename">log4j.properties</span> either be in the working directory 
-      (the directory from which ZooKeeper is run) or be accessible from the classpath.</p>
-<p>For more information, see 
+<p>
+        ZooKeeper uses <strong><a href="http://www.slf4j.org">SLF4J</a></strong>
+        version 1.7.5 as its logging infrastructure. For backward compatibility it is bound to
+        <strong>LOG4J</strong> but you can use
+        <strong><a href="http://logback.qos.ch/">LOGBack</a></strong>
+        or any other supported logging framework of your choice.
+    </p>
+<p>
+        The ZooKeeper default <span class="codefrag filename">log4j.properties</span>
+        file resides in the <span class="codefrag filename">conf</span> directory. Log4j requires that
+        <span class="codefrag filename">log4j.properties</span> either be in the working directory
+        (the directory from which ZooKeeper is run) or be accessible from the classpath.
+    </p>
+<p>For more information about SLF4J, see
+      <a href="http://www.slf4j.org/manual.html">its manual</a>.</p>
+<p>For more information about LOG4J, see
       <a href="http://logging.apache.org/log4j/1.2/manual.html#defaultInit">Log4j Default Initialization Procedure</a> 
       of the log4j manual.</p>
 <a name="sc_troubleshooting"></a>
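The logging hunk above switches the description from plain log4j to SLF4J bound to log4j, with the default configuration still read from conf/log4j.properties. For reference, a minimal log4j.properties of the kind the text refers to might look like the following; the appender name and pattern are illustrative, not the file shipped with this commit.

    log4j.rootLogger=INFO, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n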
@@ -1031,6 +1129,22 @@ server.3=zoo3:2888:3888</pre>
 
           
 <dt>
+<term>secureClientPort</term>
+</dt>
+<dd>
+<p>the port to listen on for secure client connections using SSL.
+
+              <strong>clientPort</strong> specifies
+                the port for plaintext connections while <strong>
+                  secureClientPort</strong> specifies the port for SSL
+                connections. Specifying both enables mixed-mode while omitting
+                either will disable that mode.</p>
+<p>Note that SSL feature will be enabled when user plugs-in
+                zookeeper.serverCnxnFactory, zookeeper.clientCnxnSocket as Netty.</p>
+</dd>
+
+          
+<dt>
 <term>dataDir</term>
 </dt>
 <dd>
@@ -1136,20 +1250,6 @@ server.3=zoo3:2888:3888</pre>
 
           
 <dt>
-<term>traceFile</term>
-</dt>
-<dd>
-<p>(Java system property: <strong>requestTraceFile</strong>)</p>
-<p>If this option is defined, requests will be will logged to
-              a trace file named traceFile.year.month.day. Use of this option
-              provides useful debugging information, but will impact
-              performance. (Note: The system property has no zookeeper prefix,
-              and the configuration variable name is different from the system
-              property. Yes - it's not consistent, and it's annoying.)</p>
-</dd>
-
-          
-<dt>
 <term>maxClientCnxns</term>
 </dt>
 <dd>
@@ -1208,7 +1308,7 @@ server.3=zoo3:2888:3888</pre>
 <term>fsync.warningthresholdms</term>
 </dt>
 <dd>
-<p>(Java system property: <strong>fsync.warningthresholdms</strong>)</p>
+<p>(Java system property: <strong>zookeeper.fsync.warningthresholdms</strong>)</p>
 <p>
 <strong>New in 3.3.4:</strong> A
                warning message will be output to the log whenever an
@@ -1434,16 +1534,16 @@ server.3=zoo3:2888:3888</pre>
 </dl>
 <p></p>
 <a name="sc_authOptions"></a>
-<h4>Authentication &amp; Authorization Options</h4>
+<h4>Encryption, Authentication, Authorization Options</h4>
 <p>The options in this section allow control over
-        authentication/authorization performed by the service.</p>
+        encryption/authentication/authorization performed by the service.</p>
 <dl>
           
 <dt>
-<term>zookeeper.DigestAuthenticationProvider.superDigest</term>
+<term>DigestAuthenticationProvider.superDigest</term>
 </dt>
 <dd>
-<p>(Java system property only: <strong>zookeeper.DigestAuthenticationProvider.superDigest</strong>)</p>
+<p>(Java system property: <strong>zookeeper.DigestAuthenticationProvider.superDigest</strong>)</p>
 <p>By default this feature is <strong>disabled</strong>
 </p>
 <p>
@@ -1465,6 +1565,80 @@ server.3=zoo3:2888:3888</pre>
               localhost (not over the network) or over an encrypted
               connection.</p>
 </dd>
+
+          
+<dt>
+<term>X509AuthenticationProvider.superUser</term>
+</dt>
+<dd>
+<p>(Java system property: <strong>zookeeper.X509AuthenticationProvider.superUser</strong>)</p>
+<p>The SSL-backed way to enable a ZooKeeper ensemble
+              administrator to access the znode hierarchy as a "super" user.
+              When this parameter is set to an X500 principal name, only an
+              authenticated client with that principal will be able to bypass
+              ACL checking and have full privileges to all znodes.</p>
+</dd>
+
+          
+<dt>
+<term>ssl.keyStore.location and ssl.keyStore.password</term>
+</dt>
+<dd>
+<p>(Java system properties: <strong>
+                zookeeper.ssl.keyStore.location</strong> and <strong>zookeeper.ssl.keyStore.password</strong>)</p>
+<p>Specifies the file path to a JKS containing the local
+                credentials to be used for SSL connections, and the
+                password to unlock the file.</p>
+</dd>
+
+          
+<dt>
+<term>ssl.trustStore.location and ssl.trustStore.password</term>
+</dt>
+<dd>
+<p>(Java system properties: <strong>
+                zookeeper.ssl.trustStore.location</strong> and <strong>zookeeper.ssl.trustStore.password</strong>)</p>
+<p>Specifies the file path to a JKS containing the remote
+                credentials to be used for SSL connections, and the
+                password to unlock the file.</p>
+</dd>
+
+          
+<dt>
+<term>ssl.authProvider</term>
+</dt>
+<dd>
+<p>(Java system property: <strong>zookeeper.ssl.authProvider</strong>)</p>
+<p>Specifies a subclass of <strong>
+              org.apache.zookeeper.auth.X509AuthenticationProvider</strong>
+              to use for secure client authentication. This is useful in
+              certificate key infrastructures that do not use JKS. It may be
+              necessary to extend <strong>javax.net.ssl.X509KeyManager
+              </strong> and <strong>javax.net.ssl.X509TrustManager</strong>
+              to get the desired behavior from the SSL stack. To configure the
+              ZooKeeper server to use the custom provider for authentication,
+              choose a scheme name for the custom AuthenticationProvider and
+              set the property <strong>zookeeper.authProvider.[scheme]
+              </strong> to the fully-qualified class name of the custom
+              implementation. This will load the provider into the ProviderRegistry.
+              Then set this property <strong>
+              zookeeper.ssl.authProvider=[scheme]</strong> and that provider
+              will be used for secure authentication.</p>
+</dd>
+
+          
+<dt>
+<term>zookeeper.client.secure</term>
+</dt>
+<dd>
+<p>(Java system property only: <strong>zookeeper.client.secure</strong>)</p>
+<p>If you want to connect to server's secure client port, you need to
+                set this property to <strong>true</strong> on client.
+                This will connect to server using SSL with specified credentials. Note that
+                you also need to plug-in Netty client.
+              </p>
+</dd>
+
         
 </dl>
 <a name="Experimental+Options%2FFeatures"></a>
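The options added in the hunk above (secureClientPort, zookeeper.ssl.keyStore.*, zookeeper.ssl.trustStore.*, zookeeper.client.secure) combine roughly as shown below, and per the text both sides must use the Netty connection classes. Port numbers, file paths, and passwords are placeholders.

    # server side: zoo.cfg
    secureClientPort=2281

    # server side: JVM flags
    -Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
    -Dzookeeper.ssl.keyStore.location=/path/to/server.jks
    -Dzookeeper.ssl.keyStore.password=secret
    -Dzookeeper.ssl.trustStore.location=/path/to/truststore.jks
    -Dzookeeper.ssl.trustStore.password=secret

    # client side: JVM flags
    -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
    -Dzookeeper.client.secure=true
    -Dzookeeper.ssl.keyStore.location=/path/to/client.jks
    -Dzookeeper.ssl.keyStore.password=secret
    -Dzookeeper.ssl.trustStore.location=/path/to/truststore.jks
    -Dzookeeper.ssl.trustStore.password=secret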
@@ -1660,13 +1834,36 @@ server.3=zoo3:2888:3888</pre>
               </p>
 </dd>
 
+          
+<dt>
+<term>znode.container.checkIntervalMs</term>
+</dt>
+<dd>
+<p>(Java system property only)</p>
+<p>
+<strong>New in 3.6.0:</strong> The
+                time interval in milliseconds for each check of candidate container
+                nodes. Default is "60000".</p>
+</dd>
+
+          
+<dt>
+<term>znode.container.maxPerMinute</term>
+</dt>
+<dd>
+<p>(Java system property only)</p>
+<p>
+<strong>New in 3.6.0:</strong> The
+                maximum number of container nodes that can be deleted per
+                minute. This prevents herding during container deletion.
+                Default is "10000".</p>
+</dd>
         
 </dl>
 <a name="Communication+using+the+Netty+framework"></a>
 <h4>Communication using the Netty framework</h4>
 <p>
-<strong>New in
-            3.4:</strong> <a href="http://jboss.org/netty">Netty</a>
+<a href="http://netty.io">Netty</a>
             is an NIO based client/server communication framework, it
             simplifies (over NIO being used directly) many of the
             complexities of network level communication for java
@@ -1675,16 +1872,12 @@ server.3=zoo3:2888:3888</pre>
             (certificates). These are optional features and can be
             turned on or off individually.
         </p>
-<p>Prior to version 3.4 ZooKeeper has always used NIO
-            directly, however in versions 3.4 and later Netty is
-            supported as an option to NIO (replaces). NIO continues to
-            be the default, however Netty based communication can be
-            used in place of NIO by setting the environment variable
-            "zookeeper.serverCnxnFactory" to
-            "org.apache.zookeeper.server.NettyServerCnxnFactory". You
-            have the option of setting this on either the client(s) or
-            server(s), typically you would want to set this on both,
-            however that is at your discretion.
+<p>In versions 3.5+, a ZooKeeper server can use Netty
+            instead of NIO (default option) by setting the environment
+            variable <strong>zookeeper.serverCnxnFactory</strong>
+            to <strong>org.apache.zookeeper.server.NettyServerCnxnFactory</strong>;
+            for the client, set <strong>zookeeper.clientCnxnSocket</strong>
+            to <strong>org.apache.zookeeper.ClientCnxnSocketNetty</strong>.
         </p>
 <p>
           TBD - tuning options for netty - currently there are none that are netty specific but we should add some. Esp around max bound on the number of reader worker threads netty creates.
@@ -1713,6 +1906,15 @@ server.3=zoo3:2888:3888</pre>
 
           
 <dt>
+<term>admin.serverAddress</term>
+</dt>
+<dd>
+<p>(Java system property: <strong>zookeeper.admin.serverAddress</strong>)</p>
+<p>The address the embedded Jetty server listens on. Defaults to 0.0.0.0.</p>
+</dd>
+
+          
+<dt>
 <term>admin.serverPort</term>
 </dt>
 <dd>
@@ -1860,6 +2062,17 @@ server.3=zoo3:2888:3888</pre>
 
           
 <dt>
+<term>dirs</term>
+</dt>
+<dd>
+<p>
+<strong>New in 3.5.1:</strong>
+                Shows the total size of snapshot and log files in bytes
+              </p>
+</dd>
+
+          
+<dt>
 <term>wchp</term>
 </dt>
 <dd>
@@ -1907,6 +2120,141 @@ server.3=zoo3:2888:3888</pre>
 <p>The output contains multiple lines with the following format:</p>
 <pre class="code">key \t value</pre>
 </dd>
+
+          
+<dt>
+<term>isro</term>
+</dt>
+<dd>
+<p>
+<strong>New in 3.4.0:</strong> Tests if
+              server is running in read-only mode.  The server will respond with
+              "ro" if in read-only mode or "rw" if not in read-only mode.</p>
+</dd>
+
+          
+<dt>
+<term>gtmk</term>
+</dt>
+<dd>
+<p>Gets the current trace mask as a 64-bit signed long value in
+              decimal format.  See <span class="codefrag command">stmk</span> for an explanation of
+              the possible values.</p>
+</dd>
+
+          
+<dt>
+<term>stmk</term>
+</dt>
+<dd>
+<p>Sets the current trace mask.  The trace mask is 64 bits,
+              where each bit enables or disables a specific category of trace
+              logging on the server.  Log4J must be configured to enable
+              <span class="codefrag command">TRACE</span> level first in order to see trace logging
+              messages.  The bits of the trace mask correspond to the following
+              trace logging categories.</p>
+<table class="ForrestTable" cellspacing="1" cellpadding="4">
+<caption>Trace Mask Bit Values</caption>
+                
+<title>Trace Mask Bit Values</title>
+                
+                    
+<tr>
+                      
+<td>0b0000000000</td>
+                      <td>Unused, reserved for future use.</td>
+                    
+</tr>
+                    
+<tr>
+                      
+<td>0b0000000010</td>
+                      <td>Logs client requests, excluding ping
+                      requests.</td>
+                    
+</tr>
+                    
+<tr>
+                      
+<td>0b0000000100</td>
+                      <td>Unused, reserved for future use.</td>
+                    
+</tr>
+                    
+<tr>
+                      
+<td>0b0000001000</td>
+                      <td>Logs client ping requests.</td>
+                    
+</tr>
+                    
+<tr>
+                      
+<td>0b0000010000</td>
+                      <td>Logs packets received from the quorum peer that is
+                      the current leader, excluding ping requests.</td>
+                    
+</tr>
+                    
+<tr>
+                      
+<td>0b0000100000</td>
+                      <td>Logs addition, removal and validation of client
+                      sessions.</td>
+                    
+</tr>
+                    
+<tr>
+                      
+<td>0b0001000000</td>
+                      <td>Logs delivery of watch events to client
+                      sessions.</td>
+                    
+</tr>
+                    
+<tr>
+                      
+<td>0b0010000000</td>
+                      <td>Logs ping packets received from the quorum peer
+                      that is the current leader.</td>
+                    
+</tr>
+                    
+<tr>
+                      
+<td>0b0100000000</td>
+                      <td>Unused, reserved for future use.</td>
+                    
+</tr>
+                    
+<tr>
+                      
+<td>0b1000000000</td>
+                      <td>Unused, reserved for future use.</td>
+                    
+</tr>
+                  
+              
+</table>
+<p>All remaining bits in the 64-bit value are unused and
+              reserved for future use.  Multiple trace logging categories are
+              specified by calculating the bitwise OR of the documented values.
+              The default trace mask is 0b0100110010.  Thus, by default, trace
+              logging includes client requests, packets received from the
+              leader and sessions.</p>
+<p>To set a different trace mask, send a request containing the
+              <span class="codefrag command">stmk</span> four-letter word followed by the trace
+              mask represented as a 64-bit signed long value.  This example uses
+              the Perl <span class="codefrag command">pack</span> function to construct a trace
+              mask that enables all trace logging categories described above and
+              convert it to a 64-bit signed long value with big-endian byte
+              order.  The result is appended to <span class="codefrag command">stmk</span> and sent
+              to the server using netcat.  The server responds with the new
+              trace mask in decimal format.</p>
+<pre class="code">$ perl -e "print 'stmk', pack('q&gt;', 0b0011111010)" | nc localhost 2181
+250
+              </pre>
+</dd>
         
 </dl>
 <p>Here's an example of the <strong>ruok</strong>

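The large hunk above documents the isro, gtmk, and stmk four-letter words. They are exercised the same way as the existing commands, by sending the word over a raw socket; the host and port below are placeholders.

    $ echo ruok | nc localhost 2181      # liveness check, replies "imok"
    $ echo isro | nc localhost 2181      # "rw" if read-write, "ro" if read-only
    $ echo gtmk | nc localhost 2181      # trace mask as a decimal long; 306 is the documented default 0b0100110010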
Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperAdmin.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperAdmin.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperAdmin.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperHierarchicalQuorums.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperHierarchicalQuorums.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperHierarchicalQuorums.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperInternals.html
==============================================================================
--- websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperInternals.html (original)
+++ websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperInternals.html Wed Jul 20 21:55:53 2016
@@ -598,7 +598,7 @@ message when that proposal is committed.
 </ul>
 <a name="sc_summary"></a>
 <h3 class="h4">Summary</h3>
-<p>So there you go. Why does it work? Specifically, why does is set of proposals 
+<p>So there you go. Why does it work? Specifically, why does a set of proposals 
 believed by a new leader always contain any proposal that has actually been committed? 
 First, all proposals have a unique zxid, so unlike other protocols, we never have 
 to worry about two different values being proposed for the same zxid; followers 

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperInternals.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperInternals.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperInternals.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperJMX.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperJMX.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperJMX.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperObservers.html
==============================================================================
--- websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperObservers.html (original)
+++ websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperObservers.html Wed Jul 20 21:55:53 2016
@@ -290,7 +290,7 @@ document.write("Last Published: " + docu
 <div class="section">
 <p>
       Two example use cases for Observers are listed below. In fact, wherever
-      you wish to scale the numbe of clients of your ZooKeeper ensemble, or
+      you wish to scale the number of clients of your ZooKeeper ensemble, or
       where you wish to insulate the critical part of an ensemble from the load
       of dealing with client requests, Observers are a good architectural
       choice.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperObservers.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperObservers.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperObservers.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperOver.html
==============================================================================
--- websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperOver.html (original)
+++ websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperOver.html Wed Jul 20 21:55:53 2016
@@ -329,7 +329,7 @@ document.write("Last Published: " + docu
 </table>
 <a name="Nodes+and+ephemeral+nodes"></a>
 <h3 class="h4">Nodes and ephemeral nodes</h3>
-<p>Unlike is standard file systems, each node in a ZooKeeper
+<p>Unlike standard file systems, each node in a ZooKeeper
       namespace can have data associated with it as well as children. It is
       like having a file-system that allows a file to also be a directory.
       (ZooKeeper was designed to store coordination data: status information,
@@ -353,9 +353,9 @@ document.write("Last Published: " + docu
 <a name="Conditional+updates+and+watches"></a>
 <h3 class="h4">Conditional updates and watches</h3>
 <p>ZooKeeper supports the concept of <em>watches</em>.
-      Clients can set a watch on a znodes. A watch will be triggered and
-      removed when the znode changes. When a watch is triggered the client
-      receives a packet saying that the znode has changed. And if the
+      Clients can set a watch on a znode. A watch will be triggered and
+      removed when the znode changes. When a watch is triggered, the client
+      receives a packet saying that the znode has changed. If the
       connection between the client and one of the Zoo Keeper servers is
       broken, the client will receive a local notification. These can be used
       to <em>[tbd]</em>.</p>
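The corrected paragraph above describes ZooKeeper's one-shot watches: a watch is triggered and removed when the znode changes. A minimal Java illustration of that behavior is sketched below; the path and handling are placeholders.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    // Reads /app/config and sets a watch on it; because watches are one-shot,
    // the handler re-reads the data (and thereby re-registers) when it fires.
    public class WatchExample {
        static void watchConfig(ZooKeeper zk) throws Exception {
            Stat stat = new Stat();
            byte[] data = zk.getData("/app/config", (WatchedEvent event) -> {
                try {
                    watchConfig(zk);        // one-shot watch fired; register again
                } catch (Exception ignored) { }
            }, stat);
            System.out.println("config version " + stat.getVersion()
                    + ", " + data.length + " bytes");
        }
    }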
@@ -487,7 +487,7 @@ document.write("Last Published: " + docu
       of the ZooKeeper service. With the exception of the request processor,
      each of
       the servers that make up the ZooKeeper service replicates its own copy
-      of each of components.</p>
+      of each of the components.</p>
 <table class="ForrestTable" cellspacing="1" cellpadding="4">
 <tr>
 <td>ZooKeeper Components</td>

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperOver.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperOver.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperOver.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperProgrammers.html
==============================================================================
--- websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperProgrammers.html (original)
+++ websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperProgrammers.html Wed Jul 20 21:55:53 2016
@@ -216,6 +216,9 @@ document.write("Last Published: " + docu
 <li>
 <a href="#Sequence+Nodes+--+Unique+Naming">Sequence Nodes -- Unique Naming</a>
 </li>
+<li>
+<a href="#Container+Nodes">Container Nodes</a>
+</li>
 </ul>
 </li>
 <li>
@@ -562,6 +565,20 @@ document.write("Last Published: " + docu
         (4bytes) maintained by the parent node, the counter will
         overflow when incremented beyond 2147483647 (resulting in a
         name "&lt;path&gt;-2147483647").</p>
+<a name="Container+Nodes"></a>
+<h4>Container Nodes</h4>
+<p>
+<strong>Added in 3.6.0</strong>
+</p>
+<p>ZooKeeper has the notion of container nodes. Container nodes are
+          special purpose nodes useful for recipes such as leader, lock, etc.
+          When the last child of a container is deleted, the container becomes
+          a candidate to be deleted by the server at some point in the future.</p>
+<p>Given this property, you should be prepared to get
+          KeeperException.NoNodeException when creating children inside of
+          container nodes. i.e. when creating child nodes inside of container nodes
+          always check for KeeperException.NoNodeException and recreate the container
+          node when it occurs.</p>
 <a name="sc_timeInZk"></a>
 <h3 class="h4">Time in ZooKeeper</h3>
 <p>ZooKeeper tracks time multiple ways:</p>
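The Container Nodes section added earlier in this hunk says to expect KeeperException.NoNodeException when creating children of a container and to recreate the container when that happens. A hedged Java sketch of that retry pattern follows; the paths are placeholders, and CreateMode.CONTAINER is assumed here as the create mode for container znodes.

    import org.apache.zookeeper.*;

    public class ContainerNodes {
        // Create a child under a container node, recreating the container if the
        // server already removed it after its last child was deleted.
        static String createInContainer(ZooKeeper zk, String container, String child,
                                        byte[] data)
                throws KeeperException, InterruptedException {
            while (true) {
                try {
                    return zk.create(container + "/" + child, data,
                            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
                } catch (KeeperException.NoNodeException e) {
                    // container vanished; recreate it and retry the child create
                    try {
                        zk.create(container, new byte[0],
                                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.CONTAINER);
                    } catch (KeeperException.NodeExistsException ignored) {
                        // another client recreated it first, which is fine
                    }
                }
            }
        }
    }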
@@ -665,6 +682,18 @@ document.write("Last Published: " + docu
 <li>
           
 <p>
+<strong>pzxid</strong>
+</p>
+
+          
+<p>The zxid of the change that last modified children of this znode.</p>
+        
+</li>
+
+        
+<li>
+          
+<p>
 <strong>ctime</strong>
 </p>
 
@@ -1379,6 +1408,16 @@ document.write("Last Published: " + docu
         IP.</p>
 </li>
 
+        
+<li>
+<p>
+<strong>x509</strong> uses the client
+        X500 Principal as an ACL ID identity. The ACL expression is the exact
+        X500 Principal name of a client. When using the secure port, clients
+        are automatically authenticated and their auth info for the x509 scheme
+        is set.</p>
+</li>
+
       
 </ul>
 <a name="ZooKeeper+C+client+API"></a>
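The hunk above adds the x509 ACL scheme, where the ACL expression is the exact X500 principal of the authenticated client. A hedged Java sketch of attaching such an ACL to a znode follows; the principal string and path are placeholders.

    import java.util.Collections;
    import org.apache.zookeeper.*;
    import org.apache.zookeeper.data.ACL;
    import org.apache.zookeeper.data.Id;

    public class X509AclExample {
        static void createRestricted(ZooKeeper zk)
                throws KeeperException, InterruptedException {
            // Only the client that authenticated over the secure port with this
            // exact X500 principal gets full rights on the znode.
            Id admin = new Id("x509",
                    "CN=zkadmin,OU=Ops,O=Example,L=City,ST=State,C=US");
            ACL acl = new ACL(ZooDefs.Perms.ALL, admin);
            zk.create("/secured", new byte[0],
                    Collections.singletonList(acl), CreateMode.PERSISTENT);
        }
    }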

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperProgrammers.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperProgrammers.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperProgrammers.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperQuotas.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperQuotas.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperQuotas.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperReconfig.pdf
==============================================================================
Binary files - no diff available.

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperStarted.html
==============================================================================
--- websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperStarted.html (original)
+++ websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperStarted.html Wed Jul 20 21:55:53 2016
@@ -523,22 +523,29 @@ numChildren = 0
       application is called a <em>quorum</em>, and in replicated
       mode, all servers in the quorum have copies of the same configuration
       file.</p>
-
 <div class="note">
 <div class="label">Note</div>
 <div class="content">
-<p>For replicated mode, a minimum of three servers are required, and it is
-      strongly recommended that you have an odd number of servers.  If you
-      only have two servers, then you are in a situation where if one of
-      them fails, there are not enough machines to form a majority quorum.
-      Two servers is inherently <strong>less</strong> stable than a single
-      server, because there are two single points of failure.</p>
+      
+<p>
+         For replicated mode, a minimum of three servers are required,
+         and it is strongly recommended that you have an odd number of
+         servers. If you only have two servers, then you are in a
+         situation where if one of them fails, there are not enough
+         machines to form a majority quorum. Two servers is inherently
+         <strong>less</strong>
+         stable than a single server, because there are two single
+         points of failure.
+      </p>
+   
 </div>
 </div>
-
-<p>The required <strong>conf/zoo.cfg</strong> file for replicated mode is
-      similar to the one used in standalone mode, but with a few differences.
-      Here is an example:</p>
+<p>
+      The required
+      <strong>conf/zoo.cfg</strong>
+      file for replicated mode is similar to the one used in standalone
+      mode, but with a few differences. Here is an example:
+   </p>
 <pre class="code">
 tickTime=2000
 dataDir=/var/lib/zookeeper
@@ -587,15 +594,15 @@ server.3=zoo3:2888:3888
         (in the above replicated example, running on a
         single <em>localhost</em>, you would still have
         three config files).</p>
-
-<p>Please be aware that setting up multiple servers on a single machine
-        will not create any redundancy.  If something were to happen
-        which caused the machine to die, all of the zookeeper servers
-        would be offline.  Full redundancy requires that each server have
-        its own machine.  It must be a completely separate physical server.
-        Multiple virtual machines on the same physical host are still
-        vulnerable to the complete failure of that host.</p>
-
+        
+<p>Please be aware that setting up multiple servers on a single
+            machine will not create any redundancy. If something were to
+            happen which caused the machine to die, all of the zookeeper
+            servers would be offline. Full redundancy requires that each
+            server have its own machine. It must be a completely separate
+            physical server. Multiple virtual machines on the same physical
+            host are still vulnerable to the complete failure of that host.</p>
+      
 </div>
 </div>
 <a name="Other+Optimizations"></a>
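The hunk above rewraps the replicated-mode discussion; the example zoo.cfg it refers to is only partially visible in this diff (tickTime and dataDir). For context, such a three-server configuration typically looks like the following, with host names and the data directory as placeholders:

    tickTime=2000
    dataDir=/var/lib/zookeeper
    clientPort=2181
    initLimit=5
    syncLimit=2
    server.1=zoo1:2888:3888
    server.2=zoo2:2888:3888
    server.3=zoo3:2888:3888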

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperStarted.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperStarted.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperStarted.pdf Wed Jul 20 21:55:53 2016 differ

Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperTutorial.html
==============================================================================
--- websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperTutorial.html (original)
+++ websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperTutorial.html Wed Jul 20 21:55:53 2016
@@ -373,14 +373,14 @@ a boolean flag that enables the process
 </pre>
 <p>
 Note that enter() throws both KeeperException and InterruptedException, so it is 
-the reponsability of the application to catch and handle such exceptions.</p>
+the responsibility of the application to catch and handle such exceptions.</p>
 <p>
 Once the computation is finished, a process calls leave() to leave the barrier. 
 First it deletes its corresponding node, and then it gets the children of the root 
 node. If there is at least one child, then it waits for a notification (obs: note 
 that the second parameter of the call to getChildren() is true, meaning that 
 ZooKeeper has to set a watch on the the root node). Upon reception of a notification, 
-it checks once more whether the root node has any child.</p>
+it checks once more whether the root node has any children.</p>
 <pre class="code">
         /**
          * Wait until all reach barrier
@@ -411,7 +411,7 @@ it checks once more whether the root nod
 <h2 class="h3">Producer-Consumer Queues</h2>
 <div class="section">
 <p>
-A producer-consumer queue is a distributed data estructure thata group of processes 
+A producer-consumer queue is a distributed data structure that groups of processes 
 use to generate and consume items. Producer processes create new elements and add 
 them to the queue. Consumer processes remove elements from the list, and process them. 
 In this implementation, the elements are simple integers. The queue is represented 

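The tutorial text above walks through the barrier's leave() logic: delete your own child znode, then wait until the root has no children, using a watch set by getChildren(). A hedged Java sketch of that loop follows; the znode names follow the tutorial's conventions, and it assumes the tutorial's Watcher notifies the shared mutex when a watch fires.

    import java.util.List;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;

    public class BarrierLeave {
        // Leave the barrier: remove our node, then block until every other
        // process has removed its node too (i.e. the root has no children).
        static void leave(ZooKeeper zk, String root, String name, Object mutex)
                throws KeeperException, InterruptedException {
            zk.delete(root + "/" + name, -1);          // -1 ignores the version
            while (true) {
                synchronized (mutex) {
                    // 'true' sets a watch; the default Watcher notifies 'mutex'
                    List<String> children = zk.getChildren(root, true);
                    if (children.isEmpty()) {
                        return;
                    }
                    mutex.wait();                       // woken by the watch event
                }
            }
        }
    }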
Modified: websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperTutorial.pdf
==============================================================================
Binary files websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperTutorial.pdf (original) and websites/staging/zookeeper/trunk/content/doc/trunk/zookeeperTutorial.pdf Wed Jul 20 21:55:53 2016 differ