Posted to commits@knox.apache.org by mo...@apache.org on 2017/02/10 16:31:09 UTC

svn commit: r1782487 [7/12] - in /knox: site/books/knox-0-12-0/ trunk/books/0.12.0/ trunk/books/0.12.0/dev-guide/

Added: knox/trunk/books/0.12.0/config_authn.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.12.0/config_authn.md?rev=1782487&view=auto
==============================================================================
--- knox/trunk/books/0.12.0/config_authn.md (added)
+++ knox/trunk/books/0.12.0/config_authn.md Fri Feb 10 16:31:08 2017
@@ -0,0 +1,159 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Authentication ###
+
+There are two types of providers supported in Knox for establishing a user's identity:
+
+1. Authentication Providers
+2. Federation Providers
+
+Authentication providers directly accept a user's credentials and validate them against a particular user store. Federation providers, on the other hand, validate a token that has been issued for the user by a trusted Identity Provider (IdP).
+
+The current release of Knox ships with an authentication provider based on the Apache Shiro project and is initially configured for BASIC authentication against an LDAP store. This has been specifically tested against Apache Directory Server and Active Directory.
+
+This section will cover the general approach to leveraging Shiro within the bundled provider including:
+
+1. General mapping of provider config to shiro.ini config
+2. Specific configuration for the bundled BASIC/LDAP configuration
+3. Some tips on what may need to be customized for your environment
+4. How to set up the use of LDAP over SSL (LDAPS)
+
+#### General Configuration for Shiro Provider ####
+
+As is described in the configuration section of this document, providers have a name-value based configuration - as is the common pattern in the rest of Hadoop.
+
+The following example shows the format of the configuration for a given provider:
+
+    <provider>
+        <role>authentication</role>
+        <name>ShiroProvider</name>
+        <enabled>true</enabled>
+        <param>
+            <name>{name}</name>
+            <value>{value}</value>
+        </param>
+    </provider>
+
+Internally, however, the Shiro provider currently expects a shiro.ini file in the WEB-INF directory of the cluster specific web application.
+
+The following example illustrates a configuration of the bundled BASIC/LDAP authentication config in a shiro.ini file:
+
+    [urls]
+    /**=authcBasic
+    [main]
+    ldapRealm=org.apache.shiro.realm.ldap.JndiLdapRealm
+    ldapRealm.contextFactory.authenticationMechanism=simple
+    ldapRealm.contextFactory.url=ldap://localhost:33389
+    ldapRealm.userDnTemplate=uid={0},ou=people,dc=hadoop,dc=apache,dc=org
+
+In order to fit into the context of an INI file format, at deployment time we interrogate the parameters provided in the provider configuration and parse the INI section out of the parameter names. The following provider config illustrates this approach. Notice that the section names in the above shiro.ini match the beginning of the param names that are in the following config:
+
+    <gateway>
+        <provider>
+            <role>authentication</role>
+            <name>ShiroProvider</name>
+            <enabled>true</enabled>
+            <param>
+                <name>main.ldapRealm</name>
+                <value>org.apache.shiro.realm.ldap.JndiLdapRealm</value>
+            </param>
+            <param>
+                <name>main.ldapRealm.userDnTemplate</name>
+                <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+            </param>
+            <param>
+                <name>main.ldapRealm.contextFactory.url</name>
+                <value>ldap://localhost:33389</value>
+            </param>
+            <param>
+                <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
+                <value>simple</value>
+            </param>
+            <param>
+                <name>urls./**</name>
+                <value>authcBasic</value>
+            </param>
+        </provider>
+    </gateway>
+
+This happens to be the way that we are currently configuring Shiro for BASIC/LDAP authentication. This same config approach may be used to achieve other authentication mechanisms or variations on this one. We have not, however, tested additional uses of it for this release.
+
+#### LDAP Configuration ####
+
+This section discusses the LDAP configuration used above for the Shiro Provider. Some of these configuration elements will need to be customized to reflect your deployment environment.
+
+**main.ldapRealm** - this element indicates the fully qualified class name of the Shiro realm to be used in authenticating the user. The class name provided by default in the sample is `org.apache.shiro.realm.ldap.JndiLdapRealm`. This implementation provides the ability to authenticate but by default has authorization disabled. In order to provide authorization - which Shiro sees as dependent on an LDAP schema that is specific to each organization - an extension of JndiLdapRealm is generally used to override and implement the doGetAuthorizationInfo method. In this particular release we are providing a simple authorization provider that can be used along with the Shiro authentication provider.
+
+**main.ldapRealm.userDnTemplate** - in order to bind a simple username to an LDAP server that generally requires a full distinguished name (DN), we must provide the template into which the simple username will be inserted. This template allows for the creation of a DN by injecting the simple username into the user identity portion of the DN (uid in the sample). **This element will need to be customized to reflect your deployment environment.** The template provided in the sample is only an example and is valid only within the LDAP schema distributed with Knox, which is represented by the users.ldif file in the `{GATEWAY_HOME}/conf` directory.
+
+**main.ldapRealm.contextFactory.url** - this element is the URL that represents the host and port of the LDAP server. It also includes the scheme of the protocol to use. This may be either ldap or ldaps depending on whether you are communicating with the LDAP server over SSL (highly recommended). **This element will need to be customized to reflect your deployment environment.**
+
+**main.ldapRealm.contextFactory.authenticationMechanism** - this element indicates the type of authentication that should be performed against the LDAP server. The current default value is `simple` which indicates a simple bind operation. This element should not need to be modified and no mechanism other than a simple bind has been tested for this particular release.
+
+**urls./**** - this element represents a single URL_Ant_Path_Expression and its value is the Shiro filter chain to apply to it. This particular sample indicates that all paths into the application have the same Shiro filter chain applied. The paths are relative to the application context path. The use of the value `authcBasic` here indicates that BASIC authentication is expected for every path into the application. Adding an additional Shiro filter to that chain for validating that the request isSecure() and over SSL can be achieved by changing the value to `ssl, authcBasic`, as sketched below. It is not likely that you need to change this element for your environment.
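+
+For example, if you also wanted to require SSL for every path, the corresponding provider parameter could look like the following sketch (whether you add the `ssl` filter depends on your deployment):
+
+    <param>
+        <name>urls./**</name>
+        <value>ssl, authcBasic</value>
+    </param>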
+
+#### Active Directory - Special Note ####
+
+You would use the LDAP configuration documented above to authenticate against Active Directory as well.
+
+Some Active Directory specific things to keep in mind:
+
+A typical AD main.ldapRealm.userDnTemplate value looks slightly different, for example:
+
+    cn={0},cn=users,DC=lab,DC=sample,dc=com
+
+Please compare this with a typical Apache DS main.ldapRealm.userDnTemplate value and make note of the difference:
+
+    uid={0},ou=people,dc=hadoop,dc=apache,dc=org
+
+If your AD is configured to authenticate based on just the cn and password and does not require a user DN, you do not have to specify a value for main.ldapRealm.userDnTemplate.
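+
+As an illustration, the relevant provider parameters for an AD deployment might look like the following sketch; the DN components, host name and port are placeholders that must be replaced with values from your own directory:
+
+    <param>
+        <name>main.ldapRealm.userDnTemplate</name>
+        <value>cn={0},cn=users,DC=lab,DC=sample,dc=com</value>
+    </param>
+    <param>
+        <name>main.ldapRealm.contextFactory.url</name>
+        <value>ldap://ad.lab.sample.com:389</value>
+    </param>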
+
+
+#### LDAP over SSL (LDAPS) Configuration ####
+In order to communicate with your LDAP server over SSL (again, highly recommended), you will need to modify the topology file in a couple of ways and possibly provision some keying material.
+
+1. **main.ldapRealm.contextFactory.url** must be changed to use the `ldaps` protocol scheme, and the port must be the SSL listener port on your LDAP server.
+2. Identity certificate (keypair) provisioned to LDAP server - your LDAP server specific documentation should indicate what is required for providing a cert or keypair to represent the LDAP server identity to connecting clients.
+3. Trusting the LDAP server's public key - if the LDAP server's identity certificate is issued by a well known and trusted certificate authority and is already represented in the JRE's cacerts truststore then you don't need to do anything for trusting the LDAP server's cert. If, however, the cert is self-signed or issued by an untrusted authority you will need to either add it to the cacerts keystore or to another truststore that you may direct Knox to utilize through a system property. A sketch of these changes follows this list.
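+
+A minimal sketch of the URL change, assuming a hypothetical LDAPS listener on port 636:
+
+    <param>
+        <name>main.ldapRealm.contextFactory.url</name>
+        <value>ldaps://localhost:636</value>
+    </param>
+
+To trust a self-signed LDAP server certificate, one option is to import it into the JRE cacerts truststore with keytool; the alias and certificate file name below are examples:
+
+    keytool -import -alias ldap-server -file ldap-server.crt \
+        -keystore $JAVA_HOME/jre/lib/security/cacerts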
+
+#### Session Configuration ####
+
+Knox maps each cluster topology to a web application and leverages standard JavaEE session management.
+
+To configure the session idle timeout for the topology, specify the value of the sessionTimeout parameter for the ShiroProvider in your topology file. If you do not specify a value for this parameter, it defaults to 30 minutes.
+
+The definition would look like the following in the topology file:
+
+    ...
+    <provider>
+        <role>authentication</role>
+        <name>ShiroProvider</name>
+        <enabled>true</enabled>
+        <param>
+            <!--
+            Session timeout in minutes. This is really idle timeout.
+            Defaults to 30 minutes, if the property value is not defined.
+            Current client authentication will expire if client idles
+            continuously for more than this value
+            -->
+            <name>sessionTimeout</name>
+            <value>30</value>
+        </param>
+    </provider>
+    ...
+
+At present, the ShiroProvider in Knox leverages the Java EE session to maintain authentication state for a user across requests using the JSESSIONID cookie. A client that has authenticated with Knox can therefore pass the JSESSIONID cookie with repeated requests, instead of submitting a userid/password with every request, as long as the session has not timed out. Presenting a valid session cookie in place of a userid/password also performs better because additional credential store lookups are avoided.
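+
+For example, with curl a client could capture the session cookie on the first authenticated request and replay it on subsequent requests; the credentials, host and path below are placeholders for whatever is valid in your deployment:
+
+    # First request: authenticate with BASIC credentials and save the JSESSIONID cookie
+    curl -i -k -u guest:guest-password -c /tmp/knox.cookies \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS'
+
+    # Subsequent requests: present the saved cookie instead of credentials
+    curl -i -k -b /tmp/knox.cookies \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS'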

Added: knox/trunk/books/0.12.0/config_authz.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.12.0/config_authz.md?rev=1782487&view=auto
==============================================================================
--- knox/trunk/books/0.12.0/config_authz.md (added)
+++ knox/trunk/books/0.12.0/config_authz.md Fri Feb 10 16:31:08 2017
@@ -0,0 +1,319 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Authorization ###
+
+#### Service Level Authorization ####
+
+The Knox Gateway has an out-of-the-box authorization provider that allows administrators to restrict access to the individual services within a Hadoop cluster.
+
+This provider utilizes a simple and familiar pattern of using ACLs to protect Hadoop resources by specifying users, groups and IP addresses that are permitted access.
+
+#### Configuration ####
+
+ACLs are bound to services within the topology descriptors by introducing the authorization provider with configuration like:
+
+    <provider>
+        <role>authorization</role>
+        <name>AclsAuthz</name>
+        <enabled>true</enabled>
+    </provider>
+
+The above configuration enables the authorization provider but does not indicate any ACLs yet, and therefore there is no restriction on accessing the Hadoop services. In order to indicate the resources to be protected and the specific users, groups or IPs to grant access, we need to provide parameters like the following:
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>username[,*|username...];group[,*|group...];ipaddr[,*|ipaddr...]</value>
+    </param>
+    
+where `{serviceName}` would need to be the name of a configured Hadoop service within the topology.
+
+NOTE: ipaddr is unique among the parts of the ACL in that you are able to specify a wildcard within an ipaddr to indicate that the remote address must begin with the string prior to the asterisk within the ipaddr ACL. For instance:
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;*;192.168.*</value>
+    </param>
+    
+This indicates that the request must come from an IP address that begins with '192.168.' in order to be granted access.
+
+Note also that configuration without any ACLs defined is equivalent to:
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;*;*</value>
+    </param>
+
+meaning: all users, groups and IPs have access.
+Each of the elements of the acl param support multiple values via comma separated list and the `*` wildcard to match any.
+
+For instance:
+
+    <param>
+        <name>webhdfs.acl</name>
+        <value>hdfs;admin;127.0.0.2,127.0.0.3</value>
+    </param>
+
+this configuration indicates that ALL of the following have to be satisfied to be granted access:
+
+1. The user name must be "hdfs" AND
+2. the user must be in the group "admin" AND
+3. the user must come from either 127.0.0.2 or 127.0.0.3
+
+This allows us to craft policy that restricts the members of a large group to a subset that should have access.
+Removing a user from the group will cause access to be denied even though their username may still be in the ACL.
+
+An additional configuration element may be used to alter the processing of the ACL to be OR instead of the default AND behavior:
+
+    <param>
+        <name>{serviceName}.acl.mode</name>
+        <value>OR</value>
+    </param>
+
+this processing behavior requires that the effective user satisfy one of the parts of the ACL definition in order to be granted access.
+For instance:
+
+    <param>
+        <name>webhdfs.acl</name>
+        <value>hdfs,guest;admin;127.0.0.2,127.0.0.3</value>
+    </param>
+
+You may also set the ACL processing mode at the top level for the topology. This essentially sets the default for the managed cluster.
+It may then be overridden at the service level as well.
+
+    <param>
+        <name>acl.mode</name>
+        <value>OR</value>
+    </param>
+
+this configuration indicates that ONE of the following must be satisfied to be granted access:
+
+1. The user is "hdfs" or "guest" OR
+2. the user is in "admin" group OR
+3. the request is coming from 127.0.0.2 or 127.0.0.3
+
+
+Following are a few concrete examples on how to use this feature.
+
+Note: In the examples below `{serviceName}` represents a real service name (e.g. WEBHDFS) and would be replaced with an actual service name in a real configuration.
+
+##### Usecases #####
+
+###### USECASE-1: Restrict access to specific Hadoop services to specific Users
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;*;*</value>
+    </param>
+
+###### USECASE-2: Restrict access to specific Hadoop services to specific Groups
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;admins;*</value>
+    </param>
+
+###### USECASE-3: Restrict access to specific Hadoop services to specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;*;127.0.0.1</value>
+    </param>
+
+###### USECASE-4: Restrict access to specific Hadoop services to specific Users OR users within specific Groups
+
+    <param>
+        <name>{serviceName}.acl.mode</name>
+        <value>OR</value>
+    </param>
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;admin;*</value>
+    </param>
+
+###### USECASE-5: Restrict access to specific Hadoop services to specific Users OR users from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl.mode</name>
+        <value>OR</value>
+    </param>
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;*;127.0.0.1</value>
+    </param>
+
+###### USECASE-6: Restrict access to specific Hadoop services to users within specific Groups OR from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl.mode</name>
+        <value>OR</value>
+    </param>
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;admin;127.0.0.1</value>
+    </param>
+
+###### USECASE-7: Restrict access to specific Hadoop services to specific Users OR users within specific Groups OR from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl.mode</name>
+        <value>OR</value>
+    </param>
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;admin;127.0.0.1</value>
+    </param>
+
+###### USECASE-8: Restrict access to specific Hadoop services to specific Users AND users within specific Groups
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;admin;*</value>
+    </param>
+
+###### USECASE-9: Restrict access to specific Hadoop services to specific Users AND users from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;*;127.0.0.1</value>
+    </param>
+
+###### USECASE-10: Restrict access to specific Hadoop services to users within specific Groups AND from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>*;admins;127.0.0.1</value>
+    </param>
+
+###### USECASE-11: Restrict access to specific Hadoop services to specific Users AND users within specific Groups AND from specific Remote IPs
+
+    <param>
+        <name>{serviceName}.acl</name>
+        <value>guest;admins;127.0.0.1</value>
+    </param>
+
+###### USECASE-12: Full example including identity assertion/principal mapping ######
+
+The principal mapping aspect of the identity assertion provider is important to understand in order to fully utilize the authorization features of this provider.
+
+This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend. It is fully documented in the Identity Assertion section of this guide.
+
+These additional mapping capabilities are used together with the authorization ACL policy.
+An example of a full topology that illustrates these together is below.
+
+    <topology>
+        <gateway>
+            <provider>
+                <role>authentication</role>
+                <name>ShiroProvider</name>
+                <enabled>true</enabled>
+                <param>
+                    <name>main.ldapRealm</name>
+                    <value>org.apache.shiro.realm.ldap.JndiLdapRealm</value>
+                </param>
+                <param>
+                    <name>main.ldapRealm.userDnTemplate</name>
+                    <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+                </param>
+                <param>
+                    <name>main.ldapRealm.contextFactory.url</name>
+                    <value>ldap://localhost:33389</value>
+                </param>
+                <param>
+                    <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
+                    <value>simple</value>
+                </param>
+                <param>
+                    <name>urls./**</name>
+                    <value>authcBasic</value>
+                </param>
+            </provider>
+            <provider>
+                <role>identity-assertion</role>
+                <name>Default</name>
+                <enabled>true</enabled>
+                <param>
+                    <name>principal.mapping</name>
+                    <value>guest=hdfs;</value>
+                </param>
+                <param>
+                    <name>group.principal.mapping</name>
+                    <value>*=users;hdfs=admin</value>
+                </param>
+            </provider>
+            <provider>
+                <role>authorization</role>
+                <name>AclsAuthz</name>
+                <enabled>true</enabled>
+                <param>
+                    <name>acl.mode</name>
+                    <value>OR</value>
+                </param>
+                <param>
+                    <name>webhdfs.acl.mode</name>
+                    <value>AND</value>
+                </param>
+                <param>
+                    <name>webhdfs.acl</name>
+                    <value>hdfs;admin;127.0.0.2,127.0.0.3</value>
+                </param>
+                <param>
+                    <name>webhcat.acl</name>
+                    <value>hdfs;admin;127.0.0.2,127.0.0.3</value>
+                </param>
+            </provider>
+            <provider>
+                <role>hostmap</role>
+                <name>static</name>
+                <enabled>true</enabled>
+                <param>
+                    <name>localhost</name>
+                    <value>sandbox,sandbox.hortonworks.com</value>
+                </param>
+            </provider>
+        </gateway>
+
+        <service>
+            <role>JOBTRACKER</role>
+            <url>rpc://localhost:8050</url>
+        </service>
+
+        <service>
+            <role>WEBHDFS</role>
+            <url>http://localhost:50070/webhdfs</url>
+        </service>
+
+        <service>
+            <role>WEBHCAT</role>
+            <url>http://localhost:50111/templeton</url>
+        </service>
+
+        <service>
+            <role>OOZIE</role>
+            <url>http://localhost:11000/oozie</url>
+        </service>
+
+        <service>
+            <role>WEBHBASE</role>
+            <url>http://localhost:8080</url>
+        </service>
+
+        <service>
+            <role>HIVE</role>
+            <url>http://localhost:10001/cliservice</url>
+        </service>
+    </topology>

Added: knox/trunk/books/0.12.0/config_ha.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.12.0/config_ha.md?rev=1782487&view=auto
==============================================================================
--- knox/trunk/books/0.12.0/config_ha.md (added)
+++ knox/trunk/books/0.12.0/config_ha.md Fri Feb 10 16:31:08 2017
@@ -0,0 +1,127 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### High Availability ###
+
+This describes how Knox itself can be made highly available.
+
+#### Configure Knox instances ####
+
+All Knox instances must be synced to use the same topology credential keystores.
+These files are located under `{GATEWAY_HOME}/conf/security/keystores/{TOPOLOGY_NAME}-credentials.jceks`.
+They are generated after the first topology deployment.
+Currently these files need to be synced manually.
+Here are the steps to sync the topology credential keystores:
+
+1. Choose a Knox instance that will be the source for topology credential keystores. Let's call it _keystores master_
+2. Replace the topology credential keystores in the other Knox instances with the topology credential keystores from the _keystores master_ (for example via `scp`, as sketched after this list)
+3. Restart Knox instances
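+
+A minimal sketch of step 2, assuming a topology named sandbox and additional Knox hosts named knox2 and knox3 (all hypothetical); any other file transfer mechanism works equally well:
+
+    # Run on the keystores master; repeat for each additional Knox instance
+    scp {GATEWAY_HOME}/conf/security/keystores/sandbox-credentials.jceks \
+        knox@knox2:{GATEWAY_HOME}/conf/security/keystores/
+    scp {GATEWAY_HOME}/conf/security/keystores/sandbox-credentials.jceks \
+        knox@knox3:{GATEWAY_HOME}/conf/security/keystores/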
+
+#### High Availability with Apache HTTP Server + mod_proxy + mod_proxy_balancer ####
+
+##### 1 - Requirements #####
+
+###### openssl-devel ######
+
+openssl-devel is required for Apache Module mod_ssl.
+
+    sudo yum install openssl-devel
+
+###### Apache HTTP Server ######
+
+Apache HTTP Server 2.4.6 or later is required. See this document for installing and setting up Apache HTTP Server: http://httpd.apache.org/docs/2.4/install.html
+
+Hint: pass `--enable-ssl` to the `./configure` command to enable the generation of the Apache Module _mod_ssl_.
+
+###### Apache Module mod_proxy ######
+
+See this document for setting up Apache Module mod_proxy: http://httpd.apache.org/docs/2.4/mod/mod_proxy.html
+
+###### Apache Module mod_proxy_balancer ######
+
+See this document for setting up Apache Module mod_proxy_balancer: http://httpd.apache.org/docs/2.4/mod/mod_proxy_balancer.html
+
+###### Apache Module mod_ssl ######
+
+See this document for setting up Apache Module mod_ssl: http://httpd.apache.org/docs/2.4/mod/mod_ssl.html
+
+##### 2 - Configuration example #####
+
+###### Generate certificate for Apache HTTP Server ######
+
+See this document for an example: http://www.akadia.com/services/ssh_test_certificate.html
+
+By convention, Apache HTTP Server and Knox certificates are put into the /etc/apache2/ssl/ folder.
+
+###### Update Apache HTTP Server configuration file ######
+
+This file is located at {APACHE_HOME}/conf/httpd.conf.
+
+The following directives have to be added or uncommented in the configuration file:
+
+* LoadModule proxy_module modules/mod_proxy.so
+* LoadModule proxy_http_module modules/mod_proxy_http.so
+* LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
+* LoadModule ssl_module modules/mod_ssl.so
+* LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
+* LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so
+* LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so
+* LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so
+* LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
+
+Also, the following lines have to be added to the file. Replace the placeholders (${...}) with real data:
+
+    Listen 443
+    <VirtualHost *:443>
+       SSLEngine On
+       SSLProxyEngine On
+       SSLCertificateFile ${PATH_TO_CERTIFICATE_FILE}
+       SSLCertificateKeyFile ${PATH_TO_CERTIFICATE_KEY_FILE}
+       SSLProxyCACertificateFile ${PATH_TO_PROXY_CA_CERTIFICATE_FILE}
+
+       ProxyRequests Off
+       ProxyPreserveHost Off
+
+       RequestHeader set X-Forwarded-Port "443"
+       Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
+       <Proxy balancer://mycluster>
+         BalancerMember ${HOST_#1} route=1
+         BalancerMember ${HOST_#2} route=2
+         ...
+         BalancerMember ${HOST_#N} route=N
+
+         ProxySet failontimeout=On lbmethod=${LB_METHOD} stickysession=ROUTEID 
+       </Proxy>
+
+       ProxyPass / balancer://mycluster/
+       ProxyPassReverse / balancer://mycluster/
+    </VirtualHost>
+
+Note:
+
+* SSLProxyEngine enables SSL between Apache HTTP Server and Knox instances;
+* SSLCertificateFile and SSLCertificateKeyFile have to point to the certificate data of the Apache HTTP Server. Users will use this certificate for communications with the Apache HTTP Server;
+* SSLProxyCACertificateFile has to point to Knox certificates.
+
+###### Start/stop Apache HTTP Server ######
+
+    APACHE_HOME/bin/apachectl -k start
+    APACHE_HOME/bin/apachectl -k stop
+
+###### Verify ######
+
+Use Knox samples.
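+
+For example, one of the standard Knox samples can be exercised through the Apache HTTP Server front end instead of an individual Knox instance; the host and credentials below are placeholders:
+
+    curl -i -k -u guest:guest-password \
+        'https://{APACHE_HTTP_SERVER_HOST}/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS'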

Added: knox/trunk/books/0.12.0/config_hadoop_auth_provider.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.12.0/config_hadoop_auth_provider.md?rev=1782487&view=auto
==============================================================================
--- knox/trunk/books/0.12.0/config_hadoop_auth_provider.md (added)
+++ knox/trunk/books/0.12.0/config_hadoop_auth_provider.md Fri Feb 10 16:31:08 2017
@@ -0,0 +1,98 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### HadoopAuth Authentication Provider ###
+The HadoopAuth authentication provider for Knox integrates the use of the Apache Hadoop module for SPNEGO and delegation token based authentication. This introduces the same authentication pattern used across much of the Hadoop ecosystem to Apache Knox and allows clients to use the strong authentication and SSO capabilities of Kerberos.
+
+#### Configuration ####
+##### Overview #####
+As with all providers in the Knox gateway, the HadoopAuth provider is configured through provider params. The configuration parameters are the same parameters used within Apache Hadoop for the same capabilities. In this section, we provide an example configuration and description of each of the parameters. We do encourage the reader to refer to the Hadoop documentation for this as well. (see http://hadoop.apache.org/docs/current/hadoop-auth/Configuration.html)
+
+One of the interesting things to note about this configuration is the use of the config.prefix parameter. In Hadoop there may be multiple components with their own specific configuration values for these parameters and since they may get mixed into the same Configuration object - there needs to be a way to identify the component specific values. The config.prefix parameter is used for this and is prepended to each of the configuration parameters for this provider. Below, you see an example configuration where the value for config.prefix happens to be 'hadoop.auth.config'. You will also notice that this same value is prepended to the name of the rest of the configuration parameters.
+
+    <provider>
+      <role>authentication</role>
+      <name>HadoopAuth</name>
+      <enabled>true</enabled>
+      <param>
+        <name>config.prefix</name>
+        <value>hadoop.auth.config</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.signature.secret</name>
+        <value>knox-signature-secret</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.type</name>
+        <value>kerberos</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.simple.anonymous.allowed</name>
+        <value>false</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.token.validity</name>
+        <value>1800</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.cookie.domain</name>
+        <value>novalocal</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.cookie.path</name>
+        <value>gateway/default</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.kerberos.principal</name>
+        <value>HTTP/lmccay-knoxft-24m-r6-sec-160422-1327-2.novalocal@EXAMPLE.COM</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.kerberos.keytab</name>
+        <value>/etc/security/keytabs/spnego.service.keytab</value>
+      </param>
+      <param>
+        <name>hadoop.auth.config.kerberos.name.rules</name>
+        <value>DEFAULT</value>
+      </param>
+    </provider>
+  
+
+#### Descriptions ####
+The following table describes the configuration parameters for the HadoopAuth provider:
+
+###### Config
+
+Name | Description | Default
+---------|-------------|---------
+config.prefix|If specified, all other configuration parameter names must start with the prefix.|none
+signature.secret|This is the secret used to sign the delegation token in the hadoop.auth cookie. This same secret needs to be used across all instances of the Knox gateway in a given cluster. Otherwise, the delegation token will fail validation and authentication will be repeated for each request.|a simple random number
+type|This parameter needs to be set to kerberos.|none, would throw exception
+simple.anonymous.allowed|This should always be false for a secure deployment.|true
+token.validity|The validity (in seconds) of the generated authentication token. This is also used for the rollover interval when signer.secret.provider is set to random or zookeeper.|36000 seconds
+cookie.domain|domain to use for the HTTP cookie that stores the authentication token|null
+cookie.path|path to use for the HTTP cookie that stores the authentication token|null
+kerberos.principal|The web-application Kerberos principal name. The Kerberos principal name must start with HTTP/.... For example: HTTP/localhost@LOCALHOST|null
+kerberos.keytab|The path to the keytab file containing the credentials for the kerberos principal. For example: /Users/lmccay/lmccay.keytab|null
+kerberos.name.rules|The name of the ruleset for extracting the username from the kerberos principal.|DEFAULT
+
+###### REST Invocation
+Once a user logs in with kinit, their Kerberos session may be used across client requests with tools like curl.
+The following curl command can be used to request a directory listing from HDFS while authenticating with SPNEGO via the --negotiate flag:
+
+    curl -k -i --negotiate -u : https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
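+
+As a fuller sketch of the flow, a Kerberos ticket is typically obtained with kinit before invoking curl; the principal below is only an example and must match a user known to your KDC:
+
+    # Obtain a Kerberos ticket for the calling user (example principal)
+    kinit guest@EXAMPLE.COM
+
+    # Invoke the gateway; --negotiate performs SPNEGO using the ticket cache
+    curl -k -i --negotiate -u : 'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS'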
+
+

Added: knox/trunk/books/0.12.0/config_id_assertion.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.12.0/config_id_assertion.md?rev=1782487&view=auto
==============================================================================
--- knox/trunk/books/0.12.0/config_id_assertion.md (added)
+++ knox/trunk/books/0.12.0/config_id_assertion.md Fri Feb 10 16:31:08 2017
@@ -0,0 +1,275 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Identity Assertion ###
+The identity assertion provider within Knox plays the critical role of communicating the identity principal to be used within the Hadoop cluster to represent the identity that has been authenticated at the gateway.
+
+The general responsibility of the identity assertion provider is to interrogate the current Java Subject that has been established by the authentication or federation provider and:
+
+1. determine whether it matches any principal mapping rules and apply them appropriately
+2. determine whether it matches any group principal mapping rules and apply them
+3. if it is determined that the principal will be impersonating another through a principal mapping rule, then a Subject.doAs is required so that providers farther downstream can determine the appropriate effective principal name and groups for the user
+
+#### Default Identity Assertion Provider ####
+The following configuration is required for asserting the user's identity to the Hadoop cluster using Pseudo or Simple "authentication" and for using Kerberos/SPNEGO for secure clusters.
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>Default</name>
+        <enabled>true</enabled>
+    </provider>
+
+This particular configuration indicates that the Default identity assertion provider is enabled and that there are no principal mapping rules to apply to identities flowing from the authentication in the gateway to the backend Hadoop cluster services. The primary principal of the current subject will therefore be asserted via a query parameter or as a form parameter - i.e. `?user.name={primaryPrincipal}`
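+
+To illustrate how that assertion surfaces on the wire, a request authenticated at the gateway as the user guest would, under this default configuration, be forwarded to the backend service with the effective principal appended as the user.name query parameter; the hosts and paths below are placeholders:
+
+    # Client request to the gateway (authenticated as guest)
+    https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+
+    # Corresponding request asserted to the backend WebHDFS service
+    http://{WEBHDFS_HOST}:50070/webhdfs/v1/tmp?op=LISTSTATUS&user.name=guest
+
+The next configuration extends the same Default provider with principal and group mapping rules: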
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>Default</name>
+        <enabled>true</enabled>
+        <param>
+            <name>principal.mapping</name>
+            <value>guest=hdfs;</value>
+        </param>
+        <param>
+            <name>group.principal.mapping</name>
+            <value>*=users;hdfs=admin</value>
+        </param>
+    </provider>
+
+This configuration identifies the same identity assertion provider but does provide principal and group mapping rules. In this case, when a user is authenticated as "guest" their identity is actually asserted to the Hadoop cluster as "hdfs". In addition, since there are group principal mappings defined, they will also be considered a member of the groups "users" and "admin". In this particular example the wildcard "*" is used to indicate that all authenticated users need to be considered members of the "users" group and that only the user "hdfs" is mapped to be a member of the "admin" group.
+
+**NOTE: These group memberships are currently only meaningful for Service Level Authorization using the AclsAuthorization provider. The groups are not currently asserted to the Hadoop cluster at this time. See the Authorization section within this guide to see how this is used.**
+
+The principal mapping aspect of the identity assertion provider is important to understand in order to fully utilize the authorization features of this provider.
+
+This feature allows us to map the authenticated principal to a runas or impersonated principal to be asserted to the Hadoop services in the backend.
+
+When a principal mapping is defined that results in an impersonated principal, this impersonated principal is then the effective principal.
+
+If there is no mapping to another principal then the authenticated or primary principal is then the effective principal.
+
+#### Principal Mapping ####
+
+    <param>
+        <name>principal.mapping</name>
+        <value>{primaryPrincipal}[,...]={impersonatedPrincipal}[;...]</value>
+    </param>
+
+For instance:
+
+    <param>
+        <name>principal.mapping</name>
+        <value>guest=hdfs</value>
+    </param>
+
+For multiple mappings:
+
+    <param>
+        <name>principal.mapping</name>
+        <value>guest,alice=hdfs;mary=alice2</value>
+    </param>
+
+#### Group Principal Mapping ####
+
+    <param>
+        <name>group.principal.mapping</name>
+        <value>{userName[,*|userName...]}={groupName[,groupName...]}[,...]</value>
+    </param>
+
+For instance:
+
+    <param>
+        <name>group.principal.mapping</name>
+        <value>*=users;hdfs=admin</value>
+    </param>
+
+this configuration indicates that all (*) authenticated users are members of the "users" group and that user "hdfs" is a member of the admin group. Group principal mapping has been added along with the authorization provider described in this document.
+
+#### Concat Identity Assertion Provider ####
+The Concat identity assertion provider allows for composition of a new user principal through the concatenation of optionally configured prefix and/or suffix provider parameters. This is a useful assertion provider for converting an incoming identity into a disambiguated identity within the Hadoop cluster based on what topology is used to access Hadoop.
+
+The following configuration would convert the user principal into a value that represents a domain specific identity where the identities used inside the Hadoop cluster represent this same separation.
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>Concat</name>
+        <enabled>true</enabled>
+        <param>
+            <name>concat.suffix</name>
+            <value>_domain1</value>
+        </param>
+    </provider>
+
+The above configuration will result in all user interactions through that topology having their principal communicated to the Hadoop cluster with a domain designator concatenated to the username. This is possibly useful for multi-tenant deployment scenarios.
+
+In addition to the concat.suffix parameter, the provider supports the setting of a prefix through a concat.prefix parameter, as sketched below.
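+
+A minimal sketch using both parameters; the prefix and suffix values are placeholders for whatever naming convention fits your environment:
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>Concat</name>
+        <enabled>true</enabled>
+        <param>
+            <name>concat.prefix</name>
+            <value>tenant1_</value>
+        </param>
+        <param>
+            <name>concat.suffix</name>
+            <value>_domain1</value>
+        </param>
+    </provider>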
+
+#### SwitchCase Identity Assertion Provider ####
+The SwitchCase identity assertion provider solves issues where downstream ecosystem components require user and group principal names to be a specific case.
+An example of how this provider is enabled and configured within the \<gateway> section of a topology file is shown below.
+This particular example will switch user principals names to lower case and group principal names to upper case.
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>SwitchCase</name>
+        <param>
+            <name>principal.case</name>
+            <value>lower</value>
+        </param>
+        <param>
+            <name>group.principal.case</name>
+            <value>upper</value>
+        </param>
+        <enabled>true</enabled>
+    </provider>
+
+These are the configuration parameters used to control the behavior of the provider.
+
+Param                | Description
+---------------------|------------
+principal.case       | The case mapping of user principal names.  Choices are: lower, upper, none.  Defaults to lower.
+group.principal.case | The case mapping of group principal names.  Choices are: lower, upper, none. Defaults to the setting of principal.case.
+
+If no parameters are provided, the defaults will result in both user and group principal names being switched to lower case.
+A setting of "none" or anything other than "upper" or "lower" leaves the case of the principal name unchanged.
+
+#### Regular Expression Identity Assertion Provider ####
+The regular expression identity assertion provider allows incoming identities to be translated using a regular expression, template and lookup table.
+This will probably be most useful in conjunction with the HeaderPreAuth federation provider.
+
+There are three configuration parameters used to control the behavior of the provider.
+
+Param | Description
+------|-----------
+input | This is a regular expression that will be applied to the incoming identity. The most critical part of the regular expression is the group notation within the expression. In regular expressions, groups are expressed within parenthesis. For example in the regular expression "(.*)@(.*?)\..*" there are two groups. When this regular expression is applied to "nobody@us.imaginary.tld" group 1 matches "nobody" and group 2 matches "us". 
+output| This is a template that assembles the result identity. The result is assembled from the static text and the matched groups from the input regular expression. In addition, the matched group values can be looked up in the lookup table. An output value of "{1}_{2}" will result in "nobody_us".
+lookup| This lookup table provides a simple (albeit limited) way to translate text in the incoming identities. This configuration takes the form of "=" separated name-value pairs separated by ";". For example a lookup setting is "us=USA;ca=CANADA". The lookup is invoked in the output setting by surrounding the desired group number in square brackets (i.e. []). Putting it all together, an output setting of "{1}_[{2}]" combined with input of "(.*)@(.*?)\..*" and lookup of "us=USA;ca=CANADA" will turn "nobody@us.imaginary.tld" into "nobody_USA".
+
+Within the topology file the provider configuration might look like this.
+
+    <provider>
+        <role>identity-assertion</role>
+        <name>Regex</name>
+        <enabled>true</enabled>
+        <param>
+            <name>input</name>
+            <value>(.*)@(.*?)\..*</value>
+        </param>
+        <param>
+            <name>output</name>
+            <value>{1}_{[2]}</value>
+        </param>
+        <param>
+            <name>lookup</name>
+            <value>us=USA;ca=CANADA</value>
+        </param>
+    </provider>  
+
+Using curl with this type of configuration might produce the following results. 
+
+    curl -k --header "SM_USER: nobody@us.imaginary.tld" 'https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY'
+    
+    {"Path":"/user/member_USA"}
+    
+    curl -k --header "SM_USER: nobody@ca.imaginary.tld" 'https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY'
+    
+    {"Path":"/user/member_CANADA"}
+
+### Hadoop Group Lookup Provider ###
+
+An identity assertion provider that looks up the group membership of authenticated users using Hadoop's group mapping service (GroupMappingServiceProvider).
+
+This allows existing investments in Hadoop to be leveraged within Knox and used within the access control policy enforcement at the perimeter.
+
+The 'role' for this provider is 'identity-assertion' and name is 'HadoopGroupProvider'.
+
+        <provider>
+            <role>identity-assertion</role>
+            <name>HadoopGroupProvider</name>
+            <enabled>true</enabled>
+            <param> ... </param>
+        </provider>
+
+### Configuration ###
+
+All the configuration for 'HadoopGroupProvider' resides in the provider section in a gateway topology file.
+The 'hadoop.security.group.mapping' property determines the implementation. Some of the valid implementations are as follows:
+#### org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback
+
+This is the default implementation and will be picked up if 'hadoop.security.group.mapping' is not specified. This implementation will determine if the Java Native Interface (JNI) is available. If JNI is available, the implementation will use the API within Hadoop to resolve a list of groups for a user. If JNI is not available then the shell implementation, org.apache.hadoop.security.ShellBasedUnixGroupsMapping, is used, which shells out with the 'bash -c groups' command (for a Linux/Unix environment) or the 'net group' command (for a Windows environment) to resolve a list of groups for a user.
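+
+If you prefer to select the shell-based implementation explicitly rather than relying on the fallback, the property mentioned above can be set in the provider configuration; this is only a sketch:
+
+        <param>
+            <name>hadoop.security.group.mapping</name>
+            <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>
+        </param>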
+
+#### org.apache.hadoop.security.LdapGroupsMapping
+
+This implementation connects directly to an LDAP server to resolve the list of groups. However, this should only be used if the required groups reside exclusively in LDAP, and are not materialized on the Unix servers.
+
+For more information on the implementation and properties refer to Hadoop Group Mapping.
+
+### Example ###
+
+The following example snippet works with the demo ldap server that ships with Apache Knox. Replace the existing 'Default' identity-assertion provider with the one below (HadoopGroupProvider).
+
+        <provider>
+            <role>identity-assertion</role>
+            <name>HadoopGroupProvider</name>
+            <enabled>true</enabled>
+            <param>
+                <name>hadoop.security.group.mapping</name>
+                <value>org.apache.hadoop.security.LdapGroupsMapping</value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.bind.user</name>
+                <value>uid=tom,ou=people,dc=hadoop,dc=apache,dc=org</value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.bind.password</name>
+                <value>tom-password</value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.url</name>
+                <value>ldap://localhost:33389</value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.base</name>
+                <value></value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.search.filter.user</name>
+                <value>(&amp;(|(objectclass=person)(objectclass=applicationProcess))(cn={0}))</value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.search.filter.group</name>
+                <value>(objectclass=groupOfNames)</value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.search.attr.member</name>
+                <value>member</value>
+            </param>
+            <param>
+                <name>hadoop.security.group.mapping.ldap.search.attr.group.name</name>
+                <value>cn</value>
+            </param>
+        </provider>
+
+
+Here, we are working with the demo ldap server running at 'ldap://localhost:33389' which populates some dummy users for testing that we will use in this example. This example uses the user 'tom' for LDAP binding.  If you have different LDAP/AD settings you will have to update the properties accordingly. 
+
+Let's test our setup using the following command (assuming the gateway is started and listening on localhost:8443). Note that we are using credentials for the user 'sam' along with the command. 
+
+        curl -i -k -u sam:sam-password -X GET 'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS' 
+
+The command should be executed successfully and you should see the groups 'scientist' and 'analyst', to which the user 'sam' belongs, in gateway-audit.log, i.e.
+
+        ||a99aa0ab-fc06-48f2-8df3-36e6fe37c230|audit|WEBHDFS|sam|||identity-mapping|principal|sam|success|Groups: [scientist, analyst]

Added: knox/trunk/books/0.12.0/config_kerberos.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.12.0/config_kerberos.md?rev=1782487&view=auto
==============================================================================
--- knox/trunk/books/0.12.0/config_kerberos.md (added)
+++ knox/trunk/books/0.12.0/config_kerberos.md Fri Feb 10 16:31:08 2017
@@ -0,0 +1,68 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Secure Clusters ###
+
+See the Hadoop documentation for setting up a secure Hadoop cluster
+http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html
+
+Once you have a Hadoop cluster that is using Kerberos for authentication, you have to do the following to configure Knox to work with that cluster.
+
+#### Create Unix account for Knox on Hadoop master nodes ####
+
+    useradd -g hadoop knox
+
+#### Create Kerberos principal, keytab for Knox ####
+
+One way of doing this, assuming your KDC realm is EXAMPLE.COM, is to ssh into your host running the KDC and execute `kadmin.local`.
+That will result in an interactive session in which you can execute commands.
+
+ssh into your host running the KDC:
+
+    kadmin.local
+    add_principal -randkey knox/knox@EXAMPLE.COM
+    ktadd -k knox.service.keytab -norandkey knox/knox@EXAMPLE.COM
+    exit
+
+
+#### Copy knox keytab to Knox host ####
+
+Add a Unix account for the knox user on the Knox host
+
+    useradd -g hadoop knox
+
+Copy the knox.service.keytab created on the KDC host to `{GATEWAY_HOME}/conf/knox.service.keytab` on your Knox host.
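+
+One way to copy the keytab is with scp; the host name below is a placeholder for your Knox host:
+
+    scp knox.service.keytab knox@{KNOX_HOST}:{GATEWAY_HOME}/conf/knox.service.keytab
+
+Then, on the Knox host, set ownership and permissions on the keytab: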
+
+    chown knox knox.service.keytab
+    chmod 400 knox.service.keytab
+
+#### Update `krb5.conf` at `{GATEWAY_HOME}/conf/krb5.conf` on Knox host ####
+
+You could copy the `{GATEWAY_HOME}/templates/krb5.conf` file provided in the Knox binary download and customize it to suit your cluster.
+
+#### Update `krb5JAASLogin.conf` at `/etc/knox/conf/krb5JAASLogin.conf` on Knox host ####
+
+You could copy the `{GATEWAY_HOME}/templates/krb5JAASLogin.conf` file provided in the Knox binary download and customize it to suit your cluster.
+
+#### Update `gateway-site.xml` on Knox host ####
+
+Update `conf/gateway-site.xml` in your Knox installation and set the value of `gateway.hadoop.kerberos.secured` to true.
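+
+A sketch of that setting in the Hadoop-style configuration format used by gateway-site.xml:
+
+    <property>
+        <name>gateway.hadoop.kerberos.secured</name>
+        <value>true</value>
+    </property>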
+
+#### Restart Knox ####
+
+After you complete the above configuration and restart Knox, Knox will use SPNEGO to authenticate with Hadoop services and Oozie.
+There is no change in the way you make calls to Knox whether you use curl or the Knox DSL.

Added: knox/trunk/books/0.12.0/config_knox_sso.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.12.0/config_knox_sso.md?rev=1782487&view=auto
==============================================================================
--- knox/trunk/books/0.12.0/config_knox_sso.md (added)
+++ knox/trunk/books/0.12.0/config_knox_sso.md Fri Feb 10 16:31:08 2017
@@ -0,0 +1,87 @@
+## KnoxSSO Setup and Configuration
+
+### Introduction
+---
+
+Authentication of the Hadoop component UIs, and those of the overall ecosystem, is usually limited to Kerberos (which requires SPNEGO to be configured for the user's browser) and simple/pseudo. This often results in the UIs not being secured - even in secured clusters. This is where KnoxSSO provides value by providing WebSSO capabilities to the Hadoop cluster.
+
+By leveraging the hadoop-auth module in Hadoop common, we have introduced the ability to consume a common SSO cookie for web UIs while retaining the non-web browser authentication through Kerberos/SPNEGO. We do this by extending the AltKerberosAuthenticationHandler class which provides the user agent based multiplexing.
+
+We also provide integration guidance within the developer's guide for other applications to be able to participate in these SSO capabilities.
+
+The flexibility of the Apache Knox authentication and federation providers allows KnoxSSO to provide a normalization of authentication events through token exchange resulting in a common JWT (JSON WebToken) based token.
+
+KnoxSSO provides an abstraction for integrating any number of authentication systems and SSO solutions and enables participating web applications to scale to those solutions more easily. Without the token exchange capabilities offered by KnoxSSO each component UI would need to integrate with each desired solution on its own. With KnoxSSO they only need to integrate with the single solution and common token.
+
+This document describes the overall setup requirements for KnoxSSO and participating applications.
+
+### KnoxSSO Setup
+
+#### knoxsso.xml Topology
+To enable KnoxSSO, we use the KnoxSSO topology for exposing an API that can be used to abstract the use of any number of enterprise or customer IDPs. By default, the knoxsso.xml file is configured for using the simple KnoxAuth application for form-based authentication against LDAP/AD. By swapping the Shiro authentication provider that is there out-of-the-box with another authentication or federation provider, an admin may leverage many of the existing providers for SSO for the UI components that participate in KnoxSSO.
+
+Just as with any Knox service, the KNOXSSO service is protected by the gateway providers defined above it. In this case, the ShiroProvider is taking care of HTTP Basic Auth against LDAP for us. Once the user authenticates the request processing continues to the KNOXSSO service that will create the required cookie and do the necessary redirects.
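+
+The following is an illustrative sketch of such a topology (provider parameters trimmed; values are examples only and the bundled knoxsso.xml template should be treated as authoritative):
+
+    <topology>
+        <gateway>
+            <provider>
+                <role>authentication</role>
+                <name>ShiroProvider</name>
+                <enabled>true</enabled>
+                <!-- LDAP realm parameters as described in the Authentication section -->
+            </provider>
+        </gateway>
+        <service>
+            <role>KNOXSSO</role>
+            <param>
+                <name>knoxsso.cookie.secure.only</name>
+                <value>true</value>
+            </param>
+            <param>
+                <name>knoxsso.token.ttl</name>
+                <value>36000000</value>
+            </param>
+        </service>
+    </topology>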
+
+The knoxsso.xml topology will result in a KnoxSSO URL that looks something like:
+
+    https://{gateway_host}:{gateway_port}/gateway/knoxsso/api/v1/websso
+
+This URL is needed when configuring applications that participate in KnoxSSO for a given deployment. We will refer to this as the Provider URL.
+
+#### KnoxSSO Configuration Parameters
+
+Parameter                        | Description | Default
+-------------------------------- |------------ |----------- 
+knoxsso.cookie.name       | This optional setting allows the admin to set the name of the SSO cookie used to represent a successful authentication event. | hadoop-jwt
+knoxsso.cookie.secure.only       | This determines whether the browser is allowed to send the cookie over unsecured channels. This should always be set to true in production systems. If, during development, a relying party is not running SSL then you can turn this off. Running with it off exposes the cookie and underlying token for capture and replay by others. | true
+knoxsso.cookie.max.age           | optional: This indicates that a cookie can only live for a specified amount of time - in seconds. This should probably be left at the default, which makes it a session cookie. Session cookies are discarded once the browser session is closed. | session
+knoxsso.cookie.domain.suffix     | optional: This indicates the portion of the request hostname that represents the domain to be used for the cookie domain. For single host development scenarios the default behavior should be fine. For production deployments, the expected domain should be set and all configured URLs that are related to SSO should use this domain. Otherwise, the cookie will not be presented by the browser to mismatched URLs. | Default cookie domain or a domain derived from a hostname that includes more than 2 dots.
+knoxsso.token.ttl                | This indicates the lifespan of the token within the cookie, in milliseconds. Once it expires, a new cookie must be acquired from KnoxSSO. A value of 36000000, as in the example topology above, gives 10 hours. | 30000 (30 seconds)
+knoxsso.token.audiences          | This is a comma separated list of audiences to add to the JWT token. This is used to ensure that a token received by a participating application knows that the token was intended for use with that application. It is optional. If an application expects audiences and they are not present, the token must be rejected. If the token carries audiences but the application expects none, the token is accepted. | empty
+knoxsso.redirect.whitelist.regex | A semicolon separated list of regular expressions. The incoming originalUrl must match one of the expressions in order for KnoxSSO to redirect to it after authentication. Defaults to only relative paths and localhost with or without SSL for development use cases. This needs to be opened up for production use and actual participating applications. Note that cookie use is still constrained to redirect destinations in the same domain as the KnoxSSO service - regardless of the expressions specified here. | ^/.\*$;^https?://localhost:\\d{0,9}/.\*$
+
+
+### Participating Application Configuration
+#### Hadoop Configuration Example
+The following is used as the KnoxSSO configuration in the Hadoop JWTRedirectAuthenticationHandler implementation. Any participating application will need similar configuration. Since JWTRedirectAuthenticationHandler extends the AltKerberosAuthenticationHandler, the typical Kerberos configuration parameters for authentication are also required.
+
+
+    <property>
+        <name>hadoop.http.authentication.type</name>
+        <value>org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler</value>
+    </property>
+
+
+This is the handler classname in Hadoop auth for JWT token (KnoxSSO) support.
+
+
+    <property>
+        <name>hadoop.http.authentication.authentication.provider.url</name>
+        <value>https://c6401.ambari.apache.org:8443/gateway/knoxsso/api/v1/websso</value>
+    </property>
+
+
+The above property is the SSO provider URL that points to the knoxsso endpoint.
+
+    <property>
+        <name>hadoop.http.authentication.public.key.pem</name>
+        <value>MIICVjCCAb+gAwIBAgIJAPPvOtuTxFeiMA0GCSqGSIb3DQEBBQUAMG0xCzAJBgNV
+      BAYTAlVTMQ0wCwYDVQQIEwRUZXN0MQ0wCwYDVQQHEwRUZXN0MQ8wDQYDVQQKEwZI
+      YWRvb3AxDTALBgNVBAsTBFRlc3QxIDAeBgNVBAMTF2M2NDAxLmFtYmFyaS5hcGFj
+      aGUub3JnMB4XDTE1MDcxNjE4NDcyM1oXDTE2MDcxNTE4NDcyM1owbTELMAkGA1UE
+      BhMCVVMxDTALBgNVBAgTBFRlc3QxDTALBgNVBAcTBFRlc3QxDzANBgNVBAoTBkhh
+      ZG9vcDENMAsGA1UECxMEVGVzdDEgMB4GA1UEAxMXYzY0MDEuYW1iYXJpLmFwYWNo
+      ZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMFs/rymbiNvg8lDhsdA
+      qvh5uHP6iMtfv9IYpDleShjkS1C+IqId6bwGIEO8yhIS5BnfUR/fcnHi2ZNrXX7x
+      QUtQe7M9tDIKu48w//InnZ6VpAqjGShWxcSzR6UB/YoGe5ytHS6MrXaormfBg3VW
+      tDoy2MS83W8pweS6p5JnK7S5AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEANyVg6EzE
+      2q84gq7wQfLt9t047nYFkxcRfzhNVL3LB8p6IkM4RUrzWq4kLA+z+bpY2OdpkTOe
+      wUpEdVKzOQd4V7vRxpdANxtbG/XXrJAAcY/S+eMy1eDK73cmaVPnxPUGWmMnQXUi
+      TLab+w8tBQhNbq6BOQ42aOrLxA8k/M4cV1A=</value>
+    </property>
+
+The above property holds the KnoxSSO server's public key for signature verification. Adding it directly to the configuration like this is convenient and is easily done through Ambari for existing configuration files that accept custom properties. Configuration is generally protected with root-only access as well, so this is a reasonably safe approach.
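+
+As noted above, because JWTRedirectAuthenticationHandler extends AltKerberosAuthenticationHandler, the usual Kerberos authentication properties are still required for non-browser clients. A hedged sketch (the principal and keytab path are illustrative):
+
+    <property>
+        <name>hadoop.http.authentication.kerberos.principal</name>
+        <value>HTTP/_HOST@EXAMPLE.COM</value>
+    </property>
+
+    <property>
+        <name>hadoop.http.authentication.kerberos.keytab</name>
+        <value>/etc/security/keytabs/spnego.service.keytab</value>
+    </property>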
+
+Individual UIs within the Hadoop ecosystem will have similar configuration for participating in the KnoxSSO websso capabilities.
+
+Blogs will be provided on the Apache Knox project site for these use cases as they become available.
\ No newline at end of file

Added: knox/trunk/books/0.12.0/config_ldap_authc_cache.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.12.0/config_ldap_authc_cache.md?rev=1782487&view=auto
==============================================================================
--- knox/trunk/books/0.12.0/config_ldap_authc_cache.md (added)
+++ knox/trunk/books/0.12.0/config_ldap_authc_cache.md Fri Feb 10 16:31:08 2017
@@ -0,0 +1,211 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### LDAP Authentication Caching ###
+
+Knox can be configured to cache LDAP authentication information. Knox leverages Shiro's built-in
+caching mechanisms and has been tested with Shiro's EhCache cache manager implementation.
+
+The following provider snippet demonstrates how to turn on caching using the ShiroProvider. In addition to
+using `org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm` in the Shiro configuration and setting up the cache manager,
+you *must* enable authentication caching by setting `main.ldapRealm.authenticationCachingEnabled` to true, as shown below.
+
+
+    <provider>
+        <role>authentication</role>
+        <name>ShiroProvider</name>
+        <enabled>true</enabled>
+        <param>
+            <name>main.ldapRealm</name>
+            <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
+        </param>
+        <param>
+            <name>main.ldapGroupContextFactory</name>
+            <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory</name>
+            <value>$ldapGroupContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.url</name>
+            <value>ldap://localhost:33389</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.userDnTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.authorizationEnabled</name>
+            <!-- defaults to: false -->
+            <value>true</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.searchBase</name>
+            <value>ou=groups,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.cacheManager</name>
+            <value>org.apache.shiro.cache.ehcache.EhCacheManager</value>
+        </param>
+        <param>
+            <name>main.securityManager.cacheManager</name>
+            <value>$cacheManager</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.authenticationCachingEnabled</name>
+            <value>true</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.memberAttributeValueTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemUsername</name>
+            <value>uid=guest,ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemPassword</name>
+            <value>guest-password</value>
+        </param>
+        <param>
+            <name>urls./**</name>
+            <value>authcBasic</value>
+        </param>
+    </provider>
+
+
+### Trying out caching ###
+
+Knox bundles a template topology file that can be used to try out the caching functionality.
+The template file located under `{GATEWAY_HOME}/templates` is `sandbox.knoxrealm.ehcache.xml`.
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealm.ehcache.xml conf/topologies/sandbox.xml
+    bin/ldap.sh start
+    bin/gateway.sh start
+
+The following call to WebHDFS should report: {"Path":"/user/tom"}
+
+    curl  -i -v  -k -u tom:tom-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+To see the cache working, LDAP can now be shut down and the user will still authenticate successfully.
+
+    bin/ldap.sh stop
+
+and then the following should still return successfully, as it did earlier.
+
+    curl  -i -v  -k -u tom:tom-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+
+#### Advanced Caching Config ####
+
+By default, the EhCache support in Shiro includes an ehcache.xml on its classpath with the following contents:
+
+    <ehcache>
+
+        <!-- Sets the path to the directory where cache .data files are created.
+
+             If the path is a Java System Property it is replaced by
+             its value in the running VM. The following properties are translated:
+
+                user.home - User's home directory
+                user.dir - User's current working directory
+                java.io.tmpdir - Default temp file path
+        -->
+        <diskStore path="java.io.tmpdir/shiro-ehcache"/>
+
+
+        <!--Default Cache configuration. These will applied to caches programmatically created through
+        the CacheManager.
+
+        The following attributes are required:
+
+        maxElementsInMemory            - Sets the maximum number of objects that will be created in memory
+        eternal                        - Sets whether elements are eternal. If eternal,  timeouts are ignored and the
+                                         element is never expired.
+        overflowToDisk                 - Sets whether elements can overflow to disk when the in-memory cache
+                                         has reached the maxInMemory limit.
+
+        The following attributes are optional:
+        timeToIdleSeconds              - Sets the time to idle for an element before it expires.
+                                         i.e. The maximum amount of time between accesses before an element expires
+                                         Is only used if the element is not eternal.
+                                         Optional attribute. A value of 0 means that an Element can idle for infinity.
+                                         The default value is 0.
+        timeToLiveSeconds              - Sets the time to live for an element before it expires.
+                                         i.e. The maximum time between creation time and when an element expires.
+                                         Is only used if the element is not eternal.
+                                         Optional attribute. A value of 0 means that and Element can live for infinity.
+                                         The default value is 0.
+        diskPersistent                 - Whether the disk store persists between restarts of the Virtual Machine.
+                                         The default value is false.
+        diskExpiryThreadIntervalSeconds- The number of seconds between runs of the disk expiry thread. The default value
+                                         is 120 seconds.
+        memoryStoreEvictionPolicy      - Policy would be enforced upon reaching the maxElementsInMemory limit. Default
+                                         policy is Least Recently Used (specified as LRU). Other policies available -
+                                         First In First Out (specified as FIFO) and Less Frequently Used
+                                         (specified as LFU)
+        -->
+
+        <defaultCache
+                maxElementsInMemory="10000"
+                eternal="false"
+                timeToIdleSeconds="120"
+                timeToLiveSeconds="120"
+                overflowToDisk="false"
+                diskPersistent="false"
+                diskExpiryThreadIntervalSeconds="120"
+                />
+
+        <!-- We want eternal="true" and no timeToIdle or timeToLive settings because Shiro manages session
+             expirations explicitly.  If we set it to false and then set corresponding timeToIdle and timeToLive properties,
+             ehcache would evict sessions without Shiro's knowledge, which would cause many problems
+            (e.g. "My Shiro session timeout is 30 minutes - why isn't a session available after 2 minutes?"
+                   Answer - ehcache expired it due to the timeToIdle property set to 120 seconds.)
+
+            diskPersistent=true since we want an enterprise session management feature - ability to use sessions after
+            even after a JVM restart.  -->
+        <cache name="shiro-activeSessionCache"
+               maxElementsInMemory="10000"
+               overflowToDisk="true"
+               eternal="true"
+               timeToLiveSeconds="0"
+               timeToIdleSeconds="0"
+               diskPersistent="true"
+               diskExpiryThreadIntervalSeconds="600"/>
+
+        <cache name="org.apache.shiro.realm.text.PropertiesRealm-0-accounts"
+               maxElementsInMemory="1000"
+               eternal="true"
+               overflowToDisk="true"/>
+
+    </ehcache>
+
+A custom configuration file (ehcache.xml) can be used in place of this in order to set specific caching configuration.
+
+In order to set the ehcache.xml file to use for a particular topology, set the following parameter in the configuration
+for the ShiroProvider:
+
+    <param>
+        <name>main.cacheManager.cacheManagerConfigFile</name>
+        <value>classpath:ehcache.xml</value>
+    </param>
+
+In the above example, place the ehcache.xml file under `{GATEWAY_HOME}/conf` and restart the gateway server.
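+
+For example, a minimal custom ehcache.xml that only adjusts the default cache expiry might look like the following (a sketch; the values are illustrative, not a recommendation):
+
+    <ehcache>
+        <!-- Keep cached entries in memory only and expire them after 10 minutes -->
+        <defaultCache
+                maxElementsInMemory="10000"
+                eternal="false"
+                timeToIdleSeconds="600"
+                timeToLiveSeconds="600"
+                overflowToDisk="false"/>
+    </ehcache>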

Added: knox/trunk/books/0.12.0/config_ldap_group_lookup.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.12.0/config_ldap_group_lookup.md?rev=1782487&view=auto
==============================================================================
--- knox/trunk/books/0.12.0/config_ldap_group_lookup.md (added)
+++ knox/trunk/books/0.12.0/config_ldap_group_lookup.md Fri Feb 10 16:31:08 2017
@@ -0,0 +1,228 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### LDAP Group Lookup ###
+
+Knox can be configured to look up the LDAP groups that the authenticated user belongs to.
+Knox can look up both Static LDAP Groups and Dynamic LDAP Groups.
+The looked up groups are populated as Principal(s) in the Java Subject of the authenticated user.
+Therefore service authorization rules can be defined in terms of LDAP groups looked up from an LDAP directory.
+
+To look up the LDAP groups of the authenticated user, you have to use `org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm` in the Shiro configuration.
+
+Please see below a sample Shiro configuration snippet from a topology file that was tested for looking up LDAP groups.
+
+    <provider>
+        <role>authentication</role>
+        <name>ShiroProvider</name>
+        <enabled>true</enabled>
+        <!-- 
+        session timeout in minutes; this is really an idle timeout.
+        Defaults to 30 minutes if the property value is not defined.
+        The current client authentication will expire if the client idles continuously for more than this value.
+        -->
+        <!-- defaults to: 30 minutes
+        <param>
+            <name>sessionTimeout</name>
+            <value>30</value>
+        </param>
+        -->
+
+        <!--
+          Use single KnoxLdapRealm to do authentication and ldap group look up
+        -->
+        <param>
+            <name>main.ldapRealm</name>
+            <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
+        </param>
+        <param>
+            <name>main.ldapGroupContextFactory</name>
+            <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory</name>
+            <value>$ldapGroupContextFactory</value>
+        </param>
+        <!-- defaults to: simple
+        <param>
+            <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
+            <value>simple</value>
+        </param>
+        -->
+        <param>
+            <name>main.ldapRealm.contextFactory.url</name>
+            <value>ldap://localhost:33389</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.userDnTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+
+        <param>
+            <name>main.ldapRealm.authorizationEnabled</name>
+            <!-- defaults to: false -->
+            <value>true</value>
+        </param>
+        <!-- defaults to: simple
+        <param>
+            <name>main.ldapRealm.contextFactory.systemAuthenticationMechanism</name>
+            <value>simple</value>
+        </param>
+        -->
+        <param>
+            <name>main.ldapRealm.searchBase</name>
+            <value>ou=groups,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <!-- defaults to: groupOfNames
+        <param>
+            <name>main.ldapRealm.groupObjectClass</name>
+            <value>groupOfNames</value>
+        </param>
+        -->
+        <!-- defaults to: member
+        <param>
+            <name>main.ldapRealm.memberAttribute</name>
+            <value>member</value>
+        </param>
+        -->
+        <param>
+             <name>main.cacheManager</name>
+             <value>org.apache.shiro.cache.MemoryConstrainedCacheManager</value>
+        </param>
+        <param>
+            <name>main.securityManager.cacheManager</name>
+            <value>$cacheManager</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.memberAttributeValueTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <!-- The above element is the template for most LDAP servers.
+            For Active Directory, use the following instead and
+            remove the above configuration.
+        <param>
+            <name>main.ldapRealm.memberAttributeValueTemplate</name>
+            <value>cn={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        -->
+        <param>
+            <name>main.ldapRealm.contextFactory.systemUsername</name>
+            <value>uid=guest,ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemPassword</name>
+            <value>${ALIAS=ldcSystemPassword}</value>
+        </param>
+
+        <param>
+            <name>urls./**</name> 
+            <value>authcBasic</value>
+        </param>
+
+    </provider>
+
+The configuration shown above looks up the Static LDAP groups of the authenticated user and populates the group principals in the Java Subject corresponding to that user.
+
+If you want to look up Dynamic LDAP Groups instead of Static LDAP Groups, specify the `groupObjectClass` and `memberAttribute` params as shown below:
+
+    <param>
+        <name>main.ldapRealm.groupObjectClass</name>
+        <value>groupOfUrls</value>
+    </param>
+    <param>
+        <name>main.ldapRealm.memberAttribute</name>
+        <value>memberUrl</value>
+    </param>
+
+### Template topology files and LDIF files to try out LDAP Group Lookup ###
+
+Knox bundles some template topology files and LDIF files that you can use to try out and test LDAP Group Lookup and associated authorization ACLs.
+All these template files are located under `{GATEWAY_HOME}/templates`.
+
+
+#### LDAP Static Group Lookup Templates, authentication and group lookup from the same directory ####
+
+* topology file: sandbox.knoxrealm1.xml
+* ldif file: users.ldapgroups.ldif
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealm1.xml conf/topologies/sandbox.xml
+    cp templates/users.ldapgroups.ldif conf/users.ldif
+    java -jar bin/ldap.jar conf
+    java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
+
+The following call to WebHDFS should report HTTP/1.1 401 Unauthorized.
+Since guest is not a member of the group "analyst", and the authorization provider requires membership in the group "analyst", access is denied.
+
+    curl  -i -v  -k -u guest:guest-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+The following call to WebHDFS should report: {"Path":"/user/sam"}
+Since sam is a member of the group "analyst" required by the authorization provider, access is granted.
+
+    curl  -i -v  -k -u sam:sam-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+
+#### LDAP Static Group Lookup Templates, authentication and group lookup from different directories ####
+
+* topology file: sandbox.knoxrealm2.xml
+* ldif file: users.ldapgroups.ldif
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealm2.xml conf/topologies/sandbox.xml
+    cp templates/users.ldapgroups.ldif conf/users.ldif
+    java -jar bin/ldap.jar conf
+    java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
+
+The following call to WebHDFS should report HTTP/1.1 401 Unauthorized.
+Since guest is not a member of the group "analyst", and the authorization provider requires membership in the group "analyst", access is denied.
+
+    curl  -i -v  -k -u guest:guest-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+The following call to WebHDFS should report: {"Path":"/user/sam"}
+Since sam is a member of the group "analyst" required by the authorization provider, access is granted.
+
+    curl  -i -v  -k -u sam:sam-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+#### LDAP Dynamic Group Lookup Templates, authentication and dynamic group lookup from the same directory ####
+
+* topology file: sandbox.knoxrealmdg.xml
+* ldif file: users.ldapdynamicgroups.ldif
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealmdg.xml conf/topologies/sandbox.xml
+    cp templates/users.ldapdynamicgroups.ldif conf/users.ldif
+    java -jar bin/ldap.jar conf
+    java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
+
+Please note that users.ldapdynamicgroups.ldif also loads the necessary schema to create dynamic groups in Apache DS.
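+
+A dynamic group entry in that LDIF looks roughly like the following sketch (the DN, object classes, and member URL are illustrative and should correspond to the `groupObjectClass` and `memberAttribute` values configured above):
+
+    dn: cn=directors,ou=groups,dc=hadoop,dc=apache,dc=org
+    objectClass: groupOfUrls
+    objectClass: top
+    cn: directors
+    memberUrl: ldap:///ou=people,dc=hadoop,dc=apache,dc=org??sub?(uid=bob)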
+
+The following call to WebHDFS should report HTTP/1.1 401 Unauthorized.
+Since guest is not a member of the dynamic group "directors", and the authorization provider requires membership in the group "directors", access is denied.
+
+    curl  -i -v  -k -u guest:guest-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+The following call to WebHDFS should report: {"Path":"/user/bob"}
+Since bob is a member of the dynamic group "directors" required by the authorization provider, access is granted.
+
+    curl  -i -v  -k -u bob:bob-password  -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+

Added: knox/trunk/books/0.12.0/config_metrics.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.12.0/config_metrics.md?rev=1782487&view=auto
==============================================================================
--- knox/trunk/books/0.12.0/config_metrics.md (added)
+++ knox/trunk/books/0.12.0/config_metrics.md Fri Feb 10 16:31:08 2017
@@ -0,0 +1,50 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Metrics ###
+
+See the KIP for details on the implementation of metrics available in the gateway.
+
+[Metrics KIP](https://cwiki.apache.org/confluence/display/KNOX/KIP-2+Metrics)
+
+#### Metrics Configuration ####
+
+Metrics configuration can be done in `gateway-site.xml`.
+
+The initial configuration mainly covers turning metrics collection on or off and enabling reporters with their required configuration.
+
+The two initial reporters implemented are JMX and Graphite.
+
+    gateway.metrics.enabled 
+
+Turns metrics collection on or off; the default is 'true'.
+ 
+    gateway.jmx.metrics.reporting.enabled
+
+Turns the JMX reporter on or off; the default is 'true'.
+
+    gateway.graphite.metrics.reporting.enabled
+
+Turns the Graphite reporter on or off; the default is 'false'.
+
+    gateway.graphite.metrics.reporting.host
+    gateway.graphite.metrics.reporting.port
+    gateway.graphite.metrics.reporting.frequency
+
+The above are the host, port, and reporting frequency (in seconds) parameters for the Graphite reporter.
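+
+A hedged gateway-site.xml sketch that enables metrics and the Graphite reporter (the host and port values are illustrative):
+
+    <property>
+        <name>gateway.metrics.enabled</name>
+        <value>true</value>
+    </property>
+
+    <property>
+        <name>gateway.graphite.metrics.reporting.enabled</name>
+        <value>true</value>
+    </property>
+
+    <property>
+        <name>gateway.graphite.metrics.reporting.host</name>
+        <value>graphite.example.com</value>
+    </property>
+
+    <property>
+        <name>gateway.graphite.metrics.reporting.port</name>
+        <value>2003</value>
+    </property>
+
+    <property>
+        <name>gateway.graphite.metrics.reporting.frequency</name>
+        <value>10</value>
+    </property>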
+ 
+ 

Added: knox/trunk/books/0.12.0/config_mutual_authentication_ssl.md
URL: http://svn.apache.org/viewvc/knox/trunk/books/0.12.0/config_mutual_authentication_ssl.md?rev=1782487&view=auto
==============================================================================
--- knox/trunk/books/0.12.0/config_mutual_authentication_ssl.md (added)
+++ knox/trunk/books/0.12.0/config_mutual_authentication_ssl.md Fri Feb 10 16:31:08 2017
@@ -0,0 +1,39 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Mutual Authentication with SSL ###
+
+To establish a stronger trust relationship between client and server, we provide mutual authentication with SSL via client certificates. This is particularly useful in providing additional validation for Preauthenticated SSO with HTTP Headers. Rather than just IP address validation, connections will only be accepted by Knox from clients presenting trusted certificates.
+
+This behavior is configured for the entire gateway instance within the gateway-site.xml file. All topologies deployed within the gateway instance with mutual authentication enabled will require incoming connections to present trusted client certificates during the SSL handshake. Otherwise, connections will be refused.
+
+The following table describes the configuration elements related to mutual authentication and their defaults:
+
+| Configuration Element                          | Description                                               |
+| -----------------------------------------------|-----------------------------------------------------------|
+| gateway.client.auth.needed                     | True\|False - indicating the need for client authentication. Default is False.|
+| gateway.truststore.path                        | Fully qualified path to the trust store to use. Default is the gateway.jks.|
+| gateway.truststore.type                        | Keystore type of the trust store. Default is JKS.         |
+| gateway.trust.all.certs                        | Allows for all certificates to be trusted. Default is false.|
+
+If only `gateway.client.auth.needed` is set, the `{GATEWAY_HOME}/data/security/keystores/gateway.jks` keystore is used. This is the identity keystore for the server, and it can also serve as the truststore.
+A dedicated truststore can be specified via `gateway.truststore.path`. If the truststore password is different from the gateway master secret, it can be set using
+
+    knoxcli.sh create-alias gateway-truststore-password --value {pwd} 
+  
+Otherwise, the master secret will be used.
+If the truststore is not of JKS type, its type can be set via `gateway.truststore.type`.
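+
+Putting it together, a hedged gateway-site.xml sketch enabling mutual authentication with a dedicated truststore (the path is illustrative):
+
+    <property>
+        <name>gateway.client.auth.needed</name>
+        <value>true</value>
+    </property>
+
+    <property>
+        <name>gateway.truststore.path</name>
+        <value>/usr/lib/knox/data/security/keystores/truststore.jks</value>
+    </property>
+
+    <property>
+        <name>gateway.truststore.type</name>
+        <value>JKS</value>
+    </property>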
\ No newline at end of file