Posted to commits@zeppelin.apache.org by ah...@apache.org on 2017/01/08 05:53:31 UTC

svn commit: r1777868 [3/4] - in /zeppelin/site/docs/0.7.0-SNAPSHOT: ./ development/ displaysystem/ install/ interpreter/ manual/ quickstart/ rest-api/ security/ storage/

Modified: zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json
URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json?rev=1777868&r1=1777867&r2=1777868&view=diff
==============================================================================
--- zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json (original)
+++ zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json Sun Jan  8 05:53:30 2017
@@ -83,7 +83,7 @@
 
     "/install/build.html": {
       "title": "Build from Source",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Building from SourceIf you want to build from source, you must first install the following dependencies:      Name    Value        Git    (Any Version)        Maven    3.1.x or higher        JDK    1.7  If you haven't installed Git and Maven yet, check the Build requirements section and follow the step by step instructions from there.1. Clone the Apache Zeppelin repositorygit clone https://github.com/apache/zeppelin.git2. B
 uild sourceYou can build Zeppelin with following maven command:mvn clean package -DskipTests [Options]If you're unsure about the options, use the same commands that creates official binary package.# update all pom.xml to use scala 2.11./dev/change_scala_version.sh 2.11# build zeppelin with all interpreters and include latest version of Apache spark support for local mode.mvn clean package -DskipTests -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pr -Pscala-2.113. DoneYou can directly start Zeppelin by running after successful build:./bin/zeppelin-daemon.sh startCheck build-profiles section for further build options.If you are behind proxy, follow instructions in Proxy setting section.If you're interested in contribution, please check Contributing to Apache Zeppelin (Code) and Contributing to Apache Zeppelin (Website).Build profilesSpark InterpreterTo build with a specific Spark version, Hadoop version or specific features, define one or more of the following pr
 ofiles and options:-Pspark-[version]Set spark major versionAvailable profiles are-Pspark-2.0-Pspark-1.6-Pspark-1.5-Pspark-1.4-Pcassandra-spark-1.5-Pcassandra-spark-1.4-Pcassandra-spark-1.3-Pcassandra-spark-1.2-Pcassandra-spark-1.1minor version can be adjusted by -Dspark.version=x.x.x-Phadoop-[version]set hadoop major versionAvailable profiles are-Phadoop-0.23-Phadoop-1-Phadoop-2.2-Phadoop-2.3-Phadoop-2.4-Phadoop-2.6-Phadoop-2.7minor version can be adjusted by -Dhadoop.version=x.x.x-Pscala-[version] (optional)set scala version (default 2.10)Available profiles are-Pscala-2.10-Pscala-2.11-Pyarn (optional)enable YARN support for local modeYARN for local mode is not supported for Spark v1.5.0 or higher. Set SPARK_HOME instead.-Ppyspark (optional)enable PySpark support for local mode.-Pr (optional)enable R support with SparkR integration.-Psparkr (optional)another R support with SparkR integration as well as local mode support.-Pvendor-repo (optional)enable 3rd party vendor repository (cl
 oudera)-Pmapr[version] (optional)For the MapR Hadoop Distribution, these profiles will handle the Hadoop version. As MapR allows different versions of Spark to be installed, you should specify which version of Spark is installed on the cluster by adding a Spark profile (-Pspark-1.6, -Pspark-2.0, etc.) as needed.The correct Maven artifacts can be found for every version of MapR at http://doc.mapr.comAvailable profiles are-Pmapr3-Pmapr40-Pmapr41-Pmapr50-Pmapr51-Pexamples (optional)Bulid examples under zeppelin-examples directoryBuild command examplesHere are some examples with several options:# build with spark-2.0, scala-2.11./dev/change_scala_version.sh 2.11mvn clean package -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11 -DskipTests# build with spark-1.6, scala-2.10mvn clean package -Pspark-1.6 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -DskipTests# spark-cassandra integrationmvn clean package -Pcassandra-spark-1.5 -Dhadoop.version=2.6.0 -Phadoop-2.6 -DskipTests -DskipT
 ests# with CDHmvn clean package -Pspark-1.5 -Dhadoop.version=2.6.0-cdh5.5.0 -Phadoop-2.6 -Pvendor-repo -DskipTests# with MapRmvn clean package -Pspark-1.5 -Pmapr50 -DskipTestsIgnite Interpretermvn clean package -Dignite.version=1.6.0 -DskipTestsScalding Interpretermvn clean package -Pscalding -DskipTestsBuild requirementsInstall requirementsIf you don't have requirements prepared, install it.(The installation method may vary according to your environment, example is for Ubuntu.)sudo apt-get updatesudo apt-get install gitsudo apt-get install openjdk-7-jdksudo apt-get install npmsudo apt-get install libfontconfigInstall mavenwget http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gzsudo tar -zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/sudo ln -s /usr/local/apache-maven-3.3.9/bin/mvn /usr/local/bin/mvnNotes: - Ensure node is installed by running node --version - Ensure maven is running version 3.1.x or higher with mvn -version - Configure 
 maven to use more memory than usual by export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=1024m"Proxy setting (optional)If you're behind the proxy, you'll need to configure maven and npm to pass through it.First of all, configure maven in your ~/.m2/settings.xml.<settings>  <proxies>    <proxy>      <id>proxy-http</id>      <active>true</active>      <protocol>http</protocol>      <host>localhost</host>      <port>3128</port>      <!-- <username>usr</username>      <password>pwd</password> -->      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>    </proxy>    <proxy>      <id>proxy-https</id>      <active>true&
 amp;lt;/active>      <protocol>https</protocol>      <host>localhost</host>      <port>3128</port>      <!-- <username>usr</username>      <password>pwd</password> -->      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>    </proxy>  </proxies></settings>Then, next commands will configure npm.npm config set proxy http://localhost:3128npm config set https-proxy http://localhost:3128npm config set registry "http://registry.npmjs.org/"npm config set strict-ssl falseConfigure git as wellgit config --global http.proxy http://localhost:3128git config --global https.proxy http://localhost:3128git config --global url."http://".insteadOf git://To clean up, set active false in Maven settings.xml and run these commands.npm confi
 g rm proxynpm config rm https-proxygit config --global --unset http.proxygit config --global --unset https.proxygit config --global --unset url."http://".insteadOfNotes: - If you are behind NTLM proxy you can use Cntlm Authentication Proxy. - Replace localhost:3128 with the standard pattern http://user:pwd@host:port.PackageTo package the final distribution including the compressed archive, run:mvn clean package -Pbuild-distrTo build a distribution with specific profiles, run:mvn clean package -Pbuild-distr -Pspark-1.5 -Phadoop-2.4 -Pyarn -PpysparkThe profiles -Pspark-1.5 -Phadoop-2.4 -Pyarn -Ppyspark can be adjusted if you wish to build to a specific spark versions, or omit support such as yarn.  The archive is generated under zeppelin-distribution/target directoryRun end-to-end testsZeppelin comes with a set of end-to-end acceptance tests driving headless selenium browser# assumes zeppelin-server running on localhost:8080 (use -Durl=.. to override)mvn verify# or t
 ake care of starting/stoping zeppelin-server from packaged zeppelin-distribuion/targetmvn verify -P using-packaged-distr",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Building from SourceIf you want to build from source, you must first install the following dependencies:      Name    Value        Git    (Any Version)        Maven    3.1.x or higher        JDK    1.7  If you haven't installed Git and Maven yet, check the Build requirements section and follow the step by step instructions from there.1. Clone the Apache Zeppelin repositorygit clone https://github.com/apache/zeppelin.git2. B
 uild sourceYou can build Zeppelin with following maven command:mvn clean package -DskipTests [Options]If you're unsure about the options, use the same commands that creates official binary package.# update all pom.xml to use scala 2.11./dev/change_scala_version.sh 2.11# build zeppelin with all interpreters and include latest version of Apache spark support for local mode.mvn clean package -DskipTests -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pr -Pscala-2.113. DoneYou can directly start Zeppelin by running after successful build:./bin/zeppelin-daemon.sh startCheck build-profiles section for further build options.If you are behind proxy, follow instructions in Proxy setting section.If you're interested in contribution, please check Contributing to Apache Zeppelin (Code) and Contributing to Apache Zeppelin (Website).Build profilesSpark InterpreterTo build with a specific Spark version, Hadoop version or specific features, define one or more of the following pr
 ofiles and options:-Pspark-[version]Set spark major versionAvailable profiles are-Pspark-2.0-Pspark-1.6-Pspark-1.5-Pspark-1.4-Pcassandra-spark-1.5-Pcassandra-spark-1.4-Pcassandra-spark-1.3-Pcassandra-spark-1.2-Pcassandra-spark-1.1minor version can be adjusted by -Dspark.version=x.x.x-Phadoop-[version]set hadoop major versionAvailable profiles are-Phadoop-0.23-Phadoop-1-Phadoop-2.2-Phadoop-2.3-Phadoop-2.4-Phadoop-2.6-Phadoop-2.7minor version can be adjusted by -Dhadoop.version=x.x.x-Pscala-[version] (optional)set scala version (default 2.10)Available profiles are-Pscala-2.10-Pscala-2.11-Pyarn (optional)enable YARN support for local modeYARN for local mode is not supported for Spark v1.5.0 or higher. Set SPARK_HOME instead.-Ppyspark (optional)enable PySpark support for local mode.-Pr (optional)enable R support with SparkR integration.-Psparkr (optional)another R support with SparkR integration as well as local mode support.-Pvendor-repo (optional)enable 3rd party vendor repository (cl
 oudera)-Pmapr[version] (optional)For the MapR Hadoop Distribution, these profiles will handle the Hadoop version. As MapR allows different versions of Spark to be installed, you should specify which version of Spark is installed on the cluster by adding a Spark profile (-Pspark-1.6, -Pspark-2.0, etc.) as needed.The correct Maven artifacts can be found for every version of MapR at http://doc.mapr.comAvailable profiles are-Pmapr3-Pmapr40-Pmapr41-Pmapr50-Pmapr51-Pexamples (optional)Bulid examples under zeppelin-examples directoryBuild command examplesHere are some examples with several options:# build with spark-2.0, scala-2.11./dev/change_scala_version.sh 2.11mvn clean package -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11 -DskipTests# build with spark-1.6, scala-2.10mvn clean package -Pspark-1.6 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -DskipTests# spark-cassandra integrationmvn clean package -Pcassandra-spark-1.5 -Dhadoop.version=2.6.0 -Phadoop-2.6 -DskipTests -DskipT
 ests# with CDHmvn clean package -Pspark-1.5 -Dhadoop.version=2.6.0-cdh5.5.0 -Phadoop-2.6 -Pvendor-repo -DskipTests# with MapRmvn clean package -Pspark-1.5 -Pmapr50 -DskipTestsIgnite Interpretermvn clean package -Dignite.version=1.8.0 -DskipTestsScalding Interpretermvn clean package -Pscalding -DskipTestsBuild requirementsInstall requirementsIf you don't have requirements prepared, install it.(The installation method may vary according to your environment, example is for Ubuntu.)sudo apt-get updatesudo apt-get install gitsudo apt-get install openjdk-7-jdksudo apt-get install npmsudo apt-get install libfontconfigInstall mavenwget http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gzsudo tar -zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/sudo ln -s /usr/local/apache-maven-3.3.9/bin/mvn /usr/local/bin/mvnNotes: - Ensure node is installed by running node --version - Ensure maven is running version 3.1.x or higher with mvn -version - Configure 
 maven to use more memory than usual by export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=1024m"Proxy setting (optional)If you're behind the proxy, you'll need to configure maven and npm to pass through it.First of all, configure maven in your ~/.m2/settings.xml.<settings>  <proxies>    <proxy>      <id>proxy-http</id>      <active>true</active>      <protocol>http</protocol>      <host>localhost</host>      <port>3128</port>      <!-- <username>usr</username>      <password>pwd</password> -->      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>    </proxy>    <proxy>      <id>proxy-https</id>      <active>true&
 amp;lt;/active>      <protocol>https</protocol>      <host>localhost</host>      <port>3128</port>      <!-- <username>usr</username>      <password>pwd</password> -->      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>    </proxy>  </proxies></settings>Then, next commands will configure npm.npm config set proxy http://localhost:3128npm config set https-proxy http://localhost:3128npm config set registry "http://registry.npmjs.org/"npm config set strict-ssl falseConfigure git as wellgit config --global http.proxy http://localhost:3128git config --global https.proxy http://localhost:3128git config --global url."http://".insteadOf git://To clean up, set active false in Maven settings.xml and run these commands.npm confi
 g rm proxynpm config rm https-proxygit config --global --unset http.proxygit config --global --unset https.proxygit config --global --unset url."http://".insteadOfNotes: - If you are behind NTLM proxy you can use Cntlm Authentication Proxy. - Replace localhost:3128 with the standard pattern http://user:pwd@host:port.PackageTo package the final distribution including the compressed archive, run:mvn clean package -Pbuild-distrTo build a distribution with specific profiles, run:mvn clean package -Pbuild-distr -Pspark-1.5 -Phadoop-2.4 -Pyarn -PpysparkThe profiles -Pspark-1.5 -Phadoop-2.4 -Pyarn -Ppyspark can be adjusted if you wish to build to a specific spark versions, or omit support such as yarn.  The archive is generated under zeppelin-distribution/target directoryRun end-to-end testsZeppelin comes with a set of end-to-end acceptance tests driving headless selenium browser# assumes zeppelin-server running on localhost:8080 (use -Durl=.. to override)mvn verify# or t
 ake care of starting/stoping zeppelin-server from packaged zeppelin-distribuion/targetmvn verify -P using-packaged-distr",
       "url": " /install/build.html",
       "group": "install",
       "excerpt": "How to build Zeppelin from source"
@@ -103,6 +103,17 @@
     
   
 
+    "/install/configuration.html": {
+      "title": "Apache Zeppelin Configuration",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Apache Zeppelin ConfigurationZeppelin PropertiesThere are two locations you can configure Apache Zeppelin.Environment variables can be defined conf/zeppelin-env.sh(confzeppelin-env.cmd for Windows). Java properties can ba defined in conf/zeppelin-site.xml.If both are defined, then the environment variables will take priority.      zeppelin-env.sh    zeppelin-site.xml    Default value    Description        ZEPPELIN_PORT    zeppelin.
 server.port    8080    Zeppelin server port        ZEPPELIN_SSL_PORT    zeppelin.server.ssl.port    8443    Zeppelin Server ssl port (used when ssl environment/property is set to true)        ZEPPELIN_MEM    N/A    -Xmx1024m -XX:MaxPermSize=512m    JVM mem options        ZEPPELIN_INTP_MEM    N/A    ZEPPELIN_MEM    JVM mem options for interpreter process        ZEPPELIN_JAVA_OPTS    N/A        JVM options        ZEPPELIN_ALLOWED_ORIGINS    zeppelin.server.allowed.origins    *    Enables a way to specify a ',' separated list of allowed origins for REST and websockets.  e.g. http://localhost:8080           N/A    zeppelin.anonymous.allowed    true    The anonymous user is allowed by default.        ZEPPELIN_SERVER_CONTEXT_PATH    zeppelin.server.context.path    /    Context path of the web application        ZEPPELIN_SSL    zeppelin.ssl    false            ZEPPELIN_SSL_CLIENT_AUTH    zeppelin.ssl.client.auth    false            ZEPPELIN_SSL_KEYSTORE_PATH    zeppelin.ssl.keystor
 e.path    keystore            ZEPPELIN_SSL_KEYSTORE_TYPE    zeppelin.ssl.keystore.type    JKS            ZEPPELIN_SSL_KEYSTORE_PASSWORD    zeppelin.ssl.keystore.password                ZEPPELIN_SSL_KEY_MANAGER_PASSWORD    zeppelin.ssl.key.manager.password                ZEPPELIN_SSL_TRUSTSTORE_PATH    zeppelin.ssl.truststore.path                ZEPPELIN_SSL_TRUSTSTORE_TYPE    zeppelin.ssl.truststore.type                ZEPPELIN_SSL_TRUSTSTORE_PASSWORD    zeppelin.ssl.truststore.password                ZEPPELIN_NOTEBOOK_HOMESCREEN    zeppelin.notebook.homescreen        Display note IDs on the Apache Zeppelin homescreen e.g. 2A94M5J1Z        ZEPPELIN_NOTEBOOK_HOMESCREEN_HIDE    zeppelin.notebook.homescreen.hide    false    Hide the note ID set by ZEPPELIN_NOTEBOOK_HOMESCREEN on the Apache Zeppelin homescreen. For the further information, please read Customize your Zeppelin homepage.        ZEPPELIN_WAR_TEMPDIR    zeppelin.war.tempdir    webapps    Location of the jetty temporary direc
 tory        ZEPPELIN_NOTEBOOK_DIR    zeppelin.notebook.dir    notebook    The root directory where notebook directories are saved        ZEPPELIN_NOTEBOOK_S3_BUCKET    zeppelin.notebook.s3.bucket    zeppelin    S3 Bucket where notebook files will be saved        ZEPPELIN_NOTEBOOK_S3_USER    zeppelin.notebook.s3.user    user    User name of an S3 buckete.g. bucket/user/notebook/2A94M5J1Z/note.json        ZEPPELIN_NOTEBOOK_S3_ENDPOINT    zeppelin.notebook.s3.endpoint    s3.amazonaws.com    Endpoint for the bucket        ZEPPELIN_NOTEBOOK_S3_KMS_KEY_ID    zeppelin.notebook.s3.kmsKeyID        AWS KMS Key ID to use for encrypting data in S3 (optional)        ZEPPELIN_NOTEBOOK_S3_EMP    zeppelin.notebook.s3.encryptionMaterialsProvider        Class name of a custom S3 encryption materials provider implementation to use for encrypting data in S3 (optional)        ZEPPELIN_NOTEBOOK_AZURE_CONNECTION_STRING    zeppelin.notebook.azure.connectionString        The Azure storage account connection
  stringe.g. DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>        ZEPPELIN_NOTEBOOK_AZURE_SHARE    zeppelin.notebook.azure.share    zeppelin    Azure Share where the notebook files will be saved        ZEPPELIN_NOTEBOOK_AZURE_USER    zeppelin.notebook.azure.user    user    Optional user name of an Azure file sharee.g. share/user/notebook/2A94M5J1Z/note.json        ZEPPELIN_NOTEBOOK_STORAGE    zeppelin.notebook.storage    org.apache.zeppelin.notebook.repo.VFSNotebookRepo    Comma separated list of notebook storage locations        ZEPPELIN_NOTEBOOK_ONE_WAY_SYNC    zeppelin.notebook.one.way.sync    false    If there are multiple notebook storage locations, should we treat the first one as the only source of truth?        ZEPPELIN_NOTEBOOK_PUBLIC    zeppelin.notebook.public    true    Make notebook public (set only owners) by default when created/imported. If set to false will add user to readers and writers as well, making 
 it private and invisible to other users unless permissions are granted.        ZEPPELIN_INTERPRETERS    zeppelin.interpreters      org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.shell.ShellInterpreter,    ...              Comma separated interpreter configurations [Class]       NOTE: This property is deprecated since Zeppelin-0.6.0 and will not be supported from Zeppelin-0.7.0.            ZEPPELIN_INTERPRETER_DIR    zeppelin.interpreter.dir    interpreter    Interpreter directory        ZEPPELIN_WEBSOCKET_MAX_TEXT_MESSAGE_SIZE    zeppelin.websocket.max.text.message.size    1024000    Size (in characters) of the maximum text message that can be received by websocket.  SSL ConfigurationEnabling SSL requires a few configuration changes. First, you need to create certificates and then update necessary co
 nfigurations to enable server side SSL and/or client side certificate authentication.Creating and configuring the CertificatesInformation how about to generate certificates and a keystore can be found here.A condensed example can be found in the top answer to this StackOverflow post.The keystore holds the private key and certificate on the server end. The trustore holds the trusted client certificates. Be sure that the path and password for these two stores are correctly configured in the password fields below. They can be obfuscated using the Jetty password tool. After Maven pulls in all the dependency to build Zeppelin, one of the Jetty jars contain the Password tool. Invoke this command from the Zeppelin home build directory with the appropriate version, user, and password.java -cp ./zeppelin-server/target/lib/jetty-all-server-<version>.jar org.eclipse.jetty.util.security.Password <user> <password>If you are using a self-signed, a certifi
 cate signed by an untrusted CA, or if client authentication is enabled, then the client must have a browser create exceptions for both the normal HTTPS port and WebSocket port. This can by done by trying to establish an HTTPS connection to both ports in a browser (e.g. if the ports are 443 and 8443, then visit https://127.0.0.1:443 and https://127.0.0.1:8443). This step can be skipped if the server certificate is signed by a trusted CA and client auth is disabled.Configuring server side SSLThe following properties needs to be updated in the zeppelin-site.xml in order to enable server side SSL.<property>  <name>zeppelin.server.ssl.port</name>  <value>8443</value>  <description>Server ssl port. (used when ssl property is set to true)</description></property><property>  <name>zeppelin.ssl</name>  <value>true</valu
 e>  <description>Should SSL be used by the servers?</description></property><property>  <name>zeppelin.ssl.keystore.path</name>  <value>keystore</value>  <description>Path to keystore relative to Zeppelin configuration directory</description></property><property>  <name>zeppelin.ssl.keystore.type</name>  <value>JKS</value>  <description>The format of the given keystore (e.g. JKS or PKCS12)</description></property><property>  <name>zeppelin.ssl.keystore.password</name>  <value>change me</value>  <description>Keystore password. Can be obfuscated by the Jetty Password tool</description></property><pro
 perty>  <name>zeppelin.ssl.key.manager.password</name>  <value>change me</value>  <description>Key Manager password. Defaults to keystore password. Can be obfuscated.</description></property>Enabling client side certificate authenticationThe following properties needs to be updated in the zeppelin-site.xml in order to enable client side certificate authentication.<property>  <name>zeppelin.server.ssl.port</name>  <value>8443</value>  <description>Server ssl port. (used when ssl property is set to true)</description></property><property>  <name>zeppelin.ssl.client.auth</name>  <value>true</value>  <description>Should client authentication be used for SSL connections?</description&a
 mp;gt;</property><property>  <name>zeppelin.ssl.truststore.path</name>  <value>truststore</value>  <description>Path to truststore relative to Zeppelin configuration directory. Defaults to the keystore path</description></property><property>  <name>zeppelin.ssl.truststore.type</name>  <value>JKS</value>  <description>The format of the given truststore (e.g. JKS or PKCS12). Defaults to the same type as the keystore type</description></property><property>  <name>zeppelin.ssl.truststore.password</name>  <value>change me</value>  <description>Truststore password. Can be obfuscated by the Jetty Password tool. Defaults to the keystore password</description>&a
 mp;lt;/property>Obfuscating Passwords using the Jetty Password ToolSecurity best practices advise to not use plain text passwords and Jetty provides a password tool to help obfuscating the passwords used to access the KeyStore and TrustStore.The Password tool documentation can be found here.After using the tool:java -cp $ZEPPELIN_HOME/zeppelin-server/target/lib/jetty-util-9.2.15.v20160210.jar          org.eclipse.jetty.util.security.Password           password2016-12-15 10:46:47.931:INFO::main: Logging initialized @101mspasswordOBF:1v2j1uum1xtv1zej1zer1xtn1uvk1v1vMD5:5f4dcc3b5aa765d61d8327deb882cf99update your configuration with the obfuscated password :<property>  <name>zeppelin.ssl.keystore.password</name>  <value>OBF:1v2j1uum1xtv1zej1zer1xtn1uvk1v1v</value>  <description>Keystore password. Can be obfuscated by the Jetty Password tool</description></property>Note:
  After updating these configurations, Zeppelin server needs to be restarted.",
+      "url": " /install/configuration.html",
+      "group": "install",
+      "excerpt": "This page will guide you to configure Apache Zeppelin using either environment variables or Java properties. Also, you can configure SSL for Zeppelin."
+    }
+    ,
+    
+  
+
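The configuration.html entry added above ends with Jetty's password-obfuscation step for the SSL keystore and truststore passwords. Sketched out, assuming the jetty-util jar named in that entry is the one present under zeppelin-server/target/lib after a build:

    # obfuscate a store password with the Jetty Password tool
    java -cp $ZEPPELIN_HOME/zeppelin-server/target/lib/jetty-util-9.2.15.v20160210.jar \
         org.eclipse.jetty.util.security.Password password
    # prints both an OBF: form (e.g. OBF:1v2j1uum1xtv1zej1zer1xtn1uvk1v1v) and an MD5 form

The OBF: value then replaces the plain-text value of zeppelin.ssl.keystore.password in conf/zeppelin-site.xml, and the Zeppelin server is restarted for the change to take effect.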
     "/install/docker.html": {
       "title": "Apache Zeppelin Releases Docker Images",
       "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Apache Zeppelin Releases Docker ImagesOverviewThis document contains instructions about making docker containers for Zeppelin. It mainly provides guidance into how to create, publish and run docker images for zeppelin releases.Quick StartInstalling DockerYou need to install docker on your machine.Creating and Publishing Zeppelin docker imageIn order to be able to create and/or publish an image, you need to set the DockerHub credent
 ials DOCKER_USERNAME, DOCKER_PASSWORD, DOCKER_EMAIL variables as environment variables.To create an image for some release use :create_release.sh <release-version> <git-tag>.To publish the created image use :publish_release.sh <release-version> <git-tag>Running a Zeppelin  docker imageTo start Zeppelin, you need to pull the zeppelin release image: ```docker pull ${DOCKER_USERNAME}/zeppelin-release:docker run --rm -it -p 7077:7077 -p 8080:8080 ${DOCKER_USERNAME}/zeppelin-release: -c bash``* Then a docker container will start with a Zeppelin release on path :/usr/local/zeppelin/`Run zeppelin inside docker:/usr/local/zeppelin/bin/zeppelin.shTo Run Zeppelin in daemon modeMounting logs and notebooks zeppelin to folders on your host machinedocker run -p 7077:7077 -p 8080:8080 --privileged=true -v $PWD/logs:/logs -v $PWD/notebook:/notebook -e ZEPPELIN_NOTEBOOK_DIR='/notebook' -e ZEPPELIN_LOG_DIR='/logs'
  -d ${DOCKER_USERNAME}/zeppelin-release:<release-version> /usr/local/zeppelin/bin/zeppelin.shZeppelin will run at http://localhost:8080.",
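The docker.html entry above reduces to the following pull-and-run sequence, sketched here on the assumption that DOCKER_USERNAME is exported and <release-version> is replaced with a published release tag:

    # pull a published Zeppelin release image
    docker pull ${DOCKER_USERNAME}/zeppelin-release:<release-version>

    # run it in daemon mode, mounting logs and notebooks from the host
    docker run -p 7077:7077 -p 8080:8080 --privileged=true \
      -v $PWD/logs:/logs -v $PWD/notebook:/notebook \
      -e ZEPPELIN_NOTEBOOK_DIR='/notebook' -e ZEPPELIN_LOG_DIR='/logs' \
      -d ${DOCKER_USERNAME}/zeppelin-release:<release-version> /usr/local/zeppelin/bin/zeppelin.sh

Zeppelin is then reachable at http://localhost:8080.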
@@ -116,10 +127,10 @@
 
     "/install/install.html": {
       "title": "Quick Start",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Quick StartWelcome to Apache Zeppelin! On this page are instructions to help you get started.InstallationApache Zeppelin officially supports and is tested on the following environments:      Name    Value        Oracle JDK    1.7  (set JAVA_HOME)        OS    Mac OSX  Ubuntu 14.X  CentOS 6.X  Windows 7 Pro SP1  Downloading Binary PackageTwo binary packages are available on the Apache Zeppelin Download Page. Only difference between 
 these two binaries is interpreters are included in the package file.Package with all interpreters.Just unpack it in a directory of your choice and you're ready to go.Package with net-install interpreters.Unpack and follow install additional interpreters to install interpreters. If you're unsure, just run ./bin/install-interpreter.sh --all and install all interpreters.Starting Apache Zeppelin from the Command LineStarting Apache ZeppelinOn all unix like platforms:bin/zeppelin-daemon.sh startIf you are on Windows:binzeppelin.cmdAfter Zeppelin has started successfully, go to http://localhost:8080 with your web browser.Stopping Zeppelinbin/zeppelin-daemon.sh stopNext StepsCongratulations, you have successfully installed Apache Zeppelin! Here are few steps you might find useful:New to Apache Zeppelin...For an in-depth overview, head to Explore Apache Zeppelin UI.And then, try run tutorial notebook in your Zeppelin.And see how to change configurations like port number, etc
 .Zeppelin with Apache Spark ...To know more about deep integration with Apache Spark, check Spark Interpreter.Zeppelin with JDBC data sources ...Check JDBC Interpreter to know more about configure and uses multiple JDBC data sources.Zeppelin with Python ...Check Python interpreter to know more about Matplotlib, Pandas, Conda/Docker environment integration.Multi-user environment ...Turn on authentication.Manage your notebook permission.For more informations, go to More -> Security section.Other useful informations ...Learn how Display System works.Use Service Manager to start Zeppelin.If you're using previous version please see Upgrade Zeppelin version.Apache Zeppelin ConfigurationYou can configure Apache Zeppelin with either environment variables in conf/zeppelin-env.sh (confzeppelin-env.cmd for Windows) or Java properties in conf/zeppelin-site.xml. If both are defined, then the environment variables will take priority.      zeppelin-env.sh    zeppelin-site.xml    Def
 ault value    Description        ZEPPELIN_PORT    zeppelin.server.port    8080    Zeppelin server port        ZEPPELIN_SSL_PORT    zeppelin.server.ssl.port    8443    Zeppelin Server ssl port (used when ssl environment/property is set to true)        ZEPPELIN_MEM    N/A    -Xmx1024m -XX:MaxPermSize=512m    JVM mem options        ZEPPELIN_INTP_MEM    N/A    ZEPPELIN_MEM    JVM mem options for interpreter process        ZEPPELIN_JAVA_OPTS    N/A        JVM options        ZEPPELIN_ALLOWED_ORIGINS    zeppelin.server.allowed.origins    *    Enables a way to specify a ',' separated list of allowed origins for REST and websockets.  i.e. http://localhost:8080           N/A    zeppelin.anonymous.allowed    true    The anonymous user is allowed by default.        ZEPPELIN_SERVER_CONTEXT_PATH    zeppelin.server.context.path    /    Context path of the web application        ZEPPELIN_SSL    zeppelin.ssl    false            ZEPPELIN_SSL_CLIENT_AUTH    zeppelin.ssl.client.auth    false   
          ZEPPELIN_SSL_KEYSTORE_PATH    zeppelin.ssl.keystore.path    keystore            ZEPPELIN_SSL_KEYSTORE_TYPE    zeppelin.ssl.keystore.type    JKS            ZEPPELIN_SSL_KEYSTORE_PASSWORD    zeppelin.ssl.keystore.password                ZEPPELIN_SSL_KEY_MANAGER_PASSWORD    zeppelin.ssl.key.manager.password                ZEPPELIN_SSL_TRUSTSTORE_PATH    zeppelin.ssl.truststore.path                ZEPPELIN_SSL_TRUSTSTORE_TYPE    zeppelin.ssl.truststore.type                ZEPPELIN_SSL_TRUSTSTORE_PASSWORD    zeppelin.ssl.truststore.password                ZEPPELIN_NOTEBOOK_HOMESCREEN    zeppelin.notebook.homescreen        Display note IDs on the Apache Zeppelin homescreen i.e. 2A94M5J1Z        ZEPPELIN_NOTEBOOK_HOMESCREEN_HIDE    zeppelin.notebook.homescreen.hide    false    Hide the note ID set by ZEPPELIN_NOTEBOOK_HOMESCREEN on the Apache Zeppelin homescreen. For the further information, please read Customize your Zeppelin homepage.        ZEPPELIN_WAR_TEMPDIR    zeppelin.war.
 tempdir    webapps    Location of the jetty temporary directory        ZEPPELIN_NOTEBOOK_DIR    zeppelin.notebook.dir    notebook    The root directory where notebook directories are saved        ZEPPELIN_NOTEBOOK_S3_BUCKET    zeppelin.notebook.s3.bucket    zeppelin    S3 Bucket where notebook files will be saved        ZEPPELIN_NOTEBOOK_S3_USER    zeppelin.notebook.s3.user    user    User name of an S3 bucketi.e. bucket/user/notebook/2A94M5J1Z/note.json        ZEPPELIN_NOTEBOOK_S3_ENDPOINT    zeppelin.notebook.s3.endpoint    s3.amazonaws.com    Endpoint for the bucket        ZEPPELIN_NOTEBOOK_S3_KMS_KEY_ID    zeppelin.notebook.s3.kmsKeyID        AWS KMS Key ID to use for encrypting data in S3 (optional)        ZEPPELIN_NOTEBOOK_S3_EMP    zeppelin.notebook.s3.encryptionMaterialsProvider        Class name of a custom S3 encryption materials provider implementation to use for encrypting data in S3 (optional)        ZEPPELIN_NOTEBOOK_AZURE_CONNECTION_STRING    zeppelin.notebook.azure.c
 onnectionString        The Azure storage account connection stringi.e. DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>        ZEPPELIN_NOTEBOOK_AZURE_SHARE    zeppelin.notebook.azure.share    zeppelin    Azure Share where the notebook files will be saved        ZEPPELIN_NOTEBOOK_AZURE_USER    zeppelin.notebook.azure.user    user    Optional user name of an Azure file sharei.e. share/user/notebook/2A94M5J1Z/note.json        ZEPPELIN_NOTEBOOK_STORAGE    zeppelin.notebook.storage    org.apache.zeppelin.notebook.repo.VFSNotebookRepo    Comma separated list of notebook storage locations        ZEPPELIN_NOTEBOOK_ONE_WAY_SYNC    zeppelin.notebook.one.way.sync    false    If there are multiple notebook storage locations, should we treat the first one as the only source of truth?        ZEPPELIN_NOTEBOOK_PUBLIC    zeppelin.notebook.public    true    Make notebook public (set only `owners`) by default when created/imported. If set t
 o `false` will add `user` to `readers` and `writers` as well, making it private and invisible to other users unless permissions are granted.        ZEPPELIN_INTERPRETERS    zeppelin.interpreters      org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.shell.ShellInterpreter,    ...              Comma separated interpreter configurations [Class]       NOTE: This property is deprecated since Zeppelin-0.6.0 and will not be supported from Zeppelin-0.7.0 on.            ZEPPELIN_INTERPRETER_DIR    zeppelin.interpreter.dir    interpreter    Interpreter directory        ZEPPELIN_WEBSOCKET_MAX_TEXT_MESSAGE_SIZE    zeppelin.websocket.max.text.message.size    1024000    Size (in characters) of the maximum text message that can be received by websocket.  Start Apache Zeppelin with a service managerNote : The below de
 scription was written based on Ubuntu Linux.Apache Zeppelin can be auto-started as a service with an init script, using a service manager like upstart.This is an example upstart script saved as /etc/init/zeppelin.confThis allows the service to be managed with commands such assudo service zeppelin start  sudo service zeppelin stop  sudo service zeppelin restartOther service managers could use a similar approach with the upstart argument passed to the zeppelin-daemon.sh script.bin/zeppelin-daemon.sh upstartzeppelin.confdescription "zeppelin"start on (local-filesystems and net-device-up IFACE!=lo)stop on shutdown# Respawn the process on unexpected terminationrespawn# respawn the job up to 7 times within a 5 second period.# If the job exceeds these values, it will be stopped and marked as failed.respawn limit 7 5# zeppelin was installed in /usr/share/zeppelin in this examplechdir /usr/share/zeppelinexec bin/zeppelin-daemon.sh upstartBuilding from SourceIf you want to b
 uild from source instead of using binary package, follow the instructions here.",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Quick StartWelcome to Apache Zeppelin! On this page are instructions to help you get started.InstallationApache Zeppelin officially supports and is tested on the following environments:      Name    Value        Oracle JDK    1.7  (set JAVA_HOME)        OS    Mac OSX  Ubuntu 14.X  CentOS 6.X  Windows 7 Pro SP1  Downloading Binary PackageTwo binary packages are available on the Apache Zeppelin Download Page. Only difference between 
 these two binaries is interpreters are included in the package file.Package with all interpreters.Just unpack it in a directory of your choice and you're ready to go.Package with net-install interpreters.Unpack and follow install additional interpreters to install interpreters. If you're unsure, just run ./bin/install-interpreter.sh --all and install all interpreters.Starting Apache ZeppelinStarting Apache Zeppelin from the Command LineOn all unix like platforms:bin/zeppelin-daemon.sh startIf you are on Windows:binzeppelin.cmdAfter Zeppelin has started successfully, go to http://localhost:8080 with your web browser.Stopping Zeppelinbin/zeppelin-daemon.sh stopStart Apache Zeppelin with a service managerNote : The below description was written based on Ubuntu Linux.Apache Zeppelin can be auto-started as a service with an init script, using a service manager like upstart.This is an example upstart script saved as /etc/init/zeppelin.confThis allows the service to be mana
 ged with commands such assudo service zeppelin start  sudo service zeppelin stop  sudo service zeppelin restartOther service managers could use a similar approach with the upstart argument passed to the zeppelin-daemon.sh script.bin/zeppelin-daemon.sh upstartzeppelin.confdescription "zeppelin"start on (local-filesystems and net-device-up IFACE!=lo)stop on shutdown# Respawn the process on unexpected terminationrespawn# respawn the job up to 7 times within a 5 second period.# If the job exceeds these values, it will be stopped and marked as failed.respawn limit 7 5# zeppelin was installed in /usr/share/zeppelin in this examplechdir /usr/share/zeppelinexec bin/zeppelin-daemon.sh upstartNext StepsCongratulations, you have successfully installed Apache Zeppelin! Here are few steps you might find useful:New to Apache Zeppelin...For an in-depth overview, head to Explore Apache Zeppelin UI.And then, try run tutorial notebook in your Zeppelin.And see how to change configura
 tions like port number, etc.Zeppelin with Apache Spark ...To know more about deep integration with Apache Spark, check Spark Interpreter.Zeppelin with JDBC data sources ...Check JDBC Interpreter to know more about configure and uses multiple JDBC data sources.Zeppelin with Python ...Check Python interpreter to know more about Matplotlib, Pandas, Conda/Docker environment integration.Multi-user environment ...Turn on authentication.Manage your notebook permission.For more informations, go to More -> Security section.Other useful informations ...Learn how Display System works.Use Service Manager to start Zeppelin.If you're using previous version please see Upgrade Zeppelin version.Building Apache Zeppelin from SourceIf you want to build from source instead of using binary package, follow the instructions here.",
       "url": " /install/install.html",
       "group": "install",
-      "excerpt": "This page will help you get started and will guide you through installing Apache Zeppelin, running it in the command line and configuring options."
+      "excerpt": "This page will help you get started and will guide you through installing Apache Zeppelin and running it in the command line."
     }
     ,
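The install.html entry describes running Zeppelin under a service manager. Reconstructed from that entry, the example upstart script it saves as /etc/init/zeppelin.conf reads roughly as follows (the /usr/share/zeppelin path is just the entry's example install location):

    description "zeppelin"

    start on (local-filesystems and net-device-up IFACE!=lo)
    stop on shutdown

    # Respawn the process on unexpected termination
    respawn

    # respawn the job up to 7 times within a 5 second period.
    # If the job exceeds these values, it will be stopped and marked as failed.
    respawn limit 7 5

    # zeppelin was installed in /usr/share/zeppelin in this example
    chdir /usr/share/zeppelin
    exec bin/zeppelin-daemon.sh upstart

The service is then managed with sudo service zeppelin start, sudo service zeppelin stop and sudo service zeppelin restart.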
     
@@ -127,7 +138,7 @@
 
     "/install/spark_cluster_mode.html": {
       "title": "Apache Zeppelin on Spark cluster mode",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Apache Zeppelin on Spark Cluster ModeOverviewApache Spark has supported three cluster manager types(Standalone, Apache Mesos and Hadoop YARN) so far.This document will guide you how you can build and configure the environment on 3 types of Spark cluster manager with Apache Zeppelin using Docker scripts.So install docker on the machine first.Spark standalone modeSpark standalone is a simple cluster manager included with Spark that m
 akes it easy to set up a cluster.You can simply set up Spark standalone environment with below steps.Note : Since Apache Zeppelin and Spark use same 8080 port for their web UI, you might need to change zeppelin.server.port in conf/zeppelin-site.xml.1. Build Docker fileYou can find docker script files under scripts/docker/spark-cluster-managers.cd $ZEPPELIN_HOME/scripts/docker/spark-cluster-managers/spark_standalonedocker build -t "spark_standalone" .2. Run dockerdocker run -it -p 8080:8080 -p 7077:7077 -p 8888:8888 -p 8081:8081 -h sparkmaster --name spark_standalone spark_standalone bash;Note that sparkmaster hostname used here to run docker container should be defined in your /etc/hosts.3. Configure Spark interpreter in ZeppelinSet Spark master as spark://<hostname>:7077 in Zeppelin Interpreters setting page.4. Run Zeppelin with Spark interpreterAfter running single paragraph with Spark interpreter in Zeppelin, browse https://<hostname>
 :8080 and check whether Spark cluster is running well or not.You can also simply verify that Spark is running well in Docker with below command.ps -ef | grep sparkSpark on YARN modeYou can simply set up Spark on YARN docker environment with below steps.Note : Since Apache Zeppelin and Spark use same 8080 port for their web UI, you might need to change zeppelin.server.port in conf/zeppelin-site.xml.1. Build Docker fileYou can find docker script files under scripts/docker/spark-cluster-managers.cd $ZEPPELIN_HOME/scripts/docker/spark-cluster-managers/spark_yarn_clusterdocker build -t "spark_yarn" .2. Run dockerdocker run -it  -p 5000:5000  -p 9000:9000  -p 9001:9001  -p 8088:8088  -p 8042:8042  -p 8030:8030  -p 8031:8031  -p 8032:8032  -p 8033:8033  -p 8080:8080  -p 7077:7077  -p 8888:8888  -p 8081:8081  -p 50010:50010  -p 50075:50075  -p 50020:50020  -p 50070:50070  --name spark_yarn  -h sparkmaster  spark_yarn bash;Note that sparkmaster hostname used here to run doc
 ker container should be defined in your /etc/hosts.3. Verify running Spark on YARN.You can simply verify the processes of Spark and YARN are running well in Docker with below command.ps -efYou can also check each application web UI for HDFS on http://<hostname>:50070/, YARN on http://<hostname>:8088/cluster and Spark on http://<hostname>:8080/.4. Configure Spark interpreter in ZeppelinSet following configurations to conf/zeppelin-env.sh.export MASTER=yarn-clientexport HADOOP_CONF_DIR=[your_hadoop_conf_path]export SPARK_HOME=[your_spark_home_path]HADOOP_CONF_DIR(Hadoop configuration path) is defined in /scripts/docker/spark-cluster-managers/spark_yarn_cluster/hdfs_conf.Don't forget to set Spark master as yarn-client in Zeppelin Interpreters setting page like below.5. Run Zeppelin with Spark interpreterAfter running a single paragraph with Spark interpreter in Zeppelin, browse http://<hostname>:8088/cluster/apps and check
  Zeppelin application is running well or not.Spark on Mesos modeYou can simply set up Spark on Mesos docker environment with below steps.1. Build Docker filecd $ZEPPELIN_HOME/scripts/docker/spark-cluster-managers/spark_mesosdocker build -t "spark_mesos" .2. Run dockerdocker run --net=host -it -p 8080:8080 -p 7077:7077 -p 8888:8888 -p 8081:8081 -p 8082:8082 -p 5050:5050 -p 5051:5051 -p 4040:4040 -h sparkmaster --name spark_mesos spark_mesos bash;Note that sparkmaster hostname used here to run docker container should be defined in your /etc/hosts.3. Verify running Spark on Mesos.You can simply verify the processes of Spark and Mesos are running well in Docker with below command.ps -efYou can also check each application web UI for Mesos on http://<hostname>:5050/cluster and Spark on http://<hostname>:8080/.4. Configure Spark interpreter in Zeppelinexport MASTER=mesos://127.0.1.1:5050export MESOS_NATIVE_JAVA_LIBRARY=[PATH OF libmesos.so]expo
 rt SPARK_HOME=[PATH OF SPARK HOME]Don't forget to set Spark master as mesos://127.0.1.1:5050 in Zeppelin Interpreters setting page like below.5. Run Zeppelin with Spark interpreterAfter running a single paragraph with Spark interpreter in Zeppelin, browse http://<hostname>:5050/#/frameworks and check Zeppelin application is running well or not.",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Apache Zeppelin on Spark Cluster ModeOverviewApache Spark has supported three cluster manager types(Standalone, Apache Mesos and Hadoop YARN) so far.This document will guide you how you can build and configure the environment on 3 types of Spark cluster manager with Apache Zeppelin using Docker scripts.So install docker on the machine first.Spark standalone modeSpark standalone is a simple cluster manager included with Spark that m
 akes it easy to set up a cluster.You can simply set up Spark standalone environment with below steps.Note : Since Apache Zeppelin and Spark use same 8080 port for their web UI, you might need to change zeppelin.server.port in conf/zeppelin-site.xml.1. Build Docker fileYou can find docker script files under scripts/docker/spark-cluster-managers.cd $ZEPPELIN_HOME/scripts/docker/spark-cluster-managers/spark_standalonedocker build -t "spark_standalone" .2. Run dockerdocker run -it -p 8080:8080 -p 7077:7077 -p 8888:8888 -p 8081:8081 -h sparkmaster --name spark_standalone spark_standalone bash;Note that sparkmaster hostname used here to run docker container should be defined in your /etc/hosts.3. Configure Spark interpreter in ZeppelinSet Spark master as spark://<hostname>:7077 in Zeppelin Interpreters setting page.4. Run Zeppelin with Spark interpreterAfter running single paragraph with Spark interpreter in Zeppelin, browse https://<hostname>
 :8080 and check whether Spark cluster is running well or not.You can also simply verify that Spark is running well in Docker with below command.ps -ef | grep sparkSpark on YARN modeYou can simply set up Spark on YARN docker environment with below steps.Note : Since Apache Zeppelin and Spark use same 8080 port for their web UI, you might need to change zeppelin.server.port in conf/zeppelin-site.xml.1. Build Docker fileYou can find docker script files under scripts/docker/spark-cluster-managers.cd $ZEPPELIN_HOME/scripts/docker/spark-cluster-managers/spark_yarn_clusterdocker build -t "spark_yarn" .2. Run dockerdocker run -it  -p 5000:5000  -p 9000:9000  -p 9001:9001  -p 8088:8088  -p 8042:8042  -p 8030:8030  -p 8031:8031  -p 8032:8032  -p 8033:8033  -p 8080:8080  -p 7077:7077  -p 8888:8888  -p 8081:8081  -p 50010:50010  -p 50075:50075  -p 50020:50020  -p 50070:50070  --name spark_yarn  -h sparkmaster  spark_yarn bash;Note that sparkmaster hostname used here to run doc
 ker container should be defined in your /etc/hosts.3. Verify running Spark on YARN.You can simply verify the processes of Spark and YARN are running well in Docker with below command.ps -efYou can also check each application web UI for HDFS on http://<hostname>:50070/, YARN on http://<hostname>:8088/cluster and Spark on http://<hostname>:8080/.4. Configure Spark interpreter in ZeppelinSet following configurations to conf/zeppelin-env.sh.export MASTER=yarn-clientexport HADOOP_CONF_DIR=[your_hadoop_conf_path]export SPARK_HOME=[your_spark_home_path]HADOOP_CONF_DIR(Hadoop configuration path) is defined in /scripts/docker/spark-cluster-managers/spark_yarn_cluster/hdfs_conf.Don't forget to set Spark master as yarn-client in Zeppelin Interpreters setting page like below.5. Run Zeppelin with Spark interpreterAfter running a single paragraph with Spark interpreter in Zeppelin, browse http://<hostname>:8088/cluster/apps and check
  Zeppelin application is running well or not.Spark on Mesos modeYou can simply set up Spark on Mesos docker environment with below steps.1. Build Docker filecd $ZEPPELIN_HOME/scripts/docker/spark-cluster-managers/spark_mesosdocker build -t "spark_mesos" .2. Run dockerdocker run --net=host -it -p 8080:8080 -p 7077:7077 -p 8888:8888 -p 8081:8081 -p 8082:8082 -p 5050:5050 -p 5051:5051 -p 4040:4040 -h sparkmaster --name spark_mesos spark_mesos bash;Note that sparkmaster hostname used here to run docker container should be defined in your /etc/hosts.3. Verify running Spark on Mesos.You can simply verify the processes of Spark and Mesos are running well in Docker with below command.ps -efYou can also check each application web UI for Mesos on http://<hostname>:5050/cluster and Spark on http://<hostname>:8080/.4. Configure Spark interpreter in Zeppelinexport MASTER=mesos://127.0.1.1:5050export MESOS_NATIVE_JAVA_LIBRARY=[PATH OF libmesos.so]expo
 rt SPARK_HOME=[PATH OF SPARK HOME]Don't forget to set Spark master as mesos://127.0.1.1:5050 in Zeppelin Interpreters setting page like below.5. Run Zeppelin with Spark interpreterAfter running a single paragraph with Spark interpreter in Zeppelin, browse http://<hostname>:5050/#/frameworks and check Zeppelin application is running well or not.Troubleshooting for Spark on MesosIf you have problem with hostname, use --add-host option when executing dockerrun## use `--add-host=moby:127.0.0.1` option to resolve## since docker container couldn't resolve `moby`: java.net.UnknownHostException: moby: moby: Name or service not known        at java.net.InetAddress.getLocalHost(InetAddress.java:1496)        at org.apache.spark.util.Utils$.findLocalInetAddress(Utils.scala:789)        at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress$lzycompute(Utils.scala:782)        at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddr
 ess(Utils.scala:782)If you have problem with mesos master, try mesos://127.0.0.1 instead of mesos://127.0.1.1I0103 20:17:22.329269   340 sched.cpp:330] New master detected at master@127.0.1.1:5050I0103 20:17:22.330749   340 sched.cpp:341] No credentials provided. Attempting to register without authenticationW0103 20:17:22.333531   340 sched.cpp:736] Ignoring framework registered message because it was sentfrom 'master@127.0.0.1:5050' instead of the leading master 'master@127.0.1.1:5050'W0103 20:17:24.040252   339 sched.cpp:736] Ignoring framework registered message because it was sentfrom 'master@127.0.0.1:5050' instead of the leading master 'master@127.0.1.1:5050'W0103 20:17:26.150250   339 sched.cpp:736] Ignoring framework registered message because it was sentfrom 'master@127.0.0.1:5050' instead of the leading master 'master@127.0.1.1:5050'W0103 20:17:26.737604   339 sched.cpp:736] Ign
 oring framework registered message because it was sentfrom 'master@127.0.0.1:5050' instead of the leading master 'master@127.0.1.1:5050'W0103 20:17:35.241714   336 sched.cpp:736] Ignoring framework registered message because it was sentfrom 'master@127.0.0.1:5050' instead of the leading master 'master@127.0.1.1:5050'",
       "url": " /install/spark_cluster_mode.html",
       "group": "install",
       "excerpt": "This document will guide you how you can build and configure the environment on 3 types of Spark cluster manager(Standalone, Hadoop Yarn, Apache Mesos) with Apache Zeppelin using docker scripts."
@@ -325,7 +336,7 @@
 
     "/interpreter/livy.html": {
       "title": "Livy Interpreter for Apache Zeppelin",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Livy Interpreter for Apache ZeppelinOverviewLivy is an open source REST interface for interacting with Spark from anywhere. It supports executing snippets of code or programs in a Spark context that runs locally or in YARN.Interactive Scala, Python and R shellsBatch submissions in Scala, Java, PythonMulti users can share the same server (impersonation support)Can be used for submitting jobs from anywhere with RESTDoes not require a
 ny code change to your programsRequirementsAdditional requirements for the Livy interpreter are:Spark 1.3 or above.Livy server.ConfigurationWe added some common configurations for spark, and you can set any configuration you want.This link contains all spark configurations: http://spark.apache.org/docs/latest/configuration.html#available-properties.And instead of starting property with spark. it should be replaced with livy.spark..Example: spark.master to livy.spark.master      Property    Default    Description          livy.spark.master      local[*]      Spark master uri. ex) spark://masterhost:7077          zeppelin.livy.url    http://localhost:8998    URL where livy server is running        zeppelin.livy.spark.maxResult    1000    Max number of Spark SQL result to display.        zeppelin.livy.displayAppInfo    false    Whether to display app info        livy.spark.driver.cores        Driver cores. ex) 1, 2.          livy.spark.driver.memory        Driver memory. ex) 512m, 32g.
           livy.spark.executor.instances        Executor instances. ex) 1, 4.          livy.spark.executor.cores        Num cores per executor. ex) 1, 4.        livy.spark.executor.memory        Executor memory per worker instance. ex) 512m, 32g.        livy.spark.dynamicAllocation.enabled        Use dynamic resource allocation. ex) True, False.        livy.spark.dynamicAllocation.cachedExecutorIdleTimeout        Remove an executor which has cached data blocks.        livy.spark.dynamicAllocation.minExecutors        Lower bound for the number of executors.        livy.spark.dynamicAllocation.initialExecutors        Initial number of executors to run.        livy.spark.dynamicAllocation.maxExecutors        Upper bound for the number of executors.            livy.spark.jars.packages            Adding extra libraries to livy interpreter    Adding External librariesYou can load dynamic library to livy interpreter by set livy.spark.jars.packages property to comma-separated list of maven c
 oordinates of jars to include on the driver and executor classpaths. The format for the coordinates should be groupId:artifactId:version. Example      Property    Example    Description          livy.spark.jars.packages      io.spray:spray-json_2.10:1.3.1      Adding extra libraries to livy interpreter        How to useBasically, you can usespark%livy.sparksc.versionpyspark%livy.pysparkprint "1"sparkR%livy.sparkrhello <- function( name ) {    sprintf( "Hello, %s", name );}hello("livy")ImpersonationWhen Zeppelin server is running with authentication enabled, then this interpreter utilizes Livy’s user impersonation feature i.e. sends extra parameter for creating and running a session ("proxyUser": "${loggedInUser}"). This is particularly useful when multi users are sharing a Notebook server.Apply Zeppelin Dynamic FormsYou can leverage Zeppelin Dynamic Form. You can use both the text i
 nput and select form parameterization features.%livy.pysparkprint "${group_by=product_id,product_id|product_name|customer_id|store_id}"FAQLivy debugging: If you see any of these in error consoleConnect to livyhost:8998 [livyhost/127.0.0.1, livyhost/0:0:0:0:0:0:0:1] failed: Connection refusedLooks like the livy server is not up yet or the config is wrongException: Session not found, Livy server would have restarted, or lost session.The session would have timed out, you may need to restart the interpreter.Blacklisted configuration values in session config: spark.masterEdit conf/spark-blacklist.conf file in livy server and comment out #spark.master line.If you choose to work on livy in apps/spark/java directory in https://github.com/cloudera/hue,copy spark-user-configurable-options.template to spark-user-configurable-options.conf file in livy server and comment out #spark.master. ",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Livy Interpreter for Apache ZeppelinOverviewLivy is an open source REST interface for interacting with Spark from anywhere. It supports executing snippets of code or programs in a Spark context that runs locally or in YARN.Interactive Scala, Python and R shellsBatch submissions in Scala, Java, PythonMulti users can share the same server (impersonation support)Can be used for submitting jobs from anywhere with RESTDoes not require a
 ny code change to your programsRequirementsAdditional requirements for the Livy interpreter are:Spark 1.3 or above.Livy server.ConfigurationWe added some common Spark configurations, and you can set any configuration you want.You can find all Spark configurations here.Instead of prefixing a property with spark., prefix it with livy.spark..Example: spark.driver.memory becomes livy.spark.driver.memory      Property    Default    Description        zeppelin.livy.url    http://localhost:8998    URL where the livy server is running        zeppelin.livy.spark.maxResult    1000    Max number of Spark SQL results to display.        zeppelin.livy.displayAppInfo    false    Whether to display app info        livy.spark.driver.cores        Driver cores. ex) 1, 2.          livy.spark.driver.memory        Driver memory. ex) 512m, 32g.          livy.spark.executor.instances        Executor instances. ex) 1, 4.          livy.spark.executor.cores        Num cores per executor. ex) 1, 4.   
      livy.spark.executor.memory        Executor memory per worker instance. ex) 512m, 32g.        livy.spark.dynamicAllocation.enabled        Use dynamic resource allocation. ex) True, False.        livy.spark.dynamicAllocation.cachedExecutorIdleTimeout        Remove an executor which has cached data blocks.        livy.spark.dynamicAllocation.minExecutors        Lower bound for the number of executors.        livy.spark.dynamicAllocation.initialExecutors        Initial number of executors to run.        livy.spark.dynamicAllocation.maxExecutors        Upper bound for the number of executors.            livy.spark.jars.packages            Adding extra libraries to livy interpreter    We removed livy.spark.master in zeppelin-0.7 because we suggest using livy 0.3 with zeppelin-0.7, and livy 0.3 does not allow specifying livy.spark.master; it enforces yarn-cluster mode.Adding External librariesYou can load dynamic libraries into the livy interpreter by setting the livy.spark.jars.packages property to a comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. The format for the coordinates should be groupId:artifactId:version. Example      Property    Example    Description          livy.spark.jars.packages      io.spray:spray-json_2.10:1.3.1      Adding extra libraries to livy interpreter        How to useBasically, you can usespark%livy.sparksc.versionpyspark%livy.pysparkprint "1"sparkR%livy.sparkrhello <- function( name ) {    sprintf( "Hello, %s", name );}hello("livy")ImpersonationWhen the Zeppelin server is running with authentication enabled, this interpreter utilizes Livy’s user impersonation feature, i.e. it sends an extra parameter for creating and running a session ("proxyUser": "${loggedInUser}"). This is particularly useful when multiple users are sharing a Notebook server.Apply Zeppelin Dynamic FormsYou can leverage Zeppelin Dynamic Form. You can use both the text input and select form parameterization features.%livy.pysparkprint "${group_by=product_id,product_id|product_name|customer_id|store_id}"FAQLivy debugging: If you see any of these in the error consoleConnect to livyhost:8998 [livyhost/127.0.0.1, livyhost/0:0:0:0:0:0:0:1] failed: Connection refusedLooks like the livy server is not up yet or the config is wrongException: Session not found, Livy server would have restarted, or lost session.The session would have timed out, you may need to restart the interpreter.Blacklisted configuration values in session config: spark.masterEdit the conf/spark-blacklist.conf file in the livy server and comment out the #spark.master line.If you choose to work on livy in the apps/spark/java directory in https://github.com/cloudera/hue, copy spark-user-configurable-options.template to the spark-user-configurable-options.conf file in the livy server and comment out #spark.master. ",
       "url": " /interpreter/livy.html",
       "group": "interpreter",
       "excerpt": "Livy is an open source REST interface for interacting with Spark from anywhere. It supports executing snippets of code or programs in a Spark context that runs locally or in YARN."
@@ -649,7 +660,7 @@
 
     "/security/notebook_authorization.html": {
       "title": "Notebook Authorization in Apache Zeppelin",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->{% include JB/setup %}# Zeppelin Notebook Authorization## OverviewWe assume that there is an **Shiro Authentication** component that associates a user string and a set of group strings with every NotebookSocket. If you don't set the authentication components yet, please check [Shiro authentication for Apache Zeppelin](./shiroauthentication.html) first.## Authorization SettingYou can set Zeppelin notebook permissions in each not
 ebooks. Of course only **notebook owners** can change this configuration. Just click **Lock icon** and open the permission setting page in your notebook.As you can see, each Zeppelin notebooks has 3 entities : * Owners ( users or groups )* Readers ( users or groups )* Writers ( users or groups )Fill out the each forms with comma seperated **users** and **groups** configured in `conf/shiro.ini` file.If the form is empty (*), it means that any users can perform that operation.If someone who doesn't have **read** permission is trying to access the notebook or someone who doesn't have **write** permission is trying to edit the notebook, Zeppelin will ask to login or block the user. ## How it worksIn this section, we will explain the detail about how the notebook authorization works in backend side.### NotebookServerThe [NotebookServer](https://github.com/apache/zeppelin/blob/master/zeppelin-server/src/main/java/org/apache/zeppelin/socket/NotebookServer.java) classifies every not
 ebook operations into three categories: **Read**, **Write**, **Manage**.Before executing a notebook operation, it checks if the user and the groups associated with the `NotebookSocket` have permissions. For example, before executing a **Read** operation, it checks if the user and the groups have at least one entity that belongs to the **Reader** entities.### Notebook REST API callZeppelin executes a [REST API call](https://github.com/apache/zeppelin/blob/master/zeppelin-server/src/main/java/org/apache/zeppelin/rest/NotebookRestApi.java) for the notebook permission information.In the backend side, Zeppelin gets the user information for the connection and allows the operation if the users and groupsassociated with the current user have at least one entity that belongs to owner entities for the notebook.",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->{% include JB/setup %}# Zeppelin Notebook Authorization## OverviewWe assume that there is an **Shiro Authentication** component that associates a user string and a set of group strings with every NotebookSocket. If you don't set the authentication components yet, please check [Shiro authentication for Apache Zeppelin](./shiroauthentication.html) first.## Authorization SettingYou can set Zeppelin notebook permissions in each not
 ebook. Of course only **notebook owners** can change this configuration. Just click the **Lock icon** and open the permission setting page in your notebook.As you can see, each Zeppelin notebook has 3 entities: * Owners ( users or groups )* Readers ( users or groups )* Writers ( users or groups )Fill out each form with comma-separated **users** and **groups** configured in `conf/shiro.ini` file.If the form is empty (*), it means that any user can perform that operation.If someone who doesn't have **read** permission is trying to access the notebook or someone who doesn't have **write** permission is trying to edit the notebook, Zeppelin will ask the user to log in or will block the user. By default, when you create a new note, the owner is the user who created it, and the readers/writers lists are empty, which means the note is shared publicly. If you don't want notes to be shared by default, you can set `zeppelin.notebook.public` to false in `zeppelin-site.xml`.## How it worksIn this section, we will explain in detail how notebook authorization works on the backend side.### NotebookServerThe [NotebookServer](https://github.com/apache/zeppelin/blob/master/zeppelin-server/src/main/java/org/apache/zeppelin/socket/NotebookServer.java) classifies every notebook operation into three categories: **Read**, **Write**, **Manage**.Before executing a notebook operation, it checks if the user and the groups associated with the `NotebookSocket` have permissions. For example, before executing a **Read** operation, it checks if the user and the groups have at least one entity that belongs to the **Reader** entities.### Notebook REST API callZeppelin executes a [REST API call](https://github.com/apache/zeppelin/blob/master/zeppelin-server/src/main/java/org/apache/zeppelin/rest/NotebookRestApi.java) for the notebook permission information.On the backend side, Zeppelin gets the user information for the connection and allows the operation if the users and groups associated with the current user have at least one entity that belongs to owner entities for the notebook.",
       "url": " /security/notebook_authorization.html",
       "group": "security",
       "excerpt": "This page will guide you how you can set the permission for Zeppelin notebooks. This document assumes that Apache Shiro authentication was set up."
@@ -660,7 +671,7 @@
 
     "/security/shiroauthentication.html": {
       "title": "Apache Shiro Authentication for Apache Zeppelin",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->{% include JB/setup %}# Apache Shiro authentication for Apache Zeppelin## Overview[Apache Shiro](http://shiro.apache.org/) is a powerful and easy-to-use Java security framework that performs authentication, authorization, cryptography, and session management. In this documentation, we will explain step by step how Shiro works for Zeppelin notebook authentication.When you connect to Apache Zeppelin, you will be asked to enter your c
 redentials. Once you logged in, then you have access to all notes including other user's notes.## Security SetupYou can setup **Zeppelin notebook authentication** in some simple steps.### 1. Enable ShiroBy default in `conf`, you will find `shiro.ini.template`, this file is used as an example and it is strongly recommendedto create a `shiro.ini` file by doing the following command line```bashcp conf/shiro.ini.template conf/shiro.ini```For the further information about  `shiro.ini` file format, please refer to [Shiro Configuration](http://shiro.apache.org/configuration.html#Configuration-INISections).### 3. Secure the Websocket channelSet to property **zeppelin.anonymous.allowed** to **false** in `conf/zeppelin-site.xml`. If you don't have this file yet, just copy `conf/zeppelin-site.xml.template` to `conf/zeppelin-site.xml`.### 4. Start Zeppelin```bin/zeppelin-daemon.sh start (or restart)```Then you can browse Zeppelin at [http://localhost:8080](http://localhost:8080).### 5. 
 LoginFinally, you can login using one of the below **username/password** combinations.```[users]admin = password1, adminuser1 = password2, role1, role2user2 = password3, role3user3 = password4, role2```You can set the roles for each users next to the password.## Groups and permissions (optional)In case you want to leverage user groups and permissions, use one of the following configuration for LDAP or AD under `[main]` segment in `shiro.ini`.```activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealmactiveDirectoryRealm.systemUsername = userNameAactiveDirectoryRealm.systemPassword = passwordAactiveDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COMactiveDirectoryRealm.url = ldap://ldap.test.com:389activeDirectoryRealm.groupRolesMap = "CN=aGroupName,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"group1"activeDirectoryRealm.authorizationCachingEnabled = falseldapRealm = org.apache.zeppelin.server.LdapGroupRealm# search base for ldap 
 groups (only relevant for LdapGroupRealm):ldapRealm.contextFactory.environment[ldap.searchBase] = dc=COMPANY,dc=COMldapRealm.contextFactory.url = ldap://ldap.test.com:389ldapRealm.userDnTemplate = uid={0},ou=Users,dc=COMPANY,dc=COMldapRealm.contextFactory.authenticationMechanism = SIMPLE```also define roles/groups that you want to have in system, like below;```[roles]admin = *hr = *finance = *group1 = *```## Configure Realm (optional)Realms are responsible for authentication and authorization in Apache Zeppelin. By default, Apache Zeppelin uses [IniRealm](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/text/IniRealm.html) (users and groups are configurable in `conf/shiro.ini` file under `[user]` and `[group]` section). You can also leverage Shiro Realms like [JndiLdapRealm](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/ldap/JndiLdapRealm.html), [JdbcRealm](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/jdbc/JdbcRea
 lm.html) or create [our own](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/AuthorizingRealm.html).To learn more about Apache Shiro Realm, please check [this documentation](http://shiro.apache.org/realm.html).We also provide community custom Realms.### Active Directory```activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealmactiveDirectoryRealm.systemUsername = userNameAactiveDirectoryRealm.systemPassword = passwordAactiveDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/conf/zeppelin.jceksactiveDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COMactiveDirectoryRealm.url = ldap://ldap.test.com:389activeDirectoryRealm.groupRolesMap = "CN=aGroupName,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"group1"activeDirectoryRealm.authorizationCachingEnabled = false```Also instead of specifying systemPassword in clear text in shiro.ini administrator can choose to specify the same in "hadoop
  credential".Create a keystore file using the hadoop credential commandline, for this the hadoop commons should be in the classpath`hadoop credential create activeDirectoryRealm.systempassword -provider jceks://file/user/zeppelin/conf/zeppelin.jceks`Change the following values in the Shiro.ini file, and uncomment the line:`activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/conf/zeppelin.jceks`### LDAP```ldapRealm = org.apache.zeppelin.realm.LdapGroupRealm# search base for ldap groups (only relevant for LdapGroupRealm):ldapRealm.contextFactory.environment[ldap.searchBase] = dc=COMPANY,dc=COMldapRealm.contextFactory.url = ldap://ldap.test.com:389ldapRealm.userDnTemplate = uid={0},ou=Users,dc=COMPANY,dc=COMldapRealm.contextFactory.authenticationMechanism = SIMPLE```### PAM[PAM](https://en.wikipedia.org/wiki/Pluggable_authentication_module) authentication support allows the reuse of existing authentication moduls on the host where Zeppelin is running. On
  a typical system modules are configured per service for example sshd, passwd, etc. under `/etc/pam.d/`. You caneither reuse one of these services or create your own for Zeppelin. Activiting PAM authentication requires two parameters: 1. realm: The Shiro realm being used 2. service: The service configured under `/etc/pam.d/` to be used. The name here needs to be the same as the file name under `/etc/pam.d/` ```[main] pamRealm=org.apache.zeppelin.realm.PamRealm pamRealm.service=sshd```### ZeppelinHub[ZeppelinHub](https://www.zeppelinhub.com) is a service that synchronize your Apache Zeppelin notebooks and enables you to collaborate easily.To enable login with your ZeppelinHub credential, apply the following change in `conf/shiro.ini` under `[main]` section.```### A sample for configuring ZeppelinHub RealmzeppelinHubRealm = org.apache.zeppelin.realm.ZeppelinHubRealm## Url of ZeppelinHubzeppelinHubRealm.zeppelinhubUrl = https://www.zeppelinhub.comsecurityManager.realms = $zeppelinHubRe
 alm```> Note: ZeppelinHub is not releated to apache Zeppelin project.## Secure your Zeppelin information (optional)By default, anyone who defined in `[users]` can share **Interpreter Setting**, **Credential** and **Configuration** information in Apache Zeppelin.Sometimes you might want to hide these information for your use case.Since Shiro provides **url-based security**, you can hide the information by commenting or uncommenting these below lines in `conf/shiro.ini`.```[urls]/api/interpreter/** = authc, roles[admin]/api/configurations/** = authc, roles[admin]/api/credential/** = authc, roles[admin]```In this case, only who have `admin` role can see **Interpreter Setting**, **Credential** and **Configuration** information.If you want to grant this permission to other users, you can change **roles[ ]** as you defined at `[users]` section.> **NOTE :** All of the above configurations are defined in the `conf/shiro.ini` file. This documentation is originally from [SECURITY-README
 .md](https://github.com/apache/zeppelin/blob/master/SECURITY-README.md).## Other authentication methods- [HTTP Basic Authentication using NGINX](./authentication.html)",

[... 5 lines stripped ...]
Modified: zeppelin/site/docs/0.7.0-SNAPSHOT/security/authentication.html
URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/security/authentication.html?rev=1777868&r1=1777867&r2=1777868&view=diff
==============================================================================
--- zeppelin/site/docs/0.7.0-SNAPSHOT/security/authentication.html (original)
+++ zeppelin/site/docs/0.7.0-SNAPSHOT/security/authentication.html Sun Jan  8 05:53:30 2017
@@ -69,7 +69,7 @@
                 <li role="separator" class="divider"></li>
                 <li class="title"><span><b>Getting Started</b><span></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/install/install.html">Install</a></li>
-                <li><a href="/docs/0.7.0-SNAPSHOT/install/install.html#apache-zeppelin-configuration">Configuration</a></li>
+                <li><a href="/docs/0.7.0-SNAPSHOT/install/configuration.html">Configuration</a></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/quickstart/explorezeppelinui.html">Explore Zeppelin UI</a></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/quickstart/tutorial.html">Tutorial</a></li>
                 <li role="separator" class="divider"></li>
@@ -314,7 +314,7 @@ The end result is that all requests to t
 
       <hr>
       <footer>
-        <!-- <p>&copy; 2016 The Apache Software Foundation</p>-->
+        <!-- <p>&copy; 2017 The Apache Software Foundation</p>-->
       </footer>
     </div>
 

Modified: zeppelin/site/docs/0.7.0-SNAPSHOT/security/datasource_authorization.html
URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/security/datasource_authorization.html?rev=1777868&r1=1777867&r2=1777868&view=diff
==============================================================================
--- zeppelin/site/docs/0.7.0-SNAPSHOT/security/datasource_authorization.html (original)
+++ zeppelin/site/docs/0.7.0-SNAPSHOT/security/datasource_authorization.html Sun Jan  8 05:53:30 2017
@@ -69,7 +69,7 @@
                 <li role="separator" class="divider"></li>
                 <li class="title"><span><b>Getting Started</b><span></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/install/install.html">Install</a></li>
-                <li><a href="/docs/0.7.0-SNAPSHOT/install/install.html#apache-zeppelin-configuration">Configuration</a></li>
+                <li><a href="/docs/0.7.0-SNAPSHOT/install/configuration.html">Configuration</a></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/quickstart/explorezeppelinui.html">Explore Zeppelin UI</a></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/quickstart/tutorial.html">Tutorial</a></li>
                 <li role="separator" class="divider"></li>
@@ -260,7 +260,7 @@ Please keep track <a href="https://issue
 
       <hr>
       <footer>
-        <!-- <p>&copy; 2016 The Apache Software Foundation</p>-->
+        <!-- <p>&copy; 2017 The Apache Software Foundation</p>-->
       </footer>
     </div>
 

Modified: zeppelin/site/docs/0.7.0-SNAPSHOT/security/notebook_authorization.html
URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/security/notebook_authorization.html?rev=1777868&r1=1777867&r2=1777868&view=diff
==============================================================================
--- zeppelin/site/docs/0.7.0-SNAPSHOT/security/notebook_authorization.html (original)
+++ zeppelin/site/docs/0.7.0-SNAPSHOT/security/notebook_authorization.html Sun Jan  8 05:53:30 2017
@@ -69,7 +69,7 @@
                 <li role="separator" class="divider"></li>
                 <li class="title"><span><b>Getting Started</b><span></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/install/install.html">Install</a></li>
-                <li><a href="/docs/0.7.0-SNAPSHOT/install/install.html#apache-zeppelin-configuration">Configuration</a></li>
+                <li><a href="/docs/0.7.0-SNAPSHOT/install/configuration.html">Configuration</a></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/quickstart/explorezeppelinui.html">Explore Zeppelin UI</a></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/quickstart/tutorial.html">Tutorial</a></li>
                 <li role="separator" class="divider"></li>
@@ -260,7 +260,7 @@ associated with the current user have at
 
       <hr>
       <footer>
-        <!-- <p>&copy; 2016 The Apache Software Foundation</p>-->
+        <!-- <p>&copy; 2017 The Apache Software Foundation</p>-->
       </footer>
     </div>
 

Modified: zeppelin/site/docs/0.7.0-SNAPSHOT/security/shiroauthentication.html
URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/security/shiroauthentication.html?rev=1777868&r1=1777867&r2=1777868&view=diff
==============================================================================
--- zeppelin/site/docs/0.7.0-SNAPSHOT/security/shiroauthentication.html (original)
+++ zeppelin/site/docs/0.7.0-SNAPSHOT/security/shiroauthentication.html Sun Jan  8 05:53:30 2017
@@ -69,7 +69,7 @@
                 <li role="separator" class="divider"></li>
                 <li class="title"><span><b>Getting Started</b><span></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/install/install.html">Install</a></li>
-                <li><a href="/docs/0.7.0-SNAPSHOT/install/install.html#apache-zeppelin-configuration">Configuration</a></li>
+                <li><a href="/docs/0.7.0-SNAPSHOT/install/configuration.html">Configuration</a></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/quickstart/explorezeppelinui.html">Explore Zeppelin UI</a></li>
                 <li><a href="/docs/0.7.0-SNAPSHOT/quickstart/tutorial.html">Tutorial</a></li>
                 <li role="separator" class="divider"></li>

Modified: zeppelin/site/docs/0.7.0-SNAPSHOT/sitemap.txt
URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/sitemap.txt?rev=1777868&r1=1777867&r2=1777868&view=diff
==============================================================================
--- zeppelin/site/docs/0.7.0-SNAPSHOT/sitemap.txt (original)
+++ zeppelin/site/docs/0.7.0-SNAPSHOT/sitemap.txt Sun Jan  8 05:53:30 2017
@@ -11,6 +11,7 @@ http://zeppelin.apache.org/displaysystem
 http://zeppelin.apache.org/index.html
 http://zeppelin.apache.org/install/build.html
 http://zeppelin.apache.org/install/cdh.html
+http://zeppelin.apache.org/install/configuration.html
 http://zeppelin.apache.org/install/docker.html
 http://zeppelin.apache.org/install/install.html
 http://zeppelin.apache.org/install/spark_cluster_mode.html