Posted to commits@zeppelin.apache.org by ku...@apache.org on 2017/07/24 05:01:09 UTC
svn commit: r1802748 [1/2] - in /zeppelin/site/docs/0.8.0-SNAPSHOT: atom.xml
interpreter/livy.html rss.xml search_data.json
setup/security/shiro_authentication.html usage/rest_api/helium.html
usage/rest_api/interpreter.html
Author: kun
Date: Mon Jul 24 05:01:08 2017
New Revision: 1802748
URL: http://svn.apache.org/viewvc?rev=1802748&view=rev
Log:
Update 0.8.0-SNAPSHOT
Modified:
zeppelin/site/docs/0.8.0-SNAPSHOT/atom.xml
zeppelin/site/docs/0.8.0-SNAPSHOT/interpreter/livy.html
zeppelin/site/docs/0.8.0-SNAPSHOT/rss.xml
zeppelin/site/docs/0.8.0-SNAPSHOT/search_data.json
zeppelin/site/docs/0.8.0-SNAPSHOT/setup/security/shiro_authentication.html
zeppelin/site/docs/0.8.0-SNAPSHOT/usage/rest_api/helium.html
zeppelin/site/docs/0.8.0-SNAPSHOT/usage/rest_api/interpreter.html
Modified: zeppelin/site/docs/0.8.0-SNAPSHOT/atom.xml
URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.8.0-SNAPSHOT/atom.xml?rev=1802748&r1=1802747&r2=1802748&view=diff
==============================================================================
--- zeppelin/site/docs/0.8.0-SNAPSHOT/atom.xml (original)
+++ zeppelin/site/docs/0.8.0-SNAPSHOT/atom.xml Mon Jul 24 05:01:08 2017
@@ -4,7 +4,7 @@
<title>Apache Zeppelin</title>
<link href="http://zeppelin.apache.org/" rel="self"/>
<link href="http://zeppelin.apache.org"/>
- <updated>2017-07-07T16:26:53+09:00</updated>
+ <updated>2017-07-24T13:59:50+09:00</updated>
<id>http://zeppelin.apache.org</id>
<author>
<name>The Apache Software Foundation</name>
Modified: zeppelin/site/docs/0.8.0-SNAPSHOT/interpreter/livy.html
URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.8.0-SNAPSHOT/interpreter/livy.html?rev=1802748&r1=1802747&r2=1802748&view=diff
==============================================================================
--- zeppelin/site/docs/0.8.0-SNAPSHOT/interpreter/livy.html (original)
+++ zeppelin/site/docs/0.8.0-SNAPSHOT/interpreter/livy.html Mon Jul 24 05:01:08 2017
@@ -310,7 +310,7 @@ Example: <code>spark.driver.memory</code
</tr>
<tr>
<td>zeppelin.livy.displayAppInfo</td>
- <td>false</td>
+ <td>true</td>
<td>Whether to display app info</td>
</tr>
<tr>
@@ -389,7 +389,7 @@ Example: <code>spark.driver.memory</code
<h2>Adding External libraries</h2>
-<p>You can load dynamic library to livy interpreter by set <code>livy.spark.jars.packages</code> property to comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. The format for the coordinates should be groupId:artifactId:version. </p>
+<p>You can load a dynamic library into the livy interpreter by setting the <code>livy.spark.jars.packages</code> property to a comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. The format for the coordinates should be groupId:artifactId:version.</p>
<p>Example</p>
@@ -405,7 +405,6 @@ Example: <code>spark.driver.memory</code
<td>Adding extra libraries to livy interpreter</td>
</tr>
</table>
-
<h2>How to use</h2>
@@ -429,9 +428,9 @@ hello("livy")
</code></pre></div>
<h2>Impersonation</h2>
-<p>When Zeppelin server is running with authentication enabled,
-then this interpreter utilizes Livy's user impersonation feature
-i.e. sends extra parameter for creating and running a session ("proxyUser": "${loggedInUser}").
+<p>When Zeppelin server is running with authentication enabled,
+then this interpreter utilizes Livy's user impersonation feature
+i.e. sends extra parameter for creating and running a session ("proxyUser": "${loggedInUser}").
This is particularly useful when multi users are sharing a Notebook server.</p>
<h2>Apply Zeppelin Dynamic Forms</h2>
@@ -463,7 +462,7 @@ print "${group_by=product_id,produc
<p>Edit <code>conf/spark-blacklist.conf</code> file in livy server and comment out <code>#spark.master</code> line.</p>
<p>If you choose to work on livy in <code>apps/spark/java</code> directory in <a href="https://github.com/cloudera/hue">https://github.com/cloudera/hue</a>,
-copy <code>spark-user-configurable-options.template</code> to <code>spark-user-configurable-options.conf</code> file in livy server and comment out <code>#spark.master</code>. </p>
+copy <code>spark-user-configurable-options.template</code> to <code>spark-user-configurable-options.conf</code> file in livy server and comment out <code>#spark.master</code>.</p>
</div>
</div>
Modified: zeppelin/site/docs/0.8.0-SNAPSHOT/rss.xml
URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.8.0-SNAPSHOT/rss.xml?rev=1802748&r1=1802747&r2=1802748&view=diff
==============================================================================
--- zeppelin/site/docs/0.8.0-SNAPSHOT/rss.xml (original)
+++ zeppelin/site/docs/0.8.0-SNAPSHOT/rss.xml Mon Jul 24 05:01:08 2017
@@ -5,8 +5,8 @@
<description>Apache Zeppelin - The Apache Software Foundation</description>
<link>http://zeppelin.apache.org</link>
<link>http://zeppelin.apache.org</link>
- <lastBuildDate>2017-07-07T16:26:53+09:00</lastBuildDate>
- <pubDate>2017-07-07T16:26:53+09:00</pubDate>
+ <lastBuildDate>2017-07-24T13:59:50+09:00</lastBuildDate>
+ <pubDate>2017-07-24T13:59:50+09:00</pubDate>
<ttl>1800</ttl>
Modified: zeppelin/site/docs/0.8.0-SNAPSHOT/search_data.json
URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.8.0-SNAPSHOT/search_data.json?rev=1802748&r1=1802747&r2=1802748&view=diff
==============================================================================
--- zeppelin/site/docs/0.8.0-SNAPSHOT/search_data.json (original)
+++ zeppelin/site/docs/0.8.0-SNAPSHOT/search_data.json Mon Jul 24 05:01:08 2017
@@ -270,7 +270,7 @@
"/interpreter/livy.html": {
"title": "Livy Interpreter for Apache Zeppelin",
- "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Livy Interpreter for Apache ZeppelinOverviewLivy is an open source REST interface for interacting with Spark from anywhere. It supports executing snippets of code or programs in a Spark context that runs locally or in YARN.Interactive Scala, Python and R shellsBatch submissions in Scala, Java, PythonMulti users can share the same server (impersonation support)Can be used for submitting jobs from anywhere with RESTDoes not require a
ny code change to your programsRequirementsAdditional requirements for the Livy interpreter are:Spark 1.3 or above.Livy server.ConfigurationWe added some common configurations for spark, and you can set any configuration you want.You can find all Spark configurations in here.And instead of starting property with spark. it should be replaced with livy.spark..Example: spark.driver.memory to livy.spark.driver.memory Property Default Description zeppelin.livy.url http://localhost:8998 URL where livy server is running zeppelin.livy.spark.sql.maxResult 1000 Max number of Spark SQL result to display. zeppelin.livy.spark.sql.field.truncate true Whether to truncate field values longer than 20 characters or not zeppelin.livy.session.create_timeout 120 Timeout in seconds for session creation zeppelin.livy.displayAppInfo false Whether to display app info zeppelin.livy.pull_status.interval.millis 1000 The in
terval for checking paragraph execution status livy.spark.driver.cores Driver cores. ex) 1, 2. livy.spark.driver.memory Driver memory. ex) 512m, 32g. livy.spark.executor.instances Executor instances. ex) 1, 4. livy.spark.executor.cores Num cores per executor. ex) 1, 4. livy.spark.executor.memory Executor memory per worker instance. ex) 512m, 32g. livy.spark.dynamicAllocation.enabled Use dynamic resource allocation. ex) True, False. livy.spark.dynamicAllocation.cachedExecutorIdleTimeout Remove an executor which has cached data blocks. livy.spark.dynamicAllocation.minExecutors Lower bound for the number of executors. livy.spark.dynamicAllocation.initialExecutors Initial number of executors to run. livy.spark.dynamicAllocation.maxExecutors Upper bound for the number of executors. livy.spark.jars.packages Adding extra lib
raries to livy interpreter zeppelin.livy.ssl.trustStore client trustStore file. Used when livy ssl is enabled zeppelin.livy.ssl.trustStorePassword password for trustStore file. Used when livy ssl is enabled We remove livy.spark.master in zeppelin-0.7. Because we sugguest user to use livy 0.3 in zeppelin-0.7. And livy 0.3 don&#39;t allow to specify livy.spark.master, it enfornce yarn-cluster mode.Adding External librariesYou can load dynamic library to livy interpreter by set livy.spark.jars.packages property to comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. The format for the coordinates should be groupId:artifactId:version. Example Property Example Description livy.spark.jars.packages io.spray:spray-json_2.10:1.3.1 Adding extra libraries to livy interpreter How to useBasically, you can usespark%livy.sparksc.versionpyspark%livy.pysparkprint &quot;1&q
uot;sparkR%livy.sparkrhello &lt;- function( name ) { sprintf( &quot;Hello, %s&quot;, name );}hello(&quot;livy&quot;)ImpersonationWhen Zeppelin server is running with authentication enabled, then this interpreter utilizes Livy's user impersonation feature i.e. sends extra parameter for creating and running a session (&quot;proxyUser&quot;: &quot;${loggedInUser}&quot;). This is particularly useful when multi users are sharing a Notebook server.Apply Zeppelin Dynamic FormsYou can leverage Zeppelin Dynamic Form. You can use both the text input and select form parameterization features.%livy.pysparkprint &quot;${group_by=product_id,product_id|product_name|customer_id|store_id}&quot;FAQLivy debugging: If you see any of these in error consoleConnect to livyhost:8998 [livyhost/127.0.0.1, livyhost/0:0:0:0:0:0:0:1] failed: Connection refusedLooks like the livy server is not up yet or the config is wrongException: Session not found, Livy serv
er would have restarted, or lost session.The session would have timed out, you may need to restart the interpreter.Blacklisted configuration values in session config: spark.masterEdit conf/spark-blacklist.conf file in livy server and comment out #spark.master line.If you choose to work on livy in apps/spark/java directory in https://github.com/cloudera/hue,copy spark-user-configurable-options.template to spark-user-configurable-options.conf file in livy server and comment out #spark.master. ",
+ "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Livy Interpreter for Apache ZeppelinOverviewLivy is an open source REST interface for interacting with Spark from anywhere. It supports executing snippets of code or programs in a Spark context that runs locally or in YARN.Interactive Scala, Python and R shellsBatch submissions in Scala, Java, PythonMulti users can share the same server (impersonation support)Can be used for submitting jobs from anywhere with RESTDoes not require a
ny code change to your programsRequirementsAdditional requirements for the Livy interpreter are:Spark 1.3 or above.Livy server.ConfigurationWe added some common configurations for spark, and you can set any configuration you want.You can find all Spark configurations in here.And instead of starting property with spark. it should be replaced with livy.spark..Example: spark.driver.memory to livy.spark.driver.memory Property Default Description zeppelin.livy.url http://localhost:8998 URL where livy server is running zeppelin.livy.spark.sql.maxResult 1000 Max number of Spark SQL result to display. zeppelin.livy.spark.sql.field.truncate true Whether to truncate field values longer than 20 characters or not zeppelin.livy.session.create_timeout 120 Timeout in seconds for session creation zeppelin.livy.displayAppInfo true Whether to display app info zeppelin.livy.pull_status.interval.millis 1000 The int
erval for checking paragraph execution status livy.spark.driver.cores Driver cores. ex) 1, 2. livy.spark.driver.memory Driver memory. ex) 512m, 32g. livy.spark.executor.instances Executor instances. ex) 1, 4. livy.spark.executor.cores Num cores per executor. ex) 1, 4. livy.spark.executor.memory Executor memory per worker instance. ex) 512m, 32g. livy.spark.dynamicAllocation.enabled Use dynamic resource allocation. ex) True, False. livy.spark.dynamicAllocation.cachedExecutorIdleTimeout Remove an executor which has cached data blocks. livy.spark.dynamicAllocation.minExecutors Lower bound for the number of executors. livy.spark.dynamicAllocation.initialExecutors Initial number of executors to run. livy.spark.dynamicAllocation.maxExecutors Upper bound for the number of executors. livy.spark.jars.packages Adding extra libr
aries to livy interpreter zeppelin.livy.ssl.trustStore client trustStore file. Used when livy ssl is enabled zeppelin.livy.ssl.trustStorePassword password for trustStore file. Used when livy ssl is enabled We remove livy.spark.master in zeppelin-0.7. Because we sugguest user to use livy 0.3 in zeppelin-0.7. And livy 0.3 don&#39;t allow to specify livy.spark.master, it enfornce yarn-cluster mode.Adding External librariesYou can load dynamic library to livy interpreter by set livy.spark.jars.packages property to comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. The format for the coordinates should be groupId:artifactId:version.Example Property Example Description livy.spark.jars.packages io.spray:spray-json_2.10:1.3.1 Adding extra libraries to livy interpreter How to useBasically, you can usespark%livy.sparksc.versionpyspark%livy.pysparkprint &quot;1&quot;
sparkR%livy.sparkrhello &lt;- function( name ) { sprintf( &quot;Hello, %s&quot;, name );}hello(&quot;livy&quot;)ImpersonationWhen Zeppelin server is running with authentication enabled,then this interpreter utilizes Livy's user impersonation featurei.e. sends extra parameter for creating and running a session (&quot;proxyUser&quot;: &quot;${loggedInUser}&quot;).This is particularly useful when multi users are sharing a Notebook server.Apply Zeppelin Dynamic FormsYou can leverage Zeppelin Dynamic Form. You can use both the text input and select form parameterization features.%livy.pysparkprint &quot;${group_by=product_id,product_id|product_name|customer_id|store_id}&quot;FAQLivy debugging: If you see any of these in error consoleConnect to livyhost:8998 [livyhost/127.0.0.1, livyhost/0:0:0:0:0:0:0:1] failed: Connection refusedLooks like the livy server is not up yet or the config is wrongException: Session not found, Livy server woul
d have restarted, or lost session.The session would have timed out, you may need to restart the interpreter.Blacklisted configuration values in session config: spark.masterEdit conf/spark-blacklist.conf file in livy server and comment out #spark.master line.If you choose to work on livy in apps/spark/java directory in https://github.com/cloudera/hue,copy spark-user-configurable-options.template to spark-user-configurable-options.conf file in livy server and comment out #spark.master.",
"url": " /interpreter/livy.html",
"group": "interpreter",
"excerpt": "Livy is an open source REST interface for interacting with Spark from anywhere. It supports executing snippets of code or programs in a Spark context that runs locally or in YARN."
@@ -380,7 +380,7 @@
"/interpreter/spark.html": {
"title": "Apache Spark Interpreter for Apache Zeppelin",
- "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Spark Interpreter for Apache ZeppelinOverviewApache Spark is a fast and general-purpose cluster computing system.It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs.Apache Spark is supported in Zeppelin with Spark interpreter group which consists of below five interpreters. Name Class Description %spark SparkInterpreter Creates a SparkConte
xt and provides a Scala environment %spark.pyspark PySparkInterpreter Provides a Python environment %spark.r SparkRInterpreter Provides an R environment with SparkR support %spark.sql SparkSQLInterpreter Provides a SQL environment %spark.dep DepInterpreter Dependency loader ConfigurationThe Spark interpreter can be configured with properties provided by Zeppelin.You can also set other Spark properties which are not listed in the table. For a list of additional properties, refer to Spark Available Properties. Property Default Description args Spark commandline args master local[*] Spark master uri. ex) spark://masterhost:7077 spark.app.name Zeppelin The name of spark application. spark.cores.max Total number of cores to use. Empty value uses all available core. spark.executor.memory 1g Executor memory per worker instance. ex) 512m, 32g zeppelin.dep
.additionalRemoteRepository spark-packages, http://dl.bintray.com/spark-packages/maven, false; A list of id,remote-repository-URL,is-snapshot; for each remote repository. zeppelin.dep.localrepo local-repo Local repository for dependency loader PYSPARKPYTHON python Python binary executable to use for PySpark in both driver and workers (default is python). Property spark.pyspark.python take precedence if it is set PYSPARKDRIVERPYTHON python Python binary executable to use for PySpark in driver only (default is PYSPARKPYTHON). Property spark.pyspark.driver.python take precedence if it is set zeppelin.spark.concurrentSQL false Execute multiple SQL concurrently if set true. zeppelin.spark.maxResult 1000 Max number of Spark SQL result to display. zeppelin.spark.printREPLOutput true Print REPL output zeppelin.spark.useHiveContext true Use HiveContext instead of SQLConte
xt if it is true. zeppelin.spark.importImplicit true Import implicits, UDF collection, and sql if set true. zeppelin.spark.enableSupportedVersionCheck true Do not change - developer only setting, not for production use Without any configuration, Spark interpreter works out of box in local mode. But if you want to connect to your Spark cluster, you&#39;ll need to follow below two simple steps.1. Export SPARK_HOMEIn conf/zeppelin-env.sh, export SPARK_HOME environment variable with your Spark installation path.For example,export SPARK_HOME=/usr/lib/sparkYou can optionally set more environment variables# set hadoop conf direxport HADOOP_CONF_DIR=/usr/lib/hadoop# set options to pass spark-submit commandexport SPARK_SUBMIT_OPTIONS=&quot;--packages com.databricks:spark-csv_2.10:1.2.0&quot;# extra classpath. e.g. set classpath for hive-site.xmlexport ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/confFor Windows, ensure you have winutils.exe in %HADOOP_HO
ME%bin. Please see Problems running Hadoop on Windows for the details.2. Set master in Interpreter menuAfter start Zeppelin, go to Interpreter menu and edit master property in your Spark interpreter setting. The value may vary depending on your Spark cluster deployment type.For example,local[*] in local modespark://master:7077 in standalone clusteryarn-client in Yarn client modemesos://host:5050 in Mesos clusterThat&#39;s it. Zeppelin will work with any version of Spark and any deployment type without rebuilding Zeppelin in this way. For the further information about Spark &amp; Zeppelin version compatibility, please refer to &quot;Available Interpreters&quot; section in Zeppelin download page.Note that without exporting SPARK_HOME, it&#39;s running in local mode with included version of Spark. The included version may vary depending on the build profile.SparkContext, SQLContext, SparkSession, ZeppelinContextSparkContext, SQLContext and ZeppelinContext are automa
tically created and exposed as variable names sc, sqlContext and z, respectively, in Scala, Python and R environments.Staring from 0.6.1 SparkSession is available as variable spark when you are using Spark 2.x.Note that Scala/Python/R environment shares the same SparkContext, SQLContext and ZeppelinContext instance. Dependency ManagementThere are two ways to load external libraries in Spark interpreter. First is using interpreter setting menu and second is loading Spark properties.1. Setting Dependencies via Interpreter SettingPlease see Dependency Management for the details.2. Loading Spark PropertiesOnce SPARK_HOME is set in conf/zeppelin-env.sh, Zeppelin uses spark-submit as spark interpreter runner. spark-submit supports two ways to load configurations. The first is command line options such as --master and Zeppelin can pass these options to spark-submit by exporting SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh. Second is reading configuration options from SPARK_HOME/conf/spark-
defaults.conf. Spark properties that user can set to distribute libraries are: spark-defaults.conf SPARK_SUBMIT_OPTIONS Description spark.jars --jars Comma-separated list of local jars to include on the driver and executor classpaths. spark.jars.packages --packages Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. Will search the local maven repo, then maven central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version. spark.files --files Comma-separated list of files to be placed in the working directory of each executor. Here are few examples:SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.shexport SPARK_SUBMIT_OPTIONS=&quot;--packages com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar --files /path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg&quot;SPARK_HOME/conf/spark-default
s.confspark.jars /path/mylib1.jar,/path/mylib2.jarspark.jars.packages com.databricks:spark-csv_2.10:1.2.0spark.files /path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip3. Dynamic Dependency Loading via %spark.dep interpreterNote: %spark.dep interpreter loads libraries to %spark and %spark.pyspark but not to %spark.sql interpreter. So we recommend you to use the first option instead.When your code requires external library, instead of doing download/copy/restart Zeppelin, you can easily do following jobs using %spark.dep interpreter.Load libraries recursively from maven repositoryLoad libraries from local filesystemAdd additional maven repositoryAutomatically add libraries to SparkCluster (You can turn off)Dep interpreter leverages Scala environment. So you can write any Scala code here.Note that %spark.dep interpreter should be used before %spark, %spark.pyspark, %spark.sql.Here&#39;s usages.%spark.depz.reset() // clean up previously added artifact and repository//
add maven repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;)// add maven snapshot repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;).snapshot()// add credentials for private maven repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;).username(&quot;username&quot;).password(&quot;password&quot;)// add artifact from filesystemz.load(&quot;/path/to.jar&quot;)// add artifact from maven repository, with no dependencyz.load(&quot;groupId:artifactId:version&quot;).excludeAll()// add artifact recursivelyz.load(&quot;groupId:artifactId:version&quot;)// add artifact recursively except comma separated GroupID:ArtifactId listz.load(&quot;groupId:artifactId:version&quot;).exclude(&quot;groupId:artifactId,groupId:artifactId, ...&quot;)// exclude with patternz.load(&quot;groupId:artifactId:version&quot;).exclude(*)z.load(&quot;groupId:art
ifactId:version&quot;).exclude(&quot;groupId:artifactId:*&quot;)z.load(&quot;groupId:artifactId:version&quot;).exclude(&quot;groupId:*&quot;)// local() skips adding artifact to spark clusters (skipping sc.addJar())z.load(&quot;groupId:artifactId:version&quot;).local()ZeppelinContextZeppelin automatically injects ZeppelinContext as variable z in your Scala/Python environment. ZeppelinContext provides some additional functions and utilities.Object ExchangeZeppelinContext extends map and it&#39;s shared between Scala and Python environment.So you can put some objects from Scala and read it from Python, vice versa. // Put object from scala%sparkval myObject = ...z.put(&quot;objName&quot;, myObject)// Exchanging data framesmyScalaDataFrame = ...z.put(&quot;myScalaDataFrame&quot;, myScalaDataFrame)val myPythonDataFrame = z.get(&quot;myPythonDataFrame&quot;).asInstanceOf[DataFrame] # Get object from python%spark.pysparkmyO
bject = z.get(&quot;objName&quot;)# Exchanging data framesmyPythonDataFrame = ...z.put(&quot;myPythonDataFrame&quot;, postsDf._jdf)myScalaDataFrame = DataFrame(z.get(&quot;myScalaDataFrame&quot;), sqlContext) Form CreationZeppelinContext provides functions for creating forms.In Scala and Python environments, you can create forms programmatically. %spark/* Create text input form */z.input(&quot;formName&quot;)/* Create text input form with default value */z.input(&quot;formName&quot;, &quot;defaultValue&quot;)/* Create select form */z.select(&quot;formName&quot;, Seq((&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;)))/* Create select form with default value*/z.select(&quot;formName&quot;, &quot;option1&quot;, Seq((&quot;option1&quot;, &quot;option1DisplayName&quot;),
(&quot;option2&quot;, &quot;option2DisplayName&quot;))) %spark.pyspark# Create text input formz.input(&quot;formName&quot;)# Create text input form with default valuez.input(&quot;formName&quot;, &quot;defaultValue&quot;)# Create select formz.select(&quot;formName&quot;, [(&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;)])# Create select form with default valuez.select(&quot;formName&quot;, [(&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;)], &quot;option1&quot;) In sql environment, you can create form in simple template.%spark.sqlselect * from ${table=defaultTableName} where text like &#39;%${search}%&#39;To learn more about dynamic form, checkout Dynamic Form.Matplot
lib Integration (pyspark)Both the python and pyspark interpreters have built-in support for inline visualization using matplotlib, a popular plotting library for python. More details can be found in the python interpreter documentation, since matplotlib support is identical. More advanced interactive plotting can be done with pyspark through utilizing Zeppelin&#39;s built-in Angular Display System, as shown below:Interpreter setting optionYou can choose one of shared, scoped and isolated options wheh you configure Spark interpreter. Spark interpreter creates separated Scala compiler per each notebook but share a single SparkContext in scoped mode (experimental). It creates separated SparkContext per each notebook in isolated mode.Setting up Zeppelin with KerberosLogical setup with Zeppelin, Kerberos Key Distribution Center (KDC), and Spark on YARN:Configuration SetupOn the server that Zeppelin is installed, install Kerberos client modules and configuration, krb5.conf.This is to
make the server communicate with KDC.Set SPARK_HOME in [ZEPPELIN_HOME]/conf/zeppelin-env.sh to use spark-submit(Additionally, you might have to set export HADOOP_CONF_DIR=/etc/hadoop/conf)Add the two properties below to Spark configuration ([SPARK_HOME]/conf/spark-defaults.conf):spark.yarn.principalspark.yarn.keytabNOTE: If you do not have permission to access for the above spark-defaults.conf file, optionally, you can add the above lines to the Spark Interpreter setting through the Interpreter tab in the Zeppelin UI.That&#39;s it. Play with Zeppelin!",
+ "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Spark Interpreter for Apache ZeppelinOverviewApache Spark is a fast and general-purpose cluster computing system.It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs.Apache Spark is supported in Zeppelin with Spark interpreter group which consists of below five interpreters. Name Class Description %spark SparkInterpreter Creates a SparkConte
xt and provides a Scala environment %spark.pyspark PySparkInterpreter Provides a Python environment %spark.r SparkRInterpreter Provides an R environment with SparkR support %spark.sql SparkSQLInterpreter Provides a SQL environment %spark.dep DepInterpreter Dependency loader ConfigurationThe Spark interpreter can be configured with properties provided by Zeppelin.You can also set other Spark properties which are not listed in the table. For a list of additional properties, refer to Spark Available Properties. Property Default Description args Spark commandline args master local[*] Spark master uri. ex) spark://masterhost:7077 spark.app.name Zeppelin The name of spark application. spark.cores.max Total number of cores to use. Empty value uses all available core. spark.executor.memory 1g Executor memory per worker instance. ex) 512m, 32g zeppelin.dep
.additionalRemoteRepository spark-packages, http://dl.bintray.com/spark-packages/maven, false; A list of id,remote-repository-URL,is-snapshot; for each remote repository. zeppelin.dep.localrepo local-repo Local repository for dependency loader PYSPARKPYTHON python Python binary executable to use for PySpark in both driver and workers (default is python). Property spark.pyspark.python take precedence if it is set PYSPARKDRIVERPYTHON python Python binary executable to use for PySpark in driver only (default is PYSPARKPYTHON). Property spark.pyspark.driver.python take precedence if it is set zeppelin.spark.concurrentSQL false Execute multiple SQL concurrently if set true. zeppelin.spark.maxResult 1000 Max number of Spark SQL result to display. zeppelin.spark.printREPLOutput true Print REPL output zeppelin.spark.useHiveContext true Use HiveContext instead of SQLConte
xt if it is true. zeppelin.spark.importImplicit true Import implicits, UDF collection, and sql if set true. zeppelin.spark.enableSupportedVersionCheck true Do not change - developer only setting, not for production use Without any configuration, the Spark interpreter works out of the box in local mode. But if you want to connect to your Spark cluster, you&#39;ll need to follow the two simple steps below.1. Export SPARK_HOMEIn conf/zeppelin-env.sh, export the SPARK_HOME environment variable with your Spark installation path.For example,export SPARK_HOME=/usr/lib/sparkYou can optionally set more environment variables# set hadoop conf direxport HADOOP_CONF_DIR=/usr/lib/hadoop# set options to pass to the spark-submit commandexport SPARK_SUBMIT_OPTIONS=&quot;--packages com.databricks:spark-csv_2.10:1.2.0&quot;# extra classpath. e.g. set classpath for hive-site.xmlexport ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/confFor Windows, ensure you have winutils.exe in %HADOOP_HOME%\bin. Please see Problems running Hadoop on Windows for the details.2. Set master in Interpreter menuAfter starting Zeppelin, go to the Interpreter menu and edit the master property in your Spark interpreter setting. The value may vary depending on your Spark cluster deployment type.For example,local[*] in local modespark://master:7077 in standalone clusteryarn-client in Yarn client modemesos://host:5050 in Mesos clusterThat&#39;s it. This way, Zeppelin will work with any version of Spark and any deployment type without rebuilding Zeppelin. For further information about Spark &amp; Zeppelin version compatibility, please refer to the &quot;Available Interpreters&quot; section on the Zeppelin download page.Note that without exporting SPARK_HOME, Zeppelin runs in local mode with the included version of Spark. The included version may vary depending on the build profile.SparkContext, SQLContext, SparkSession, ZeppelinContextSparkContext, SQLContext and ZeppelinContext are automa
tically created and exposed as the variables sc, sqlContext and z in the Scala, Python and R environments.Starting from 0.6.1, SparkSession is available as the variable spark when you are using Spark 2.x.Note that the Scala, Python and R environments share the same SparkContext, SQLContext and ZeppelinContext instances. Dependency ManagementThere are two ways to load external libraries in the Spark interpreter. The first is using the interpreter setting menu and the second is loading Spark properties.1. Setting Dependencies via Interpreter SettingPlease see Dependency Management for the details.2. Loading Spark PropertiesOnce SPARK_HOME is set in conf/zeppelin-env.sh, Zeppelin uses spark-submit as the spark interpreter runner. spark-submit supports two ways to load configurations. The first is command line options such as --master; Zeppelin can pass these options to spark-submit by exporting SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh. The second is reading configuration options from SPARK_HOME/conf/spark-
defaults.conf. Spark properties that users can set to distribute libraries are: spark-defaults.conf SPARK_SUBMIT_OPTIONS Description spark.jars --jars Comma-separated list of local jars to include on the driver and executor classpaths. spark.jars.packages --packages Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. Will search the local maven repo, then maven central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version. spark.files --files Comma-separated list of files to be placed in the working directory of each executor. Here are a few examples:SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.shexport SPARK_SUBMIT_OPTIONS=&quot;--packages com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar --files /path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg&quot;SPARK_HOME/conf/spark-default
s.confspark.jars /path/mylib1.jar,/path/mylib2.jarspark.jars.packages com.databricks:spark-csv_2.10:1.2.0spark.files /path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip3. Dynamic Dependency Loading via %spark.dep interpreterNote: the %spark.dep interpreter loads libraries into %spark and %spark.pyspark but not into the %spark.sql interpreter, so we recommend using the first option instead.When your code requires an external library, instead of downloading, copying and restarting Zeppelin, you can easily do the following using the %spark.dep interpreter.Load libraries recursively from a maven repositoryLoad libraries from the local filesystemAdd an additional maven repositoryAutomatically add libraries to the Spark cluster (you can turn this off)The dep interpreter leverages the Scala environment, so you can write any Scala code here.Note that the %spark.dep interpreter should be used before %spark, %spark.pyspark and %spark.sql.Here are the usages.%spark.depz.reset() // clean up previously added artifacts and repositories//
add maven repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;)// add maven snapshot repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;).snapshot()// add credentials for private maven repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;).username(&quot;username&quot;).password(&quot;password&quot;)// add artifact from filesystemz.load(&quot;/path/to.jar&quot;)// add artifact from maven repository, with no dependencyz.load(&quot;groupId:artifactId:version&quot;).excludeAll()// add artifact recursivelyz.load(&quot;groupId:artifactId:version&quot;)// add artifact recursively except comma separated GroupID:ArtifactId listz.load(&quot;groupId:artifactId:version&quot;).exclude(&quot;groupId:artifactId,groupId:artifactId, ...&quot;)// exclude with patternz.load(&quot;groupId:artifactId:version&quot;).exclude(*)z.load(&quot;groupId:art
ifactId:version&quot;).exclude(&quot;groupId:artifactId:*&quot;)z.load(&quot;groupId:artifactId:version&quot;).exclude(&quot;groupId:*&quot;)// local() skips adding artifact to spark clusters (skipping sc.addJar())z.load(&quot;groupId:artifactId:version&quot;).local()ZeppelinContextZeppelin automatically injects ZeppelinContext as the variable z in your Scala/Python environment. ZeppelinContext provides some additional functions and utilities.Exploring Spark DataFramesZeppelinContext provides a show method, which, using Zeppelin&#39;s table feature, can be used to nicely display a Spark DataFrame:df = spark.read.csv(&#39;/path/to/csv&#39;)z.show(df)Object ExchangeZeppelinContext extends a map and is shared between the Scala and Python environments, so you can put objects in from Scala and read them from Python, and vice versa. // Put object from scala%sparkval myObject = ...z.put(&quot;objName&quot;, myObject)// Exchanging data fram
esmyScalaDataFrame = ...z.put(&quot;myScalaDataFrame&quot;, myScalaDataFrame)val myPythonDataFrame = z.get(&quot;myPythonDataFrame&quot;).asInstanceOf[DataFrame] # Get object from python%spark.pysparkmyObject = z.get(&quot;objName&quot;)# Exchanging data framesmyPythonDataFrame = ...z.put(&quot;myPythonDataFrame&quot;, postsDf._jdf)myScalaDataFrame = DataFrame(z.get(&quot;myScalaDataFrame&quot;), sqlContext) Form CreationZeppelinContext provides functions for creating forms.In Scala and Python environments, you can create forms programmatically. %spark/* Create text input form */z.input(&quot;formName&quot;)/* Create text input form with default value */z.input(&quot;formName&quot;, &quot;defaultValue&quot;)/* Create select form */z.select(&quot;formName&quot;, Seq((&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2Di
splayName&quot;)))/* Create select form with default value*/z.select(&quot;formName&quot;, &quot;option1&quot;, Seq((&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;))) %spark.pyspark# Create text input formz.input(&quot;formName&quot;)# Create text input form with default valuez.input(&quot;formName&quot;, &quot;defaultValue&quot;)# Create select formz.select(&quot;formName&quot;, [(&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;)])# Create select form with default valuez.select(&quot;formName&quot;, [(&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;)], &quot;option1&quo
t;) In the SQL environment, you can create a form using a simple template.%spark.sqlselect * from ${table=defaultTableName} where text like &#39;%${search}%&#39;To learn more about dynamic forms, check out Dynamic Form.Matplotlib Integration (pyspark)Both the python and pyspark interpreters have built-in support for inline visualization using matplotlib, a popular plotting library for python. More details can be found in the python interpreter documentation, since matplotlib support is identical. More advanced interactive plotting can be done with pyspark by utilizing Zeppelin&#39;s built-in Angular Display System, as shown below:Interpreter setting optionYou can choose one of the shared, scoped and isolated options when you configure the Spark interpreter. The Spark interpreter creates a separate Scala compiler per notebook but shares a single SparkContext in scoped mode (experimental). It creates a separate SparkContext per notebook in isolated mode.Setting up Zeppelin with Kerber
osLogical setup with Zeppelin, Kerberos Key Distribution Center (KDC), and Spark on YARN:Configuration SetupOn the server where Zeppelin is installed, install the Kerberos client modules and configuration, krb5.conf. This is to make the server communicate with the KDC.Set SPARK_HOME in [ZEPPELIN_HOME]/conf/zeppelin-env.sh to use spark-submit (additionally, you might have to set export HADOOP_CONF_DIR=/etc/hadoop/conf)Add the two properties below to the Spark configuration ([SPARK_HOME]/conf/spark-defaults.conf):spark.yarn.principalspark.yarn.keytabNOTE: If you do not have permission to access the above spark-defaults.conf file, you can optionally add the above lines to the Spark Interpreter setting through the Interpreter tab in the Zeppelin UI.That&#39;s it. Play with Zeppelin!",
"url": " /interpreter/spark.html",
"group": "interpreter",
"excerpt": "Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs."
@@ -627,7 +627,7 @@
"/setup/security/shiro_authentication.html": {
"title": "Apache Shiro Authentication for Apache Zeppelin",
- "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->{% include JB/setup %}# Apache Shiro authentication for Apache Zeppelin## Overview[Apache Shiro](http://shiro.apache.org/) is a powerful and easy-to-use Java security framework that performs authentication, authorization, cryptography, and session management. In this documentation, we will explain step by step how Shiro works for Zeppelin notebook authentication.When you connect to Apache Zeppelin, you will be asked to enter your c
redentials. Once you logged in, then you have access to all notes including other user's notes.## Security SetupYou can setup **Zeppelin notebook authentication** in some simple steps.### 1. Enable ShiroBy default in `conf`, you will find `shiro.ini.template`, this file is used as an example and it is strongly recommendedto create a `shiro.ini` file by doing the following command line```bashcp conf/shiro.ini.template conf/shiro.ini```For the further information about `shiro.ini` file format, please refer to [Shiro Configuration](http://shiro.apache.org/configuration.html#Configuration-INISections).### 2. Secure the Websocket channelSet to property **zeppelin.anonymous.allowed** to **false** in `conf/zeppelin-site.xml`. If you don't have this file yet, just copy `conf/zeppelin-site.xml.template` to `conf/zeppelin-site.xml`.### 3. Start Zeppelin```bin/zeppelin-daemon.sh start (or restart)```Then you can browse Zeppelin at [http://localhost:8080](http://localhost:8080).### 4.
LoginFinally, you can login using one of the below **username/password** combinations.```[users]admin = password1, adminuser1 = password2, role1, role2user2 = password3, role3user3 = password4, role2```You can set the roles for each users next to the password.## Groups and permissions (optional)In case you want to leverage user groups and permissions, use one of the following configuration for LDAP or AD under `[main]` segment in `shiro.ini`.```activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealmactiveDirectoryRealm.systemUsername = userNameAactiveDirectoryRealm.systemPassword = passwordAactiveDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COMactiveDirectoryRealm.url = ldap://ldap.test.com:389activeDirectoryRealm.groupRolesMap = "CN=aGroupName,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"group1"activeDirectoryRealm.authorizationCachingEnabled = falseactiveDirectoryRealm.principalSuffix = @corp.company.netldapRealm = org.apac
he.zeppelin.server.LdapGroupRealm# search base for ldap groups (only relevant for LdapGroupRealm):ldapRealm.contextFactory.environment[ldap.searchBase] = dc=COMPANY,dc=COMldapRealm.contextFactory.url = ldap://ldap.test.com:389ldapRealm.userDnTemplate = uid={0},ou=Users,dc=COMPANY,dc=COMldapRealm.contextFactory.authenticationMechanism = simple```also define roles/groups that you want to have in system, like below;```[roles]admin = *hr = *finance = *group1 = *```## Configure Realm (optional)Realms are responsible for authentication and authorization in Apache Zeppelin. By default, Apache Zeppelin uses [IniRealm](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/text/IniRealm.html) (users and groups are configurable in `conf/shiro.ini` file under `[user]` and `[group]` section). You can also leverage Shiro Realms like [JndiLdapRealm](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/ldap/JndiLdapRealm.html), [JdbcRealm](https://shiro.apache.org/s
tatic/latest/apidocs/org/apache/shiro/realm/jdbc/JdbcRealm.html) or create [our own](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/AuthorizingRealm.html).To learn more about Apache Shiro Realm, please check [this documentation](http://shiro.apache.org/realm.html).We also provide community custom Realms.### Active Directory```activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealmactiveDirectoryRealm.systemUsername = userNameAactiveDirectoryRealm.systemPassword = passwordAactiveDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/conf/zeppelin.jceksactiveDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COMactiveDirectoryRealm.url = ldap://ldap.test.com:389activeDirectoryRealm.groupRolesMap = "CN=aGroupName,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"group1"activeDirectoryRealm.authorizationCachingEnabled = falseactiveDirectoryRealm.principalSuffix = @corp.company.net```Also instead of s
pecifying systemPassword in clear text in shiro.ini administrator can choose to specify the same in "hadoop credential".Create a keystore file using the hadoop credential commandline, for this the hadoop commons should be in the classpath`hadoop credential create activeDirectoryRealm.systempassword -provider jceks://file/user/zeppelin/conf/zeppelin.jceks`Change the following values in the Shiro.ini file, and uncomment the line:`activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/conf/zeppelin.jceks`### LDAPTwo options exist for configuring an LDAP Realm. The simpler to use is the LdapGroupRealm. How ever it has limitedflexibility with mapping of ldap groups to users and for authorization for user groups. A sample configuration file forthis realm is given below.```ldapRealm = org.apache.zeppelin.realm.LdapGroupRealm# search base for ldap groups (only relevant for LdapGroupRealm):ldapRealm.contextFactory.environment[ldap.searchBase] = dc=COMPANY,dc
=COMldapRealm.contextFactory.url = ldap://ldap.test.com:389ldapRealm.userDnTemplate = uid={0},ou=Users,dc=COMPANY,dc=COMldapRealm.contextFactory.authenticationMechanism = simple```The other more flexible option is to use the LdapRealm. It allows for mapping of ldapgroups to roles and also allows for role/group based authentication into the zeppelin server. Sample configuration for this realm is given below. ```[main]ldapRealm=org.apache.zeppelin.realm.LdapRealmldapRealm.contextFactory.authenticationMechanism=simpleldapRealm.contextFactory.url=ldap://localhost:33389ldapRealm.userDnTemplate=uid={0},ou=people,dc=hadoop,dc=apache,dc=org# Ability to set ldap paging Size if needed default is 100ldapRealm.pagingSize = 200ldapRealm.authorizationEnabled=trueldapRealm.contextFactory.systemAuthenticationMechanism=simpleldapRealm.searchBase=dc=hadoop,dc=apache,dc=orgldapRealm.userSearchBase = dc=hadoop,dc=apache,dc=orgldapRealm.groupSearchBase = ou=groups,dc=hadoop,dc=apache,dc=orgldapRealm.gro
upObjectClass=groupofnames# Allow userSearchAttribute to be customizedldapRealm.userSearchAttributeName = sAMAccountNameldapRealm.memberAttribute=member# force usernames returned from ldap to lowercase useful for ADldapRealm.userLowerCase = true# ability set searchScopes subtree (default), one, baseldapRealm.userSearchScope = subtree;ldapRealm.groupSearchScope = subtree;ldapRealm.memberAttributeValueTemplate=cn={0},ou=people,dc=hadoop,dc=apache,dc=orgldapRealm.contextFactory.systemUsername=uid=guest,ou=people,dc=hadoop,dc=apache,dc=orgldapRealm.contextFactory.systemPassword=S{ALIAS=ldcSystemPassword}# enable support for nested groups using the LDAP_MATCHING_RULE_IN_CHAIN operatorldapRealm.groupSearchEnableMatchingRuleInChain = true# optional mapping from physical groups to logical application rolesldapRealm.rolesByGroup = LDN_USERS: user_role, NYK_USERS: user_role, HKG_USERS: user_role, GLOBAL_ADMIN: admin_role# optional list of roles that are allowed to authenticate. Incase not pre
sent all groups are allowed to authenticate (login).# This changes nothing for url specific permissions that will continue to work as specified in [urls].ldapRealm.allowedRolesForAuthentication = admin_role,user_roleldapRealm.permissionsByRole= user_role = *:ToDoItemsJdo:*:*, *:ToDoItem:*:*; admin_role = *securityManager.sessionManager = $sessionManagersecurityManager.realms = $ldapRealm ```### PAM[PAM](https://en.wikipedia.org/wiki/Pluggable_authentication_module) authentication support allows the reuse of existing authenticationmoduls on the host where Zeppelin is running. On a typical system modules are configured per service for example sshd, passwd, etc. under `/etc/pam.d/`. You caneither reuse one of these services or create your own for Zeppelin. Activiting PAM authentication requires two parameters: 1. realm: The Shiro realm being used 2. service: The service configured under `/etc/pam.d/` to be used. The name here needs to be the same as the file name under `/etc/pam.d/````
[main] pamRealm=org.apache.zeppelin.realm.PamRealm pamRealm.service=sshd```### ZeppelinHub[ZeppelinHub](https://www.zeppelinhub.com) is a service that synchronize your Apache Zeppelin notebooks and enables you to collaborate easily.To enable login with your ZeppelinHub credential, apply the following change in `conf/shiro.ini` under `[main]` section.```### A sample for configuring ZeppelinHub RealmzeppelinHubRealm = org.apache.zeppelin.realm.ZeppelinHubRealm## Url of ZeppelinHubzeppelinHubRealm.zeppelinhubUrl = https://www.zeppelinhub.comsecurityManager.realms = $zeppelinHubRealm```> Note: ZeppelinHub is not releated to Apache Zeppelin project.## Secure your Zeppelin information (optional)By default, anyone who defined in `[users]` can share **Interpreter Setting**, **Credential** and **Configuration** information in Apache Zeppelin.Sometimes you might want to hide these information for your use case.Since Shiro provides **url-based security**, you can hide the information by com
menting or uncommenting these below lines in `conf/shiro.ini`.```[urls]/api/interpreter/** = authc, roles[admin]/api/configurations/** = authc, roles[admin]/api/credential/** = authc, roles[admin]```In this case, only who have `admin` role can see **Interpreter Setting**, **Credential** and **Configuration** information.If you want to grant this permission to other users, you can change **roles[ ]** as you defined at `[users]` section.> **NOTE :** All of the above configurations are defined in the `conf/shiro.ini` file.## Other authentication methods- [HTTP Basic Authentication using NGINX](./authentication_nginx.html)",
+ "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->{% include JB/setup %}# Apache Shiro authentication for Apache Zeppelin## Overview[Apache Shiro](http://shiro.apache.org/) is a powerful and easy-to-use Java security framework that performs authentication, authorization, cryptography, and session management. In this documentation, we will explain step by step how Shiro works for Zeppelin notebook authentication.When you connect to Apache Zeppelin, you will be asked to enter your c
redentials. Once you have logged in, you have access to all notes including other users' notes.## Security SetupYou can set up **Zeppelin notebook authentication** in a few simple steps.### 1. Enable ShiroBy default in `conf`, you will find `shiro.ini.template`; this file is used as an example and it is strongly recommended to create a `shiro.ini` file with the following command line```bashcp conf/shiro.ini.template conf/shiro.ini```For further information about the `shiro.ini` file format, please refer to [Shiro Configuration](http://shiro.apache.org/configuration.html#Configuration-INISections).### 2. Secure the Websocket channelSet the property **zeppelin.anonymous.allowed** to **false** in `conf/zeppelin-site.xml`. If you don't have this file yet, just copy `conf/zeppelin-site.xml.template` to `conf/zeppelin-site.xml`.### 3. Start Zeppelin```bin/zeppelin-daemon.sh start (or restart)```Then you can browse Zeppelin at [http://localhost:8080](http://localhost:8080).### 4.
LoginFinally, you can log in using one of the **username/password** combinations below.```[users]admin = password1, adminuser1 = password2, role1, role2user2 = password3, role3user3 = password4, role2```You can set the roles for each user next to the password.## Groups and permissions (optional)In case you want to leverage user groups and permissions, use one of the following configurations for LDAP or AD under the `[main]` segment in `shiro.ini`.```activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealmactiveDirectoryRealm.systemUsername = userNameAactiveDirectoryRealm.systemPassword = passwordAactiveDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COMactiveDirectoryRealm.url = ldap://ldap.test.com:389activeDirectoryRealm.groupRolesMap = "CN=aGroupName,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"group1"activeDirectoryRealm.authorizationCachingEnabled = falseactiveDirectoryRealm.principalSuffix = @corp.company.netldapRealm = org.apac
he.zeppelin.server.LdapGroupRealm# search base for ldap groups (only relevant for LdapGroupRealm):ldapRealm.contextFactory.environment[ldap.searchBase] = dc=COMPANY,dc=COMldapRealm.contextFactory.url = ldap://ldap.test.com:389ldapRealm.userDnTemplate = uid={0},ou=Users,dc=COMPANY,dc=COMldapRealm.contextFactory.authenticationMechanism = simple```Also define the roles/groups that you want to have in the system, like below:```[roles]admin = *hr = *finance = *group1 = *```## Configure Realm (optional)Realms are responsible for authentication and authorization in Apache Zeppelin. By default, Apache Zeppelin uses [IniRealm](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/text/IniRealm.html) (users and groups are configurable in the `conf/shiro.ini` file under the `[user]` and `[group]` sections). You can also leverage Shiro Realms like [JndiLdapRealm](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/ldap/JndiLdapRealm.html), [JdbcRealm](https://shiro.apache.org/s
tatic/latest/apidocs/org/apache/shiro/realm/jdbc/JdbcRealm.html) or create [your own](https://shiro.apache.org/static/latest/apidocs/org/apache/shiro/realm/AuthorizingRealm.html).To learn more about Apache Shiro Realms, please check [this documentation](http://shiro.apache.org/realm.html).We also provide community custom Realms.### Active Directory```activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealmactiveDirectoryRealm.systemUsername = userNameAactiveDirectoryRealm.systemPassword = passwordAactiveDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/conf/zeppelin.jceksactiveDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COMactiveDirectoryRealm.url = ldap://ldap.test.com:389activeDirectoryRealm.groupRolesMap = "CN=aGroupName,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"group1"activeDirectoryRealm.authorizationCachingEnabled = falseactiveDirectoryRealm.principalSuffix = @corp.company.net```Also, instead of s
pecifying systemPassword in clear text in shiro.ini, the administrator can choose to specify it in a "hadoop credential".Create a keystore file using the hadoop credential command line; for this, the hadoop commons should be on the classpath`hadoop credential create activeDirectoryRealm.systempassword -provider jceks://file/user/zeppelin/conf/zeppelin.jceks`Change the following values in the shiro.ini file, and uncomment the line:`activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/conf/zeppelin.jceks`### LDAPTwo options exist for configuring an LDAP Realm. The simpler one to use is the LdapGroupRealm. However, it has limited flexibility with mapping of ldap groups to users and for authorization for user groups. A sample configuration file for this realm is given below.```ldapRealm = org.apache.zeppelin.realm.LdapGroupRealm# search base for ldap groups (only relevant for LdapGroupRealm):ldapRealm.contextFactory.environment[ldap.searchBase] = dc=COMPANY,dc
=COMldapRealm.contextFactory.url = ldap://ldap.test.com:389ldapRealm.userDnTemplate = uid={0},ou=Users,dc=COMPANY,dc=COMldapRealm.contextFactory.authenticationMechanism = simple```The other, more flexible option is to use the LdapRealm. It allows for mapping of ldap groups to roles and also allows for role/group-based authentication into the Zeppelin server. A sample configuration for this realm is given below.```[main]ldapRealm=org.apache.zeppelin.realm.LdapRealmldapRealm.contextFactory.authenticationMechanism=simpleldapRealm.contextFactory.url=ldap://localhost:33389ldapRealm.userDnTemplate=uid={0},ou=people,dc=hadoop,dc=apache,dc=org# Ability to set ldap paging size if needed; default is 100ldapRealm.pagingSize = 200ldapRealm.authorizationEnabled=trueldapRealm.contextFactory.systemAuthenticationMechanism=simpleldapRealm.searchBase=dc=hadoop,dc=apache,dc=orgldapRealm.userSearchBase = dc=hadoop,dc=apache,dc=orgldapRealm.groupSearchBase = ou=groups,dc=hadoop,dc=apache,dc=orgldapRealm.grou
pObjectClass=groupofnames# Allow userSearchAttribute to be customizedldapRealm.userSearchAttributeName = sAMAccountNameldapRealm.memberAttribute=member# force usernames returned from ldap to lowercase, useful for ADldapRealm.userLowerCase = true# ability to set searchScopes: subtree (default), one, baseldapRealm.userSearchScope = subtree;ldapRealm.groupSearchScope = subtree;ldapRealm.memberAttributeValueTemplate=cn={0},ou=people,dc=hadoop,dc=apache,dc=orgldapRealm.contextFactory.systemUsername=uid=guest,ou=people,dc=hadoop,dc=apache,dc=orgldapRealm.contextFactory.systemPassword=S{ALIAS=ldcSystemPassword}# enable support for nested groups using the LDAP_MATCHING_RULE_IN_CHAIN operatorldapRealm.groupSearchEnableMatchingRuleInChain = true# optional mapping from physical groups to logical application rolesldapRealm.rolesByGroup = LDN_USERS: user_role, NYK_USERS: user_role, HKG_USERS: user_role, GLOBAL_ADMIN: admin_role# optional list of roles that are allowed to authenticate. In case not pres
ent, all groups are allowed to authenticate (login).# This changes nothing for url-specific permissions that will continue to work as specified in [urls].ldapRealm.allowedRolesForAuthentication = admin_role,user_roleldapRealm.permissionsByRole= user_role = *:ToDoItemsJdo:*:*, *:ToDoItem:*:*; admin_role = *securityManager.sessionManager = $sessionManagersecurityManager.realms = $ldapRealm```### PAM[PAM](https://en.wikipedia.org/wiki/Pluggable_authentication_module) authentication support allows the reuse of existing authentication modules on the host where Zeppelin is running. On a typical system, modules are configured per service, for example sshd, passwd, etc., under `/etc/pam.d/`. You can either reuse one of these services or create your own for Zeppelin. Activating PAM authentication requires two parameters: 1. realm: The Shiro realm being used 2. service: The service configured under `/etc/pam.d/` to be used. The name here needs to be the same as the file name under `/etc/pam.d/````[m
ain] pamRealm=org.apache.zeppelin.realm.PamRealm pamRealm.service=sshd```### ZeppelinHub[ZeppelinHub](https://www.zeppelinhub.com) is a service that synchronizes your Apache Zeppelin notebooks and enables you to collaborate easily.To enable login with your ZeppelinHub credentials, apply the following change in `conf/shiro.ini` under the `[main]` section.```### A sample for configuring ZeppelinHub RealmzeppelinHubRealm = org.apache.zeppelin.realm.ZeppelinHubRealm## Url of ZeppelinHubzeppelinHubRealm.zeppelinhubUrl = https://www.zeppelinhub.comsecurityManager.realms = $zeppelinHubRealm```> Note: ZeppelinHub is not related to the Apache Zeppelin project.## Secure your Zeppelin information (optional)By default, anyone who is defined in `[users]` can share **Interpreter Setting**, **Credential** and **Configuration** information in Apache Zeppelin.Sometimes you might want to hide this information for your use case.Since Shiro provides **url-based security**, you can hide the information by comme
nting or uncommenting the lines below in `conf/shiro.ini`.```[urls]/api/interpreter/** = authc, roles[admin]/api/configurations/** = authc, roles[admin]/api/credential/** = authc, roles[admin]```In this case, only users who have the `admin` role can see the **Interpreter Setting**, **Credential** and **Configuration** information.If you want to grant this permission to other users, you can change **roles[ ]** as defined in the `[users]` section.> **NOTE :** All of the above configurations are defined in the `conf/shiro.ini` file.## Other authentication methods- [HTTP Basic Authentication using NGINX](./authentication_nginx.html)",
"url": " /setup/security/shiro_authentication.html",
"group": "setup/security",
"excerpt": "Apache Shiro is a powerful and easy-to-use Java security framework that performs authentication, authorization, cryptography, and session management. This document explains step by step how Shiro can be used for Zeppelin notebook authentication."
@@ -826,7 +826,7 @@
"/usage/rest_api/helium.html": {
"title": "Apache Zeppelin Helium REST API",
- "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->{% include JB/setup %}# Apache Zeppelin Helium REST API## OverviewApache Zeppelin provides several REST APIs for interaction and remote activation of zeppelin functionality.All REST APIs are available starting with the following endpoint `http://[zeppelin-server]:[zeppelin-port]/api`. Note that Apache Zeppelin REST APIs receive or return JSON objects, it is recommended for you to install some JSON viewers such as [JSONView](https:/
/chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc).If you work with Apache Zeppelin and find a need for an additional REST API, please [file an issue or send us an email](http://zeppelin.apache.org/community.html).## Helium REST API List### List of all available helium packages Description This ```GET``` method returns all the available helium packages in configured registries. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/all``` Success code 200 Fail code 500 Sample JSON response { "status": "OK", "message": "", "body": { "zeppelin.clock": [ { "registry": "local", "pkg": { "type": "APPLICATION", "name": "zeppelin.clock", "description": "Clock (examp
le)", "artifact": "zeppelin-examples/zeppelin-example-clock/target/zeppelin-example-clock-0.7.0-SNAPSHOT.jar", "className": "org.apache.zeppelin.example.app.clock.Clock", "resources": [ [ ":java.util.Date" ] ], "icon": "icon" }, "enabled": false } ], "zeppelin-bubblechart": [ { "registry": "local", "pkg": { "type": "VISUALIZATION", "name": "zeppelin-bubblechart", "description": "Animated bubble chart", "artifact": "./../helium/zeppelin-bubble", "icon": "icon" }, "enabled": true }, { "registry": "local", "pkg":
{ "type": "VISUALIZATION", "name": "zeppelin-bubblechart", "description": "Animated bubble chart", "artifact": "zeppelin-bubblechart@0.0.2", "icon": "icon" }, "enabled": false } ], "zeppelin_horizontalbar": [ { "registry": "local", "pkg": { "type": "VISUALIZATION", "name": "zeppelin_horizontalbar", "description": "Horizontal Bar chart (example)", "artifact": "./zeppelin-examples/zeppelin-example-horizontalbar", "icon": "icon" }, "enabled": true } ] }} ### Suggest Helium application Description This ```GET``` method returns suggested h
elium application for the paragraph. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/suggest/[Note ID]/[Paragraph ID]``` Success code 200 Fail code 404 on note or paragraph not exists 500 Sample JSON response { "status": "OK", "message": "", "body": { "available": [ { "registry": "local", "pkg": { "type": "APPLICATION", "name": "zeppelin.clock", "description": "Clock (example)", "artifact": "zeppelin-examples/zeppelin-example-clock/target/zeppelin-example-clock-0.7.0-SNAPSHOT.jar", "className": "org.apache.zeppelin.example.app.clock.Clock", "resources": [ [ ":java.util.
Date" ] ], "icon": "icon" }, "enabled": true } ] }} ### Load helium Application on a paragraph Description This ```GET``` method returns a helium Application id on success. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/load/[Note ID]/[Paragraph ID]``` Success code 200 Fail code 404 on note or paragraph not exists 500 for any other errors Sample JSON response { "status": "OK", "message": "", "body": "app_2C5FYRZ1E-20170108-040449_2068241472zeppelin_clock"} ### Load bundled visualization script Description This ```GET``` method returns bundled helium visualization javascript. When refresh=true (optional) is provided, Zeppelin rebuild bundle.
otherwise, provided from cache URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/visualizations/load[?refresh=true]``` Success code 200 reponse body is executable javascript Fail code 200 reponse body is error message string starts with ERROR: ### Enable package Description This ```POST``` method enables a helium package. Needs artifact name in input payload URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/enable/[Package Name]``` Success code 200 Fail code 500 Sample input zeppelin-examples/zeppelin-example-clock/target/zeppelin-example-clock-0.7.0-SNAPSHOT.jar Sample JSON response {"status":"OK"} ### Disable package Description This ```POST``` method disables a helium package. U
RL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/disable/[Package Name]``` Success code 200 Fail code 500 Sample JSON response {"status":"OK"} ### Get visualization display order Description This ```GET``` method returns display order of enabled visualization packages. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/visualizationOrder``` Success code 200 Fail code 500 Sample JSON response {"status":"OK","body":["zeppelin_horizontalbar","zeppelin-bubblechart"]} ### Set visualization display order Description This ```POST``` method sets visualization packages display order. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/visualizationOrder``` S
uccess code 200 Fail code 500 Sample JSON input ["zeppelin-bubblechart", "zeppelin_horizontalbar"] Sample JSON response {"status":"OK"} ",
+ "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->{% include JB/setup %}# Apache Zeppelin Helium REST API## OverviewApache Zeppelin provides several REST APIs for interaction and remote activation of Zeppelin functionality.All REST APIs are available starting with the following endpoint `http://[zeppelin-server]:[zeppelin-port]/api`. Since Apache Zeppelin REST APIs receive and return JSON objects, it is recommended that you install a JSON viewer such as [JSONView](https:/
/chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc).If you work with Apache Zeppelin and find a need for an additional REST API, please [file an issue or send us an email](http://zeppelin.apache.org/community.html).## Helium REST API List### Get all available helium packages Description This ```GET``` method returns all the available helium packages in configured registries. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/package``` Success code 200 Fail code 500 Sample JSON response { "status": "OK", "message": "", "body": { "zeppelin.clock": [ { "registry": "local", "pkg": { "type": "APPLICATION", "name": "zeppelin.clock", "description": "Clock (examp
le)", "artifact": "zeppelin-examples/zeppelin-example-clock/target/zeppelin-example-clock-0.7.0-SNAPSHOT.jar", "className": "org.apache.zeppelin.example.app.clock.Clock", "resources": [ [ ":java.util.Date" ] ], "icon": "icon" }, "enabled": false } ] }} ### Get all enabled helium packages Description This ```GET``` method returns all enabled helium packages in configured registries. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/enabledPackage``` Success code 200 Fail code 500 Sample JSON response { "status": "OK", "message": "", "body": { "zeppelin.clock": [ { "registry":
"local", "pkg": { "type": "APPLICATION", "name": "zeppelin.clock", "description": "Clock (example)", "artifact": "zeppelin-examples/zeppelin-example-clock/target/zeppelin-example-clock-0.7.0-SNAPSHOT.jar", "className": "org.apache.zeppelin.example.app.clock.Clock", "resources": [ [ ":java.util.Date" ] ], "icon": "icon" }, "enabled": false } ] }} ### Get single helium package Description This ```GET``` method returns information for the specified helium package. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/package/[Package Name]``` Success code 200 Fail code 500 Sample J
SON response { "status": "OK", "message": "", "body": { "zeppelin.clock": [ { "registry": "local", "pkg": { "type": "APPLICATION", "name": "zeppelin.clock", "description": "Clock (example)", "artifact": "zeppelin-examples/zeppelin-example-clock/target/zeppelin-example-clock-0.7.0-SNAPSHOT.jar", "className": "org.apache.zeppelin.example.app.clock.Clock", "resources": [ [ ":java.util.Date" ] ], "icon": "icon" }, "enabled": false } ] }} ### Suggest Helium package on a paragraph Description This ```GET``` method returns a suggested helium package for
 the paragraph. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/suggest/[Note ID]/[Paragraph ID]``` Success code 200 Fail code 404 when the note or paragraph does not exist 500 Sample JSON response { "status": "OK", "message": "", "body": { "available": [ { "registry": "local", "pkg": { "type": "APPLICATION", "name": "zeppelin.clock", "description": "Clock (example)", "artifact": "zeppelin-examples/zeppelin-example-clock/target/zeppelin-example-clock-0.7.0-SNAPSHOT.jar", "className": "org.apache.zeppelin.example.app.clock.Clock", "resources": [ [ ":java.util.Date"
 ] ], "icon": "icon" }, "enabled": true } ] }} ### Load Helium package on a paragraph Description This ```POST``` method loads a helium package into the target paragraph. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/load/[Note ID]/[Paragraph ID]``` Success code 200 Fail code 404 when the note or paragraph does not exist 500 Sample JSON response { "status": "OK", "message": "", "body": "app_2C5FYRZ1E-20170108-040449_2068241472zeppelin_clock"} ### Load bundled visualization script Description This ```GET``` method returns the bundled helium visualization JavaScript. When refresh=true (optional) is provided, Zeppelin rebuilds the bundle; otherwise, it is served from the cache
 URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/bundle/load/[Package Name][?refresh=true]``` Success code 200 response body is executable JavaScript Fail code 200 response body is an error message string starting with ERROR: ### Enable package Description This ```POST``` method enables a helium package. Requires the artifact name in the input payload. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/enable/[Package Name]``` Success code 200 Fail code 500 Sample input zeppelin-examples/zeppelin-example-clock/target/zeppelin-example-clock-0.7.0-SNAPSHOT.jar Sample JSON response {"status":"OK"} ### Disable package Description This ```POST``` method disables a helium package. URL ```http://[zeppelin-server]:
[zeppelin-port]/api/helium/disable/[Package Name]``` Success code 200 Fail code 500 Sample JSON response {"status":"OK"} ### Get visualization display order Description This ```GET``` method returns the display order of enabled visualization packages. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/order/visualization``` Success code 200 Fail code 500 Sample JSON response {"status":"OK","body":["zeppelin_horizontalbar","zeppelin-bubblechart"]} ### Set visualization display order Description This ```POST``` method sets the display order of visualization packages. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/order/visualization``` Success code 200
 Fail code 500 Sample JSON input ["zeppelin-bubblechart", "zeppelin_horizontalbar"] Sample JSON response {"status":"OK"} ### Get configuration for all Helium packages Description This ```GET``` method returns the configuration for all Helium packages. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/config``` Success code 200 Fail code 500 ### Get configuration for specific package Description This ```GET``` method returns the configuration for the specified package name and artifact. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/config/[Package Name]/[Artifact]``` Success code 200 Fail code 500 ### Set configuration for specific package Description Thi
s ```POST``` method updates the configuration for the specified package name and artifact. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/config/[Package Name]/[Artifact]``` Success code 200 Fail code 500 ### Get Spell configuration for single package Description This ```GET``` method returns the Spell configuration for the specified package. URL ```http://[zeppelin-server]:[zeppelin-port]/api/helium/spell/config/[Package Name]``` Success code 200 Fail code 500 ",
"url": " /usage/rest_api/helium.html",
"group": "usage/rest_api",
"excerpt": "This page contains Apache Zeppelin Helium REST API information."
@@ -837,7 +837,7 @@
"/usage/rest_api/interpreter.html": {
"title": "Apache Zeppelin Interpreter REST API",
[... 6 lines stripped ...]