Posted to commits@zeppelin.apache.org by mo...@apache.org on 2015/11/18 01:08:31 UTC

[1/4] incubator-zeppelin git commit: ZEPPELIN-412 Documentation based on Zeppelin version

Repository: incubator-zeppelin
Updated Branches:
  refs/heads/master 79a92c789 -> c2cbafd1d


http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/interpreter/ignite.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/ignite.md b/docs/interpreter/ignite.md
new file mode 100644
index 0000000..02fc587
--- /dev/null
+++ b/docs/interpreter/ignite.md
@@ -0,0 +1,116 @@
+---
+layout: page
+title: "Ignite Interpreter"
+description: "Ignite user guide"
+group: manual
+---
+{% include JB/setup %}
+
+## Ignite Interpreter for Apache Zeppelin
+
+### Overview
+[Apache Ignite](https://ignite.apache.org/) In-Memory Data Fabric is a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with traditional disk-based or flash technologies.
+
+![Apache Ignite](/assets/themes/zeppelin/img/docs-img/ignite-logo.png)
+
+You can use Zeppelin to retrieve distributed data from cache using Ignite SQL interpreter. Moreover, Ignite interpreter allows you to execute any Scala code in cases when SQL doesn't fit to your requirements. For example, you can populate data into your caches or execute distributed computations.
+
+### Installing and Running Ignite example
+In order to use Ignite interpreters, you may install Apache Ignite in some simple steps:
+
+  1. Download the Ignite [source release](https://ignite.apache.org/download.html#sources) or [binary release](https://ignite.apache.org/download.html#binaries). The Ignite version must match the one Zeppelin was built against; otherwise you can't run Scala code against Ignite from Zeppelin. You can find the Ignite version in the Zeppelin source release under `path/to/your-Zeppelin/ignite/pom.xml`; check the `ignite.version` property.<br>Currently, the Ignite interpreter ships only with the Zeppelin source release. If you download a Zeppelin binary release ( `zeppelin-0.5.0-incubating-bin-spark-xxx-hadoop-xx` ), the Ignite interpreter is not available. We are planning to include it in a future binary release.
+  
+  2. Examples are shipped as a separate Maven project. To get started, simply import the provided <dest_dir>/apache-ignite-fabric-1.2.0-incubating-bin/pom.xml file into your favourite IDE, such as Eclipse.
+
+   * In Eclipse: File -> Import -> Existing Maven Projects
+   * Point Eclipse at the examples directory and select the pom.xml.
+   * Then start `org.apache.ignite.examples.ExampleNodeStartup` (or any other example) to run at least one Ignite node. When you run example code, you may notice that the node count increases one by one.
+  
+  > **Tip. If you want to run Ignite examples from the command line instead of an IDE, you can export an executable Jar file from the IDE and run it with the following command.**
+      
+  ``` 
+  $ nohup java -jar </path/to/your Jar file name> 
+  ```
+    
+### Configuring Ignite Interpreter 
+At the "Interpreters" menu, you may edit Ignite interpreter or create new one. Zeppelin provides these properties for Ignite.
+
+ <table class="table-configuration">
+  <tr>
+      <th>Property Name</th>
+      <th>value</th>
+      <th>Description</th>
+  </tr>
+  <tr>
+      <td>ignite.addresses</td>
+      <td>127.0.0.1:47500..47509</td>
+      <td>Comma-separated list of Ignite cluster hosts. See [Ignite Cluster Configuration](https://apacheignite.readme.io/v1.2/docs/cluster-config) section for more details.</td>
+  </tr>
+  <tr>
+      <td>ignite.clientMode</td>
+      <td>true</td>
+      <td>You can connect to the Ignite cluster as client or server node. See [Ignite Clients vs. Servers](https://apacheignite.readme.io/v1.2/docs/clients-vs-servers) section for details. Use true or false values in order to connect in client or server mode respectively.</td>
+  </tr>
+  <tr>
+      <td>ignite.config.url</td>
+      <td></td>
+      <td>Configuration URL. Overrides all other settings.</td>
+   </tr>
+   <tr>
+      <td>ignite.jdbc.url</td>
+      <td>jdbc:ignite:cfg://default-ignite-jdbc.xml</td>
+      <td>Ignite JDBC connection URL.</td>
+   </tr>
+   <tr>
+      <td>ignite.peerClassLoadingEnabled</td>
+      <td>true</td>
+      <td>Enables peer-class-loading. See [Zero Deployment](https://apacheignite.readme.io/v1.2/docs/zero-deployment) section for details. Use true or false values in order to enable or disable P2P class loading respectively.</td>
+  </tr>
+ </table>
+
+![Configuration of Ignite Interpreter](/assets/themes/zeppelin/img/docs-img/ignite-interpreter-setting.png)
+
+### Interpreter Binding for Zeppelin Notebook
+After configuring Ignite interpreter, create your own notebook. Then you can bind interpreters like below image.
+
+![Binding Interpreters](/assets/themes/zeppelin/img/docs-img/ignite-interpreter-binding.png)
+
+For more interpreter binding information see [here](http://zeppelin.incubator.apache.org/docs/manual/interpreters.html).
+
+### How to use Ignite SQL interpreter
+To execute a SQL query, use the ` %ignite.ignitesql ` prefix. <br>
+Supposing you are running `org.apache.ignite.examples.streaming.wordcount.StreamWords`, you can query the "words" cache (you must specify this cache name in the `ignite.jdbc.url` property of the Ignite interpreter setting; a sketch follows the screenshot below).
+For example, you can select the top 10 words in the words cache using the following query:
+
+  ``` 
+  %ignite.ignitesql 
+  select _val, count(_val) as cnt from String group by _val order by cnt desc limit 10 
+  ``` 
+  
+  ![IgniteSql on Zeppelin](/assets/themes/zeppelin/img/docs-img/ignite-sql-example.png)
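+  As noted above, the cache name must appear in the `ignite.jdbc.url` property. A sketch of such a URL (the config file path is illustrative, and the exact parameter syntax should be verified against the Ignite JDBC driver documentation for your version):
+  
+  ```
+  jdbc:ignite:cfg://cache=words@file:///path/to/default-ignite-jdbc.xml
+  ```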
+  
+As long as your Ignite version and Zeppelin's Ignite version are the same, you can also use Scala code. Please check Zeppelin's Ignite version before you download your own Ignite.
+
+  ```
+  %ignite
+  import org.apache.ignite._
+  import org.apache.ignite.cache.affinity._
+  import org.apache.ignite.cache.query._
+  import org.apache.ignite.configuration._
+
+  import scala.collection.JavaConversions._
+
+  val cache: IgniteCache[AffinityUuid, String] = ignite.cache("words")
+
+  val qry = new SqlFieldsQuery("select avg(cnt), min(cnt), max(cnt) from (select count(_val) as cnt from String group by _val)", true)
+
+  val res = cache.query(qry).getAll()
+
+  collectionAsScalaIterable(res).foreach(println _)
+  ```
+  
+  ![Using Scala Code](/assets/themes/zeppelin/img/docs-img/ignite-scala-example.png)
+
+Apache Ignite also provides a guide for Zeppelin: ["Ignite with Apache Zeppelin"](https://apacheignite.readme.io/docs/data-analysis-with-apache-zeppelin).
+ 
+  

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/interpreter/lens.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/lens.md b/docs/interpreter/lens.md
new file mode 100644
index 0000000..903df7e
--- /dev/null
+++ b/docs/interpreter/lens.md
@@ -0,0 +1,173 @@
+---
+layout: page
+title: "Lens Interpreter"
+description: "Lens user guide"
+group: manual
+---
+{% include JB/setup %}
+
+## Lens Interpreter for Apache Zeppelin
+
+### Overview
+[Apache Lens](https://lens.apache.org/) provides a Unified Analytics interface. Lens aims to cut the Data Analytics silos by providing a single view of data across multiple tiered data stores and an optimal execution environment for the analytical query. It seamlessly integrates Hadoop with traditional data warehouses to appear like one.
+
+![Apache Lens](/assets/themes/zeppelin/img/docs-img/lens-logo.png)
+
+### Installing and Running Lens
+In order to use the Lens interpreter, you can install Apache Lens in a few simple steps:
+
+  1. Download the latest version of Lens from [the ASF](http://www.apache.org/dyn/closer.lua/lens/2.3-beta). Older releases can be found [in the Archives](http://archive.apache.org/dist/lens/).
+  2. Before running Lens, you have to set HIVE_HOME and HADOOP_HOME. For more information, please refer to [here](http://lens.apache.org/lenshome/install-and-run.html#Installation). Lens also provides a pseudo-distributed mode. The [Lens pseudo-distributed setup](http://lens.apache.org/lenshome/pseudo-distributed-setup.html) uses [docker](https://www.docker.com/); the Hive server and hadoop daemons run as separate processes in this setup.
+  3. Now you can start (or stop) the Lens server:
+  
+  ```
+    ./bin/lens-ctl start (or stop)
+  ```
+
+### Configuring Lens Interpreter
+At the "Interpreters" menu, you can to edit Lens interpreter or create new one. Zeppelin provides these properties for Lens.
+
+ <table class="table-configuration">
+  <tr>
+      <th>Property Name</th>
+      <th>value</th>
+      <th>Description</th>
+  </tr>
+  <tr>
+      <td>lens.client.dbname</td>
+      <td>default</td>
+      <td>The database schema name</td>
+  </tr>
+  <tr>
+      <td>lens.query.enable.persistent.resultset</td>
+      <td>false</td>
+      <td>Whether to enable a persistent result set for queries. When enabled, the server will fetch results from the driver, apply custom formatting if any, and store them in a configured location. The file name of the query output is the query handle id, with configured extensions</td>
+  </tr>
+  <tr>
+      <td>lens.server.base.url</td>
+      <td>http://hostname:port/lensapi</td>
+      <td>The base URL for the Lens server. Replace "hostname" and "port" with the values you use (e.g. http://0.0.0.0:9999/lensapi)</td>
+   </tr>
+   <tr>
+      <td>lens.session.cluster.user </td>
+      <td>default</td>
+      <td>Hadoop cluster username</td>
+  </tr>
+  <tr>
+      <td>zeppelin.lens.maxResult</td>
+      <td>1000</td>
+      <td>Max number of rows to display</td>
+  </tr>
+  <tr>
+      <td>zeppelin.lens.maxThreads</td>
+      <td>10</td>
+      <td>If concurrency is enabled, the number of threads to use</td>
+  </tr>
+  <tr>
+      <td>zeppelin.lens.run.concurrent</td>
+      <td>true</td>
+      <td>Run concurrent Lens Sessions</td>
+  </tr>
+  <tr>
+      <td>xxx</td>
+      <td>yyy</td>
+      <td>Any other property from [Configuring lens server](https://lens.apache.org/admin/config-server.html)</td>
+  </tr>
+ </table>
+
+![Apache Lens Interpreter Setting](/assets/themes/zeppelin/img/docs-img/lens-interpreter-setting.png)
+
+### Interpreter Binding for Zeppelin Notebook
+After configuring the Lens interpreter, create your own notebook. Then you can bind interpreters as in the image below.
+![Zeppelin Notebook Interpreter Binding](/assets/themes/zeppelin/img/docs-img/lens-interpreter-binding.png)
+
+For more interpreter binding information see [here](http://zeppelin.incubator.apache.org/docs/manual/interpreters.html).
+
+### How to use 
+You can analyze your data using [OLAP Cube](http://lens.apache.org/user/olap-cube.html) [QL](http://lens.apache.org/user/cli.html), a high-level SQL-like language for querying and describing data sets organized in data cubes.
+To get a feel for OLAP Cube, watch this [video tutorial](https://cwiki.apache.org/confluence/display/LENS/2015/07/13/20+Minute+video+demo+of+Apache+Lens+through+examples).
+As you can see in the video, it uses the Lens Client Shell (./bin/lens-cli.sh). All of these functions can also be used in Zeppelin via the Lens interpreter, as in the sketch below.
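+For example, any of these commands can be run in a Zeppelin paragraph prefixed with the Lens interpreter tag (shown here as `%lens`, assuming the default binding name):
+
+  ```
+  %lens
+  show databases
+  ```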
+
+<li> Create and Use(Switch) Databases.
+
+  ```
+  create database newDb
+  ```
+  
+  ```
+  use newDb
+  ```
+  
+<li> Create Storage.
+
+  ```
+  create storage your/path/to/lens/client/examples/resources/db-storage.xml
+  ```
+  
+<li> Create Dimensions; show their fields and join-chains.
+
+  ```
+  create dimension your/path/to/lens/client/examples/resources/customer.xml
+  ```
+  
+  ```
+  dimension show fields customer
+  ```
+  
+  ```
+  dimension show joinchains customer
+  ```
+  
+<li> Create Cubes; show their fields and join-chains.
+
+  ``` 
+  create cube your/path/to/lens/client/examples/resources/sales-cube.xml 
+  ```
+  
+  ```
+  cube show fields sales
+  ```
+  
+  ```
+  cube show joinchains sales
+  ```
+
+<li> Create Dimtables and Fact. 
+
+  ```
+  create dimtable your/path/to/lens/client/examples/resources/customer_table.xml
+  ```
+  
+  ```
+  create fact your/path/to/lens/client/examples/resources/sales-raw-fact.xml
+  ```
+
+<li> Add partitions to Dimtable and Fact.
+  
+  ```
+  dimtable add single-partition --dimtable_name customer_table --storage_name local --path your/path/to/lens/client/examples/resources/customer-local-part.xml
+  ```
+  
+  ```
+  fact add partitions --fact_name sales_raw_fact --storage_name local --path your/path/to/lens/client/examples/resources/sales-raw-local-parts.xml
+  ```
+
+<li> Now, you can run queries on cubes.
+ 
+  ```
+  query execute cube select customer_city_name, product_details.description, product_details.category, product_details.color, store_sales from sales where time_range_in(delivery_time, '2015-04-11-00', '2015-04-13-00')
+  ```
+  
+  
+  ![Lens Query Result](/assets/themes/zeppelin/img/docs-img/lens-result.png)
+
+These are just the examples that ship with Lens. If you want to explore the full Lens tutorial, see the [tutorial video](https://cwiki.apache.org/confluence/display/LENS/2015/07/13/20+Minute+video+demo+of+Apache+Lens+through+examples).
+
+### Lens UI Service 
+Lens also provides a web UI service. Once the server starts up, you can open http://serverhost:19999/index.html and browse. Here you can also inspect the structures you created and run queries easily.
+ 
+ ![Lens UI Service](/assets/themes/zeppelin/img/docs-img/lens-ui-service.png)
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/interpreter/postgresql.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/postgresql.md b/docs/interpreter/postgresql.md
new file mode 100644
index 0000000..9753cdc
--- /dev/null
+++ b/docs/interpreter/postgresql.md
@@ -0,0 +1,180 @@
+---
+layout: page
+title: "PostgreSQL and HAWQ Interpreter"
+description: ""
+group: manual
+---
+{% include JB/setup %}
+
+
+## PostgreSQL, HAWQ Interpreter for Apache Zeppelin
+
+<br/>
+<table class="table-configuration">
+  <tr>
+    <th>Name</th>
+    <th>Class</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>%psql.sql</td>
+    <td>PostgreSqlInterpreter</td>
+    <td>Provides SQL environment for Postgresql, HAWQ and Greenplum</td>
+  </tr>
+</table>
+
+<br/>
+[<img align="right" src="http://img.youtube.com/vi/wqXXQhJ5Uk8/0.jpg" alt="zeppelin-view" hspace="10" width="250"></img>](https://www.youtube.com/watch?v=wqXXQhJ5Uk8)
+
+This interpreter seamlessly supports the following SQL data processing engines:
+
+* [PostgreSQL](http://www.postgresql.org/) - OSS, Object-relational database management system (ORDBMS) 
+* [Apache HAWQ](http://pivotal.io/big-data/pivotal-hawq) - Powerful [Open Source](https://wiki.apache.org/incubator/HAWQProposal) SQL-On-Hadoop engine. 
+* [Greenplum](http://pivotal.io/big-data/pivotal-greenplum-database) - MPP database built on open source PostgreSQL.
+
+
+This [Video Tutorial](https://www.youtube.com/watch?v=wqXXQhJ5Uk8) illustrates some of the features provided by the `Postgresql Interpreter`.
+
+### Create Interpreter 
+
+By default Zeppelin creates one `PSQL` instance. You can remove it or create new instances. 
+
+Multiple PSQL instances can be created, each configured against the same or different backend databases. At any given time, though, a `Notebook` can have only one PSQL interpreter instance `bound`. That means you _cannot_ connect to different databases in the same `Notebook`. This is a known Zeppelin limitation.
+
+To create a new PSQL instance, open the `Interpreter` section and click the `+Create` button. Pick a `Name` of your choice and from the `Interpreter` drop-down select `psql`. Then follow the configuration instructions and `Save` the new instance.
+
+> Note: The `Name` of the instance is used only to distinguish the instances when binding them to the `Notebook`. The `Name` is irrelevant inside the `Notebook`. In the `Notebook` you must use the `%psql.sql` tag.
+
+### Bind to Notebook
+In the `Notebook`, click the `settings` icon in the top right corner. Then select/deselect the interpreters to be bound to the `Notebook`.
+
+### Configuration
+You can modify the configuration of PSQL from the `Interpreter` section. The PSQL interpreter exposes the following properties:
+
+ 
+ <table class="table-configuration">
+   <tr>
+     <th>Property Name</th>
+     <th>Description</th>
+     <th>Default Value</th>
+   </tr>
+   <tr>
+     <td>postgresql.url</td>
+     <td>JDBC URL to connect to </td>
+     <td>jdbc:postgresql://localhost:5432</td>
+   </tr>
+   <tr>
+     <td>postgresql.user</td>
+     <td>JDBC user name</td>
+     <td>gpadmin</td>
+   </tr>
+   <tr>
+     <td>postgresql.password</td>
+     <td>JDBC password</td>
+     <td></td>
+   </tr>
+   <tr>
+     <td>postgresql.driver.name</td>
+     <td>JDBC driver name. In this version the driver name is fixed and should not be changed</td>
+     <td>org.postgresql.Driver</td>
+   </tr>
+   <tr>
+     <td>postgresql.max.result</td>
+     <td>Max number of SQL result to display to prevent the browser overload</td>
+     <td>1000</td>
+   </tr>      
+ </table>
+ 
+ 
+### How to use
+```
+Tip: Use (CTRL + .) for SQL auto-completion.
+```
+#### DDL and SQL commands
+
+Start the paragraphs with the full `%psql.sql` prefix tag! The short notation `%psql` would still run the queries, but syntax highlighting and auto-completion will be disabled.
+
+You can use the standard CREATE / DROP / INSERT commands to create or modify the data model:
+
+```sql
+%psql.sql
+drop table if exists mytable;
+create table mytable (i int);
+insert into mytable select generate_series(1, 100);
+```
+
+Then in a separate paragraph run the query.
+
+```sql
+%psql.sql
+select * from mytable;
+```
+
+> Note: You can have multiple queries in the same paragraph but only the result from the first is displayed. [[1](https://issues.apache.org/jira/browse/ZEPPELIN-178)], [[2](https://issues.apache.org/jira/browse/ZEPPELIN-212)].
+
+For example, this will execute both queries, but only the count result will be displayed. If you reverse the order of the queries, the mytable content will be shown instead.
+
+```sql
+%psql.sql
+select count(*) from mytable;
+select * from mytable;
+```
+
+#### PSQL command line tools
+
+Use the Shell Interpreter (`%sh`) to access the command line [PSQL](http://www.postgresql.org/docs/9.4/static/app-psql.html) interactively:
+
+```bash
+%sh
+psql -h phd3.localdomain -U gpadmin -p 5432 <<EOF
+ \dn  
+ \q
+EOF
+```
+This will produce output like this:
+
+```
+        Name        |  Owner  
+--------------------+---------
+ hawq_toolkit       | gpadmin
+ information_schema | gpadmin
+ madlib             | gpadmin
+ pg_catalog         | gpadmin
+ pg_toast           | gpadmin
+ public             | gpadmin
+ retail_demo        | gpadmin
+```
+
+#### Apply Zeppelin Dynamic Forms
+
+You can leverage [Zeppelin Dynamic Form](https://zeppelin.incubator.apache.org/docs/manual/dynamicform.html) inside your queries. You can use both the `text input` and `select form` parametrization features.
+
+```sql
+%psql.sql
+SELECT ${group_by}, count(*) as count 
+FROM retail_demo.order_lineitems_pxf 
+GROUP BY ${group_by=product_id,product_id|product_name|customer_id|store_id} 
+ORDER BY count ${order=DESC,DESC|ASC} 
+LIMIT ${limit=10};
+```
+#### Example HAWQ PXF/HDFS Tables
+
+Create a HAWQ external table that reads tab-separated-value data from HDFS:
+
+```sql
+%psql.sql
+CREATE EXTERNAL TABLE retail_demo.payment_methods_pxf (
+  payment_method_id smallint,
+  payment_method_code character varying(20)
+) LOCATION ('pxf://${NAME_NODE_HOST}:50070/retail_demo/payment_methods.tsv.gz?profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER = E'\t');
+```
+And retrieve the content:
+
+```sql
+%psql.sql
+select * from retail_demo.payment_methods_pxf;
+```
+### Auto-completion
+The PSQL Interpreter provides basic auto-completion functionality. On `(Ctrl+.)` it lists the most relevant suggestions in a pop-up window. In addition to SQL keywords, the interpreter provides suggestions for schema, table, and column names as well.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/interpreter/spark.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/spark.md b/docs/interpreter/spark.md
new file mode 100644
index 0000000..58fce0b
--- /dev/null
+++ b/docs/interpreter/spark.md
@@ -0,0 +1,221 @@
+---
+layout: page
+title: "Spark Interpreter Group"
+description: ""
+group: manual
+---
+{% include JB/setup %}
+
+
+## Spark
+
+[Apache Spark](http://spark.apache.org) is supported in Zeppelin with
+the Spark Interpreter group, which consists of 4 interpreters.
+
+<table class="table-configuration">
+  <tr>
+    <th>Name</th>
+    <th>Class</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>%spark</td>
+    <td>SparkInterpreter</td>
+    <td>Creates SparkContext and provides scala environment</td>
+  </tr>
+  <tr>
+    <td>%pyspark</td>
+    <td>PySparkInterpreter</td>
+    <td>Provides python environment</td>
+  </tr>
+  <tr>
+    <td>%sql</td>
+    <td>SparkSQLInterpreter</td>
+    <td>Provides SQL environment</td>
+  </tr>
+  <tr>
+    <td>%dep</td>
+    <td>DepInterpreter</td>
+    <td>Dependency loader</td>
+  </tr>
+</table>
+
+
+<br />
+
+
+### SparkContext, SQLContext, ZeppelinContext
+
+SparkContext, SQLContext, ZeppelinContext are automatically created and exposed as variable names 'sc', 'sqlContext' and 'z', respectively, both in scala and python environments.
+
+Note that the scala and python environments share the same SparkContext, SQLContext, and ZeppelinContext instances.
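+For example, a paragraph can use these variables directly, without constructing them (a minimal sketch; the computation is illustrative):
+
+```scala
+%spark
+// 'sc' is the SparkContext Zeppelin has already created
+val rdd = sc.parallelize(1 to 100)
+println(rdd.sum()) // 5050.0
+```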
+
+
+<a name="dependencyloading"> </a>
+<br />
+<br />
+### Dependency Management
+There are two ways to load external libraries into the spark interpreter. The first is using Zeppelin's %dep interpreter and the second is loading Spark properties.
+
+#### 1. Dynamic Dependency Loading via %dep interpreter
+
+When your code requires an external library, instead of downloading/copying it and restarting Zeppelin, you can easily do the following using the %dep interpreter:
+
+ * Load libraries recursively from Maven repository
+ * Load libraries from local filesystem
+ * Add additional maven repository
+ * Automatically add libraries to SparkCluster (you can turn this off)
+
+The Dep interpreter leverages the scala environment, so you can write any Scala code here.
+Note that the %dep interpreter should be used before %spark, %pyspark, and %sql.
+
+Here is how it's used:
+
+```scala
+%dep
+z.reset() // clean up previously added artifact and repository
+
+// add maven repository
+z.addRepo("RepoName").url("RepoURL")
+
+// add maven snapshot repository
+z.addRepo("RepoName").url("RepoURL").snapshot()
+
+// add credentials for private maven repository
+z.addRepo("RepoName").url("RepoURL").username("username").password("password")
+
+// add artifact from filesystem
+z.load("/path/to.jar")
+
+// add artifact from maven repository, with no dependency
+z.load("groupId:artifactId:version").excludeAll()
+
+// add artifact recursively
+z.load("groupId:artifactId:version")
+
+// add artifact recursively except comma separated GroupID:ArtifactId list
+z.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId, ...")
+
+// exclude with pattern
+z.load("groupId:artifactId:version").exclude(*)
+z.load("groupId:artifactId:version").exclude("groupId:artifactId:*")
+z.load("groupId:artifactId:version").exclude("groupId:*")
+
+// local() skips adding artifact to spark clusters (skipping sc.addJar())
+z.load("groupId:artifactId:version").local()
+```
+
+
+<br />
+#### 2. Loading Spark Properties
+Once `SPARK_HOME` is set in `conf/zeppelin-env.sh`, Zeppelin uses `spark-submit` as the spark interpreter runner. `spark-submit` supports two ways to load configurations. The first is command line options such as --master; Zeppelin can pass these options to `spark-submit` by exporting `SPARK_SUBMIT_OPTIONS` in conf/zeppelin-env.sh. The second is reading configuration options from `SPARK_HOME/conf/spark-defaults.conf`. The Spark properties that users can set to distribute libraries are:
+
+<table class="table-configuration">
+  <tr>
+    <th>spark-defaults.conf</th>
+    <th>SPARK_SUBMIT_OPTIONS</th>
+    <th>Applicable Interpreter</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>spark.jars</td>
+    <td>--jars</td>
+    <td>%spark</td>
+    <td>Comma-separated list of local jars to include on the driver and executor classpaths.</td>
+  </tr>
+  <tr>
+    <td>spark.jars.packages</td>
+    <td>--packages</td>
+    <td>%spark</td>
+    <td>Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. Will search the local maven repo, then maven central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version.</td>
+  </tr>
+  <tr>
+    <td>spark.files</td>
+    <td>--files</td>
+    <td>%pyspark</td>
+    <td>Comma-separated list of files to be placed in the working directory of each executor.</td>
+  </tr>
+</table>
+Note that adding a jar to pyspark is only available via the %dep interpreter at the moment.
+
+<br/>
+Here are a few examples:
+
+##### 0.5.5 and later
+* SPARK\_SUBMIT\_OPTIONS in conf/zeppelin-env.sh
+
+		export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar --files /path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg"
+
+* SPARK_HOME/conf/spark-defaults.conf
+
+		spark.jars				/path/mylib1.jar,/path/mylib2.jar
+		spark.jars.packages		com.databricks:spark-csv_2.10:1.2.0
+		spark.files				/path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip
+
+##### 0.5.0
+* ZEPPELIN\_JAVA\_OPTS in conf/zeppelin-env.sh
+
+		export ZEPPELIN_JAVA_OPTS="-Dspark.jars=/path/mylib1.jar,/path/mylib2.jar -Dspark.files=/path/myfile1.dat,/path/myfile2.dat"
+<br />
+
+
+<a name="zeppelincontext"> </a>
+<br />
+<br />
+### ZeppelinContext
+
+
+Zeppelin automatically injects ZeppelinContext as the variable 'z' in your scala/python environment. ZeppelinContext provides some additional functions and utilities.
+
+<br />
+#### Object exchange
+
+ZeppelinContext extends map and is shared between the scala and python environments,
+so you can put an object in scala and read it from python, and vice versa.
+
+Put an object from scala:
+
+```scala
+%spark
+val myObject = ...
+z.put("objName", myObject)
+```
+
+Get the object from python:
+
+```python
+%python
+myObject = z.get("objName")
+```
+
+<br />
+#### Form creation
+
+ZeppelinContext provides functions for creating forms. 
+In scala and python environments, you can create forms programmatically.
+
+```scala
+%spark
+/* Create text input form */
+z.input("formName")
+
+/* Create text input form with default value */
+z.input("formName", "defaultValue")
+
+/* Create select form */
+z.select("formName", Seq(("option1", "option1DisplayName"),
+                         ("option2", "option2DisplayName")))
+
+/* Create select form with default value*/
+z.select("formName", "option1", Seq(("option1", "option1DisplayName"),
+                                    ("option2", "option2DisplayName")))
+```
+
+In the sql environment, you can create a form with a simple template.
+
+```
+%sql
+select * from ${table=defaultTableName} where text like '%${search}%'
+```
+
+To learn more about dynamic forms, check out [Dynamic Form](../dynamicform.html).

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/manual/dynamicform.md
----------------------------------------------------------------------
diff --git a/docs/manual/dynamicform.md b/docs/manual/dynamicform.md
new file mode 100644
index 0000000..06074fd
--- /dev/null
+++ b/docs/manual/dynamicform.md
@@ -0,0 +1,78 @@
+---
+layout: page
+title: "Dynamic Form"
+description: ""
+group: manual
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+
+## Dynamic Form
+
+Zeppelin dynamically creates input forms. Depending on the language backend, there are two different ways to create a dynamic form.
+A custom language backend can select which type of form creation it wants to use.
+
+<br />
+### Using form Templates
+
+This mode creates a form using a simple template language. It's simple and easy to use. For example, the Markdown, Shell, and SparkSql language backends use it.
+
+<br />
+#### Text input form
+
+To create a text input form, use the _${formName}_ template.
+
+For example:
+
+<img src="../../assets/themes/zeppelin/img/screenshots/form_input.png" />
+
+
+You can also provide a default value, using _${formName=defaultValue}_.
+
+<img src="../../assets/themes/zeppelin/img/screenshots/form_input_default.png" />
+
+
+<br />
+#### Select form
+
+To create a select form, use _${formName=defaultValue,option1|option2...}_.
+
+For example:
+
+<img src="../../assets/themes/zeppelin/img/screenshots/form_select.png" />
+
+You can also separate an option's display name and value, using _${formName=defaultValue,option1(DisplayName)|option2(DisplayName)...}_.
+
+<img src="../../assets/themes/zeppelin/img/screenshots/form_select_displayname.png" />
+
+<br />
+### Creating Forms Programmatically
+
+Some language backends use a programmatic way to create forms. For example, [ZeppelinContext](./interpreter/spark.html#zeppelincontext) provides a form creation API.
+
+Here are some examples.
+
+Text input form
+
+<img src="../../assets/themes/zeppelin/img/screenshots/form_input_prog.png" />
+
+Text input form with default value
+
+<img src="../../assets/themes/zeppelin/img/screenshots/form_input_default_prog.png" />
+
+Select form
+
+<img src="../../assets/themes/zeppelin/img/screenshots/form_select_prog.png" />

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/manual/interpreters.md
----------------------------------------------------------------------
diff --git a/docs/manual/interpreters.md b/docs/manual/interpreters.md
new file mode 100644
index 0000000..ff5bff7
--- /dev/null
+++ b/docs/manual/interpreters.md
@@ -0,0 +1,64 @@
+---
+layout: page
+title: "Interpreters"
+description: ""
+group: manual
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+
+## Interpreters in Zeppelin
+
+This section explains the role of interpreters, interpreter groups, and interpreter settings in Zeppelin.
+The Zeppelin interpreter concept allows any language/data-processing-backend to be plugged into Zeppelin.
+Currently Zeppelin supports many interpreters such as Scala (with Apache Spark), Python (with Apache Spark), SparkSQL, Hive, Markdown and Shell.
+
+### What is a Zeppelin interpreter?
+
+A Zeppelin interpreter is a plug-in which enables Zeppelin users to use a specific language/data-processing-backend. For example, to use scala code in Zeppelin, you need the ```spark``` interpreter.
+
+When you click the ```+Create``` button on the interpreter page, the interpreter drop-down list box will present all the available interpreters on your server.
+
+<img src="../../assets/themes/zeppelin/img/screenshots/interpreter_create.png">
+
+### What is a Zeppelin interpreter setting?
+
+A Zeppelin interpreter setting is the configuration of a given interpreter on the Zeppelin server. For example, the properties required for the Hive JDBC interpreter to connect to the Hive server.
+
+<img src="../../assets/themes/zeppelin/img/screenshots/interpreter_setting.png">
+
+### What is a Zeppelin interpreter group?
+
+Every interpreter belongs to an InterpreterGroup. An InterpreterGroup is the unit in which interpreters are started and stopped.
+By default, every interpreter belongs to a single group, but a group may contain more interpreters. For example, the spark interpreter group includes spark support, pySpark,
+SparkSQL and the dependency loader.
+
+Technically, Zeppelin interpreters from the same group are running in the same JVM.
+
+Interpreters belonging to a single group are registered together, and all of their properties are listed in the interpreter setting.
+<img src="../../assets/themes/zeppelin/img/screenshots/interpreter_setting_spark.png">
+
+### Programming languages for interpreters
+
+If the interpreter uses a specific programming language (like Scala, Python, SQL), it is generally a good idea to add syntax highlighting support for that to the notebook paragraph editor.  
+  
+To check out the list of languages supported, see the mode-*.js files under zeppelin-web/bower_components/ace-builds/src-noconflict or from github https://github.com/ajaxorg/ace-builds/tree/master/src-noconflict  
+  
+To add a new set of syntax highlighting,  
+1. add the mode-*.js file to zeppelin-web/bower.json (when built, zeppelin-web/src/index.html will be changed automatically)  
+2. add to the list of `editorMode` in zeppelin-web/src/app/notebook/paragraph/paragraph.controller.js - it follows the pattern 'ace/mode/x' where x is the name  
+3. add to the code that checks for `%` prefix and calls `session.setMode(editorMode.x)` in `setParagraphMode` in zeppelin-web/src/app/notebook/paragraph/paragraph.controller.js  
+  
+

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/manual/notebookashomepage.md
----------------------------------------------------------------------
diff --git a/docs/manual/notebookashomepage.md b/docs/manual/notebookashomepage.md
new file mode 100644
index 0000000..86f1ea9
--- /dev/null
+++ b/docs/manual/notebookashomepage.md
@@ -0,0 +1,109 @@
+---
+layout: page
+title: "Notebook as Homepage"
+description: ""
+group: manual
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+## Customize your Zeppelin homepage
+ Zeppelin allows you to use one of the notebooks you create as your Zeppelin homepage.
+ With that you can brand your Zeppelin installation,
+ adjust the instructions to your users' needs and even translate them into other languages.
+
+ <br />
+### How to set a notebook as your Zeppelin homepage
+
+The process for creating your homepage is very simple, as shown below:
+ 
+ 1. Create a notebook using Zeppelin
+ 2. Set the notebook id in the config file
+ 3. Restart Zeppelin
+ 
+ <br />
+#### Create a notebook using Zeppelin
+  Create a new notebook using Zeppelin.
+  You can use the ```%md``` interpreter for markdown content or any other interpreter you like.
+  
+  You can also use the display system to generate [text](../displaysystem/display.html), 
+  [html](../displaysystem/display.html#html), [table](../displaysystem/table.html) or
+   [angular](../displaysystem/angular.html)
+
+   Run the notebook (shift+Enter) and see the output. Optionally, change the notebook view to report mode to hide
+   the code sections.
+     
+   <br />
+#### Set the notebook id in the config file
+  To set the notebook id in the config file, copy it from the last token in the notebook URL.
+  
+  For example:
+  
+  <img src="../../assets/themes/zeppelin/img/screenshots/homepage_notebook_id.png" />
+
+  Set the notebook id to the ```ZEPPELIN_NOTEBOOK_HOMESCREEN``` environment variable 
+  or ```zeppelin.notebook.homescreen``` property. 
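+  
+  For example, in conf/zeppelin-env.sh (the notebook id is illustrative):
+  
+  ```
+  export ZEPPELIN_NOTEBOOK_HOMESCREEN=2AV4WUEMK
+  ```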
+  
+  You can also set the ```ZEPPELIN_NOTEBOOK_HOMESCREEN_HIDE``` environment variable 
+  or ```zeppelin.notebook.homescreen.hide``` property to hide the new notebook from the notebook list.
+
+  <br />
+#### Restart Zeppelin
+  Restart your Zeppelin server:
+  
+  ```
+  ./bin/zeppelin-daemon.sh stop
+  ./bin/zeppelin-daemon.sh start
+  ```
+  #### That's it! Open your browser, navigate to Zeppelin, and see your customized homepage...
+    
+  
+<br />
+### Show the notebook list in your custom homepage
+If you want to display the list of notebooks on your custom Zeppelin homepage, all
+you need to do is use the %angular support.
+  
+  <br />
+  Add the following code to a paragraph in your homepage and run it... voila! You have your notebook list.
+  
+  ```javascript
+  println(
+  """%angular 
+    <div class="col-md-4" ng-controller="HomeCtrl as home">
+      <h4>Notebooks</h4>
+      <div>
+        <h5><a href="" data-toggle="modal" data-target="#noteNameModal" style="text-decoration: none;">
+          <i style="font-size: 15px;" class="icon-notebook"></i> Create new note</a></h5>
+          <ul style="list-style-type: none;">
+            <li ng-repeat="note in home.notes.list track by $index"><i style="font-size: 10px;" class="icon-doc"></i>
+              <a style="text-decoration: none;" href="#/notebook/{{note.id}}">{{note.name || 'Note ' + note.id}}</a>
+            </li>
+          </ul>
+      </div>
+    </div>
+  """)
+  ```
+  
+  After running the notebook you will see output similar to this one:
+  <img src="../../assets/themes/zeppelin/img/screenshots/homepage_notebook_list.png" />
+  
+  The main trick here lies in linking the ```<div>``` to the controller:
+  
+  ```javascript
+  <div class="col-md-4" ng-controller="HomeCtrl as home">
+  ```
+  
+  Once we have ```home``` as our controller variable in our ```<div></div>``` 
+  we can use ```home.notes.list``` to get access to the notebook list.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/pleasecontribute.md
----------------------------------------------------------------------
diff --git a/docs/pleasecontribute.md b/docs/pleasecontribute.md
new file mode 100644
index 0000000..063b48f
--- /dev/null
+++ b/docs/pleasecontribute.md
@@ -0,0 +1,28 @@
+---
+layout: page
+title: "Please contribute"
+description: ""
+group: development
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+
+### Waiting for your help
+The content does not exist yet.
+
+We're always welcoming contribution.
+
+If you're interested, please check [How to contribute (website)](./development/howtocontributewebsite.html).

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/rest-api/rest-interpreter.md
----------------------------------------------------------------------
diff --git a/docs/rest-api/rest-interpreter.md b/docs/rest-api/rest-interpreter.md
new file mode 100644
index 0000000..d852340
--- /dev/null
+++ b/docs/rest-api/rest-interpreter.md
@@ -0,0 +1,363 @@
+---
+layout: page
+title: "Interpreter REST API"
+description: ""
+group: rest-api
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+## Zeppelin REST API
+ Zeppelin provides several REST APIs for interaction and remote activation of Zeppelin functionality.
+ 
+ All REST APIs are available starting with the following endpoint ```http://[zeppelin-server]:[zeppelin-port]/api```
+ 
+ Note that Zeppelin REST APIs receive and return JSON objects; it is recommended to install a JSON viewer such as 
+ [JSONView](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc)
+ 
+ 
+ If you work with Zeppelin and find a need for an additional REST API, please [file an issue or send us mail](../../community.html) 
+
+ <br />
+### Interpreter REST API list
+  
+  The role of registered interpreters, settings, and interpreter groups is described [here](../manual/interpreters.html).
+  
+  <table class="table-configuration">
+    <col width="200">
+    <tr>
+      <th>List registered interpreters</th>
+      <th></th>
+    </tr>
+    <tr>
+      <td>Description</td>
+      <td>This ```GET``` method returns all the registered interpreters available on the server.</td>
+    </tr>
+    <tr>
+      <td>URL</td>
+      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter```</td>
+    </tr>
+    <tr>
+      <td>Success code</td>
+      <td>200</td>
+    </tr>
+    <tr>
+      <td> Fail code</td>
+      <td> 500 </td>
+    </tr>
+    <tr>
+      <td> sample JSON response
+      </td>
+      <td>
+        <pre>
+{
+  "status": "OK",
+  "message": "",
+  "body": {
+    "md.md": {
+      "name": "md",
+      "group": "md",
+      "className": "org.apache.zeppelin.markdown.Markdown",
+      "properties": {},
+      "path": "/zeppelin/interpreter/md"
+    },
+    "spark.spark": {
+      "name": "spark",
+      "group": "spark",
+      "className": "org.apache.zeppelin.spark.SparkInterpreter",
+      "properties": {
+        "spark.executor.memory": {
+          "defaultValue": "512m",
+          "description": "Executor memory per worker instance. ex) 512m, 32g"
+        },
+        "spark.cores.max": {
+          "defaultValue": "",
+          "description": "Total number of cores to use. Empty value uses all available core."
+        }
+      },
+      "path": "/zeppelin/interpreter/spark"
+    },
+    "spark.sql": {
+      "name": "sql",
+      "group": "spark",
+      "className": "org.apache.zeppelin.spark.SparkSqlInterpreter",
+      "properties": {
+        "zeppelin.spark.maxResult": {
+          "defaultValue": "1000",
+          "description": "Max number of SparkSQL result to display."
+        }
+      },
+      "path": "/zeppelin/interpreter/spark"
+    }
+  }
+}
+        </pre>
+      </td>
+    </tr>
+  </table>
+  
+<br/>
+   
+  <table class="table-configuration">
+    <col width="200">
+    <tr>
+      <th>List interpreters settings</th>
+      <th></th>
+    </tr>
+    <tr>
+      <td>Description</td>
+      <td>This ```GET``` method returns all the interpreter settings registered on the server.</td>
+    </tr>
+    <tr>
+      <td>URL</td>
+      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting```</td>
+    </tr>
+    <tr>
+      <td>Success code</td>
+      <td>200</td>
+    </tr>
+    <tr>
+      <td> Fail code</td>
+      <td> 500 </td>
+    </tr>
+    <tr>
+      <td> sample JSON response
+      </td>
+      <td>
+        <pre>
+{
+  "status": "OK",
+  "message": "",
+  "body": [
+    {
+      "id": "2AYUGP2D5",
+      "name": "md",
+      "group": "md",
+      "properties": {
+        "_empty_": ""
+      },
+      "interpreterGroup": [
+        {
+          "class": "org.apache.zeppelin.markdown.Markdown",
+          "name": "md"
+        }
+      ]
+    },  
+    {
+      "id": "2AY6GV7Q3",
+      "name": "spark",
+      "group": "spark",
+      "properties": {
+        "spark.cores.max": "",
+        "spark.executor.memory": "512m",
+      },
+      "interpreterGroup": [
+        {
+          "class": "org.apache.zeppelin.spark.SparkInterpreter",
+          "name": "spark"
+        },
+        {
+          "class": "org.apache.zeppelin.spark.SparkSqlInterpreter",
+          "name": "sql"
+        }
+      ]
+    }
+  ]
+}
+        </pre>
+      </td>
+    </tr>
+  </table>
+
+<br/>
+   
+  <table class="table-configuration">
+    <col width="200">
+    <tr>
+      <th>Create an interpreter setting</th>
+      <th></th>
+    </tr>
+    <tr>
+      <td>Description</td>
+      <td>This ```POST``` method adds a new interpreter setting using a registered interpreter to the server.</td>
+    </tr>
+    <tr>
+      <td>URL</td>
+      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting```</td>
+    </tr>
+    <tr>
+      <td>Success code</td>
+      <td>201</td>
+    </tr>
+    <tr>
+      <td> Fail code</td>
+      <td> 500 </td>
+    </tr>
+    <tr>
+      <td> sample JSON input
+      </td>
+      <td>
+        <pre>
+{
+  "name": "Markdown setting name",
+  "group": "md",
+  "properties": {
+    "propname": "propvalue"
+  },
+  "interpreterGroup": [
+    {
+      "class": "org.apache.zeppelin.markdown.Markdown",
+      "name": "md"
+    }
+  ]
+}
+        </pre>
+      </td>
+    </tr>
+    <tr>
+      <td> sample JSON response
+      </td>
+      <td>
+        <pre>
+{
+  "status": "CREATED",
+  "message": "",
+  "body": {
+    "id": "2AYW25ANY",
+    "name": "Markdown setting name",
+    "group": "md",
+    "properties": {
+      "propname": "propvalue"
+    },
+    "interpreterGroup": [
+      {
+        "class": "org.apache.zeppelin.markdown.Markdown",
+        "name": "md"
+      }
+    ]
+  }
+}
+        </pre>
+      </td>
+    </tr>
+  </table>
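+  For example, the setting can be created with curl (a sketch; host and port assume a default local server):
+  
+  ```
+  curl -X POST http://localhost:8080/api/interpreter/setting \
+       -d '{"name": "Markdown setting name", "group": "md",
+            "properties": {"propname": "propvalue"},
+            "interpreterGroup": [{"class": "org.apache.zeppelin.markdown.Markdown", "name": "md"}]}'
+  ```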
+  
+  
+<br/>
+   
+  <table class="table-configuration">
+    <col width="200">
+    <tr>
+      <th>Update an interpreter setting</th>
+      <th></th>
+    </tr>
+    <tr>
+      <td>Description</td>
+      <td>This ```PUT``` method updates an interpreter setting with new properties.</td>
+    </tr>
+    <tr>
+      <td>URL</td>
+      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting/[interpreter ID]```</td>
+    </tr>
+    <tr>
+      <td>Success code</td>
+      <td>200</td>
+    </tr>
+    <tr>
+      <td> Fail code</td>
+      <td> 500 </td>
+    </tr>
+    <tr>
+      <td> sample JSON input
+      </td>
+      <td>
+        <pre>
+{
+  "name": "Markdown setting name",
+  "group": "md",
+  "properties": {
+    "propname": "Otherpropvalue"
+  },
+  "interpreterGroup": [
+    {
+      "class": "org.apache.zeppelin.markdown.Markdown",
+      "name": "md"
+    }
+  ]
+}
+        </pre>
+      </td>
+    </tr>
+    <tr>
+      <td> sample JSON response
+      </td>
+      <td>
+        <pre>
+{
+  "status": "OK",
+  "message": "",
+  "body": {
+    "id": "2AYW25ANY",
+    "name": "Markdown setting name",
+    "group": "md",
+    "properties": {
+      "propname": "Otherpropvalue"
+    },
+    "interpreterGroup": [
+      {
+        "class": "org.apache.zeppelin.markdown.Markdown",
+        "name": "md"
+      }
+    ]
+  }
+}
+        </pre>
+      </td>
+    </tr>
+  </table>
+
+  
+<br/>
+   
+  <table class="table-configuration">
+    <col width="200">
+    <tr>
+      <th>Delete an interpreter setting</th>
+      <th></th>
+    </tr>
+    <tr>
+      <td>Description</td>
+      <td>This ```DELETE``` method deletes a given interpreter setting.</td>
+    </tr>
+    <tr>
+      <td>URL</td>
+      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting/[interpreter ID]```</td>
+    </tr>
+    <tr>
+      <td>Success code</td>
+      <td>200</td>
+    </tr>
+    <tr>
+      <td> Fail code</td>
+      <td> 500 </td>
+    </tr>
+    <tr>
+      <td> sample JSON response
+      </td>
+      <td>
+        <pre>{"status":"OK"}</pre>
+      </td>
+    </tr>
+  </table>

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/rest-api/rest-notebook.md
----------------------------------------------------------------------
diff --git a/docs/rest-api/rest-notebook.md b/docs/rest-api/rest-notebook.md
new file mode 100644
index 0000000..ffee95a
--- /dev/null
+++ b/docs/rest-api/rest-notebook.md
@@ -0,0 +1,171 @@
+---
+layout: page
+title: "Notebook REST API"
+description: ""
+group: rest-api
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+## Zeppelin REST API
+ Zeppelin provides several REST APIs for interaction and remote activation of Zeppelin functionality.
+ 
+ All REST APIs are available starting with the following endpoint ```http://[zeppelin-server]:[zeppelin-port]/api```
+ 
+ Note that Zeppelin REST APIs receive and return JSON objects; it is recommended to install a JSON viewer such as 
+ [JSONView](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc)
+ 
+ 
+ If you work with Zeppelin and find a need for an additional REST API, please [file an issue or send us mail](../../community.html) 
+
+ <br />
+### Notebook REST API list
+  
+  The Notebook REST API supports the following operations: List, Create, Delete & Clone, as detailed in the tables below.
+  
+  <table class="table-configuration">
+    <col width="200">
+    <tr>
+      <th>List notebooks</th>
+      <th></th>
+    </tr>
+    <tr>
+      <td>Description</td>
+      <td>This ```GET``` method lists the available notebooks on your server.
+          The notebook JSON contains the ```name``` and ```id``` of all notebooks.
+      </td>
+    </tr>
+    <tr>
+      <td>URL</td>
+      <td>```http://[zeppelin-server]:[zeppelin-port]/api/notebook```</td>
+    </tr>
+    <tr>
+      <td>Success code</td>
+      <td>200</td>
+    </tr>
+    <tr>
+      <td> Fail code</td>
+      <td> 500 </td>
+    </tr>
+    <tr>
+      <td> sample JSON response </td>
+      <td><pre>{"status":"OK","message":"","body":[{"name":"Homepage","id":"2AV4WUEMK"},{"name":"Zeppelin Tutorial","id":"2A94M5J1Z"}]}</pre></td>
+    </tr>
+  </table>
+  
+<br/>
+
+  <table class="table-configuration">
+    <col width="200">
+    <tr>
+      <th>Create notebook</th>
+      <th></th>
+    </tr>
+    <tr>
+      <td>Description</td>
+      <td>This ```POST``` method creates a new notebook using the given name, or a default name if none is given.
+          The body field of the returned JSON contains the new notebook id.
+      </td>
+    </tr>
+    <tr>
+      <td>URL</td>
+      <td>```http://[zeppelin-server]:[zeppelin-port]/api/notebook```</td>
+    </tr>
+    <tr>
+      <td>Success code</td>
+      <td>201</td>
+    </tr>
+    <tr>
+      <td> Fail code</td>
+      <td> 500 </td>
+    </tr>
+    <tr>
+      <td> sample JSON input </td>
+      <td><pre>{"name": "name of new notebook"}</pre></td>
+    </tr>
+    <tr>
+      <td> sample JSON response </td>
+      <td><pre>{"status": "CREATED","message": "","body": "2AZPHY918"}</pre></td>
+    </tr>
+  </table>
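+  For example, with curl (a sketch; host and port assume a default local server):
+  
+  ```
+  curl -X POST http://localhost:8080/api/notebook -d '{"name": "name of new notebook"}'
+  ```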
+  
+<br/>
+
+  <table class="table-configuration">
+    <col width="200">
+    <tr>
+      <th>Delete notebook</th>
+      <th></th>
+    </tr>
+    <tr>
+      <td>Description</td>
+      <td>This ```DELETE``` method deletes a notebook with the given notebook id.
+      </td>
+    </tr>
+    <tr>
+      <td>URL</td>
+      <td>```http://[zeppelin-server]:[zeppelin-port]/api/notebook/[notebookId]```</td>
+    </tr>
+    <tr>
+      <td>Success code</td>
+      <td>200</td>
+    </tr>
+    <tr>
+      <td> Fail code</td>
+      <td> 500 </td>
+    </tr>
+    <tr>
+      <td> sample JSON response </td>
+      <td><pre>{"status":"OK","message":""}</pre></td>
+    </tr>
+  </table>
+  
+<br/>
+  
+  <table class="table-configuration">
+    <col width="200">
+    <tr>
+      <th>Clone notebook</th>
+      <th></th>
+    </tr>
+    <tr>
+      <td>Description</td>
+      <td>This ```POST``` method clones a notebook with the given id and creates a new notebook using the given name,
+          or a default name if none is given.
+          The body field of the returned JSON contains the new notebook id.
+      </td>
+    </tr>
+    <tr>
+      <td>URL</td>
+      <td>```http://[zeppelin-server]:[zeppelin-port]/api/notebook/[notebookId]```</td>
+    </tr>
+    <tr>
+      <td>Success code</td>
+      <td>201</td>
+    </tr>
+    <tr>
+      <td> Fail code</td>
+      <td> 500 </td>
+    </tr>
+    <tr>
+      <td> sample JSON input </td>
+      <td><pre>{"name": "name of new notebook"}</pre></td>
+    </tr>
+    <tr>
+      <td> sample JSON response </td>
+      <td><pre>{"status": "CREATED","message": "","body": "2AZPHY918"}</pre></td>
+    </tr>
+  </table>
+  

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/storage/storage.md
----------------------------------------------------------------------
diff --git a/docs/storage/storage.md b/docs/storage/storage.md
new file mode 100644
index 0000000..a04a703
--- /dev/null
+++ b/docs/storage/storage.md
@@ -0,0 +1,80 @@
+---
+layout: page
+title: "Storage"
+description: "Notebook Storage option for Zeppelin"
+group: storage
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+### Notebook Storage
+
+In Zeppelin there are two options for notebook storage: by default, notebooks are stored in the notebook folder on your local file system, and the second option is S3.
+
+</br>
+#### Notebook Storage in S3
+
+For notebook storage in S3 you need AWS credentials. There are three options for providing them: the environment variables ```AWS_ACCESS_KEY_ID``` and ```AWS_ACCESS_SECRET_KEY```, a credentials file in the .aws folder in your home directory, or an IAM role for your instance.
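+
+For the credentials-file option, a standard AWS credentials file (~/.aws/credentials) looks like this; the values are placeholders:
+
+```
+[default]
+aws_access_key_id = YOUR_ACCESS_KEY_ID
+aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
+```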
+
+</br>
+To complete the setup, you need the following folder structure on S3:
+
+```
+bucket_name/
+  username/
+    notebook/
+
+```
+
+Set the environment variables in the file **zeppelin-env.sh**:
+
+```
+export ZEPPELIN_NOTEBOOK_S3_BUCKET=bucket_name
+export ZEPPELIN_NOTEBOOK_S3_USER=username
+```
+
+In the file **zeppelin-site.xml**, uncomment and complete the following properties:
+
+```
+<!--If S3 is used for storage, the following folder structure is required: bucket_name/username/notebook/-->
+<property>
+  <name>zeppelin.notebook.s3.user</name>
+  <value>username</value>
+  <description>user name for s3 folder structure</description>
+</property>
+<property>
+  <name>zeppelin.notebook.s3.bucket</name>
+  <value>bucket_name</value>
+  <description>bucket name for notebook storage</description>
+</property>
+```
+
+Uncomment the following property to use the S3NotebookRepo class:
+
+```
+<property>
+  <name>zeppelin.notebook.storage</name>
+  <value>org.apache.zeppelin.notebook.repo.S3NotebookRepo</value>
+  <description>notebook persistence layer implementation</description>
+</property>
+```
+
+And comment out the following property:
+
+```
+<property>
+  <name>zeppelin.notebook.storage</name>
+  <value>org.apache.zeppelin.notebook.repo.VFSNotebookRepo</value>
+  <description>notebook persistence layer implementation</description>
+</property>
+```   
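+
+Finally, restart Zeppelin so the new storage configuration takes effect. A sketch, run from the Zeppelin installation directory:
+
+```
+bin/zeppelin-daemon.sh stop
+bin/zeppelin-daemon.sh start
+```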

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/tutorial/tutorial.md
----------------------------------------------------------------------
diff --git a/docs/tutorial/tutorial.md b/docs/tutorial/tutorial.md
new file mode 100644
index 0000000..68b2ee7
--- /dev/null
+++ b/docs/tutorial/tutorial.md
@@ -0,0 +1,197 @@
+---
+layout: page
+title: "Tutorial"
+description: "Tutorial is valid for Spark 1.3 and higher"
+group: tutorial
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+### Zeppelin Tutorial
+
+We will assume you have Zeppelin installed already. If that's not the case, see [Install](../install/install.html).
+
+Zeppelin's current main backend processing engine is [Apache Spark](https://spark.apache.org). If you're new to the system, you might want to start by getting an idea of how it processes data to get the most out of Zeppelin.
+
+<br />
+### Tutorial with Local File
+
+#### Data Refine
+
+Before you start the Zeppelin tutorial, you will need to download [bank.zip](http://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank.zip).
+
+First, to transform the data from CSV format into an RDD of `Bank` objects, run the following script. This will also remove the header using the `filter` function.
+
+```scala
+
+val bankText = sc.textFile("yourPath/bank/bank-full.csv")
+
+case class Bank(age:Integer, job:String, marital : String, education : String, balance : Integer)
+
+// split each line, filter out header (starts with "age"), and map it into Bank case class  
+val bank = bankText.map(s=>s.split(";")).filter(s=>s(0)!="\"age\"").map(
+    s=>Bank(s(0).toInt, 
+            s(1).replaceAll("\"", ""),
+            s(2).replaceAll("\"", ""),
+            s(3).replaceAll("\"", ""),
+            s(5).replaceAll("\"", "").toInt
+        )
+)
+
+// convert to DataFrame and create temporal table
+bank.toDF().registerTempTable("bank")
+```
+
+<br />
+#### Data Retrieval
+
+Suppose we want to see the age distribution from `bank`. To do this, run:
+
+```sql
+%sql select age, count(1) from bank where age < 30 group by age order by age
+```
+
+You can create an input box for setting the age condition by replacing `30` with `${maxAge=30}`.
+
+```sql
+%sql select age, count(1) from bank where age < ${maxAge=30} group by age order by age
+```
+
+Now suppose we want to see the age distribution for a certain marital status, with a combo box to select it. Run:
+
+```sql
+%sql select age, count(1) from bank where marital="${marital=single,single|divorced|married}" group by age order by age
+```
+
+<br />
+### Tutorial with Streaming Data 
+
+#### Data Refine
+
+Since this tutorial is based on Twitter's sample tweet stream, you must configure authentication with a Twitter account. To do this, take a look at [Twitter Credential Setup](https://databricks-training.s3.amazonaws.com/realtime-processing-with-spark-streaming.html#twitter-credential-setup). After you get API keys, you should fill in the credential-related values (`apiKey`, `apiSecret`, `accessToken`, `accessTokenSecret`) in the following script.
+
+This will create an RDD of `Tweet` objects and register the stream data as a table:
+
+```scala
+import org.apache.spark.streaming._
+import org.apache.spark.streaming.twitter._
+import org.apache.spark.storage.StorageLevel
+import scala.io.Source
+import scala.collection.mutable.HashMap
+import java.io.File
+import org.apache.log4j.Logger
+import org.apache.log4j.Level
+import sys.process.stringSeqToProcess
+
+/** Configures the Oauth Credentials for accessing Twitter */
+def configureTwitterCredentials(apiKey: String, apiSecret: String, accessToken: String, accessTokenSecret: String) {
+  val configs = new HashMap[String, String] ++= Seq(
+    "apiKey" -> apiKey, "apiSecret" -> apiSecret, "accessToken" -> accessToken, "accessTokenSecret" -> accessTokenSecret)
+  println("Configuring Twitter OAuth")
+  configs.foreach{ case(key, value) =>
+    if (value.trim.isEmpty) {
+      throw new Exception("Error setting authentication - value for " + key + " not set")
+    }
+    val fullKey = "twitter4j.oauth." + key.replace("api", "consumer")
+    System.setProperty(fullKey, value.trim)
+    println("\tProperty " + fullKey + " set as [" + value.trim + "]")
+  }
+  println()
+}
+
+// Configure Twitter credentials
+val apiKey = "xxxxxxxxxxxxxxxxxxxxxxxxx"
+val apiSecret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
+val accessToken = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
+val accessTokenSecret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
+configureTwitterCredentials(apiKey, apiSecret, accessToken, accessTokenSecret)
+
+val ssc = new StreamingContext(sc, Seconds(2))
+val tweets = TwitterUtils.createStream(ssc, None)
+val twt = tweets.window(Seconds(60))
+
+case class Tweet(createdAt:Long, text:String)
+twt.map(status=>
+  Tweet(status.getCreatedAt().getTime()/1000, status.getText())
+).foreachRDD(rdd=>
+  // Below line works only in spark 1.3.0.
+  // For spark 1.1.x and spark 1.2.x,
+  // use rdd.registerTempTable("tweets") instead.
+  rdd.toDF().registerAsTable("tweets")
+)
+
+twt.print
+
+ssc.start()
+```
+
+<br />
+#### Data Retrieval
+
+For each of the following scripts, you will see a different result every time you click the run button, since they are based on real-time data.
+
+Let's begin by extracting a maximum of 10 tweets which contain the word "girl".
+
+```sql
+%sql select * from tweets where text like '%girl%' limit 10
+```
+
+Now suppose we want to see how many tweets were created per second during the last 60 seconds. To do this, run:
+
+```sql
+%sql select createdAt, count(1) from tweets group by createdAt order by createdAt
+```
+
+
+You can create a user-defined function and use it in Spark SQL. Let's try it by creating a function named `sentiment`. This function will return one of three attitudes (positive, negative, neutral) towards the parameter.
+
+```scala
+def sentiment(s:String) : String = {
+    val positive = Array("like", "love", "good", "great", "happy", "cool", "the", "one", "that")
+    val negative = Array("hate", "bad", "stupid", "is")
+    
+    var st = 0;
+
+    val words = s.split(" ")    
+    positive.foreach(p =>
+        words.foreach(w =>
+            if(p==w) st = st+1
+        )
+    )
+    
+    negative.foreach(p=>
+        words.foreach(w=>
+            if(p==w) st = st-1
+        )
+    )
+    if(st>0)
+        "positivie"
+    else if(st<0)
+        "negative"
+    else
+        "neutral"
+}
+
+// Below line works only in spark 1.3.0.
+// For spark 1.1.x and spark 1.2.x,
+// use sqlc.registerFunction("sentiment", sentiment _) instead.
+sqlc.udf.register("sentiment", sentiment _)
+
+```
+
+To check how people feel about girls using the `sentiment` function we've created above, run this:
+
+```sql
+%sql select sentiment(text), count(1) from tweets where text like '%girl%' group by sentiment(text)
+```


[2/4] incubator-zeppelin git commit: ZEPPELIN-412 Documentation based on Zeppelin version

Posted by mo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/rest-api/rest-interpreter.md
----------------------------------------------------------------------
diff --git a/docs/docs/rest-api/rest-interpreter.md b/docs/docs/rest-api/rest-interpreter.md
deleted file mode 100644
index d852340..0000000
--- a/docs/docs/rest-api/rest-interpreter.md
+++ /dev/null
@@ -1,363 +0,0 @@
----
-layout: page
-title: "Interpreter REST API"
-description: ""
-group: rest-api
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-## Zeppelin REST API
- Zeppelin provides several REST API's for interaction and remote activation of zeppelin functionality.
- 
- All REST API are available starting with the following endpoint ```http://[zeppelin-server]:[zeppelin-port]/api```
- 
- Note that zeppein REST API receive or return JSON objects, it it recommended you install some JSON view such as 
- [JSONView](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc)
- 
- 
- If you work with zeppelin and find a need for an additional REST API please [file an issue or send us mail](../../community.html) 
-
- <br />
-### Interpreter REST API list
-  
-  The role of registered interpreters, settings and interpreters group is described [here](../manual/interpreters.html)
-  
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>List registered interpreters</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```GET``` method return all the registered interpreters available on the server.</td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>200</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON response
-      </td>
-      <td>
-        <pre>
-{
-  "status": "OK",
-  "message": "",
-  "body": {
-    "md.md": {
-      "name": "md",
-      "group": "md",
-      "className": "org.apache.zeppelin.markdown.Markdown",
-      "properties": {},
-      "path": "/zeppelin/interpreter/md"
-    },
-    "spark.spark": {
-      "name": "spark",
-      "group": "spark",
-      "className": "org.apache.zeppelin.spark.SparkInterpreter",
-      "properties": {
-        "spark.executor.memory": {
-          "defaultValue": "512m",
-          "description": "Executor memory per worker instance. ex) 512m, 32g"
-        },
-        "spark.cores.max": {
-          "defaultValue": "",
-          "description": "Total number of cores to use. Empty value uses all available core."
-        },
-      },
-      "path": "/zeppelin/interpreter/spark"
-    },
-    "spark.sql": {
-      "name": "sql",
-      "group": "spark",
-      "className": "org.apache.zeppelin.spark.SparkSqlInterpreter",
-      "properties": {
-        "zeppelin.spark.maxResult": {
-          "defaultValue": "1000",
-          "description": "Max number of SparkSQL result to display."
-        }
-      },
-      "path": "/zeppelin/interpreter/spark"
-    }
-  }
-}
-        </pre>
-      </td>
-    </tr>
-  </table>
-  
-<br/>
-   
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>List interpreters settings</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```GET``` method return all the interpreters settings registered on the server.</td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>200</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON response
-      </td>
-      <td>
-        <pre>
-{
-  "status": "OK",
-  "message": "",
-  "body": [
-    {
-      "id": "2AYUGP2D5",
-      "name": "md",
-      "group": "md",
-      "properties": {
-        "_empty_": ""
-      },
-      "interpreterGroup": [
-        {
-          "class": "org.apache.zeppelin.markdown.Markdown",
-          "name": "md"
-        }
-      ]
-    },  
-    {
-      "id": "2AY6GV7Q3",
-      "name": "spark",
-      "group": "spark",
-      "properties": {
-        "spark.cores.max": "",
-        "spark.executor.memory": "512m",
-      },
-      "interpreterGroup": [
-        {
-          "class": "org.apache.zeppelin.spark.SparkInterpreter",
-          "name": "spark"
-        },
-        {
-          "class": "org.apache.zeppelin.spark.SparkSqlInterpreter",
-          "name": "sql"
-        }
-      ]
-    }
-  ]
-}
-        </pre>
-      </td>
-    </tr>
-  </table>
-
-<br/>
-   
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>Create an interpreter setting</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```POST``` method adds a new interpreter setting using a registered interpreter to the server.</td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>201</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON input
-      </td>
-      <td>
-        <pre>
-{
-  "name": "Markdown setting name",
-  "group": "md",
-  "properties": {
-    "propname": "propvalue"
-  },
-  "interpreterGroup": [
-    {
-      "class": "org.apache.zeppelin.markdown.Markdown",
-      "name": "md"
-    }
-  ]
-}
-        </pre>
-      </td>
-    </tr>
-    <tr>
-      <td> sample JSON response
-      </td>
-      <td>
-        <pre>
-{
-  "status": "CREATED",
-  "message": "",
-  "body": {
-    "id": "2AYW25ANY",
-    "name": "Markdown setting name",
-    "group": "md",
-    "properties": {
-      "propname": "propvalue"
-    },
-    "interpreterGroup": [
-      {
-        "class": "org.apache.zeppelin.markdown.Markdown",
-        "name": "md"
-      }
-    ]
-  }
-}
-        </pre>
-      </td>
-    </tr>
-  </table>
-  
-  
-<br/>
-   
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>Update an interpreter setting</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```PUT``` method updates an interpreter setting with new properties.</td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting/[interpreter ID]```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>200</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON input
-      </td>
-      <td>
-        <pre>
-{
-  "name": "Markdown setting name",
-  "group": "md",
-  "properties": {
-    "propname": "Otherpropvalue"
-  },
-  "interpreterGroup": [
-    {
-      "class": "org.apache.zeppelin.markdown.Markdown",
-      "name": "md"
-    }
-  ]
-}
-        </pre>
-      </td>
-    </tr>
-    <tr>
-      <td> sample JSON response
-      </td>
-      <td>
-        <pre>
-{
-  "status": "OK",
-  "message": "",
-  "body": {
-    "id": "2AYW25ANY",
-    "name": "Markdown setting name",
-    "group": "md",
-    "properties": {
-      "propname": "Otherpropvalue"
-    },
-    "interpreterGroup": [
-      {
-        "class": "org.apache.zeppelin.markdown.Markdown",
-        "name": "md"
-      }
-    ]
-  }
-}
-        </pre>
-      </td>
-    </tr>
-  </table>
-
-  
-<br/>
-   
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>Delete an interpreter setting</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```DELETE``` method deletes an given interpreter setting.</td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/interpreter/setting/[interpreter ID]```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>200</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON response
-      </td>
-      <td>
-        <pre>{"status":"OK"}</pre>
-      </td>
-    </tr>
-  </table>

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/rest-api/rest-notebook.md
----------------------------------------------------------------------
diff --git a/docs/docs/rest-api/rest-notebook.md b/docs/docs/rest-api/rest-notebook.md
deleted file mode 100644
index ffee95a..0000000
--- a/docs/docs/rest-api/rest-notebook.md
+++ /dev/null
@@ -1,171 +0,0 @@
----
-layout: page
-title: "Notebook REST API"
-description: ""
-group: rest-api
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-## Zeppelin REST API
- Zeppelin provides several REST API's for interaction and remote activation of zeppelin functionality.
- 
- All REST API are available starting with the following endpoint ```http://[zeppelin-server]:[zeppelin-port]/api```
- 
- Note that zeppein REST API receive or return JSON objects, it it recommended you install some JSON view such as 
- [JSONView](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc)
- 
- 
- If you work with zeppelin and find a need for an additional REST API please [file an issue or send us mail](../../community.html) 
-
- <br />
-### Notebook REST API list
-  
-  Notebooks REST API supports the following operations: List, Create, Delete & Clone as detailed in the following table 
-  
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>List notebooks</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```GET``` method list the available notebooks on your server.
-          Notebook JSON contains the ```name``` and ```id``` of all notebooks.
-      </td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/notebook```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>200</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON response </td>
-      <td><pre>{"status":"OK","message":"","body":[{"name":"Homepage","id":"2AV4WUEMK"},{"name":"Zeppelin Tutorial","id":"2A94M5J1Z"}]}</pre></td>
-    </tr>
-  </table>
-  
-<br/>
-
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>Create notebook</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```POST``` method create a new notebook using the given name or default name if none given.
-          The body field of the returned JSON contain the new notebook id.
-      </td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/notebook```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>201</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON input </td>
-      <td><pre>{"name": "name of new notebook"}</pre></td>
-    </tr>
-    <tr>
-      <td> sample JSON response </td>
-      <td><pre>{"status": "CREATED","message": "","body": "2AZPHY918"}</pre></td>
-    </tr>
-  </table>
-  
-<br/>
-
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>Delete notebook</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```DELETE``` method delete a notebook by the given notebook id.
-      </td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/notebook/[notebookId]```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>200</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON response </td>
-      <td><pre>{"status":"OK","message":""}</pre></td>
-    </tr>
-  </table>
-  
-<br/>
-  
-  <table class="table-configuration">
-    <col width="200">
-    <tr>
-      <th>Clone notebook</th>
-      <th></th>
-    </tr>
-    <tr>
-      <td>Description</td>
-      <td>This ```POST``` method clone a notebook by the given id and create a new notebook using the given name 
-          or default name if none given.
-          The body field of the returned JSON contain the new notebook id.
-      </td>
-    </tr>
-    <tr>
-      <td>URL</td>
-      <td>```http://[zeppelin-server]:[zeppelin-port]/api/notebook/[notebookId]```</td>
-    </tr>
-    <tr>
-      <td>Success code</td>
-      <td>201</td>
-    </tr>
-    <tr>
-      <td> Fail code</td>
-      <td> 500 </td>
-    </tr>
-    <tr>
-      <td> sample JSON input </td>
-      <td><pre>{"name": "name of new notebook"}</pre></td>
-    </tr>
-    <tr>
-      <td> sample JSON response </td>
-      <td><pre>{"status": "CREATED","message": "","body": "2AZPHY918"}</pre></td>
-    </tr>
-  </table>
-  

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/storage/storage.md
----------------------------------------------------------------------
diff --git a/docs/docs/storage/storage.md b/docs/docs/storage/storage.md
deleted file mode 100644
index a04a703..0000000
--- a/docs/docs/storage/storage.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-layout: page
-title: "Storage"
-description: "Notebook Storage option for Zeppelin"
-group: storage
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-### Notebook Storage
-
-In Zeppelin there are two option for storage Notebook, by default the notebook is storage in the notebook folder in your local File System and the second option is S3.
-
-</br>
-#### Notebook Storage in S3
-
-For notebook storage in S3 you need the AWS credentials, for this there are three options, the enviroment variable ```AWS_ACCESS_KEY_ID``` and ```AWS_ACCESS_SECRET_KEY```,  credentials file in the folder .aws in you home and IAM role for your instance. For complete the need steps is necessary:
-
-</br>
-you need the following folder structure on S3
-
-```
-bucket_name/
-  username/
-    notebook/
-
-```
-
-set the enviroment variable in the file **zeppelin-env.sh**:
-
-```
-export ZEPPELIN_NOTEBOOK_S3_BUCKET = bucket_name
-export ZEPPELIN_NOTEBOOK_S3_USER = username
-```
-
-in the file **zeppelin-site.xml** uncommet and complete the next property:
-
-```
-<!--If used S3 to storage, it is necessary the following folder structure bucket_name/username/notebook/-->
-<property>
-  <name>zeppelin.notebook.s3.user</name>
-  <value>username</value>
-  <description>user name for s3 folder structure</description>
-</property>
-<property>
-  <name>zeppelin.notebook.s3.bucket</name>
-  <value>bucket_name</value>
-  <description>bucket name for notebook storage</description>
-</property>
-```
-
-uncomment the next property for use S3NotebookRepo class:
-
-```
-<property>
-  <name>zeppelin.notebook.storage</name>
-  <value>org.apache.zeppelin.notebook.repo.S3NotebookRepo</value>
-  <description>notebook persistence layer implementation</description>
-</property>
-```
-
-comment the next property:
-
-```
-<property>
-  <name>zeppelin.notebook.storage</name>
-  <value>org.apache.zeppelin.notebook.repo.VFSNotebookRepo</value>
-  <description>notebook persistence layer implementation</description>
-</property>
-```   

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/tutorial/tutorial.md
----------------------------------------------------------------------
diff --git a/docs/docs/tutorial/tutorial.md b/docs/docs/tutorial/tutorial.md
deleted file mode 100644
index 68b2ee7..0000000
--- a/docs/docs/tutorial/tutorial.md
+++ /dev/null
@@ -1,197 +0,0 @@
----
-layout: page
-title: "Tutorial"
-description: "Tutorial is valid for Spark 1.3 and higher"
-group: tutorial
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-### Zeppelin Tutorial
-
-We will assume you have Zeppelin installed already. If that's not the case, see [Install](../install/install.html).
-
-Zeppelin's current main backend processing engine is [Apache Spark](https://spark.apache.org). If you're new to the system, you might want to start by getting an idea of how it processes data to get the most out of Zeppelin.
-
-<br />
-### Tutorial with Local File
-
-#### Data Refine
-
-Before you start Zeppelin tutorial, you will need to download [bank.zip](http://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank.zip). 
-
-First, to transform data from csv format into RDD of `Bank` objects, run following script. This will also remove header using `filter` function.
-
-```scala
-
-val bankText = sc.textFile("yourPath/bank/bank-full.csv")
-
-case class Bank(age:Integer, job:String, marital : String, education : String, balance : Integer)
-
-// split each line, filter out header (starts with "age"), and map it into Bank case class  
-val bank = bankText.map(s=>s.split(";")).filter(s=>s(0)!="\"age\"").map(
-    s=>Bank(s(0).toInt, 
-            s(1).replaceAll("\"", ""),
-            s(2).replaceAll("\"", ""),
-            s(3).replaceAll("\"", ""),
-            s(5).replaceAll("\"", "").toInt
-        )
-)
-
-// convert to DataFrame and create temporal table
-bank.toDF().registerTempTable("bank")
-```
-
-<br />
-#### Data Retrieval
-
-Suppose we want to see age distribution from `bank`. To do this, run:
-
-```sql
-%sql select age, count(1) from bank where age < 30 group by age order by age
-```
-
-You can make input box for setting age condition by replacing `30` with `${maxAge=30}`.
-
-```sql
-%sql select age, count(1) from bank where age < ${maxAge=30} group by age order by age
-```
-
-Now we want to see age distribution with certain marital status and add combo box to select marital status. Run:
-
-```sql
-%sql select age, count(1) from bank where marital="${marital=single,single|divorced|married}" group by age order by age
-```
-
-<br />
-### Tutorial with Streaming Data 
-
-#### Data Refine
-
-Since this tutorial is based on Twitter's sample tweet stream, you must configure authentication with a Twitter account. To do this, take a look at [Twitter Credential Setup](https://databricks-training.s3.amazonaws.com/realtime-processing-with-spark-streaming.html#twitter-credential-setup). After you get API keys, you should fill out credential related values(`apiKey`, `apiSecret`, `accessToken`, `accessTokenSecret`) with your API keys on following script.
-
-This will create a RDD of `Tweet` objects and register these stream data as a table:
-
-```scala
-import org.apache.spark.streaming._
-import org.apache.spark.streaming.twitter._
-import org.apache.spark.storage.StorageLevel
-import scala.io.Source
-import scala.collection.mutable.HashMap
-import java.io.File
-import org.apache.log4j.Logger
-import org.apache.log4j.Level
-import sys.process.stringSeqToProcess
-
-/** Configures the Oauth Credentials for accessing Twitter */
-def configureTwitterCredentials(apiKey: String, apiSecret: String, accessToken: String, accessTokenSecret: String) {
-  val configs = new HashMap[String, String] ++= Seq(
-    "apiKey" -> apiKey, "apiSecret" -> apiSecret, "accessToken" -> accessToken, "accessTokenSecret" -> accessTokenSecret)
-  println("Configuring Twitter OAuth")
-  configs.foreach{ case(key, value) =>
-    if (value.trim.isEmpty) {
-      throw new Exception("Error setting authentication - value for " + key + " not set")
-    }
-    val fullKey = "twitter4j.oauth." + key.replace("api", "consumer")
-    System.setProperty(fullKey, value.trim)
-    println("\tProperty " + fullKey + " set as [" + value.trim + "]")
-  }
-  println()
-}
-
-// Configure Twitter credentials
-val apiKey = "xxxxxxxxxxxxxxxxxxxxxxxxx"
-val apiSecret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
-val accessToken = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
-val accessTokenSecret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
-configureTwitterCredentials(apiKey, apiSecret, accessToken, accessTokenSecret)
-
-import org.apache.spark.streaming.twitter._
-val ssc = new StreamingContext(sc, Seconds(2))
-val tweets = TwitterUtils.createStream(ssc, None)
-val twt = tweets.window(Seconds(60))
-
-case class Tweet(createdAt:Long, text:String)
-twt.map(status=>
-  Tweet(status.getCreatedAt().getTime()/1000, status.getText())
-).foreachRDD(rdd=>
-  // Below line works only in spark 1.3.0.
-  // For spark 1.1.x and spark 1.2.x,
-  // use rdd.registerTempTable("tweets") instead.
-  rdd.toDF().registerAsTable("tweets")
-)
-
-twt.print
-
-ssc.start()
-```
-
-<br />
-#### Data Retrieval
-
-For each following script, every time you click run button you will see different result since it is based on real-time data.
-
-Let's begin by extracting maximum 10 tweets which contain the word "girl".
-
-```sql
-%sql select * from tweets where text like '%girl%' limit 10
-```
-
-This time suppose we want to see how many tweets have been created per sec during last 60 sec. To do this, run:
-
-```sql
-%sql select createdAt, count(1) from tweets group by createdAt order by createdAt
-```
-
-
-You can make user-defined function and use it in Spark SQL. Let's try it by making function named `sentiment`. This function will return one of the three attitudes(positive, negative, neutral) towards the parameter.
-
-```scala
-def sentiment(s:String) : String = {
-    val positive = Array("like", "love", "good", "great", "happy", "cool", "the", "one", "that")
-    val negative = Array("hate", "bad", "stupid", "is")
-    
-    var st = 0;
-
-    val words = s.split(" ")    
-    positive.foreach(p =>
-        words.foreach(w =>
-            if(p==w) st = st+1
-        )
-    )
-    
-    negative.foreach(p=>
-        words.foreach(w=>
-            if(p==w) st = st-1
-        )
-    )
-    if(st>0)
-        "positivie"
-    else if(st<0)
-        "negative"
-    else
-        "neutral"
-}
-
-// Below line works only in spark 1.3.0.
-// For spark 1.1.x and spark 1.2.x,
-// use sqlc.registerFunction("sentiment", sentiment _) instead.
-sqlc.udf.register("sentiment", sentiment _)
-
-```
-
-To check how people think about girls using `sentiment` function we've made above, run this:
-
-```sql
-%sql select sentiment(text), count(1) from tweets where text like '%girl%' group by sentiment(text)
-```

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/download.md
----------------------------------------------------------------------
diff --git a/docs/download.md b/docs/download.md
deleted file mode 100644
index 99c4ac1..0000000
--- a/docs/download.md
+++ /dev/null
@@ -1,87 +0,0 @@
----
-layout: page
-title: "Download"
-description: ""
-group: nav-right
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-### Download Zeppelin
-
-The latest release of Apache Zeppelin (incubating) is *0.5.0-incubating*.
-
-  - 0.5.0-incubating released on July 23, 2015 ([release notes](./docs/releases/zeppelin-release-0.5.0-incubating.html)) ([git tag](https://git-wip-us.apache.org/repos/asf?p=incubator-zeppelin.git;a=tag;h=refs/tags/v0.5.0))
-
-
-    * Source:
-    <a style="cursor:pointer" onclick="ga('send', 'event', 'download', 'zeppelin-src', '0.5.0-incubating'); window.location.href='http://www.apache.org/dyn/closer.cgi/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating.tgz'">zeppelin-0.5.0-incubating.tgz</a>
-    ([pgp](https://www.apache.org/dist/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating.tgz.asc),
-     [md5](https://www.apache.org/dist/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating.tgz.md5),
-     [sha](https://www.apache.org/dist/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating.tgz.sha))
-
-    * Binary built with spark-1.4.0 and hadoop-2.3:
-    <a style="cursor:pointer" onclick="ga('send', 'event', 'download', 'zeppelin-bin', '0.5.0-incubating'); window.location.href='http://www.apache.org/dyn/closer.cgi/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating-bin-spark-1.4.0_hadoop-2.3.tgz'">zeppelin-0.5.0-incubating-bin-spark-1.4.0_hadoop-2.3.tgz</a>
-    ([pgp](https://www.apache.org/dist/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating-bin-spark-1.4.0_hadoop-2.3.tgz.asc),
-     [md5](https://www.apache.org/dist/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating-bin-spark-1.4.0_hadoop-2.3.tgz.md5),
-     [sha](https://www.apache.org/dist/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating-bin-spark-1.4.0_hadoop-2.3.tgz.sha))
-
-    * Binary built with spark-1.3.1 and hadoop-2.3:
-    <a style="cursor:pointer" onclick="ga('send', 'event', 'download', 'zeppelin-bin', '0.5.0-incubating'); window.location.href='http://www.apache.org/dyn/closer.cgi/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating-bin-spark-1.3.1_hadoop-2.3.tgz'">zeppelin-0.5.0-incubating-bin-spark-1.3.1_hadoop-2.3.tgz</a>
-    ([pgp](https://www.apache.org/dist/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating-bin-spark-1.3.1_hadoop-2.3.tgz.asc),
-     [md5](https://www.apache.org/dist/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating-bin-spark-1.3.1_hadoop-2.3.tgz.md5),
-     [sha](https://www.apache.org/dist/incubator/zeppelin/0.5.0-incubating/zeppelin-0.5.0-incubating-bin-spark-1.3.1_hadoop-2.3.tgz.sha))
-    
-    
-
-
-
-### Verify the integrity of the files
-
-It is essential that you [verify](https://www.apache.org/info/verification.html) the integrity of the downloaded files using the PGP or MD5 signatures. This signature should be matched against the [KEYS](https://www.apache.org/dist/incubator/zeppelin/KEYS) file.
-
-
-
-### Build from source
-
-For developers, to get latest *0.6.0-incubating-SNAPSHOT* check [install](./docs/install/install.html) section.
-
-
-<!-- 
--------------
-### Old release
-
-##### Zeppelin-0.3.3 (2014.03.29)
-
-Download <a onclick="ga('send', 'event', 'download', 'zeppelin', '0.3.3');" href="https://s3-ap-northeast-1.amazonaws.com/zeppel.in/zeppelin-0.3.3.tar.gz">zeppelin-0.3.3.tar.gz</a> ([release note](https://zeppelin-project.atlassian.net/secure/ReleaseNote.jspa?projectId=10001&version=10301))
-
-
-##### Zeppelin-0.3.2 (2014.03.14)
-
-Download <a onclick="ga('send', 'event', 'download', 'zeppelin', '0.3.2');" href="https://s3-ap-northeast-1.amazonaws.com/zeppel.in/zeppelin-0.3.2.tar.gz">zeppelin-0.3.2.tar.gz</a> ([release note](https://zeppelin-project.atlassian.net/secure/ReleaseNote.jspa?projectId=10001&version=10300))
-
-##### Zeppelin-0.3.1 (2014.03.06)
-
-Download <a onclick="ga('send', 'event', 'download', 'zeppelin', '0.3.1');" href="https://s3-ap-northeast-1.amazonaws.com/zeppel.in/zeppelin-0.3.1.tar.gz">zeppelin-0.3.1.tar.gz</a> ([release note](https://zeppelin-project.atlassian.net/secure/ReleaseNote.jspa?projectId=10001&version=10201))
-
-##### Zeppelin-0.3.0 (2014.02.07)
-
-Download <a onclick="ga('send', 'event', 'download', 'zeppelin', '0.3.0');" href="https://s3-ap-northeast-1.amazonaws.com/zeppel.in/zeppelin-0.3.0.tar.gz">zeppelin-0.3.0.tar.gz</a>, ([release note](https://zeppelin-project.atlassian.net/secure/ReleaseNote.jspa?projectId=10001&version=10200))
-
-##### Zeppelin-0.2.0 (2014.01.22)
-
-Download Download <a onclick="ga('send', 'event', 'download', 'zeppelin', '0.2.0');" href="https://s3-ap-northeast-1.amazonaws.com/zeppel.in/zeppelin-0.2.0.tar.gz">zeppelin-0.2.0.tar.gz</a>, ([release note](https://zeppelin-project.atlassian.net/secure/ReleaseNote.jspa?projectId=10001&version=10001))
-
--->
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/index.md
----------------------------------------------------------------------
diff --git a/docs/index.md b/docs/index.md
index 57ad2fb..4343c64 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,7 +1,8 @@
 ---
 layout: page
-title: Zeppelin
+title: Overview
 tagline: Less Development, More analysis!
+group: nav-right
 ---
 <!--
 Licensed under the Apache License, Version 2.0 (the "License");
@@ -17,7 +18,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 -->
 {% include JB/setup %}
-
+<br />
 <div class="row">
  <div class="col-md-5">
 <h2>Multi-purpose Notebook</h2>
@@ -45,7 +46,7 @@ Currently Zeppelin supports many interpreters such as Scala(with Apache Spark),
 
 <img class="img-responsive" src="assets/themes/zeppelin/img/screenshots/multiple_language_backend.png" />
 
-Adding new language-backend is really simple. Learn [how to write a zeppelin interpreter](./docs/development/writingzeppelininterpreter.html).
+Adding new language-backend is really simple. Learn [how to write a zeppelin interpreter](./development/writingzeppelininterpreter.html).
 
 
 <br />
@@ -58,7 +59,7 @@ Zeppelin provides built-in Apache Spark integration. You don't need to build a s
 Zeppelin's Spark integration provides
 
 - Automatic SparkContext and SQLContext injection
-- Runtime jar dependency loading from local filesystem or maven repository. Learn more about [dependency loader](./docs/interpreter/spark.html#dependencyloading).
+- Runtime jar dependency loading from local filesystem or maven repository. Learn more about [dependency loader](./interpreter/spark.html#dependencyloading).
 - Canceling job and displaying its progress
 
 <br />
@@ -84,7 +85,7 @@ With simple drag and drop Zeppelin aggeregates the values and display them in pi
     <img class="img-responsive" src="./assets/themes/zeppelin/img/screenshots/pivot.png" />
   </div>
 </div>
-Learn more about Zeppelin's Display system. ( [text](./docs/displaysystem/display.html), [html](./docs/displaysystem/display.html#html), [table](./docs/displaysystem/table.html), [angular](./docs/displaysystem/angular.html) )
+Learn more about Zeppelin's Display system. ( [text](./displaysystem/display.html), [html](./displaysystem/display.html#html), [table](./displaysystem/table.html), [angular](./displaysystem/angular.html) )
 
 
 <br />
@@ -94,7 +95,7 @@ Zeppelin can dynamically create some input forms into your notebook.
 
 <img class="img-responsive" src="./assets/themes/zeppelin/img/screenshots/form_input.png" />
 
-Learn more about [Dynamic Forms](./docs/manual/dynamicform.html).
+Learn more about [Dynamic Forms](./manual/dynamicform.html).
 
 
 <br />
@@ -117,7 +118,7 @@ This way, you can easily embed it as an iframe inside of your website.</p>
 <br />
 ### 100% Opensource
 
-Apache Zeppelin (incubating) is Apache2 Licensed software. Please check out the [source repository](https://github.com/apache/incubator-zeppelin) and [How to contribute](./docs/development/howtocontribute.html)
+Apache Zeppelin (incubating) is Apache2 Licensed software. Please check out the [source repository](https://github.com/apache/incubator-zeppelin) and [How to contribute](./development/howtocontribute.html)
 
 Zeppelin has a very active development community.
 Join the [Mailing list](./community.html) and report issues on our [Issue tracker](https://issues.apache.org/jira/browse/ZEPPELIN).

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/install/install.md
----------------------------------------------------------------------
diff --git a/docs/install/install.md b/docs/install/install.md
new file mode 100644
index 0000000..a4b3336
--- /dev/null
+++ b/docs/install/install.md
@@ -0,0 +1,132 @@
+---
+layout: page
+title: "Install Zeppelin"
+description: ""
+group: install
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+
+
+## Build
+
+#### Prerequisites
+
+ * Java 1.7
+ * Non-root account
+ * Apache Maven
+
+The build is tested on OS X and CentOS 6.
+
+Check out the source code from [https://github.com/apache/incubator-zeppelin](https://github.com/apache/incubator-zeppelin)
+
+#### Local mode
+
+```
+mvn install -DskipTests
+```
+
+#### Cluster mode
+
+```
+mvn install -DskipTests -Dspark.version=1.1.0 -Dhadoop.version=2.2.0
+```
+
+Change `spark.version` and `hadoop.version` to match your cluster's versions.
+
+#### Custom built Spark
+
+Note that if you use a custom build of Spark, you need to build Zeppelin with the custom-built Spark artifact. To do that, deploy the Spark artifact to your local Maven repository using
+
+```
+sbt/sbt publish-local
+```
+
+and then build Zeppelin with your custom-built Spark:
+
+```
+mvn install -DskipTests -Dspark.version=1.1.0-Custom -Dhadoop.version=2.2.0
+```
+
+
+
+
+## Configure
+
+Configuration can be done through both environment variables (conf/zeppelin-env.sh) and Java properties (conf/zeppelin-site.xml). If both are defined, the environment variable takes precedence.
+
+
+<table class="table-configuration">
+  <tr>
+    <th>zeppelin-env.sh</th>
+    <th>zeppelin-site.xml</th>
+    <th>Default value</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>ZEPPELIN_PORT</td>
+    <td>zeppelin.server.port</td>
+    <td>8080</td>
+    <td>Zeppelin server port. Note that port+1 is used for web socket</td>
+  </tr>
+  <tr>
+    <td>ZEPPELIN_NOTEBOOK_DIR</td>
+    <td>zeppelin.notebook.dir</td>
+    <td>notebook</td>
+    <td>Where notebook file is saved</td>
+  </tr>
+  <tr>
+    <td>ZEPPELIN_INTERPRETERS</td>
+    <td>zeppelin.interpreters</td>
+    <td>org.apache.zeppelin.spark.SparkInterpreter,<br />org.apache.zeppelin.spark.PySparkInterpreter,<br />org.apache.zeppelin.spark.SparkSqlInterpreter,<br />org.apache.zeppelin.spark.DepInterpreter,<br />org.apache.zeppelin.markdown.Markdown,<br />org.apache.zeppelin.shell.ShellInterpreter,<br />org.apache.zeppelin.hive.HiveInterpreter</td>
+    <td>Comma-separated interpreter configurations [Class]. The first interpreter becomes the default</td>
+  </tr>
+  <tr>
+    <td>ZEPPELIN_INTERPRETER_DIR</td>
+    <td>zeppelin.interpreter.dir</td>
+    <td>interpreter</td>
+    <td>Zeppelin interpreter directory</td>
+  </tr>
+  <tr>
+    <td>MASTER</td>
+    <td></td>
+    <td>N/A</td>
+    <td>Spark master URL, e.g. spark://master_addr:7077. Leave empty if you want to use local mode</td>
+  </tr>
+  <tr>
+    <td>ZEPPELIN_JAVA_OPTS</td>
+    <td></td>
+    <td>N/A</td>
+    <td>JVM Options</td>
+  </tr>
+</table>
+
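+For example, to run Zeppelin on a different port and keep notebooks in a custom directory, you could add the following to conf/zeppelin-env.sh (a sketch; the port and path are placeholders):
+
+```
+export ZEPPELIN_PORT=8180
+export ZEPPELIN_NOTEBOOK_DIR=/data/zeppelin/notebook
+```
+
+Remember that port+1 (8181 in this sketch) is then used for the web socket.
+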
+## Start/Stop
+#### Start Zeppelin
+
+```
+bin/zeppelin-daemon.sh start
+```
+After a successful start, visit http://localhost:8080 with your web browser.
+Note that port **8081** also needs to be accessible for the WebSocket connection.
+
+#### Stop Zeppelin
+
+```
+bin/zeppelin-daemon.sh stop
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/install/yarn_install.md
----------------------------------------------------------------------
diff --git a/docs/install/yarn_install.md b/docs/install/yarn_install.md
new file mode 100644
index 0000000..2b38068
--- /dev/null
+++ b/docs/install/yarn_install.md
@@ -0,0 +1,264 @@
+---
+layout: page
+title: "Install Zeppelin to connect with existing YARN cluster"
+description: ""
+group: install
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+## Introduction
+This page describes how to pre-configure a bare-metal node, build and configure Zeppelin on it, and connect it to an existing YARN cluster running the Hortonworks flavour of Hadoop. It also describes the steps to configure Zeppelin's Spark and Hive interpreters.
+
+## Prepare Node
+
+### Zeppelin user (Optional)
+This step is optional; however, it is nice to run Zeppelin under its own user. In case you no longer want to use Zeppelin (we hope not), the user can be deleted along with all the packages that were installed for Zeppelin, the Zeppelin binary itself, and the associated directories.
+
+Create a zeppelin user and switch to it, or if the zeppelin user has already been created, log in as zeppelin.
+
+```bash
+useradd zeppelin
+su - zeppelin 
+whoami
+```
+Assuming the zeppelin user has been created, running the `whoami` command should return:
+
+```bash
+zeppelin
+```
+
+The rest of this document assumes that the zeppelin user has indeed been created and that the installation instructions below are performed as the zeppelin user.
+
+### List of Prerequisites
+
+ * CentOS 6.x
+ * Git
+ * Java 1.7 
+ * Apache Maven
+ * Hadoop client
+ * Spark
+ * Internet connection
+
+It is assumed that the node has CentOS 6.x installed, although any Linux distribution should work fine. The working directory for all prerequisite packages is /home/zeppelin/prerequisites, although any location can be used.
+
+#### Git
+Install the latest stable version of Git. This document describes the installation of version 2.4.8.
+
+```bash
+yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel
+yum install  gcc perl-ExtUtils-MakeMaker
+yum remove git
+cd /home/zeppelin/prerequisites
+wget https://github.com/git/git/archive/v2.4.8.tar.gz
+tar xzf v2.4.8.tar.gz
+cd git-2.4.8
+make prefix=/home/zeppelin/prerequisites/git all
+make prefix=/home/zeppelin/prerequisites/git install
+echo "export PATH=$PATH:/home/zeppelin/prerequisites/bin" >> /home/zeppelin/.bashrc
+source /home/zeppelin/.bashrc
+git --version
+```
+
+Assuming all the packages are successfully installed, running the version option with the `git` command should display:
+
+```bash
+git version 2.4.8
+```
+
+#### Java
+Zeppelin works well with version 1.7.x of the Java runtime. Download JDK 7 at a stable update level and follow the instructions below to install it.
+
+```bash
+cd /home/zeppelin/prerequisites/
+# Download JDK 1.7. Assume JDK 7 update 79 is downloaded.
+tar -xf jdk-7u79-linux-x64.tar.gz
+echo "export JAVA_HOME=/home/zeppelin/prerequisites/jdk1.7.0_79" >> /home/zeppelin/.bashrc
+source /home/zeppelin/.bashrc
+echo $JAVA_HOME
+```
+Assuming all the packages are successfully installed, echoing the JAVA_HOME environment variable should display:
+
+```bash
+/home/zeppelin/prerequisites/jdk1.7.0_79
+```
+
+#### Apache Maven
+Download and install a stable version of Maven.
+
+```bash
+cd /home/zeppelin/prerequisites/
+wget ftp://mirror.reverse.net/pub/apache/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz
+tar -xf apache-maven-3.3.3-bin.tar.gz 
+cd apache-maven-3.3.3
+export MAVEN_HOME=/home/zeppelin/prerequisites/apache-maven-3.3.3
+echo "export PATH=$PATH:/home/zeppelin/prerequisites/apache-maven-3.3.3/bin" >> /home/zeppelin/.bashrc
+source /home/zeppelin/.bashrc
+mvn -version
+```
+
+Assuming all the packages are successfully installed, running the version option with the `mvn` command should display:
+
+```bash
+Apache Maven 3.3.3 (7994120775791599e205a5524ec3e0dfe41d4a06; 2015-04-22T04:57:37-07:00)
+Maven home: /home/zeppelin/prerequisites/apache-maven-3.3.3
+Java version: 1.7.0_79, vendor: Oracle Corporation
+Java home: /home/zeppelin/prerequisites/jdk1.7.0_79/jre
+Default locale: en_US, platform encoding: UTF-8
+OS name: "linux", version: "2.6.32-358.el6.x86_64", arch: "amd64", family: "unix"
+```
+
+#### Hadoop client
+Zeppelin can work with multiple versions and distributions of Hadoop. A complete list [is available here.](https://github.com/apache/incubator-zeppelin#build) This document assumes Hadoop 2.7.x client libraries, including configuration files, are installed on the Zeppelin node. It also assumes /etc/hadoop/conf contains the various Hadoop configuration files. The location of the Hadoop configuration files may vary, so use the appropriate location.
+
+```bash
+hadoop version
+Hadoop 2.7.1.2.3.1.0-2574
+Subversion git@github.com:hortonworks/hadoop.git -r f66cf95e2e9367a74b0ec88b2df33458b6cff2d0
+Compiled by jenkins on 2015-07-25T22:36Z
+Compiled with protoc 2.5.0
+From source with checksum 54f9bbb4492f92975e84e390599b881d
+This command was run using /usr/hdp/2.3.1.0-2574/hadoop/lib/hadoop-common-2.7.1.2.3.1.0-2574.jar
+```
+
+#### Spark
+Zeppelin can work with multiple versions of Spark. A complete list [is available here.](https://github.com/apache/incubator-zeppelin#build) This document assumes Spark 1.3.1 is installed on the Zeppelin node at /home/zeppelin/prerequisites/spark.
+
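+A download-and-extract sketch, assuming the Spark 1.3.1 binary package built for Hadoop 2.6 from the Apache archives (adjust the URL and package to your environment):
+
+```bash
+cd /home/zeppelin/prerequisites/
+wget https://archive.apache.org/dist/spark/spark-1.3.1/spark-1.3.1-bin-hadoop2.6.tgz
+tar -xf spark-1.3.1-bin-hadoop2.6.tgz
+mv spark-1.3.1-bin-hadoop2.6 spark
+```
+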
+## Build
+
+Checkout source code from [https://github.com/apache/incubator-zeppelin](https://github.com/apache/incubator-zeppelin)
+
+```bash
+cd /home/zeppelin/
+git clone https://github.com/apache/incubator-zeppelin.git
+```
+The Zeppelin package is available at /home/zeppelin/incubator-zeppelin after the checkout completes.
+
+### Cluster mode
+
+It is assumed that Hadoop 2.7.x is installed on the YARN cluster and Spark 1.3.1 is installed on the Zeppelin node, so the appropriate options are chosen to build Zeppelin. This is very important, as Zeppelin will bundle the corresponding Hadoop and Spark libraries and they must match the ones present on the YARN cluster and the Zeppelin Spark installation.
+
+Zeppelin is a Maven project and hence must be built with Apache Maven.
+
+```bash
+cd /home/zeppelin/incubator-zeppelin
+mvn clean package -Pspark-1.3 -Dspark.version=1.3.1 -Dhadoop.version=2.7.0 -Phadoop-2.6 -Pyarn -DskipTests
+```
+Building Zeppelin for the first time downloads various dependencies and hence takes a few minutes to complete.
+
+## Zeppelin Configuration
+Zeppelin's configuration needs to be modified to connect to the YARN cluster. Create a copy of the Zeppelin environment script template:
+
+```bash
+cp /home/zeppelin/incubator-zeppelin/conf/zeppelin-env.sh.template /home/zeppelin/incubator-zeppelin/conf/zeppelin-env.sh 
+```
+
+Set the following properties:
+
+```bash
+export JAVA_HOME=/home/zeppelin/prerequisites/jdk1.7.0_79
+export HADOOP_CONF_DIR=/etc/hadoop/conf
+export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.1.0-2574"
+```
+
+As /etc/hadoop/conf contains the various configurations of the YARN cluster, Zeppelin can now submit Spark/Hive jobs to the YARN cluster from its web interface. The value of hdp.version is set to 2.3.1.0-2574, which can be obtained by running the following command:
+
+```bash
+hdp-select status hadoop-client | sed 's/hadoop-client - \(.*\)/\1/'
+# It returned  2.3.1.0-2574
+```
+
+## Start/Stop
+### Start Zeppelin
+
+```
+cd /home/zeppelin/incubator-zeppelin
+bin/zeppelin-daemon.sh start
+```
+After successful start, visit http://[zeppelin-server-host-name]:8080 with your web browser.
+
+### Stop Zeppelin
+
+```
+bin/zeppelin-daemon.sh stop
+```
+
+## Interpreter
+Zeppelin provides access to various distributed processing frameworks, including Spark, Hive, Tajo, Ignite, and Lens, to name a few. This document describes how to configure the Hive and Spark interpreters.
+
+### Hive
+Zeppelin supports the Hive interpreter; copy hive-site.xml, which should be present at /etc/hive/conf, to Zeppelin's configuration folder. Once Zeppelin is built, it will have a conf folder under /home/zeppelin/incubator-zeppelin.
+
+```bash
+cp /etc/hive/conf/hive-site.xml  /home/zeppelin/incubator-zeppelin/conf
+```
+
+Once the Zeppelin server has started successfully, visit http://[zeppelin-server-host-name]:8080 with your web browser. Click on the Interpreter tab next to the Notebook dropdown. Look for the Hive configurations and set them appropriately. By default, hive.hiveserver2.url points to localhost, and hive.hiveserver2.password/hive.hiveserver2.user are set to hive/hive. Set them according to the Hive installation on the YARN cluster.
+Click on the Save button. Once these configurations are updated, Zeppelin will prompt you to restart the interpreter. Accept the prompt and the interpreter will reload the configurations.
+
+### Spark
+Zeppelin was built with Spark 1.3.1, and it is assumed that Spark 1.3.1 is installed at /home/zeppelin/prerequisites/spark. Look for the Spark configurations and click the edit button to add the following properties:
+
+<table class="table-configuration">
+  <tr>
+    <th>Property Name</th>
+    <th>Property Value</th>
+    <th>Remarks</th>
+  </tr>
+  <tr>
+    <td>master</td>
+    <td>yarn-client</td>
+    <td>In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.</td>
+  </tr>
+  <tr>
+    <td>spark.home</td>
+    <td>/home/zeppelin/prerequisites/spark</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>spark.driver.extraJavaOptions</td>
+    <td>-Dhdp.version=2.3.1.0-2574</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>spark.yarn.am.extraJavaOptions</td>
+    <td>-Dhdp.version=2.3.1.0-2574</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>spark.yarn.jar</td>
+    <td>/home/zeppelin/incubator-zeppelin/interpreter/spark/zeppelin-spark-0.6.0-incubating-SNAPSHOT.jar</td>
+    <td></td>
+  </tr>
+</table>
+
+Click on the Save button. Once these configurations are updated, Zeppelin will prompt you to restart the interpreter. Accept the prompt and the interpreter will reload the configurations.
+
+Spark & Hive notebooks can be written with Zeppelin now. The resulting Spark & Hive jobs will run on configured YARN cluster.
+
+## Debug
+Zeppelin does not emit detailed error messages on the web interface when a notebook/paragraph is run. If a paragraph fails, it only displays ERROR. The reason for the failure needs to be looked up in the log files, which are located in the logs directory under the Zeppelin installation base directory. Zeppelin creates a log file for each kind of interpreter.
+
+```bash
+[zeppelin@zeppelin-3529 logs]$ pwd
+/home/zeppelin/incubator-zeppelin/logs
+[zeppelin@zeppelin-3529 logs]$ ls -l
+total 844
+-rw-rw-r-- 1 zeppelin zeppelin  14648 Aug  3 14:45 zeppelin-interpreter-hive-zeppelin-zeppelin-3529.log
+-rw-rw-r-- 1 zeppelin zeppelin 625050 Aug  3 16:05 zeppelin-interpreter-spark-zeppelin-zeppelin-3529.log
+-rw-rw-r-- 1 zeppelin zeppelin 200394 Aug  3 21:15 zeppelin-zeppelin-zeppelin-3529.log
+-rw-rw-r-- 1 zeppelin zeppelin  16162 Aug  3 14:03 zeppelin-zeppelin-zeppelin-3529.out
+[zeppelin@zeppelin-3529 logs]$ 
+```
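+
+For example, to follow the Spark interpreter log while re-running a failing paragraph (a sketch; the host-name suffix in the file name will differ on your node):
+
+```bash
+tail -f /home/zeppelin/incubator-zeppelin/logs/zeppelin-interpreter-spark-*.log
+```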

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/interpreter/cassandra.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/cassandra.md b/docs/interpreter/cassandra.md
new file mode 100644
index 0000000..b53295c
--- /dev/null
+++ b/docs/interpreter/cassandra.md
@@ -0,0 +1,807 @@
+---
+layout: page
+title: "Cassandra Interpreter"
+description: "Cassandra Interpreter"
+group: manual
+---
+{% include JB/setup %}
+
+<hr/>
+## 1. Cassandra CQL Interpreter for Apache Zeppelin
+
+<br/>
+<table class="table-configuration">
+  <tr>
+    <th>Name</th>
+    <th>Class</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>%cassandra</td>
+    <td>CassandraInterpreter</td>
+    <td>Provides interpreter for Apache Cassandra CQL query language</td>
+  </tr>
+</table>
+
+<hr/>
+
+## 2. Enabling Cassandra Interpreter
+
+ In a notebook, to enable the **Cassandra** interpreter, click on the **Gear** icon and select **Cassandra**
+ 
+ <center>
+ ![Interpreter Binding](/assets/themes/zeppelin/img/docs-img/cassandra-InterpreterBinding.png)
+ 
+ ![Interpreter Selection](/assets/themes/zeppelin/img/docs-img/cassandra-InterpreterSelection.png)
+ </center>
+
+<hr/>
+ 
+## 3. Using the Cassandra Interpreter
+
+ In a paragraph, use **_%cassandra_** to select the **Cassandra** interpreter and then input all commands.
+ 
+ To access the interactive help, type **HELP;**
+ 
+ <center>
+  ![Interactive Help](/assets/themes/zeppelin/img/docs-img/cassandra-InteractiveHelp.png)
+ </center>
+
+<hr/>
+
+## 4. Interpreter Commands
+
+ The **Cassandra** interpreter accepts the following commands
+ 
+<center>
+  <table class="table-configuration">
+    <tr>
+      <th>Command Type</th>
+      <th>Command Name</th>
+      <th>Description</th>
+    </tr>
+    <tr>
+      <td nowrap>Help command</td>
+      <td>HELP</td>
+      <td>Display the interactive help menu</td>
+    </tr>
+    <tr>
+      <td nowrap>Schema commands</td>
+      <td>DESCRIBE KEYSPACE, DESCRIBE CLUSTER, DESCRIBE TABLES ...</td>
+      <td>Custom commands to describe the Cassandra schema</td>
+    </tr>
+    <tr>
+      <td nowrap>Option commands</td>
+      <td>@consistency, @retryPolicy, @fetchSize ...</td>
+      <td>Inject runtime options to all statements in the paragraph</td>
+    </tr>
+    <tr>
+      <td nowrap>Prepared statement commands</td>
+      <td>@prepare, @bind, @remove_prepare</td>
+      <td>Let you register a prepared command and re-use it later by injecting bound values</td>
+    </tr>
+    <tr>
+      <td nowrap>Native CQL statements</td>
+      <td>All CQL-compatible statements (SELECT, INSERT, CREATE ...)</td>
+      <td>All CQL statements are executed directly against the Cassandra server</td>
+    </tr>
+  </table>  
+</center>
+
+<hr/>
+## 5. CQL statements
+ 
+This interpreter is compatible with any CQL statement supported by Cassandra. Ex: 
+
+```sql
+
+    INSERT INTO users(login,name) VALUES('jdoe','John DOE');
+    SELECT * FROM users WHERE login='jdoe';
+```                                
+
+Each statement should be separated by a semi-colon ( **;** ), except for the special commands below:
+
+1. @prepare
+2. @bind
+3. @remove_prepare
+4. @consistency
+5. @serialConsistency
+6. @timestamp
+7. @retryPolicy
+8. @fetchSize
+ 
+Multi-line statements as well as multiple statements on the same line are also supported as long as they are 
+separated by a semi-colon. Ex: 
+
+```sql
+
+    USE spark_demo;
+
+    SELECT * FROM albums_by_country LIMIT 1; SELECT * FROM countries LIMIT 1;
+
+    SELECT *
+    FROM artists
+    WHERE login='jlennon';
+```
+
+Batch statements are supported and can span multiple lines, as well as DDL(CREATE/ALTER/DROP) statements: 
+
+```sql
+
+    BEGIN BATCH
+        INSERT INTO users(login,name) VALUES('jdoe','John DOE');
+        INSERT INTO users_preferences(login,account_type) VALUES('jdoe','BASIC');
+    APPLY BATCH;
+
+    CREATE TABLE IF NOT EXISTS test(
+        key int PRIMARY KEY,
+        value text
+    );
+```
+
+CQL statements are <strong>case-insensitive</strong> (except for column names and values). 
+This means that the following statements are equivalent and valid: 
+
+```sql
+
+    INSERT INTO users(login,name) VALUES('jdoe','John DOE');
+    Insert into users(login,name) vAlues('hsue','Helen SUE');
+```
+
+The complete list of all CQL statements and versions can be found below:
+<center>                                 
+ <table class="table-configuration">
+   <tr>
+     <th>Cassandra Version</th>
+     <th>Documentation Link</th>
+   </tr>
+   <tr>
+     <td><strong>2.2</strong></td>
+     <td>
+        <a target="_blank" 
+          href="http://docs.datastax.com/en/cql/3.3/cql/cqlIntro.html">
+          http://docs.datastax.com/en/cql/3.3/cql/cqlIntro.html
+        </a>
+     </td>
+   </tr>   
+   <tr>
+     <td><strong>2.1 & 2.0</strong></td>
+     <td>
+        <a target="_blank" 
+          href="http://docs.datastax.com/en/cql/3.1/cql/cql_intro_c.html">
+          http://docs.datastax.com/en/cql/3.1/cql/cql_intro_c.html
+        </a>
+     </td>
+   </tr>   
+   <tr>
+     <td><strong>1.2</strong></td>
+     <td>
+        <a target="_blank" 
+          href="http://docs.datastax.com/en/cql/3.0/cql/aboutCQL.html">
+          http://docs.datastax.com/en/cql/3.0/cql/aboutCQL.html
+        </a>
+     </td>
+   </tr>   
+ </table>
+</center>
+
+<hr/>
+
+## 6. Comments in statements
+
+It is possible to add comments between statements. Single-line comments start with the hash sign (#). Multi-line comments are enclosed between /** and **/. Ex: 
+
+```sql
+
+    #First comment
+    INSERT INTO users(login,name) VALUES('jdoe','John DOE');
+
+    /**
+     Multi line
+     comments
+     **/
+    Insert into users(login,name) vAlues('hsue','Helen SUE');
+```
+
+<hr/>
+
+## 7. Syntax Validation
+
+The interpreter is shipped with a built-in syntax validator. This validator only checks for basic syntax errors. 
+All CQL-related syntax validation is delegated directly to **Cassandra** 
+
+Most of the time, syntax errors are due to **missing semi-colons** between statements or **typos**.
+
+<hr/>
+                                    
+## 8. Schema commands
+
+To make schema discovery easier and more interactive, the following commands are supported:
+<center>                                 
+ <table class="table-configuration">
+   <tr>
+     <th>Command</th>
+     <th>Description</th>
+   </tr>
+   <tr>
+     <td><strong>DESCRIBE CLUSTER;</strong></td>
+     <td>Show the current cluster name and its partitioner</td>
+   </tr>   
+   <tr>
+     <td><strong>DESCRIBE KEYSPACES;</strong></td>
+     <td>List all existing keyspaces in the cluster and their configuration (replication factor, durable write ...)</td>
+   </tr>   
+   <tr>
+     <td><strong>DESCRIBE TABLES;</strong></td>
+     <td>List all existing keyspaces in the cluster and, for each one, the names of all its tables</td>
+   </tr>   
+   <tr>
+     <td><strong>DESCRIBE TYPES;</strong></td>
+     <td>List all existing user defined types in the <strong>current (logged) keyspace</strong></td>
+   </tr>   
+   <tr>
+     <td nowrap><strong>DESCRIBE FUNCTIONS &lt;keyspace_name&gt;;</strong></td>
+     <td>List all existing user defined functions in the given keyspace</td>
+   </tr>   
+   <tr>
+     <td nowrap><strong>DESCRIBE AGGREGATES &lt;keyspace_name&gt;;</strong></td>
+     <td>List all existing user defined aggregates in the given keyspace</td>
+   </tr>   
+   <tr>
+     <td nowrap><strong>DESCRIBE KEYSPACE &lt;keyspace_name&gt;;</strong></td>
+     <td>Describe the given keyspace configuration and all its table details (name, columns, ...)</td>
+   </tr>   
+   <tr>
+     <td nowrap><strong>DESCRIBE TABLE (&lt;keyspace_name&gt;).&lt;table_name&gt;;</strong></td>
+     <td>
+        Describe the given table. If the keyspace is not provided, the current logged in keyspace is used. 
+        If there is no logged in keyspace, the default system keyspace is used. 
+        If no table is found, an error message is raised
+     </td>
+   </tr>   
+   <tr>
+     <td nowrap><strong>DESCRIBE TYPE (&lt;keyspace_name&gt;).&lt;type_name&gt;;</strong></td>
+     <td>
+        Describe the given type(UDT). If the keyspace is not provided, the current logged in keyspace is used. 
+        If there is no logged in keyspace, the default system keyspace is used. 
+        If no type is found, an error message is raised
+     </td>
+   </tr>   
+   <tr>
+     <td nowrap><strong>DESCRIBE FUNCTION (&lt;keyspace_name&gt;).&lt;function_name&gt;;</strong></td>
+     <td>Describe the given user defined function. The keyspace is optional</td>
+   </tr>   
+   <tr>
+     <td nowrap><strong>DESCRIBE AGGREGATE (&lt;keyspace_name&gt;).&lt;aggregate_name&gt;;</strong></td>
+     <td>Describe the given user defined aggregate. The keyspace is optional</td>
+   </tr>   
+ </table>
+</center>              
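+
+For example (a sketch; `spark_demo` is the sample keyspace used elsewhere on this page):
+
+```sql
+
+    DESCRIBE KEYSPACE spark_demo;
+
+    DESCRIBE TABLE spark_demo.albums;
+```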
+                      
+The schema objects (cluster, keyspace, table, type, function and aggregate) are displayed in a tabular format. 
+A drop-down menu on the top left corner lets you expand object details, and the icon legend is shown in the top right menu.
+
+<br/>
+<center>
+  ![Describe Schema](/assets/themes/zeppelin/img/docs-img/cassandra-DescribeSchema.png)
+</center>
+
+<hr/>
+
+## 9. Runtime Parameters
+
+Sometimes you want to be able to pass runtime query parameters to your statements. 
+Those parameters are not part of the CQL specs and are specific to the interpreter. 
+Below is the list of all parameters: 
+
+<br/>
+<center>                                 
+ <table class="table-configuration">
+   <tr>
+     <th>Parameter</th>
+     <th>Syntax</th>
+     <th>Description</th>
+   </tr>
+   <tr>
+     <td nowrap>Consistency Level</td>
+     <td><strong>@consistency=<em>value</em></strong></td>
+     <td>Apply the given consistency level to all queries in the paragraph</td>
+   </tr>
+   <tr>
+     <td nowrap>Serial Consistency Level</td>
+     <td><strong>@serialConsistency=<em>value</em></strong></td>
+     <td>Apply the given serial consistency level to all queries in the paragraph</td>
+   </tr>
+   <tr>
+     <td nowrap>Timestamp</td>
+     <td><strong>@timestamp=<em>long value</em></strong></td>
+     <td>
+        Apply the given timestamp to all queries in the paragraph.
+        Please note that timestamp value passed directly in CQL statement will override this value
+      </td>
+   </tr>
+   <tr>
+     <td nowrap>Retry Policy</td>
+     <td><strong>@retryPolicy=<em>value</em></strong></td>
+     <td>Apply the given retry policy to all queries in the paragraph</td>
+   </tr>
+   <tr>
+     <td nowrap>Fetch Size</td>
+     <td><strong>@fetchSize=<em>integer value</em></strong></td>
+     <td>Apply the given fetch size to all queries in the paragraph</td>
+   </tr>
+ </table>
+</center>
+
+ Some parameters only accept restricted values: 
+
+<br/>
+<center>                                 
+ <table class="table-configuration">
+   <tr>
+     <th>Parameter</th>
+     <th>Possible Values</th>
+   </tr>
+   <tr>
+     <td nowrap>Consistency Level</td>
+     <td><strong>ALL, ANY, ONE, TWO, THREE, QUORUM, LOCAL_ONE, LOCAL_QUORUM, EACH_QUORUM</strong></td>
+   </tr>
+   <tr>
+     <td nowrap>Serial Consistency Level</td>
+     <td><strong>SERIAL, LOCAL_SERIAL</strong></td>
+   </tr>
+   <tr>
+     <td nowrap>Timestamp</td>
+     <td>Any long value</td>
+   </tr>
+   <tr>
+     <td nowrap>Retry Policy</td>
+     <td><strong>DEFAULT, DOWNGRADING_CONSISTENCY, FALLTHROUGH, LOGGING_DEFAULT, LOGGING_DOWNGRADING, LOGGING_FALLTHROUGH</strong></td>
+   </tr>
+   <tr>
+     <td nowrap>Fetch Size</td>
+     <td>Any integer value</td>
+   </tr>
+ </table>
+</center> 
+
+>Please note that you should **not** add a semi-colon ( **;** ) at the end of each parameter statement
+
+Some examples: 
+
+```sql
+
+    CREATE TABLE IF NOT EXISTS spark_demo.ts(
+        key int PRIMARY KEY,
+        value text
+    );
+    TRUNCATE spark_demo.ts;
+
+    # Timestamp in the past
+    @timestamp=10
+
+    # Force timestamp directly in the first insert
+    INSERT INTO spark_demo.ts(key,value) VALUES(1,'first insert') USING TIMESTAMP 100;
+
+    # Select some data to make the clock turn
+    SELECT * FROM spark_demo.albums LIMIT 100;
+
+    # Now insert using the timestamp parameter set at the beginning(10)
+    INSERT INTO spark_demo.ts(key,value) VALUES(1,'second insert');
+
+    # Check for the result. You should see 'first insert'
+    SELECT value FROM spark_demo.ts WHERE key=1;
+```
+                                
+Some remarks about query parameters:
+  
+> 1. **many** query parameters can be set in the same paragraph
+> 2. if the **same** query parameter is set many times with different values, the interpreter only takes the first value into account
+> 3. each query parameter applies to **all CQL statements** in the same paragraph, unless you override the option using plain CQL text (like forcing timestamp with the USING clause)
+> 4. the order of each query parameter with regard to CQL statement does not matter
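+
+For instance, to run every statement in a paragraph at a stronger consistency level (a sketch reusing the `spark_demo` keyspace from the examples above):
+
+```sql
+
+    @consistency=QUORUM
+
+    SELECT * FROM spark_demo.albums LIMIT 10;
+```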
+
+<hr/>
+
+## 10. Support for Prepared Statements
+
+For performance reasons, it is better to prepare statements beforehand and reuse them later by providing bound values. 
+This interpreter provides 3 commands to handle prepared and bound statements: 
+
+1. **@prepare**
+2. **@bind**
+3. **@remove_prepare**
+
+Example: 
+
+```
+
+    @prepare[statement_name]=...
+
+    @bind[statement_name]=’text’, 1223, ’2015-07-30 12:00:01’, null, true, [‘list_item1’, ’list_item2’]
+
+    @bind[statement_name_with_no_bound_value]
+
+    @remove_prepare[statement_name]
+```
+
+<br/>
+#### a. @prepare
+<br/>
+You can use the syntax _"@prepare[statement_name]=SELECT ..."_ to create a prepared statement. 
+The _statement_name_ is **mandatory** because the interpreter prepares the given statement with the Java driver and 
+saves the generated prepared statement in an **internal hash map**, using the provided _statement_name_ as search key.
+  
+> Please note that this internal prepared statement map is shared with **all notebooks** and **all paragraphs** because 
+there is only one instance of the interpreter for Cassandra
+  
+> If the interpreter encounters **many** @prepare statements for the **same _statement_name_ (key)**, only the **first** statement will be taken into account.
+  
+Example: 
+
+```
+
+    @prepare[select]=SELECT * FROM spark_demo.albums LIMIT ?
+
+    @prepare[select]=SELECT * FROM spark_demo.artists LIMIT ?
+```                                
+
+For the above example, the prepared statement is _SELECT * FROM spark_demo.albums LIMIT ?_. 
+_SELECT * FROM spark_demo.artists LIMIT ?_ is ignored because an entry already exists in the prepared statements map with the key select. 
+
+In the context of **Zeppelin**, a notebook can be scheduled to be executed at regular interval, 
+thus it is necessary to **avoid re-preparing the same statement many times (considered an anti-pattern)**.
+<br/>
+<br/>
+#### b. @bind
+<br/>
+Once the statement is prepared (possibly in a separate notebook/paragraph), you can bind values to it: 
+
+```
+    @bind[select_first]=10
+```                                
+
+Bound values are not mandatory for the **@bind** statement. However, if you provide bound values, they need to comply with the following syntax:
+
+* String values should be enclosed in single quotes ( ‘ )
+* Date values should be enclosed in single quotes ( ‘ ) and respect the formats:
+  1. yyyy-MM-dd HH:MM:ss
+  2. yyyy-MM-dd HH:MM:ss.SSS
+* **null** is parsed as-is
+* **boolean** (true|false) are parsed as-is
+* collection values must follow the **[standard CQL syntax]**:
+  * list: [‘list_item1’, ’list_item2’, ...]
+  * set: {‘set_item1’, ‘set_item2’, …}
+  * map: {‘key1’: ‘val1’, ‘key2’: ‘val2’, …}
+* **tuple** values should be enclosed in parentheses (see **[Tuple CQL syntax]**): (‘text’, 123, true)
+* **udt** values should be enclosed in curly brackets (see **[UDT CQL syntax]**): {street_name: ‘Beverly Hills’, number: 104, zip_code: 90020, state: ‘California’, …}
+
+> It is possible to use the @bind statement inside a batch:
+> 
+> ```sql
+>  
+>     BEGIN BATCH
+>         @bind[insert_user]='jdoe','John DOE'
+>         UPDATE users SET age = 27 WHERE login='hsue';
+>     APPLY BATCH;
+> ```
+
+<br/>
+#### c. @remove_prepare
+<br/>
+To prevent a prepared statement from staying forever in the prepared statement map, you can use the 
+**@remove_prepare[statement_name]** syntax to remove it. 
+Removing a non-existing prepared statement yields no error.
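+
+Putting the three commands together, a typical life-cycle looks like this (a sketch reusing the `users` table from the earlier examples):
+
+```
+
+    @prepare[insert_user]=INSERT INTO users(login,name) VALUES(?,?)
+
+    @bind[insert_user]='jdoe','John DOE'
+
+    @remove_prepare[insert_user]
+```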
+
+<hr/>
+
+## 11. Using Dynamic Forms
+
+Instead of hard-coding your CQL queries, it is possible to use the mustache syntax ( **\{\{ \}\}** ) to inject simple value or multiple choices forms. 
+
+The syntax for a simple parameter is: **\{\{input_Label=default value\}\}**. The default value is mandatory because the first time the paragraph is executed, 
+we launch the CQL query before rendering the form, so at least one value should be provided. 
+
+The syntax for a multiple choices parameter is: **\{\{input_Label=value1 | value2 | … | valueN \}\}**. By default the first choice is used for the CQL query 
+the first time the paragraph is executed. 
+
+Example: 
+
+{% raw %}
+    #Secondary index on performer style
+    SELECT name, country, performer
+    FROM spark_demo.performers
+    WHERE name='{{performer=Sheryl Crow|Doof|Fanfarlo|Los Paranoia}}'
+    AND styles CONTAINS '{{style=Rock}}';
+{% endraw %}
+                                
+
+In the above example, the first CQL query will be executed for _performer='Sheryl Crow' AND style='Rock'_. 
+For subsequent queries, you can change the value directly using the form. 
+
+> Please note that we enclosed the **\{\{ \}\}** block in single quotes ( **'** ) because Cassandra expects a String here. 
+> We could also have used the **\{\{style='Rock'\}\}** syntax, but then the value displayed on the form would be **_'Rock'_** and not **_Rock_**. 
+
+It is also possible to use dynamic forms for **prepared statements**: 
+
+{% raw %}
+
+    @bind[select]=='{{performer=Sheryl Crow|Doof|Fanfarlo|Los Paranoia}}', '{{style=Rock}}'
+  
+{% endraw %}
+
+<hr/>
+
+## 12. Execution parallelism and shared states
+
+It is possible to execute many paragraphs in parallel. However, at the back-end side, we’re still using synchronous queries. 
+_Asynchronous execution_ is only possible when it is possible to return a `Future` value in the `InterpreterResult`. 
+It may be an interesting proposal for the **Zeppelin** project.
+
+Another caveat is that the same `com.datastax.driver.core.Session` object is used for **all** notebooks and paragraphs.
+Consequently, if you use the **USE _keyspace name_;** statement to log into a keyspace, it will change the keyspace for
+**all current users** of the **Cassandra** interpreter because we only create 1 `com.datastax.driver.core.Session` object
+per instance of **Cassandra** interpreter.
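+
+A simple way to avoid this interference is to skip **USE** altogether and fully qualify table names (a sketch):
+
+```sql
+
+    # instead of: USE spark_demo; SELECT * FROM albums LIMIT 10;
+    SELECT * FROM spark_demo.albums LIMIT 10;
+```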
+
+The same remark applies to the **prepared statement hash map**: it is shared by **all users** of the same instance of the **Cassandra** interpreter.
+
+Until **Zeppelin** offers real multi-user separation, there is a work-around to segregate user environments and states: 
+_create different **Cassandra** interpreter instances_
+
+For this, first go to the **Interpreter** menu and click on the **Create** button
+<br/>
+<br/>
+<center>
+  ![Create Interpreter](/assets/themes/zeppelin/img/docs-img/cassandra-NewInterpreterInstance.png)
+</center>
+ 
+In the interpreter creation form, put **cass-instance2** as the **Name** and select **cassandra** 
+in the interpreter drop-down list  
+<br/>
+<br/>
+<center>
+  ![Interpreter Name](/assets/themes/zeppelin/img/docs-img/cassandra-InterpreterName.png)
+</center>                         
+
+ Click on **Save** to create the new interpreter instance. Now you should be able to see it in the interpreter list.
+  
+<br/>
+<br/>
+<center>
+  ![Interpreter In List](/assets/themes/zeppelin/img/docs-img/cassandra-NewInterpreterInList.png)
+</center>                         
+
+Go back to your notebook and click on the **Gear** icon to configure interpreter bindings.
+You should be able to see and select the **cass-instance2** interpreter instance in the available
+interpreter list instead of the standard **cassandra** instance.
+
+<br/>
+<br/>
+<center>
+  ![Interpreter Instance Selection](/assets/themes/zeppelin/img/docs-img/cassandra-InterpreterInstanceSelection.png)
+</center> 
+
+<hr/>
+
+## 13. Interpreter Configuration
+
+To configure the **Cassandra** interpreter, go to the **Interpreter** menu and scroll down to change the parameters.
+The **Cassandra** interpreter uses the official **[Cassandra Java Driver]**, and most of the parameters are used
+to configure the Java driver
+
+Below are the configuration parameters and their default values.
+
+
+ <table class="table-configuration">
+   <tr>
+     <th>Property Name</th>
+     <th>Description</th>
+     <th>Default Value</th>
+   </tr>
+   <tr>
+     <td>cassandra.cluster</td>
+     <td>Name of the Cassandra cluster to connect to</td>
+     <td>Test Cluster</td>
+   </tr>
+   <tr>
+     <td>cassandra.compression.protocol</td>
+     <td>On wire compression. Possible values are: NONE, SNAPPY, LZ4</td>
+     <td>NONE</td>
+   </tr>
+   <tr>
+     <td>cassandra.credentials.username</td>
+     <td>If security is enabled, provide the login</td>
+     <td>none</td>
+   </tr>
+   <tr>
+     <td>cassandra.credentials.password</td>
+     <td>If security is enabled, provide the password</td>
+     <td>none</td>
+   </tr>
+   <tr>
+     <td>cassandra.hosts</td>
+     <td>
+        Comma separated Cassandra hosts (DNS name or IP address).
+        <br/>
+        Ex: '192.168.0.12,node2,node3'
+      </td>
+     <td>localhost</td>
+   </tr>
+   <tr>
+     <td>cassandra.interpreter.parallelism</td>
+     <td>Number of concurrent paragraphs (query blocks) that can be executed</td>
+     <td>10</td>
+   </tr>
+   <tr>
+     <td>cassandra.keyspace</td>
+     <td>
+        Default keyspace to connect to.
+        <strong>
+          It is strongly recommended to keep the default value
+          and to prefix the table names with the actual keyspace
+          in all of your queries
+        </strong>
+     </td>
+     <td>system</td>
+   </tr>
+   <tr>
+     <td>cassandra.load.balancing.policy</td>
+     <td>
+        Load balancing policy. Default = <em>new TokenAwarePolicy(new DCAwareRoundRobinPolicy())</em>.
+        To specify your own policy, provide the <strong>fully qualified class name (FQCN)</strong> of your policy.
+        At runtime the interpreter will instantiate the policy using 
+        <strong>Class.forName(FQCN)</strong>
+     </td>
+     <td>DEFAULT</td>
+   </tr>
+   <tr>
+     <td>cassandra.max.schema.agreement.wait.second</td>
+     <td>Cassandra max schema agreement wait in second</td>
+     <td>10</td>
+   </tr>
+   <tr>
+     <td>cassandra.pooling.core.connection.per.host.local</td>
+     <td>Protocol V2 and below default = 2. Protocol V3 and above default = 1</td>
+     <td>2</td>
+   </tr>
+   <tr>
+     <td>cassandra.pooling.core.connection.per.host.remote</td>
+     <td>Protocol V2 and below default = 1. Protocol V3 and above default = 1</td>
+     <td>1</td>
+   </tr>
+   <tr>
+     <td>cassandra.pooling.heartbeat.interval.seconds</td>
+     <td>Cassandra pool heartbeat interval in secs</td>
+     <td>30</td>
+   </tr>
+   <tr>
+     <td>cassandra.pooling.idle.timeout.seconds</td>
+     <td>Cassandra idle timeout in seconds</td>
+     <td>120</td>
+   </tr>
+   <tr>
+     <td>cassandra.pooling.max.connection.per.host.local</td>
+     <td>Protocol V2 and below default = 8. Protocol V3 and above default = 1</td>
+     <td>8</td>
+   </tr>
+   <tr>
+     <td>cassandra.pooling.max.connection.per.host.remote</td>
+     <td>Protocol V2 and below default = 2. Protocol V3 and above default = 1</td>
+     <td>2</td>
+   </tr>
+   <tr>
+     <td>cassandra.pooling.max.request.per.connection.local</td>
+     <td>Protocol V2 and below default = 128. Protocol V3 and above default = 1024</td>
+     <td>128</td>
+   </tr>
+   <tr>
+     <td>cassandra.pooling.max.request.per.connection.remote</td>
+     <td>Protocol V2 and below default = 128. Protocol V3 and above default = 256</td>
+     <td>128</td>
+   </tr>
+   <tr>
+     <td>cassandra.pooling.new.connection.threshold.local</td>
+     <td>Protocol V2 and below default = 100. Protocol V3 and above default = 800</td>
+     <td>100</td>
+   </tr>
+   <tr>
+     <td>cassandra.pooling.new.connection.threshold.remote</td>
+     <td>Protocol V2 and below default = 100. Protocol V3 and above default = 200</td>
+     <td>100</td>
+   </tr>
+   <tr>
+     <td>cassandra.pooling.pool.timeout.millisecs</td>
+     <td>Cassandra pool timeout in millisecs</td>
+     <td>5000</td>
+   </tr>
+   <tr>
+     <td>cassandra.protocol.version</td>
+     <td>Cassandra binary protocol version</td>
+     <td>3</td>
+   </tr>
+   <tr>
+     <td>cassandra.query.default.consistency</td>
+     <td>
+      Cassandra query default consistency level
+      <br/>
+      Available values: ONE, TWO, THREE, QUORUM, LOCAL_ONE, LOCAL_QUORUM, EACH_QUORUM, ALL
+     </td>
+     <td>ONE</td>
+   </tr>
+   <tr>
+     <td>cassandra.query.default.fetchSize</td>
+     <td>Cassandra query default fetch size</td>
+     <td>5000</td>
+   </tr>
+   <tr>
+     <td>cassandra.query.default.serial.consistency</td>
+     <td>
+      Cassandra query default serial consistency level
+      <br/>
+      Available values: SERIAL, LOCAL_SERIAL
+     </td>
+     <td>SERIAL</td>
+   </tr>
+   <tr>
+     <td>cassandra.reconnection.policy</td>
+     <td>
+        Cassandra Reconnection Policy.
+        Default = new ExponentialReconnectionPolicy(1000, 10 * 60 * 1000).
+        To specify your own policy, provide the <strong>fully qualified class name (FQCN)</strong> of your policy.
+        At runtime the interpreter will instantiate the policy using 
+        <strong>Class.forName(FQCN)</strong>
+     </td>
+     <td>DEFAULT</td>
+   </tr>
+   <tr>
+     <td>cassandra.retry.policy</td>
+     <td>
+        Cassandra Retry Policy.
+        Default = DefaultRetryPolicy.INSTANCE.
+        To specify your own policy, provide the <strong>fully qualified class name (FQCN)</strong> of your policy.
+        At runtime the interpreter will instantiate the policy using 
+        <strong>Class.forName(FQCN)</strong>
+     </td>
+     <td>DEFAULT</td>
+   </tr>
+   <tr>
+     <td>cassandra.socket.connection.timeout.millisecs</td>
+     <td>Cassandra socket default connection timeout in millisecs</td>
+     <td>500</td>
+   </tr>
+   <tr>
+     <td>cassandra.socket.read.timeout.millisecs</td>
+     <td>Cassandra socket read timeout in millisecs</td>
+     <td>12000</td>
+   </tr>
+   <tr>
+     <td>cassandra.socket.tcp.no_delay</td>
+     <td>Cassandra socket TCP no delay</td>
+     <td>true</td>
+   </tr>
+   <tr>
+     <td>cassandra.speculative.execution.policy</td>
+     <td>
+        Cassandra Speculative Execution Policy.
+        Default = NoSpeculativeExecutionPolicy.INSTANCE.
+        To specify your own policy, provide the <strong>fully qualified class name (FQCN)</strong> of your policy.
+        At runtime the interpreter will instantiate the policy using 
+        <strong>Class.forName(FQCN)</strong>
+     </td>
+     <td>DEFAULT</td>
+   </tr>
+ </table>
+
+<hr/>
+
+## 14. Bugs & Contacts
+
+ If you encounter a bug with this interpreter, please create a **[JIRA]** ticket and ping me on Twitter
+ at **[@doanduyhai]**
+
+
+[Cassandra Java Driver]: https://github.com/datastax/java-driver
+[standard CQL syntax]: http://docs.datastax.com/en/cql/3.1/cql/cql_using/use_collections_c.html
+[Tuple CQL syntax]: http://docs.datastax.com/en/cql/3.1/cql/cql_reference/tupleType.html
+[UDT CQL syntax]: http://docs.datastax.com/en/cql/3.1/cql/cql_using/cqlUseUDT.html
+[JIRA]: https://issues.apache.org/jira/browse/ZEPPELIN-382?jql=project%20%3D%20ZEPPELIN
+[@doanduyhai]: https://twitter.com/doanduyhai

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/interpreter/flink.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/flink.md b/docs/interpreter/flink.md
new file mode 100644
index 0000000..ce1f780
--- /dev/null
+++ b/docs/interpreter/flink.md
@@ -0,0 +1,68 @@
+---
+layout: page
+title: "Flink Interpreter"
+description: ""
+group: manual
+---
+{% include JB/setup %}
+
+
+## Flink interpreter for Apache Zeppelin
+[Apache Flink](https://flink.apache.org) is an open source platform for distributed stream and batch data processing.
+
+
+### How to start local Flink cluster, to test the interpreter
+Zeppelin comes with a pre-configured flink-local interpreter, which starts Flink in local mode on your machine, so you do not need to install anything.
+
+### How to configure interpreter to point to Flink cluster
+At the "Interpreters" menu, you have to create a new Flink interpreter and provide next properties:
+
+<table class="table-configuration">
+  <tr>
+    <th>property</th>
+    <th>value</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>host</td>
+    <td>local</td>
+    <td>host name of the running JobManager. 'local' runs Flink in local mode (default)</td>
+  </tr>
+  <tr>
+    <td>port</td>
+    <td>6123</td>
+    <td>port of the running JobManager</td>
+  </tr>
+  <tr>
+    <td>xxx</td>
+    <td>yyy</td>
+    <td>anything else from [Flink Configuration](https://ci.apache.org/projects/flink/flink-docs-release-0.9/setup/config.html)</td>
+  </tr>
+</table>
+<br />
+
+
+### How to test if it's working
+
+For example, the following [Zeppelin notebook](https://www.zeppelinhub.com/viewer/notebooks/aHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL05GTGFicy96ZXBwZWxpbi1ub3RlYm9va3MvbWFzdGVyL25vdGVib29rcy8yQVFFREs1UEMvbm90ZS5qc29u) is from [Till Rohrmann's presentation](http://www.slideshare.net/tillrohrmann/data-analysis-49806564) "Interactive data analysis with Apache Flink" for the Apache Flink Meetup.
+
+
+```
+%sh
+rm -f 10.txt.utf-8
+wget http://www.gutenberg.org/ebooks/10.txt.utf-8
+```
+```
+%flink
+case class WordCount(word: String, frequency: Int)
+val bible:DataSet[String] = env.readTextFile("10.txt.utf-8")
+val partialCounts: DataSet[WordCount] = bible.flatMap{
+    line =>
+        """\b\w+\b""".r.findAllIn(line).map(word => WordCount(word, 1))
+//        line.split(" ").map(word => WordCount(word, 1))
+}
+val wordCounts = partialCounts.groupBy("word").reduce{
+    (left, right) => WordCount(left.word, left.frequency + right.frequency)
+}
+val result10 = wordCounts.first(10).collect()
+```

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/interpreter/geode.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/geode.md b/docs/interpreter/geode.md
new file mode 100644
index 0000000..96d1c04
--- /dev/null
+++ b/docs/interpreter/geode.md
@@ -0,0 +1,203 @@
+---
+layout: page
+title: "Geode OQL Interpreter"
+description: ""
+group: manual
+---
+{% include JB/setup %}
+
+
+## Geode/Gemfire OQL Interpreter for Apache Zeppelin
+
+<br/>
+<table class="table-configuration">
+  <tr>
+    <th>Name</th>
+    <th>Class</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>%geode.oql</td>
+    <td>GeodeOqlInterpreter</td>
+    <td>Provides OQL environment for Apache Geode</td>
+  </tr>
+</table>
+
+<br/>
+This interpreter supports the [Geode](http://geode.incubator.apache.org/) [Object Query Language (OQL)](http://geode-docs.cfapps.io/docs/developing/querying_basics/oql_compared_to_sql.html).  With the OQL-based querying language:
+
+[<img align="right" src="http://img.youtube.com/vi/zvzzA9GXu3Q/3.jpg" alt="zeppelin-view" hspace="10" width="200"></img>](https://www.youtube.com/watch?v=zvzzA9GXu3Q)
+
+* You can query on any arbitrary object
+* You can navigate object collections
+* You can invoke methods and access the behavior of objects
+* Data mapping is supported
+* You are not required to declare types. Since you do not need type definitions, you can work across multiple languages
+* You are not constrained by a schema
+
+This [Video Tutorial](https://www.youtube.com/watch?v=zvzzA9GXu3Q) illustrates some of the features provided by the `Geode Interpreter`.
+
+### Create Interpreter 
+
+By default Zeppelin creates one `Geode/OQL` instance. You can remove it or create more instances. 
+
+Multiple Geode instances can be created, each configured to the same or to different backend Geode clusters. But at any point in time a `Notebook` can have only one Geode interpreter instance `bound`. That means you _can not_ connect to different Geode clusters in the same `Notebook`. This is a known Zeppelin limitation. 
+
+To create a new Geode instance open the `Interpreter` section and click the `+Create` button. Pick a `Name` of your choice and from the `Interpreter` drop-down select `geode`. Then follow the configuration instructions and `Save` the new instance. 
+
+> Note: The `Name` of the instance is used only to distinguish the instances while binding them to the `Notebook`. The `Name` is irrelevant inside the `Notebook`. In the `Notebook` you must use the `%geode.oql` tag. 
+
+### Bind to Notebook
+In the `Notebook` click on the `settings` icon in the top right corner. Then select/deselect the interpreters to be bound with the `Notebook`.
+
+### Configuration
+You can modify the configuration of Geode from the `Interpreter` section. The Geode interpreter exposes the following properties:
+
+ 
+ <table class="table-configuration">
+   <tr>
+     <th>Property Name</th>
+     <th>Description</th>
+     <th>Default Value</th>
+   </tr>
+   <tr>
+     <td>geode.locator.host</td>
+     <td>The Geode Locator Host</td>
+     <td>localhost</td>
+   </tr>
+   <tr>
+     <td>geode.locator.port</td>
+     <td>The Geode Locator Port</td>
+     <td>10334</td>
+   </tr>
+   <tr>
+     <td>geode.max.result</td>
+     <td>Max number of OQL results to display, to prevent browser overload</td>
+     <td>1000</td>
+   </tr>
+ </table>
+ 
+### How to use
+
+> *Tip 1: Use (CTRL + .) for OQL auto-completion.*
+
+> *Tip 2: Always start the paragraphs with the full `%geode.oql` prefix tag! The short notation `%geode` would still run the OQL queries, but syntax highlighting and auto-completion would be disabled.*
+
+#### Create / Destroy Regions
+
+The OQL specification does not support [Geode Regions](https://cwiki.apache.org/confluence/display/GEODE/Index#Index-MainConceptsandComponents) mutation operations. To `create`/`destroy` regions, use the [GFSH](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/chapter_overview.html) shell tool instead. For this to work, it is assumed that GFSH is collocated with the Zeppelin server.
+
+```bash
+%sh
+source /etc/geode/conf/geode-env.sh
+gfsh << EOF
+
+ connect --locator=ambari.localdomain[10334]
+
+ destroy region --name=/regionEmployee
+ destroy region --name=/regionCompany
+ create region --name=regionEmployee --type=REPLICATE
+ create region --name=regionCompany --type=REPLICATE
+ 
+ exit;
+EOF
+```
+
+The above snippet re-creates two regions: `regionEmployee` and `regionCompany`. Note that you have to explicitly specify the locator host and port. The values should match those used in the Geode Interpreter configuration. A comprehensive list of [GFSH Commands by Functional Area](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/gfsh_quick_reference.html) is available in the Geode documentation.
+
+#### Basic OQL  
+
+
+```sql 
+%geode.oql 
+SELECT count(*) FROM /regionEmployee
+```
+
+OQL `IN` and `SET` filters:
+
+```sql
+%geode.oql
+SELECT * FROM /regionEmployee 
+WHERE companyId IN SET(2) OR lastName IN SET('Tzolov13', 'Tzolov73')
+```
+
+OQL `JOIN` operations
+
+```sql
+%geode.oql
+SELECT e.employeeId, e.firstName, e.lastName, c.id as companyId, c.companyName, c.address
+FROM /regionEmployee e, /regionCompany c 
+WHERE e.companyId = c.id
+```
+
+By default the OQL responses contain only the region entry values. To access the keys, query the `EntrySet` instead:
+
+```sql
+%geode.oql
+SELECT e.key, e.value.companyId, e.value.email 
+FROM /regionEmployee.entrySet e
+```
+The following query returns the EntrySet value as a Blob:
+
+```sql
+%geode.oql
+SELECT e.key, e.value FROM /regionEmployee.entrySet e
+```
+
+
+> Note: You can have multiple queries in the same paragraph but only the result from the first is displayed. [[1](https://issues.apache.org/jira/browse/ZEPPELIN-178)], [[2](https://issues.apache.org/jira/browse/ZEPPELIN-212)].
+
+
+#### GFSH Commands From The Shell
+
+Use the Shell Interpreter (`%sh`) to run GFSH commands from the command line:
+
+```bash
+%sh
+source /etc/geode/conf/geode-env.sh
+gfsh -e "connect" -e "list members"
+```
+
+#### Apply Zeppelin Dynamic Forms
+
+You can leverage [Zeppelin Dynamic Forms](https://zeppelin.incubator.apache.org/docs/manual/dynamicform.html) inside your OQL queries. You can use both the `text input` and `select form` parametrization features, as shown below.
+
+```sql
+%geode.oql
+SELECT * FROM /regionEmployee e WHERE e.employeeId > ${Id}
+```
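+
+A `select form` works the same way (a sketch; the region names are the sample ones used above):
+
+```sql
+%geode.oql
+SELECT * FROM /${region=regionEmployee,regionEmployee|regionCompany} LIMIT 10
+```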
+
+#### Geode REST API
+To list the defined regions you can use the [Geode REST API](http://geode-docs.cfapps.io/docs/geode_rest/chapter_overview.html):
+
+```
+http://<geode-server-hostname>:8484/gemfire-api/v1/
+```
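+
+For example, from a shell paragraph (a sketch; host and port are assumptions matching the properties shown at the end of this section):
+
+```bash
+%sh
+curl http://localhost:8484/gemfire-api/v1/
+```
+
+The response looks like: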
+
+```json
+{
+  "regions" : [{
+    "name" : "regionEmployee",
+    "type" : "REPLICATE",
+    "key-constraint" : null,
+    "value-constraint" : null
+  }, {
+    "name" : "regionCompany",
+    "type" : "REPLICATE",
+    "key-constraint" : null,
+    "value-constraint" : null
+  }]
+}
+```
+
+> To enable the Geode REST API with JSON support, add the following properties to geode.server.properties.file and restart:
+
+```
+http-service-port=8484
+start-dev-rest-api=true
+```
+
+### Auto-completion 
+The Geode Interpreter provides basic auto-completion functionality. On `(Ctrl+.)` it lists the most relevant suggestions in a pop-up window. 
+
+


[4/4] incubator-zeppelin git commit: ZEPPELIN-412 Documentation based on Zeppelin version

Posted by mo...@apache.org.
ZEPPELIN-412 Documentation based on Zeppelin version

https://issues.apache.org/jira/browse/ZEPPELIN-412

To provide documentation based on Zeppelin version, like Spark, Flink project does, it need to separate documentations from website.

* docs will be kept in Zeppelin main source tree and being built and published under 'docs' menu on website with specific version number.
* website will be kept in gh-pages branch and provides menu for multiple version of docs.

This PR removes unnecessary pages, which is provided by website. (for example download page)

This is the screenshot after applying this PR

![image](https://cloud.githubusercontent.com/assets/1540981/11163334/53a14c7a-8b0e-11e5-80cb-961bb8a15faa.png)

![image](https://cloud.githubusercontent.com/assets/1540981/11163335/5acc9f22-8b0e-11e5-8329-273bee738cc9.png)

Author: Lee moon soo <mo...@apache.org>

Closes #430 from Leemoonsoo/ZEPPELIN-412 and squashes the following commits:

35da7f2 [Lee moon soo] Remove docs dir
5e4ce12 [Lee moon soo] Update readme
0635cbb [Lee moon soo] Remove unnecessary pages
e21cdd2 [Lee moon soo] Style font size
b5fe812 [Lee moon soo] Change title to overview
469b850 [Lee moon soo] Get remove unnecessary menu


Project: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/commit/c2cbafd1
Tree: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/tree/c2cbafd1
Diff: http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/diff/c2cbafd1

Branch: refs/heads/master
Commit: c2cbafd1d834cd3b694a6f27f034d5724c90fee8
Parents: 79a92c7
Author: Lee moon soo <mo...@apache.org>
Authored: Sat Nov 14 20:12:09 2015 +0900
Committer: Lee moon soo <mo...@apache.org>
Committed: Wed Nov 18 09:08:54 2015 +0900

----------------------------------------------------------------------
 docs/README.md                                  |   6 +-
 docs/_includes/themes/zeppelin/_navigation.html |  26 +-
 docs/assets/themes/zeppelin/css/style.css       |   3 +-
 docs/community.md                               |  33 -
 docs/development/howtocontribute.md             | 109 +++
 docs/development/howtocontributewebsite.md      |  66 ++
 docs/development/writingzeppelininterpreter.md  | 156 ++++
 docs/displaysystem/angular.md                   |  98 +++
 docs/displaysystem/display.md                   |  45 ++
 docs/displaysystem/table.md                     |  37 +
 docs/docs.md                                    |  70 ++
 docs/docs/development/howtocontribute.md        | 109 ---
 docs/docs/development/howtocontributewebsite.md |  66 --
 .../development/writingzeppelininterpreter.md   | 156 ----
 docs/docs/displaysystem/angular.md              |  98 ---
 docs/docs/displaysystem/display.md              |  45 --
 docs/docs/displaysystem/table.md                |  37 -
 docs/docs/index.md                              |  70 --
 docs/docs/install/install.md                    | 132 ---
 docs/docs/install/yarn_install.md               | 264 ------
 docs/docs/interpreter/cassandra.md              | 807 -------------------
 docs/docs/interpreter/flink.md                  |  68 --
 docs/docs/interpreter/geode.md                  | 203 -----
 docs/docs/interpreter/ignite.md                 | 116 ---
 docs/docs/interpreter/lens.md                   | 173 ----
 docs/docs/interpreter/postgresql.md             | 180 -----
 docs/docs/interpreter/spark.md                  | 221 -----
 docs/docs/manual/dynamicform.md                 |  78 --
 docs/docs/manual/interpreters.md                |  64 --
 docs/docs/manual/notebookashomepage.md          | 109 ---
 docs/docs/pleasecontribute.md                   |  28 -
 .../zeppelin-release-0.5.0-incubating.md        |  77 --
 docs/docs/rest-api/rest-interpreter.md          | 363 ---------
 docs/docs/rest-api/rest-notebook.md             | 171 ----
 docs/docs/storage/storage.md                    |  80 --
 docs/docs/tutorial/tutorial.md                  | 197 -----
 docs/download.md                                |  87 --
 docs/index.md                                   |  15 +-
 docs/install/install.md                         | 132 +++
 docs/install/yarn_install.md                    | 264 ++++++
 docs/interpreter/cassandra.md                   | 807 +++++++++++++++++++
 docs/interpreter/flink.md                       |  68 ++
 docs/interpreter/geode.md                       | 203 +++++
 docs/interpreter/ignite.md                      | 116 +++
 docs/interpreter/lens.md                        | 173 ++++
 docs/interpreter/postgresql.md                  | 180 +++++
 docs/interpreter/spark.md                       | 221 +++++
 docs/manual/dynamicform.md                      |  78 ++
 docs/manual/interpreters.md                     |  64 ++
 docs/manual/notebookashomepage.md               | 109 +++
 docs/pleasecontribute.md                        |  28 +
 docs/rest-api/rest-interpreter.md               | 363 +++++++++
 docs/rest-api/rest-notebook.md                  | 171 ++++
 docs/storage/storage.md                         |  80 ++
 docs/tutorial/tutorial.md                       | 197 +++++
 55 files changed, 3848 insertions(+), 4069 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/README.md
----------------------------------------------------------------------
diff --git a/docs/README.md b/docs/README.md
index b463599..100aacf 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,6 +1,4 @@
-## Zeppelin project website
-
-Welcome to the Zeppelin documentation!
+## Zeppelin documentation
 
 This readme will walk you through building the Zeppelin documentation, which is included here with the Zeppelin source code.
 
@@ -32,7 +30,7 @@ See https://help.github.com/articles/using-jekyll-with-pages#installing-jekyll
     ```
     svn co https://svn.apache.org/repos/asf/incubator/zeppelin asf-zepplelin
     ```
- 3. copy zeppelin/_site to asf-zepplelin/site
+ 3. copy zeppelin/_site to asf-zepplelin/site/docs/[VERSION]
  4. ```svn commit```
 
 ## Adding a new page

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/_includes/themes/zeppelin/_navigation.html
----------------------------------------------------------------------
diff --git a/docs/_includes/themes/zeppelin/_navigation.html b/docs/_includes/themes/zeppelin/_navigation.html
index ca53595..6ac6930 100644
--- a/docs/_includes/themes/zeppelin/_navigation.html
+++ b/docs/_includes/themes/zeppelin/_navigation.html
@@ -9,7 +9,7 @@
           </button>
           <a class="navbar-brand" href="/">
             <img src="/assets/themes/zeppelin/img/zeppelin_logo.png" width="50" alt="I'm zeppelin">
-            Apache Zeppelin <small>(incubating)</small>
+            Zeppelin <small>(0.6.0-incubating-SNAPSHOT)</small>
           </a>
         </div>
         <nav class="navbar-collapse collapse" role="navigation">
@@ -22,31 +22,7 @@
             {% assign pages_list = site.pages %}
             {% assign group = 'nav-right' %}
             {% include JB/pages_list %}
-            <li><a href="https://github.com/apache/incubator-zeppelin">GitHub</a></li>
-            <li id="apache">
-              <a href="#" data-toggle="dropdown" class="dropdown-toggle">Apache<b class="caret"></b></a>
-               <ul class="dropdown-menu">
-                <li><a href="http://www.apache.org/foundation/how-it-works.html">Apache Software Foundation</a></li>
-                <li><a href="http://www.apache.org/licenses/">Apache License</a></li>
-                <li><a href="http://www.apache.org/foundation/sponsorship.html">Sponsorship</a></li>
-                <li><a href="http://www.apache.org/foundation/thanks.html">Thanks</a></li>
-            </ul>
-            </li>
           </ul>
         </nav><!--/.navbar-collapse -->
       </div>
     </div>
-
-{% if page.title == "Zeppelin" %}
-<div class="jumbotron">
-  <div class="container">
-    <h1>Apache Zeppelin <small>(incubating)</small></h1>
-    <p>A web-based notebook that enables interactive data analytics. <br/>
-      You can make beautiful data-driven, interactive and collaborative documents with SQL, Scala and more.
-    </p>
-    <p><a href="http://youtu.be/_PQbVH_aO5E" target="_zeppelinVideo" class="btn btn-primary btn-lg bigFingerButton" role="button">Watch the video</a>
-
-       <a href="./download.html" class="btn btn-primary btn-lg bigFingerButton" role="button">Get Zeppelin</a></p>
-  </div>
-</div>
-{% endif %}

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/assets/themes/zeppelin/css/style.css
----------------------------------------------------------------------
diff --git a/docs/assets/themes/zeppelin/css/style.css b/docs/assets/themes/zeppelin/css/style.css
index 92f25bf..d54df95 100644
--- a/docs/assets/themes/zeppelin/css/style.css
+++ b/docs/assets/themes/zeppelin/css/style.css
@@ -305,7 +305,8 @@ body {
 }
 
 .navbar-brand small {
-    font-size: 60%;
+    font-size: 14px;
+    font-family: 'Helvetica Neue', Helvetica;
     color: #FFF; }
 
 .navbar-collapse.collapse {

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/community.md
----------------------------------------------------------------------
diff --git a/docs/community.md b/docs/community.md
deleted file mode 100644
index d9ec874..0000000
--- a/docs/community.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-layout: page
-title: "Community"
-description: ""
-group: nav-right
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-
-### Mailing list
-
-Get help using Zeppelin or contribute to the project on our mailing lists:
-
-* [users@zeppelin.incubator.apache.org](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-users/) is for usage questions, help, and announcements. [subscribe](mailto:users-subscribe@zeppelin.incubator.apache.org?subject=send this email to subscribe),     [unsubscribe](mailto:users-unsubscribe@zeppelin.incubator.apache.org?subject=send this email to unsubscribe), [archives](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-users/)
-* [dev@zeppelin.incubator.apache.org](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/) is for people who want to contribute code to Zeppelin. [subscribe](mailto:dev-subscribe@zeppelin.incubator.apache.org?subject=send this email to subscribe), [unsubscribe](mailto:dev-unsubscribe@zeppelin.incubator.apache.org?subject=send this email to unsubscribe), [archives](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/)
-* [commits@zeppelin.incubator.apache.org](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-commits/) is for commit messages and patches to Zeppelin. [subscribe](mailto:commits-subscribe@zeppelin.incubator.apache.org?subject=send this email to subscribe), [unsubscribe](mailto:commits-unsubscribe@zeppelin.incubator.apache.org?subject=send this email to unsubscribe), [archives](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-commits/)
-
-### Issue tracker
-
-  [https://issues.apache.org/jira/browse/ZEPPELIN](https://issues.apache.org/jira/browse/ZEPPELIN)

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/development/howtocontribute.md
----------------------------------------------------------------------
diff --git a/docs/development/howtocontribute.md b/docs/development/howtocontribute.md
new file mode 100644
index 0000000..0de1e78
--- /dev/null
+++ b/docs/development/howtocontribute.md
@@ -0,0 +1,109 @@
+---
+layout: page
+title: "How to contribute"
+description: "How to contribute"
+group: development
+---
+
+## IMPORTANT
+
+Apache Zeppelin (incubating) is an [Apache2 License](http://www.apache.org/licenses/LICENSE-2.0.html) Software.
+Any contribution to Zeppelin (source code, documents, images, website) means you agree to license all your contributions under the Apache2 License.
+
+
+
+### Setting up
+Here are some things you will need to do to build and test Zeppelin. 
+
+#### Software Configuration Management(SCM)
+
+Zeppelin uses Git for its SCM system, hosted at github.com: `https://github.com/apache/incubator-zeppelin`. You'll need a git client installed on your development machine. 
+
+#### Integrated Development Environment(IDE)
+
+You are free to use whatever IDE you prefer, or your favorite command line editor. 
+
+#### Build Tools
+
+To build the code, install:
+
+* Oracle Java 7
+* Apache Maven
+
+### Getting the source code
+First of all, you need the Zeppelin source code. The official location for Zeppelin is [https://github.com/apache/incubator-zeppelin](https://github.com/apache/incubator-zeppelin)
+
+#### git access
+
+Get the source code on your development machine using git.
+
+```
+git clone https://github.com/apache/incubator-zeppelin.git zeppelin
+```
+
+You may also want to develop against a specific branch. For example, for branch-0.1:
+
+```
+git clone -b branch-0.1 https://github.com/apache/incubator-zeppelin.git zeppelin
+```
+
+
+#### Fork repository
+
+If you want to not only build Zeppelin but also make changes, you need to fork the Zeppelin repository and open pull requests.
+
+
+### Build
+
+```
+mvn install
+```
+
+To skip tests
+
+```
+mvn install -DskipTests
+```
+
+To build with a specific Spark / Hadoop version
+
+```
+mvn install -Dspark.version=1.0.1 -Dhadoop.version=2.2.0
+```
+
+### Run Zeppelin server in development mode
+
+```
+cd zeppelin-server
+HADOOP_HOME=YOUR_HADOOP_HOME JAVA_HOME=YOUR_JAVA_HOME mvn exec:java -Dexec.mainClass="org.apache.zeppelin.server.ZeppelinServer" -Dexec.args=""
+```
+NOTE: make sure you first run ```mvn clean install -DskipTests``` in your zeppelin root directory, otherwise your server build will fail to find the required dependencies in the local repo
+
+or use the daemon script
+
+```
+bin/zeppelin-daemon start
+```
+
+
+The server will run at http://localhost:8080
+
+
+### Generating Thrift Code
+
+Some portions of the Zeppelin code are generated by [Thrift](http://thrift.apache.org). For most Zeppelin changes, you don't need to worry about this, but if you modify any of the Thrift IDL files (e.g. zeppelin-interpreter/src/main/thrift/*.thrift), then you also need to regenerate these files and submit their updated version as part of your patch.
+
+To regenerate the code, install thrift-0.9.0, change into the Zeppelin source directory, and run the following command:
+
+
+```
+thrift -out zeppelin-interpreter/src/main/java/ --gen java zeppelin-interpreter/src/main/thrift/RemoteInterpreterService.thrift
+```
+
+
+### JIRA
+Zeppelin manages its issues in Jira. [https://issues.apache.org/jira/browse/ZEPPELIN](https://issues.apache.org/jira/browse/ZEPPELIN)
+
+### Stay involved
+Contributors should join the Zeppelin mailing lists.
+
+* [dev@zeppelin.incubator.apache.org](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/) is for people who want to contribute code to Zeppelin. [subscribe](mailto:dev-subscribe@zeppelin.incubator.apache.org?subject=send this email to subscribe), [unsubscribe](mailto:dev-unsubscribe@zeppelin.incubator.apache.org?subject=send this email to unsubscribe), [archives](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/)

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/development/howtocontributewebsite.md
----------------------------------------------------------------------
diff --git a/docs/development/howtocontributewebsite.md b/docs/development/howtocontributewebsite.md
new file mode 100644
index 0000000..90a7367
--- /dev/null
+++ b/docs/development/howtocontributewebsite.md
@@ -0,0 +1,66 @@
+---
+layout: page
+title: "How to contribute (website)"
+description: "How to contribute (website)"
+group: development
+---
+
+## IMPORTANT
+
+Apache Zeppelin (incubating) is an [Apache2 License](http://www.apache.org/licenses/LICENSE-2.0.html) Software.
+Any contribution to Zeppelin (source code, documents, images, website) means you agree to license all your contributions under the Apache2 License.
+
+
+
+### Modifying the website
+
+
+<br />
+#### Getting the source code
+The website is hosted in the 'master' branch under the `/docs/` dir.
+
+First of all, you need the website source code. The official mirror for Zeppelin is [https://github.com/apache/incubator-zeppelin](https://github.com/apache/incubator-zeppelin).
+
+Get the source code on your development machine using git.
+
+```
+git clone https://github.com/apache/incubator-zeppelin.git
+cd docs
+```
+
+<br />
+#### Build
+
+To build, you'll need to install some prerequisites.
+
+Please check 'Build' section on [docs/README.md](https://github.com/apache/incubator-zeppelin/blob/master/docs/README.md#build)
+
+<br />
+#### Run website in development mode
+
+While you're modifying the website, you'll want to see a preview of it.
+
+Please check 'Run' section on [docs/README.md](https://github.com/apache/incubator-zeppelin/blob/master/docs/README.md#run)
+
+You'll be able to access it at localhost:4000 with your web browser.
+
+<br />
+#### Pull request
+
+When you're ready, just make a pull request.
+
+
+<br />
+### Alternative way
+
+You can directly edit the .md files in the `/docs/` dir using GitHub's web interface and make a pull request immediately.
+
+
+<br />
+### JIRA
+Zeppelin manages its issues in Jira. [https://issues.apache.org/jira/browse/ZEPPELIN](https://issues.apache.org/jira/browse/ZEPPELIN)
+
+### Stay involved
+Contributors should join the Zeppelin mailing lists.
+
+* [dev@zeppelin.incubator.apache.org](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/) is for people who want to contribute code to Zeppelin. [subscribe](mailto:dev-subscribe@zeppelin.incubator.apache.org?subject=send this email to subscribe), [unsubscribe](mailto:dev-unsubscribe@zeppelin.incubator.apache.org?subject=send this email to unsubscribe), [archives](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/)

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/development/writingzeppelininterpreter.md
----------------------------------------------------------------------
diff --git a/docs/development/writingzeppelininterpreter.md b/docs/development/writingzeppelininterpreter.md
new file mode 100644
index 0000000..4bb69b4
--- /dev/null
+++ b/docs/development/writingzeppelininterpreter.md
@@ -0,0 +1,156 @@
+---
+layout: page
+title: "Writing Zeppelin Interpreter"
+description: ""
+group: development
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+### What is Zeppelin Interpreter
+
+A Zeppelin Interpreter is a language backend. For example, to use Scala code in Zeppelin, you need a Scala interpreter.
+Every Interpreter belongs to an InterpreterGroup, which is the unit in which interpreters are started and stopped.
+Interpreters in the same InterpreterGroup can reference each other. For example, SparkSqlInterpreter can reference SparkInterpreter to get its SparkContext while they're in the same group.
+
+<img class="img-responsive" style="width:50%; border: 1px solid #ecf0f1;" height="auto" src="../../assets/themes/zeppelin/img/interpreter.png" />
+
+All Interpreters in the same interpreter group are launched in a single, separate JVM process. The Interpreter communicates with the Zeppelin engine via Thrift.
+
+### Make your own Interpreter
+
+Creating a new interpreter is quite simple. Just extend the [org.apache.zeppelin.interpreter](https://github.com/apache/incubator-zeppelin/blob/master/zeppelin-interpreter/src/main/java/org/apache/zeppelin/interpreter/Interpreter.java) abstract class and implement some methods.
+
+You can include the org.apache.zeppelin:zeppelin-interpreter:[VERSION] artifact in your build system.
+
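+For example, with Maven the dependency might look like the following sketch (substitute your Zeppelin version for [VERSION]; the provided scope is an assumption, since Zeppelin supplies the jar at runtime):
+
+```
+<dependency>
+  <groupId>org.apache.zeppelin</groupId>
+  <artifactId>zeppelin-interpreter</artifactId>
+  <version>[VERSION]</version>
+  <scope>provided</scope>
+</dependency>
+```
+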
+Your interpreter name is derived from the static register method:
+
+```
+static {
+  Interpreter.register("MyInterpreterName", MyClassName.class.getName());
+}
+```
+
+The name will appear later in the interpreter name option box during the interpreter configuration process.
+
+The name of the interpreter is what you later write to identify a paragraph which should be interpreted using this interpreter.
+
+```
+%MyInterpreterName
+some interpreter specific code...
+```
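+
+Putting these pieces together, a minimal interpreter might look like the sketch below (registered as myintp, the name used in the examples later on). This is illustrative only; the exact set of abstract methods is defined by the Interpreter class linked above, so check it for your Zeppelin version:
+
+```
+package com.me;
+
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.zeppelin.interpreter.Interpreter;
+import org.apache.zeppelin.interpreter.InterpreterContext;
+import org.apache.zeppelin.interpreter.InterpreterResult;
+
+public class MyNewInterpreter extends Interpreter {
+
+  static {
+    Interpreter.register("myintp", MyNewInterpreter.class.getName());
+  }
+
+  public MyNewInterpreter(Properties property) {
+    super(property);
+  }
+
+  @Override
+  public void open() {
+    // allocate any resources (connections, sessions, ...) here
+  }
+
+  @Override
+  public void close() {
+    // release resources allocated in open()
+  }
+
+  @Override
+  public InterpreterResult interpret(String st, InterpreterContext context) {
+    // echo the paragraph text back as a successful result
+    return new InterpreterResult(InterpreterResult.Code.SUCCESS, st);
+  }
+
+  @Override
+  public void cancel(InterpreterContext context) {
+    // stop a running interpret() call, if your backend supports it
+  }
+
+  @Override
+  public FormType getFormType() {
+    return FormType.SIMPLE;
+  }
+
+  @Override
+  public int getProgress(InterpreterContext context) {
+    return 0;
+  }
+
+  @Override
+  public List<String> completion(String buf, int cursor) {
+    return null;
+  }
+}
+```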
+### Install your interpreter binary
+
+Once you have built your interpreter, place it, along with all its dependencies, under the directory:
+
+```
+[ZEPPELIN_HOME]/interpreter/[INTERPRETER_NAME]/
+```
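+
+For example, a hypothetical layout for an interpreter registered as myintp might be:
+
+```
+[ZEPPELIN_HOME]/interpreter/myintp/my-new-interpreter.jar
+[ZEPPELIN_HOME]/interpreter/myintp/some-dependency.jar
+```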
+
+### Configure your interpreter
+
+To configure your interpreter you need to follow these steps:
+
+1. Create conf/zeppelin-site.xml by copying conf/zeppelin-site.xml.template to conf/zeppelin-site.xml
+
+2. Add your interpreter class name to the zeppelin.interpreters property in conf/zeppelin-site.xml
+
+  The property value is a comma-separated list of [INTERPRETER_CLASS_NAME] entries.
+For example,
+  
+  ```
+<property>
+  <name>zeppelin.interpreters</name>
+  <value>org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.shell.ShellInterpreter,org.apache.zeppelin.hive.HiveInterpreter,com.me.MyNewInterpreter</value>
+</property>
+```
+3. Start Zeppelin by running ```./bin/zeppelin-daemon.sh start```
+
+4. In the interpreter page, click the +Create button and configure your interpreter properties.
+Now you are done and ready to use your interpreter.
+
+Note that the interpreters shipped with Zeppelin have a [default configuration](https://github.com/apache/incubator-zeppelin/blob/master/zeppelin-zengine/src/main/java/org/apache/zeppelin/conf/ZeppelinConfiguration.java#L397) which is used when there is no zeppelin-site.xml.
+
+### Use your interpreter
+
+#### 0.5.0
+Inside a notebook, the %[INTERPRETER_NAME] directive will call your interpreter.
+Note that the first interpreter configuration in zeppelin.interpreters will be the default one.
+
+For example:
+
+```
+%myintp
+
+val a = "My interpreter"
+println(a)
+```
+
+<br />
+#### 0.6.0 and later
+Inside a notebook, the %[INTERPRETER\_GROUP].[INTERPRETER\_NAME] directive will call your interpreter.
+Note that the first interpreter configuration in zeppelin.interpreters will be the default one.
+
+You can omit either [INTERPRETER\_GROUP] or [INTERPRETER\_NAME]. Omitting [INTERPRETER\_NAME] selects the first available interpreter in the [INTERPRETER\_GROUP].
+Omitting [INTERPRETER\_GROUP] selects [INTERPRETER\_NAME] from the default interpreter group.
+
+
+For example, if you have two interpreters, myintp1 and myintp2, in group mygrp,
+
+you can call myintp1 like
+
+```
+%mygrp.myintp1
+
+codes for myintp1
+```
+
+and you can call myintp2 like
+
+```
+%mygrp.myintp2
+
+codes for myintp2
+```
+
+If you omit your interpreter name, it selects the first available interpreter in the group (myintp1):
+
+```
+%mygrp
+
+codes for myintp1
+
+```
+
+You can omit your interpreter group only when it is selected as the default group:
+
+```
+%myintp2
+
+codes for myintp2
+```
+
+
+
+
+### Examples
+
+Check out some of the interpreters shipped by default:
+
+ - [spark](https://github.com/apache/incubator-zeppelin/tree/master/spark)
+ - [markdown](https://github.com/apache/incubator-zeppelin/tree/master/markdown)
+ - [shell](https://github.com/apache/incubator-zeppelin/tree/master/shell)
+ - [hive](https://github.com/apache/incubator-zeppelin/tree/master/hive)
+

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/displaysystem/angular.md
----------------------------------------------------------------------
diff --git a/docs/displaysystem/angular.md b/docs/displaysystem/angular.md
new file mode 100644
index 0000000..32e8253
--- /dev/null
+++ b/docs/displaysystem/angular.md
@@ -0,0 +1,98 @@
+---
+layout: page
+title: "Angular Display System"
+description: ""
+group: display
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+
+### Angular (Beta)
+
+The Angular display system treats output as a view template for [AngularJS](https://angularjs.org/).
+It compiles templates and displays them inside Zeppelin.
+
+Zeppelin provides a gateway between your interpreter and your compiled AngularJS view templates.
+Therefore, you can not only update scope variables from your interpreter, but also watch scope variables in the interpreter, which runs in a JVM process.
+
+<br />
+#### Print AngularJS view
+
+To use the Angular display system, your output should start with "%angular".
+<img src="../../assets/themes/zeppelin/img/screenshots/display_angular.png" width=600px />
+
+Note that display system is backend independent.
+
+Because the variable 'name' is not defined, 'Hello \{\{name\}\}' displays 'Hello '.
+
+<br />
+#### Bind/Unbind variable
+
+Through ZeppelinContext, you can bind/unbind variables to the AngularJS view.
+
+Currently this only works in the Spark Interpreter (Scala).
+
+```
+// bind my 'object' as angular scope variable 'name' in current notebook.
+z.angularBind(String name, Object object)
+
+// bind my 'object' as angular scope variable 'name' in all notebooks related to current interpreter.
+z.angularBindGlobal(String name, Object object)
+
+// unbind angular scope variable 'name' in current notebook.
+z.angularUnbind(String name)
+
+// unbind angular scope variable 'name' in all notebooks related to current interpreter.
+z.angularUnbindGlobal(String name)
+
+```
+
+In the example, let's bind the value "world" to the variable 'name'. You can then see the AngularJS view is updated immediately.
+
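+In code, that bind is a one-liner in a Spark (Scala) paragraph, per the API above:
+
+```
+// bind the value "world" to the angular scope variable 'name' in the current notebook
+z.angularBind("name", "world")
+```
+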
+<img src="../../assets/themes/zeppelin/img/screenshots/display_angular1.png" width=600px />
+
+
+<br />
+#### Watch/Unwatch variable
+
+Through ZeppelinContext, you can watch/unwatch variables in the AngularJS view.
+
+Currently this only works in the Spark Interpreter (Scala).
+
+```
+// register for angular scope variable 'name' (notebook)
+z.angularWatch(String name, (before, after) => { ... })
+
+// unregister watcher for angular variable 'name' (notebook)
+z.angularUnwatch(String name)
+
+// register for angular scope variable 'name' (global)
+z.angularWatchGlobal(String name, (before, after) => { ... })
+
+// unregister watcher for angular variable 'name' (global)
+z.angularUnwatchGlobal(String name)
+
+
+```
+
+Let's make a button that increments the 'run' variable by 1 when it is clicked.
+z.angularBind("run", 0) initializes 'run' to zero, and then we register a watcher on 'run', as sketched below.
+
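+A sketch of the code behind this example, following the API shown above (the button's HTML is an assumption made for illustration):
+
+```
+%angular
+<div ng-bind="'run: ' + run + ', numWatched: ' + numWatched"></div>
+<button ng-click="run = run + 1">Click me</button>
+```
+
+```
+// in a Spark (Scala) paragraph
+z.angularBind("run", 0)        // initialize 'run' to zero
+z.angularBind("numWatched", 0)
+
+var numWatched = 0
+z.angularWatch("run", (before, after) => {
+  numWatched += 1
+  z.angularBind("numWatched", numWatched)
+})
+```
+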
+<img src="../../assets/themes/zeppelin/img/screenshots/display_angular2.png" width=600px />
+
+After clicking the button, you'll see both 'run' and 'numWatched' increase by 1.
+
+<img src="../../assets/themes/zeppelin/img/screenshots/display_angular3.png" width=600px />

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/displaysystem/display.md
----------------------------------------------------------------------
diff --git a/docs/displaysystem/display.md b/docs/displaysystem/display.md
new file mode 100644
index 0000000..132e356
--- /dev/null
+++ b/docs/displaysystem/display.md
@@ -0,0 +1,45 @@
+---
+layout: page
+title: "Text/Html Display System"
+description: ""
+group: display
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+
+<a name="text"> </a>
+<br />
+<br />
+### Text
+
+By default, Zeppelin prints the output of a language backend as text.
+
+<img src="../../assets/themes/zeppelin/img/screenshots/display_text.png" />
+
+You can explicitly say you're using the text display system.
+
+<img src="../../assets/themes/zeppelin/img/screenshots/display_text1.png" />
+
+Note that display system is backend independent.
+
+<a name="html"> </a>
+<br />
+<br />
+### Html
+
+With the '%html' directive, Zeppelin treats your output as HTML.
+
+<img src="../../assets/themes/zeppelin/img/screenshots/display_html.png" />
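+
+For example, from a Scala paragraph, printing output that begins with %html renders it as markup (a minimal sketch):
+
+```
+println("%html <h3>Hello <em>Zeppelin</em></h3>")
+```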

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/displaysystem/table.md
----------------------------------------------------------------------
diff --git a/docs/displaysystem/table.md b/docs/displaysystem/table.md
new file mode 100644
index 0000000..b1fe2af
--- /dev/null
+++ b/docs/displaysystem/table.md
@@ -0,0 +1,37 @@
+---
+layout: page
+title: "Table Display System"
+description: ""
+group: display
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+
+### Table
+
+If you have data with rows separated by '\n' (newline) and columns separated by '\t' (tab), with the first row as a header row, for example:
+
+<img src="../../assets/themes/zeppelin/img/screenshots/display_table.png" />
+
+You can simply use the %table display system to leverage Zeppelin's built-in visualization.
+
+<img src="../../assets/themes/zeppelin/img/screenshots/display_table1.png" />
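+
+For example, printing tab- and newline-separated output that begins with %table from a Scala paragraph produces such a table (a minimal sketch):
+
+```
+println("%table name\tsize\nsun\t100\nmoon\t10")
+```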
+
+Note that display system is backend independent.
+
+If the table contents start with %html, they are interpreted as HTML.
+
+<img src="../../assets/themes/zeppelin/img/screenshots/display_table_html.png" />

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs.md
----------------------------------------------------------------------
diff --git a/docs/docs.md b/docs/docs.md
new file mode 100644
index 0000000..9678641
--- /dev/null
+++ b/docs/docs.md
@@ -0,0 +1,70 @@
+---
+layout: page
+title: "Docs"
+description: ""
+group: nav-right
+---
+<!--
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+{% include JB/setup %}
+
+### Install
+
+* [Install](./install/install.html)
+* [YARN Install](./install/yarn_install.html)
+
+### Tutorial
+
+* [Tutorial](./tutorial/tutorial.html)
+
+### Interpreter
+
+**[Interpreters in Zeppelin](manual/interpreters.html)**
+
+* [cassandra](./interpreter/cassandra.html)
+* [flink](./interpreter/flink.html)
+* [geode](./interpreter/geode.html)
+* [hive](../pleasecontribute.html)
+* [ignite](./interpreter/ignite.html)
+* [lens](./interpreter/lens.html)
+* [md](../pleasecontribute.html)
+* [postgresql, hawq](./interpreter/postgresql.html)
+* [sh](../pleasecontribute.html)
+* [spark](./interpreter/spark.html)
+* [tajo](../pleasecontribute.html)
+
+### Storage
+* [S3 Storage](./storage/storage.html)
+
+### Display System
+
+* [text](./displaysystem/display.html)
+* [html](./displaysystem/display.html#html)
+* [table](./displaysystem/table.html)
+* [angular](./displaysystem/angular.html) (Beta)
+
+### Manual
+
+* [Dynamic Form](./manual/dynamicform.html)
+* [Notebook as Homepage](./manual/notebookashomepage.html)
+
+### REST API
+ * [Interpreter API](./rest-api/rest-interpreter.html)
+ * [Notebook API](./rest-api/rest-notebook.html)
+
+### Development
+
+* [Writing Zeppelin Interpreter](./development/writingzeppelininterpreter.html)
+* [How to contribute (code)](./development/howtocontribute.html)
+* [How to contribute (website)](./development/howtocontributewebsite.html)

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/development/howtocontribute.md
----------------------------------------------------------------------
diff --git a/docs/docs/development/howtocontribute.md b/docs/docs/development/howtocontribute.md
deleted file mode 100644
index 0de1e78..0000000
--- a/docs/docs/development/howtocontribute.md
+++ /dev/null
@@ -1,109 +0,0 @@
----
-layout: page
-title: "How to contribute"
-description: "How to contribute"
-group: development
----
-
-## IMPORTANT
-
-Apache Zeppelin (incubating) is an [Apache2 License](http://www.apache.org/licenses/LICENSE-2.0.html) Software.
-Any contribution to Zeppelin (Source code, Documents, Image, Website) means you agree license all your contributions as Apache2 License.
-
-
-
-### Setting up
-Here are some things you will need to do to build and test Zeppelin. 
-
-#### Software Configuration Management(SCM)
-
-Zeppelin uses Git for it's SCM system. Hosted by github.com. `https://github.com/apache/incubator-zeppelin` You'll need git client installed in your development machine. 
-
-#### Integrated Development Environment(IDE)
-
-You are free to use whatever IDE you prefer, or your favorite command line editor. 
-
-#### Build Tools
-
-To build the code, install
-Oracle Java 7
-Apache Maven
-
-### Getting the source code
-First of all, you need the Zeppelin source code. The official location for Zeppelin is [https://github.com/apache/incubator-zeppelin](https://github.com/apache/incubator-zeppelin)
-
-#### git access
-
-Get the source code on your development machine using git.
-
-```
-git clone https://github.com/apache/incubator-zeppelin.git zeppelin
-```
-
-You may also want to develop against a specific release. For example, for branch-0.1
-
-```
-git clone -b branch-0.1 https://github.com/apache/incubator-zeppelin.git zeppelin
-```
-
-
-#### Fork repository
-
-If you want not only build Zeppelin but also make changes, then you need to fork Zeppelin repository and make pull request.
-
-
-###Build
-
-```
-mvn install
-```
-
-To skip test
-
-```
-mvn install -DskipTests
-```
-
-To build with specific spark / hadoop version
-
-```
-mvn install -Dspark.version=1.0.1 -Dhadoop.version=2.2.0
-```
-
-### Run Zeppelin server in development mode
-
-```
-cd zeppelin-server
-HADOOP_HOME=YOUR_HADOOP_HOME JAVA_HOME=YOUR_JAVA_HOME mvn exec:java -Dexec.mainClass="org.apache.zeppelin.server.ZeppelinServer" -Dexec.args=""
-```
-NOTE: make sure you first run ```mvn clean install -DskipTests``` on your zeppelin root directory otherwise your server build will fail to find the required dependencies in the local repro
-
-or use daemon script
-
-```
-bin/zeppelin-daemon start
-```
-
-
-Server will be run on http://localhost:8080
-
-
-### Generating Thrift Code
-
-Some portions of the Zeppelin code are generated by [Thrift](http://thrift.apache.org). For most Zeppelin changes, you don't need to worry about this, but if you modify any of the Thrift IDL files (e.g. zeppelin-interpreter/src/main/thrift/*.thrift), then you also need to regenerate these files and submit their updated version as part of your patch.
-
-To regenerate the code, install thrift-0.9.0 and change directory into Zeppelin source directory. and then run following command
-
-
-```
-thrift -out zeppelin-interpreter/src/main/java/ --gen java zeppelin-interpreter/src/main/thrift/RemoteInterpreterService.thrift
-```
-
-
-### JIRA
-Zeppelin manages its issues in Jira. [https://issues.apache.org/jira/browse/ZEPPELIN](https://issues.apache.org/jira/browse/ZEPPELIN)
-
-### Stay involved
-Contributors should join the Zeppelin mailing lists.
-
-* [dev@zeppelin.incubator.apache.org](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/) is for people who want to contribute code to Zeppelin. [subscribe](mailto:dev-subscribe@zeppelin.incubator.apache.org?subject=send this email to subscribe), [unsubscribe](mailto:dev-unsubscribe@zeppelin.incubator.apache.org?subject=send this email to unsubscribe), [archives](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/)

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/development/howtocontributewebsite.md
----------------------------------------------------------------------
diff --git a/docs/docs/development/howtocontributewebsite.md b/docs/docs/development/howtocontributewebsite.md
deleted file mode 100644
index 90a7367..0000000
--- a/docs/docs/development/howtocontributewebsite.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-layout: page
-title: "How to contribute (website)"
-description: "How to contribute (website)"
-group: development
----
-
-## IMPORTANT
-
-Apache Zeppelin (incubating) is an [Apache2 License](http://www.apache.org/licenses/LICENSE-2.0.html) Software.
-Any contribution to Zeppelin (Source code, Documents, Image, Website) means you agree license all your contributions as Apache2 License.
-
-
-
-### Modifying the website
-
-
-<br />
-#### Getting the source code
-Website is hosted in 'master' branch under `/docs/` dir.
-
-First of all, you need the website source code. The official location of mirror for Zeppelin is [https://github.com/apache/incubator-zeppelin](https://github.com/apache/incubator-zeppelin).
-
-Get the source code on your development machine using git.
-
-```
-git clone https://github.com/apache/incubator-zeppelin.git
-cd docs
-```
-
-<br />
-#### Build
-
-To build, you'll need to install some prerequisites.
-
-Please check 'Build' section on [docs/README.md](https://github.com/apache/incubator-zeppelin/blob/master/docs/README.md#build)
-
-<br />
-#### Run website in development mode
-
-While you're modifying website, you'll want to see preview of it.
-
-Please check 'Run' section on [docs/README.md](https://github.com/apache/incubator-zeppelin/blob/master/docs/README.md#run)
-
-You'll be able to access it on localhost:4000 with your webbrowser.
-
-<br />
-#### Pull request
-
-When you're ready, just make a pull-request.
-
-
-<br />
-### Alternative way
-
-You can directly edit .md files in `/docs/` dir at github's web interface and make pull-request immediatly.
-
-
-<br />
-### JIRA
-Zeppelin manages its issues in Jira. [https://issues.apache.org/jira/browse/ZEPPELIN](https://issues.apache.org/jira/browse/ZEPPELIN)
-
-### Stay involved
-Contributors should join the Zeppelin mailing lists.
-
-* [dev@zeppelin.incubator.apache.org](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/) is for people who want to contribute code to Zeppelin. [subscribe](mailto:dev-subscribe@zeppelin.incubator.apache.org?subject=send this email to subscribe), [unsubscribe](mailto:dev-unsubscribe@zeppelin.incubator.apache.org?subject=send this email to unsubscribe), [archives](http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/)

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/development/writingzeppelininterpreter.md
----------------------------------------------------------------------
diff --git a/docs/docs/development/writingzeppelininterpreter.md b/docs/docs/development/writingzeppelininterpreter.md
deleted file mode 100644
index 4bb69b4..0000000
--- a/docs/docs/development/writingzeppelininterpreter.md
+++ /dev/null
@@ -1,156 +0,0 @@
----
-layout: page
-title: "Writing Zeppelin Interpreter"
-description: ""
-group: development
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-### What is Zeppelin Interpreter
-
-Zeppelin Interpreter is a language backend. For example to use scala code in Zeppelin, you need scala interpreter.
-Every Interpreter belongs to an InterpreterGroup. InterpreterGroup is a unit of start/stop interpreter.
-Interpreters in the same InterpreterGroup can reference each other. For example, SparkSqlInterpreter can reference SparkInterpreter to get SparkContext from it while they're in the same group. 
-
-<img class="img-responsive" style="width:50%; border: 1px solid #ecf0f1;" height="auto" src="../../assets/themes/zeppelin/img/interpreter.png" />
-
-All Interpreters in the same interpreter group are launched in a single, separate JVM process. The Interpreter communicates with Zeppelin engine via thrift.
-
-### Make your own Interpreter
-
-Creating a new interpreter is quite simple. Just extend [org.apache.zeppelin.interpreter](https://github.com/apache/incubator-zeppelin/blob/master/zeppelin-interpreter/src/main/java/org/apache/zeppelin/interpreter/Interpreter.java) abstract class and implement some methods.
-
-You can include org.apache.zeppelin:zeppelin-interpreter:[VERSION] artifact in your build system.
-
-Your interpreter name is derived from the static register method
-
-```
-static {
-    Interpreter.register("MyInterpreterName", MyClassName.class.getName());
-  }
-```
-
-The name will appear later in the interpreter name option box during the interpreter configuration process.
-
-The name of the interpreter is what you later write to identify a paragraph which should be interpreted using this interpreter.
-
-```
-%MyInterpreterName
-some interpreter spesific code...
-```
-### Install your interpreter binary
-
-Once you have build your interpreter, you can place your interpreter under directory with all the dependencies.
-
-```
-[ZEPPELIN_HOME]/interpreter/[INTERPRETER_NAME]/
-```
-
-### Configure your interpreter
-
-To configure your interpreter you need to follow these steps:
-
-1. create conf/zeppelin-site.xml by copying conf/zeppelin-site.xml.template to conf/zeppelin-site.xml 
-
-2. Add your interpreter class name to the zeppelin.interpreters property in conf/zeppelin-site.xml
-
-  Property value is comma separated [INTERPRETER_CLASS_NAME]
-for example,
-  
-  ```
-<property>
-  <name>zeppelin.interpreters</name>
-  <value>org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.shell.ShellInterpreter,org.apache.zeppelin.hive.HiveInterpreter,com.me.MyNewInterpreter</value>
-</property>
-```
-3. start zeppelin by running ```./bin/zeppelin-deamon start```
-
-4. in the interpreter page, click the +Create button and configure your interpreter properties.
-Now you are done and ready to use your interpreter.
-
-Note that the interpreters shipped with zeppelin have a [default configuration](https://github.com/apache/incubator-zeppelin/blob/master/zeppelin-zengine/src/main/java/org/apache/zeppelin/conf/ZeppelinConfiguration.java#L397) which is used when there is no zeppelin-site.xml.
-
-### Use your interpreter
-
-#### 0.5.0
-Inside of a notebook, %[INTERPRETER_NAME] directive will call your interpreter.
-Note that the first interpreter configuration in zeppelin.interpreters will be the default one.
-
-for example
-
-```
-%myintp
-
-val a = "My interpreter"
-println(a)
-```
-
-<br />
-#### 0.6.0 and later
-Inside of a notebook, %[INTERPRETER\_GROUP].[INTERPRETER\_NAME] directive will call your interpreter.
-Note that the first interpreter configuration in zeppelin.interpreters will be the default one.
-
-You can omit either [INTERPRETER\_GROUP] or [INTERPRETER\_NAME]. Omit [INTERPRETER\_NAME] selects first available interpreter in the [INTERPRETER\_GROUP].
-Omit '[INTERPRETER\_GROUP]' will selects [INTERPRETER\_NAME] from default interpreter group.
-
-
-For example, if you have two interpreter myintp1 and myintp2 in group mygrp,
-
-you can call myintp1 like
-
-```
-%mygrp.myintp1
-
-codes for myintp1
-```
-
-and you can call myintp2 like
-
-```
-%mygrp.myintp2
-
-codes for myintp2
-```
-
-If you omit your interpreter name, it'll selects first available interpreter in the group (myintp1)
-
-```
-%mygrp
-
-codes for myintp1
-
-```
-
-You can only omit your interpreter group when your interpreter group is selected as a default group.
-
-```
-%myintp2
-
-codes for myintp2
-```
-
-
-
-
-### Examples
-
-Check some interpreters shipped by default.
-
- - [spark](https://github.com/apache/incubator-zeppelin/tree/master/spark)
- - [markdown](https://github.com/apache/incubator-zeppelin/tree/master/markdown)
- - [shell](https://github.com/apache/incubator-zeppelin/tree/master/shell)
- - [hive](https://github.com/apache/incubator-zeppelin/tree/master/hive)
-

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/displaysystem/angular.md
----------------------------------------------------------------------
diff --git a/docs/docs/displaysystem/angular.md b/docs/docs/displaysystem/angular.md
deleted file mode 100644
index 32e8253..0000000
--- a/docs/docs/displaysystem/angular.md
+++ /dev/null
@@ -1,98 +0,0 @@
----
-layout: page
-title: "Angular Display System"
-description: ""
-group: display
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-
-### Angular (Beta)
-
-Angular display system treats output as an view template of [AngularJS](https://angularjs.org/).
-It compiles templates and display inside of Zeppelin.
-
-Zeppelin provides gateway between your interpreter and your compiled AngularJS view teamplates.
-Therefore, you can not only update scope variable from your interpreter but also watch your scope variable in the interpreter, which is JVM process.
-
-<br />
-#### Print AngularJS view
-
-To use angular display system, your output should starts with "%angular".
-<img src="../../assets/themes/zeppelin/img/screenshots/display_angular.png" width=600px />
-
-Note that display system is backend independent.
-
-Because of variable 'name' is not defined, 'Hello \{\{name\}\}' display 'Hello '.
-
-<br />
-#### Bind/Unbind variable
-
-Through ZeppelinContext, you can bind/unbind variable to AngularJS view.
-
-Currently it only works in Spark Interpreter (scala).
-
-```
-// bind my 'object' as angular scope variable 'name' in current notebook.
-z.angularBind(String name, Object object)
-
-// bind my 'object' as angular scope variable 'name' in all notebooks related to current interpreter.
-z.angularBindGlobal(String name, Object object)
-
-// unbind angular scope variable 'name' in current notebook.
-z.angularUnbind(String name)
-
-// unbind angular scope variable 'name' in all notebooks related to current interpreter.
-z.angularUnbindGlobal(String name)
-
-```
-
-In the example, let's bind "world" variable 'name'. Then you can see AngularJs view are updated immediately.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/display_angular1.png" width=600px />
-
-
-<br />
-#### Watch/Unwatch variable
-
-Through ZeppelinContext, you can watch/unwatch variable in AngularJs view.
-
-Currently it only works in Spark Interpreter (scala).
-
-```
-// register for angular scope variable 'name' (notebook)
-z.angularWatch(String name, (before, after) => { ... })
-
-// unregister watcher for angular variable 'name' (notebook)
-z.angularUnwatch(String name)
-
-// register for angular scope variable 'name' (global)
-z.angularWatchGlobal(String name, (before, after) => { ... })
-
-// unregister watcher for angular variable 'name' (global)
-z.angularUnwatchGlobal(String name)
-
-
-```
-
-Let's make an button, that increment 'run' variable by 1 when it is clicked.
-z.angularBind("run", 0) will initialize 'run' to zero. And then register watcher of 'run'.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/display_angular2.png" width=600px />
-
-After clicked button, you'll see both 'run' and numWatched are increased by 1
-
-<img src="../../assets/themes/zeppelin/img/screenshots/display_angular3.png" width=600px />

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/displaysystem/display.md
----------------------------------------------------------------------
diff --git a/docs/docs/displaysystem/display.md b/docs/docs/displaysystem/display.md
deleted file mode 100644
index 132e356..0000000
--- a/docs/docs/displaysystem/display.md
+++ /dev/null
@@ -1,45 +0,0 @@
----
-layout: page
-title: "Text/Html Display System"
-description: ""
-group: display
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-
-<a name="text"> </a>
-<br />
-<br />
-### Text
-
-Zeppelin prints output of language backend in text, by default.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/display_text.png" />
-
-You can explicitly say you're using text display system.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/display_text1.png" />
-
-Note that display system is backend independent.
-
-<a name="html"> </a>
-<br />
-<br />
-### Html
-
-With '%html' directive, Zeppelin treats your output as html
-
-<img src="../../assets/themes/zeppelin/img/screenshots/display_html.png" />

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/displaysystem/table.md
----------------------------------------------------------------------
diff --git a/docs/docs/displaysystem/table.md b/docs/docs/displaysystem/table.md
deleted file mode 100644
index b1fe2af..0000000
--- a/docs/docs/displaysystem/table.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-layout: page
-title: "Table Display System"
-description: ""
-group: display
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-
-### Table
-
-If you have data that row seprated by '\n' (newline) and column separated by '\t' (tab) with first row as header row, for example
-
-<img src="../../assets/themes/zeppelin/img/screenshots/display_table.png" />
-
-You can simply use %table display system to leverage Zeppelin's built in visualization.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/display_table1.png" />
-
-Note that display system is backend independent.
-
-If table contents start with %html, it is interpreted as an HTML.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/display_table_html.png" />

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/index.md
----------------------------------------------------------------------
diff --git a/docs/docs/index.md b/docs/docs/index.md
deleted file mode 100644
index 1f1292e..0000000
--- a/docs/docs/index.md
+++ /dev/null
@@ -1,70 +0,0 @@
----
-layout: page
-title: "Docs"
-description: ""
-group: nav-right
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-### Install
-
-* [Install](./install/install.html)
-* [YARN Install](./install/yarn_install.html)
-
-### Tutorial
-
-* [Tutorial](./tutorial/tutorial.html)
-
-### Interpreter
-
-**[Interpreters in zeppelin](manual/interpreters.html)**
-
-* [cassandra](./interpreter/cassandra.html)
-* [flink](./interpreter/flink.html)
-* [geode](./interpreter/geode.html)
-* [hive](../docs/pleasecontribute.html)
-* [ignite](./interpreter/ignite.html)
-* [lens](./interpreter/lens.html)
-* [md](../docs/pleasecontribute.html)
-* [postgresql, hawq](./interpreter/postgresql.html)
-* [sh](../docs/pleasecontribute.html)
-* [spark](./interpreter/spark.html)
-* [tajo](../docs/pleasecontribute.html)
-
-### Storage
-* [S3 Storage](./storage/storage.html)
-
-### Display System
-
-* [text](./displaysystem/display.html)
-* [html](./displaysystem/display.html#html)
-* [table](./displaysystem/table.html)
-* [angular](./displaysystem/angular.html) (Beta)
-
-### Manual
-
-* [Dynamic Form](./manual/dynamicform.html)
-* [Notebook as Homepage](./manual/notebookashomepage.html)
-
-### REST API
- * [Interpreter API](./rest-api/rest-interpreter.html)
- * [Notebook API](./rest-api/rest-notebook.html)
-
-### Development
-
-* [Writing Zeppelin Interpreter](./development/writingzeppelininterpreter.html)
-* [How to contribute (code)](./development/howtocontribute.html)
-* [How to contribute (website)](./development/howtocontributewebsite.html)

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/install/install.md
----------------------------------------------------------------------
diff --git a/docs/docs/install/install.md b/docs/docs/install/install.md
deleted file mode 100644
index a4b3336..0000000
--- a/docs/docs/install/install.md
+++ /dev/null
@@ -1,132 +0,0 @@
----
-layout: page
-title: "Install Zeppelin"
-description: ""
-group: install
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-
-
-## Build
-
-#### Prerequisites
-
- * Java 1.7
- * None root account
- * Apache Maven
-
-Build tested on OSX, CentOS 6.
-
-Checkout source code from [https://github.com/apache/incubator-zeppelin](https://github.com/apache/incubator-zeppelin)
-
-#### Local mode
-
-```
-mvn install -DskipTests
-```
-
-#### Cluster mode
-
-```
-mvn install -DskipTests -Dspark.version=1.1.0 -Dhadoop.version=2.2.0
-```
-
-Change spark.version and hadoop.version to your cluster's one.
-
-#### Custom built Spark
-
-Note that is you uses custom build spark, you need build Zeppelin with custome built spark artifact. To do that, deploy spark artifact to local maven repository using
-
-```
-sbt/sbt publish-local
-```
-
-and then build Zeppelin with your custom built Spark
-
-```
-mvn install -DskipTests -Dspark.version=1.1.0-Custom -Dhadoop.version=2.2.0
-```
-
-
-
-
-## Configure
-
-Configuration can be done by both environment variable(conf/zeppelin-env.sh) and java properties(conf/zeppelin-site.xml). If both defined, environment vaiable is used.
-
-
-<table class="table-configuration">
-  <tr>
-    <th>zepplin-env.sh</th>
-    <th>zepplin-site.xml</th>
-    <th>Default value</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>ZEPPELIN_PORT</td>
-    <td>zeppelin.server.port</td>
-    <td>8080</td>
-    <td>Zeppelin server port. Note that port+1 is used for web socket</td>
-  </tr>
-  <tr>
-    <td>ZEPPELIN_NOTEBOOK_DIR</td>
-    <td>zeppelin.notebook.dir</td>
-    <td>notebook</td>
-    <td>Where notebook file is saved</td>
-  </tr>
-  <tr>
-    <td>ZEPPELIN_INTERPRETERS</td>
-    <td>zeppelin.interpreters</td>
-  <description></description>
-    <td>org.apache.zeppelin.spark.SparkInterpreter,<br />org.apache.zeppelin.spark.PySparkInterpreter,<br />org.apache.zeppelin.spark.SparkSqlInterpreter,<br />org.apache.zeppelin.spark.DepInterpreter,<br />org.apache.zeppelin.markdown.Markdown,<br />org.apache.zeppelin.shell.ShellInterpreter,<br />org.apache.zeppelin.hive.HiveInterpreter</td>
-    <td>Comma separated interpreter configurations [Class]. First interpreter become a default</td>
-  </tr>
-  <tr>
-    <td>ZEPPELIN_INTERPRETER_DIR</td>
-    <td>zeppelin.interpreter.dir</td>
-    <td>interpreter</td>
-    <td>Zeppelin interpreter directory</td>
-  </tr>
-  <tr>
-    <td>MASTER</td>
-    <td></td>
-    <td>N/A</td>
-    <td>Spark master url. eg. spark://master_addr:7077. Leave empty if you want to use local mode</td>
-  </tr>
-  <tr>
-    <td>ZEPPELIN_JAVA_OPTS</td>
-    <td></td>
-    <td>N/A</td>
-    <td>JVM Options</td>
-</table>
-
-## Start/Stop
-#### Start Zeppelin
-
-```
-bin/zeppelin-daemon.sh start
-```
-After successful start, visit http://localhost:8080 with your web browser.
-Note that port **8081** also need to be accessible for websocket connection.
-
-#### Stop Zeppelin
-
-```
-bin/zeppelin-daemon.sh stop
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/install/yarn_install.md
----------------------------------------------------------------------
diff --git a/docs/docs/install/yarn_install.md b/docs/docs/install/yarn_install.md
deleted file mode 100644
index 2b38068..0000000
--- a/docs/docs/install/yarn_install.md
+++ /dev/null
@@ -1,264 +0,0 @@
----
-layout: page
-title: "Install Zeppelin to connect with existing YARN cluster"
-description: ""
-group: install
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-## Introduction
-This page describes how to pre-configure a bare metal node, build & configure Zeppelin on it, configure Zeppelin and connect it to existing YARN cluster running Hortonworks flavour of Hadoop. It also describes steps to configure Spark & Hive interpreter of Zeppelin. 
-
-## Prepare Node
-
-### Zeppelin user (Optional)
-This step is optional, however its nice to run Zeppelin under its own user. In case you do not like to use Zeppelin (hope not) the user could be deleted along with all the pacakges that were installed for Zeppelin, Zeppelin binary itself and associated directories.
-
-Create a zeppelin user and switch to zeppelin user or if zeppelin user is already created then login as zeppelin.
-
-```bash
-useradd zeppelin
-su - zeppelin 
-whoami
-```
-Assuming a zeppelin user is created then running whoami command must return 
-
-```bash
-zeppelin
-```
-
-Its assumed in the rest of the document that zeppelin user is indeed created and below installation instructions are performed as zeppelin user.
-
-### List of Prerequisites
-
- * CentOS 6.x
- * Git
- * Java 1.7 
- * Apache Maven
- * Hadoop client.
- * Spark.
- * Internet connection is required. 
-
-Its assumed that the node has CentOS 6.x installed on it. Although any version of Linux distribution should work fine. The working directory of all prerequisite pacakges is /home/zeppelin/prerequisites, although any location could be used.
-
-#### Git
-Intall latest stable version of Git. This document describes installation of version 2.4.8
-
-```bash
-yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel
-yum install  gcc perl-ExtUtils-MakeMaker
-yum remove git
-cd /home/zeppelin/prerequisites
-wget https://github.com/git/git/archive/v2.4.8.tar.gz
-tar xzf git-2.0.4.tar.gz
-cd git-2.0.4
-make prefix=/home/zeppelin/prerequisites/git all
-make prefix=/home/zeppelin/prerequisites/git install
-echo "export PATH=$PATH:/home/zeppelin/prerequisites/bin" >> /home/zeppelin/.bashrc
-source /home/zeppelin/.bashrc
-git --version
-```
-
-Assuming all the packages are successfully installed, running the version option with git command should display
-
-```bash
-git version 2.4.8
-```
-
-#### Java
-Zeppelin works well with 1.7.x version of Java runtime. Download JDK version 7 and a stable update and follow below instructions to install it.
-
-```bash
-cd /home/zeppelin/prerequisites/
-#Download JDK 1.7, Assume JDK 7 update 79 is downloaded.
-tar -xf jdk-7u79-linux-x64.tar.gz
-echo "export JAVA_HOME=/home/zeppelin/prerequisites/jdk1.7.0_79" >> /home/zeppelin/.bashrc
-source /home/zeppelin/.bashrc
-echo $JAVA_HOME
-```
-Assuming all the packages are successfully installed, echoing JAVA_HOME environment variable should display
-
-```bash
-/home/zeppelin/prerequisites/jdk1.7.0_79
-```
-
-#### Apache Maven
-Download and install a stable version of Maven.
-
-```bash
-cd /home/zeppelin/prerequisites/
-wget ftp://mirror.reverse.net/pub/apache/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz
-tar -xf apache-maven-3.3.3-bin.tar.gz 
-cd apache-maven-3.3.3
-export MAVEN_HOME=/home/zeppelin/prerequisites/apache-maven-3.3.3
-echo "export PATH=$PATH:/home/zeppelin/prerequisites/apache-maven-3.3.3/bin" >> /home/zeppelin/.bashrc
-source /home/zeppelin/.bashrc
-mvn -version
-```
-
-Assuming all the packages are successfully installed, running the version option with mvn command should display
-
-```bash
-Apache Maven 3.3.3 (7994120775791599e205a5524ec3e0dfe41d4a06; 2015-04-22T04:57:37-07:00)
-Maven home: /home/zeppelin/prerequisites/apache-maven-3.3.3
-Java version: 1.7.0_79, vendor: Oracle Corporation
-Java home: /home/zeppelin/prerequisites/jdk1.7.0_79/jre
-Default locale: en_US, platform encoding: UTF-8
-OS name: "linux", version: "2.6.32-358.el6.x86_64", arch: "amd64", family: "unix"
-```
-
-#### Hadoop client
-Zeppelin can work with multiple versions & distributions of Hadoop. A complete list [is available here.](https://github.com/apache/incubator-zeppelin#build) This document assumes Hadoop 2.7.x client libraries including configuration files are installed on Zeppelin node. It also assumes /etc/hadoop/conf contains various Hadoop configuration files. The location of Hadoop configuration files may vary, hence use appropriate location.
-
-```bash
-hadoop version
-Hadoop 2.7.1.2.3.1.0-2574
-Subversion git@github.com:hortonworks/hadoop.git -r f66cf95e2e9367a74b0ec88b2df33458b6cff2d0
-Compiled by jenkins on 2015-07-25T22:36Z
-Compiled with protoc 2.5.0
-From source with checksum 54f9bbb4492f92975e84e390599b881d
-This command was run using /usr/hdp/2.3.1.0-2574/hadoop/lib/hadoop-common-2.7.1.2.3.1.0-2574.jar
-```
-
-#### Spark
-Zeppelin can work with multiple versions Spark. A complete list [is available here.](https://github.com/apache/incubator-zeppelin#build) This document assumes Spark 1.3.1 is installed on Zeppelin node at /home/zeppelin/prerequisites/spark.
-
-## Build
-
-Checkout source code from [https://github.com/apache/incubator-zeppelin](https://github.com/apache/incubator-zeppelin)
-
-```bash
-cd /home/zeppelin/
-git clone https://github.com/apache/incubator-zeppelin.git
-```
-Zeppelin package is available at /home/zeppelin/incubator-zeppelin after the checkout completes.
-
-### Cluster mode
-
-As its assumed Hadoop 2.7.x is installed on the YARN cluster & Spark 1.3.1 is installed on Zeppelin node. Hence appropriate options are chosen to build Zeppelin. This is very important as Zeppelin will bundle corresponding Hadoop & Spark libraries and they must match the ones present on YARN cluster & Zeppelin Spark installation. 
-
-Zeppelin is a maven project and hence must be built with Apache Maven.
-
-```bash
-cd /home/zeppelin/incubator-zeppelin
-mvn clean package -Pspark-1.3 -Dspark.version=1.3.1 -Dhadoop.version=2.7.0 -Phadoop-2.6 -Pyarn -DskipTests
-```
-Building Zeppelin for first time downloads various dependencies and hence takes few minutes to complete. 
-
-## Zeppelin Configuration
-Zeppelin configurations needs to be modified to connect to YARN cluster. Create a copy of zeppelin environment XML
-
-```bash
-cp /home/zeppelin/incubator-zeppelin/conf/zeppelin-env.sh.template /home/zeppelin/incubator-zeppelin/conf/zeppelin-env.sh 
-```
-
-Set the following properties
-
-```bash
-export JAVA_HOME=/home/zeppelin/prerequisites/jdk1.7.0_79
-export HADOOP_CONF_DIR=/etc/hadoop/conf
-export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.1.0-2574"
-```
-
-As /etc/hadoop/conf contains various configurations of YARN cluster, Zeppelin can now submit Spark/Hive jobs on YARN cluster form its web interface. The value of hdp.version is set to 2.3.1.0-2574. This can be obtained by running the following command
-
-```bash
-hdp-select status hadoop-client | sed 's/hadoop-client - \(.*\)/\1/'
-# It returned  2.3.1.0-2574
-```
-
-## Start/Stop
-### Start Zeppelin
-
-```
-cd /home/zeppelin/incubator-zeppelin
-bin/zeppelin-daemon.sh start
-```
-After successful start, visit http://[zeppelin-server-host-name]:8080 with your web browser.
-
-### Stop Zeppelin
-
-```
-bin/zeppelin-daemon.sh stop
-```
-
-## Interpreter
-Zeppelin provides to various distributed processing frameworks to process data that ranges from Spark, Hive, Tajo, Ignite and Lens to name a few. This document describes to configure Hive & Spark interpreters.
-
-### Hive
-Zeppelin supports Hive interpreter and hence copy hive-site.xml that should be present at /etc/hive/conf to the configuration folder of Zeppelin. Once Zeppelin is built it will have conf folder under /home/zeppelin/incubator-zeppelin.
-
-```bash
-cp /etc/hive/conf/hive-site.xml  /home/zeppelin/incubator-zeppelin/conf
-```
-
-Once Zeppelin server has started successfully, visit http://[zeppelin-server-host-name]:8080 with your web browser. Click on Interpreter tab next to Notebook dropdown. Look for Hive configurations and set them appropriately. By default hive.hiveserver2.url will be pointing to localhost and hive.hiveserver2.password/hive.hiveserver2.user are set to hive/hive. Set them as per Hive installation on YARN cluster. 
-Click on Save button. Once these configurations are updated, Zeppelin will prompt you to restart the interpreter. Accept the prompt and the interpreter will reload the configurations.
-
-### Spark
-Zeppelin was built with Spark 1.3.1 and it was assumed that 1.3.1 version of Spark is installed at /home/zeppelin/prerequisites/spark. Look for Spark configrations and click edit button to add the following properties
-
-<table class="table-configuration">
-  <tr>
-    <th>Property Name</th>
-    <th>Property Value</th>
-    <th>Remarks</th>
-  </tr>
-  <tr>
-    <td>master</td>
-    <td>yarn-client</td>
-    <td>In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.</td>
-  </tr>
-  <tr>
-    <td>spark.home</td>
-    <td>/home/zeppelin/prerequisites/spark</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>spark.driver.extraJavaOptions</td>
-    <td>-Dhdp.version=2.3.1.0-2574</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>spark.yarn.am.extraJavaOptions</td>
-    <td>-Dhdp.version=2.3.1.0-2574</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>spark.yarn.jar</td>
-    <td>/home/zeppelin/incubator-zeppelin/interpreter/spark/zeppelin-spark-0.6.0-incubating-SNAPSHOT.jar</td>
-    <td></td>
-  </tr>
-</table>
-
-Click on Save button. Once these configurations are updated, Zeppelin will prompt you to restart the interpreter. Accept the prompt and the interpreter will reload the configurations.
-
-Spark & Hive notebooks can be written with Zeppelin now. The resulting Spark & Hive jobs will run on configured YARN cluster.
-
-## Debug
-Zeppelin does not emit any kind of error messages on web interface when notebook/paragrah is run. If a paragraph fails it only displays ERROR. The reason for failure needs to be looked into log files which is present in logs directory under zeppelin installation base directory. Zeppelin creates a log file for each kind of interpreter.
-
-```bash
-[zeppelin@zeppelin-3529 logs]$ pwd
-/home/zeppelin/incubator-zeppelin/logs
-[zeppelin@zeppelin-3529 logs]$ ls -l
-total 844
--rw-rw-r-- 1 zeppelin zeppelin  14648 Aug  3 14:45 zeppelin-interpreter-hive-zeppelin-zeppelin-3529.log
--rw-rw-r-- 1 zeppelin zeppelin 625050 Aug  3 16:05 zeppelin-interpreter-spark-zeppelin-zeppelin-3529.log
--rw-rw-r-- 1 zeppelin zeppelin 200394 Aug  3 21:15 zeppelin-zeppelin-zeppelin-3529.log
--rw-rw-r-- 1 zeppelin zeppelin  16162 Aug  3 14:03 zeppelin-zeppelin-zeppelin-3529.out
-[zeppelin@zeppelin-3529 logs]$ 
-```


[3/4] incubator-zeppelin git commit: ZEPPELIN-412 Documentation based on Zeppelin version

Posted by mo...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/interpreter/cassandra.md
----------------------------------------------------------------------
diff --git a/docs/docs/interpreter/cassandra.md b/docs/docs/interpreter/cassandra.md
deleted file mode 100644
index b53295c..0000000
--- a/docs/docs/interpreter/cassandra.md
+++ /dev/null
@@ -1,807 +0,0 @@
----
-layout: page
-title: "Cassandra Interpreter"
-description: "Cassandra Interpreter"
-group: manual
----
-{% include JB/setup %}
-
-<hr/>
-## 1. Cassandra CQL Interpreter for Apache Zeppelin
-
-<br/>
-<table class="table-configuration">
-  <tr>
-    <th>Name</th>
-    <th>Class</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>%cassandra</td>
-    <td>CassandraInterpreter</td>
-    <td>Provides interpreter for Apache Cassandra CQL query language</td>
-  </tr>
-</table>
-
-<hr/>
-
-## 2. Enabling Cassandra Interpreter
-
- In a notebook, to enable the **Cassandra** interpreter, click on the **Gear** icon and select **Cassandra**
- 
- <center>
- ![Interpreter Binding](/assets/themes/zeppelin/img/docs-img/cassandra-InterpreterBinding.png)
- 
- ![Interpreter Selection](/assets/themes/zeppelin/img/docs-img/cassandra-InterpreterSelection.png)
- </center>
-
-<hr/>
- 
-## 3. Using the Cassandra Interpreter
-
- In a paragraph, use **_%cassandra_** to select the **Cassandra** interpreter and then input all commands.
- 
- To access the interactive help, type **HELP;**
- 
- <center>
-  ![Interactive Help](/assets/themes/zeppelin/img/docs-img/cassandra-InteractiveHelp.png)
- </center>
-
-<hr/>
-
-## 4. Interpreter Commands
-
- The **Cassandra** interpreter accepts the following commands
- 
-<center>
-  <table class="table-configuration">
-    <tr>
-      <th>Command Type</th>
-      <th>Command Name</th>
-      <th>Description</th>
-    </tr>
-    <tr>
-      <td nowrap>Help command</td>
-      <td>HELP</td>
-      <td>Display the interactive help menu</td>
-    </tr>
-    <tr>
-      <td nowrap>Schema commands</td>
-      <td>DESCRIBE KEYSPACE, DESCRIBE CLUSTER, DESCRIBE TABLES ...</td>
-      <td>Custom commands to describe the Cassandra schema</td>
-    </tr>
-    <tr>
-      <td nowrap>Option commands</td>
-      <td>@consistency, @retryPolicy, @fetchSize ...</td>
-      <td>Inject runtime options to all statements in the paragraph</td>
-    </tr>
-    <tr>
-      <td nowrap>Prepared statement commands</td>
-      <td>@prepare, @bind, @remove_prepared</td>
-      <td>Let you register a prepared command and re-use it later by injecting bound values</td>
-    </tr>
-    <tr>
-      <td nowrap>Native CQL statements</td>
-      <td>All CQL-compatible statements (SELECT, INSERT, CREATE ...)</td>
-      <td>All CQL statements are executed directly against the Cassandra server</td>
-    </tr>
-  </table>  
-</center>
-
-<hr/>
-## 5. CQL statements
- 
-This interpreter is compatible with any CQL statement supported by Cassandra. Ex: 
-
-```sql
-
-    INSERT INTO users(login,name) VALUES('jdoe','John DOE');
-    SELECT * FROM users WHERE login='jdoe';
-```                                
-
-Each statement should be separated by a semi-colon ( **;** ) except the special commands below:
-
-1. @prepare
-2. @bind
-3. @remove_prepare
-4. @consistency
-5. @serialConsistency
-6. @timestamp
-7. @retryPolicy
-8. @fetchSize
- 
-Multi-line statements as well as multiple statements on the same line are also supported as long as they are 
-separated by a semi-colon. Ex: 
-
-```sql
-
-    USE spark_demo;
-
-    SELECT * FROM albums_by_country LIMIT 1; SELECT * FROM countries LIMIT 1;
-
-    SELECT *
-    FROM artists
-    WHERE login='jlennon';
-```
-
-Batch statements are supported and can span multiple lines, as well as DDL(CREATE/ALTER/DROP) statements: 
-
-```sql
-
-    BEGIN BATCH
-        INSERT INTO users(login,name) VALUES('jdoe','John DOE');
-        INSERT INTO users_preferences(login,account_type) VALUES('jdoe','BASIC');
-    APPLY BATCH;
-
-    CREATE TABLE IF NOT EXISTS test(
-        key int PRIMARY KEY,
-        value text
-    );
-```
-
-CQL statements are <strong>case-insensitive</strong> (except for column names and values). 
-This means that the following statements are equivalent and valid: 
-
-```sql
-
-    INSERT INTO users(login,name) VALUES('jdoe','John DOE');
-    Insert into users(login,name) vAlues('hsue','Helen SUE');
-```
-
-The complete list of all CQL statements and versions can be found below:
-<center>                                 
- <table class="table-configuration">
-   <tr>
-     <th>Cassandra Version</th>
-     <th>Documentation Link</th>
-   </tr>
-   <tr>
-     <td><strong>2.2</strong></td>
-     <td>
-        <a target="_blank" 
-          href="http://docs.datastax.com/en/cql/3.3/cql/cqlIntro.html">
-          http://docs.datastax.com/en/cql/3.3/cql/cqlIntro.html
-        </a>
-     </td>
-   </tr>   
-   <tr>
-     <td><strong>2.1 & 2.0</strong></td>
-     <td>
-        <a target="_blank" 
-          href="http://docs.datastax.com/en/cql/3.1/cql/cql_intro_c.html">
-          http://docs.datastax.com/en/cql/3.1/cql/cql_intro_c.html
-        </a>
-     </td>
-   </tr>   
-   <tr>
-     <td><strong>1.2</strong></td>
-     <td>
-        <a target="_blank" 
-          href="http://docs.datastax.com/en/cql/3.0/cql/aboutCQL.html">
-          http://docs.datastax.com/en/cql/3.0/cql/aboutCQL.html
-        </a>
-     </td>
-   </tr>   
- </table>
-</center>
-
-<hr/>
-
-## 6. Comments in statements
-
-It is possible to add comments between statements. Single line comments start with the hash sign (#). Multi-line comments are enclosed between /** and **/. Ex: 
-
-```sql
-
-    #First comment
-    INSERT INTO users(login,name) VALUES('jdoe','John DOE');
-
-    /**
-     Multi line
-     comments
-     **/
-    Insert into users(login,name) vAlues('hsue','Helen SUE');
-```
-
-<hr/>
-
-## 7. Syntax Validation
-
-The interpreter is shipped with a built-in syntax validator. This validator only checks for basic syntax errors; 
-all CQL-related syntax validation is delegated directly to **Cassandra**. 
-
-Most of the time, syntax errors are due to **missing semi-colons** between statements or **typos**.
-
-<hr/>
-                                    
-## 8. Schema commands
-
-To make schema discovery easier and more interactive, the following commands are supported:
-<center>                                 
- <table class="table-configuration">
-   <tr>
-     <th>Command</th>
-     <th>Description</th>
-   </tr>
-   <tr>
-     <td><strong>DESCRIBE CLUSTER;</strong></td>
-     <td>Show the current cluster name and its partitioner</td>
-   </tr>   
-   <tr>
-     <td><strong>DESCRIBE KEYSPACES;</strong></td>
-     <td>List all existing keyspaces in the cluster and their configuration (replication factor, durable write ...)</td>
-   </tr>   
-   <tr>
-     <td><strong>DESCRIBE TABLES;</strong></td>
-     <td>List all existing keyspaces in the cluster and, for each, all its table names</td>
-   </tr>   
-   <tr>
-     <td><strong>DESCRIBE TYPES;</strong></td>
-     <td>List all existing user defined types in the <strong>current (logged) keyspace</strong></td>
-   </tr>   
-   <tr>
-     <td nowrap><strong>DESCRIBE FUNCTIONS &lt;keyspace_name&gt;;</strong></td>
-     <td>List all existing user defined functions in the given keyspace</td>
-   </tr>   
-   <tr>
-     <td nowrap><strong>DESCRIBE AGGREGATES &lt;keyspace_name&gt;;</strong></td>
-     <td>List all existing user defined aggregates in the given keyspace</td>
-   </tr>   
-   <tr>
-     <td nowrap><strong>DESCRIBE KEYSPACE &lt;keyspace_name&gt;;</strong></td>
-     <td>Describe the given keyspace configuration and all its table details (name, columns, ...)</td>
-   </tr>   
-   <tr>
-     <td nowrap><strong>DESCRIBE TABLE (&lt;keyspace_name&gt;).&lt;table_name&gt;;</strong></td>
-     <td>
-        Describe the given table. If the keyspace is not provided, the currently logged-in keyspace is used. 
-        If there is no logged-in keyspace, the default system keyspace is used. 
-        If no table is found, an error message is raised.
-     </td>
-   </tr>   
-   <tr>
-     <td nowrap><strong>DESCRIBE TYPE (&lt;keyspace_name&gt;).&lt;type_name&gt;;</strong></td>
-     <td>
-        Describe the given type (UDT). If the keyspace is not provided, the currently logged-in keyspace is used. 
-        If there is no logged-in keyspace, the default system keyspace is used. 
-        If no type is found, an error message is raised.
-     </td>
-   </tr>   
-   <tr>
-     <td nowrap><strong>DESCRIBE FUNCTION (&lt;keyspace_name&gt;).&lt;function_name&gt;;</strong></td>
-     <td>Describe the given user defined function. The keyspace is optional</td>
-   </tr>   
-   <tr>
-     <td nowrap><strong>DESCRIBE AGGREGATE (&lt;keyspace_name&gt;).&lt;aggregate_name&gt;;</strong></td>
-     <td>Describe the given user defined aggregate. The keyspace is optional</td>
-   </tr>   
- </table>
-</center>              
-                      
-The schema objects (cluster, keyspace, table, type, function and aggregate) are displayed in a tabular format. 
-A drop-down menu in the top left corner lets you expand object details, and the icon legend is shown in the top right menu.
-
-<br/>
-<center>
-  ![Describe Schema](/assets/themes/zeppelin/img/docs-img/cassandra-DescribeSchema.png)
-</center>
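-
-For example, to inspect a single table you could run (a sketch, assuming the spark_demo keyspace used in this guide's examples):
-
-```sql
-
-    DESCRIBE TABLE spark_demo.albums;
-```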
-
-<hr/>
-
-## 9. Runtime Parameters
-
-Sometimes you want to be able to pass runtime query parameters to your statements. 
-Those parameters are not part of the CQL specs and are specific to the interpreter. 
-Below is the list of all parameters: 
-
-<br/>
-<center>                                 
- <table class="table-configuration">
-   <tr>
-     <th>Parameter</th>
-     <th>Syntax</th>
-     <th>Description</th>
-   </tr>
-   <tr>
-     <td nowrap>Consistency Level</td>
-     <td><strong>@consistency=<em>value</em></strong></td>
-     <td>Apply the given consistency level to all queries in the paragraph</td>
-   </tr>
-   <tr>
-     <td nowrap>Serial Consistency Level</td>
-     <td><strong>@serialConsistency=<em>value</em></strong></td>
-     <td>Apply the given serial consistency level to all queries in the paragraph</td>
-   </tr>
-   <tr>
-     <td nowrap>Timestamp</td>
-     <td><strong>@timestamp=<em>long value</em></strong></td>
-     <td>
-        Apply the given timestamp to all queries in the paragraph.
-        Please note that a timestamp value passed directly in a CQL statement will override this value
-      </td>
-   </tr>
-   <tr>
-     <td nowrap>Retry Policy</td>
-     <td><strong>@retryPolicy=<em>value</em></strong></td>
-     <td>Apply the given retry policy to all queries in the paragraph</td>
-   </tr>
-   <tr>
-     <td nowrap>Fetch Size</td>
-     <td><strong>@fetchSize=<em>integer value</em></strong></td>
-     <td>Apply the given fetch size to all queries in the paragraph</td>
-   </tr>
- </table>
-</center>
-
- Some parameters only accept restricted values: 
-
-<br/>
-<center>                                 
- <table class="table-configuration">
-   <tr>
-     <th>Parameter</th>
-     <th>Possible Values</th>
-   </tr>
-   <tr>
-     <td nowrap>Consistency Level</td>
-     <td><strong>ALL, ANY, ONE, TWO, THREE, QUORUM, LOCAL_ONE, LOCAL_QUORUM, EACH_QUORUM</strong></td>
-   </tr>
-   <tr>
-     <td nowrap>Serial Consistency Level</td>
-     <td><strong>SERIAL, LOCAL_SERIAL</strong></td>
-   </tr>
-   <tr>
-     <td nowrap>Timestamp</td>
-     <td>Any long value</td>
-   </tr>
-   <tr>
-     <td nowrap>Retry Policy</td>
-     <td><strong>DEFAULT, DOWNGRADING_CONSISTENCY, FALLTHROUGH, LOGGING_DEFAULT, LOGGING_DOWNGRADING, LOGGING_FALLTHROUGH</strong></td>
-   </tr>
-   <tr>
-     <td nowrap>Fetch Size</td>
-     <td>Any integer value</td>
-   </tr>
- </table>
-</center> 
-
->Please note that you should **not** add a semi-colon ( **;** ) at the end of parameter statements
-
-Some examples: 
-
-```sql
-
-    CREATE TABLE IF NOT EXISTS spark_demo.ts(
-        key int PRIMARY KEY,
-        value text
-    );
-    TRUNCATE spark_demo.ts;
-
-    # Timestamp in the past
-    @timestamp=10
-
-    # Force timestamp directly in the first insert
-    INSERT INTO spark_demo.ts(key,value) VALUES(1,'first insert') USING TIMESTAMP 100;
-
-    # Select some data to make the clock turn
-    SELECT * FROM spark_demo.albums LIMIT 100;
-
-    # Now insert using the timestamp parameter set at the beginning(10)
-    INSERT INTO spark_demo.ts(key,value) VALUES(1,'second insert');
-
-    # Check for the result. You should see 'first insert'
-    SELECT value FROM spark_demo.ts WHERE key=1;
-```
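-
-Similarly, a consistency level can be applied to a whole paragraph (a sketch, reusing the spark_demo keyspace from the example above):
-
-```sql
-
-    @consistency=QUORUM
-
-    SELECT * FROM spark_demo.albums LIMIT 100;
-```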
-                                
-Some remarks about query parameters:
-  
-> 1. **many** query parameters can be set in the same paragraph
-> 2. if the **same** query parameter is set many times with different values, the interpreter only takes the first value into account
-> 3. each query parameter applies to **all CQL statements** in the same paragraph, unless you override the option using plain CQL text (like forcing the timestamp with the USING clause)
-> 4. the order of the query parameters with regard to the CQL statements does not matter
-
-<hr/>
-
-## 10. Support for Prepared Statements
-
-For performance reasons, it is better to prepare statements beforehand and reuse them later by providing bound values. 
-This interpreter provides 3 commands to handle prepared and bound statements: 
-
-1. **@prepare**
-2. **@bind**
-3. **@remove_prepare**
-
-Example: 
-
-```
-
-    @prepare[statement_name]=...
-
-    @bind[statement_name]='text', 1223, '2015-07-30 12:00:01', null, true, ['list_item1', 'list_item2']
-
-    @bind[statement_name_with_no_bound_value]
-
-    @remove_prepare[statement_name]
-```
-
-<br/>
-#### a. @prepare
-<br/>
-You can use the syntax _"@prepare[statement_name]=SELECT ..."_ to create a prepared statement. 
-The _statement_name_ is **mandatory** because the interpreter prepares the given statement with the Java driver and 
-saves the generated prepared statement in an **internal hash map**, using the provided _statement_name_ as search key.
-  
-> Please note that this internal prepared statement map is shared with **all notebooks** and **all paragraphs** because 
-there is only one instance of the interpreter for Cassandra
-  
-> If the interpreter encounters **many** @prepare for the **same _statement_name_ (key)**, only the **first** statement will be taken into account.
-  
-Example: 
-
-```
-
-    @prepare[select]=SELECT * FROM spark_demo.albums LIMIT ?
-
-    @prepare[select]=SELECT * FROM spark_demo.artists LIMIT ?
-```                                
-
-For the above example, the prepared statement is _SELECT * FROM spark_demo.albums LIMIT ?_. 
-_SELECT * FROM spark_demo.artists LIMIT ?_ is ignored because an entry already exists in the prepared statements map with the key select. 
-
-In the context of **Zeppelin**, a notebook can be scheduled to be executed at regular intervals, 
-so it is necessary to **avoid re-preparing the same statement many times (considered an anti-pattern)**.
-<br/>
-<br/>
-#### b. @bind
-<br/>
-Once the statement is prepared (possibly in a separate notebook/paragraph), you can bind values to it: 
-
-```
-    @bind[select_first]=10
-```                                
-
-Bound values are not mandatory for the **@bind** statement. However, if you provide bound values, they need to comply with the following syntax:
-
-* String values should be enclosed in single quotes ( ' )
-* Date values should be enclosed in single quotes ( ' ) and respect one of the formats:
-  1. yyyy-MM-dd HH:MM:ss
-  2. yyyy-MM-dd HH:MM:ss.SSS
-* **null** is parsed as-is
-* **boolean** values (true|false) are parsed as-is
-* collection values must follow the **[standard CQL syntax]**:
-  * list: ['list_item1', 'list_item2', ...]
-  * set: {'set_item1', 'set_item2', ...}
-  * map: {'key1': 'val1', 'key2': 'val2', ...}
-* **tuple** values should be enclosed in parentheses (see **[Tuple CQL syntax]**): ('text', 123, true)
-* **udt** values should be enclosed in curly braces (see **[UDT CQL syntax]**): {street_name: 'Beverly Hills', number: 104, zip_code: 90020, state: 'California', ...}
-
-> It is possible to use the @bind statement inside a batch:
-> 
-> ```sql
->  
->     BEGIN BATCH
->         @bind[insert_user]='jdoe','John DOE'
->         UPDATE users SET age = 27 WHERE login='hsue';
->     APPLY BATCH;
-> ```
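-
-For instance, a prepared insert with several value types could be bound as follows (a sketch; the users table columns age and created are hypothetical):
-
-```
-
-    @prepare[insert_user]=INSERT INTO users(login,name,age,created) VALUES(?,?,?,?)
-
-    @bind[insert_user]='jdoe','John DOE',27,'2015-07-30 12:00:01'
-```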
-
-<br/>
-#### c. @remove_prepare
-<br/>
-To avoid having a prepared statement stay forever in the prepared statement map, you can use the 
-**@remove_prepare[statement_name]** syntax to remove it. 
-Removing a non-existent prepared statement yields no error.
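-
-For example, to drop the statement registered earlier under the key _select_:
-
-```
-
-    @remove_prepare[select]
-```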
-
-<hr/>
-
-## 11. Using Dynamic Forms
-
-Instead of hard-coding your CQL queries, it is possible to use the mustache syntax ( **\{\{ \}\}** ) to inject simple value or multiple-choice forms. 
-
-The syntax for a simple parameter is: **\{\{input_Label=default value\}\}**. The default value is mandatory because the first time the paragraph is executed, 
-the CQL query is launched before the form is rendered, so at least one value should be provided. 
-
-The syntax for a multiple-choice parameter is: **\{\{input_Label=value1 | value2 | … | valueN \}\}**. By default, the first choice is used for the CQL query 
-the first time the paragraph is executed. 
-
-Example: 
-
-{% raw %}
-    #Secondary index on performer style
-    SELECT name, country, performer
-    FROM spark_demo.performers
-    WHERE name='{{performer=Sheryl Crow|Doof|Fanfarlo|Los Paranoia}}'
-    AND styles CONTAINS '{{style=Rock}}';
-{% endraw %}
-                                
-
-In the above example, the first CQL query will be executed for _performer='Sheryl Crow' AND style='Rock'_. 
-For subsequent queries, you can change the value directly using the form. 
-
-> Please note that we enclosed the **\{\{ \}\}** block in single quotes ( **'** ) because Cassandra expects a String here. 
-> We could also have used the **\{\{style='Rock'\}\}** syntax, but this time the value displayed on the form would be **_'Rock'_** and not **_Rock_**. 
-
-It is also possible to use dynamic forms for **prepared statements**: 
-
-{% raw %}
-
-    @bind[select]='{{performer=Sheryl Crow|Doof|Fanfarlo|Los Paranoia}}', '{{style=Rock}}'
-  
-{% endraw %}
-
-<hr/>
-
-## 12. Execution parallelism and shared states
-
-It is possible to execute many paragraphs in parallel. However, on the back-end side, the interpreter still uses synchronous queries. 
-_Asynchronous execution_ would only be possible if a `Future` value could be returned in the `InterpreterResult`; 
-it may be an interesting proposal for the **Zeppelin** project.
-
-Another caveat is that the same `com.datastax.driver.core.Session` object is used for **all** notebooks and paragraphs.
-Consequently, if you use the **USE _keyspace name_;** statement to log into a keyspace, it will change the keyspace for
-**all current users** of the **Cassandra** interpreter because we only create 1 `com.datastax.driver.core.Session` object
-per instance of **Cassandra** interpreter.
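-
-A safer habit in such a shared environment is to avoid **USE** altogether and to fully qualify table names instead (a minimal sketch with the spark_demo keyspace):
-
-```sql
-
-    #Fully qualified, no USE statement needed
-    SELECT * FROM spark_demo.albums LIMIT 10;
-```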
-
-The same remark applies to the **prepared statement hash map**; it is shared by **all users** of the same **Cassandra** interpreter instance.
-
-Until **Zeppelin** offers real multi-user separation, there is a work-around to segregate user environments and states: 
-_create different **Cassandra** interpreter instances_.
-
-For this, first go to the **Interpreter** menu and click on the **Create** button.
-<br/>
-<br/>
-<center>
-  ![Create Interpreter](/assets/themes/zeppelin/img/docs-img/cassandra-NewInterpreterInstance.png)
-</center>
- 
-In the interpreter creation form, put **cass-instance2** as the **Name** and select **cassandra** 
-in the interpreter drop-down list.  
-<br/>
-<br/>
-<center>
-  ![Interpreter Name](/assets/themes/zeppelin/img/docs-img/cassandra-InterpreterName.png)
-</center>                         
-
- Click on **Save** to create the new interpreter instance. Now you should be able to see it in the interpreter list.
-  
-<br/>
-<br/>
-<center>
-  ![Interpreter In List](/assets/themes/zeppelin/img/docs-img/cassandra-NewInterpreterInList.png)
-</center>                         
-
-Go back to your notebook and click on the **Gear** icon to configure interpreter bindings.
-You should be able to see and select the **cass-instance2** interpreter instance in the available
-interpreter list instead of the standard **cassandra** instance.
-
-<br/>
-<br/>
-<center>
-  ![Interpreter Instance Selection](/assets/themes/zeppelin/img/docs-img/cassandra-InterpreterInstanceSelection.png)
-</center> 
-
-<hr/>
-
-## 13. Interpreter Configuration
-
-To configure the **Cassandra** interpreter, go to the **Interpreter** menu and scroll down to change the parameters.
-The **Cassandra** interpreter uses the official **[Cassandra Java Driver]**, and most of the parameters are used
-to configure the Java driver.
-
-Below are the configuration parameters and their default value.
-
-
- <table class="table-configuration">
-   <tr>
-     <th>Property Name</th>
-     <th>Description</th>
-     <th>Default Value</th>
-   </tr>
-   <tr>
-     <td>cassandra.cluster</td>
-     <td>Name of the Cassandra cluster to connect to</td>
-     <td>Test Cluster</td>
-   </tr>
-   <tr>
-     <td>cassandra.compression.protocol</td>
-     <td>On wire compression. Possible values are: NONE, SNAPPY, LZ4</td>
-     <td>NONE</td>
-   </tr>
-   <tr>
-     <td>cassandra.credentials.username</td>
-     <td>If security is enabled, provide the login</td>
-     <td>none</td>
-   </tr>
-   <tr>
-     <td>cassandra.credentials.password</td>
-     <td>If security is enabled, provide the password</td>
-     <td>none</td>
-   </tr>
-   <tr>
-     <td>cassandra.hosts</td>
-     <td>
-        Comma-separated Cassandra hosts (DNS name or IP address).
-        <br/>
-        Ex: '192.168.0.12,node2,node3'
-      </td>
-     <td>localhost</td>
-   </tr>
-   <tr>
-     <td>cassandra.interpreter.parallelism</td>
-     <td>Number of concurrent paragraphs (query blocks) that can be executed</td>
-     <td>10</td>
-   </tr>
-   <tr>
-     <td>cassandra.keyspace</td>
-     <td>
-        Default keyspace to connect to.
-        <strong>
-          It is strongly recommended to keep the default value
-          and to prefix table names with the actual keyspace
-          in all of your queries
-        </strong>
-     </td>
-     <td>system</td>
-   </tr>
-   <tr>
-     <td>cassandra.load.balancing.policy</td>
-     <td>
-        Load balancing policy. Default = <em>new TokenAwarePolicy(new DCAwareRoundRobinPolicy())</em>.
-        To specify your own policy, provide the <strong>fully qualified class name (FQCN)</strong> of your policy.
-        At runtime the interpreter will instantiate the policy using 
-        <strong>Class.forName(FQCN)</strong>
-     </td>
-     <td>DEFAULT</td>
-   </tr>
-   <tr>
-     <td>cassandra.max.schema.agreement.wait.second</td>
-     <td>Cassandra max schema agreement wait in seconds</td>
-     <td>10</td>
-   </tr>
-   <tr>
-     <td>cassandra.pooling.core.connection.per.host.local</td>
-     <td>Protocol V2 and below default = 2. Protocol V3 and above default = 1</td>
-     <td>2</td>
-   </tr>
-   <tr>
-     <td>cassandra.pooling.core.connection.per.host.remote</td>
-     <td>Protocol V2 and below default = 1. Protocol V3 and above default = 1</td>
-     <td>1</td>
-   </tr>
-   <tr>
-     <td>cassandra.pooling.heartbeat.interval.seconds</td>
-     <td>Cassandra pool heartbeat interval in secs</td>
-     <td>30</td>
-   </tr>
-   <tr>
-     <td>cassandra.pooling.idle.timeout.seconds</td>
-     <td>Cassandra idle time out in seconds</td>
-     <td>120</td>
-   </tr>
-   <tr>
-     <td>cassandra.pooling.max.connection.per.host.local</td>
-     <td>Protocol V2 and below default = 8. Protocol V3 and above default = 1</td>
-     <td>8</td>
-   </tr>
-   <tr>
-     <td>cassandra.pooling.max.connection.per.host.remote</td>
-     <td>Protocol V2 and below default = 2. Protocol V3 and above default = 1</td>
-     <td>2</td>
-   </tr>
-   <tr>
-     <td>cassandra.pooling.max.request.per.connection.local</td>
-     <td>Protocol V2 and below default = 128. Protocol V3 and above default = 1024</td>
-     <td>128</td>
-   </tr>
-   <tr>
-     <td>cassandra.pooling.max.request.per.connection.remote</td>
-     <td>Protocol V2 and below default = 128. Protocol V3 and above default = 256</td>
-     <td>128</td>
-   </tr>
-   <tr>
-     <td>cassandra.pooling.new.connection.threshold.local</td>
-     <td>Protocol V2 and below default = 100. Protocol V3 and above default = 800</td>
-     <td>100</td>
-   </tr>
-   <tr>
-     <td>cassandra.pooling.new.connection.threshold.remote</td>
-     <td>Protocol V2 and below default = 100. Protocol V3 and above default = 200</td>
-     <td>100</td>
-   </tr>
-   <tr>
-     <td>cassandra.pooling.pool.timeout.millisecs</td>
-     <td>Cassandra pool time out in millisecs</td>
-     <td>5000</td>
-   </tr>
-   <tr>
-     <td>cassandra.protocol.version</td>
-     <td>Cassandra binary protocol version</td>
-     <td>3</td>
-   </tr>
-   <tr>
-     <td>cassandra.query.default.consistency</td>
-     <td>
-      Cassandra query default consistency level
-      <br/>
-      Available values: ONE, TWO, THREE, QUORUM, LOCAL_ONE, LOCAL_QUORUM, EACH_QUORUM, ALL
-     </td>
-     <td>ONE</td>
-   </tr>
-   <tr>
-     <td>cassandra.query.default.fetchSize</td>
-     <td>Cassandra query default fetch size</td>
-     <td>5000</td>
-   </tr>
-   <tr>
-     <td>cassandra.query.default.serial.consistency</td>
-     <td>
-      Cassandra query default serial consistency level
-      <br/>
-      Available values: SERIAL, LOCAL_SERIAL
-     </td>
-     <td>SERIAL</td>
-   </tr>
-   <tr>
-     <td>cassandra.reconnection.policy</td>
-     <td>
-        Cassandra Reconnection Policy.
-        Default = new ExponentialReconnectionPolicy(1000, 10 * 60 * 1000).
-        To specify your own policy, provide the <strong>fully qualified class name (FQCN)</strong> of your policy.
-        At runtime the interpreter will instantiate the policy using 
-        <strong>Class.forName(FQCN)</strong>
-     </td>
-     <td>DEFAULT</td>
-   </tr>
-   <tr>
-     <td>cassandra.retry.policy</td>
-     <td>
-        Cassandra Retry Policy.
-        Default = DefaultRetryPolicy.INSTANCE.
-        To specify your own policy, provide the <strong>fully qualified class name (FQCN)</strong> of your policy.
-        At runtime the interpreter will instantiate the policy using 
-        <strong>Class.forName(FQCN)</strong>
-     </td>
-     <td>DEFAULT</td>
-   </tr>
-   <tr>
-     <td>cassandra.socket.connection.timeout.millisecs</td>
-     <td>Cassandra socket default connection timeout in millisecs</td>
-     <td>500</td>
-   </tr>
-   <tr>
-     <td>cassandra.socket.read.timeout.millisecs</td>
-     <td>Cassandra socket read timeout in millisecs</td>
-     <td>12000</td>
-   </tr>
-   <tr>
-     <td>cassandra.socket.tcp.no_delay</td>
-     <td>Cassandra socket TCP no delay</td>
-     <td>true</td>
-   </tr>
-   <tr>
-     <td>cassandra.speculative.execution.policy</td>
-     <td>
-        Cassandra Speculative Execution Policy.
-        Default = NoSpeculativeExecutionPolicy.INSTANCE.
-        To specify your own policy, provide the <strong>fully qualified class name (FQCN)</strong> of your policy.
-        At runtime the interpreter will instantiate the policy using 
-        <strong>Class.forName(FQCN)</strong>
-     </td>
-     <td>DEFAULT</td>
-   </tr>
- </table>
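-
-For example, to swap the default load balancing policy for a plain round-robin policy, you could set the property to the policy's FQCN (a sketch; `RoundRobinPolicy` ships with the DataStax Java driver):
-
-```
-
-    cassandra.load.balancing.policy = com.datastax.driver.core.policies.RoundRobinPolicy
-```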
-
-<hr/>
-
-## 14. Bugs & Contacts
-
- If you encounter a bug for this interpreter, please create a **[JIRA]** ticket and ping me on Twitter
- at **[@doanduyhai]**
-
-
-[Cassandra Java Driver]: https://github.com/datastax/java-driver
-[standard CQL syntax]: http://docs.datastax.com/en/cql/3.1/cql/cql_using/use_collections_c.html
-[Tuple CQL syntax]: http://docs.datastax.com/en/cql/3.1/cql/cql_reference/tupleType.html
-[UDT CQL syntax]: http://docs.datastax.com/en/cql/3.1/cql/cql_using/cqlUseUDT.html
-[JIRA]: https://issues.apache.org/jira/browse/ZEPPELIN-382?jql=project%20%3D%20ZEPPELIN
-[@doanduyhai]: https://twitter.com/doanduyhai

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/interpreter/flink.md
----------------------------------------------------------------------
diff --git a/docs/docs/interpreter/flink.md b/docs/docs/interpreter/flink.md
deleted file mode 100644
index ce1f780..0000000
--- a/docs/docs/interpreter/flink.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-layout: page
-title: "Flink Interpreter"
-description: ""
-group: manual
----
-{% include JB/setup %}
-
-
-## Flink interpreter for Apache Zeppelin
-[Apache Flink](https://flink.apache.org) is an open source platform for distributed stream and batch data processing.
-
-
-### How to start local Flink cluster, to test the interpreter
-Zeppelin comes with a pre-configured flink-local interpreter, which starts Flink in local mode on your machine, so you do not need to install anything.
-
-### How to configure interpreter to point to Flink cluster
-At the "Interpreters" menu, you have to create a new Flink interpreter and provide next properties:
-
-<table class="table-configuration">
-  <tr>
-    <th>property</th>
-    <th>value</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>host</td>
-    <td>local</td>
-    <td>Host name of the running JobManager. 'local' runs Flink in local mode (default)</td>
-  </tr>
-  <tr>
-    <td>port</td>
-    <td>6123</td>
-    <td>Port of the running JobManager</td>
-  </tr>
-  <tr>
-    <td>xxx</td>
-    <td>yyy</td>
-    <td>anything else from [Flink Configuration](https://ci.apache.org/projects/flink/flink-docs-release-0.9/setup/config.html)</td>
-  </tr>
-</table>
-<br />
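-
-For example, to point the interpreter at a remote standalone cluster, you might set (with a hypothetical host name):
-
-```
-host    flink-master.example.com
-port    6123
-```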
-
-
-### How to test that it's working
-
-For example, the following [Zeppelin notebook](https://www.zeppelinhub.com/viewer/notebooks/aHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tL05GTGFicy96ZXBwZWxpbi1ub3RlYm9va3MvbWFzdGVyL25vdGVib29rcy8yQVFFREs1UEMvbm90ZS5qc29u) is taken from [Till Rohrmann's presentation](http://www.slideshare.net/tillrohrmann/data-analysis-49806564) "Interactive data analysis with Apache Flink" at the Apache Flink Meetup.
-
-
-```
-%sh
-# remove any previous copy, then download the sample text from Project Gutenberg
-rm -f 10.txt.utf-8
-wget http://www.gutenberg.org/ebooks/10.txt.utf-8
-```
-```
-%flink
-case class WordCount(word: String, frequency: Int)
-val bible:DataSet[String] = env.readTextFile("10.txt.utf-8")
-val partialCounts: DataSet[WordCount] = bible.flatMap{
-    line =>
-        """\b\w+\b""".r.findAllIn(line).map(word => WordCount(word, 1))
-//        line.split(" ").map(word => WordCount(word, 1))
-}
-val wordCounts = partialCounts.groupBy("word").reduce{
-    (left, right) => WordCount(left.word, left.frequency + right.frequency)
-}
-val result10 = wordCounts.first(10).collect()
-```
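-
-To render the result as a Zeppelin table, you could print it with the `%table` display hint in a follow-up paragraph (a sketch; `result10` comes from the paragraph above):
-
-```
-%flink
-// each row is "word<TAB>count"; the leading %table makes Zeppelin render the output as a table
-println("%table word\tcount\n" +
-  result10.map(wc => wc.word + "\t" + wc.frequency).mkString("\n"))
-```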

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/interpreter/geode.md
----------------------------------------------------------------------
diff --git a/docs/docs/interpreter/geode.md b/docs/docs/interpreter/geode.md
deleted file mode 100644
index 96d1c04..0000000
--- a/docs/docs/interpreter/geode.md
+++ /dev/null
@@ -1,203 +0,0 @@
----
-layout: page
-title: "Geode OQL Interpreter"
-description: ""
-group: manual
----
-{% include JB/setup %}
-
-
-## Geode/Gemfire OQL Interpreter for Apache Zeppelin
-
-<br/>
-<table class="table-configuration">
-  <tr>
-    <th>Name</th>
-    <th>Class</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>%geode.oql</td>
-    <td>GeodeOqlInterpreter</td>
-    <td>Provides OQL environment for Apache Geode</td>
-  </tr>
-</table>
-
-<br/>
-This interpreter supports the [Geode](http://geode.incubator.apache.org/) [Object Query Language (OQL)](http://geode-docs.cfapps.io/docs/developing/querying_basics/oql_compared_to_sql.html).  With the OQL-based querying language:
-
-[<img align="right" src="http://img.youtube.com/vi/zvzzA9GXu3Q/3.jpg" alt="zeppelin-view" hspace="10" width="200"></img>](https://www.youtube.com/watch?v=zvzzA9GXu3Q)
-
-* You can query on any arbitrary object
-* You can navigate object collections
-* You can invoke methods and access the behavior of objects
-* Data mapping is supported
-* You are not required to declare types. Since you do not need type definitions, you can work across multiple languages
-* You are not constrained by a schema
-
-This [Video Tutorial](https://www.youtube.com/watch?v=zvzzA9GXu3Q) illustrates some of the features provided by the `Geode Interpreter`.
-
-### Create Interpreter 
-
-By default Zeppelin creates one `Geode/OQL` instance. You can remove it or create more instances. 
-
-Multiple Geode instances can be created, each configured to the same or different backend Geode cluster. But at any given time a `Notebook` can have only one Geode interpreter instance `bound`. That means you _can not_ connect to different Geode clusters in the same `Notebook`. This is a known Zeppelin limitation. 
-
-To create a new Geode instance, open the `Interpreter` section and click the `+Create` button. Pick a `Name` of your choice and from the `Interpreter` drop-down select `geode`. Then follow the configuration instructions and `Save` the new instance. 
-
-> Note: The `Name` of the instance is used only to distinguish the instances while binding them to the `Notebook`. The `Name` is irrelevant inside the `Notebook`. In the `Notebook` you must use the `%geode.oql` tag. 
-
-### Bind to Notebook
-In the `Notebook` click on the `settings` icon in the top right corner. Then select/deselect the interpreters to be bound with the `Notebook`.
-
-### Configuration
-You can modify the configuration of Geode from the `Interpreter` section. The Geode interpreter exposes the following properties:
-
- 
- <table class="table-configuration">
-   <tr>
-     <th>Property Name</th>
-     <th>Description</th>
-     <th>Default Value</th>
-   </tr>
-   <tr>
-     <td>geode.locator.host</td>
-     <td>The Geode Locator Host</td>
-     <td>localhost</td>
-   </tr>
-   <tr>
-     <td>geode.locator.port</td>
-     <td>The Geode Locator Port</td>
-     <td>10334</td>
-   </tr>
-   <tr>
-     <td>geode.max.result</td>
-     <td>Max number of OQL results to display, to prevent browser overload</td>
-     <td>1000</td>
-   </tr>
- </table>
- 
-### How to use
-
-> *Tip 1: Use (CTRL + .) for OQL auto-completion.*
-
-> *Tip 2: Always start the paragraphs with the full `%geode.oql` prefix tag! The short notation `%geode` would still be able to run the OQL queries, but syntax highlighting and auto-completion will be disabled.*
-
-#### Create / Destroy Regions
-
-The OQL specification does not support [Geode Region](https://cwiki.apache.org/confluence/display/GEODE/Index#Index-MainConceptsandComponents) mutation operations. To `create`/`destroy` regions, use the [GFSH](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/chapter_overview.html) shell tool instead. For this to work, it is assumed that GFSH is co-located with the Zeppelin server.
-
-```bash
-%sh
-source /etc/geode/conf/geode-env.sh
-gfsh << EOF
-
- connect --locator=ambari.localdomain[10334]
-
- destroy region --name=/regionEmployee
- destroy region --name=/regionCompany
- create region --name=regionEmployee --type=REPLICATE
- create region --name=regionCompany --type=REPLICATE
- 
- exit;
-EOF
-```
-
-The above snippet re-creates two regions: `regionEmployee` and `regionCompany`. Note that you have to explicitly specify the locator host and port; the values should match those used in the Geode interpreter configuration. A comprehensive list of GFSH commands is available in [GFSH Commands by Functional Area](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/gfsh_quick_reference.html).
-
-#### Basic OQL  
-
-
-```sql 
-%geode.oql 
-SELECT count(*) FROM /regionEmployee
-```
-
-OQL `IN` and `SET` filters:
-
-```sql
-%geode.oql
-SELECT * FROM /regionEmployee 
-WHERE companyId IN SET(2) OR lastName IN SET('Tzolov13', 'Tzolov73')
-```
-
-OQL `JOIN` operations
-
-```sql
-%geode.oql
-SELECT e.employeeId, e.firstName, e.lastName, c.id as companyId, c.companyName, c.address
-FROM /regionEmployee e, /regionCompany c 
-WHERE e.companyId = c.id
-```
-
-By default the OQL responses contain only the region entry values. To access the keys, query the `EntrySet` instead:
-
-```sql
-%geode.oql
-SELECT e.key, e.value.companyId, e.value.email 
-FROM /regionEmployee.entrySet e
-```
-The following query will return the EntrySet value as a Blob:
-
-```sql
-%geode.oql
-SELECT e.key, e.value FROM /regionEmployee.entrySet e
-```
-
-
-> Note: You can have multiple queries in the same paragraph but only the result from the first is displayed. [[1](https://issues.apache.org/jira/browse/ZEPPELIN-178)], [[2](https://issues.apache.org/jira/browse/ZEPPELIN-212)].
-
-
-#### GFSH Commands From The Shell
-
-Use the Shell Interpreter (`%sh`) to run GFSH commands from the command line:
-
-```bash
-%sh
-source /etc/geode/conf/geode-env.sh
-gfsh -e "connect" -e "list members"
-```
-
-#### Apply Zeppelin Dynamic Forms
-
-You can leverage [Zeppelin Dynamic Form](https://zeppelin.incubator.apache.org/docs/manual/dynamicform.html) inside your OQL queries. You can use both the `text input` and `select form` parametrization features.
-
-```sql
-%geode.oql
-SELECT * FROM /regionEmployee e WHERE e.employeeId > ${Id}
-```
-
-#### Geode REST API
-To list the defined regions you can use the [Geode REST API](http://geode-docs.cfapps.io/docs/geode_rest/chapter_overview.html):
-
-```
-http://<geode-server-hostname>:8484/gemfire-api/v1/
-```
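-
-For example, from a shell paragraph (with a hypothetical host name):
-
-```bash
-%sh
-curl http://geode-server.localdomain:8484/gemfire-api/v1/
-```
-
-The response is a JSON document like the following: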
-
-```json
-{
-  "regions" : [{
-    "name" : "regionEmployee",
-    "type" : "REPLICATE",
-    "key-constraint" : null,
-    "value-constraint" : null
-  }, {
-    "name" : "regionCompany",
-    "type" : "REPLICATE",
-    "key-constraint" : null,
-    "value-constraint" : null
-  }]
-}
-```
-
-> To enable the Geode REST API with JSON support, add the following properties to the Geode server properties file and restart the server:
-
-```
-http-service-port=8484
-start-dev-rest-api=true
-```
-
-### Auto-completion 
-The Geode Interpreter provides basic auto-completion functionality. On `(Ctrl+.)` it lists the most relevant suggestions in a pop-up window. 
-
-

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/interpreter/ignite.md
----------------------------------------------------------------------
diff --git a/docs/docs/interpreter/ignite.md b/docs/docs/interpreter/ignite.md
deleted file mode 100644
index 02fc587..0000000
--- a/docs/docs/interpreter/ignite.md
+++ /dev/null
@@ -1,116 +0,0 @@
----
-layout: page
-title: "Ignite Interpreter"
-description: "Ignite user guide"
-group: manual
----
-{% include JB/setup %}
-
-## Ignite Interpreter for Apache Zeppelin
-
-### Overview
-[Apache Ignite](https://ignite.apache.org/) In-Memory Data Fabric is a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with traditional disk-based or flash technologies.
-
-![Apache Ignite](/assets/themes/zeppelin/img/docs-img/ignite-logo.png)
-
-You can use Zeppelin to retrieve distributed data from the cache using the Ignite SQL interpreter. Moreover, the Ignite interpreter allows you to execute any Scala code when SQL doesn't fit your requirements. For example, you can populate data into your caches or execute distributed computations.
-
-### Installing and Running Ignite example
-To use the Ignite interpreters, install Apache Ignite in a few simple steps:
-
-  1. Download the Ignite [source release](https://ignite.apache.org/download.html#sources) or [binary release](https://ignite.apache.org/download.html#binaries), whichever you prefer. The Ignite version must match the one used by Zeppelin; otherwise you cannot run Scala code on Zeppelin. You can find the Ignite version used by Zeppelin in the pom.xml placed under `path/to/your-Zeppelin/ignite/pom.xml` (in the Zeppelin source release); check the `ignite.version` property.<br>Currently, Zeppelin ships the Ignite interpreter only in the Zeppelin source release. So, if you download a Zeppelin binary release ( `zeppelin-0.5.0-incubating-bin-spark-xxx-hadoop-xx` ), you cannot use the Ignite interpreter. We are planning to include Ignite in a future binary release.
-  
-  2. The examples are shipped as a separate Maven project, so to get started you simply need to import the provided `<dest_dir>/apache-ignite-fabric-1.2.0-incubating-bin/pom.xml` file into your favourite IDE, such as Eclipse. 
-
-   * In Eclipse: File -> Import -> Existing Maven Projects
-   * Set the examples directory path in Eclipse and select the pom.xml.
-   * Then start `org.apache.ignite.examples.ExampleNodeStartup` (or any other example) to run one or more Ignite nodes. When you run the example code, you may notice the number of nodes increase one by one. 
-  
-  > **Tip: If you want to run the Ignite examples from the CLI instead of an IDE, you can export an executable JAR file from the IDE, then run it using the command below.**
-      
-  ``` 
-  $ nohup java -jar </path/to/your-jar-file> &
-  ```
-    
-### Configuring Ignite Interpreter 
-At the "Interpreters" menu, you may edit Ignite interpreter or create new one. Zeppelin provides these properties for Ignite.
-
- <table class="table-configuration">
-  <tr>
-      <th>Property Name</th>
-      <th>value</th>
-      <th>Description</th>
-  </tr>
-  <tr>
-      <td>ignite.addresses</td>
-      <td>127.0.0.1:47500..47509</td>
-      <td>Comma-separated list of Ignite cluster hosts. See the [Ignite Cluster Configuration](https://apacheignite.readme.io/v1.2/docs/cluster-config) section for more details.</td>
-  </tr>
-  <tr>
-      <td>ignite.clientMode</td>
-      <td>true</td>
-      <td>You can connect to the Ignite cluster as client or server node. See [Ignite Clients vs. Servers](https://apacheignite.readme.io/v1.2/docs/clients-vs-servers) section for details. Use true or false values in order to connect in client or server mode respectively.</td>
-  </tr>
-  <tr>
-      <td>ignite.config.url</td>
-      <td></td>
-      <td>Configuration URL. Overrides all other settings.</td>
-   </tr>
-   <tr>
-      <td>ignite.jdbc.url</td>
-      <td>jdbc:ignite:cfg://default-ignite-jdbc.xml</td>
-      <td>Ignite JDBC connection URL.</td>
-   </tr>
-   <tr>
-      <td>ignite.peerClassLoadingEnabled</td>
-      <td>true</td>
-      <td>Enables peer-class-loading. See [Zero Deployment](https://apacheignite.readme.io/v1.2/docs/zero-deployment) section for details. Use true or false values in order to enable or disable P2P class loading respectively.</td>
-  </tr>
- </table>
-
-![Configuration of Ignite Interpreter](/assets/themes/zeppelin/img/docs-img/ignite-interpreter-setting.png)
-
-### Interpreter Binding for Zeppelin Notebook
-After configuring the Ignite interpreter, create your own notebook, then you can bind interpreters as in the image below.
-
-![Binding Interpreters](/assets/themes/zeppelin/img/docs-img/ignite-interpreter-binding.png)
-
-For more interpreter binding information see [here](http://zeppelin.incubator.apache.org/docs/manual/interpreters.html).
-
-### How to use Ignite SQL interpreter
-In order to execute a SQL query, use the ` %ignite.ignitesql ` prefix. <br>
-Supposing you are running `org.apache.ignite.examples.streaming.wordcount.StreamWords`, you can then use the "words" cache (you have to specify this cache name in the `ignite.jdbc.url` setting of the Ignite interpreter). 
-For example, you can select the top 10 words in the words cache using the following query:
-
-  ``` 
-  %ignite.ignitesql 
-  select _val, count(_val) as cnt from String group by _val order by cnt desc limit 10 
-  ``` 
-  
-  ![IgniteSql on Zeppelin](/assets/themes/zeppelin/img/docs-img/ignite-sql-example.png)
-  
-As long as your Ignite version and Zeppelin's Ignite version are the same, you can also use Scala code. Please check the Zeppelin Ignite version before you download your own Ignite. 
-
-  ```
-  %ignite
-  import org.apache.ignite._
-  import org.apache.ignite.cache.affinity._
-  import org.apache.ignite.cache.query._
-  import org.apache.ignite.configuration._
-
-  import scala.collection.JavaConversions._
-
-  val cache: IgniteCache[AffinityUuid, String] = ignite.cache("words")
-
-  val qry = new SqlFieldsQuery("select avg(cnt), min(cnt), max(cnt) from (select count(_val) as cnt from String group by _val)", true)
-
-  val res = cache.query(qry).getAll()
-
-  collectionAsScalaIterable(res).foreach(println _)
-  ```
-  
-  ![Using Scala Code](/assets/themes/zeppelin/img/docs-img/ignite-scala-example.png)
-
-Apache Ignite also provides a guide for Zeppelin: ["Ignite with Apache Zeppelin"](https://apacheignite.readme.io/docs/data-analysis-with-apache-zeppelin).
- 
-  

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/interpreter/lens.md
----------------------------------------------------------------------
diff --git a/docs/docs/interpreter/lens.md b/docs/docs/interpreter/lens.md
deleted file mode 100644
index 903df7e..0000000
--- a/docs/docs/interpreter/lens.md
+++ /dev/null
@@ -1,173 +0,0 @@
----
-layout: page
-title: "Lens Interpreter"
-description: "Lens user guide"
-group: manual
----
-{% include JB/setup %}
-
-## Lens Interpreter for Apache Zeppelin
-
-### Overview
-[Apache Lens](https://lens.apache.org/) provides a Unified Analytics interface. Lens aims to cut the data analytics silos by providing a single view of data across multiple tiered data stores and an optimal execution environment for analytical queries. It seamlessly integrates Hadoop with traditional data warehouses to appear like one.
-
-![Apache Lens](/assets/themes/zeppelin/img/docs-img/lens-logo.png)
-
-### Installing and Running Lens
-To use the Lens interpreter, install Apache Lens in a few simple steps:
-
-  1. Download the latest version of Lens from [the ASF](http://www.apache.org/dyn/closer.lua/lens/2.3-beta); older releases can be found [in the Archives](http://archive.apache.org/dist/lens/).
-  2. Before running Lens, you have to set HIVE_HOME and HADOOP_HOME. If you want more information about this, please refer to [the installation guide](http://lens.apache.org/lenshome/install-and-run.html#Installation). Lens also provides a pseudo-distributed mode: the [Lens pseudo-distributed setup](http://lens.apache.org/lenshome/pseudo-distributed-setup.html) uses [Docker](https://www.docker.com/), with the Hive server and Hadoop daemons running as separate processes. 
-  3. Now you can start (or stop) the Lens server.
-  
-  ```
-    ./bin/lens-ctl start (or stop)
-  ```
-
-### Configuring Lens Interpreter
-At the "Interpreters" menu, you can to edit Lens interpreter or create new one. Zeppelin provides these properties for Lens.
-
- <table class="table-configuration">
-  <tr>
-      <th>Property Name</th>
-      <th>value</th>
-      <th>Description</th>
-  </tr>
-  <tr>
-      <td>lens.client.dbname</td>
-      <td>default</td>
-      <td>The database schema name</td>
-  </tr>
-  <tr>
-      <td>lens.query.enable.persistent.resultset</td>
-      <td>false</td>
-      <td>Whether to enable persistent result sets for queries. When enabled, the server will fetch results from the driver, custom-format them if required, and store them in a configured location. The file name of the query output is the query handle id, with configured extensions</td>
-  </tr>
-  <tr>
-      <td>lens.server.base.url</td>
-      <td>http://hostname:port/lensapi</td>
-      <td>The base URL for the Lens server. Edit "hostname" and "port" to match your deployment (e.g. http://0.0.0.0:9999/lensapi)</td>
-   </tr>
-   <tr>
-      <td>lens.session.cluster.user </td>
-      <td>default</td>
-      <td>Hadoop cluster username</td>
-  </tr>
-  <tr>
-      <td>zeppelin.lens.maxResult</td>
-      <td>1000</td>
-      <td>Max number of rows to display</td>
-  </tr>
-  <tr>
-      <td>zeppelin.lens.maxThreads</td>
-      <td>10</td>
-      <td>If concurrency is enabled, the number of threads to use</td>
-  </tr>
-  <tr>
-      <td>zeppelin.lens.run.concurrent</td>
-      <td>true</td>
-      <td>Run concurrent Lens Sessions</td>
-  </tr>
-  <tr>
-      <td>xxx</td>
-      <td>yyy</td>
-      <td>anything else from [Configuring lens server](https://lens.apache.org/admin/config-server.html)</td>
-  </tr>
- </table>
-
-![Apache Lens Interpreter Setting](/assets/themes/zeppelin/img/docs-img/lens-interpreter-setting.png)
-
-### Interpreter Binding for Zeppelin Notebook
-After configuring the Lens interpreter, create your own notebook, then you can bind interpreters as in the image below. 
-![Zeppelin Notebook Interpreter Binding](/assets/themes/zeppelin/img/docs-img/lens-interpreter-binding.png)
-
-For more interpreter binding information see [here](http://zeppelin.incubator.apache.org/docs/manual/interpreters.html).
-
-### How to use 
-You can analyze your data by using [OLAP Cube](http://lens.apache.org/user/olap-cube.html) [QL](http://lens.apache.org/user/cli.html), a high-level SQL-like language to query and describe data sets organized in data cubes. 
-You can get a feel for OLAP Cube QL from this [video tutorial](https://cwiki.apache.org/confluence/display/LENS/2015/07/13/20+Minute+video+demo+of+Apache+Lens+through+examples). 
-As you can see in this video, it uses the Lens Client Shell (./bin/lens-cli.sh). All of these functions can also be used in Zeppelin through the Lens interpreter.
-
-<li> Create and Use(Switch) Databases.
-
-  ```
-  create database newDb
-  ```
-  
-  ```
-  use newDb
-  ```
-  
-<li> Create Storage.
-
-  ```
-  create storage your/path/to/lens/client/examples/resources/db-storage.xml
-  ```
-  
-<li> Create Dimensions, Show fields and join-chains of them. 
-
-  ```
-  create dimension your/path/to/lens/client/examples/resources/customer.xml
-  ```
-  
-  ```
-  dimension show fields customer
-  ```
-  
-  ```
-  dimension show joinchains customer
-  ```
-  
-<li> Create Cubes, show fields and join-chains of them.
-
-  ``` 
-  create cube your/path/to/lens/client/examples/resources/sales-cube.xml 
-  ```
-  
-  ```
-  cube show fields sales
-  ```
-  
-  ```
-  cube show joinchains sales
-  ```
-
-<li> Create Dimtables and Fact. 
-
-  ```
-  create dimtable your/path/to/lens/client/examples/resources/customer_table.xml
-  ```
-  
-  ```
-  create fact your/path/to/lens/client/examples/resources/sales-raw-fact.xml
-  ```
-
-<li> Add partitions to Dimtable and Fact.
-  
-  ```
-  dimtable add single-partition --dimtable_name customer_table --storage_name local --path your/path/to/lens/client/examples/resources/customer-local-part.xml
-  ```
-  
-  ```
-  fact add partitions --fact_name sales_raw_fact --storage_name local --path your/path/to/lens/client/examples/resources/sales-raw-local-parts.xml
-  ```
-
-<li> Now, you can run queries on cubes.
- 
-  ```
-  query execute cube select customer_city_name, product_details.description, product_details.category, product_details.color, store_sales from sales where time_range_in(delivery_time, '2015-04-11-00', '2015-04-13-00')
-  ```
-  
-  
-  ![Lens Query Result](/assets/themes/zeppelin/img/docs-img/lens-result.png)
-
-These are just the examples provided out of the box by Lens. If you want to explore the full Lens tutorial, see the [tutorial video](https://cwiki.apache.org/confluence/display/LENS/2015/07/13/20+Minute+video+demo+of+Apache+Lens+through+examples).
-
-### Lens UI Service 
-Lens also provides a web UI service. Once the server starts up, you can open the service at http://serverhost:19999/index.html and browse. You may also check the structures you made and run queries easily here.
- 
- ![Lens UI Service](/assets/themes/zeppelin/img/docs-img/lens-ui-service.png)
-
-
-
-

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/interpreter/postgresql.md
----------------------------------------------------------------------
diff --git a/docs/docs/interpreter/postgresql.md b/docs/docs/interpreter/postgresql.md
deleted file mode 100644
index 9753cdc..0000000
--- a/docs/docs/interpreter/postgresql.md
+++ /dev/null
@@ -1,180 +0,0 @@
----
-layout: page
-title: "PostgreSQL and HAWQ Interpreter"
-description: ""
-group: manual
----
-{% include JB/setup %}
-
-
-## PostgreSQL, HAWQ  Interpreter for Apache Zeppelin
-
-<br/>
-<table class="table-configuration">
-  <tr>
-    <th>Name</th>
-    <th>Class</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>%psql.sql</td>
-    <td>PostgreSqlInterpreter</td>
-    <td>Provides SQL environment for Postgresql, HAWQ and Greenplum</td>
-  </tr>
-</table>
-
-<br/>
-[<img align="right" src="http://img.youtube.com/vi/wqXXQhJ5Uk8/0.jpg" alt="zeppelin-view" hspace="10" width="250"></img>](https://www.youtube.com/watch?v=wqXXQhJ5Uk8)
-
-This interpreter seamlessly supports the following SQL data processing engines:
-
-* [PostgreSQL](http://www.postgresql.org/) - OSS, Object-relational database management system (ORDBMS) 
-* [Apache HAWQ](http://pivotal.io/big-data/pivotal-hawq) - Powerful [Open Source](https://wiki.apache.org/incubator/HAWQProposal) SQL-On-Hadoop engine. 
-* [Greenplum](http://pivotal.io/big-data/pivotal-greenplum-database) - MPP database built on open source PostgreSQL.
-
-
-This [Video Tutorial](https://www.youtube.com/watch?v=wqXXQhJ5Uk8) illustrates some of the features provided by the `Postgresql Interpreter`.
-
-### Create Interpreter 
-
-By default Zeppelin creates one `PSQL` instance. You can remove it or create new instances. 
-
-Multiple PSQL instances can be created, each configured to the same or different backend databases. But at any given time a `Notebook` can have only one PSQL interpreter instance `bound`. That means you _can not_ connect to different databases in the same `Notebook`. This is a known Zeppelin limitation. 
-
-To create a new PSQL instance, open the `Interpreter` section and click the `+Create` button. Pick a `Name` of your choice and from the `Interpreter` drop-down select `psql`. Then follow the configuration instructions and `Save` the new instance. 
-
-> Note: The `Name` of the instance is used only to distinguish the instances while binding them to the `Notebook`. The `Name` is irrelevant inside the `Notebook`. In the `Notebook` you must use the `%psql.sql` tag. 
-
-### Bind to Notebook
-In the `Notebook` click on the `settings` icon in the top right corner. Then select/deselect the interpreters to be bound with the `Notebook`.
-
-### Configuration
-You can modify the configuration of PSQL from the `Interpreter` section. The PSQL interpreter exposes the following properties:
-
- 
- <table class="table-configuration">
-   <tr>
-     <th>Property Name</th>
-     <th>Description</th>
-     <th>Default Value</th>
-   </tr>
-   <tr>
-     <td>postgresql.url</td>
-     <td>JDBC URL to connect to </td>
-     <td>jdbc:postgresql://localhost:5432</td>
-   </tr>
-   <tr>
-     <td>postgresql.user</td>
-     <td>JDBC user name</td>
-     <td>gpadmin</td>
-   </tr>
-   <tr>
-     <td>postgresql.password</td>
-     <td>JDBC password</td>
-     <td></td>
-   </tr>
-   <tr>
-     <td>postgresql.driver.name</td>
-     <td>JDBC driver name. In this version the driver name is fixed and should not be changed</td>
-     <td>org.postgresql.Driver</td>
-   </tr>
-   <tr>
-     <td>postgresql.max.result</td>
-     <td>Max number of SQL results to display, to prevent browser overload</td>
-     <td>1000</td>
-   </tr>      
- </table>
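-
-For example, to point the interpreter at a specific database you could set (hypothetical host and database names):
-
-```
-postgresql.url       jdbc:postgresql://dbhost.example.com:5432/mydb
-postgresql.user      gpadmin
-```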
- 
- 
-### How to use
-```
-Tip: Use (CTRL + .) for SQL auto-completion.
-```
-#### DDL and SQL commands
-
-Start the paragraphs with the full `%psql.sql` prefix tag! The short notation `%psql` would still be able to run the queries, but syntax highlighting and auto-completion will be disabled. 
-
-You can use the standard CREATE / DROP / INSERT commands to create or modify the data model:
-
-```sql
-%psql.sql
-drop table if exists mytable;
-create table mytable (i int);
-insert into mytable select generate_series(1, 100);
-```
-
-Then in a separate paragraph run the query.
-
-```sql
-%psql.sql
-select * from mytable;
-```
-
-> Note: You can have multiple queries in the same paragraph but only the result from the first is displayed. [[1](https://issues.apache.org/jira/browse/ZEPPELIN-178)], [[2](https://issues.apache.org/jira/browse/ZEPPELIN-212)].
-
-For example, this will execute both queries but only the count result will be displayed. If you reverse the order of the queries, the mytable content will be shown instead.
-
-```sql
-%psql.sql
-select count(*) from mytable;
-select * from mytable;
-```
-
-#### PSQL command line tools
-
-Use the Shell Interpreter (`%sh`) to access the command line [PSQL](http://www.postgresql.org/docs/9.4/static/app-psql.html) interactively:
-
-```bash
-%sh
-psql -h phd3.localdomain -U gpadmin -p 5432 <<EOF
- \dn  
- \q
-EOF
-```
-This will produce output like this:
-
-```
-        Name        |  Owner  
---------------------+---------
- hawq_toolkit       | gpadmin
- information_schema | gpadmin
- madlib             | gpadmin
- pg_catalog         | gpadmin
- pg_toast           | gpadmin
- public             | gpadmin
- retail_demo        | gpadmin
-```
-
-#### Apply Zeppelin Dynamic Forms
-
-You can leverage [Zeppelin Dynamic Form](https://zeppelin.incubator.apache.org/docs/manual/dynamicform.html) inside your queries. You can use both the `text input` and `select form` parametrization features.
-
-```sql
-%psql.sql
-SELECT ${group_by}, count(*) as count 
-FROM retail_demo.order_lineitems_pxf 
-GROUP BY ${group_by=product_id,product_id|product_name|customer_id|store_id} 
-ORDER BY count ${order=DESC,DESC|ASC} 
-LIMIT ${limit=10};
-```
-#### Example HAWQ PXF/HDFS Tables
-
-Create a HAWQ external table that reads tab-separated-value data from HDFS.
-
-```sql
-%psql.sql
-CREATE EXTERNAL TABLE retail_demo.payment_methods_pxf (
-  payment_method_id smallint,
-  payment_method_code character varying(20)
-) LOCATION ('pxf://${NAME_NODE_HOST}:50070/retail_demo/payment_methods.tsv.gz?profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER = E'\t');
-```
-And retrieve the content:
-
-```sql
-%psql.sql
-select * from retail_demo.payment_methods_pxf;
-```
-### Auto-completion 
-The PSQL Interpreter provides basic auto-completion functionality. On `(Ctrl+.)` it lists the most relevant suggestions in a pop-up window. In addition to SQL keywords, the interpreter provides suggestions for schema, table and column names as well. 
-
-

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/interpreter/spark.md
----------------------------------------------------------------------
diff --git a/docs/docs/interpreter/spark.md b/docs/docs/interpreter/spark.md
deleted file mode 100644
index 58fce0b..0000000
--- a/docs/docs/interpreter/spark.md
+++ /dev/null
@@ -1,221 +0,0 @@
----
-layout: page
-title: "Spark Interpreter Group"
-description: ""
-group: manual
----
-{% include JB/setup %}
-
-
-## Spark
-
-[Apache Spark](http://spark.apache.org) is supported in Zeppelin with 
-the Spark interpreter group, which consists of 4 interpreters.
-
-<table class="table-configuration">
-  <tr>
-    <th>Name</th>
-    <th>Class</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>%spark</td>
-    <td>SparkInterpreter</td>
-    <td>Creates SparkContext and provides scala environment</td>
-  </tr>
-  <tr>
-    <td>%pyspark</td>
-    <td>PySparkInterpreter</td>
-    <td>Provides python environment</td>
-  </tr>
-  <tr>
-    <td>%sql</td>
-    <td>SparkSQLInterpreter</td>
-    <td>Provides SQL environment</td>
-  </tr>
-  <tr>
-    <td>%dep</td>
-    <td>DepInterpreter</td>
-    <td>Dependency loader</td>
-  </tr>
-</table>
-
-
-<br />
-
-
-### SparkContext, SQLContext, ZeppelinContext
-
-SparkContext, SQLContext and ZeppelinContext are automatically created and exposed as the variables 'sc', 'sqlContext' and 'z', respectively, in both the Scala and Python environments.
-
-Note that the Scala and Python environments share the same SparkContext, SQLContext and ZeppelinContext instances.
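-
-For example, a minimal Scala paragraph can use the injected variables right away (a sketch; no manual context creation is needed):
-
-```scala
-%spark
-// 'sc' is provided by Zeppelin; sum the numbers 1..100 on the cluster
-val nums = sc.parallelize(1 to 100)
-println(nums.sum())
-```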
-
-
-<a name="dependencyloading"> </a>
-<br />
-<br />
-### Dependency Management
-There are two ways to load external libraries in the Spark interpreter: using Zeppelin's %dep interpreter, and loading Spark properties.
-
-#### 1. Dynamic Dependency Loading via %dep interpreter
-
-When your code requires an external library, instead of downloading/copying it and restarting Zeppelin, you can easily do the following jobs using the %dep interpreter:
-
- * Load libraries recursively from a Maven repository
- * Load libraries from the local filesystem
- * Add an additional Maven repository
- * Automatically add libraries to the Spark cluster (can be turned off)
-
-The dep interpreter leverages the Scala environment, so you can write any Scala code here.
-Note that the %dep interpreter must be used before %spark, %pyspark and %sql.
-
-Here are the usages:
-
-```scala
-%dep
-z.reset() // clean up previously added artifact and repository
-
-// add maven repository
-z.addRepo("RepoName").url("RepoURL")
-
-// add maven snapshot repository
-z.addRepo("RepoName").url("RepoURL").snapshot()
-
-// add credentials for private maven repository
-z.addRepo("RepoName").url("RepoURL").username("username").password("password")
-
-// add artifact from filesystem
-z.load("/path/to.jar")
-
-// add artifact from maven repository, with no dependency
-z.load("groupId:artifactId:version").excludeAll()
-
-// add artifact recursively
-z.load("groupId:artifactId:version")
-
-// add artifact recursively except comma separated GroupID:ArtifactId list
-z.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId, ...")
-
-// exclude with pattern
-z.load("groupId:artifactId:version").exclude(*)
-z.load("groupId:artifactId:version").exclude("groupId:artifactId:*")
-z.load("groupId:artifactId:version").exclude("groupId:*")
-
-// local() skips adding artifact to spark clusters (skipping sc.addJar())
-z.load("groupId:artifactId:version").local()
-```
-
-
-<br />
-#### 2. Loading Spark Properties
-Once `SPARK_HOME` is set in `conf/zeppelin-env.sh`, Zeppelin uses `spark-submit` as the Spark interpreter runner. `spark-submit` supports two ways to load configurations. The first is command line options such as `--master`; Zeppelin can pass these options to `spark-submit` by exporting `SPARK_SUBMIT_OPTIONS` in `conf/zeppelin-env.sh`. The second is reading configuration options from `SPARK_HOME/conf/spark-defaults.conf`. The Spark properties that a user can set to distribute libraries are:
-
-<table class="table-configuration">
-  <tr>
-    <th>spark-defaults.conf</th>
-    <th>SPARK_SUBMIT_OPTIONS</th>
-    <th>Applicable Interpreter</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>spark.jars</td>
-    <td>--jars</td>
-    <td>%spark</td>
-    <td>Comma-separated list of local jars to include on the driver and executor classpaths.</td>
-  </tr>
-  <tr>
-    <td>spark.jars.packages</td>
-    <td>--packages</td>
-    <td>%spark</td>
-    <td>Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. Will search the local maven repo, then maven central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version.</td>
-  </tr>
-  <tr>
-    <td>spark.files</td>
-    <td>--files</td>
-    <td>%pyspark</td>
-    <td>Comma-separated list of files to be placed in the working directory of each executor.</td>
-  </tr>
-</table>
-Note that adding a jar to pyspark is only available via the %dep interpreter at the moment.
-
-<br/>
-Here are a few examples:
-
-##### 0.5.5 and later
-* SPARK\_SUBMIT\_OPTIONS in conf/zeppelin-env.sh
-
-		export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar --files /path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg"
-
-* SPARK_HOME/conf/spark-defaults.conf
-
-		spark.jars				/path/mylib1.jar,/path/mylib2.jar
-		spark.jars.packages		com.databricks:spark-csv_2.10:1.2.0
-		spark.files				/path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip
-
-##### 0.5.0
-* ZEPPELIN\_JAVA\_OPTS in conf/zeppelin-env.sh
-
-		export ZEPPELIN_JAVA_OPTS="-Dspark.jars=/path/mylib1.jar,/path/mylib2.jar -Dspark.files=/path/myfile1.dat,/path/myfile2.dat"
-<br />
-
-
-<a name="zeppelincontext"> </a>
-<br />
-<br />
-### ZeppelinContext
-
-
-Zeppelin automatically injects ZeppelinContext as the variable 'z' in your Scala/Python environment. ZeppelinContext provides some additional functions and utilities.
-
-<br />
-#### Object exchange
-
-ZeppelinContext extends a map and is shared between the Scala and Python environments,
-so you can put an object in Scala and read it from Python, and vice versa.
-
-Put an object from Scala:
-
-```scala
-%spark
-val myObject = ...
-z.put("objName", myObject)
-```
-
-Get the object from Python:
-
-```python
-%python
-myObject = z.get("objName")
-```
-
-<br />
-#### Form creation
-
-ZeppelinContext provides functions for creating forms. 
-In the Scala and Python environments, you can create forms programmatically.
-
-```scala
-%spark
-/* Create text input form */
-z.input("formName")
-
-/* Create text input form with default value */
-z.input("formName", "defaultValue")
-
-/* Create select form */
-z.select("formName", Seq(("option1", "option1DisplayName"),
-                         ("option2", "option2DisplayName")))
-
-/* Create select form with default value */
-z.select("formName", "option1", Seq(("option1", "option1DisplayName"),
-                                    ("option2", "option2DisplayName")))
-```
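-
-In the Scala environment these functions also return the form's current value, so a sketch like the following (the form name and values are illustrative) can feed the input straight into code:
-
-```scala
-%spark
-// z.input returns the entered value; convert it before use
-val limit = z.input("limit", "10").toString.toInt
-sc.parallelize(1 to 100).take(limit)
-```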
-
-In the SQL environment, you can create a form using a simple template.
-
-```
-%sql
-select * from ${table=defaultTableName} where text like '%${search}%'
-```
-
-To learn more about dynamic forms, check out [Dynamic Form](../dynamicform.html).

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/manual/dynamicform.md
----------------------------------------------------------------------
diff --git a/docs/docs/manual/dynamicform.md b/docs/docs/manual/dynamicform.md
deleted file mode 100644
index 06074fd..0000000
--- a/docs/docs/manual/dynamicform.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-layout: page
-title: "Dynamic Form"
-description: ""
-group: manual
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-
-## Dynamic Form
-
-Zeppelin dynamically creates input forms. Depending on the language backend, there are two different ways to create a dynamic form.
-A custom language backend can select which type of form creation it wants to use.
-
-<br />
-### Using form Templates
-
-This mode creates a form using a simple template language. It's easy to use; for example, the Markdown, Shell and SparkSQL language backends use it.
-
-<br />
-#### Text input form
-
-To create a text input form, use the _${formName}_ template.
-
-For example:
-
-<img src="../../assets/themes/zeppelin/img/screenshots/form_input.png" />
-
-
-You can also provide a default value using _${formName=defaultValue}_.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/form_input_default.png" />
-
-
-<br />
-#### Select form
-
-To create a select form, use _${formName=defaultValue,option1|option2...}_.
-
-For example:
-
-<img src="../../assets/themes/zeppelin/img/screenshots/form_select.png" />
-
-You can also separate an option's display name from its value using _${formName=defaultValue,option1(DisplayName)|option2(DisplayName)...}_.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/form_select_displayname.png" />
-
-<br />
-### Creating Forms Programmatically
-
-Some language backends use a programmatic way to create forms. For example, [ZeppelinContext](./interpreter/spark.html#zeppelincontext) provides a form creation API.
-
-Here are some examples.
-
-Text input form
-
-<img src="../../assets/themes/zeppelin/img/screenshots/form_input_prog.png" />
-
-Text input form with default value
-
-<img src="../../assets/themes/zeppelin/img/screenshots/form_input_default_prog.png" />
-
-Select form
-
-<img src="../../assets/themes/zeppelin/img/screenshots/form_select_prog.png" />

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/manual/interpreters.md
----------------------------------------------------------------------
diff --git a/docs/docs/manual/interpreters.md b/docs/docs/manual/interpreters.md
deleted file mode 100644
index ff5bff7..0000000
--- a/docs/docs/manual/interpreters.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-layout: page
-title: "Interpreters"
-description: ""
-group: manual
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-
-## Interpreters in zeppelin
-
-This section explains the role of interpreters, interpreter groups and interpreter settings in Zeppelin.
-The Zeppelin interpreter concept allows any language/data-processing backend to be plugged into Zeppelin.
-Currently Zeppelin supports many interpreters such as Scala (with Apache Spark), Python (with Apache Spark), SparkSQL, Hive, Markdown and Shell.
-
-### What is zeppelin interpreter?
-
-A Zeppelin interpreter is a plug-in that enables Zeppelin users to use a specific language/data-processing backend. For example, to use Scala code in Zeppelin, you need the ```spark``` interpreter.
-
-When you click the ```+Create``` button on the interpreter page, the interpreter drop-down list box will present all the interpreters available on your server.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/interpreter_create.png">
-
-### What is zeppelin interpreter setting?
-
-A Zeppelin interpreter setting is the configuration of a given interpreter on the Zeppelin server; for example, the properties required for the Hive JDBC interpreter to connect to the Hive server.
-
-<img src="../../assets/themes/zeppelin/img/screenshots/interpreter_setting.png">
-### What is zeppelin interpreter group?
-
-Every interpreter belongs to an InterpreterGroup, which is the unit in which interpreters are started and stopped.
-By default, every interpreter belongs to a single group, but a group may contain more interpreters. For example, the spark interpreter group includes Spark support, PySpark,
-SparkSQL and the dependency loader.
-
-Technically, Zeppelin interpreters from the same group are running in the same JVM.
-
-Interpreters belonging to a single group are registered together, and all of their properties are listed in the interpreter setting.
-<img src="../../assets/themes/zeppelin/img/screenshots/interpreter_setting_spark.png">
-
-### Programming languages for interpreters
-
-If the interpreter uses a specific programming language (like Scala, Python, SQL), it is generally a good idea to add syntax highlighting support for that language to the notebook paragraph editor.  
-  
-To check the list of supported languages, see the mode-*.js files under zeppelin-web/bower_components/ace-builds/src-noconflict or on GitHub at https://github.com/ajaxorg/ace-builds/tree/master/src-noconflict  
-  
-To add a new set of syntax highlighting,  
-1. add the mode-*.js file to zeppelin-web/bower.json (when built, zeppelin-web/src/index.html will be changed automatically)  
-2. add to the list of `editorMode` in zeppelin-web/src/app/notebook/paragraph/paragraph.controller.js - it follows the pattern 'ace/mode/x' where x is the name  
-3. add to the code that checks for `%` prefix and calls `session.setMode(editorMode.x)` in `setParagraphMode` in zeppelin-web/src/app/notebook/paragraph/paragraph.controller.js  
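-
-As a rough sketch of steps 2 and 3 (the `r` mode is a hypothetical addition, and the actual controller code may differ):
-
-```javascript
-// step 2: new entry in the editorMode list, following the 'ace/mode/x' pattern
-var editorMode = {
-  scala: 'ace/mode/scala',
-  r: 'ace/mode/r'
-};
-
-// step 3: inside setParagraphMode, map the paragraph's % prefix to the new mode
-if (paragraphText.indexOf('%r') === 0) {
-  session.setMode(editorMode.r);
-}
-```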
-  
-

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/manual/notebookashomepage.md
----------------------------------------------------------------------
diff --git a/docs/docs/manual/notebookashomepage.md b/docs/docs/manual/notebookashomepage.md
deleted file mode 100644
index 86f1ea9..0000000
--- a/docs/docs/manual/notebookashomepage.md
+++ /dev/null
@@ -1,109 +0,0 @@
----
-layout: page
-title: "Notebook as Homepage"
-description: ""
-group: manual
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-## Customize your zeppelin homepage
- Zeppelin allows you to use one of the notebooks you create as your Zeppelin homepage.
- With that you can brand your Zeppelin installation,
- adjust the instructions to your users' needs and even translate them into other languages.
-
- <br />
-### How to set a notebook as your zeppelin homepage
-
-The process for creating your homepage is very simple, as shown below:
- 
- 1. Create a notebook using zeppelin
- 2. Set the notebook id in the config file
- 3. Restart zeppelin
- 
- <br />
-#### Create a notebook using zeppelin
-  Create a new notebook using Zeppelin.
-  You can use the ```%md``` interpreter for markdown content or any other interpreter you like.
-  
-  You can also use the display system to generate [text](../displaysystem/display.html), 
-  [html](../displaysystem/display.html#html),[table](../displaysystem/table.html) or
-   [angular](../displaysystem/angular.html)
-
-   Run the notebook (Shift+Enter) and see the output. Optionally, change the notebook view to report mode to hide
-   the code sections.
-     
-   <br />
-#### Set the notebook id in the config file
-  To set the notebook id in the config file, copy it from the last word in the notebook URL.
-  
-  For example:
-  
-  <img src="../../assets/themes/zeppelin/img/screenshots/homepage_notebook_id.png" />
-
-  Set the notebook id to the ```ZEPPELIN_NOTEBOOK_HOMESCREEN``` environment variable 
-  or ```zeppelin.notebook.homescreen``` property. 
-  
-  You can also set the ```ZEPPELIN_NOTEBOOK_HOMESCREEN_HIDE``` environment variable 
-  or ```zeppelin.notebook.homescreen.hide``` property to hide the new notebook from the notebook list.
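-
-  For example, a minimal sketch in ```conf/zeppelin-env.sh``` (the notebook id here is hypothetical):
-
-  ```
-  export ZEPPELIN_NOTEBOOK_HOMESCREEN=2A94M5J1Z
-  export ZEPPELIN_NOTEBOOK_HOMESCREEN_HIDE=true
-  ```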
-
-  <br />
-#### Restart zeppelin
-  Restart your zeppelin server
-  
-  ```
-  ./bin/zeppelin-daemon.sh stop
-  ./bin/zeppelin-daemon.sh start
-  ```
-  #### That's it! Open your browser, navigate to Zeppelin and see your customized homepage...
-    
-  
-<br />
-### Show notebooks list in your custom homepage
-If you want to display the list of notebooks on your custom Zeppelin homepage, all
-you need to do is use the %angular support.
-  
-  <br />
-  Add the following code to a paragraph in your homepage and run it... voilà! You have your notebooks list.
-  
-  ```javascript
-  println(
-  """%angular 
-    <div class="col-md-4" ng-controller="HomeCtrl as home">
-      <h4>Notebooks</h4>
-      <div>
-        <h5><a href="" data-toggle="modal" data-target="#noteNameModal" style="text-decoration: none;">
-          <i style="font-size: 15px;" class="icon-notebook"></i> Create new note</a></h5>
-          <ul style="list-style-type: none;">
-            <li ng-repeat="note in home.notes.list track by $index"><i style="font-size: 10px;" class="icon-doc"></i>
-              <a style="text-decoration: none;" href="#/notebook/{{note.id}}">{{note.name || 'Note ' + note.id}}</a>
-            </li>
-          </ul>
-      </div>
-    </div>
-  """)
-  ```
-  
-  After running the notebook you will see output similar to this one:
-  <img src="../../assets/themes/zeppelin/img/screenshots/homepage_notebook_list.png" />
-  
-  The main trick here lies in linking the ```<div>``` to the controller:
-  
-  ```javascript
-  <div class="col-md-4" ng-controller="HomeCtrl as home">
-  ```
-  
-  Once we have ```home``` as our controller variable in our ```<div></div>``` 
-  we can use ```home.notes.list``` to access the notebook list.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/pleasecontribute.md
----------------------------------------------------------------------
diff --git a/docs/docs/pleasecontribute.md b/docs/docs/pleasecontribute.md
deleted file mode 100644
index 063b48f..0000000
--- a/docs/docs/pleasecontribute.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-layout: page
-title: "Please contribute"
-description: ""
-group: development
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-
-### Waiting for your help
-The content does not exist yet.
-
-We always welcome contributions.
-
-If you're interested, please check [How to contribute (website)](./development/howtocontributewebsite.html).

http://git-wip-us.apache.org/repos/asf/incubator-zeppelin/blob/c2cbafd1/docs/docs/releases/zeppelin-release-0.5.0-incubating.md
----------------------------------------------------------------------
diff --git a/docs/docs/releases/zeppelin-release-0.5.0-incubating.md b/docs/docs/releases/zeppelin-release-0.5.0-incubating.md
deleted file mode 100644
index a6fbe4d..0000000
--- a/docs/docs/releases/zeppelin-release-0.5.0-incubating.md
+++ /dev/null
@@ -1,77 +0,0 @@
----
-layout: page
-title: "Zeppelin Release 0.5.0-incubating"
-description: ""
-group: release
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-### Zeppelin Release 0.5.0-incubating
-
-Zeppelin 0.5.0-incubating is the first release under Apache incubation, with contributions from 42 developers and more than 600 commits.
-
-To download Zeppelin 0.5.0-incubating visit the [download](../../download.html) page.
-
-You can visit the [issue tracker](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316221&version=12329850) for the full list of resolved issues.
-
-### Contributors
-
-The following developers contributed to this release:
-
-* Akshat Aranya - New features and Improvements in UI.
-* Alexander Bezzubov - Improvements and Bug fixes in Core, UI, Build system. New feature and Improvements in Spark interpreter. Documentation in roadmap.
-* Anthony Corbacho - Improvements in Website. Bug fixes in Build system. Improvements and Bug fixes in UI. Documentation in roadmap.
-* Brennon York - Improvements and Bug fixes in Build system.
-* CORNEAU Damien - New feature, Improvements and Bug fixes in UI and Build system.
-* Corey Huang - Improvements in Build system. New feature in Core.
-* Digeratus - Improvements in Tutorials.
-* Dimitrios Liapis - Improvements in Documentation.
-* DuyHai DOAN - New feature in Build system.
-* Emmanuelle Raffenne - Bug fixes in UI.
-* Eran Medan - Improvements in Documentation.
-* Eugene Morozov - Bug fixes in Core.
-* Felix Cheung - Improvements in Spark interpreter. Improvements in Documentation. New features, Improvements and Bug fixes in UI.
-* Hung Lin - Improvements in Core.
-* Hyungu Roh - Bug fixes in UI.
-* Ilya Ganelin - Improvements in Tutorials.
-* JaeHwa Jung - New features in Tajo interpreter.
-* Jakob Homan - Improvements in Website.
-* James Carman - Improvements in Build system.
-* Jongyoul Lee - Improvements in Core, Build system and Spark interpreter. Bug fixes in Spark Interpreter. New features in Build system and Spark interpreter. Improvements in Documentation.
-* Juarez Bochi - Bug fixes in Build system.
-* Julien Buret - Bug fixes in Spark interpreter.
-* Jérémy Subtil - Bug fixes in Build system.
-* Kevin (SangWoo) Kim - New features in Core, Tutorials. Improvements in Documentation. New features, Improvements and Bug fixes in UI.
-* Kyoung-chan Lee - Improvements in Documentation.
-* Lee moon soo - Improvements in Tutorials. New features, Improvements and Bug fixes in Core, UI, Build system and Spark interpreter. New features in Flink interpreter. Improvements in Documentation.
-* Mina Lee - Improvements and Bug fixes in UI. New features in UI. Improvements in Core, Website.
-* Rajat Gupta - Bug fixes in Spark interpreter.
-* Ram Venkatesh - Improvements in Core, Build system, Spark interpreter and Markdown interpreter. New features and Bug fixes in Hive interpreter.
-* Sebastian YEPES - Improvements in Core.
-* Seckin Savasci - Improvements in Build system.
-* Timothy Shelton - Bug fixes in UI.
-* Vincent Botta - New features in UI.
-* Young boom - Improvements in UI.
-* bobbych - Improvements in Spark interpreter.
-* debugger87 - Bug fixes in Core.
-* dobachi - Improvements in UI.
-* epahomov - Improvements in Core and Spark interpreter.
-* kevindai0126 - Improvements in Core.
-* rahul agarwal - Bug fixes in Core.
-* whisperstream - Improvements in Spark interpreter.
-* yundai - Improvements in Core.
-
-Thanks to everyone who made this release possible!