Posted to commits@zeppelin.apache.org by mi...@apache.org on 2016/05/31 21:32:33 UTC

svn commit: r1746352 - in /incubator/zeppelin/site/docs/0.5.6-incubating: install/install.html install/yarn_install.html interpreter/spark.html

Author: minalee
Date: Tue May 31 21:32:33 2016
New Revision: 1746352

URL: http://svn.apache.org/viewvc?rev=1746352&view=rev
Log:
Manual update of doc to take care of previous version docs
https://github.com/apache/incubator-zeppelin/pull/605#issuecomment-206680628

Modified:
    incubator/zeppelin/site/docs/0.5.6-incubating/install/install.html
    incubator/zeppelin/site/docs/0.5.6-incubating/install/yarn_install.html
    incubator/zeppelin/site/docs/0.5.6-incubating/interpreter/spark.html

Modified: incubator/zeppelin/site/docs/0.5.6-incubating/install/install.html
URL: http://svn.apache.org/viewvc/incubator/zeppelin/site/docs/0.5.6-incubating/install/install.html?rev=1746352&r1=1746351&r2=1746352&view=diff
==============================================================================
--- incubator/zeppelin/site/docs/0.5.6-incubating/install/install.html (original)
+++ incubator/zeppelin/site/docs/0.5.6-incubating/install/install.html Tue May 31 21:32:33 2016
@@ -154,7 +154,7 @@ limitations under the License.
 
 <h2>From binary package</h2>
 
-<p>Download latest binary package from <a href="../download.html">Download</a>.</p>
+<p>Download latest binary package from <a href="http://zeppelin.apache.org/download.html">Download</a>.</p>
 
 <h2>Build from source</h2>
 

Modified: incubator/zeppelin/site/docs/0.5.6-incubating/install/yarn_install.html
URL: http://svn.apache.org/viewvc/incubator/zeppelin/site/docs/0.5.6-incubating/install/yarn_install.html?rev=1746352&r1=1746351&r2=1746352&view=diff
==============================================================================
--- incubator/zeppelin/site/docs/0.5.6-incubating/install/yarn_install.html (original)
+++ incubator/zeppelin/site/docs/0.5.6-incubating/install/yarn_install.html Tue May 31 21:32:33 2016
@@ -179,8 +179,8 @@ whoami
 <li>Git</li>
 <li>Java 1.7 </li>
 <li>Apache Maven</li>
-<li>Hadoop client.</li>
-<li>Spark.</li>
+<li>Hadoop client</li>
+<li>Spark</li>
 <li>Internet connection is required. </li>
 </ul>
 
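The prerequisite tools listed above can be checked from a shell before starting the build. A minimal sketch (the tool names are assumptions drawn from the prerequisite list; `hadoop` will only be on PATH once the Hadoop client is installed):

```shell
# Sketch: verify that each build prerequisite is on PATH.
# Tool names are assumptions based on the prerequisite list above.
for tool in git java mvn hadoop; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported MISSING must be installed before the Maven build can succeed.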
@@ -240,7 +240,7 @@ OS name: <span class="s2">&quot;linux&qu
 </code></pre></div>
 <h4>Hadoop client</h4>
 
-<p>Zeppelin can work with multiple versions &amp; distributions of Hadoop. A complete list <a href="https://github.com/apache/incubator-zeppelin#build">is available here.</a> This document assumes Hadoop 2.7.x client libraries including configuration files are installed on Zeppelin node. It also assumes /etc/hadoop/conf contains various Hadoop configuration files. The location of Hadoop configuration files may vary, hence use appropriate location.</p>
+<p>Zeppelin can work with multiple versions &amp; distributions of Hadoop. A complete list is available <a href="https://github.com/apache/incubator-zeppelin#build">here</a>. This document assumes that Hadoop 2.7.x client libraries, including configuration files, are installed on the Zeppelin node. It also assumes that /etc/hadoop/conf contains the Hadoop configuration files; the location of these files may vary, so use the appropriate location for your installation.</p>
 <div class="highlight"><pre><code class="bash language-bash" data-lang="bash">hadoop version
 Hadoop 2.7.1.2.3.1.0-2574
 Subversion git@github.com:hortonworks/hadoop.git -r f66cf95e2e9367a74b0ec88b2df33458b6cff2d0
@@ -251,7 +251,7 @@ This <span class="nb">command </span>was
 </code></pre></div>
 <h4>Spark</h4>
 
-<p>Zeppelin can work with multiple versions Spark. A complete list <a href="https://github.com/apache/incubator-zeppelin#build">is available here.</a> This document assumes Spark 1.3.1 is installed on Zeppelin node at /home/zeppelin/prerequisites/spark.</p>
+<p>Zeppelin can work with multiple versions of Spark. A complete list is available <a href="https://github.com/apache/incubator-zeppelin#build">here</a>. This document assumes Spark 1.6.1 is installed on the Zeppelin node at /home/zeppelin/prerequisites/spark.</p>
 
 <h2>Build</h2>
 
@@ -263,7 +263,7 @@ git clone https://github.com/apache/incu
 
 <h3>Cluster mode</h3>
 
-<p>As its assumed Hadoop 2.7.x is installed on the YARN cluster &amp; Spark 1.3.1 is installed on Zeppelin node. Hence appropriate options are chosen to build Zeppelin. This is very important as Zeppelin will bundle corresponding Hadoop &amp; Spark libraries and they must match the ones present on YARN cluster &amp; Zeppelin Spark installation. </p>
+<p>It is assumed that Hadoop 2.7.x is installed on the YARN cluster and Spark 1.6.1 is installed on the Zeppelin node, so the appropriate build options must be chosen. This is very important, as Zeppelin will bundle the corresponding Hadoop &amp; Spark libraries and they must match the ones present on the YARN cluster and in the Zeppelin Spark installation. </p>
 
 <p>Zeppelin is a Maven project and hence must be built with Apache Maven.</p>
 <div class="highlight"><pre><code class="bash language-bash" data-lang="bash"><span class="nb">cd</span> /home/zeppelin/incubator-zeppelin
@@ -310,7 +310,7 @@ Click on Save button. Once these configu
 
 <h3>Spark</h3>
 
-<p>Zeppelin was built with Spark 1.3.1 and it was assumed that 1.3.1 version of Spark is installed at /home/zeppelin/prerequisites/spark. Look for Spark configrations and click edit button to add the following properties</p>
+<p>To make the Spark interpreter run on YARN, look for the Spark configuration, click the edit button, and add the following properties:</p>
 
 <table class="table-configuration">
   <tr>
@@ -324,11 +324,6 @@ Click on Save button. Once these configu
     <td>In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.</td>
   </tr>
   <tr>
-    <td>spark.home</td>
-    <td>/home/zeppelin/prerequisites/spark</td>
-    <td></td>
-  </tr>
-  <tr>
     <td>spark.driver.extraJavaOptions</td>
     <td>-Dhdp.version=2.3.1.0-2574</td>
     <td></td>
@@ -339,9 +334,9 @@ Click on Save button. Once these configu
     <td></td>
   </tr>
   <tr>
-    <td>spark.yarn.jar</td>
-    <td>/home/zeppelin/incubator-zeppelin/interpreter/spark/zeppelin-spark-0.6.0-incubating-SNAPSHOT.jar</td>
-    <td></td>
+    <td>spark.yarn.isPython</td>
+    <td>true</td>
+    <td>Distributes libraries required for pyspark in yarn-client mode if set to 'true'.</td>
   </tr>
 </table>
 
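The `-Dhdp.version=2.3.1.0-2574` value in the properties above matches the trailing HDP build portion of the Hadoop version string shown earlier (`2.7.1.2.3.1.0-2574`). One way to extract it is sketched below, against a hard-coded sample string rather than a live cluster; on a real node the full version would come from `hadoop version` output:

```shell
# Sketch: derive the hdp.version suffix from an HDP-style Hadoop version.
# The sample string is hard-coded; on a real node it would come from:
#   hadoop version | head -1 | awk '{print $2}'
full_version="2.7.1.2.3.1.0-2574"
# Strip the leading Apache Hadoop version (e.g. "2.7.1.") to keep the HDP part.
hdp_version=$(echo "$full_version" | sed 's/^[0-9]*\.[0-9]*\.[0-9]*\.//')
echo "$hdp_version"
# prints 2.3.1.0-2574
```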

Modified: incubator/zeppelin/site/docs/0.5.6-incubating/interpreter/spark.html
URL: http://svn.apache.org/viewvc/incubator/zeppelin/site/docs/0.5.6-incubating/interpreter/spark.html?rev=1746352&r1=1746351&r2=1746352&view=diff
==============================================================================
--- incubator/zeppelin/site/docs/0.5.6-incubating/interpreter/spark.html (original)
+++ incubator/zeppelin/site/docs/0.5.6-incubating/interpreter/spark.html Tue May 31 21:32:33 2016
@@ -171,12 +171,10 @@ Spark Interpreter group, which consisted
   </tr>
 </table>
 
-<p><br /><br /></p>
+<p><br /></p>
 
 <h3>Configuration</h3>
 
-<hr />
-
 <p>Without any configuration, the Spark interpreter works out of the box in local mode. But if you want to connect to your Spark cluster, you&#39;ll need the following two simple steps.</p>
 
 <h4>1. export SPARK_HOME</h4>
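A minimal sketch of what this step looks like in conf/zeppelin-env.sh (the path below is an example taken from the install walkthrough; substitute your own Spark installation directory):

```shell
# Example conf/zeppelin-env.sh fragment; the path below is a placeholder
# and must point at your actual Spark installation.
export SPARK_HOME=/home/zeppelin/prerequisites/spark
# Optional extra options passed to spark-submit, e.g. driver memory:
export SPARK_SUBMIT_OPTIONS="--driver-memory 1g"
```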
@@ -206,28 +204,23 @@ Spark Interpreter group, which consisted
 </ul>
 
 <p><br />
-That&#39;s it. Zeppelin will work with any version of Spark and any deployment type without rebuild Zeppelin in this way. (Zeppelin 0.5.5-incubating release works up to Spark 1.5.1)</p>
+That&#39;s it. In this way, Zeppelin will work with any version of Spark and any deployment type without rebuilding Zeppelin. (The Zeppelin 0.5.6-incubating release works up to Spark 1.6.1)</p>
 
 <p>Note that without exporting SPARK_HOME, Zeppelin runs in local mode with the included version of Spark. The included version may vary depending on the build profile.</p>
 
-<p><br /> <br /></p>
+<p><br /></p>
 
 <h3>SparkContext, SQLContext, ZeppelinContext</h3>
 
-<hr />
-
 <p>SparkContext, SQLContext and ZeppelinContext are automatically created and exposed as the variables &#39;sc&#39;, &#39;sqlContext&#39; and &#39;z&#39;, respectively, in both the Scala and Python environments.</p>
 
 <p>Note that the Scala and Python environments share the same SparkContext, SQLContext and ZeppelinContext instances.</p>
 
 <p><a name="dependencyloading"> </a>
-<br />
 <br /></p>
 
 <h3>Dependency Management</h3>
 
-<hr />
-
 <p>There are two ways to load external libraries into the Spark interpreter: using Zeppelin&#39;s %dep interpreter, or setting Spark properties.</p>
 
 <h4>1. Dynamic Dependency Loading via %dep interpreter</h4>
@@ -331,8 +324,6 @@ spark.files             /path/mylib1.py,
 
 <h3>ZeppelinContext</h3>
 
-<hr />
-
 <p>Zeppelin automatically injects ZeppelinContext as the variable &#39;z&#39; in your Scala/Python environment. ZeppelinContext provides some additional functions and utilities.</p>
 
 <p><br /></p>