Posted to commits@spark.apache.org by pw...@apache.org on 2014/09/18 02:53:31 UTC

svn commit: r1625867 - in /spark/site/docs/1.1.0: ./ css/

Author: pwendell
Date: Thu Sep 18 00:53:30 2014
New Revision: 1625867

URL: http://svn.apache.org/r1625867
Log:
Fixing version in 1.1 documentation

Modified:
    spark/site/docs/1.1.0/bagel-programming-guide.html
    spark/site/docs/1.1.0/css/bootstrap.min.css
    spark/site/docs/1.1.0/graphx-programming-guide.html
    spark/site/docs/1.1.0/index.html
    spark/site/docs/1.1.0/programming-guide.html
    spark/site/docs/1.1.0/quick-start.html
    spark/site/docs/1.1.0/running-on-mesos.html
    spark/site/docs/1.1.0/sql-programming-guide.html
    spark/site/docs/1.1.0/streaming-programming-guide.html
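
The change itself is a mechanical sweep of the generated 1.1.0 site, replacing the
development version string "1.1.0-SNAPSHOT" with the release version "1.1.0". A
hypothetical sketch of how such a sweep could be produced locally (assuming GNU find
and sed on a checkout of the generated docs; this is an illustration, not the actual
command used for this commit):

    # Replace the -SNAPSHOT qualifier in every generated HTML page under docs/1.1.0,
    # then review the result with `svn diff` before committing.
    cd spark/site/docs/1.1.0
    find . -name '*.html' -print0 | xargs -0 sed -i 's/1\.1\.0-SNAPSHOT/1.1.0/g'
    svn diff | less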

Modified: spark/site/docs/1.1.0/bagel-programming-guide.html
URL: http://svn.apache.org/viewvc/spark/site/docs/1.1.0/bagel-programming-guide.html?rev=1625867&r1=1625866&r2=1625867&view=diff
==============================================================================
--- spark/site/docs/1.1.0/bagel-programming-guide.html (original)
+++ spark/site/docs/1.1.0/bagel-programming-guide.html Thu Sep 18 00:53:30 2014
@@ -135,7 +135,7 @@
 
 <pre><code>groupId = org.apache.spark
 artifactId = spark-bagel_2.10
-version = 1.1.0-SNAPSHOT
+version = 1.1.0
 </code></pre>
 
 <h1 id="programming-model">Programming Model</h1>

Modified: spark/site/docs/1.1.0/css/bootstrap.min.css
URL: http://svn.apache.org/viewvc/spark/site/docs/1.1.0/css/bootstrap.min.css?rev=1625867&r1=1625866&r2=1625867&view=diff
==============================================================================
--- spark/site/docs/1.1.0/css/bootstrap.min.css (original)
+++ spark/site/docs/1.1.0/css/bootstrap.min.css Thu Sep 18 00:53:30 2014
@@ -6,4 +6,4 @@
  * http://www.apache.org/licenses/LICENSE-2.0
  *
  * Designed and built with all the love in the world @twitter by @mdo and @fat.

[... 3 lines stripped ...]
Modified: spark/site/docs/1.1.0/graphx-programming-guide.html
URL: http://svn.apache.org/viewvc/spark/site/docs/1.1.0/graphx-programming-guide.html?rev=1625867&r1=1625866&r2=1625867&view=diff
==============================================================================
--- spark/site/docs/1.1.0/graphx-programming-guide.html (original)
+++ spark/site/docs/1.1.0/graphx-programming-guide.html Thu Sep 18 00:53:30 2014
@@ -236,7 +236,7 @@ explore the new GraphX API and comment o
 
 <h2 id="migrating-from-spark-091">Migrating from Spark 0.9.1</h2>
 
-<p>GraphX in Spark 1.1.0-SNAPSHOT contains one user-facing interface change from Spark 0.9.1. <a href="api/scala/index.html#org.apache.spark.graphx.EdgeRDD"><code>EdgeRDD</code></a> may now store adjacent vertex attributes to construct the triplets, so it has gained a type parameter. The edges of a graph of type <code>Graph[VD, ED]</code> are of type <code>EdgeRDD[ED, VD]</code> rather than <code>EdgeRDD[ED]</code>.</p>
+<p>GraphX in Spark 1.1.0 contains one user-facing interface change from Spark 0.9.1. <a href="api/scala/index.html#org.apache.spark.graphx.EdgeRDD"><code>EdgeRDD</code></a> may now store adjacent vertex attributes to construct the triplets, so it has gained a type parameter. The edges of a graph of type <code>Graph[VD, ED]</code> are of type <code>EdgeRDD[ED, VD]</code> rather than <code>EdgeRDD[ED]</code>.</p>
 
 <h1 id="getting-started">Getting Started</h1>
 

Modified: spark/site/docs/1.1.0/index.html
URL: http://svn.apache.org/viewvc/spark/site/docs/1.1.0/index.html?rev=1625867&r1=1625866&r2=1625867&view=diff
==============================================================================
--- spark/site/docs/1.1.0/index.html (original)
+++ spark/site/docs/1.1.0/index.html Thu Sep 18 00:53:30 2014
@@ -128,7 +128,7 @@ It also supports a rich set of higher-le
 
 <h1 id="downloading">Downloading</h1>
 
-<p>Get Spark from the <a href="http://spark.apache.org/downloads.html">downloads page</a> of the project website. This documentation is for Spark version 1.1.0-SNAPSHOT. The downloads page 
+<p>Get Spark from the <a href="http://spark.apache.org/downloads.html">downloads page</a> of the project website. This documentation is for Spark version 1.1.0. The downloads page 
 contains Spark packages for many popular HDFS versions. If you&#8217;d like to build Spark from 
 scratch, visit <a href="building-with-maven.html">building Spark with Maven</a>.</p>
 
@@ -136,7 +136,7 @@ scratch, visit <a href="building-with-ma
 locally on one machine &#8212; all you need is to have <code>java</code> installed on your system <code>PATH</code>,
 or the <code>JAVA_HOME</code> environment variable pointing to a Java installation.</p>
 
-<p>Spark runs on Java 6+ and Python 2.6+. For the Scala API, Spark 1.1.0-SNAPSHOT uses
+<p>Spark runs on Java 6+ and Python 2.6+. For the Scala API, Spark 1.1.0 uses
 Scala 2.10. You will need to use a compatible Scala version 
 (2.10.x).</p>
 

Modified: spark/site/docs/1.1.0/programming-guide.html
URL: http://svn.apache.org/viewvc/spark/site/docs/1.1.0/programming-guide.html?rev=1625867&r1=1625866&r2=1625867&view=diff
==============================================================================
--- spark/site/docs/1.1.0/programming-guide.html (original)
+++ spark/site/docs/1.1.0/programming-guide.html Thu Sep 18 00:53:30 2014
@@ -173,14 +173,14 @@ along with if you launch Spark&#8217;s i
 
 <div data-lang="scala">
 
-    <p>Spark 1.1.0-SNAPSHOT uses Scala 2.10. To write
+    <p>Spark 1.1.0 uses Scala 2.10. To write
 applications in Scala, you will need to use a compatible Scala version (e.g. 2.10.X).</p>
 
     <p>To write a Spark application, you need to add a Maven dependency on Spark. Spark is available through Maven Central at:</p>
 
     <pre><code>groupId = org.apache.spark
 artifactId = spark-core_2.10
-version = 1.1.0-SNAPSHOT
+version = 1.1.0
 </code></pre>
 
     <p>In addition, if you wish to access an HDFS cluster, you need to add a dependency on
@@ -203,7 +203,7 @@ version = &lt;your-hdfs-version&gt;
 
 <div data-lang="java">
 
-    <p>Spark 1.1.0-SNAPSHOT works with Java 6 and higher. If you are using Java 8, Spark supports
+    <p>Spark 1.1.0 works with Java 6 and higher. If you are using Java 8, Spark supports
 <a href="http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html">lambda expressions</a>
 for concisely writing functions, otherwise you can use the classes in the
 <a href="api/java/index.html?org/apache/spark/api/java/function/package-summary.html">org.apache.spark.api.java.function</a> package.</p>
@@ -212,7 +212,7 @@ for concisely writing functions, otherwi
 
     <pre><code>groupId = org.apache.spark
 artifactId = spark-core_2.10
-version = 1.1.0-SNAPSHOT
+version = 1.1.0
 </code></pre>
 
     <p>In addition, if you wish to access an HDFS cluster, you need to add a dependency on
@@ -235,7 +235,7 @@ version = &lt;your-hdfs-version&gt;
 
 <div data-lang="python">
 
-    <p>Spark 1.1.0-SNAPSHOT works with Python 2.6 or higher (but not Python 3). It uses the standard CPython interpreter,
+    <p>Spark 1.1.0 works with Python 2.6 or higher (but not Python 3). It uses the standard CPython interpreter,
 so C libraries like NumPy can be used.</p>
 
     <p>To run Spark applications in Python, use the <code>bin/spark-submit</code> script located in the Spark directory.

Modified: spark/site/docs/1.1.0/quick-start.html
URL: http://svn.apache.org/viewvc/spark/site/docs/1.1.0/quick-start.html?rev=1625867&r1=1625866&r2=1625867&view=diff
==============================================================================
--- spark/site/docs/1.1.0/quick-start.html (original)
+++ spark/site/docs/1.1.0/quick-start.html Thu Sep 18 00:53:30 2014
@@ -372,7 +372,7 @@ Spark depends on:</p>
 
 <span class="n">scalaVersion</span> <span class="o">:=</span> <span class="s">&quot;2.10.4&quot;</span>
 
-<span class="n">libraryDependencies</span> <span class="o">+=</span> <span class="s">&quot;org.apache.spark&quot;</span> <span class="o">%%</span> <span class="s">&quot;spark-core&quot;</span> <span class="o">%</span> <span class="s">&quot;1.1.0-SNAPSHOT&quot;</span>
+<span class="n">libraryDependencies</span> <span class="o">+=</span> <span class="s">&quot;org.apache.spark&quot;</span> <span class="o">%%</span> <span class="s">&quot;spark-core&quot;</span> <span class="o">%</span> <span class="s">&quot;1.1.0&quot;</span>
 </code></pre></div>
 
     <p>For sbt to work correctly, we&#8217;ll need to layout <code>SimpleApp.scala</code> and <code>simple.sbt</code>
@@ -455,7 +455,7 @@ Note that Spark artifacts are tagged wit
     <span class="nt">&lt;dependency&gt;</span> <span class="c">&lt;!-- Spark dependency --&gt;</span>
       <span class="nt">&lt;groupId&gt;</span>org.apache.spark<span class="nt">&lt;/groupId&gt;</span>
       <span class="nt">&lt;artifactId&gt;</span>spark-core_2.10<span class="nt">&lt;/artifactId&gt;</span>
-      <span class="nt">&lt;version&gt;</span>1.1.0-SNAPSHOT<span class="nt">&lt;/version&gt;</span>
+      <span class="nt">&lt;version&gt;</span>1.1.0<span class="nt">&lt;/version&gt;</span>
     <span class="nt">&lt;/dependency&gt;</span>
   <span class="nt">&lt;/dependencies&gt;</span>
 <span class="nt">&lt;/project&gt;</span>

Modified: spark/site/docs/1.1.0/running-on-mesos.html
URL: http://svn.apache.org/viewvc/spark/site/docs/1.1.0/running-on-mesos.html?rev=1625867&r1=1625866&r2=1625867&view=diff
==============================================================================
--- spark/site/docs/1.1.0/running-on-mesos.html (original)
+++ spark/site/docs/1.1.0/running-on-mesos.html Thu Sep 18 00:53:30 2014
@@ -149,7 +149,7 @@ static partitioning of resources.</p>
 
 <h1 id="installing-mesos">Installing Mesos</h1>
 
-<p>Spark 1.1.0-SNAPSHOT is designed for use with Mesos 0.18.1 and does not
+<p>Spark 1.1.0 is designed for use with Mesos 0.18.1 and does not
 require any special patches of Mesos.</p>
 
 <p>If you already have a Mesos cluster running, you can skip this Mesos installation step.</p>
@@ -212,8 +212,8 @@ The Spark package can be hosted at any H
   <li>Upload to hdfs/http/s3</li>
 </ol>
 
-<p>To host on HDFS, use the Hadoop fs put command: <code>hadoop fs -put spark-1.1.0-SNAPSHOT.tar.gz
-/path/to/spark-1.1.0-SNAPSHOT.tar.gz</code></p>
+<p>To host on HDFS, use the Hadoop fs put command: <code>hadoop fs -put spark-1.1.0.tar.gz
+/path/to/spark-1.1.0.tar.gz</code></p>
 
 <p>Or if you are using a custom-compiled version of Spark, you will need to create a package using
 the <code>make-distribution.sh</code> script included in a Spark source tarball/checkout.</p>
@@ -238,10 +238,10 @@ cluster, or <code>mesos://zk://host:2181
 <code>&lt;prefix&gt;/lib/libmesos.so</code> where the prefix is <code>/usr/local</code> by default. See Mesos installation
 instructions above. On Mac OS X, the library is called <code>libmesos.dylib</code> instead of
 <code>libmesos.so</code>.</li>
-      <li><code>export SPARK_EXECUTOR_URI=&lt;URL of spark-1.1.0-SNAPSHOT.tar.gz uploaded above&gt;</code>.</li>
+      <li><code>export SPARK_EXECUTOR_URI=&lt;URL of spark-1.1.0.tar.gz uploaded above&gt;</code>.</li>
     </ul>
   </li>
-  <li>Also set <code>spark.executor.uri</code> to <code>&lt;URL of spark-1.1.0-SNAPSHOT.tar.gz&gt;</code>.</li>
+  <li>Also set <code>spark.executor.uri</code> to <code>&lt;URL of spark-1.1.0.tar.gz&gt;</code>.</li>
 </ol>
 
 <p>Now when starting a Spark application against the cluster, pass a <code>mesos://</code>
@@ -250,7 +250,7 @@ URL as the master when creating a <code>
 <div class="highlight"><pre><code class="scala"><span class="k">val</span> <span class="n">conf</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">SparkConf</span><span class="o">()</span>
   <span class="o">.</span><span class="n">setMaster</span><span class="o">(</span><span class="s">&quot;mesos://HOST:5050&quot;</span><span class="o">)</span>
   <span class="o">.</span><span class="n">setAppName</span><span class="o">(</span><span class="s">&quot;My app&quot;</span><span class="o">)</span>
-  <span class="o">.</span><span class="n">set</span><span class="o">(</span><span class="s">&quot;spark.executor.uri&quot;</span><span class="o">,</span> <span class="s">&quot;&lt;path to spark-1.1.0-SNAPSHOT.tar.gz uploaded above&gt;&quot;</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">set</span><span class="o">(</span><span class="s">&quot;spark.executor.uri&quot;</span><span class="o">,</span> <span class="s">&quot;&lt;path to spark-1.1.0.tar.gz uploaded above&gt;&quot;</span><span class="o">)</span>
 <span class="k">val</span> <span class="n">sc</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">SparkContext</span><span class="o">(</span><span class="n">conf</span><span class="o">)</span>
 </code></pre></div>
 

Modified: spark/site/docs/1.1.0/sql-programming-guide.html
URL: http://svn.apache.org/viewvc/spark/site/docs/1.1.0/sql-programming-guide.html?rev=1625867&r1=1625866&r2=1625867&view=diff
==============================================================================
--- spark/site/docs/1.1.0/sql-programming-guide.html (original)
+++ spark/site/docs/1.1.0/sql-programming-guide.html Thu Sep 18 00:53:30 2014
@@ -287,7 +287,7 @@ feature parity with a HiveContext.</p>
 
 </div>
 
-<p>The specific variant of SQL that is used to parse queries can also be selected using the 
+<p>The specific variant of SQL that is used to parse queries can also be selected using the
 <code>spark.sql.dialect</code> option.  This parameter can be changed using either the <code>setConf</code> method on
 a SQLContext or by using a <code>SET key=value</code> command in SQL.  For a SQLContext, the only dialect
 available is &#8220;sql&#8221; which uses a simple SQL parser provided by Spark SQL.  In a HiveContext, the
@@ -298,7 +298,7 @@ default is &#8220;hiveql&#8221;, though 
 
 <p>Spark SQL supports operating on a variety of data sources through the <code>SchemaRDD</code> interface.
 A SchemaRDD can be operated on as normal RDDs and can also be registered as a temporary table.
-Registering a SchemaRDD as a table allows you to run SQL queries over its data.  This section 
+Registering a SchemaRDD as a table allows you to run SQL queries over its data.  This section
 describes the various methods for loading data into a SchemaRDD.</p>
 
 <h2 id="rdds">RDDs</h2>
@@ -351,7 +351,7 @@ registered as a table.  Tables can be us
 <div data-lang="java">
 
     <p>Spark SQL supports automatically converting an RDD of <a href="http://stackoverflow.com/questions/3295496/what-is-a-javabean-exactly">JavaBeans</a>
-into a Schema RDD.  The BeanInfo, obtained using reflection, defines the schema of the table. 
+into a Schema RDD.  The BeanInfo, obtained using reflection, defines the schema of the table.
 Currently, Spark SQL does not support JavaBeans that contain
 nested or contain complex types such as Lists or Arrays.  You can create a JavaBean by creating a
 class that implements Serializable and has getters and setters for all of its fields.</p>
@@ -634,7 +634,7 @@ tuples or lists in the RDD created in th
 
 <p><a href="http://parquet.io">Parquet</a> is a columnar format that is supported by many other data processing systems.
 Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema
-of the original data.  </p>
+of the original data.</p>
 
 <h3 id="loading-data-programmatically">Loading Data Programmatically</h3>
 
@@ -959,7 +959,7 @@ memory usage and GC pressure. You can ca
 <p>Note that if you call <code>cache</code> rather than <code>cacheTable</code>, tables will <em>not</em> be cached using
 the in-memory columnar format, and therefore <code>cacheTable</code> is strongly recommended for this use case.</p>
 
-<p>Configuration of in-memory caching can be done using the <code>setConf</code> method on SQLContext or by running 
+<p>Configuration of in-memory caching can be done using the <code>setConf</code> method on SQLContext or by running
 <code>SET key=value</code> commands using SQL.</p>
 
 <table class="table">
@@ -1033,10 +1033,30 @@ in Hive 0.12. You can test the JDBC serv
 <pre><code>./sbin/start-thriftserver.sh
 </code></pre>
 
-<p>The default port the server listens on is 10000.  To listen on customized host and port, please set
-the <code>HIVE_SERVER2_THRIFT_PORT</code> and <code>HIVE_SERVER2_THRIFT_BIND_HOST</code> environment variables. You may
-run <code>./sbin/start-thriftserver.sh --help</code> for a complete list of all available options.  Now you can
-use beeline to test the Thrift JDBC server:</p>
+<p>This script accepts all <code>bin/spark-submit</code> command line options, plus a <code>--hiveconf</code> option to
+specify Hive properties.  You may run <code>./sbin/start-thriftserver.sh --help</code> for a complete list of
+all available options.  By default, the server listens on localhost:10000. You may override this
+bahaviour via either environment variables, i.e.:</p>
+
+<div class="highlight"><pre><code class="bash"><span class="nb">export </span><span class="nv">HIVE_SERVER2_THRIFT_PORT</span><span class="o">=</span>&lt;listening-port&gt;
+<span class="nb">export </span><span class="nv">HIVE_SERVER2_THRIFT_BIND_HOST</span><span class="o">=</span>&lt;listening-host&gt;
+./sbin/start-thriftserver.sh <span class="se">\</span>
+  --master &lt;master-uri&gt; <span class="se">\</span>
+  ...
+<span class="sb">```</span>
+</code></pre></div>
+
+<p>or system properties:</p>
+
+<div class="highlight"><pre><code class="bash">./sbin/start-thriftserver.sh <span class="se">\</span>
+  --hiveconf hive.server2.thrift.port<span class="o">=</span>&lt;listening-port&gt; <span class="se">\</span>
+  --hiveconf hive.server2.thrift.bind.host<span class="o">=</span>&lt;listening-host&gt; <span class="se">\</span>
+  --master &lt;master-uri&gt;
+  ...
+<span class="sb">```</span>
+</code></pre></div>
+
+<p>Now you can use beeline to test the Thrift JDBC server:</p>
 
 <pre><code>./bin/beeline
 </code></pre>
@@ -1073,8 +1093,7 @@ options.</p>
 <h2 id="migration-guide-for-shark-user">Migration Guide for Shark User</h2>
 
 <h3 id="scheduling">Scheduling</h3>
-<p>s
-To set a <a href="job-scheduling.html#fair-scheduler-pools">Fair Scheduler</a> pool for a JDBC client session,
+<p>To set a <a href="job-scheduling.html#fair-scheduler-pools">Fair Scheduler</a> pool for a JDBC client session,
 users can set the <code>spark.sql.thriftserver.scheduler.pool</code> variable:</p>
 
 <pre><code>SET spark.sql.thriftserver.scheduler.pool=accounting;
@@ -1087,7 +1106,7 @@ SQL deprecates this property in favor of
 is 200. Users may customize this property via <code>SET</code>:</p>
 
 <pre><code>SET spark.sql.shuffle.partitions=10;
-SELECT page, count(*) c 
+SELECT page, count(*) c
 FROM logs_last_month_cached
 GROUP BY page ORDER BY c DESC LIMIT 10;
 </code></pre>
@@ -1300,7 +1319,7 @@ evaluated by the SQL execution engine.  
   The range of numbers is from <code>-9223372036854775808</code> to <code>9223372036854775807</code>.</li>
       <li><code>FloatType</code>: Represents 4-byte single-precision floating point numbers.</li>
       <li><code>DoubleType</code>: Represents 8-byte double-precision floating point numbers.</li>
-      <li><code>DecimalType</code>: </li>
+      <li><code>DecimalType</code>: Represents arbitrary-precision signed decimal numbers. Backed internally by <code>java.math.BigDecimal</code>. A <code>BigDecimal</code> consists of an arbitrary precision integer unscaled value and a 32-bit integer scale.</li>
     </ul>
   </li>
   <li>String type
@@ -1351,7 +1370,7 @@ evaluated by the SQL execution engine.  
 <div data-lang="scala">
 
     <p>All data types of Spark SQL are located in the package <code>org.apache.spark.sql</code>.
-You can access them by doing </p>
+You can access them by doing</p>
 
     <div class="highlight"><pre><code class="scala"><span class="k">import</span>  <span class="nn">org.apache.spark.sql._</span>
 </code></pre></div>
@@ -1457,7 +1476,7 @@ You can access them by doing </p>
 <tr>
   <td> <b>StructType</b> </td>
   <td> org.apache.spark.sql.Row </td>
-  <td> 
+  <td>
   StructType(<i>fields</i>)<br />
   <b>Note:</b> <i>fields</i> is a Seq of StructFields. Also, two fields with the same
   name are not allowed.
@@ -1479,7 +1498,7 @@ You can access them by doing </p>
 
     <p>All data types of Spark SQL are located in the package of
 <code>org.apache.spark.sql.api.java</code>. To access or create a data type,
-please use factory methods provided in 
+please use factory methods provided in
 <code>org.apache.spark.sql.api.java.DataType</code>.</p>
 
     <table class="table">
@@ -1585,7 +1604,7 @@ please use factory methods provided in 
 <tr>
   <td> <b>StructType</b> </td>
   <td> org.apache.spark.sql.api.java </td>
-  <td> 
+  <td>
   DataType.createStructType(<i>fields</i>)<br />
   <b>Note:</b> <i>fields</i> is a List or an array of StructFields.
   Also, two fields with the same name are not allowed.
@@ -1606,7 +1625,7 @@ please use factory methods provided in 
 <div data-lang="python">
 
     <p>All data types of Spark SQL are located in the package of <code>pyspark.sql</code>.
-You can access them by doing </p>
+You can access them by doing</p>
 
     <div class="highlight"><pre><code class="python"><span class="kn">from</span> <span class="nn">pyspark.sql</span> <span class="kn">import</span> <span class="o">*</span>
 </code></pre></div>
@@ -1730,7 +1749,7 @@ You can access them by doing </p>
 <tr>
   <td> <b>StructType</b> </td>
   <td> list or tuple </td>
-  <td> 
+  <td>
   StructType(<i>fields</i>)<br />
   <b>Note:</b> <i>fields</i> is a Seq of StructFields. Also, two fields with the same
   name are not allowed.

Modified: spark/site/docs/1.1.0/streaming-programming-guide.html
URL: http://svn.apache.org/viewvc/spark/site/docs/1.1.0/streaming-programming-guide.html?rev=1625867&r1=1625866&r2=1625867&view=diff
==============================================================================
--- spark/site/docs/1.1.0/streaming-programming-guide.html (original)
+++ spark/site/docs/1.1.0/streaming-programming-guide.html Thu Sep 18 00:53:30 2014
@@ -422,13 +422,13 @@ need to know to write your streaming app
     <pre><code>&lt;dependency&gt;
     &lt;groupId&gt;org.apache.spark&lt;/groupId&gt;
     &lt;artifactId&gt;spark-streaming_2.10&lt;/artifactId&gt;
-    &lt;version&gt;1.1.0-SNAPSHOT&lt;/version&gt;
+    &lt;version&gt;1.1.0&lt;/version&gt;
 &lt;/dependency&gt;
 </code></pre>
   </div>
 <div data-lang="SBT">
 
-    <pre><code>libraryDependencies += "org.apache.spark" % "spark-streaming_2.10" % "1.1.0-SNAPSHOT"
+    <pre><code>libraryDependencies += "org.apache.spark" % "spark-streaming_2.10" % "1.1.0"
 </code></pre>
   </div>
 </div>


