Posted to commits@spark.apache.org by rx...@apache.org on 2014/11/11 08:11:22 UTC
svn commit: r1638035 - in /spark: images/spark-runs-everywhere.png index.md
site/images/spark-runs-everywhere.png site/index.html
Author: rxin
Date: Tue Nov 11 07:11:21 2014
New Revision: 1638035
URL: http://svn.apache.org/r1638035
Log:
Update Spark Runs Everywhere section
Added:
spark/images/spark-runs-everywhere.png (with props)
spark/site/images/spark-runs-everywhere.png (with props)
Modified:
spark/index.md
spark/site/index.html
Added: spark/images/spark-runs-everywhere.png
URL: http://svn.apache.org/viewvc/spark/images/spark-runs-everywhere.png?rev=1638035&view=auto
==============================================================================
Binary file - no diff available.
Propchange: spark/images/spark-runs-everywhere.png
------------------------------------------------------------------------------
svn:mime-type = application/octet-stream
Modified: spark/index.md
URL: http://svn.apache.org/viewvc/spark/index.md?rev=1638035&r1=1638034&r2=1638035&view=diff
==============================================================================
--- spark/index.md (original)
+++ spark/index.md Tue Nov 11 07:11:21 2014
@@ -105,23 +105,20 @@ navigation:
<div class="row row-padded" style="margin-bottom: 15px;">
<div class="col-md-7 col-sm-7">
- <h2>Integrated with Hadoop</h2>
+ <h2>Runs Everywhere</h2>
<p class="lead">
- Spark can run on Hadoop 2's YARN cluster manager, and can read
- any existing Hadoop data.
+ Spark runs on Hadoop YARN, Apache Mesos, standalone, or in the cloud. It can access diverse data sources including HDFS, Cassandra, HBase, and S3.
</p>
<p>
- If you have a Hadoop 2 cluster, you can run Spark without any installation needed.
- Otherwise, Spark is easy to run <a href="{{site.url}}docs/latest/spark-standalone.html">standalone</a>
- or on <a href="{{site.url}}docs/latest/ec2-scripts.html">EC2</a> or <a href="http://mesos.apache.org">Mesos</a>.
+ You can run Spark using its <a href="{{site.url}}docs/latest/spark-standalone.html">standalone cluster mode</a>, on <a href="{{site.url}}docs/latest/ec2-scripts.html">EC2</a>, on Hadoop YARN, or on <a href="http://mesos.apache.org">Apache Mesos</a>.
It can read from <a href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS</a>, <a href="http://hbase.apache.org">HBase</a>, <a href="http://cassandra.apache.org">Cassandra</a>,
and any Hadoop data source.
</p>
</div>
<div class="col-md-5 col-sm-5 col-padded-top col-center">
- <img src="{{site.url}}images/hadoop.jpg" style="width: 100%; max-width: 280px;">
+ <img src="{{site.url}}images/spark-runs-everywhere.png" style="width: 100%; max-width: 280px;">
</div>
</div>
Added: spark/site/images/spark-runs-everywhere.png
URL: http://svn.apache.org/viewvc/spark/site/images/spark-runs-everywhere.png?rev=1638035&view=auto
==============================================================================
Binary file - no diff available.
Propchange: spark/site/images/spark-runs-everywhere.png
------------------------------------------------------------------------------
svn:mime-type = application/octet-stream
Modified: spark/site/index.html
URL: http://svn.apache.org/viewvc/spark/site/index.html?rev=1638035&r1=1638034&r2=1638035&view=diff
==============================================================================
--- spark/site/index.html (original)
+++ spark/site/index.html Tue Nov 11 07:11:21 2014
@@ -253,23 +253,20 @@
<div class="row row-padded" style="margin-bottom: 15px;">
<div class="col-md-7 col-sm-7">
- <h2>Integrated with Hadoop</h2>
+ <h2>Runs Everywhere</h2>
<p class="lead">
- Spark can run on Hadoop 2's YARN cluster manager, and can read
- any existing Hadoop data.
+ Spark runs on Hadoop YARN, Apache Mesos, standalone, or in the cloud. It can access diverse data sources including HDFS, Cassandra, HBase, and S3.
</p>
<p>
- If you have a Hadoop 2 cluster, you can run Spark without any installation needed.
- Otherwise, Spark is easy to run <a href="/docs/latest/spark-standalone.html">standalone</a>
- or on <a href="/docs/latest/ec2-scripts.html">EC2</a> or <a href="http://mesos.apache.org">Mesos</a>.
+ You can run Spark using its <a href="/docs/latest/spark-standalone.html">standalone cluster mode</a>, on <a href="/docs/latest/ec2-scripts.html">EC2</a>, on Hadoop YARN, or on <a href="http://mesos.apache.org">Apache Mesos</a>.
It can read from <a href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS</a>, <a href="http://hbase.apache.org">HBase</a>, <a href="http://cassandra.apache.org">Cassandra</a>,
and any Hadoop data source.
</p>
</div>
<div class="col-md-5 col-sm-5 col-padded-top col-center">
- <img src="/images/hadoop.jpg" style="width: 100%; max-width: 280px;" />
+ <img src="/images/spark-runs-everywhere.png" style="width: 100%; max-width: 280px;" />
</div>
</div>
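For context on the deployment modes named in the updated copy: they correspond to the `--master` setting passed to `spark-submit`. A sketch of launching the same application under each cluster manager, assuming a Spark 1.x installation; all hostnames, ports, and file paths below are placeholders, not values from this commit:

```shell
# Standalone cluster mode: point at the standalone master's URL
./bin/spark-submit --master spark://master-host:7077 my_app.py

# Hadoop YARN: requires HADOOP_CONF_DIR to point at the cluster's Hadoop config
./bin/spark-submit --master yarn-client my_app.py

# Apache Mesos: point at the Mesos master
./bin/spark-submit --master mesos://mesos-host:5050 my_app.py

# Reading Hadoop data works the same under any cluster manager,
# e.g. inside my_app.py:
#   sc.textFile("hdfs://namenode:8020/path/to/data")
```

The application code is unchanged across the three modes; only the master URL differs, which is the point the "Runs Everywhere" section is making.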