Posted to commits@accumulo.apache.org by mw...@apache.org on 2019/04/25 20:01:27 UTC

[accumulo-website] branch master updated: Created blog post and updated docs (#175)

This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
     new c010c61  Created blog post and updated docs (#175)
c010c61 is described below

commit c010c611fefeba660ba0f16184a08636e5ab5ab1
Author: Mike Walch <mw...@apache.org>
AuthorDate: Thu Apr 25 16:01:23 2019 -0400

    Created blog post and updated docs (#175)
---
 _docs-2/development/spark.md                        |  8 ++++----
 _posts/blog/2019-04-24-using-spark-with-accumulo.md | 12 ++++++++++++
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/_docs-2/development/spark.md b/_docs-2/development/spark.md
index e1bb251..d19b76f 100644
--- a/_docs-2/development/spark.md
+++ b/_docs-2/development/spark.md
@@ -4,7 +4,7 @@ category: development
 order: 3
 ---
 
-[Apache Spark] applications can read and write from Accumulo tables.
+[Apache Spark] applications can read from and write to Accumulo tables.
 
 Before reading this documentation, it may help to review the [MapReduce]
 documentation, as the API created for MapReduce jobs is also used by Spark.
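
For context, a minimal sketch of what the paragraph above describes: reading an Accumulo table into a Spark RDD through the MapReduce `AccumuloInputFormat` from the Accumulo 2.0 `org.apache.accumulo.hadoop.mapreduce` module. The class name, table name, and properties path are hypothetical placeholders.

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.hadoop.mapreduce.AccumuloInputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class ReadSketch {
  public static void main(String[] args) throws Exception {
    // Load connection info from accumulo-client.properties (path is a placeholder)
    Properties props = new Properties();
    try (InputStream in = new FileInputStream("/path/to/accumulo-client.properties")) {
      props.load(in);
    }

    // The Job object only carries configuration for the InputFormat
    Job job = Job.getInstance();
    AccumuloInputFormat.configure().clientProperties(props).table("mytable").store(job);

    // The master URL is supplied by spark-submit
    try (JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("accumulo-read"))) {
      // Reuse the MapReduce InputFormat to create an RDD of Accumulo entries
      JavaPairRDD<Key,Value> data = sc.newAPIHadoopRDD(job.getConfiguration(),
          AccumuloInputFormat.class, Key.class, Value.class);
      System.out.println("entries read: " + data.count());
    }
  }
}
```

Because the InputFormat does the scanning, Spark needs no Accumulo-specific connector; the `Job` is used only as a carrier for configuration.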
@@ -16,7 +16,7 @@ This documentation references code from the Accumulo [Spark example].
 1. Create a [shaded jar] with your Spark code and all of your dependencies (excluding
    Spark and Hadoop). When creating the shaded jar, you should relocate Guava
    as Accumulo uses a different version. The [pom.xml] in the [Spark example] is
-   a good reference and can be used a a starting point for a Spark application.
+   a good reference and can be used as a starting point for a Spark application.
 
 2. Submit the job by running `spark-submit` with your shaded jar. You should pass
    in the location of your `accumulo-client.properties` that will be used to connect
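
As a rough sketch of the submit step (the main class, jar name, and properties path are placeholders, and `--master yarn` assumes a YARN cluster):

```
spark-submit --class org.example.MySparkApp --master yarn \
  my-spark-app-shaded.jar /path/to/accumulo-client.properties
```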
@@ -43,7 +43,7 @@ JavaPairRDD<Key,Value> data = sc.newAPIHadoopRDD(job.getConfiguration(),
 
 ## Writing to an Accumulo table
 
-There are two ways to write an Accumulo table.
+There are two ways to write to an Accumulo table in Spark applications.
 
 ### Use a BatchWriter
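
A minimal sketch of the BatchWriter approach, assuming `data` is the `JavaPairRDD<Key,Value>` from the read step, `props` holds the client properties, and `outtable` is a placeholder for an output table that already exists:

```java
import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.MutationsRejectedException;
import org.apache.accumulo.core.data.Mutation;

// Give each partition its own client and writer so executors
// never share a connection across JVMs
data.foreachPartition(iter -> {
  try (AccumuloClient client = Accumulo.newClient().from(props).build();
       BatchWriter bw = client.createBatchWriter("outtable")) {
    iter.forEachRemaining(kv -> {
      // Copy each Key/Value entry into a Mutation for the output table
      Mutation m = new Mutation(kv._1.getRow());
      m.put(kv._1.getColumnFamily(), kv._1.getColumnQualifier(), kv._2);
      try {
        bw.addMutation(m);
      } catch (MutationsRejectedException e) {
        // Wrap the checked exception; forEachRemaining's lambda cannot throw it
        throw new RuntimeException(e);
      }
    });
  }
});
```

Opening one writer per partition, rather than per element, keeps connection churn low, and the try-with-resources block flushes and closes the writer when the partition is done.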
 
@@ -95,7 +95,7 @@ try (AccumuloClient client = Accumulo.newClient().from(props).build()) {
 
 ## Reference
 
-* [Spark example] - Accumulo example application that uses Spark to read & write from Accumulo
+* [Spark example] - Example Spark application that reads from and writes to Accumulo
 * [MapReduce] - Documentation on reading/writing to Accumulo using MapReduce
 * [Apache Spark] - Spark project website
 
diff --git a/_posts/blog/2019-04-24-using-spark-with-accumulo.md b/_posts/blog/2019-04-24-using-spark-with-accumulo.md
new file mode 100644
index 0000000..9206c71
--- /dev/null
+++ b/_posts/blog/2019-04-24-using-spark-with-accumulo.md
@@ -0,0 +1,12 @@
+---
+title: "Using Apache Spark with Accumulo"
+---
+
+[Apache Spark] applications can read from and write to Accumulo tables. To
+get started using Spark with Accumulo, check out the [Spark documentation][docs] in
+the Accumulo 2.0 user manual. The [Spark example] application is a good starting point
+for using Spark with Accumulo.
+
+[Apache Spark]: https://spark.apache.org/
+[docs]: /docs/2.x/development/spark
+[Spark example]: https://github.com/apache/accumulo-examples/tree/master/spark