Posted to commits@tinkerpop.apache.org by ok...@apache.org on 2017/11/01 18:07:44 UTC

[04/14] tinkerpop git commit: Corrected TinkerPop naming, added changelog and pointer to manifest

Corrected TinkerPop naming, added changelog and pointer to manifest


Project: http://git-wip-us.apache.org/repos/asf/tinkerpop/repo
Commit: http://git-wip-us.apache.org/repos/asf/tinkerpop/commit/16f3ee7e
Tree: http://git-wip-us.apache.org/repos/asf/tinkerpop/tree/16f3ee7e
Diff: http://git-wip-us.apache.org/repos/asf/tinkerpop/diff/16f3ee7e

Branch: refs/heads/master
Commit: 16f3ee7e765e05a344be62774c14617b90f17585
Parents: a60ac45
Author: HadoopMarc <vt...@xs4all.nl>
Authored: Sun Oct 1 16:38:42 2017 +0200
Committer: HadoopMarc <vt...@xs4all.nl>
Committed: Thu Oct 12 21:55:28 2017 +0200

----------------------------------------------------------------------
 docs/src/recipes/olap-spark-yarn.asciidoc | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/tinkerpop/blob/16f3ee7e/docs/src/recipes/olap-spark-yarn.asciidoc
----------------------------------------------------------------------
diff --git a/docs/src/recipes/olap-spark-yarn.asciidoc b/docs/src/recipes/olap-spark-yarn.asciidoc
index f909ce0..01aedcb 100644
--- a/docs/src/recipes/olap-spark-yarn.asciidoc
+++ b/docs/src/recipes/olap-spark-yarn.asciidoc
@@ -18,7 +18,7 @@ limitations under the License.
 OLAP traversals with Spark on Yarn
 ----------------------------------
 
-Tinkerpop's combination of http://tinkerpop.apache.org/docs/current/reference/#sparkgraphcomputer[SparkGraphComputer]
+TinkerPop's combination of http://tinkerpop.apache.org/docs/current/reference/#sparkgraphcomputer[SparkGraphComputer]
 and http://tinkerpop.apache.org/docs/current/reference/#_properties_files[HadoopGraph] allows for running
 distributed, analytical graph queries (OLAP) on a computer cluster. The
 http://tinkerpop.apache.org/docs/current/reference/#sparkgraphcomputer[reference documentation] covers the cases
@@ -29,15 +29,15 @@ configured differently. This recipe describes this configuration.
 Approach
 ~~~~~~~~
 
-Most configuration problems of Tinkerpop with Spark on Yarn stem from three reasons:
+Most configuration problems of TinkerPop with Spark on Yarn stem from three reasons:
 
 1. `SparkGraphComputer` creates its own `SparkContext` so it does not get any configs from the usual `spark-submit` command.
-2. The Tinkerpop Spark-plugin did not include Spark Yarn runtime dependencies until version 3.2.7/3.3.1.
+2. The TinkerPop Spark plugin did not include Spark on Yarn runtime dependencies until version 3.2.7/3.3.1.
 3. Resolving reason 2 by adding the cluster's Spark jars to the classpath may create all kinds of version
 conflicts with the Tinkerpop dependencies.
 
 The current recipe follows a minimalist approach in which no dependencies are added to the dependencies
-included in the Tinkerpop binary distribution. The Hadoop cluster's Spark installation is completely ignored. This
+included in the TinkerPop binary distribution. The Hadoop cluster's Spark installation is completely ignored. This
 approach minimizes the chance of dependency version conflicts.
 
 Prerequisites
@@ -68,7 +68,7 @@ GREMLIN_HOME=/home/yourdir/lib/apache-tinkerpop-gremlin-console-x.y.z-standalone
 export HADOOP_HOME=/usr/local/lib/hadoop-2.7.2
 export HADOOP_CONF_DIR=/usr/local/lib/hadoop-2.7.2/etc/hadoop
 
-# Have Tinkerpop find the hadoop cluster configs and hadoop native libraries
+# Have TinkerPop find the hadoop cluster configs and hadoop native libraries
 export CLASSPATH=$HADOOP_CONF_DIR
 export JAVA_OPTIONS="-Djava.library.path=$HADOOP_HOME/lib/native:$HADOOP_HOME/lib/native/Linux-amd64-64"
 
@@ -137,9 +137,11 @@ also the right moment to take a look at the `spark-defaults.xml` file of your cl
 the Spark History Service, which allows you to access logs of finished jobs via the Yarn resource manager UI.
 
 This recipe uses the gremlin console, but things should not be very different for your own JVM-based application,
-as long as you do not use the `spark-submit` or `spark-shell` commands.
+as long as you do not use the `spark-submit` or `spark-shell` commands. You will also want to check the additional
+runtime dependencies listed in the `Gremlin-Plugin-Dependencies` section of the manifest file in the `spark-gremlin`
+jar.
 
 You may not like the idea that the Hadoop and Spark jars from the Tinkerpop distribution differ from the versions in
-your cluster. If so, just build Tinkerpop from source with the corresponding dependencies changed in the various `pom.xml`
-files (e.g. `spark-core_2.11-2.2.0-some-vendor.jar` instead of `spark-core_2.11-2.2.0.jar`). Of course, Tinkerpop will
+your cluster. If so, just build TinkerPop from source with the corresponding dependencies changed in the various `pom.xml`
+files (e.g. `spark-core_2.11-2.2.0-some-vendor.jar` instead of `spark-core_2.11-2.2.0.jar`). Of course, TinkerPop will
 only build for exactly matching or slightly differing artifact versions.
\ No newline at end of file
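
The recipe's approach amounts to handing every Spark on Yarn setting to `SparkGraphComputer` through its own configuration, since (per reason 1 above) nothing passed to `spark-submit` ever reaches the `SparkContext` it creates. A minimal Gremlin Console sketch of that configuration style follows; the property values, input path and sizing are placeholders rather than the recipe's actual settings, and it assumes the hadoop and spark console plugins are activated so the classes used are already imported.

    // Sketch only: values below are placeholders, and the full recipe sets
    // additional spark.* properties (e.g. for shipping the TinkerPop jars to
    // the Yarn executors) that are omitted here.
    conf = new PropertiesConfiguration()
    conf.setProperty('gremlin.graph', 'org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph')
    conf.setProperty('gremlin.hadoop.graphReader', 'org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoInputFormat')
    conf.setProperty('gremlin.hadoop.graphWriter', 'org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat')
    conf.setProperty('gremlin.hadoop.inputLocation', 'tinkerpop-modern.kryo')  // placeholder HDFS path
    conf.setProperty('gremlin.hadoop.outputLocation', 'output')
    conf.setProperty('spark.master', 'yarn')         // set here, not via spark-submit
    conf.setProperty('spark.executor.memory', '1g')  // placeholder sizing
    graph = GraphFactory.open(conf)
    g = graph.traversal().withComputer(SparkGraphComputer)
    g.V().count()

With `HADOOP_CONF_DIR` on the classpath as in the exports above, the Yarn resource manager is found through the cluster configs rather than through additional Spark properties.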
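
The pointer this commit adds refers to the `Gremlin-Plugin-Dependencies` attribute in the manifest of the `spark-gremlin` jar. One hedged way to read that attribute from a running console, assuming `GREMLIN_HOME` from the recipe is exported and with the jar location and `x.y.z` version as placeholders for whatever your installation uses:

    // Placeholder path: adjust to wherever the spark-gremlin jar lives in your install.
    gremlinHome = System.getenv('GREMLIN_HOME')
    jar = new java.util.jar.JarFile("${gremlinHome}/ext/spark-gremlin/plugin/spark-gremlin-x.y.z.jar")
    jar.manifest.mainAttributes.getValue('Gremlin-Plugin-Dependencies')
    jar.close()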