Posted to commits@tinkerpop.apache.org by dk...@apache.org on 2019/07/08 18:37:27 UTC
[tinkerpop] branch master updated: CTR: removed references to `BulkDumperVertexProgram` from docs.
This is an automated email from the ASF dual-hosted git repository.
dkuppitz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/tinkerpop.git
The following commit(s) were added to refs/heads/master by this push:
new b2967ed CTR: removed references to `BulkDumperVertexProgram` from docs.
new 06c5a90 Merge branch 'tp34'
b2967ed is described below
commit b2967ed44ae9691f57bbc62c3a434c8e59fbe424
Author: Daniel Kuppitz <da...@hotmail.com>
AuthorDate: Mon Jul 8 11:36:43 2019 -0700
CTR: removed references to `BulkDumperVertexProgram` from docs.
---
docs/src/reference/implementations-spark.asciidoc | 4 ++--
docs/src/reference/the-graphcomputer.asciidoc | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/src/reference/implementations-spark.asciidoc b/docs/src/reference/implementations-spark.asciidoc
index ce917ee..e64b467 100644
--- a/docs/src/reference/implementations-spark.asciidoc
+++ b/docs/src/reference/implementations-spark.asciidoc
@@ -122,7 +122,7 @@ references to that Spark Context. The exception to this rule are those propertie
Finally, there is a `spark` object that can be used to manage persisted RDDs (see <<interacting-with-spark, Interacting with Spark>>).
-anchor:bulkdumpervertexprogramusingspark[]
+anchor:clonevertexprogramusingspark[]
[[clonevertexprogramusingspark]]
===== Using CloneVertexProgram
@@ -136,7 +136,7 @@ The example below takes a Hadoop graph as the input (in `GryoInputFormat`) and e
hdfs.copyFromLocal('data/tinkerpop-modern.kryo', 'tinkerpop-modern.kryo')
graph = GraphFactory.open('conf/hadoop/hadoop-gryo.properties')
graph.configuration().setProperty('gremlin.hadoop.graphWriter', 'org.apache.tinkerpop.gremlin.hadoop.structure.io.graphson.GraphSONOutputFormat')
-graph.compute(SparkGraphComputer).program(BulkDumperVertexProgram.build().create()).submit().get()
+graph.compute(SparkGraphComputer).program(CloneVertexProgram.build().create()).submit().get()
hdfs.ls('output')
hdfs.head('output/~g')
----
diff --git a/docs/src/reference/the-graphcomputer.asciidoc b/docs/src/reference/the-graphcomputer.asciidoc
index e98c984..a2c2db6 100644
--- a/docs/src/reference/the-graphcomputer.asciidoc
+++ b/docs/src/reference/the-graphcomputer.asciidoc
@@ -460,7 +460,7 @@ custom distance properties or traversals can lead to much longer runtimes and a
Note that `GraphTraversal` provides a <<shortestpath-step,`shortestPath()`>>-step.
-anchor:bulkdumpervertexprogram[]
+anchor:clonevertexprogram[]
[[clonevertexprogram]]
=== CloneVertexProgram
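For readers following the change: the updated documentation example (taken from the first diff hunk above) now drives the bulk dump with `CloneVertexProgram`, which copies the input graph verbatim to the configured output location. A lightly commented sketch of that Gremlin Console session, assuming a local Hadoop/Spark setup with the stock `conf/hadoop/hadoop-gryo.properties` configuration shipped with TinkerPop:

```groovy
// Copy the sample Gryo graph into HDFS and open it as a Hadoop graph.
hdfs.copyFromLocal('data/tinkerpop-modern.kryo', 'tinkerpop-modern.kryo')
graph = GraphFactory.open('conf/hadoop/hadoop-gryo.properties')

// Emit the output as GraphSON instead of the default Gryo format.
graph.configuration().setProperty('gremlin.hadoop.graphWriter',
    'org.apache.tinkerpop.gremlin.hadoop.structure.io.graphson.GraphSONOutputFormat')

// CloneVertexProgram replaces the removed BulkDumperVertexProgram:
// it clones the input graph to the output location via Spark.
graph.compute(SparkGraphComputer).program(CloneVertexProgram.build().create()).submit().get()

// Inspect the dumped graph in HDFS.
hdfs.ls('output')
hdfs.head('output/~g')
```

Aside from the program class name, the session is identical to the old `BulkDumperVertexProgram` example, which is why the docs change is a two-line substitution per file.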