Posted to dev@tinkerpop.apache.org by "Marko A. Rodriguez (JIRA)" <ji...@apache.org> on 2015/10/27 22:02:27 UTC
[jira] [Created] (TINKERPOP3-925) Use persisted SparkContext to persist an RDD across Spark jobs.
Marko A. Rodriguez created TINKERPOP3-925:
---------------------------------------------
Summary: Use persisted SparkContext to persist an RDD across Spark jobs.
Key: TINKERPOP3-925
URL: https://issues.apache.org/jira/browse/TINKERPOP3-925
Project: TinkerPop 3
Issue Type: Improvement
Components: hadoop
Affects Versions: 3.0.2-incubating
Reporter: Marko A. Rodriguez
Assignee: Marko A. Rodriguez
Fix For: 3.1.0-incubating
If a provider is using Spark, they are currently forced to use HDFS to store intermediate RDD data. However, if they plan on using that data in a {{GraphComputer}} "job chain," they should be able to look up a {{.cache()}}d RDD by name instead.
Create {{inputGraphRDD.name}} and {{outputGraphRDD.name}} configuration keys so that the configuration can reference RDDs held in {{SparkContext.getPersistentRDDs()}}.
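To illustrate the proposed lookup: Spark's {{SparkContext.getPersistentRDDs()}} returns a map keyed by RDD id (not name), while an RDD's name is assigned separately via {{RDD.setName()}}, so resolving {{inputGraphRDD.name}} requires scanning the map for a matching name. The sketch below simulates that with plain Python objects; {{NamedRDD}} and {{lookup_persisted_rdd}} are hypothetical stand-ins, not Spark or TinkerPop APIs.

```python
# Hypothetical stand-in for a persisted, named RDD; real Spark RDDs
# get their name via rdd.setName("...") and are registered in the
# context's persistent-RDD map (keyed by RDD id) once persisted.
class NamedRDD:
    def __init__(self, rdd_id, name):
        self.id = rdd_id
        self.name = name

def lookup_persisted_rdd(persisted, name):
    """Scan an id->RDD map (as getPersistentRDDs() returns) for a
    persisted RDD whose name matches; return None if absent."""
    for rdd in persisted.values():
        if rdd.name == name:
            return rdd
    return None

# Simulated contents of SparkContext.getPersistentRDDs():
persisted = {0: NamedRDD(0, "inputGraphRDD"), 1: NamedRDD(1, "outputGraphRDD")}
match = lookup_persisted_rdd(persisted, "inputGraphRDD")
```

In a job chain, the first job would persist and name its output graph RDD, and the next job's configuration would supply that same name as {{inputGraphRDD.name}} to skip the HDFS round-trip.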
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)