Posted to dev@tinkerpop.apache.org by "Russell Alexander Spitzer (JIRA)" <ji...@apache.org> on 2015/10/23 00:19:27 UTC

[jira] [Created] (TINKERPOP3-911) Allow setting Spark JobGroup/Custom Properties based on hadoop conf

Russell Alexander Spitzer created TINKERPOP3-911:
----------------------------------------------------

             Summary: Allow setting Spark JobGroup/Custom Properties based on hadoop conf
                 Key: TINKERPOP3-911
                 URL: https://issues.apache.org/jira/browse/TINKERPOP3-911
             Project: TinkerPop 3
          Issue Type: Improvement
          Components: hadoop
            Reporter: Russell Alexander Spitzer
            Assignee: Marko A. Rodriguez


When using a persistent SparkContext it can be beneficial to pass in new configuration options for new users/GraphComputers. Currently, the .getOrCreate call always reuses the configuration from the initial construction. To work around this, we should iterate over all of the properties passed into the GraphComputer and set them as local context properties on the thread we are operating on.

See
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/SparkContext.scala#L630-L640

This would let different GraphComputers set different Spark properties, for use with features like the Spark Fair Scheduler.
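
A minimal sketch of the proposed workaround, assuming the GraphComputer's properties are available as a java.util.Map and that the "spark." key prefix is the convention for properties to propagate (both are illustrative assumptions, not the ticket's actual API):

```scala
import org.apache.spark.SparkContext
import scala.collection.JavaConverters._

object LocalPropertySketch {
  // Copy per-GraphComputer properties onto the shared SparkContext as
  // thread-local properties. setLocalProperty attaches each value to the
  // current thread only, so jobs submitted by other threads that share the
  // persistent context keep the configuration they were created with.
  def applyLocalProperties(sc: SparkContext,
                           conf: java.util.Map[String, String]): Unit = {
    conf.asScala.foreach { case (key, value) =>
      if (key.startsWith("spark."))
        sc.setLocalProperty(key, value)
    }
  }
}

// Example: route this thread's jobs to a dedicated Fair Scheduler pool.
// LocalPropertySketch.applyLocalProperties(
//   sc, Map("spark.scheduler.pool" -> "analytics").asJava)
```

Because the properties are thread-local, two GraphComputers running on the same persistent context from different threads can target different scheduler pools without interfering with each other.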




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)