Posted to commits@beam.apache.org by "Abbass Marouni (JIRA)" <ji...@apache.org> on 2016/05/31 10:04:12 UTC
[jira] [Created] (BEAM-313) Enable the use of an existing Spark context with the SparkPipelineRunner
Abbass Marouni created BEAM-313:
-----------------------------------
Summary: Enable the use of an existing spark context with the SparkPipelineRunner
Key: BEAM-313
URL: https://issues.apache.org/jira/browse/BEAM-313
Project: Beam
Issue Type: New Feature
Reporter: Abbass Marouni
By default, the SparkPipelineRunner creates its own Spark context and uses it for pipeline execution.
An alternative is to provide the SparkPipelineRunner with an existing Spark context. This is useful in many scenarios where the Spark context is managed outside of Beam (context reuse, advanced context management, a Spark job server, ...).
Code sample: https://github.com/amarouni/incubator-beam/commit/fe0bb517bf0ccde07ef5a61f3e44df695b75f076
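A rough sketch of what using the proposed feature might look like. The option interface and setter names (SparkContextOptions, setUsesProvidedSparkContext, setProvidedSparkContext) are hypothetical illustrations of the proposal, not a confirmed API; the Beam-specific calls are shown as comments since they depend on the linked commit:

```
// Sketch only: assumes Spark and the Beam Spark runner are on the classpath.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ProvidedContextExample {
  public static void main(String[] args) {
    // A Spark context managed outside of Beam (e.g. by a Spark job server),
    // which could then be reused across several pipeline executions.
    JavaSparkContext jsc = new JavaSparkContext(
        new SparkConf().setAppName("beam-pipeline").setMaster("local[2]"));

    // Hypothetical options telling the SparkPipelineRunner to reuse the
    // provided context instead of creating its own:
    // SparkContextOptions options = PipelineOptionsFactory.as(SparkContextOptions.class);
    // options.setUsesProvidedSparkContext(true);
    // options.setProvidedSparkContext(jsc);
    // Pipeline p = Pipeline.create(options);
    // ... build and run the pipeline ...

    // The caller, not the runner, remains responsible for the context lifecycle.
    jsc.stop();
  }
}
```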
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)