Posted to commits@beam.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2018/08/23 14:51:00 UTC

[jira] [Work logged] (BEAM-313) Enable the use of an existing spark context with the SparkPipelineRunner

     [ https://issues.apache.org/jira/browse/BEAM-313?focusedWorklogId=137405&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-137405 ]

ASF GitHub Bot logged work on BEAM-313:
---------------------------------------

                Author: ASF GitHub Bot
            Created on: 23/Aug/18 14:50
            Start Date: 23/Aug/18 14:50
    Worklog Time Spent: 10m 
      Work Description: kohlerm commented on issue #401: [BEAM-313] Enable the use of an existing spark context with the SparkPipelineRunner
URL: https://github.com/apache/beam/pull/401#issuecomment-415446231
 
 
   @amitsela does this mean Beam now works with the Spark job server? The Beam documentation even mentions the Spark job server, but it's not clear to me how it would work.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 137405)
            Time Spent: 10m
    Remaining Estimate: 0h

> Enable the use of an existing spark context with the SparkPipelineRunner
> ------------------------------------------------------------------------
>
>                 Key: BEAM-313
>                 URL: https://issues.apache.org/jira/browse/BEAM-313
>             Project: Beam
>          Issue Type: New Feature
>          Components: runner-spark
>            Reporter: Abbass Marouni
>            Assignee: Jean-Baptiste Onofré
>            Priority: Major
>             Fix For: 0.3.0-incubating
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The general use case is that the SparkPipelineRunner creates its own Spark context and uses it for the pipeline execution.
> An alternative is to provide the SparkPipelineRunner with an existing Spark context. This is useful in many scenarios where the Spark context is managed outside of Beam (context reuse, advanced context management, Spark job server, ...).
> Code sample: https://github.com/amarouni/incubator-beam/commit/fe0bb517bf0ccde07ef5a61f3e44df695b75f076
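For illustration, a minimal sketch of what handing a pre-existing context to the Spark runner might look like. The `SparkContextOptions` interface and its `setUsesProvidedSparkContext` / `setProvidedSparkContext` setters are assumed from later versions of the Spark runner and may differ from the API in the linked commit; verify against your Beam release.

```java
import org.apache.beam.runners.spark.SparkContextOptions;
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.spark.api.java.JavaSparkContext;

public class ProvidedSparkContextExample {
  public static void main(String[] args) {
    // A Spark context created and managed outside of Beam,
    // e.g. by a Spark job server or a long-running application.
    JavaSparkContext jsc =
        new JavaSparkContext("local[2]", "beam-provided-context");

    // Hand the existing context to the Spark runner instead of
    // letting the runner create its own. (Option names assumed;
    // check the SparkContextOptions API of your Beam version.)
    SparkContextOptions options =
        PipelineOptionsFactory.as(SparkContextOptions.class);
    options.setRunner(SparkRunner.class);
    options.setUsesProvidedSparkContext(true);
    options.setProvidedSparkContext(jsc);

    Pipeline pipeline = Pipeline.create(options);
    // ... build the pipeline here ...
    pipeline.run().waitUntilFinish();

    // The provided context's lifecycle stays with the caller;
    // the runner should not stop a context it did not create.
    jsc.stop();
  }
}
```

The key design point is that context ownership stays with the caller: Beam reuses the context for pipeline execution but never tears it down, which is what makes patterns like a shared job-server context possible.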



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)