Posted to commits@beam.apache.org by "Jean-Baptiste Onofré (JIRA)" <ji...@apache.org> on 2017/01/04 17:38:58 UTC

[jira] [Resolved] (BEAM-1206) PCollections used as a sideInput are unnecessarily re-evaluated.

     [ https://issues.apache.org/jira/browse/BEAM-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jean-Baptiste Onofré resolved BEAM-1206.
----------------------------------------
       Resolution: Duplicate
    Fix Version/s: Not applicable

This issue is directly related to BEAM-649, so fixing BEAM-649 will fix this one as well.

> PCollections used as a sideInput are unnecessarily re-evaluated.
> ----------------------------------------------------------------
>
>                 Key: BEAM-1206
>                 URL: https://issues.apache.org/jira/browse/BEAM-1206
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-spark
>    Affects Versions: 0.4.0
>            Reporter: Ryan Skraba
>            Assignee: Jean-Baptiste Onofré
>             Fix For: Not applicable
>
>
> The SparkRunner keeps track of "leaf" transforms in the job graph and ensures that they have been executed when the pipeline is run.
> However, when a PCollection is used as a sideInput, it is evaluated *once* to get the materialized values but is still considered a leaf node.  It is evaluated a second time at the end of the Spark job, which is unnecessary (and can cause unexpected behaviour).
> One of the symptoms of this bug is that {{Sink}} will create spurious writers that execute but are never finalized. Specifically, the HDFSFileSink will _always_ die with a {{Writer results and output files do not match}} error. (A minimal sketch of the side-input pattern in question follows below.)
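
For context, here is a minimal sketch of a pipeline that consumes a PCollection as a side input, using the Beam Java SDK's Create, View and ParDo transforms; the transform names and values are illustrative only, not taken from the report:

{code:java}
import java.util.List;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.View;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionView;

public class SideInputSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // The side PCollection: materialized once as a view for the ParDo below.
    // (Illustrative transform name.)
    PCollection<Integer> side = p.apply("SideSource", Create.of(1, 2, 3));
    final PCollectionView<List<Integer>> sideView = side.apply(View.<Integer>asList());

    // The main branch reads the materialized side input inside a DoFn.
    p.apply("MainSource", Create.of("a", "b", "c"))
        .apply(ParDo.of(new DoFn<String, String>() {
          @ProcessElement
          public void processElement(ProcessContext c) {
            List<Integer> values = c.sideInput(sideView);
            c.output(c.element() + ":" + values.size());
          }
        }).withSideInputs(sideView));

    p.run();
  }
}
{code}

Per the report, on the affected SparkRunner versions the side branch ("SideSource" above) is evaluated once to materialize the view, yet is still tracked as a leaf and re-evaluated at the end of the Spark job.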



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)