Posted to issues@beam.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2019/10/17 12:53:00 UTC

[jira] [Work logged] (BEAM-8191) Multiple Flatten.pCollections() transforms generate an overwhelming number of tasks

     [ https://issues.apache.org/jira/browse/BEAM-8191?focusedWorklogId=329823&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-329823 ]

ASF GitHub Bot logged work on BEAM-8191:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 17/Oct/19 12:52
            Start Date: 17/Oct/19 12:52
    Worklog Time Spent: 10m 
      Work Description: RyanSkraba commented on pull request #9544: [BEAM-8191] Fixes potentially large number of tasks on Spark after Flatten.pCollections()
URL: https://github.com/apache/beam/pull/9544#discussion_r335985096
 
 

 ##########
 File path: runners/spark/src/main/java/org/apache/beam/runners/spark/translation/SparkBatchPortablePipelineTranslator.java
 ##########
 @@ -353,6 +353,11 @@ public void setName(String name) {
         index++;
       }
       unionRDD = context.getSparkContext().union(rdds);
+
+      Partitioner partitioner = getPartitioner(context);
 
 Review comment:
   This *does* sound like a good idea!  I linked your JIRA to https://issues.apache.org/jira/browse/BEAM-8384 .  Before adding a new pipeline option, it would be great if there were a better "overall" view of how the SparkRunner is managing parallelism.
   
   This seems like it would be a good area to collaborate.
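   
   For reference, the rough shape of the change under discussion looks something like the fragment below. This is a sketch only, continuing the diff above: it assumes the `getPartitioner(context)` helper and the `unionRDD` variable from that context, and the actual PR may place the call or choose the shuffle flag differently.
   
       Partitioner partitioner = getPartitioner(context);
       if (partitioner != null) {
         // Collapse the summed partition count of the union back down to the
         // configured parallelism; shuffle=false merges existing partitions
         // locally instead of triggering a full reshuffle.
         unionRDD = unionRDD.coalesce(partitioner.numPartitions(), false);
       }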
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 329823)
    Time Spent: 1h 40m  (was: 1.5h)

> Multiple Flatten.pCollections() transforms generate an overwhelming number of tasks
> -----------------------------------------------------------------------------------
>
>                 Key: BEAM-8191
>                 URL: https://issues.apache.org/jira/browse/BEAM-8191
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-spark
>    Affects Versions: 2.12.0, 2.14.0, 2.15.0
>            Reporter: Peter Backx
>            Assignee: Peter Backx
>            Priority: Major
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Flatten.pCollections() is translated into a Spark union operation. The resulting RDD has a number of partitions equal to the sum of the partitions of the originating RDDs.
> If you flatten 2 PCollections with 10 partitions each, the result will have 20 partitions.
> This is fine in small pipelines, but in our main pipeline the number of tasks quickly grows out of hand (over 500k tasks in one stage). This overloads the driver and crashes the process.
> I have created a small repro case:
> [https://github.com/pbackx/beam-flatmap-test]
>  
> A possible solution is to add a coalesce call after the union (sketched below). We have been testing this and it seems to do exactly what we want, but I'm not sure whether this fix is applicable in all cases. 
> I will open a PR for this so that you can review my proposed change and discuss whether or not it's a good idea.
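> 
> Roughly, in plain Spark terms, the problem and the proposed coalesce look like this (an untested, standalone sketch, not Beam code; the class name, RDD contents and the 10-partition counts are only for illustration):
> 
>     import java.util.Arrays;
>     import org.apache.spark.SparkConf;
>     import org.apache.spark.api.java.JavaRDD;
>     import org.apache.spark.api.java.JavaSparkContext;
> 
>     public class FlattenPartitionsRepro {
>       public static void main(String[] args) {
>         SparkConf conf = new SparkConf().setMaster("local[4]").setAppName("flatten-partitions");
>         try (JavaSparkContext sc = new JavaSparkContext(conf)) {
>           // Two 10-partition RDDs stand in for the two flattened PCollections.
>           JavaRDD<Integer> a = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5), 10);
>           JavaRDD<Integer> b = sc.parallelize(Arrays.asList(6, 7, 8, 9, 10), 10);
> 
>           JavaRDD<Integer> union = a.union(b);
>           System.out.println(union.getNumPartitions());      // 20: the partition counts add up
> 
>           // Proposed fix: coalesce back down after the union.
>           JavaRDD<Integer> coalesced = union.coalesce(10, false);
>           System.out.println(coalesced.getNumPartitions());  // 10
>         }
>       }
>     }
> 
> (Coalesce without a shuffle just merges existing partitions, so it avoids a full reshuffle, but the merged partitions can end up unevenly sized.)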



--
This message was sent by Atlassian Jira
(v8.3.4#803005)