Posted to issues@beam.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2019/01/08 16:59:00 UTC

[jira] [Work logged] (BEAM-4783) Add bundleSize parameter to control splitting of Spark sources (useful for Dynamic Allocation)

     [ https://issues.apache.org/jira/browse/BEAM-4783?focusedWorklogId=182546&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-182546 ]

ASF GitHub Bot logged work on BEAM-4783:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 08/Jan/19 16:58
            Start Date: 08/Jan/19 16:58
    Worklog Time Spent: 10m 
      Work Description: kyle-winkelman commented on issue #6884: [BEAM-4783] Fix invalid parameter to set the partitioner in Spark GbK
URL: https://github.com/apache/beam/pull/6884#issuecomment-452372940
 
 
   I believe this refactor actually does the opposite of what it was supposed to do. Previously the `HashPartitioner` was used in all cases. I wanted to get rid of it, but @iemejia was concerned that doing so might bring back an old issue in which the SparkRunner, when in streaming mode, would shuffle the data twice. I therefore removed the `HashPartitioner` only in the case where bundleSize was specified. Can someone check whether a streaming workflow with a groupByKey still has a double shuffle? If not, we can remove most of this code and always call `rdd.groupByKey()` without the `HashPartitioner`. If it does, we need to flip this logic to do the opposite.
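   For reference, a minimal sketch (not the runner's actual grouping code) of the two paths discussed above; the JavaPairRDD is passed in as an assumed input:

    import org.apache.spark.HashPartitioner;
    import org.apache.spark.api.java.JavaPairRDD;

    class GroupByKeySketch {
      // Old path: always pin the shuffle output with an explicit HashPartitioner.
      static <K, V> JavaPairRDD<K, Iterable<V>> groupWithHashPartitioner(JavaPairRDD<K, V> pairs) {
        return pairs.groupByKey(new HashPartitioner(pairs.getNumPartitions()));
      }

      // Path under discussion (currently only when bundleSize is set): let Spark
      // choose the default partitioner instead of forcing a HashPartitioner.
      static <K, V> JavaPairRDD<K, Iterable<V>> groupWithDefaultPartitioner(JavaPairRDD<K, V> pairs) {
        return pairs.groupByKey();
      }
    }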
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 182546)
    Time Spent: 6h  (was: 5h 50m)

> Add bundleSize parameter to control splitting of Spark sources (useful for Dynamic Allocation)
> ----------------------------------------------------------------------------------------------
>
>                 Key: BEAM-4783
>                 URL: https://issues.apache.org/jira/browse/BEAM-4783
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-spark
>    Affects Versions: 2.8.0
>            Reporter: Kyle Winkelman
>            Assignee: Kyle Winkelman
>            Priority: Major
>             Fix For: 2.8.0, 2.9.0
>
>          Time Spent: 6h
>  Remaining Estimate: 0h
>
> When the spark-runner is used along with the configuration spark.dynamicAllocation.enabled=true, the SourceRDD does not detect this. It then falls back to the default parallelism described here:
>       // when running on YARN/SparkDeploy it's the result of max(totalCores, 2).
>       // when running on Mesos it's 8.
>       // when running local it's the total number of cores (local = 1, local[N] = N,
>       // local[*] = estimation of the machine's cores).
>       // ** the configuration "spark.default.parallelism" takes precedence over all of the above **
> So in most cases this default is quite small. This is an issue when using a very large input file, as it will only be split into two bundles.
> I believe that when Dynamic Allocation is enabled the SourceRDD should use the DEFAULT_BUNDLE_SIZE, and possibly expose a SparkPipelineOptions option that allows you to change this DEFAULT_BUNDLE_SIZE.
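
A minimal sketch of how such an option could be used from a pipeline, assuming the bundleSize parameter from this ticket is exposed on SparkPipelineOptions as setBundleSize (the accessor name is inferred from the ticket, not verified against a release):

    import org.apache.beam.runners.spark.SparkPipelineOptions;
    import org.apache.beam.runners.spark.SparkRunner;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class BundleSizeSketch {
      public static void main(String[] args) {
        SparkPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).as(SparkPipelineOptions.class);
        options.setRunner(SparkRunner.class);
        // Ask the source to split into ~64 MB bundles instead of relying on
        // Spark's defaultParallelism, which can be tiny under dynamic allocation.
        options.setBundleSize(64L * 1024 * 1024);
        Pipeline pipeline = Pipeline.create(options);
        // ... build the pipeline as usual, then run it ...
        pipeline.run().waitUntilFinish();
      }
    }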



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)