Posted to commits@beam.apache.org by "Jean-Baptiste Onofré (JIRA)" <ji...@apache.org> on 2018/07/13 17:24:01 UTC
[jira] [Assigned] (BEAM-4783) Spark SourceRDD Not Designed With Dynamic Allocation In Mind
[ https://issues.apache.org/jira/browse/BEAM-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jean-Baptiste Onofré reassigned BEAM-4783:
------------------------------------------
Assignee: Jean-Baptiste Onofré (was: Amit Sela)
> Spark SourceRDD Not Designed With Dynamic Allocation In Mind
> ------------------------------------------------------------
>
> Key: BEAM-4783
> URL: https://issues.apache.org/jira/browse/BEAM-4783
> Project: Beam
> Issue Type: Improvement
> Components: runner-spark
> Affects Versions: 2.5.0
> Reporter: Kyle Winkelman
> Assignee: Jean-Baptiste Onofré
> Priority: Major
> Labels: newbie
>
> When the spark-runner is used with the configuration spark.dynamicAllocation.enabled=true, the SourceRDD does not detect this. It then falls back to the default parallelism described in this comment from Spark's source:
> // when running on YARN/SparkDeploy it's the result of max(totalCores, 2).
> // when running on Mesos it's 8.
> // when running local it's the total number of cores (local = 1, local[N] = N,
> // local[*] = estimation of the machine's cores).
> // ** the configuration "spark.default.parallelism" takes precedence over all of the above **
> So in most cases this default is quite small. With dynamic allocation, executors register lazily, so totalCores is tiny at the moment the RDD is planned. This is an issue when using a very large input file, as it will only get split in half (the max(totalCores, 2) fallback yields just 2 partitions when no executors have registered yet).
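> For illustration, here is a rough sketch of how the bounded SourceRDD sizes its partitions today (names are approximate, not the exact Beam code):
>
> // Sketch: partition sizing driven by Spark's default parallelism.
> // Under dynamic allocation, few (or no) executors have registered at
> // this point, so defaultParallelism() is typically very small.
> int numPartitions = Math.max(jsc.defaultParallelism(), 1);
> long desiredSizeBytes = source.getEstimatedSizeBytes(options) / numPartitions;
> List<? extends BoundedSource<T>> splits = source.split(desiredSizeBytes, options);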
> I believe that when Dynamic Allocation is enabled the SourceRDD should fall back to DEFAULT_BUNDLE_SIZE, and possibly expose an option on SparkPipelineOptions that allows you to change this bundle size.
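> A minimal sketch of the proposed behavior (the getBundleSize option is hypothetical; spark.dynamicAllocation.enabled is read from the SparkConf):
>
> // Sketch: use a fixed bundle size when dynamic allocation is on,
> // since defaultParallelism() is meaningless before executors scale up.
> boolean dynamicAllocation =
>     sparkConf.getBoolean("spark.dynamicAllocation.enabled", false);
> long desiredSizeBytes;
> if (dynamicAllocation) {
>   // Hypothetical SparkPipelineOptions getter, defaulting to DEFAULT_BUNDLE_SIZE.
>   desiredSizeBytes = options.getBundleSize();
> } else {
>   desiredSizeBytes =
>       source.getEstimatedSizeBytes(options) / Math.max(jsc.defaultParallelism(), 1);
> }
> List<? extends BoundedSource<T>> splits = source.split(desiredSizeBytes, options);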
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)