Posted to issues@spark.apache.org by "Yue Ma (JIRA)" <ji...@apache.org> on 2017/09/14 11:12:00 UTC

[jira] [Updated] (SPARK-22008) Spark Streaming Dynamic Allocation auto fix maxNumExecutors

     [ https://issues.apache.org/jira/browse/SPARK-22008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yue Ma updated SPARK-22008:
---------------------------
    Description: 
In Spark Streaming dynamic resource allocation (DRA), the metric used to decide whether to add or remove executors is the ratio R of batch processing time to batch duration, and the parameter "spark.streaming.dynamicAllocation.maxExecutors" sets the maximum number of executors (a sketch of this decision follows the list below). Currently this does not work well with Spark Streaming, for several reasons:
(1) If the maximum number of executors we actually need is 10 but "spark.streaming.dynamicAllocation.maxExecutors" is set to 15, we can waste up to 5 executors.
(2) If the number of Kafka topic partitions changes, the number of partitions of the KafkaRDD, and hence the number of tasks in a stage, changes too. The maximum number of executors we need then changes as well, so maxExecutors should track the number of tasks.
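For context, here is a minimal, self-contained sketch of the ratio test that drives the current streaming DRA. The configuration keys spark.streaming.dynamicAllocation.scalingUpRatio and scalingDownRatio (defaults 0.9 and 0.3) are real; the decide helper and ScalingDecision types are illustrative, since the actual logic lives in the private org.apache.spark.streaming.scheduler.ExecutorAllocationManager:

{code:scala}
// Illustrative sketch of the ratio-based scaling decision in streaming DRA.
// R = average batch processing time / batch duration, compared against the
// spark.streaming.dynamicAllocation.scalingUpRatio / scalingDownRatio settings.
object RatioDecision {
  sealed trait ScalingDecision
  case class ScaleUp(executorsToAdd: Int) extends ScalingDecision
  case object ScaleDown extends ScalingDecision
  case object NoChange extends ScalingDecision

  def decide(avgProcTimeMs: Double,
             batchDurationMs: Double,
             scalingUpRatio: Double = 0.9,   // default of ...scalingUpRatio
             scalingDownRatio: Double = 0.3  // default of ...scalingDownRatio
            ): ScalingDecision = {
    val r = avgProcTimeMs / batchDurationMs
    if (r >= scalingUpRatio) ScaleUp(math.max(math.round(r).toInt, 1))
    else if (r <= scalingDownRatio) ScaleDown
    else NoChange
  }
}
{code}

Any scale-up request is capped by maxExecutors, which is exactly why a stale cap either wastes resources (case 1) or throttles scale-out (case 2).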

The goal of this JIRA is to adjust maxNumExecutors automatically: using a SparkListener, when a stage is submitted, first figure out the number of executors the stage needs, then update maxNumExecutors, as in the sketch below.
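A minimal sketch of such a listener, assuming a coresPerExecutor value and a hypothetical updateMaxExecutors callback. onStageSubmitted and stageInfo.numTasks are real SparkListener API; the streaming ExecutorAllocationManager exposes no public setter today, so the update hook is a placeholder:

{code:scala}
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageSubmitted}

// Sketch: derive the executor cap from the size of each submitted stage.
// updateMaxExecutors is a hypothetical hook; wiring it into the streaming
// ExecutorAllocationManager is the work this JIRA proposes.
class MaxExecutorsListener(coresPerExecutor: Int,
                           updateMaxExecutors: Int => Unit) extends SparkListener {
  override def onStageSubmitted(stageSubmitted: SparkListenerStageSubmitted): Unit = {
    val numTasks = stageSubmitted.stageInfo.numTasks
    // One wave of tasks needs ceil(numTasks / coresPerExecutor) executors.
    val needed = math.max((numTasks + coresPerExecutor - 1) / coresPerExecutor, 1)
    updateMaxExecutors(needed)
  }
}
{code}

Such a listener could be registered with SparkContext.addSparkListener; when the Kafka topic gains or loses partitions, numTasks changes on the next submitted stage and the cap follows it.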


> Spark Streaming Dynamic Allocation auto fix maxNumExecutors
> -----------------------------------------------------------
>
>                 Key: SPARK-22008
>                 URL: https://issues.apache.org/jira/browse/SPARK-22008
>             Project: Spark
>          Issue Type: Improvement
>          Components: DStreams
>    Affects Versions: 2.2.0
>            Reporter: Yue Ma
>            Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org