Posted to issues@spark.apache.org by "harish chandra (JIRA)" <ji...@apache.org> on 2017/06/03 09:40:04 UTC

[jira] [Comment Edited] (SPARK-20975) Executors are not released if speculation + dynamic allocation enabled

    [ https://issues.apache.org/jira/browse/SPARK-20975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16035903#comment-16035903 ] 

harish chandra edited comment on SPARK-20975 at 6/3/17 9:39 AM:
----------------------------------------------------------------

Thanks [~sowen] for the quick response.

I am passing the config

{code:java}
spark.dynamicAllocation.executorIdleTimeout=15s
{code}

which says an idle executor should be released after 15 seconds when no job is running, but I don't see this expected behavior.
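
For context, a minimal sketch (not part of the original report) of the behavior this timeout is expected to produce. The application name and the toy job are illustrative assumptions; only the config keys are taken from above:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: dynamic allocation with a 15s idle timeout. With these settings,
// an executor that has had no running tasks for ~15 seconds should be released.
val conf = new SparkConf()
  .setAppName("idle-timeout-check") // illustrative name
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true") // required by dynamic allocation
  .set("spark.dynamicAllocation.executorIdleTimeout", "15s")

val sc = new SparkContext(conf)
sc.parallelize(1 to 1000, 10).count() // run one short job, then leave the context idle
Thread.sleep(60000)                   // idle executors should be removed during this pause
sc.stop()
{code}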





> Executors are not released if speculation + dynamic allocation enabled
> ---------------------------------------------------------------------
>
>                 Key: SPARK-20975
>                 URL: https://issues.apache.org/jira/browse/SPARK-20975
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.2
>            Reporter: harish chandra
>
> Whenever the user enables Spark speculation + dynamic allocation, then after running a few jobs in the Spark context, some executors keep running forever and are never released (a reproduction sketch follows the configuration below).
> *Configuration*
> {code:java}
>     - "spark.master=yarn-client"
>     - "spark.yarn.am.extraJavaOptions=-Dhdp.version=2.5.3.0-37"
>     - "spark.sql.sources.maxConcurrentWrites=1"
>     - "parquet.memory.pool.ratio=0.1"
>     - "hive.map.aggr=true"
>     - "spark.sql.shuffle.partitions=1200"
>     - "spark.scheduler.mode=FAIR"
>     - "spark.scheduler.allocation.file=/etc/spark/conf/fairscheduler.xml.template"
>     - "spark.speculation=true"
>     - "spark.dynamicAllocation.enabled=true"
>     - "spark.shuffle.service.enabled=true"
>     - "spark.dynamicAllocation.executorIdleTimeout=15s"
>     - "spark.dynamicAllocation.cachedExecutorIdleTimeout=15s"
>     - "spark.dynamicAllocation.initialExecutors=1"
>     - "spark.dynamicAllocation.maxExecutors=900"
>     - "spark.dynamicAllocation.minExecutors=1"
>     - "spark.yarn.max.executor.failures=10000"
>     - "spark.executor.cores=2"
>     - "spark.executor.memory=8G"
>     - "spark.sql.codegen=true"
>     - "spark.sql.codegen.wholeStage=true"
>     - "spark.sql.shuffle.partitions=75"
> {code}
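
For reproduction purposes, a hedged sketch (not part of the original report) that drives a few jobs with speculation and dynamic allocation enabled and then pauses to check whether executors are released. The object name, the skewed toy jobs, and the sleep durations are illustrative assumptions; the config values mirror the block above:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

object SpeculationDynAllocRepro {
  def main(args: Array[String]): Unit = {
    // Only the settings relevant to the reported behavior.
    val conf = new SparkConf()
      .setAppName("speculation-dynalloc-repro") // illustrative name
      .set("spark.speculation", "true")
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true")
      .set("spark.dynamicAllocation.executorIdleTimeout", "15s")
      .set("spark.dynamicAllocation.cachedExecutorIdleTimeout", "15s")
      .set("spark.dynamicAllocation.minExecutors", "1")

    val sc = new SparkContext(conf)

    // Run a few jobs with skewed task durations so that speculative copies are launched.
    for (i <- 1 to 3) {
      sc.parallelize(1 to 200, 50)
        .map { x =>
          if (x % 50 == 0) Thread.sleep(5000) // straggler tasks that can trigger speculation
          x * 2
        }
        .count()
    }

    // After the jobs finish, executors beyond minExecutors are expected to be
    // released within ~15 seconds; per the report, some of them are never freed.
    Thread.sleep(120000)
    sc.stop()
  }
}
{code}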


