Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2017/06/03 09:22:04 UTC

[jira] [Resolved] (SPARK-20975) Executors are not released if speculation + dynamic allocation enabled

     [ https://issues.apache.org/jira/browse/SPARK-20975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-20975.
-------------------------------
    Resolution: Invalid

There's no useful detail here. It's normal for executors not to be released in some cases, such as when the minimum number are running or when there is cached data. Questions should go to the mailing list.
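For illustration only, a minimal Scala sketch of the dynamic-allocation settings that govern when executors are retained or released; the app name and timeout values are made-up placeholders, not recommendations:

{code:java}
// Sketch: settings that control executor release under dynamic allocation.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("dynamic-allocation-sketch")           // placeholder name
  // Dynamic allocation requires the external shuffle service.
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")
  // Executors are never released below this floor.
  .set("spark.dynamicAllocation.minExecutors", "1")
  // Idle executors without cached data are released after this timeout.
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
  // Executors holding cached blocks are kept until this separate timeout.
  .set("spark.dynamicAllocation.cachedExecutorIdleTimeout", "300s")

val sc = new SparkContext(conf)
{code}

Note that spark.dynamicAllocation.cachedExecutorIdleTimeout defaults to infinity, so executors holding cached blocks are normally never removed; that alone can make executors appear to stay around "forever".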

> Executors are not released if speculation + dynamic allocation enabled
> -----------------------------------------------------------------------
>
>                 Key: SPARK-20975
>                 URL: https://issues.apache.org/jira/browse/SPARK-20975
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.2
>            Reporter: harish chandra
>
> Whenever the user enables Spark speculation together with dynamic allocation, then after running a few jobs in the Spark context, some executors keep running indefinitely and are never released.
> *Configuration*
> {code:java}
>     - "spark.master=yarn-client"
>     - "spark.yarn.am.extraJavaOptions=-Dhdp.version=2.5.3.0-37"
>     - "spark.sql.sources.maxConcurrentWrites=1"
>     - "parquet.memory.pool.ratio=0.1"
>     - "hive.map.aggr=true"
>     - "spark.sql.shuffle.partitions=1200"
>     - "spark.scheduler.mode=FAIR"
>     - "spark.scheduler.allocation.file=/etc/spark/conf/fairscheduler.xml.template"
>     - "spark.speculation=true"
>     - "spark.dynamicAllocation.enabled=true"
>     - "spark.shuffle.service.enabled=true"
>     - "spark.dynamicAllocation.executorIdleTimeout=15s"
>     - "spark.dynamicAllocation.cachedExecutorIdleTimeout=15s"
>     - "spark.dynamicAllocation.initialExecutors=1"
>     - "spark.dynamicAllocation.maxExecutors=900"
>     - "spark.dynamicAllocation.minExecutors=1"
>     - "spark.yarn.max.executor.failures=10000"
>     - "spark.executor.cores=2"
>     - "spark.executor.memory=8G"
>     - "spark.sql.codegen=true"
>     - "spark.sql.codegen.wholeStage=true"
>     - "spark.sql.shuffle.partitions=75"
> {code}
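
For reference, a short Scala sketch (illustrative only; it assumes an interactive shell or driver program where the SparkContext {{sc}} is already available) showing one way to count the executors still registered with the driver while the application is otherwise idle:

{code:java}
// Sketch: count executors still registered with the driver.
// getExecutorMemoryStatus reports one entry per block manager,
// including the driver's own, so the driver is subtracted out.
val blockManagers = sc.getExecutorMemoryStatus.keys.toSeq
val numExecutors = blockManagers.size - 1
println(s"Executors still registered (excluding driver): $numExecutors")
{code}

If this number stays above spark.dynamicAllocation.minExecutors long after both idle timeouts have elapsed and no data is cached, that would be the behavior worth describing in detail on the mailing list.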


