Posted to issues@spark.apache.org by "zhoukang (JIRA)" <ji...@apache.org> on 2019/01/11 09:39:00 UTC
[jira] [Updated] (SPARK-26601) Make broadcast-exchange thread pool
keepalivetime and maxThreadNumber configurable
[ https://issues.apache.org/jira/browse/SPARK-26601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
zhoukang updated SPARK-26601:
-----------------------------
Attachment: 选区_002 (1).png
选区_002.png
选区_001.png
> Make broadcast-exchange thread pool keepalivetime and maxThreadNumber configurable
> ----------------------------------------------------------------------------------
>
> Key: SPARK-26601
> URL: https://issues.apache.org/jira/browse/SPARK-26601
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 2.4.0
> Reporter: zhoukang
> Priority: Major
> Attachments: 选区_001.png, 选区_002 (1).png, 选区_002.png
>
>
> Currently, the thread count of the broadcast-exchange thread pool is fixed at 128, and keepAliveSeconds is also fixed at 60s.
> {code:java}
> object BroadcastExchangeExec {
>   private[execution] val executionContext = ExecutionContext.fromExecutorService(
>     ThreadUtils.newDaemonCachedThreadPool("broadcast-exchange", 128))
> }
>
> /**
>  * Create a cached thread pool whose max number of threads is `maxThreadNumber`. Thread names
>  * are formatted as prefix-ID, where ID is a unique, sequentially assigned integer.
>  */
> def newDaemonCachedThreadPool(
>     prefix: String, maxThreadNumber: Int, keepAliveSeconds: Int = 60): ThreadPoolExecutor = {
>   val threadFactory = namedThreadFactory(prefix)
>   val threadPool = new ThreadPoolExecutor(
>     maxThreadNumber, // corePoolSize: the max number of threads to create before queuing the tasks
>     maxThreadNumber, // maximumPoolSize: because we use LinkedBlockingDeque, this one is not used
>     keepAliveSeconds,
>     TimeUnit.SECONDS,
>     new LinkedBlockingQueue[Runnable],
>     threadFactory)
>   threadPool.allowCoreThreadTimeOut(true)
>   threadPool
> }
> {code}
> But sometimes, if the Thread objects are not garbage collected quickly, this can cause an OOM on the server (driver).
> Below is an example:
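> A minimal sketch of what a configurable variant could look like. The `ConfigurableBroadcastPool` object and the config key names below are hypothetical illustrations, not the actual patch; the thread-factory helper is inlined here so the snippet is self-contained, whereas Spark's own code uses ThreadUtils.namedThreadFactory:
> {code:java}
> import java.util.concurrent.{LinkedBlockingQueue, ThreadFactory, ThreadPoolExecutor, TimeUnit}
> import java.util.concurrent.atomic.AtomicInteger
>
> object ConfigurableBroadcastPool {
>
>   // Inlined stand-in for ThreadUtils.namedThreadFactory: daemon threads named prefix-ID.
>   private def namedDaemonThreadFactory(prefix: String): ThreadFactory = new ThreadFactory {
>     private val count = new AtomicInteger(0)
>     override def newThread(r: Runnable): Thread = {
>       val t = new Thread(r, s"$prefix-${count.incrementAndGet()}")
>       t.setDaemon(true)
>       t
>     }
>   }
>
>   // Same shape as newDaemonCachedThreadPool, but both limits are taken as
>   // parameters instead of being hard-coded (128 threads / 60s keep-alive).
>   // Callers would read the values from configuration, e.g. hypothetical keys
>   // spark.sql.broadcastExchange.maxThreadNumber and .keepAliveSeconds.
>   def newDaemonCachedThreadPool(
>       prefix: String,
>       maxThreadNumber: Int,
>       keepAliveSeconds: Int): ThreadPoolExecutor = {
>     val threadPool = new ThreadPoolExecutor(
>       maxThreadNumber, // corePoolSize: max threads created before tasks are queued
>       maxThreadNumber, // maximumPoolSize: unused with an unbounded LinkedBlockingQueue
>       keepAliveSeconds,
>       TimeUnit.SECONDS,
>       new LinkedBlockingQueue[Runnable],
>       namedDaemonThreadFactory(prefix))
>     // Let idle core threads time out so the pool can shrink back to zero.
>     threadPool.allowCoreThreadTimeOut(true)
>     threadPool
>   }
> }
> {code}
> With a smaller maxThreadNumber and a shorter keepAliveSeconds, idle broadcast-exchange threads would be reclaimed sooner, reducing the number of live Thread objects held by the driver.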
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org