Posted to issues@spark.apache.org by "Takeshi Yamamuro (Jira)" <ji...@apache.org> on 2021/02/28 23:36:00 UTC
[jira] [Commented] (SPARK-34573) SQLConf sqlConfEntries map has a global lock, should not lock on get
[ https://issues.apache.org/jira/browse/SPARK-34573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17292546#comment-17292546 ]
Takeshi Yamamuro commented on SPARK-34573:
------------------------------------------
IIUC executors just refer to ReadOnlySQLConf instead of mutable SQLConf: [https://github.com/apache/spark/blob/5a48eb8d00faee3a7c8f023c0699296e22edb893/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L172]
ReadOnlySQLConf uses java.util.Map internally, so I think there is no lock when looking up configurations.
Does the description point out a case different from this one?
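For context, the difference the reporter is describing can be sketched with plain java.util collections (the object and variable names below are illustrative, not Spark's):

```scala
import java.util.{Collections, HashMap => JHashMap}
import java.util.concurrent.ConcurrentHashMap

object ConfMapDemo {
  // Collections.synchronizedMap guards every operation, reads included,
  // with a single mutex, so a hot get() path serializes across threads.
  val syncMap: java.util.Map[String, String] =
    Collections.synchronizedMap(new JHashMap[String, String]())

  // ConcurrentHashMap retrievals do not entail locking: get() proceeds
  // even while other threads are writing.
  val concMap = new ConcurrentHashMap[String, String]()

  def main(args: Array[String]): Unit = {
    syncMap.put("spark.sql.caseSensitive", "false")
    concMap.put("spark.sql.caseSensitive", "false")
    println(syncMap.get("spark.sql.caseSensitive")) // prints "false"
    println(concMap.get("spark.sql.caseSensitive")) // prints "false"
  }
}
```

Both maps return the same values; they differ only in whether a reader can block behind a writer (or another reader) on the shared mutex.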
> SQLConf sqlConfEntries map has a global lock, should not lock on get
> --------------------------------------------------------------------
>
> Key: SPARK-34573
> URL: https://issues.apache.org/jira/browse/SPARK-34573
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 2.4.7, 3.0.2
> Reporter: Gabriele Nizzoli
> Priority: Minor
>
> The SQLConf sqlConfEntries map has a global lock (it is a Collections.synchronizedMap).
> Every operation, including get and set, locks the entire map,
> so concurrent threads may block waiting on that lock.
> An example is the DataType.sameType method, which queries the SQLConf entries map:
> {code:scala}
> if (SQLConf.get.caseSensitiveAnalysis)
> ...
> {code}
> If this data type check is run in a custom piece of code on an executor with many cores (e.g. 40), then a lot of time will be lost waiting on the lock.
> An easy fix is to use a ConcurrentHashMap, which does not lock on reads (the path taken by SQLConf.get): "... retrieval operations do not entail locking ..."
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org