Posted to issues@spark.apache.org by "shanyu zhao (Jira)" <ji...@apache.org> on 2020/03/04 01:44:00 UTC

[jira] [Updated] (SPARK-31028) Add "-XX:ActiveProcessorCount" to Spark driver and executor in Yarn mode

     [ https://issues.apache.org/jira/browse/SPARK-31028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shanyu zhao updated SPARK-31028:
--------------------------------
    Description: 
When starting Spark drivers and executors on a Yarn cluster, the JVM process discovers all CPU cores on the system and sizes its thread pools and GC threads from that value. We should limit the number of cores the JVM sees to the value set by the user (spark.driver.cores or spark.executor.cores) via "-XX:ActiveProcessorCount", a flag introduced in Java 8u191.

This matters especially when running Spark on Yarn inside a Kubernetes container, where the number of CPU cores discovered is sometimes 1, so the JVM always uses a single thread for its default thread pool and for GC.
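A minimal sketch of the behavior described above: the JVM sizes defaults such as the common ForkJoinPool from Runtime.availableProcessors(), and -XX:ActiveProcessorCount overrides the detected value. The class name and printed labels are illustrative, not from the Spark codebase.

```java
import java.util.concurrent.ForkJoinPool;

// Run once normally and once with -XX:ActiveProcessorCount=2
// (available since Java 8u191) to see how the reported core count,
// and pools sized from it, change.
public class ActiveProcessorDemo {
    public static void main(String[] args) {
        // What the JVM believes the machine has; -XX:ActiveProcessorCount
        // overrides the value detected from the OS/cgroup.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("availableProcessors = " + cores);

        // The common ForkJoinPool (used by parallel streams) sizes itself
        // from this value by default (typically cores - 1, minimum 1).
        System.out.println("commonPool parallelism = "
                + ForkJoinPool.commonPool().getParallelism());
    }
}
```

In the absence of the change proposed here, a user could presumably pass the flag by hand, e.g. --conf "spark.executor.extraJavaOptions=-XX:ActiveProcessorCount=4"; the point of this issue is that Spark should set it automatically from spark.driver.cores / spark.executor.cores.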

  was:When starting Spark drivers and executors on a Yarn cluster, the JVM process discovers all CPU cores on the system and sizes its thread pools and GC threads from that value. We should limit the number of cores the JVM sees to the value set by the user (spark.driver.cores or spark.executor.cores) via "-XX:ActiveProcessorCount", a flag introduced in Java 8u191.


> Add "-XX:ActiveProcessorCount" to Spark driver and executor in Yarn mode
> ------------------------------------------------------------------------
>
>                 Key: SPARK-31028
>                 URL: https://issues.apache.org/jira/browse/SPARK-31028
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 2.4.5
>            Reporter: shanyu zhao
>            Priority: Major
>
> When starting Spark drivers and executors on a Yarn cluster, the JVM process discovers all CPU cores on the system and sizes its thread pools and GC threads from that value. We should limit the number of cores the JVM sees to the value set by the user (spark.driver.cores or spark.executor.cores) via "-XX:ActiveProcessorCount", a flag introduced in Java 8u191.
> This matters especially when running Spark on Yarn inside a Kubernetes container, where the number of CPU cores discovered is sometimes 1, so the JVM always uses a single thread for its default thread pool and for GC.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org