Posted to issues@spark.apache.org by "Dongjoon Hyun (Jira)" <ji...@apache.org> on 2020/10/30 20:37:00 UTC
[jira] [Closed] (SPARK-32661) Spark executors on K8S do not request extra memory for off-heap allocations
[ https://issues.apache.org/jira/browse/SPARK-32661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dongjoon Hyun closed SPARK-32661.
---------------------------------
> Spark executors on K8S do not request extra memory for off-heap allocations
> ---------------------------------------------------------------------------
>
> Key: SPARK-32661
> URL: https://issues.apache.org/jira/browse/SPARK-32661
> Project: Spark
> Issue Type: Sub-task
> Components: Kubernetes
> Affects Versions: 3.0.0, 3.0.1, 3.1.0
> Reporter: Luca Canali
> Priority: Minor
>
> Off-heap memory allocations are configured using `spark.memory.offHeap.enabled=true` and `spark.memory.offHeap.size=<size>`. Spark on YARN adds the off-heap memory size to the executor container resources, but Spark on Kubernetes does not request this additional memory. Currently, this can be worked around by using `spark.executor.memoryOverhead` to reserve memory for off-heap allocations (see the sketch below). This proposes to make Spark on Kubernetes behave as it does on YARN, that is, adding `spark.memory.offHeap.size` to the memory request for executor containers.
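A minimal sketch in Scala of the workaround described above, with hypothetical sizes: the configuration keys are standard Spark properties, while the values and app name are assumptions for illustration only.

    import org.apache.spark.sql.SparkSession

    // Enable a 2g off-heap region and, as a workaround on Kubernetes,
    // reserve it explicitly through spark.executor.memoryOverhead so the
    // executor pod's memory request also covers the off-heap allocations.
    val spark = SparkSession.builder()
      .appName("offheap-overhead-workaround")          // hypothetical name
      .config("spark.memory.offHeap.enabled", "true")
      .config("spark.memory.offHeap.size", "2g")       // off-heap region
      .config("spark.executor.memory", "4g")           // on-heap executor memory
      .config("spark.executor.memoryOverhead", "2g")   // covers the 2g off-heap
      .getOrCreate()

Without the explicit overhead setting, the pod would request only the on-heap memory plus the default overhead (10% of executor memory, with a 384 MiB floor), so the 2g of off-heap usage could push the container past its limit and get it OOM-killed by Kubernetes.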
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org