Posted to issues@spark.apache.org by "Sean Owen (Jira)" <ji...@apache.org> on 2019/09/18 14:12:00 UTC

[jira] [Updated] (SPARK-28972) [Spark] spark.memory.offHeap.size description requires update in documentation

     [ https://issues.apache.org/jira/browse/SPARK-28972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-28972:
------------------------------
    Issue Type: Improvement  (was: Bug)

> [Spark] spark.memory.offHeap.size description requires update in documentation
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-28972
>                 URL: https://issues.apache.org/jira/browse/SPARK-28972
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation
>    Affects Versions: 2.4.3
>            Reporter: ABHISHEK KUMAR GUPTA
>            Assignee: pavithra ramachandran
>            Priority: Minor
>
>  
> spark.memory.offHeap.size also accepts values with a size unit suffix, e.g. 1G or 1KB, but the description says only *'absolute amount of memory in bytes'.*
> The description should be updated to match *spark.driver.memory*, which states that it accepts *a size unit suffix ("k", "m", "g" or "t") (e.g. {{512m}}, {{2g}}).* A minimal sketch below the table illustrates the accepted forms.
>  
> ||Property Name||Default||Meaning||
> |{{spark.memory.offHeap.size}}|0|The *absolute amount of memory in bytes* which can be used for off-heap allocation. This setting has no impact on heap memory usage, so if your executors' total memory consumption must fit within some hard limit then be sure to shrink your JVM heap size accordingly. This must be set to a positive value when {{spark.memory.offHeap.enabled=true}}.|
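>  
> For illustration, a minimal Scala sketch (assuming a bare local {{SparkConf}}, no cluster needed) showing that a suffixed value is accepted and resolved to bytes:
> {code:scala}
> import org.apache.spark.SparkConf
>
> // Sketch only: set the off-heap options on a bare SparkConf to show that a
> // suffixed value is accepted, not just a raw byte count.
> val conf = new SparkConf()
>   .set("spark.memory.offHeap.enabled", "true")
>   .set("spark.memory.offHeap.size", "1g") // suffixed form; "1073741824" would also work
>
> // getSizeAsBytes resolves size strings with "k", "m", "g" or "t" suffixes to bytes.
> println(conf.getSizeAsBytes("spark.memory.offHeap.size")) // prints 1073741824
> {code}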



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org