Posted to issues@spark.apache.org by "oskarryn (JIRA)" <ji...@apache.org> on 2019/02/12 19:07:00 UTC

[jira] [Updated] (SPARK-26863) Add minimal values for spark.driver.memory and spark.executor.memory

     [ https://issues.apache.org/jira/browse/SPARK-26863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

oskarryn updated SPARK-26863:
-----------------------------
    Description: 
I propose to change `1g` to `1g, with minimum of 472m` in the "Default" column for the spark.driver.memory and spark.executor.memory properties in [Application Properties](https://spark.apache.org/docs/latest/configuration.html#application-properties).

Reasoning:

In the UnifiedMemoryManager.scala file I see the definition of RESERVED_SYSTEM_MEMORY_BYTES:

{code:scala}
// Set aside a fixed amount of memory for non-storage, non-execution purposes.
// This serves a function similar to `spark.memory.fraction`, but guarantees that we reserve
// sufficient memory for the system even for small heaps. E.g. if we have a 1GB JVM, then
// the memory used for execution and storage will be (1024 - 300) * 0.6 = 434MB by default.
private val RESERVED_SYSTEM_MEMORY_BYTES = 300 * 1024 * 1024
{code}
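
To make the arithmetic in that comment concrete, here is a small Scala sketch of my own (not code from the Spark sources) that reproduces the 434MB figure for a 1GB heap, assuming the default spark.memory.fraction of 0.6; it can be pasted into the Scala REPL:

{code:scala}
// Illustration only: reproduces the "1GB JVM" example from the comment above.
val systemMemory = 1024L * 1024 * 1024     // a 1GB heap
val reserved     = 300L * 1024 * 1024      // RESERVED_SYSTEM_MEMORY_BYTES
val usable       = systemMemory - reserved // 724MB left after the reservation
val maxMemory    = (usable * 0.6).toLong   // default spark.memory.fraction = 0.6
println(maxMemory / (1024 * 1024))         // prints 434
{code}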

Then `reservedMemory` takes this value and `minSystemMemory` is defined as:
{code:scala}
val minSystemMemory = (reservedMemory * 1.5).ceil.toLong
{code}
Consequently, the driver heap size and the executor memory are checked against minSystemMemory (471859200 bytes), and an IllegalArgumentException is thrown if either is smaller. It seems that 472MB is the absolute minimum for spark.driver.memory and spark.executor.memory.
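
For reference, here is a minimal, self-contained Scala sketch (my own illustration, not the actual Spark code; the object and helper names are made up) that reproduces the numbers and the shape of that check:

{code:scala}
object MinSystemMemoryCheck {
  // Same constant as RESERVED_SYSTEM_MEMORY_BYTES in UnifiedMemoryManager.scala.
  val reservedMemory: Long = 300L * 1024 * 1024                    // 314572800 bytes

  // Same formula as the minSystemMemory definition quoted above.
  val minSystemMemory: Long = (reservedMemory * 1.5).ceil.toLong   // 471859200 bytes

  // Hypothetical helper mirroring the shape of the check: an
  // IllegalArgumentException if the available memory is below the minimum.
  def requireEnoughMemory(systemMemory: Long): Unit =
    if (systemMemory < minSystemMemory)
      throw new IllegalArgumentException(
        s"System memory $systemMemory must be at least $minSystemMemory.")

  def main(args: Array[String]): Unit = {
    println(minSystemMemory)                      // 471859200
    println(minSystemMemory / (1024.0 * 1024.0))  // 450.0, i.e. exactly 450MiB
    println(minSystemMemory / 1e6)                // 471.8592 decimal MB
    requireEnoughMemory(512L * 1024 * 1024)       // passes
    requireEnoughMemory(400L * 1024 * 1024)       // throws IllegalArgumentException
  }
}
{code}

Note that 471859200 bytes is exactly 450MiB, or about 471.9 decimal megabytes, which is presumably where the rounded-up 472m figure proposed above comes from.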

Side question: how is this 472MB established as sufficient memory for small heaps? What do I risk if I build Spark with a smaller RESERVED_SYSTEM_MEMORY_BYTES?

  was:
I propose to change `1g` to `1g, with minimum of 472m` in the "Default" column for the spark.driver.memory and spark.executor.memory properties in [Application Properties](https://spark.apache.org/docs/latest/configuration.html#application-properties).

Reasoning:

In the UnifiedMemoryManager.scala file I see the definition of `RESERVED_SYSTEM_MEMORY_BYTES`:

```
// Set aside a fixed amount of memory for non-storage, non-execution purposes.
// This serves a function similar to `spark.memory.fraction`, but guarantees that we reserve
// sufficient memory for the system even for small heaps. E.g. if we have a 1GB JVM, then
// the memory used for execution and storage will be (1024 - 300) * 0.6 = 434MB by default.
private val RESERVED_SYSTEM_MEMORY_BYTES = 300 * 1024 * 1024
```

Then `reservedMemory` takes this value and `minSystemMemory` is defined as:

```
val minSystemMemory = (reservedMemory * 1.5).ceil.toLong
```

Consequently, the driver heap size and the executor memory are checked against minSystemMemory (471859200 bytes), and an IllegalArgumentException is thrown if either is smaller.

It seems that 472MB is the absolute minimum for `spark.driver.memory` (`--driver-memory`) and `spark.executor.memory` (`--executor-memory`).

Side question: how is this 472MB established as sufficient memory for small heaps? What do I risk if I build Spark with a smaller RESERVED_SYSTEM_MEMORY_BYTES?


> Add minimal values for spark.driver.memory and spark.executor.memory
> --------------------------------------------------------------------
>
>                 Key: SPARK-26863
>                 URL: https://issues.apache.org/jira/browse/SPARK-26863
>             Project: Spark
>          Issue Type: Documentation
>          Components: Documentation
>    Affects Versions: 2.4.0
>            Reporter: oskarryn
>            Priority: Trivial
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org