Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/09/15 09:16:20 UTC

[jira] [Resolved] (SPARK-17554) spark.executor.memory option not working

     [ https://issues.apache.org/jira/browse/SPARK-17554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-17554.
-------------------------------
    Resolution: Invalid

Questions should go to user@. Without seeing how you're running the job or what specifically you're looking at in the UI, it's hard to say. The parameter works correctly in every setup I have used it in.
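[Editor's note: for readers landing here with the same question, a minimal sketch of the two usual ways to set this. Values are illustrative, and "MyApp" / "my-app.jar" are placeholders; the properties file must be named spark-defaults.conf and live in the conf/ directory of the machine that launches the application.]

    # conf/spark-defaults.conf
    spark.executor.memory   2g

    # or equivalently, per application on the command line:
    ./bin/spark-submit --executor-memory 2g --class MyApp my-app.jar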

> spark.executor.memory option not working
> ----------------------------------------
>
>                 Key: SPARK-17554
>                 URL: https://issues.apache.org/jira/browse/SPARK-17554
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: Sankar Mittapally
>
> Hi,
>  I am new to Spark. I have a Spark cluster with 5 slaves (each has 2 cores and 4 GB RAM). In the Spark cluster dashboard, the memory per node shows as 1 GB. I tried to increase it to 2 GB by setting spark.executor.memory 2g in defaults.conf, but it didn't work. I want to increase the memory. Please let me know how to do that.
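
[Editor's note: in a standalone cluster the master UI's "Memory per Node" column for an application shows the executor memory that application requested, and the default is 1g; a 1 GB reading therefore usually means the setting was never picked up (note Spark reads conf/spark-defaults.conf, not defaults.conf). The value can also be set, and verified, programmatically. A minimal Scala sketch, with an illustrative application name:]

    import org.apache.spark.{SparkConf, SparkContext}

    // Set executor memory in code; this takes precedence over spark-defaults.conf.
    // "MemoryDemo" is a placeholder application name.
    val conf = new SparkConf()
      .setAppName("MemoryDemo")
      .set("spark.executor.memory", "2g")
    val sc = new SparkContext(conf)

    // Verify what the running application actually received.
    println(sc.getConf.get("spark.executor.memory"))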



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org