Posted to issues@spark.apache.org by "Stefano Parmesan (JIRA)" <ji...@apache.org> on 2015/07/02 10:31:04 UTC

[jira] [Updated] (SPARK-8726) Wrong spark.executor.memory when using different EC2 master and worker machine types

     [ https://issues.apache.org/jira/browse/SPARK-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefano Parmesan updated SPARK-8726:
------------------------------------
    Description: 
_(this is a mirror of [MESOS-2985|https://issues.apache.org/jira/browse/MESOS-2985])_

By default, {{spark.executor.memory}} is set to [min(slave_ram_kb, master_ram_kb)|https://github.com/mesos/spark-ec2/blob/e642aa362338e01efed62948ec0f063d5fce3242/deploy_templates.py#L32]. When the master and the workers use the same instance type this goes unnoticed, but when they differ (a sensible setup, since the master cannot be a spot instance and a big master machine would waste resources) the default amount of memory given to each worker is capped at the amount of RAM available on the master. For example, in a cluster with an m1.small master (1.7GB RAM) and one m1.large worker (7.5GB RAM), spark.executor.memory ends up set to 512MB.
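The capping described above can be sketched as follows. This is an illustrative reconstruction, not the actual deploy_templates.py code; the function name and RAM figures are made up for the example (the real script additionally subtracts an OS-overhead slice, which is how the reported case lands at 512MB):

```python
def default_executor_memory_mb(master_ram_mb, worker_ram_mb):
    """Hypothetical sketch of the default spark.executor.memory computation.

    deploy_templates.py runs on the master, so the master's RAM enters the
    min() even though executors only ever run on workers -- that is the bug
    described in this issue.
    """
    return min(master_ram_mb, worker_ram_mb)

# m1.small master (~1.7 GB) with an m1.large worker (~7.5 GB): the worker's
# 7.5 GB is ignored and the default is derived from the master's 1.7 GB.
print(default_executor_memory_mb(1740, 7680))   # capped by the master
print(default_executor_memory_mb(7680, 7680))   # same instance type: no cap
```

With identical instance types the min() is a no-op, which is why the problem only surfaces on heterogeneous clusters.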

  was:By default, {{spark.executor.memory}} is set to [min(slave_ram_kb, master_ram_kb)|https://github.com/mesos/spark-ec2/blob/e642aa362338e01efed62948ec0f063d5fce3242/deploy_templates.py#L32]. When the master and the workers use the same instance type this goes unnoticed, but when they differ (a sensible setup, since the master cannot be a spot instance and a big master machine would waste resources) the default amount of memory given to each worker is capped at the amount of RAM available on the master. For example, in a cluster with an m1.small master (1.7GB RAM) and one m1.large worker (7.5GB RAM), spark.executor.memory ends up set to 512MB.


> Wrong spark.executor.memory when using different EC2 master and worker machine types
> ------------------------------------------------------------------------------------
>
>                 Key: SPARK-8726
>                 URL: https://issues.apache.org/jira/browse/SPARK-8726
>             Project: Spark
>          Issue Type: Bug
>          Components: EC2
>    Affects Versions: 1.4.0
>            Reporter: Stefano Parmesan
>
> _(this is a mirror of [MESOS-2985|https://issues.apache.org/jira/browse/MESOS-2985])_
> By default, {{spark.executor.memory}} is set to [min(slave_ram_kb, master_ram_kb)|https://github.com/mesos/spark-ec2/blob/e642aa362338e01efed62948ec0f063d5fce3242/deploy_templates.py#L32]. When the master and the workers use the same instance type this goes unnoticed, but when they differ (a sensible setup, since the master cannot be a spot instance and a big master machine would waste resources) the default amount of memory given to each worker is capped at the amount of RAM available on the master. For example, in a cluster with an m1.small master (1.7GB RAM) and one m1.large worker (7.5GB RAM), spark.executor.memory ends up set to 512MB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org