Posted to user@spark.apache.org by Aaron Davidson <il...@gmail.com> on 2014/05/07 02:32:13 UTC

Re: Easy one

If you're using standalone mode, you need to make sure the Spark Workers
know about the extra memory. This can be configured in spark-env.sh on the
workers as

export SPARK_WORKER_MEMORY=4g
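To spell that out, a minimal spark-env.sh fragment as it might look on each worker host (the 4g value is just the one from this thread, not a recommendation; the worker daemons read this file at startup, so a restart is assumed):

```shell
# spark-env.sh on each standalone worker
# Total memory this worker may hand out to executors on the machine.
export SPARK_WORKER_MEMORY=4g
```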


On Tue, May 6, 2014 at 5:29 PM, Ian Ferreira <ia...@hotmail.com> wrote:

> Hi there,
>
> Why can’t I seem to kick the executor memory higher? See below from EC2
> deployment using m1.large
>
>
> And in the spark-env.sh
> export SPARK_MEM=6154m
>
>
> And in the spark context
> sconf.setExecutorEnv("spark.executor.memory", "4g")
>
> Cheers
> - Ian
>
>

Re: Easy one

Posted by Ian Ferreira <ia...@hotmail.com>.
Spoke too soon: while I can get the worker memory up, I can't seem to
increase the executor memory; it still seems locked at 512MB.

Even with context set up like so

      sconf.setExecutorEnv("spark.executor.memory", "1g")
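For what it's worth, `setExecutorEnv` sets OS environment variables for the executor processes, not Spark configuration properties, which would explain the setting being ignored. A sketch of setting the property directly on the SparkConf instead (app name illustrative; value from this thread; conf must be set before the SparkContext is created):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// spark.executor.memory is a Spark property, so it goes through
// SparkConf.set, not setExecutorEnv (which only sets env vars).
val sconf = new SparkConf()
  .setAppName("example")                  // illustrative name
  .set("spark.executor.memory", "1g")     // per-executor heap request
val sc = new SparkContext(sconf)
```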




Re: Easy one

Posted by Ian Ferreira <ia...@hotmail.com>.
Thanks!
