Posted to user@spark.apache.org by Renato Perini <re...@gmail.com> on 2016/05/03 20:56:06 UTC

Free memory while launching jobs.

I have a machine with 8GB of total memory, on which other
applications are also installed.

The Spark application must run one driver and two jobs at a time. I have
configured 8 cores in total.
The machine (without Spark) has 4GB of free RAM (the other half is
used by the other applications).

So I have configured one worker with a total of 2800MB of RAM. The
driver is limited to 512MB (2 cores) and each executor to 762MB.
The driver launches a driver process and a Spark Streaming (always on)
job, occupying 512MB + 762MB (using 4 cores in total).
The other jobs will use 762MB each, so when the whole app is started
and the two jobs (and the driver) are up, I should consume essentially
the whole 2.8GB (512MB + 3 x 762MB = 2798MB).
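
For reference, this is roughly how those limits are expressed (a minimal
sketch assuming the standalone cluster manager and that the driver runs
inside the worker's budget; the file and property names are the standard
Spark ones, the values are the figures above):

    # conf/spark-env.sh on the worker machine
    SPARK_WORKER_CORES=8          # 8 cores in total
    SPARK_WORKER_MEMORY=2800m     # memory the worker may hand out

    # conf/spark-defaults.conf (or the equivalent spark-submit flags)
    spark.driver.memory     512m
    spark.driver.cores      2
    spark.executor.memory   762m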

Now, the free RAM. I said I have roughly 4GB free, so I should be left
with 4 - 2.8 = 1.2GB of free RAM.
When the jobs start, however, I can see that the free memory during
execution drops to around 200MB.
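
For completeness, this is how I am reading those numbers (hypothetical
Linux commands; I am looking at the overall free memory and at the
resident size of the running Spark JVMs):

    # overall memory on the box while the jobs run
    free -m

    # resident set size (RSS, in KB) of every running JVM
    # (master, worker, driver and executors)
    for pid in $(jps -q); do ps -o pid,rss,comm -p "$pid"; done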


Why this behaviour? Why is Spark using practically all the available
RAM if I use only one worker with a 2.8GB limit in total?


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org