Posted to user@spark.apache.org by Ofer Eliassaf <of...@gmail.com> on 2016/10/27 08:00:20 UTC

Dynamic Resource Allocation in a standalone

Hi,

I have a question/problem regarding dynamic resource allocation.
I am using Spark 1.6.2 with the standalone cluster manager.

I have one worker with 2 cores.

I set the following properties in the spark-defaults.conf file on all
my nodes:

spark.dynamicAllocation.enabled  true
spark.shuffle.service.enabled true
spark.deploy.defaultCores 1
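
For reference, these are the related dynamic allocation knobs I am aware of
(shown here with example values only; they are not part of my configuration):

spark.dynamicAllocation.minExecutors            0
spark.dynamicAllocation.maxExecutors            2
spark.dynamicAllocation.executorIdleTimeout     60s
spark.dynamicAllocation.schedulerBacklogTimeout 1s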

I run a sample application with many tasks.
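
The job itself is not important; roughly, it is something like this from
spark-shell (just a sketch of the kind of workload, not the exact code):

// many short tasks spread over far more partitions than cores
sc.parallelize(1 to 1000, 200).map { i => Thread.sleep(100); i }.count()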

I open port 4040 on the driver (the application web UI) and can verify
that the above configuration is in effect.

My problem is that no matter what I do, my application only gets 1 core,
even though the other cores are available.
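
For what it's worth, this is roughly how I double-check from spark-shell what
the application actually got (just a sketch; sc is the shell's SparkContext):

sc.getConf.get("spark.dynamicAllocation.enabled")  // expect "true"
sc.getConf.get("spark.shuffle.service.enabled")    // expect "true"
sc.getExecutorMemoryStatus.size - 1                // rough executor count (the map also includes the driver)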

Is this normal, or do I have a problem in my configuration?


The behaviour I want is this:
Many users work with the same Spark cluster.
I want each application to get a fixed number of cores unless the rest of the
cluster is idle, in which case the running applications should take over the
total amount of cores until a new application arrives...
For example, with my 2-core worker: a lone application should use both cores,
and shrink back to 1 core as soon as a second application is submitted.


-- 
Regards,
Ofer Eliassaf