Posted to dev@spark.apache.org by Nicholas Marion <nm...@us.ibm.com> on 2018/10/29 15:45:10 UTC
Spark Worker Questions
Hello,
Hope this is the right place for these kind of questions:
We have been deep diving into the Spark Worker and the spawning of
executors. We noticed that after killing the Worker via
./sbin/stop-slave.sh, the driver continues to request executors over and
over until the Worker is actually taken down, because an executor is being
terminated while the Worker shuts down. We were hoping to indicate to the
Master that the Worker is coming down so that it does not try to spawn new
executors, but even though WorkerState.DECOMMISSIONED is available, it
never seems to be used.
The questions are:
1. Was there an original intention for WorkerState.DECOMMISSIONED that was
either removed or never implemented?
2. Do you know the code path that runs when a Worker is killed through
stop-slave.sh? We thought it might be onStop, but that seems to be used
only in tests. My thought was to send a message from the Worker to the
Master saying: I'm DECOMMISSIONED, do not schedule executors against me.
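A minimal standalone sketch of the idea in question 2, assuming a
hypothetical DecommissionWorker message and a Master that skips
non-ALIVE workers when scheduling (names mirror Spark's WorkerState, but
none of this is the actual Spark code path):

```scala
// Toy model: the Worker notifies the Master it is decommissioning,
// and the Master excludes that worker from executor scheduling.
object DecommissionSketch {
  object WorkerState extends Enumeration {
    val ALIVE, DEAD, DECOMMISSIONED = Value
  }

  case class WorkerInfo(id: String, var state: WorkerState.Value)

  // Hypothetical message a Worker could send before shutting down.
  case class DecommissionWorker(workerId: String)

  class Master(workers: Seq[WorkerInfo]) {
    // On receipt, mark the worker DECOMMISSIONED instead of leaving it ALIVE.
    def receive(msg: DecommissionWorker): Unit =
      workers.find(_.id == msg.workerId)
             .foreach(_.state = WorkerState.DECOMMISSIONED)

    // Only ALIVE workers are candidates for new executors.
    def schedulableWorkers: Seq[WorkerInfo] =
      workers.filter(_.state == WorkerState.ALIVE)
  }

  def main(args: Array[String]): Unit = {
    val workers = Seq(WorkerInfo("w1", WorkerState.ALIVE),
                      WorkerInfo("w2", WorkerState.ALIVE))
    val master = new Master(workers)
    master.receive(DecommissionWorker("w2"))
    println(master.schedulableWorkers.map(_.id).mkString(","))  // prints w1
  }
}
```

The point of the sketch is only that, once the state flip happens on the
Master, the scheduling loop stops offering the dying worker, so the driver
would stop re-requesting executors against it.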
Note:
We only have 1 master, 1 slave.
Regards,
NICHOLAS T. MARION
IBM Open Data Analytics for z/OS Service Team Lead
Phone: 1-845-433-5010 | Tie-Line: 293-5010 IBM
E-mail: nmarion@us.ibm.com
Find me on LinkedIn: http://www.linkedin.com/in/nicholasmarion
2455 South Rd
Poughkeepsie, New York 12601-5400
United States
IBM Redbooks Silver Author
Data Science Foundations - Level 1