Posted to issues@spark.apache.org by "Archit Thakur (JIRA)" <ji...@apache.org> on 2014/05/29 08:46:03 UTC
[jira] [Comment Edited] (SPARK-874) Have a --wait flag in
./sbin/stop-all.sh that polls until Workers are finished
[ https://issues.apache.org/jira/browse/SPARK-874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14012127#comment-14012127 ]
Archit Thakur edited comment on SPARK-874 at 5/29/14 6:45 AM:
--------------------------------------------------------------
I am interested in taking it up, please do assign. Thanks :) .
was (Author: archit279):
I am interested in taking it up.
> Have a --wait flag in ./sbin/stop-all.sh that polls until Workers are finished
> -------------------------------------------------------------------------------
>
> Key: SPARK-874
> URL: https://issues.apache.org/jira/browse/SPARK-874
> Project: Spark
> Issue Type: New Feature
> Components: Deploy
> Reporter: Patrick Wendell
> Priority: Minor
> Labels: starter
> Fix For: 1.1.0
>
>
> When running benchmarking jobs, the cluster sometimes takes a long time to shut down. We should add a --wait flag that sshes into every worker every few seconds, checks whether the Worker processes have exited, and does not return until they all have. This would help a lot with automating benchmarking scripts.
> There is some equivalent logic here written in python, we just need to add it to the shell script:
> https://github.com/pwendell/spark-perf/blob/master/bin/run#L117
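The polling loop described above could be sketched in shell roughly as follows. This is only a sketch under assumptions not stated in the ticket: that worker hostnames are read from conf/slaves, and that the Worker JVM can be matched by its class name with pgrep. Neither is confirmed by the issue.

```shell
#!/bin/sh
# Sketch of a --wait polling loop for sbin/stop-all.sh (assumptions:
# hostnames come from conf/slaves; the Worker JVM is found via pgrep
# on its class name -- both are illustrative, not Spark's actual code).

# poll_until_dead CMD...: re-run CMD every 2 seconds until it fails,
# i.e. until the process it checks for is no longer found.
poll_until_dead() {
  while "$@" > /dev/null 2>&1; do
    sleep 2
  done
}

# For each worker host, block until no Worker process remains:
# while read -r host; do
#   poll_until_dead ssh "$host" pgrep -f org.apache.spark.deploy.worker.Worker
# done < conf/slaves
```

The ssh loop is left commented out since it needs a live cluster; the helper itself is generic and returns as soon as the check command stops succeeding.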
--
This message was sent by Atlassian JIRA
(v6.2#6252)