Posted to issues@spark.apache.org by "Igor Zaytsev (JIRA)" <ji...@apache.org> on 2016/11/29 19:23:59 UTC
[jira] [Commented] (SPARK-18159) Stand-alone cluster, supervised
app: restart of worker hosting the driver causes app to run twice
[ https://issues.apache.org/jira/browse/SPARK-18159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15706276#comment-15706276 ]
Igor Zaytsev commented on SPARK-18159:
--------------------------------------
Reproduced the issue with version 1.6.2, though not with the current 2.1.0-SNAPSHOT; it is hard to say when it went away. Tried this on a single box:
1. start the master, start two workers, deploy the app,
2. watch the Spark processes with 'ps',
3. kill -9 the worker that is running the app instance,
4. watch the Spark processes with 'ps' again - on 1.6.2 the old app instance is not terminated when the new one starts.
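For concreteness, the steps above can be sketched as shell commands. This is a hypothetical sketch, not taken from the report: SPARK_HOME, the master URL, and the app class/jar name are assumptions; the launcher scripts are the standard ones shipped in Spark's sbin/ and bin/ directories.

```shell
# Hypothetical reproduction sketch; assumes SPARK_HOME points at a
# Spark 1.6.2 installation and the app jar/class are placeholders.

# 1. Start a master and two workers on one machine.
"$SPARK_HOME"/sbin/start-master.sh
"$SPARK_HOME"/sbin/start-slave.sh spark://localhost:7077
"$SPARK_HOME"/sbin/start-slave.sh spark://localhost:7077

# 2. Deploy the app supervised, in cluster deploy mode.
"$SPARK_HOME"/bin/spark-submit --deploy-mode cluster --supervise \
    --master spark://localhost:7077 --class example.App example-app.jar

# 3. Pick the worker process out of 'ps' (PID is the second column of
#    'ps aux') and kill -9 it.
worker_pid=$(ps aux | awk '/deploy.worker.Worker/ && !/awk/ {print $2; exit}')
kill -9 "$worker_pid"

# 4. Watch 'ps' again: on 1.6.2 the old app instance keeps running
#    alongside the newly started one.
ps aux | grep '[S]parkSubmit\|[D]riverWrapper'
```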
> Stand-alone cluster, supervised app: restart of worker hosting the driver causes app to run twice
> -------------------------------------------------------------------------------------------------
>
> Key: SPARK-18159
> URL: https://issues.apache.org/jira/browse/SPARK-18159
> Project: Spark
> Issue Type: Bug
> Affects Versions: 1.6.2
> Reporter: Stephan Kepser
> Priority: Critical
>
> We use Spark in stand-alone cluster mode with HA with three master nodes. All apps are submitted using
> > spark-submit --deploy-mode cluster --supervise --master ...
> We have many apps running.
> The cluster deploy mode is needed to prevent the drivers of the apps from all being placed on the active master.
> If a worker goes down that hosts a driver, the following happens:
> * the driver is started on another worker node
> * the new driver does not connect to the still running app
> * the new driver starts a new instance of the running app
> * there are now two instances of the app running,
> * one with an attached new driver,
> * one without a driver.
> * the old instance of the app cannot effectively be stopped, i.e., it can be killed via the UI, but is immediately restarted.
> Iterating this process leaves more and more instances of the app running.
> To trigger the effect, both options --deploy-mode cluster and --supervise are required.
> The only remedy we know of is to reboot all Linux nodes the cluster runs on.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org