Posted to issues@spark.apache.org by "liujianhui (JIRA)" <ji...@apache.org> on 2017/01/09 03:16:58 UTC

[jira] [Comment Edited] (SPARK-18806) driverwrapper and executor doesn't exit when worker killed

    [ https://issues.apache.org/jira/browse/SPARK-18806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15810519#comment-15810519 ] 

liujianhui edited comment on SPARK-18806 at 1/9/17 3:15 AM:
------------------------------------------------------------

Hi Sean Owen, I found the root cause. When the DriverWrapper or CoarseGrainedExecutorBackend starts, the worker URL it is given looks like `DriverWrapper spark://Worker@10.0.52.25:32777`, where the host is an IP address. But when a remote-process-disconnected event occurs, the remoteAddress is hostname:port. The details can be seen in:
{code}
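// From NettyRpcHandler in NettyRpcEnv.scala: getHostString returns the
// hostname when the socket address carries one, and falls back to the IP
// literal only when it does not, so the event address may be hostname-based.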
val clientAddr = RpcAddress(addr.getHostString, addr.getPort)
nettyEnv.removeOutbox(clientAddr)
dispatcher.postToAll(RemoteProcessDisconnected(clientAddr))
{code}
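For illustration, here is a minimal sketch of the mismatch, using a simplified stand-in for Spark's RpcAddress case class and a hypothetical hostname:
{code}
// Simplified stand-in for org.apache.spark.rpc.RpcAddress; as a case
// class, its equality compares host and port as plain strings.
case class RpcAddress(host: String, port: Int)

object AddressMismatchDemo extends App {
  // Address parsed from the worker URL on the command line (IP form).
  val expectedAddress = RpcAddress("10.0.52.25", 32777)

  // Address delivered with the disconnect event (hostname form; the
  // hostname here is hypothetical).
  val eventAddress = RpcAddress("worker-host-1", 32777)

  // The comparison is purely textual, so an IP never equals a hostname.
  println(expectedAddress == eventAddress) // false
}
{code}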
but the `WorkerWatcher` class decides whether the remoteAddress belongs to the worker with the following check:
{code}
private def isWorker(address: RpcAddress) = expectedAddress == address
{code}
That check evaluates to false, because the event address's host is a hostname while the expected address's host is an IP, so the watcher never reacts to the worker's death.
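One way to make the check robust, sketched here purely as an illustration (sameEndpoint is a hypothetical helper, not the fix that was merged), is to resolve both hosts to IPs before comparing:
{code}
import java.net.InetAddress

// Hypothetical helper: treat two addresses as the same endpoint when their
// ports match and their hosts resolve to the same IP.
private def sameEndpoint(a: RpcAddress, b: RpcAddress): Boolean = {
  def toIp(host: String): String = InetAddress.getByName(host).getHostAddress
  a.port == b.port && toIp(a.host) == toIp(b.host)
}

private def isWorker(address: RpcAddress) = sameEndpoint(expectedAddress, address)
{code}
Note that resolving in the disconnect path can block on DNS, so a real fix would cache the resolved form; this only demonstrates the idea.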


> driverwrapper and executor doesn't exit when worker killed
> ----------------------------------------------------------
>
>                 Key: SPARK-18806
>                 URL: https://issues.apache.org/jira/browse/SPARK-18806
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 1.6.1
>         Environment: java1.8
>            Reporter: liujianhui
>
> Submit an application in standalone-cluster mode; the master will then launch an executor and a DriverWrapper on a worker. Both start a WorkerWatcher to watch the worker, yet when the worker is killed manually, the DriverWrapper and executor sometimes do not exit.
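For context, the watcher's disconnect path looks roughly like the sketch below (paraphrased from org.apache.spark.deploy.worker.WorkerWatcher; details vary by version). The exit is supposed to happen here, which is exactly what the hostname/IP mismatch above prevents:
{code}
// Paraphrased sketch of WorkerWatcher's disconnect handling.
override def onDisconnected(remoteAddress: RpcAddress): Unit = {
  if (isWorker(remoteAddress)) {
    // The worker endpoint is gone; the DriverWrapper/executor that it
    // launched should not outlive it.
    logError(s"Lost connection to worker rpc endpoint $workerUrl. Exiting.")
    System.exit(-1)
  }
  // When isWorker() returns false (hostname vs. IP), the event is ignored
  // and the process keeps running -- the behavior reported in this issue.
}
{code}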


