Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2017/03/07 12:03:38 UTC

[jira] [Assigned] (SPARK-19831) Sending the heartbeat to the master from the worker may be blocked by other RPC messages

     [ https://issues.apache.org/jira/browse/SPARK-19831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-19831:
------------------------------------

    Assignee:     (was: Apache Spark)

> Sending the heartbeat to the master from the worker may be blocked by other RPC messages
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-19831
>                 URL: https://issues.apache.org/jira/browse/SPARK-19831
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.2.0
>            Reporter: hustfxj
>            Priority: Minor
>
> Cleaning up a finished application can take a long time on the worker, and it blocks the worker from sending heartbeats to the master, because the Worker endpoint extends *ThreadSafeRpcEndpoint* and therefore processes RPC messages one at a time. If a worker's heartbeat is stuck behind the *ApplicationFinished* message, the master will consider the worker dead; and if the worker is running a driver, the master will schedule that driver again. So I think this is a bug in Spark. It could be addressed with the following suggestions:
> 1. Move application cleanup onto a separate asynchronous thread, such as a 'cleanupThreadExecutor', so that it does not block other RPC messages such as *SendHeartbeat* (see the first sketch after this list);
> 2. Do not drive the heartbeat to the master through the *receive* method, because any other RPC message can occupy *receive* and the worker may then fail to send the heartbeat in time. Instead, send the heartbeat to the master from a dedicated timer thread (see the second sketch after this list).
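>
> A minimal sketch of suggestion 1, not the actual Worker code: the handleApplicationFinished and cleanupAppDirs names below are hypothetical stand-ins. The slow cleanup is submitted to a single background thread, so the RPC message loop returns immediately.
> {code:scala}
> import java.util.concurrent.{Executors, ThreadFactory}
> import scala.concurrent.{ExecutionContext, Future}
>
> object CleanupSketch {
>   // Single daemon thread, analogous to the proposed cleanupThreadExecutor.
>   private val cleanupThreadExecutor: ExecutionContext =
>     ExecutionContext.fromExecutorService(
>       Executors.newSingleThreadExecutor(new ThreadFactory {
>         override def newThread(r: Runnable): Thread = {
>           val t = new Thread(r, "worker-cleanup")
>           t.setDaemon(true)
>           t
>         }
>       }))
>
>   // Hypothetical stand-in for the slow directory cleanup of a finished app.
>   private def cleanupAppDirs(appId: String): Unit = {
>     Thread.sleep(5000) // simulate slow disk I/O
>     println(s"cleaned up application $appId")
>   }
>
>   // Called from the RPC message loop; returns immediately, so later
>   // messages such as SendHeartbeat are not delayed behind the cleanup.
>   def handleApplicationFinished(appId: String): Unit =
>     Future(cleanupAppDirs(appId))(cleanupThreadExecutor)
>
>   def main(args: Array[String]): Unit = {
>     handleApplicationFinished("app-20170307-0001")
>     println("message loop is free to handle SendHeartbeat right away")
>     Thread.sleep(6000) // keep the JVM alive for the daemon cleanup thread
>   }
> }
> {code}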
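>
> A minimal sketch of suggestion 2, assuming a hypothetical sendHeartbeatToMaster helper and a made-up interval: the heartbeat is fired from a dedicated ScheduledExecutorService, so it cannot be delayed by whatever the *receive* method is currently processing.
> {code:scala}
> import java.util.concurrent.{Executors, TimeUnit}
>
> object HeartbeatSketch {
>   private val heartbeatIntervalMs = 15000L // assumed interval, not Spark's actual value
>
>   private val heartbeater = Executors.newSingleThreadScheduledExecutor()
>
>   // Hypothetical stand-in for sending Heartbeat(workerId, ...) to the master endpoint.
>   private def sendHeartbeatToMaster(): Unit =
>     println(s"heartbeat sent at ${System.currentTimeMillis()}")
>
>   def main(args: Array[String]): Unit = {
>     // The timer thread fires on schedule even while the RPC receive loop
>     // is busy cleaning up a finished application.
>     heartbeater.scheduleAtFixedRate(new Runnable {
>       override def run(): Unit = sendHeartbeatToMaster()
>     }, 0L, heartbeatIntervalMs, TimeUnit.MILLISECONDS)
>
>     Thread.sleep(heartbeatIntervalMs * 2) // let a couple of heartbeats fire
>     heartbeater.shutdown()
>   }
> }
> {code}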



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org