Posted to issues@spark.apache.org by "Josh Rosen (JIRA)" <ji...@apache.org> on 2016/01/14 04:48:40 UTC

[jira] [Resolved] (SPARK-2989) Error sending message to BlockManagerMaster

     [ https://issues.apache.org/jira/browse/SPARK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Rosen resolved SPARK-2989.
-------------------------------
    Resolution: Cannot Reproduce

I'm going to resolve this old issue as "Cannot Reproduce." Please file a new issue or re-open if this is still a problem.

> Error sending message to BlockManagerMaster
> -------------------------------------------
>
>                 Key: SPARK-2989
>                 URL: https://issues.apache.org/jira/browse/SPARK-2989
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager
>    Affects Versions: 1.0.2
>            Reporter: pengyanhong
>
> Ran a simple Hive SQL Spark app in yarn-cluster mode and retrieved 3 segments of log content with the yarn logs --applicationID command line; the details are below:
> * The 1st segment covers the Driver & Application Master; everything is fine with no errors, start time 16:43:49 and end time 16:44:08.
> * The 2nd & 3rd segments cover the Executors; they start at 16:43:52, and from 16:44:38 onward the following error occurs many times:
> {quote}
> WARN org.apache.spark.Logging$class.logWarning(Logging.scala:91): Error sending message to BlockManagerMaster in 1 attempts
> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
> 	at scala.concurrent.Await$.result(package.scala:107)
> 	at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:237)
> 	at org.apache.spark.storage.BlockManagerMaster.sendHeartBeat(BlockManagerMaster.scala:51)
> 	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$heartBeat(BlockManager.scala:113)
> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(BlockManager.scala:158)
> 	at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:790)
> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1.apply$mcV$sp(BlockManager.scala:158)
> 	at akka.actor.Scheduler$$anon$9.run(Scheduler.scala:80)
> 	at akka.actor.LightArrayRevolverScheduler$$anon$3$$anon$2.run(Scheduler.scala:241)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:662)
> 14/08/12 16:45:31 WARN org.apache.spark.Logging$class.logWarning(Logging.scala:91): Error sending message to BlockManagerMaster in 2 attempts
> ......
> {quote}
> Confirmed that the system clocks of the 3 nodes are in sync.
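A note on the mechanism behind the trace above: the executor's BlockManager sends a periodic heartbeat to the BlockManagerMaster on the driver and blocks on the reply (BlockManagerMaster.askDriverWithReply in the trace); when no reply arrives within the ask timeout, which defaulted to 30 seconds in Spark 1.0.x, the blocking wait throws the TimeoutException shown. The following is only a minimal, self-contained Scala sketch of that blocking-wait pattern, not Spark's actual code; the never-completed Promise stands in for a driver that is too busy (or unreachable) to answer in time:

{code:scala}
import java.util.concurrent.TimeoutException

import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

object HeartbeatTimeoutSketch {
  def main(args: Array[String]): Unit = {
    // Stand-in for the driver's reply; it is never completed, which models a
    // BlockManagerMaster that cannot respond within the ask timeout.
    val reply: Future[Boolean] = Promise[Boolean]().future

    try {
      // Spark 1.0.x blocked on the reply with Await.result and a 30-second
      // ask timeout; a shorter duration is used here to keep the sketch quick.
      Await.result(reply, 3.seconds)
    } catch {
      case e: TimeoutException =>
        // Corresponds to the "Error sending message to BlockManagerMaster" warning.
        println(s"Error sending message to BlockManagerMaster: $e")
    }
  }
}
{code}

In that era of Spark the blocking ask timeout was governed by spark.akka.askTimeout (30 seconds by default), so raising it, or relieving pressure on the driver, was the usual mitigation when these warnings appeared under load.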



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
