Posted to user@spark.apache.org by felix <cn...@gmail.com> on 2014/04/03 05:35:54 UTC

Error when running Spark on Mesos

I deployed Mesos and tested it using the example/test-framework script; Mesos
seems OK. But when running Spark on the Mesos cluster, the Mesos slave nodes
report the following exception. Can anyone help me fix this? Thanks in
advance:

14/04/03 11:24:39 INFO Slf4jLogger: Slf4jLogger started
14/04/03 11:24:39 INFO Remoting: Starting remoting
14/04/03 11:24:39 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@pdn00:40265]
14/04/03 11:24:39 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@pdn00:40265]
14/04/03 11:24:39 INFO SparkEnv: Connecting to BlockManagerMaster: akka.tcp://spark@localhost:7077/user/BlockManagerMaster
akka.actor.ActorNotFound: Actor not found for: ActorSelection[Actor[akka.tcp://spark@localhost:7077/]/user/BlockManagerMaster]
        at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:66)
        at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:64)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
        at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
        at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
        at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
        at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
        at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
        at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
        at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:269)
        at akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:512)
        at akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:545)
        at akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:535)
        at akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:91)
        at akka.actor.ActorRef.tell(ActorRef.scala:125)
        at akka.dispatch.Mailboxes$$anon$1$$anon$2.enqueue(Mailboxes.scala:44)
        at akka.dispatch.QueueBasedMessageQueue$class.cleanUp(Mailbox.scala:438)
        at akka.dispatch.UnboundedDequeBasedMailbox$MessageQueue.cleanUp(Mailbox.scala:650)
        at akka.dispatch.Mailbox.cleanUp(Mailbox.scala:309)
        at akka.dispatch.MessageDispatcher.unregister(AbstractDispatcher.scala:204)
        at akka.dispatch.MessageDispatcher.detach(AbstractDispatcher.scala:140)
        at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:203)
        at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:163)
        at akka.actor.ActorCell.terminate(ActorCell.scala:338)
        at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:431)
        at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
        at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
        at akka.dispatch.Mailbox.run(Mailbox.scala:218)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)




Re: Error when running Spark on Mesos

Posted by felix <cn...@gmail.com>.
You can download this tarball to replace the 0.9.0 one:

wget https://github.com/apache/spark/archive/v0.9.1-rc3.tar.gz

Just compile it and test it!
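
A minimal build-and-smoke-test sketch, assuming the standard 0.9.x sbt build; the Hadoop version and the Mesos master address below are placeholders, not values from this thread:

# unpack the sources and build an assembly against your Hadoop version
tar xzf v0.9.1-rc3.tar.gz
cd spark-0.9.1-rc3                            # extracted directory name may vary
SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly

# quick smoke test against the Mesos master
MASTER=mesos://master:5050 bin/spark-shell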


2014-04-03 18:41 GMT+08:00 Gino Mathews [via Apache Spark User List] <
ml-node+s1001560n3702h62@n3.nabble.com>:

>  Hi,
>
>
>
> I have installed Spark 0.9.0 on Ubuntu 12.04 LTS with Hadoop 2.2.0 and am
> able to successfully run a few apps in a 1+2 standalone configuration. I
> tried both standalone apps as well as spark-shell. But when I tried various
> Mesos versions from 0.13.0 to 0.17.0, both the standalone app and
> spark-shell fail with a segmentation fault during initialization.
> I haven't seen the Spark app trying to connect to Mesos at all. The
> invocation log of spark-shell is pasted below. Btw, is there any
> documentation on how to upgrade Spark to 0.9.1?
>
>
>
> Thanks and regards
>
> Gino Mathews K
>
>
>
> *From:* panfei [mailto:[hidden email]]
>
> *Sent:* Thursday, April 03, 2014 11:37 AM
> *To:* [hidden email]
> *Subject:* Re: Error when running Spark on Mesos
>
>
>
> After upgrading to 0.9.1, everything goes well now. Thanks for the reply.
>
>
>
>
>
>
>
>
>
> rad@master:~/Downloads/spark-0.9.0-incubating$ MASTER=mesos://master:5050
> bin/spark-shell
>
> 14/04/03 15:50:21 INFO HttpServer: Using Spark's default log4j profile:
> org/apache/spark/log4j-defaults.properties
>
> 14/04/03 15:50:22 INFO HttpServer: Starting HTTP Server
>
> Welcome to
>
>       ____              __
>
>      / __/__  ___ _____/ /__
>
>     _\ \/ _ \/ _ `/ __/  '_/
>
>    /___/ .__/\_,_/_/ /_/\_\   version 0.9.0
>
>       /_/
>
>
>
> Using Scala version 2.10.3 (Java HotSpot(TM) Server VM, Java 1.7.0_51)
>
> Type in expressions to have them evaluated.
>
> Type :help for more information.
>
> 14/04/03 15:50:34 INFO Slf4jLogger: Slf4jLogger started
>
> 14/04/03 15:50:35 INFO Remoting: Starting remoting
>
> 14/04/03 15:50:35 INFO Remoting: Remoting started; listening on addresses
> :[akka.tcp://spark@master:41611]
>
> 14/04/03 15:50:35 INFO Remoting: Remoting now listens on addresses:
> [akka.tcp://spark@master:41611]
>
> 14/04/03 15:50:35 INFO SparkEnv: Registering BlockManagerMaster
>
> 14/04/03 15:50:36 INFO DiskBlockManager: Created local directory at
> /tmp/spark-local-20140403155036-7eff
>
> 14/04/03 15:50:36 INFO MemoryStore: MemoryStore started with capacity
> 294.6 MB.
>
> 14/04/03 15:50:36 INFO ConnectionManager: Bound socket to port 37833 with
> id = ConnectionManagerId(master,37833)
>
> 14/04/03 15:50:36 INFO BlockManagerMaster: Trying to register BlockManager
>
> 14/04/03 15:50:36 INFO BlockManagerMasterActor$BlockManagerInfo:
> Registering block manager master:37833 with 294.6 MB RAM
>
> 14/04/03 15:50:36 INFO BlockManagerMaster: Registered BlockManager
>
> 14/04/03 15:50:36 INFO HttpServer: Starting HTTP Server
>
> 14/04/03 15:50:36 INFO HttpBroadcast: Broadcast server started at
> http://192.168.0.138:34083
>
> 14/04/03 15:50:36 INFO SparkEnv: Registering MapOutputTracker
>
> 14/04/03 15:50:36 INFO HttpFileServer: HTTP File server directory is
> /tmp/spark-8241eae9-be7c-4ddd-9258-206927d0abcf
>
> 14/04/03 15:50:36 INFO HttpServer: Starting HTTP Server
>
> 14/04/03 15:50:37 INFO SparkUI: Started Spark Web UI at http://master:4040
>
> #
>
> # A fatal error has been detected by the Java Runtime Environment:
>
> #
>
> #  SIGSEGV (0xb) at pc=0xb6d1053e, pid=9496, tid=2305694528
>
> #
>
> # JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build
> 1.7.0_51-b13)
>
> # Java VM: Java HotSpot(TM) Server VM (24.51-b03 mixed mode linux-x86 )
>
> # Problematic frame:
>
> # V  [libjvm.so+0x4bf53e]  jni_GetByteArrayElements+0x7e
>
> #
>
> # Core dump written. Default location:
> /home/rad/Downloads/spark-0.9.0-incubating/core or core.9496
>
> #
>
> # An error report file with more information is saved as:
>
> # /home/rad/Downloads/spark-0.9.0-incubating/hs_err_pid9496.log
>
> #
>
> # If you would like to submit a bug report, please visit:
>
> #   http://bugreport.sun.com/bugreport/crash.jsp
>
> #
>
> bin/spark-shell: line 97:  9496 Aborted                 (core dumped)
> $FWDIR/bin/spark-class $OPTIONS org.apache.spark.repl.Main "$@"
>
> *hs_err_pid9496.log* (65K) Download Attachment<http://apache-spark-user-list.1001560.n3.nabble.com/attachment/3702/0/hs_err_pid9496.log>
>
>
>



-- 
If you don't learn, you don't know.





RE: Error when running Spark on Mesos

Posted by Gino Mathews <gi...@thinkpalm.com>.
Hi,

I have installed Spark 0.9.0 on Ubuntu 12.04 LTS with Hadoop 2.2.0 and am able to successfully run a few apps in a 1+2 standalone configuration. I tried both standalone apps as well as spark-shell. But when I tried various Mesos versions from 0.13.0 to 0.17.0, both the standalone app and spark-shell fail with a segmentation fault during initialization. I haven't seen the Spark app trying to connect to Mesos at all. The invocation log of spark-shell is pasted below. Btw, is there any documentation on how to upgrade Spark to 0.9.1?
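
For reference, the Mesos integration in Spark 0.9.x is driven by a couple of settings in conf/spark-env.sh; here is a minimal sketch (the library path, HDFS URL and master address are placeholders, not values taken from this thread):

# conf/spark-env.sh on every node running Spark
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so                      # must point at the libmesos.so of the Mesos version actually installed
export SPARK_EXECUTOR_URI=hdfs://master:9000/spark/spark-0.9.1-bin.tar.gz   # Spark tarball that the Mesos slaves will fetch

# then launch against the Mesos master
MASTER=mesos://master:5050 bin/spark-shell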

Thanks and regards
Gino Mathews K

From: panfei [mailto:cnweike@gmail.com]
Sent: Thursday, April 03, 2014 11:37 AM
To: user@spark.apache.org
Subject: Re: Error when running Spark on Mesos

After upgrading to 0.9.1, everything goes well now. Thanks for the reply.




rad@master:~/Downloads/spark-0.9.0-incubating$ MASTER=mesos://master:5050 bin/spark-shell
14/04/03 15:50:21 INFO HttpServer: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/04/03 15:50:22 INFO HttpServer: Starting HTTP Server
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 0.9.0
      /_/

Using Scala version 2.10.3 (Java HotSpot(TM) Server VM, Java 1.7.0_51)
Type in expressions to have them evaluated.
Type :help for more information.
14/04/03 15:50:34 INFO Slf4jLogger: Slf4jLogger started
14/04/03 15:50:35 INFO Remoting: Starting remoting
14/04/03 15:50:35 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@master:41611]
14/04/03 15:50:35 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@master:41611]
14/04/03 15:50:35 INFO SparkEnv: Registering BlockManagerMaster
14/04/03 15:50:36 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140403155036-7eff
14/04/03 15:50:36 INFO MemoryStore: MemoryStore started with capacity 294.6 MB.
14/04/03 15:50:36 INFO ConnectionManager: Bound socket to port 37833 with id = ConnectionManagerId(master,37833)
14/04/03 15:50:36 INFO BlockManagerMaster: Trying to register BlockManager
14/04/03 15:50:36 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager master:37833 with 294.6 MB RAM
14/04/03 15:50:36 INFO BlockManagerMaster: Registered BlockManager
14/04/03 15:50:36 INFO HttpServer: Starting HTTP Server
14/04/03 15:50:36 INFO HttpBroadcast: Broadcast server started at http://192.168.0.138:34083
14/04/03 15:50:36 INFO SparkEnv: Registering MapOutputTracker
14/04/03 15:50:36 INFO HttpFileServer: HTTP File server directory is /tmp/spark-8241eae9-be7c-4ddd-9258-206927d0abcf
14/04/03 15:50:36 INFO HttpServer: Starting HTTP Server
14/04/03 15:50:37 INFO SparkUI: Started Spark Web UI at http://master:4040
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0xb6d1053e, pid=9496, tid=2305694528
#
# JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build 1.7.0_51-b13)
# Java VM: Java HotSpot(TM) Server VM (24.51-b03 mixed mode linux-x86 )
# Problematic frame:
# V  [libjvm.so+0x4bf53e]  jni_GetByteArrayElements+0x7e
#
# Core dump written. Default location: /home/rad/Downloads/spark-0.9.0-incubating/core or core.9496
#
# An error report file with more information is saved as:
# /home/rad/Downloads/spark-0.9.0-incubating/hs_err_pid9496.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#
bin/spark-shell: line 97:  9496 Aborted                 (core dumped) $FWDIR/bin/spark-class $OPTIONS org.apache.spark.repl.Main "$@"

Re: Error when running Spark on Mesos

Posted by panfei <cn...@gmail.com>.
After upgrading to 0.9.1, everything goes well now. Thanks for the reply.


2014-04-03 13:47 GMT+08:00 andy petrella <an...@gmail.com>:

> Hello,
> It's indeed due to a known bug, but using another IP for the driver won't
> be enough (other problems will pop up).
> An easy solution would be to switch to 0.9.1; see
> http://apache-spark-user-list.1001560.n3.nabble.com/ActorNotFound-problem-for-mesos-driver-td3636.html
>
> Hth
> Andy
> On 3 Apr 2014 at 06:34, "Ian Ferreira" <ia...@hotmail.com> wrote:
>
>> I think this is related to a known issue (regression) in 0.9.0. Try using an
>> explicit IP other than the loopback address.
>>
>> Sent from a mobile device
>>
>> On Apr 2, 2014, at 8:53 PM, "panfei" <cn...@gmail.com> wrote:
>>
>> Any advice?
>>
>>
>> 2014-04-03 11:35 GMT+08:00 felix <cn...@gmail.com>:
>>
>>> I deployed Mesos and tested it using the example/test-framework script;
>>> Mesos seems OK, but when running Spark on the Mesos cluster, the Mesos slave
>>> nodes report the following exception. Can anyone help me fix this?
>>> Thanks in advance: 14/04/03 11:24:39 INFO Slf4jLogger: Slf4jLogger started
>>> 14/04/03 11:24:39 INFO Remoting: Starting remoting 14/04/03 11:24:39 INFO
>>> Remoting: Remoting started; listening on addresses :[akka.tcp://spark@pdn00:40265]
>>> 14/04/03 11:24:39 INFO Remoting: Remoting now listens on addresses:
>>> [akka.tcp://spark@pdn00:40265] 14/04/03 11:24:39 INFO SparkEnv:
>>> Connecting to BlockManagerMaster: akka.tcp://spark@localhost:7077/user/BlockManagerMaster
>>> akka.actor.ActorNotFound: Actor not found for:
>>> ActorSelection[Actor[akka.tcp://spark@localhost:7077/]/user/BlockManagerMaster]
>>> at
>>> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:66)
>>> at
>>> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:64)
>>> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at
>>> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>>> at
>>> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>>> at
>>> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>>> at
>>> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>>> at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>>> at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58) at
>>> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
>>> at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
>>> at
>>> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
>>> at
>>> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>>> at
>>> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>>> at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:269) at
>>> akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:512) at
>>> akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:545) at
>>> akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:535) at
>>> akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:91)
>>> at akka.actor.ActorRef.tell(ActorRef.scala:125) at
>>> akka.dispatch.Mailboxes$$anon$1$$anon$2.enqueue(Mailboxes.scala:44) at
>>> akka.dispatch.QueueBasedMessageQueue$class.cleanUp(Mailbox.scala:438) at
>>> akka.dispatch.UnboundedDequeBasedMailbox$MessageQueue.cleanUp(Mailbox.scala:650)
>>> at akka.dispatch.Mailbox.cleanUp(Mailbox.scala:309) at
>>> akka.dispatch.MessageDispatcher.unregister(AbstractDispatcher.scala:204) at
>>> akka.dispatch.MessageDispatcher.detach(AbstractDispatcher.scala:140) at
>>> akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:203)
>>> at
>>> akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:163)
>>> at akka.actor.ActorCell.terminate(ActorCell.scala:338) at
>>> akka.actor.ActorCell.invokeAll$1(ActorCell.scala:431) at
>>> akka.actor.ActorCell.systemInvoke(ActorCell.scala:447) at
>>> akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262) at
>>> akka.dispatch.Mailbox.run(Mailbox.scala:218) at
>>> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
>>> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) at
>>> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>> at
>>> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>>
>>>
>>
>>
>>
>> --
>> If you don't learn, you don't know.
>>
>>


-- 
If you don't learn, you don't know.

Re: Error when running Spark on Mesos

Posted by andy petrella <an...@gmail.com>.
Hello,
It's indeed due to a known bug, but using another IP for the driver won't
be enough (other problems will pop up).
An easy solution would be to switch to 0.9.1; see
http://apache-spark-user-list.1001560.n3.nabble.com/ActorNotFound-problem-for-mesos-driver-td3636.html

Hth
Andy
On 3 Apr 2014 at 06:34, "Ian Ferreira" <ia...@hotmail.com> wrote:

> I think this is related to a known issue (regression) in 0.9.0. Try using an
> explicit IP other than the loopback address.
>
> Sent from a mobile device
>
> On Apr 2, 2014, at 8:53 PM, "panfei" <cn...@gmail.com> wrote:
>
> Any advice?
>
>
> 2014-04-03 11:35 GMT+08:00 felix <cn...@gmail.com>:
>
>> I deployed Mesos and tested it using the example/test-framework script;
>> Mesos seems OK, but when running Spark on the Mesos cluster, the Mesos slave
>> nodes report the following exception. Can anyone help me fix this?
>> Thanks in advance: 14/04/03 11:24:39 INFO Slf4jLogger: Slf4jLogger started
>> 14/04/03 11:24:39 INFO Remoting: Starting remoting 14/04/03 11:24:39 INFO
>> Remoting: Remoting started; listening on addresses :[akka.tcp://spark@pdn00:40265]
>> 14/04/03 11:24:39 INFO Remoting: Remoting now listens on addresses:
>> [akka.tcp://spark@pdn00:40265] 14/04/03 11:24:39 INFO SparkEnv:
>> Connecting to BlockManagerMaster: akka.tcp://spark@localhost:7077/user/BlockManagerMaster
>> akka.actor.ActorNotFound: Actor not found for:
>> ActorSelection[Actor[akka.tcp://spark@localhost:7077/]/user/BlockManagerMaster]
>> at
>> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:66)
>> at
>> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:64)
>> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at
>> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>> at
>> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>> at
>> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>> at
>> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>> at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>> at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58) at
>> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
>> at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
>> at
>> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
>> at
>> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>> at
>> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>> at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:269) at
>> akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:512) at
>> akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:545) at
>> akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:535) at
>> akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:91)
>> at akka.actor.ActorRef.tell(ActorRef.scala:125) at
>> akka.dispatch.Mailboxes$$anon$1$$anon$2.enqueue(Mailboxes.scala:44) at
>> akka.dispatch.QueueBasedMessageQueue$class.cleanUp(Mailbox.scala:438) at
>> akka.dispatch.UnboundedDequeBasedMailbox$MessageQueue.cleanUp(Mailbox.scala:650)
>> at akka.dispatch.Mailbox.cleanUp(Mailbox.scala:309) at
>> akka.dispatch.MessageDispatcher.unregister(AbstractDispatcher.scala:204) at
>> akka.dispatch.MessageDispatcher.detach(AbstractDispatcher.scala:140) at
>> akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:203)
>> at
>> akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:163)
>> at akka.actor.ActorCell.terminate(ActorCell.scala:338) at
>> akka.actor.ActorCell.invokeAll$1(ActorCell.scala:431) at
>> akka.actor.ActorCell.systemInvoke(ActorCell.scala:447) at
>> akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262) at
>> akka.dispatch.Mailbox.run(Mailbox.scala:218) at
>> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
>> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) at
>> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>> at
>> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>
>>
>
>
>
> --
> If you don't learn, you don't know.
>
>

Re: Error when running Spark on Mesos

Posted by Ian Ferreira <ia...@hotmail.com>.
I think this is related to a known issue (regression) in 0.9.0. Try using an explicit IP other than the loopback address.

Sent from a mobile device
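
A minimal sketch of what "an explicit IP other than the loopback" can look like on the driver side; the address below is only an example, and SPARK_LOCAL_IP / spark.driver.host are stock Spark settings rather than anything prescribed in this thread:

# on the driver machine, advertise a routable address instead of localhost/127.0.0.1
export SPARK_LOCAL_IP=192.168.0.138
export SPARK_JAVA_OPTS="-Dspark.driver.host=192.168.0.138"    # 0.9-era way to pass driver system properties
MASTER=mesos://master:5050 bin/spark-shell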

> On Apr 2, 2014, at 8:53 PM, "panfei" <cn...@gmail.com> wrote:
> 
> Any advice?
> 
> 
> 2014-04-03 11:35 GMT+08:00 felix <cn...@gmail.com>:
>> I deployed Mesos and tested it using the example/test-framework script; Mesos seems OK. But when running Spark on the Mesos cluster, the Mesos slave nodes report the following exception. Can anyone help me fix this? Thanks in advance:
>> 14/04/03 11:24:39 INFO Slf4jLogger: Slf4jLogger started
>> 14/04/03 11:24:39 INFO Remoting: Starting remoting
>> 14/04/03 11:24:39 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@pdn00:40265]
>> 14/04/03 11:24:39 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@pdn00:40265]
>> 14/04/03 11:24:39 INFO SparkEnv: Connecting to BlockManagerMaster: akka.tcp://spark@localhost:7077/user/BlockManagerMaster
>> akka.actor.ActorNotFound: Actor not found for: ActorSelection[Actor[akka.tcp://spark@localhost:7077/]/user/BlockManagerMaster]
>> at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:66)
>> at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:64)
>> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>> at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>> at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>> at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>> at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>> at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>> at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
>> at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
>> at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
>> at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
>> at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>> at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>> at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:269)
>> at akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:512)
>> at akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:545)
>> at akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:535)
>> at akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:91)
>> at akka.actor.ActorRef.tell(ActorRef.scala:125)
>> at akka.dispatch.Mailboxes$$anon$1$$anon$2.enqueue(Mailboxes.scala:44)
>> at akka.dispatch.QueueBasedMessageQueue$class.cleanUp(Mailbox.scala:438)
>> at akka.dispatch.UnboundedDequeBasedMailbox$MessageQueue.cleanUp(Mailbox.scala:650)
>> at akka.dispatch.Mailbox.cleanUp(Mailbox.scala:309)
>> at akka.dispatch.MessageDispatcher.unregister(AbstractDispatcher.scala:204)
>> at akka.dispatch.MessageDispatcher.detach(AbstractDispatcher.scala:140)
>> at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:203)
>> at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:163)
>> at akka.actor.ActorCell.terminate(ActorCell.scala:338)
>> at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:431)
>> at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
>> at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
>> at akka.dispatch.Mailbox.run(Mailbox.scala:218)
>> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
>> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 
> 
> 
> -- 
> If you don't learn, you don't know.

Re: Error when running Spark on Mesos

Posted by panfei <cn...@gmail.com>.
Any advice?


2014-04-03 11:35 GMT+08:00 felix <cn...@gmail.com>:

> I deployed Mesos and tested it using the example/test-framework script;
> Mesos seems OK, but when running Spark on the Mesos cluster, the Mesos slave
> nodes report the following exception. Can anyone help me fix this?
> Thanks in advance: 14/04/03 11:24:39 INFO Slf4jLogger: Slf4jLogger started
> 14/04/03 11:24:39 INFO Remoting: Starting remoting 14/04/03 11:24:39 INFO
> Remoting: Remoting started; listening on addresses :[akka.tcp://spark@pdn00:40265]
> 14/04/03 11:24:39 INFO Remoting: Remoting now listens on addresses:
> [akka.tcp://spark@pdn00:40265] 14/04/03 11:24:39 INFO SparkEnv:
> Connecting to BlockManagerMaster: akka.tcp://spark@localhost:7077/user/BlockManagerMaster
> akka.actor.ActorNotFound: Actor not found for:
> ActorSelection[Actor[akka.tcp://spark@localhost:7077/]/user/BlockManagerMaster]
> at
> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:66)
> at
> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:64)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
> at
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
> at
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
> at
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
> at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
> at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58) at
> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
> at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
> at
> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
> at
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
> at
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
> at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:269) at
> akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:512) at
> akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:545) at
> akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:535) at
> akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:91)
> at akka.actor.ActorRef.tell(ActorRef.scala:125) at
> akka.dispatch.Mailboxes$$anon$1$$anon$2.enqueue(Mailboxes.scala:44) at
> akka.dispatch.QueueBasedMessageQueue$class.cleanUp(Mailbox.scala:438) at
> akka.dispatch.UnboundedDequeBasedMailbox$MessageQueue.cleanUp(Mailbox.scala:650)
> at akka.dispatch.Mailbox.cleanUp(Mailbox.scala:309) at
> akka.dispatch.MessageDispatcher.unregister(AbstractDispatcher.scala:204) at
> akka.dispatch.MessageDispatcher.detach(AbstractDispatcher.scala:140) at
> akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:203)
> at
> akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:163)
> at akka.actor.ActorCell.terminate(ActorCell.scala:338) at
> akka.actor.ActorCell.invokeAll$1(ActorCell.scala:431) at
> akka.actor.ActorCell.systemInvoke(ActorCell.scala:447) at
> akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262) at
> akka.dispatch.Mailbox.run(Mailbox.scala:218) at
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
>



-- 
If you don't learn, you don't know.