Posted to user@spark.apache.org by Sung Hwan Chung <co...@cs.stanford.edu> on 2014/03/26 21:17:49 UTC

YARN problem using an external jar in worker nodes

Hello, (this is Yarn related)

I'm able to load an external jar and use its classes within the
ApplicationMaster. I'd like to use this jar within the worker nodes as well,
so I added sc.addJar(pathToJar) and ran the job.
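
Roughly, the code looks like this (a minimal sketch; the app name, jar path,
and job body are made up for illustration):

import org.apache.spark.{SparkConf, SparkContext}
import org.opencv.objdetect.HOGDescriptor

object HogJob {
  def main(args: Array[String]) {
    // We run on YARN; the app name is made up.
    val conf = new SparkConf().setMaster("yarn-standalone").setAppName("hog-job")
    val sc = new SparkContext(conf)

    // Ship the external jar to the executors (the path is made up):
    sc.addJar("/opt/libs/opencv.jar")

    // Referencing a class from the jar inside a task forces the executor
    // to load it; this is where the NoClassDefFoundError shows up:
    sc.parallelize(1 to 4).foreach { _ =>
      println(classOf[HOGDescriptor].getName)
    }

    sc.stop()
  }
}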

I get the following exception:

org.apache.spark.SparkException: Job aborted: Task 0.0:1 failed 4
times (most recent failure: Exception failure:
java.lang.NoClassDefFoundError: org/opencv/objdetect/HOGDescriptor)
Job aborted: Task 0.0:1 failed 4 times (most recent failure: Exception
failure: java.lang.NoClassDefFoundError:
org/opencv/objdetect/HOGDescriptor)
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1026)
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1026)
org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
scala.Option.foreach(Option.scala:236)
org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:619)
org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
akka.actor.ActorCell.invoke(ActorCell.scala:456)
akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
akka.dispatch.Mailbox.run(Mailbox.scala:219)
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)



And in the worker-node containers' stderr logs (there's nothing in the stdout
logs), I don't see any reference to the jar being loaded:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/home/gpphddata/1/yarn/nm-local-dir/usercache/yarn/filecache/7394400996676014282/spark-assembly-0.9.0-incubating-hadoop2.0.2-alpha-gphd-2.0.1.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/usr/lib/gphd/hadoop-2.0.2_alpha_gphd_2_0_1_0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/03/26 13:12:18 INFO slf4j.Slf4jLogger: Slf4jLogger started
14/03/26 13:12:18 INFO Remoting: Starting remoting
14/03/26 13:12:18 INFO Remoting: Remoting started; listening on
addresses :[akka.tcp://sparkExecutor@alpinenode6.alpinenow.local:44006]
14/03/26 13:12:18 INFO Remoting: Remoting now listens on addresses:
[akka.tcp://sparkExecutor@alpinenode6.alpinenow.local:44006]
14/03/26 13:12:18 INFO executor.CoarseGrainedExecutorBackend:
Connecting to driver:
akka.tcp://spark@alpinenode5.alpinenow.local:10314/user/CoarseGrainedScheduler
14/03/26 13:12:18 ERROR executor.CoarseGrainedExecutorBackend: Driver
Disassociated [akka.tcp://sparkExecutor@alpinenode6.alpinenow.local:44006]
-> [akka.tcp://spark@alpinenode5.alpinenow.local:10314] disassociated!
Shutting down.



Any idea what's going on?

Re: YARN problem using an external jar in worker nodes

Posted by Sandy Ryza <sa...@cloudera.com>.
That bug only appears to apply to spark-shell.

Do things work in yarn-client mode or on a standalone cluster?  Are you
passing a path with parent directories to addJar?


On Thu, Mar 27, 2014 at 3:01 PM, Sung Hwan Chung <co...@cs.stanford.edu> wrote:

> Well, the log says that the jar was successfully added, but the tasks can't
> reference classes from it. Does this have anything to do with this bug?
>
>
> http://stackoverflow.com/questions/22457645/when-to-use-spark-classpath-or-sparkcontext-addjar

Re: YARN problem using an external jar in worker nodes

Posted by Sung Hwan Chung <co...@cs.stanford.edu>.
Well, the log says that the jar was successfully added, but the tasks can't
reference classes from it. Does this have anything to do with this bug?

http://stackoverflow.com/questions/22457645/when-to-use-spark-classpath-or-sparkcontext-addjar


On Thu, Mar 27, 2014 at 2:57 PM, Sandy Ryza <sa...@cloudera.com> wrote:

> I just tried this in CDH (only a few patches ahead of 0.9.0) and was able
> to include a dependency with --addJars successfully.
>
> Can you share how you're invoking SparkContext.addJar?  Anything
> interesting in the application master logs?
>
> -Sandy

Re: YARN problem using an external jar in worker nodes

Posted by Sandy Ryza <sa...@cloudera.com>.
I just tried this in CDH (only a few patches ahead of 0.9.0) and was able
to include a dependency with --addJars successfully.

Can you share how you're invoking SparkContext.addJar?  Anything
interesting in the application master logs?

-Sandy




On Thu, Mar 27, 2014 at 11:35 AM, Sung Hwan Chung <co...@cs.stanford.edu> wrote:

> Yea, it's in yarn-standalone mode, and I did use the SparkContext.addJar
> method and tried setting setExecutorEnv("SPARK_CLASSPATH", ...), etc., but
> none of it worked.
>
> I finally made it work by modifying the ClientBase.scala code where I set
> 'appMasterOnly' to false before the addJars contents were added to
> distCacheMgr. But this is not what I should be doing, right?
>
> Is there a problem with addJar method in 0.9.0?

Re: YARN problem using an external jar in worker nodes

Posted by Sung Hwan Chung <co...@cs.stanford.edu>.
Yea, it's in yarn-standalone mode, and I did use the SparkContext.addJar
method and tried setting setExecutorEnv("SPARK_CLASSPATH", ...), etc., but
none of it worked.
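
Concretely, the attempts looked something like this (again a sketch; the jar
path is made up):

import org.apache.spark.{SparkConf, SparkContext}

// Inside main(), as in the sketch from my first mail:
val conf = new SparkConf()
  .setMaster("yarn-standalone")
  .setAppName("hog-job")
  // Attempt 1: put the jar on the executors' classpath via the env var:
  .setExecutorEnv("SPARK_CLASSPATH", "/opt/libs/opencv.jar")
val sc = new SparkContext(conf)

// Attempt 2: distribute the jar at runtime:
sc.addJar("/opt/libs/opencv.jar")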

I finally made it work by modifying the ClientBase.scala code where I set
'appMasterOnly' to false before the addJars contents were added to
distCacheMgr. But this is not what I should be doing, right?

Is there a problem with addJar method in 0.9.0?


On Wed, Mar 26, 2014 at 1:47 PM, Sandy Ryza <sa...@cloudera.com> wrote:

> Hi Sung,
>
> Are you using yarn-standalone mode?  Have you specified the --addJars
> option with your external jars?
>
> -Sandy

Re: YARN problem using an external jar in worker nodes

Posted by Sandy Ryza <sa...@cloudera.com>.
Hi Sung,

Are you using yarn-standalone mode?  Have you specified the --addJars
option with your external jars?
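
For reference, a yarn-standalone launch with extra jars looks roughly like
this (the assembly jar name below is the one from your executor log; the app
jar, main class, and resource sizes are placeholders):

SPARK_JAR=/path/to/spark-assembly-0.9.0-incubating-hadoop2.0.2-alpha-gphd-2.0.1.0.jar \
./bin/spark-class org.apache.spark.deploy.yarn.Client \
  --jar my-app.jar \
  --class com.example.HogJob \
  --addJars /opt/libs/opencv.jar \
  --num-workers 2 \
  --worker-memory 2g \
  --worker-cores 1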

-Sandy

