Posted to user@spark.apache.org by Chang Lim <ch...@gmail.com> on 2014/05/28 18:53:18 UTC

Re: Spark Streaming RDD to Shark table

OK... I needed to set the JVM classpath so the worker could find the fb303 class:
env.put("SPARK_JAVA_OPTS",
"-Djava.class.path=/home/myInc/hive-0.9.0-bin/lib/libfb303.jar");

Now I am seeing the following "spark.httpBroadcast.uri" error. What am I missing?

java.util.NoSuchElementException: spark.httpBroadcast.uri
	at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:151)
	at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:151)
	at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
	at scala.collection.AbstractMap.getOrElse(Map.scala:58)
	at org.apache.spark.SparkConf.get(SparkConf.scala:151)
	at org.apache.spark.broadcast.HttpBroadcast$.initialize(HttpBroadcast.scala:104)
	at org.apache.spark.broadcast.HttpBroadcastFactory.initialize(HttpBroadcast.scala:70)
	at org.apache.spark.broadcast.BroadcastManager.initialize(Broadcast.scala:81)
	at org.apache.spark.broadcast.BroadcastManager.<init>(Broadcast.scala:68)
	at org.apache.spark.SparkEnv$.create(SparkEnv.scala:175)
	at org.apache.spark.executor.Executor.<init>(Executor.scala:110)
	at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$receive$1.applyOrElse(CoarseGrainedExecutorBackend.scala:56)
	. . .
14/05/27 15:26:45 INFO CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://spark@clim2-dsv.myInc.ad.myInccorp.com:3694/user/CoarseGrainedScheduler
14/05/27 15:26:46 ERROR CoarseGrainedExecutorBackend: Slave registration failed: Duplicate executor ID: 8
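As far as I can tell from the trace, SparkConf.get does a strict map lookup and throws if the key was never set on the executor side. A minimal Java sketch of that failure mode (class name and values are made up for illustration, not from Spark):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

// Hypothetical stand-in for SparkConf's strict get(key): look the key up
// in the underlying settings map and throw if it was never set.
public class ConfDemo {
    static final Map<String, String> settings = new HashMap<>();

    static String get(String key) {
        String value = settings.get(key);
        if (value == null) {
            // Same exception type the executor log above shows.
            throw new NoSuchElementException(key);
        }
        return value;
    }

    public static void main(String[] args) {
        settings.put("spark.master", "spark://driver-host:7077");
        System.out.println(get("spark.master"));
        try {
            // Never set on this "executor", so the lookup blows up,
            // just like spark.httpBroadcast.uri does in the stack trace.
            get("spark.httpBroadcast.uri");
        } catch (NoSuchElementException e) {
            System.out.println("missing key: " + e.getMessage());
        }
    }
}
```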

===========================
Full Stack:
===========================
Spark Executor Command: "/usr/lib/jvm/java-7-openjdk-i386/bin/java" "-cp"
":/home/myInc/spark-0.9.1-bin-hadoop1/conf:/home/myInc/spark-0.9.1-bin-hadoop1/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop1.0.4.jar"
"-Djava.library.path=/home/myInc/hive-0.9.0-bin/lib/libfb303.jar"
"-Djava.library.path=/home/myInc/hive-0.9.0-bin/lib/libfb303.jar" "-Xms512M"
"-Xmx512M" "org.apache.spark.executor.CoarseGrainedExecutorBackend"
"akka.tcp://spark@clim2-dsv.myInc.ad.myInccorp.com:3694/user/CoarseGrainedScheduler"
"8" "tahiti-ins.myInc.ad.myInccorp.com" "1"
"akka.tcp://sparkWorker@tahiti-ins.myInc.ad.myInccorp.com:37841/user/Worker"
"app-20140527152556-0029"
========================================

log4j:WARN No appenders could be found for logger (akka.event.slf4j.Slf4jLogger).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
14/05/27 15:26:44 INFO CoarseGrainedExecutorBackend: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/05/27 15:26:44 INFO WorkerWatcher: Connecting to worker akka.tcp://sparkWorker@tahiti-ins.myInc.ad.myInccorp.com:37841/user/Worker
14/05/27 15:26:44 INFO CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://spark@clim2-dsv.myInc.ad.myInccorp.com:3694/user/CoarseGrainedScheduler
14/05/27 15:26:45 INFO WorkerWatcher: Successfully connected to akka.tcp://sparkWorker@tahiti-ins.myInc.ad.myInccorp.com:37841/user/Worker
14/05/27 15:26:45 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
14/05/27 15:26:45 INFO Slf4jLogger: Slf4jLogger started
14/05/27 15:26:45 INFO Remoting: Starting remoting
14/05/27 15:26:45 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@tahiti-ins.myInc.ad.myInccorp.com:43488]
14/05/27 15:26:45 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@tahiti-ins.myInc.ad.myInccorp.com:43488]
14/05/27 15:26:45 INFO SparkEnv: Connecting to BlockManagerMaster: akka.tcp://spark@clim2-dsv.myInc.ad.myInccorp.com:3694/user/BlockManagerMaster
14/05/27 15:26:45 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140527152645-b13b
14/05/27 15:26:45 INFO MemoryStore: MemoryStore started with capacity 297.0 MB.
14/05/27 15:26:45 INFO ConnectionManager: Bound socket to port 55853 with id = ConnectionManagerId(tahiti-ins.myInc.ad.myInccorp.com,55853)
14/05/27 15:26:45 INFO BlockManagerMaster: Trying to register BlockManager
14/05/27 15:26:45 INFO BlockManagerMaster: Registered BlockManager
14/05/27 15:26:45 ERROR OneForOneStrategy: spark.httpBroadcast.uri
java.util.NoSuchElementException: spark.httpBroadcast.uri
	at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:151)
	at org.apache.spark.SparkConf$$anonfun$get$1.apply(SparkConf.scala:151)
	at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
	at scala.collection.AbstractMap.getOrElse(Map.scala:58)
	at org.apache.spark.SparkConf.get(SparkConf.scala:151)
	at org.apache.spark.broadcast.HttpBroadcast$.initialize(HttpBroadcast.scala:104)
	at org.apache.spark.broadcast.HttpBroadcastFactory.initialize(HttpBroadcast.scala:70)
	at org.apache.spark.broadcast.BroadcastManager.initialize(Broadcast.scala:81)
	at org.apache.spark.broadcast.BroadcastManager.<init>(Broadcast.scala:68)
	at org.apache.spark.SparkEnv$.create(SparkEnv.scala:175)
	at org.apache.spark.executor.Executor.<init>(Executor.scala:110)
	at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$receive$1.applyOrElse(CoarseGrainedExecutorBackend.scala:56)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
	at akka.actor.ActorCell.invoke(ActorCell.scala:456)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
	at akka.dispatch.Mailbox.run(Mailbox.scala:219)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/05/27 15:26:45 INFO CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://spark@clim2-dsv.myInc.ad.myInccorp.com:3694/user/CoarseGrainedScheduler
14/05/27 15:26:46 ERROR CoarseGrainedExecutorBackend: Slave registration failed: Duplicate executor ID: 8





--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-RDD-to-Shark-table-tp5063p6485.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.