Posted to issues@spark.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2016/06/03 17:45:59 UTC

[jira] [Commented] (SPARK-2243) Support multiple SparkContexts in the same JVM

    [ https://issues.apache.org/jira/browse/SPARK-2243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15314470#comment-15314470 ] 

ASF GitHub Bot commented on SPARK-2243:
---------------------------------------

Github user bbuild11 commented on the issue:

    https://github.com/apache/incubator-zeppelin/pull/868
  
    I get this in the Zeppelin log file:
    
    ERROR [2016-06-03 16:49:20,929] ({qtp519821334-82} NotebookServer.java[onMessage]:210) - Can't handle message
    java.lang.NullPointerException
    	at org.apache.zeppelin.socket.NotebookServer.onMessage(NotebookServer.java:128)
    	at org.apache.zeppelin.socket.NotebookSocket.onWebSocketText(NotebookSocket.java:56)
    	at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextMessage(JettyListenerEventDriver.java:128)
    	at org.eclipse.jetty.websocket.common.message.SimpleTextMessage.messageComplete(SimpleTextMessage.java:69)
    	at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
    	at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)
    	at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.incomingFrame(AbstractEventDriver.java:161)
    	at org.eclipse.jetty.websocket.common.WebSocketSession.incomingFrame(WebSocketSession.java:309)
    	at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.incomingFrame(ExtensionStack.java:214)
    	at org.eclipse.jetty.websocket.common.Parser.notifyFrame(Parser.java:220)
    	at org.eclipse.jetty.websocket.common.Parser.parse(Parser.java:258)
    	at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.readParse(AbstractWebSocketConnection.java:632)
    	at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:480)
    	at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
    	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
    	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
    	at java.lang.Thread.run(Thread.java:745)
     INFO [2016-06-03 16:54:20,934] ({qtp519821334-86} NotebookServer.java[onClose]:216) - Closed connection to 10.4.67.203 : 56658. (1001) Idle Timeout
    ERROR [2016-06-03 17:06:45,825] ({Thread-21} JobProgressPoller.java[run]:54) - Can not get or update progress
    org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException
    	at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getProgress(RemoteInterpreter.java:361)
    	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getProgress(LazyOpenInterpreter.java:110)
    	at org.apache.zeppelin.notebook.Paragraph.progress(Paragraph.java:226)
    	at org.apache.zeppelin.scheduler.JobProgressPoller.run(JobProgressPoller.java:51)
    Caused by: org.apache.thrift.transport.TTransportException
    	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    	at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
    	at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
    	at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
    	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    	at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_getProgress(RemoteInterpreterService.java:279)
    	at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.getProgress(RemoteInterpreterService.java:264)
    	at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getProgress(RemoteInterpreter.java:358)
    	... 3 more
    ERROR [2016-06-03 17:26:57,357] ({Thread-40} JobProgressPoller.java[run]:54) - Can not get or update progress
    org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException
    	at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getProgress(RemoteInterpreter.java:361)
    	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getProgress(LazyOpenInterpreter.java:110)
    	at org.apache.zeppelin.notebook.Paragraph.progress(Paragraph.java:226)
    	at org.apache.zeppelin.scheduler.JobProgressPoller.run(JobProgressPoller.java:51)
    Caused by: org.apache.thrift.transport.TTransportException
    	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    	at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
    	at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
    	at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
    	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    	at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_getProgress(RemoteInterpreterService.java:279)
    	at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.getProgress(RemoteInterpreterService.java:264)
    	at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getProgress(RemoteInterpreter.java:358)
    	... 3 more
    
    And this in the Spark interpreter log file:
    
     INFO [2016-06-03 17:26:57,773] ({pool-2-thread-3} SparkInterpreter.java[createSparkContext]:225) - ------ Create new SparkContext yarn-client -------
     WARN [2016-06-03 17:26:57,773] ({pool-2-thread-3} SparkInterpreter.java[createSparkContext]:249) - Spark method classServerUri not available due to: [org.apache.spark.repl.SparkILoop$SparkILoopInterpreter.classServerUri()]
     WARN [2016-06-03 17:26:57,774] ({pool-2-thread-3} Logging.scala[logWarning]:70) - Another SparkContext is being constructed (or threw an exception in its constructor).  This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
    org.apache.spark.SparkContext.<init>(SparkContext.scala:83)
    org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:330)
    org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:118)
    org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:499)
    org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:74)
    org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
    org.apache.zeppelin.interpreter.LazyOpenInterpreter.getProgress(LazyOpenInterpreter.java:109)
    org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.getProgress(RemoteInterpreterServer.java:408)
    org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Processor$getProgress.getResult(RemoteInterpreterService.java:1492)
    org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Processor$getProgress.getResult(RemoteInterpreterService.java:1477)
    org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)
     INFO [2016-06-03 17:26:57,775] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Running Spark version 1.6.0
     WARN [2016-06-03 17:26:57,776] ({pool-2-thread-3} Logging.scala[logWarning]:70) -
    SPARK_CLASSPATH was detected (set to ':/home/zeppelin/incubator-zeppelin/interpreter/spark/dep/*:/home/zeppelin/incubator-zeppelin/interpreter/spark/*:/home/zeppelin/incubator-zeppelin/zeppelin-interpreter/target/lib/*::/home/zeppelin/incubator-zeppelin/conf:/home/zeppelin/incubator-zeppelin/conf:/home/zeppelin/incubator-zeppelin/zeppelin-interpreter/target/classes').
    This is deprecated in Spark 1.0+.
    
    Please instead use:
     - ./spark-submit with --driver-class-path to augment the driver classpath
     - spark.executor.extraClassPath to augment the executor classpath
    
     WARN [2016-06-03 17:26:57,776] ({pool-2-thread-3} Logging.scala[logWarning]:70) - Setting 'spark.executor.extraClassPath' to ':/home/zeppelin/incubator-zeppelin/interpreter/spark/dep/*:/home/zeppelin/incubator-zeppelin/interpreter/spark/*:/home/zeppelin/incubator-zeppelin/zeppelin-interpreter/target/lib/*::/home/zeppelin/incubator-zeppelin/conf:/home/zeppelin/incubator-zeppelin/conf:/home/zeppelin/incubator-zeppelin/zeppelin-interpreter/target/classes' as a work-around.
     WARN [2016-06-03 17:26:57,776] ({pool-2-thread-3} Logging.scala[logWarning]:70) - Setting 'spark.driver.extraClassPath' to ':/home/zeppelin/incubator-zeppelin/interpreter/spark/dep/*:/home/zeppelin/incubator-zeppelin/interpreter/spark/*:/home/zeppelin/incubator-zeppelin/zeppelin-interpreter/target/lib/*::/home/zeppelin/incubator-zeppelin/conf:/home/zeppelin/incubator-zeppelin/conf:/home/zeppelin/incubator-zeppelin/zeppelin-interpreter/target/classes' as a work-around.
     INFO [2016-06-03 17:26:57,776] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Changing view acls to: zeppelin
     INFO [2016-06-03 17:26:57,776] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Changing modify acls to: zeppelin
     INFO [2016-06-03 17:26:57,776] ({pool-2-thread-3} Logging.scala[logInfo]:58) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(zeppelin); users with modify permissions: Set(zeppelin)
     INFO [2016-06-03 17:26:57,784] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Successfully started service 'sparkDriver' on port 51559.
     INFO [2016-06-03 17:26:57,843] ({sparkDriverActorSystem-akka.actor.default-dispatcher-2} Slf4jLogger.scala[applyOrElse]:80) - Slf4jLogger started
     INFO [2016-06-03 17:26:57,846] ({sparkDriverActorSystem-akka.actor.default-dispatcher-3} Slf4jLogger.scala[apply$mcV$sp]:74) - Starting remoting
     INFO [2016-06-03 17:26:57,853] ({sparkDriverActorSystem-akka.actor.default-dispatcher-3} Slf4jLogger.scala[apply$mcV$sp]:74) - Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@172.28.22.133:34296]
     INFO [2016-06-03 17:26:57,853] ({sparkDriverActorSystem-akka.actor.default-dispatcher-3} Slf4jLogger.scala[apply$mcV$sp]:74) - Remoting now listens on addresses: [akka.tcp://sparkDriverActorSystem@172.28.22.133:34296]
     INFO [2016-06-03 17:26:57,854] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Successfully started service 'sparkDriverActorSystem' on port 34296.
     INFO [2016-06-03 17:26:57,854] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Registering MapOutputTracker
     INFO [2016-06-03 17:26:57,855] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Registering BlockManagerMaster
     INFO [2016-06-03 17:26:57,856] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Created local directory at /tmp/blockmgr-7aec5561-fd7c-4023-84cf-5ddd3aef88a4
     INFO [2016-06-03 17:26:57,857] ({pool-2-thread-3} Logging.scala[logInfo]:58) - MemoryStore started with capacity 528.1 MB
     INFO [2016-06-03 17:26:57,890] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Registering OutputCommitCoordinator
     INFO [2016-06-03 17:26:57,897] ({pool-2-thread-3} Server.java[doStart]:272) - jetty-8.y.z-SNAPSHOT
     INFO [2016-06-03 17:26:57,902] ({pool-2-thread-3} AbstractConnector.java[doStart]:338) - Started SelectChannelConnector@0.0.0.0:4040
     INFO [2016-06-03 17:26:57,903] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Successfully started service 'SparkUI' on port 4040.
     INFO [2016-06-03 17:26:57,904] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Started SparkUI at http://172.28.22.133:4040
     INFO [2016-06-03 17:26:57,924] ({pool-2-thread-3} Logging.scala[logInfo]:58) - Created default pool default, schedulingMode: FIFO, minShare: 0, weight: 1
     INFO [2016-06-03 17:26:57,955] ({pool-2-thread-3} RMProxy.java[createRMProxy]:98) - Connecting to ResourceManager at /0.0.0.0:8032
     INFO [2016-06-03 17:26:58,958] ({pool-2-thread-3} Client.java[handleConnectionFailure]:867) - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
     INFO [2016-06-03 17:26:59,959] ({pool-2-thread-3} Client.java[handleConnectionFailure]:867) - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
     INFO [2016-06-03 17:27:00,961] ({pool-2-thread-3} Client.java[handleConnectionFailure]:867) - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
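    
    Incidentally, the SPARK_CLASSPATH deprecation warning above suggests moving those entries to spark.driver.extraClassPath and spark.executor.extraClassPath. A minimal sketch of the programmatic equivalent, assuming a Scala driver (untested; the app name is a placeholder and the paths are abbreviated):
    
        import org.apache.spark.{SparkConf, SparkContext}
    
        // Abbreviated paths; substitute the full Zeppelin interpreter dirs from the warning above.
        val conf = new SparkConf()
          .setMaster("yarn-client")
          .setAppName("Zeppelin")  // placeholder app name
          .set("spark.driver.extraClassPath", "/home/zeppelin/incubator-zeppelin/interpreter/spark/*")
          .set("spark.executor.extraClassPath", "/home/zeppelin/incubator-zeppelin/interpreter/spark/*")
        val sc = new SparkContext(conf)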
    
    Hope this helps.
    
    Thanks,
    Ben
    
    > On Jun 3, 2016, at 10:14 AM, Felix Cheung <no...@github.com> wrote:
    > 
    > Can you check the log file?
    > 
    



> Support multiple SparkContexts in the same JVM
> ----------------------------------------------
>
>                 Key: SPARK-2243
>                 URL: https://issues.apache.org/jira/browse/SPARK-2243
>             Project: Spark
>          Issue Type: New Feature
>          Components: Block Manager, Spark Core
>    Affects Versions: 0.7.0, 1.0.0, 1.1.0
>            Reporter: Miguel Angel Fernandez Diaz
>
> We're developing a platform where we create several Spark contexts to carry out different calculations. Are there any restrictions on using several Spark contexts? We have two contexts, one for Spark calculations and another for Spark Streaming jobs. The following error arises when we first execute a Spark calculation and then, once that execution has finished, launch a Spark Streaming job:
> {code}
> 14/06/23 16:40:08 ERROR executor.Executor: Exception in task ID 0
> java.io.FileNotFoundException: http://172.19.0.215:47530/broadcast_0
> 	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1624)
> 	at org.apache.spark.broadcast.HttpBroadcast$.read(HttpBroadcast.scala:156)
> 	at org.apache.spark.broadcast.HttpBroadcast.readObject(HttpBroadcast.scala:56)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
> 	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
> 	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> 	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> 	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> 	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> 	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> 	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> 	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
> 	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:40)
> 	at org.apache.spark.scheduler.ResultTask$.deserializeInfo(ResultTask.scala:63)
> 	at org.apache.spark.scheduler.ResultTask.readExternal(ResultTask.scala:139)
> 	at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
> 	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
> 	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> 	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
> 	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:40)
> 	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:62)
> 	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:193)
> 	at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:45)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:176)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> 14/06/23 16:40:08 WARN scheduler.TaskSetManager: Lost TID 0 (task 0.0:0)
> 14/06/23 16:40:08 WARN scheduler.TaskSetManager: Loss was due to java.io.FileNotFoundException
> java.io.FileNotFoundException: http://172.19.0.215:47530/broadcast_0
> 	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1624)
> 	at org.apache.spark.broadcast.HttpBroadcast$.read(HttpBroadcast.scala:156)
> 	at org.apache.spark.broadcast.HttpBroadcast.readObject(HttpBroadcast.scala:56)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
> 	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
> 	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> 	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> 	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> 	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> 	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> 	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> 	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
> 	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:40)
> 	at org.apache.spark.scheduler.ResultTask$.deserializeInfo(ResultTask.scala:63)
> 	at org.apache.spark.scheduler.ResultTask.readExternal(ResultTask.scala:139)
> 	at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1837)
> 	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
> 	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> 	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
> 	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:40)
> 	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:62)
> 	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:193)
> 	at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:45)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:176)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> 14/06/23 16:40:08 ERROR scheduler.TaskSetManager: Task 0.0:0 failed 1 times; aborting job
> 14/06/23 16:40:08 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
> 14/06/23 16:40:08 INFO scheduler.DAGScheduler: Failed to run runJob at NetworkInputTracker.scala:182
> [WARNING] 
> org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed 1 times (most recent failure: Exception failure: java.io.FileNotFoundException: http://172.19.0.215:47530/broadcast_0)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> 	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1018)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
> 	at scala.Option.foreach(Option.scala:236)
> 	at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:604)
> 	at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
> 	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
> 	at akka.actor.ActorCell.invoke(ActorCell.scala:456)
> 	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
> 	at akka.dispatch.Mailbox.run(Mailbox.scala:219)
> 	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:385)
> 	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> 	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> 	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> 	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 14/06/23 16:40:09 INFO dstream.ForEachDStream: metadataCleanupDelay = 3600
> {code}
> So far, we are working on localhost. Any clue as to where this error is coming from? Is there any workaround?
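> For illustration, a minimal sketch of the usual workaround for the one-SparkContext-per-JVM restriction, assuming the streaming job can simply share the calculation's context (app name and workload are placeholders):
> {code}
> import org.apache.spark.{SparkConf, SparkContext}
> import org.apache.spark.streaming.{Seconds, StreamingContext}
> 
> val conf = new SparkConf().setMaster("local[*]").setAppName("shared-context")
> val sc = new SparkContext(conf)
> 
> // Batch calculation on the shared context.
> val total = sc.parallelize(1 to 100).sum()
> 
> // Reuse the same SparkContext for the streaming job instead of constructing a second one;
> // alternatively, call sc.stop() first and only then build a fresh context.
> val ssc = new StreamingContext(sc, Seconds(1))
> {code}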


