Posted to user@spark.apache.org by Shivansh Srivastava <sh...@knoldus.com> on 2016/11/07 07:15:08 UTC

Spark Exits with exception

This is the stack trace that I am getting while running the application:

    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 233 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 WARN TaskSetManager: Lost task 1.0 in stage 11.0 (TID
217, 10.178.149.243): java.util.NoSuchElementException: None.get
        at scala.None$.get(Option.scala:347)
        at scala.None$.get(Option.scala:345)
        at
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
        at
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
        at
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

    16/11/03 11:25:45 INFO TaskSetManager: Lost task 14.0 in stage 11.0
(TID 225) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 1]
    16/11/03 11:25:45 INFO TaskSetManager: Starting task 14.1 in stage 11.0
(TID 234, 10.178.149.243, partition 14, NODE_LOCAL, 8828 bytes)
    16/11/03 11:25:45 INFO TaskSetManager: Lost task 22.0 in stage 11.0
(TID 232) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 2]
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 234 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 INFO TaskSetManager: Starting task 22.1 in stage 11.0
(TID 235, 10.178.149.243, partition 22, NODE_LOCAL, 9066 bytes)
    16/11/03 11:25:45 INFO TaskSetManager: Lost task 24.0 in stage 11.0
(TID 233) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 3]
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 235 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 INFO TaskSetManager: Starting task 24.1 in stage 11.0
(TID 236, 10.178.149.243, partition 24, NODE_LOCAL, 9185 bytes)
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 236 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 INFO TaskSetManager: Lost task 22.1 in stage 11.0
(TID 235) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 4]
    16/11/03 11:25:45 INFO TaskSetManager: Starting task 22.2 in stage 11.0
(TID 237, 10.178.149.243, partition 22, NODE_LOCAL, 9066 bytes)
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 237 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 INFO TaskSetManager: Lost task 14.1 in stage 11.0
(TID 234) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 5]
    16/11/03 11:25:45 INFO TaskSetManager: Starting task 14.2 in stage 11.0
(TID 238, 10.178.149.243, partition 14, NODE_LOCAL, 8828 bytes)
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 238 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 INFO TaskSetManager: Lost task 24.1 in stage 11.0
(TID 236) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 6]
    16/11/03 11:25:45 INFO TaskSetManager: Starting task 24.2 in stage 11.0
(TID 239, 10.178.149.243, partition 24, NODE_LOCAL, 9185 bytes)
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 239 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 INFO TaskSetManager: Lost task 22.2 in stage 11.0
(TID 237) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 7]
    16/11/03 11:25:45 INFO TaskSetManager: Starting task 22.3 in stage 11.0
(TID 240, 10.178.149.243, partition 22, NODE_LOCAL, 9066 bytes)
    16/11/03 11:25:45 INFO TaskSetManager: Lost task 14.2 in stage 11.0
(TID 238) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 8]
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 240 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 INFO TaskSetManager: Starting task 14.3 in stage 11.0
(TID 241, 10.178.149.243, partition 14, NODE_LOCAL, 8828 bytes)
    16/11/03 11:25:45 INFO TaskSetManager: Lost task 24.2 in stage 11.0
(TID 239) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 9]
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 241 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 INFO TaskSetManager: Starting task 24.3 in stage 11.0
(TID 242, 10.178.149.243, partition 24, NODE_LOCAL, 9185 bytes)
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 242 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 INFO TaskSetManager: Lost task 22.3 in stage 11.0
(TID 240) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 10]
    16/11/03 11:25:45 ERROR TaskSetManager: Task 22 in stage 11.0 failed 4
times; aborting job
    16/11/03 11:25:45 INFO TaskSetManager: Starting task 0.0 in stage 12.0
(TID 243, 10.178.149.243, partition 0, NODE_LOCAL, 10016 bytes)
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 243 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 INFO TaskSetManager: Lost task 14.3 in stage 11.0
(TID 241) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 11]
    16/11/03 11:25:45 INFO TaskSchedulerImpl: Cancelling stage 12
    16/11/03 11:25:45 INFO TaskSchedulerImpl: Stage 12 was cancelled
    16/11/03 11:25:45 INFO TaskSetManager: Starting task 0.0 in stage 14.0
(TID 244, 10.178.149.243, partition 0, NODE_LOCAL, 7638 bytes)
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 244 on executor id: 4 hostname: 10.178.149.243.
    16/11/03 11:25:45 INFO TaskSetManager: Lost task 24.3 in stage 11.0
(TID 242) on executor 10.178.149.243: java.util.NoSuchElementException
(None.get) [duplicate 12]
    16/11/03 11:25:45 INFO DAGScheduler: ShuffleMapStage 12 (show at
RNFBackTagger.scala:97) failed in 0.112 s
    16/11/03 11:25:45 INFO TaskSchedulerImpl: Cancelling stage 14
    16/11/03 11:25:45 INFO TaskSchedulerImpl: Stage 14 was cancelled
    16/11/03 11:25:45 INFO DAGScheduler: ShuffleMapStage 14 (show at
RNFBackTagger.scala:97) failed in 0.104 s
    16/11/03 11:25:45 INFO TaskSchedulerImpl: Cancelling stage 11
    16/11/03 11:25:45 INFO TaskSchedulerImpl: Stage 11 was cancelled
    16/11/03 11:25:45 INFO DAGScheduler: ShuffleMapStage 11 (show at
RNFBackTagger.scala:97) failed in 0.126 s
    16/11/03 11:25:45 WARN TaskSetManager: Lost task 0.0 in stage 12.0 (TID
243, 10.178.149.243): java.util.NoSuchElementException: None.get
        at scala.None$.get(Option.scala:347)
        at scala.None$.get(Option.scala:345)
        at
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
        at
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
        at
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

    16/11/03 11:25:45 INFO DAGScheduler: Job 7 failed: show at
RNFBackTagger.scala:97, took 0.141681 s
    16/11/03 11:25:45 INFO TaskSchedulerImpl: Removed TaskSet 12.0, whose
tasks have all completed, from pool
    Exception in thread "main" org.apache.spark.SparkException: Job aborted
due to stage failure: Task 22 in stage 11.0 failed 4 times, most recent
failure: Lost task 22.3 in stage 11.0 (TID 240, 10.178.149.243):
java.util.NoSuchElementException: None.get
        at scala.None$.get(Option.scala:347)
        at scala.None$.get(Option.scala:345)
        at
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
        at
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
        at
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

    Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org
$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
        at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)
        at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437)
        at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437)
        at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
        at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
        at scala.Option.foreach(Option.scala:257)
        at
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
        at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659)
        at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618)
        at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1897)
        at
org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:347)
        at
org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)
        at
org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2183)
        at
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
        at
org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2532)
        at org.apache.spark.sql.Dataset.org
$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2182)
        at org.apache.spark.sql.Dataset.org
$apache$spark$sql$Dataset$$collect(Dataset.scala:2189)
        at
org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1925)
        at
org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1924)
        at
org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2562)
        at org.apache.spark.sql.Dataset.head(Dataset.scala:1924)
        at org.apache.spark.sql.Dataset.take(Dataset.scala:2139)
        at org.apache.spark.sql.Dataset.showString(Dataset.scala:239)
        at org.apache.spark.sql.Dataset.show(Dataset.scala:526)
        at com.knoldus.xml.RNFBackTagger$.main(RNFBackTagger.scala:97)
        at com.knoldus.xml.RNFBackTagger.main(RNFBackTagger.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
        at
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
        at
org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: java.util.NoSuchElementException: None.get
        at scala.None$.get(Option.scala:347)
        at scala.None$.get(Option.scala:345)
        at
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
        at
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
        at
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    16/11/03 11:25:45 WARN JobProgressListener: Task start for unknown
stage 12
    16/11/03 11:25:45 WARN TaskSetManager: Lost task 0.0 in stage 14.0 (TID
244, 10.178.149.243): java.util.NoSuchElementException: None.get
        at scala.None$.get(Option.scala:347)
        at scala.None$.get(Option.scala:345)
        at
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
        at
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
        at
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

    16/11/03 11:25:45 INFO TaskSchedulerImpl: Removed TaskSet 14.0, whose
tasks have all completed, from pool
    16/11/03 11:25:45 INFO SparkContext: Invoking stop() from shutdown hook
    16/11/03 11:25:45 WARN JobProgressListener: Task start for unknown
stage 14
    16/11/03 11:25:45 INFO SerialShutdownHooks: Successfully executed
shutdown hook: Clearing session cache for C* connector
    16/11/03 11:25:45 INFO TaskSetManager: Finished task 5.0 in stage 11.0
(TID 219) in 137 ms on 10.178.149.22 (1/35)
    16/11/03 11:25:45 INFO SparkUI: Stopped Spark web UI at
http://10.178.149.133:4040
    16/11/03 11:25:45 INFO StandaloneSchedulerBackend: Shutting down all
executors
    16/11/03 11:25:45 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Asking each executor to shut down
    16/11/03 11:25:45 ERROR TransportRequestHandler: Error while invoking
RpcHandler#receive() for one-way message.
    org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
        at
org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
        at
org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:132)
        at
org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:571)
        at
org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:179)
        at
org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:108)
        at
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
        at
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
        at
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
        at
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)
    16/11/03 11:25:45 ERROR TransportRequestHandler: Error while invoking
RpcHandler#receive() for one-way message.
    org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
        at
org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
        at
org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:132)
        at
org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:571)
        at
org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:179)
        at
org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:108)
        at
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
        at
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
        at
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
        at
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)
    16/11/03 11:25:45 INFO MapOutputTrackerMasterEndpoint:
MapOutputTrackerMasterEndpoint stopped!
    16/11/03 11:25:45 INFO MemoryStore: MemoryStore cleared
    16/11/03 11:25:45 INFO BlockManager: BlockManager stopped
    16/11/03 11:25:45 INFO BlockManagerMaster: BlockManagerMaster stopped
    16/11/03 11:25:45 INFO
OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:
OutputCommitCoordinator stopped!
    16/11/03 11:25:45 ERROR TransportRequestHandler: Error while invoking
RpcHandler#receive() for one-way message.
    org.apache.spark.rpc.RpcEnvStoppedException: RpcEnv already stopped.
        at
org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:150)
        at
org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:132)
        at
org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:571)
        at
org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:179)
        at
org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:108)
        at
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
        at
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
        at
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
        at
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)
    16/11/03 11:25:45 ERROR TransportRequestHandler: Error while invoking
RpcHandler#receive() for one-way message.
    org.apache.spark.rpc.RpcEnvStoppedException: RpcEnv already stopped.
        at
org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:150)
        at
org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:132)
        at
org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:571)
        at
org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:179)
        at
org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:108)
        at
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
        at
org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
        at
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
        at
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
        at
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
        at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)
    16/11/03 11:25:45 INFO SparkContext: Successfully stopped SparkContext
    16/11/03 11:25:45 INFO ShutdownHookManager: Shutdown hook called
    16/11/03 11:25:45 INFO ShutdownHookManager: Deleting directory
/tmp/spark-c52a6da9-5702-4128-9950-805d5f9dd75e

Sometimes the error trace changes, and it gives only this:
16/11/07 07:04:35 WARN TaskSetManager: Lost task 0.0 in stage 7.0 (TID 199,
10.178.149.243): java.util.NoSuchElementException: None.get
    at scala.None$.get(Option.scala:347)
    at scala.None$.get(Option.scala:345)
    at
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
    at
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

16/11/07 07:04:35 INFO TaskSetManager: Starting task 0.1 in stage 7.0 (TID
200, 10.178.149.243, partition 0, NODE_LOCAL, 5286 bytes)
16/11/07 07:04:35 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 200 on executor id: 5 hostname: 10.178.149.243.
16/11/07 07:04:35 INFO TaskSetManager: Lost task 0.1 in stage 7.0 (TID 200)
on executor 10.178.149.243: java.util.NoSuchElementException (None.get)
[duplicate 1]
16/11/07 07:04:35 INFO TaskSetManager: Starting task 0.2 in stage 7.0 (TID
201, 10.178.149.243, partition 0, NODE_LOCAL, 5286 bytes)
16/11/07 07:04:35 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 201 on executor id: 7 hostname: 10.178.149.243.
16/11/07 07:04:35 INFO TaskSetManager: Lost task 0.2 in stage 7.0 (TID 201)
on executor 10.178.149.243: java.util.NoSuchElementException (None.get)
[duplicate 2]
16/11/07 07:04:35 INFO TaskSetManager: Starting task 0.3 in stage 7.0 (TID
202, 10.178.149.243, partition 0, NODE_LOCAL, 5286 bytes)
16/11/07 07:04:35 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
Launching task 202 on executor id: 5 hostname: 10.178.149.243.
16/11/07 07:04:35 INFO TaskSetManager: Lost task 0.3 in stage 7.0 (TID 202)
on executor 10.178.149.243: java.util.NoSuchElementException (None.get)
[duplicate 3]
16/11/07 07:04:35 ERROR TaskSetManager: Task 0 in stage 7.0 failed 4 times;
aborting job
16/11/07 07:04:35 INFO TaskSchedulerImpl: Removed TaskSet 7.0, whose tasks
have all completed, from pool
16/11/07 07:04:35 INFO TaskSchedulerImpl: Cancelling stage 7
16/11/07 07:04:35 INFO DAGScheduler: ResultStage 7 (show at
RNFBackTagger.scala:90) failed in 0.105 s
16/11/07 07:04:35 INFO DAGScheduler: Job 2 failed: show at
RNFBackTagger.scala:90, took 40.037558 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due
to stage failure: Task 0 in stage 7.0 failed 4 times, most recent failure:
Lost task 0.3 in stage 7.0 (TID 202, 10.178.149.243):
java.util.NoSuchElementException: None.get
    at scala.None$.get(Option.scala:347)
    at scala.None$.get(Option.scala:345)
    at
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
    at
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org
$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437)
    at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
    at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
    at scala.Option.foreach(Option.scala:257)
    at
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
    at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659)
    at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618)
    at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1897)
    at
org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:347)
    at
org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)
    at
org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2183)
    at
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2532)
    at org.apache.spark.sql.Dataset.org
$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2182)
    at org.apache.spark.sql.Dataset.org
$apache$spark$sql$Dataset$$collect(Dataset.scala:2189)
    at
org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1925)
    at
org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1924)
    at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2562)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:1924)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2139)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:239)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:526)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:486)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:495)
    at com.knoldus.xml.RNFBackTagger$.main(RNFBackTagger.scala:90)
    at com.knoldus.xml.RNFBackTagger.main(RNFBackTagger.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
    at
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.util.NoSuchElementException: None.get
    at scala.None$.get(Option.scala:347)
    at scala.None$.get(Option.scala:345)
    at
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
    at
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)



Earlier I was not able to pinpoint the problem!
Then I tried the approach of removing unnecessary code!

Then I found out that the problem lies in this:

    val groupedDF = selectedDF.groupBy("id").agg(collect_list("value"))
    groupedDF.show


Because if I try to show selectedDF, it displays the correct result!
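
For reference, here is a minimal, self-contained sketch of the failing
pattern (the column names "id" and "value" come from the snippet above;
the sample data, the object name, and the app name are made up for
illustration):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.collect_list

    object GroupByRepro {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("GroupByRepro").getOrCreate()
        import spark.implicits._

        // Made-up sample data with the same shape as selectedDF
        val selectedDF = Seq((1, "a"), (1, "b"), (2, "c")).toDF("id", "value")
        selectedDF.show()  // works, both locally and on the cluster

        // The aggregation whose show triggers the None.get failure on the cluster
        val groupedDF = selectedDF.groupBy("id").agg(collect_list("value"))
        groupedDF.show()

        spark.stop()
      }
    }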


The Spark version that I am using is 2.0.0! Please help me out and let me
know what the problem is.

Link to Code is :
https://gist.github.com/shiv4nsh/0c3f62e3afd95634a6061b405c774582

The show on line 19 prints, and the show on line 28 throws this exception.


Server Configuration: I have Spark 2.0 running on an 8-core worker with
10 GB of memory, on CentOS.

Script for launching application:


    ./bin/spark-submit --class com.knoldus.Application \
      /root/code/newCode/project1/target/deployable.jar
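
For reference, a variant of the same command with the master and executor
resources spelled out explicitly would look like the sketch below (the
master URL is a placeholder, and the resource values are illustrative; in
our actual setup the master is not passed on the command line):

    ./bin/spark-submit \
      --class com.knoldus.Application \
      --master spark://<master-host>:7077 \
      --executor-memory 4g \
      --total-executor-cores 8 \
      /root/code/newCode/project1/target/deployable.jar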

Any help is appreciated!

**Note:** The code works fine in local mode. This error is thrown only when
I try to run it on the cluster.


I have already asked this question on Stack Overflow:
http://stackoverflow.com/questions/40400424/spark-exits-with-exception?noredirect=1#comment68095238_40400424

-- 
*Best Regards | Shivansh*
*Software Consultant*
*Knoldus Software LLP*

*India - US - Canada*
* Twitter <http://www.twitter.com/shiv4nsh> | FB
<http://www.facebook.com/xeruo> | LinkedIn
<https://in.linkedin.com/pub/shivansh-srivastava/3a/204/251>*

Re: Spark Exits with exception

Posted by Shivansh Srivastava <sh...@knoldus.com>.
Can someone help me out and tell me what I am actually doing wrong?

The Spark UI shows that multiple apps are getting submitted, even though I
am submitting only a single application to Spark, and all the applications
are in the WAITING state except the main one!



> AbstractChannelHandlerContext.java:294)
>         at org.apache.spark.network.util.TransportFrameDecoder.
> channelRead(TransportFrameDecoder.java:85)
>         at io.netty.channel.AbstractChannelHandlerContext.
> invokeChannelRead(AbstractChannelHandlerContext.java:308)
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(
> AbstractChannelHandlerContext.java:294)
>         at io.netty.channel.DefaultChannelPipeline.fireChannelRead(
> DefaultChannelPipeline.java:846)
>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(
> AbstractNioByteChannel.java:131)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(
> NioEventLoop.java:511)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(
> NioEventLoop.java:468)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(
> NioEventLoop.java:382)
>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>         at io.netty.util.concurrent.SingleThreadEventExecutor$2.
> run(SingleThreadEventExecutor.java:111)
>         at java.lang.Thread.run(Thread.java:745)
>     16/11/03 11:25:45 INFO SparkContext: Successfully stopped SparkContext
>     16/11/03 11:25:45 INFO ShutdownHookManager: Shutdown hook called
>     16/11/03 11:25:45 INFO ShutdownHookManager: Deleting directory
> /tmp/spark-c52a6da9-5702-4128-9950-805d5f9dd75e
>
> Sometimes the error trace changes, and I get only this:
> 16/11/07 07:04:35 WARN TaskSetManager: Lost task 0.0 in stage 7.0 (TID
> 199, 10.178.149.243): java.util.NoSuchElementException: None.get
>     at scala.None$.get(Option.scala:347)
>     at scala.None$.get(Option.scala:345)
>     at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(
> BlockInfoManager.scala:343)
>     at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(
> BlockManager.scala:644)
>     at org.apache.spark.executor.Executor$TaskRunner.run(
> Executor.scala:281)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
>
> 16/11/07 07:04:35 INFO TaskSetManager: Starting task 0.1 in stage 7.0 (TID
> 200, 10.178.149.243, partition 0, NODE_LOCAL, 5286 bytes)
> 16/11/07 07:04:35 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
> Launching task 200 on executor id: 5 hostname: 10.178.149.243.
> 16/11/07 07:04:35 INFO TaskSetManager: Lost task 0.1 in stage 7.0 (TID
> 200) on executor 10.178.149.243: java.util.NoSuchElementException
> (None.get) [duplicate 1]
> 16/11/07 07:04:35 INFO TaskSetManager: Starting task 0.2 in stage 7.0 (TID
> 201, 10.178.149.243, partition 0, NODE_LOCAL, 5286 bytes)
> 16/11/07 07:04:35 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
> Launching task 201 on executor id: 7 hostname: 10.178.149.243.
> 16/11/07 07:04:35 INFO TaskSetManager: Lost task 0.2 in stage 7.0 (TID
> 201) on executor 10.178.149.243: java.util.NoSuchElementException
> (None.get) [duplicate 2]
> 16/11/07 07:04:35 INFO TaskSetManager: Starting task 0.3 in stage 7.0 (TID
> 202, 10.178.149.243, partition 0, NODE_LOCAL, 5286 bytes)
> 16/11/07 07:04:35 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:
> Launching task 202 on executor id: 5 hostname: 10.178.149.243.
> 16/11/07 07:04:35 INFO TaskSetManager: Lost task 0.3 in stage 7.0 (TID
> 202) on executor 10.178.149.243: java.util.NoSuchElementException
> (None.get) [duplicate 3]
> 16/11/07 07:04:35 ERROR TaskSetManager: Task 0 in stage 7.0 failed 4
> times; aborting job
> 16/11/07 07:04:35 INFO TaskSchedulerImpl: Removed TaskSet 7.0, whose tasks
> have all completed, from pool
> 16/11/07 07:04:35 INFO TaskSchedulerImpl: Cancelling stage 7
> 16/11/07 07:04:35 INFO DAGScheduler: ResultStage 7 (show at
> RNFBackTagger.scala:90) failed in 0.105 s
> 16/11/07 07:04:35 INFO DAGScheduler: Job 2 failed: show at
> RNFBackTagger.scala:90, took 40.037558 s
> Exception in thread "main" org.apache.spark.SparkException: Job aborted
> due to stage failure: Task 0 in stage 7.0 failed 4 times, most recent
> failure: Lost task 0.3 in stage 7.0 (TID 202, 10.178.149.243): java.util.NoSuchElementException:
> None.get
>     at scala.None$.get(Option.scala:347)
>     at scala.None$.get(Option.scala:345)
>     at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(
> BlockInfoManager.scala:343)
>     at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(
> BlockManager.scala:644)
>     at org.apache.spark.executor.Executor$TaskRunner.run(
> Executor.scala:281)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
>
> Driver stacktrace:
>     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$
> scheduler$DAGScheduler$$failJobAndIndependentStages(
> DAGScheduler.scala:1450)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$
> abortStage$1.apply(DAGScheduler.scala:1438)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$
> abortStage$1.apply(DAGScheduler.scala:1437)
>     at scala.collection.mutable.ResizableArray$class.foreach(
> ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>     at org.apache.spark.scheduler.DAGScheduler.abortStage(
> DAGScheduler.scala:1437)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$
> handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$
> handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
>     at scala.Option.foreach(Option.scala:257)
>     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(
> DAGScheduler.scala:811)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.
> doOnReceive(DAGScheduler.scala:1659)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.
> onReceive(DAGScheduler.scala:1618)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.
> onReceive(DAGScheduler.scala:1607)
>     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>     at org.apache.spark.scheduler.DAGScheduler.runJob(
> DAGScheduler.scala:632)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1897)
>     at org.apache.spark.sql.execution.SparkPlan.
> executeTake(SparkPlan.scala:347)
>     at org.apache.spark.sql.execution.CollectLimitExec.
> executeCollect(limit.scala:39)
>     at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$
> Dataset$$execute$1$1.apply(Dataset.scala:2183)
>     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(
> SQLExecution.scala:57)
>     at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2532)
>     at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$
> execute$1(Dataset.scala:2182)
>     at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$
> collect(Dataset.scala:2189)
>     at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.
> scala:1925)
>     at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.
> scala:1924)
>     at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2562)
>     at org.apache.spark.sql.Dataset.head(Dataset.scala:1924)
>     at org.apache.spark.sql.Dataset.take(Dataset.scala:2139)
>     at org.apache.spark.sql.Dataset.showString(Dataset.scala:239)
>     at org.apache.spark.sql.Dataset.show(Dataset.scala:526)
>     at org.apache.spark.sql.Dataset.show(Dataset.scala:486)
>     at org.apache.spark.sql.Dataset.show(Dataset.scala:495)
>     at com.knoldus.xml.RNFBackTagger$.main(RNFBackTagger.scala:90)
>     at com.knoldus.xml.RNFBackTagger.main(RNFBackTagger.scala)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
>     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:185)
>     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.util.NoSuchElementException: None.get
>     at scala.None$.get(Option.scala:347)
>     at scala.None$.get(Option.scala:345)
>     at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(
> BlockInfoManager.scala:343)
>     at org.apache.spark.storage.BlockManager.releaseAllLocksForTask(
> BlockManager.scala:644)
>     at org.apache.spark.executor.Executor$TaskRunner.run(
> Executor.scala:281)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
>
>
>
> At first I was not able to pinpoint the problem, so I tried removing
> unnecessary code to narrow it down.
>
> That is how I found that the problem lies in this snippet:
>
>     val groupedDF = selectedDF.groupBy("id").agg(collect_list("value"))
>     groupedDF.show
>
>
> By contrast, if I show selectedDF directly, it displays the correct result!
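>
> In case it helps anyone reproduce the shape of this without the full job,
> here is a minimal, self-contained sketch. The column names ("id", "value")
> match the snippet above, but the sample data and the object name are made
> up, since the real selectedDF is built from our XML input:
>
>     import org.apache.spark.sql.SparkSession
>     import org.apache.spark.sql.functions.collect_list
>
>     object GroupByRepro {
>       def main(args: Array[String]): Unit = {
>         val spark = SparkSession.builder()
>           .appName("groupBy-collect_list-repro")
>           .getOrCreate()
>         import spark.implicits._
>
>         // Stand-in for selectedDF; the real one comes from XML parsing.
>         val selectedDF = Seq((1, "a"), (1, "b"), (2, "c")).toDF("id", "value")
>         selectedDF.show()   // prints fine, even on the cluster
>
>         // Grouping the values per id; the show below is the call that
>         // fails with java.util.NoSuchElementException: None.get.
>         val groupedDF = selectedDF.groupBy("id").agg(collect_list("value"))
>         groupedDF.show()
>
>         spark.stop()
>       }
>     }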
>
>
> The Spark version I am using is 2.0.0. Please help me out and let me
> know what the problem is.
>
> The link to the code is:
> https://gist.github.com/shiv4nsh/0c3f62e3afd95634a6061b405c774582
>
> The show on line 19 prints fine, while the show on line 28 throws this exception.
>
>
> Server configuration: Spark 2.0 running on an 8-core worker with 10 GB of
> memory, on CentOS.
>
> Script for launching the application:
>
>
>     ./bin/spark-submit --class com.knoldus.Application \
>       /root/code/newCode/project1/target/deployable.jar
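>
> I am not passing any explicit cluster settings there. For reference, a
> variant with the master URL and resources spelled out would look like the
> following; the host and the sizes are placeholders, not what I actually
> run:
>
>     ./bin/spark-submit --class com.knoldus.Application \
>       --master spark://<master-host>:7077 \
>       --executor-memory 8g \
>       --total-executor-cores 8 \
>       /root/code/newCode/project1/target/deployable.jar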
>
> Any help is appreciated!
>
> **Note:** The code works fine in local mode. This error is thrown only
> when I try to run it on the cluster.
>
>
> I have already asked this question on Stack Overflow here:
> http://stackoverflow.com/questions/40400424/spark-exits-with-exception?
> noredirect=1#comment68095238_40400424
>
>



-- 
*Best Regards | Shivansh*
*Software Consultant*
*Knoldus Software LLP*

*India - US - Canada*
* Twitter <http://www.twitter.com/shiv4nsh> | FB
<http://www.facebook.com/xeruo> | LinkedIn
<https://in.linkedin.com/pub/shivansh-srivastava/3a/204/251>*