Posted to user@spark.apache.org by Eduardo Costa Alfaia <e....@unibs.it> on 2014/03/10 17:02:12 UTC

Log Analysis

Hi Guys,
Could anyone help me understand this piece of log, in particular the part in red (the exception at the end)? Why has this happened?

Thanks

14/03/10 16:55:20 INFO SparkContext: Starting job: first at 
NetworkWordCount.scala:87
14/03/10 16:55:20 INFO JobScheduler: Finished job streaming job 
1394466892000 ms.0 from job set of time 1394466892000 ms
14/03/10 16:55:20 INFO JobScheduler: Total delay: 28.537 s for time 
1394466892000 ms (execution: 4.479 s)
14/03/10 16:55:20 INFO JobScheduler: Starting job streaming job 
1394466893000 ms.0 from job set of time 1394466893000 ms
14/03/10 16:55:20 INFO JobGenerator: Checkpointing graph for time 
1394466892000 ms
14/03/10 16:55:20 INFO DStreamGraph: Updating checkpoint data for time 
1394466892000 ms
14/03/10 16:55:20 INFO DStreamGraph: Updated checkpoint data for time 
1394466892000 ms
14/03/10 16:55:20 INFO CheckpointWriter: Saving checkpoint for time 
1394466892000 ms to file 
'hdfs://computer8:54310/user/root/INPUT/checkpoint-1394466892000'
14/03/10 16:55:20 INFO DAGScheduler: Registering RDD 496 (combineByKey 
at ShuffledDStream.scala:42)
14/03/10 16:55:20 INFO DAGScheduler: Got job 39 (first at 
NetworkWordCount.scala:87) with 1 output partitions (allowLocal=true)
14/03/10 16:55:20 INFO DAGScheduler: Final stage: Stage 77 (first at 
NetworkWordCount.scala:87)
14/03/10 16:55:20 INFO DAGScheduler: Parents of final stage: List(Stage 78)
14/03/10 16:55:20 INFO DAGScheduler: Missing parents: List(Stage 78)
14/03/10 16:55:20 INFO BlockManagerMasterActor$BlockManagerInfo: Removed 
input-1-1394466782400 on computer10.ant-net:34062 in memory (size: 5.9 
MB, free: 502.2 MB)
14/03/10 16:55:20 INFO DAGScheduler: Submitting Stage 78 
(MapPartitionsRDD[496] at combineByKey at ShuffledDStream.scala:42), 
which has no missing parents
14/03/10 16:55:20 INFO BlockManagerMasterActor$BlockManagerInfo: Added 
input-1-1394466816600 in memory on computer10.ant-net:34062 (size: 4.4 
MB, free: 497.8 MB)
14/03/10 16:55:20 INFO DAGScheduler: Submitting 15 missing tasks from 
Stage 78 (MapPartitionsRDD[496] at combineByKey at ShuffledDStream.scala:42)
14/03/10 16:55:20 INFO TaskSchedulerImpl: Adding task set 78.0 with 15 tasks
14/03/10 16:55:20 INFO TaskSetManager: Starting task 78.0:9 as TID 539 
on executor 2: computer1.ant-net (PROCESS_LOCAL)
14/03/10 16:55:20 INFO TaskSetManager: Serialized task 78.0:9 as 4144 
bytes in 1 ms
14/03/10 16:55:20 INFO TaskSetManager: Starting task 78.0:10 as TID 540 
on executor 1: computer10.ant-net (PROCESS_LOCAL)
14/03/10 16:55:20 INFO TaskSetManager: Serialized task 78.0:10 as 4144 
bytes in 0 ms
14/03/10 16:55:20 INFO TaskSetManager: Starting task 78.0:11 as TID 541 
on executor 0: computer11.ant-net (PROCESS_LOCAL)
14/03/10 16:55:20 INFO TaskSetManager: Serialized task 78.0:11 as 4144 
bytes in 0 ms
14/03/10 16:55:20 INFO BlockManagerMasterActor$BlockManagerInfo: Removed 
input-0-1394466874200 on computer1.ant-net:51406 in memory (size: 2.9 
MB, free: 460.0 MB)
14/03/10 16:55:20 INFO BlockManagerMasterActor$BlockManagerInfo: Removed 
input-0-1394466874400 on computer1.ant-net:51406 in memory (size: 4.1 
MB, free: 468.2 MB)
14/03/10 16:55:20 INFO TaskSetManager: Starting task 78.0:12 as TID 542 
on executor 1: computer10.ant-net (PROCESS_LOCAL)
14/03/10 16:55:20 INFO TaskSetManager: Serialized task 78.0:12 as 4144 
bytes in 1 ms
14/03/10 16:55:20 WARN TaskSetManager: Lost TID 540 (task 78.0:10)
14/03/10 16:55:20 INFO CheckpointWriter: Deleting 
hdfs://computer8:54310/user/root/INPUT/checkpoint-1394466892000
14/03/10 16:55:20 INFO CheckpointWriter: Checkpoint for time 
1394466892000 ms saved to file 
'hdfs://computer8:54310/user/root/INPUT/checkpoint-1394466892000', took 
3633 bytes and 93 ms
14/03/10 16:55:20 INFO DStreamGraph: Clearing checkpoint data for time 
1394466892000 ms
14/03/10 16:55:20 INFO DStreamGraph: Cleared checkpoint data for time 
1394466892000 ms
14/03/10 16:55:20 INFO BlockManagerMasterActor$BlockManagerInfo: Removed 
input-2-1394466789000 on computer11.ant-net:58332 in memory (size: 3.9 
MB, free: 536.0 MB)
14/03/10 16:55:20 WARN TaskSetManager: Loss was due to java.lang.Exception
java.lang.Exception: Could not compute split, block input-2-1394466794200 not found
     at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:45)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
     at org.apache.spark.rdd.UnionPartition.iterator(UnionRDD.scala:32)
     at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:72)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
     at org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
     at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
     at org.apache.spark.rdd.UnionPartition.iterator(UnionRDD.scala:32)
     at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:72)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
     at org.apache.spark.scheduler.Task.run(Task.scala:53)
     at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
     at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:724)


-- 
Privacy Notice: http://www.unibs.it/node/8155