Posted to user@spark.apache.org by pradeep_s <sr...@gmail.com> on 2014/04/30 00:16:19 UTC

Re: Spark cluster standalone setup memory issue

I'm also seeing memory-related log messages towards the end of the run:
14/04/29 15:07:54 INFO MemoryStore: ensureFreeSpace(138763) called with
curMem=0, maxMem=1116418867
14/04/29 15:07:54 INFO MemoryStore: Block broadcast_0 stored as values to
memory (estimated size 135.5 KB, free 1064.6 MB)
14/04/29 15:07:54 INFO FileInputFormat: Total input paths to process : 1
14/04/29 15:07:54 INFO SparkContext: Starting job: count at SparkPOC.java:18
14/04/29 15:07:54 INFO DAGScheduler: Got job 0 (count at SparkPOC.java:18)
with 2 output partitions (allowLocal=false)
14/04/29 15:07:54 INFO DAGScheduler: Final stage: Stage 0 (count at
SparkPOC.java:18)
14/04/29 15:07:54 INFO DAGScheduler: Parents of final stage: List()
14/04/29 15:07:54 INFO DAGScheduler: Missing parents: List()
14/04/29 15:07:54 INFO DAGScheduler: Submitting Stage 0 (FilteredRDD[2] at
filter at SparkPOC.java:16), which has no missing parents
14/04/29 15:07:54 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0
(FilteredRDD[2] at filter at SparkPOC.java:16)
14/04/29 15:07:54 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
14/04/29 15:08:09 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory
Spark UI screenshot: <http://apache-spark-user-list.1001560.n3.nabble.com/file/n5065/sparkui.png>
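For context, here is a minimal sketch of what a driver like SparkPOC.java could
look like with the standalone master and memory settings made explicit. The
master URL, memory values, input path, and filter condition below are
assumptions for illustration, not taken from my actual code:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class SparkPOC {
    public static void main(String[] args) {
        // Master URL, memory, and core settings are assumed values -- adjust
        // to what your standalone workers actually have available.
        SparkConf conf = new SparkConf()
                .setAppName("SparkPOC")
                .setMaster("spark://master-host:7077")   // assumed master URL
                .set("spark.executor.memory", "512m")    // must fit within a worker's free memory
                .set("spark.cores.max", "2");            // cap on cores requested from the cluster

        JavaSparkContext sc = new JavaSparkContext(conf);

        // Assumed input path; roughly mirrors the filter (SparkPOC.java:16)
        // and count (SparkPOC.java:18) visible in the log above.
        JavaRDD<String> lines = sc.textFile("/path/to/input.txt");
        JavaRDD<String> matched = lines.filter(new Function<String, Boolean>() {
            public Boolean call(String line) {
                return line.contains("ERROR");           // assumed filter condition
            }
        });
        System.out.println("Matched lines: " + matched.count());

        sc.stop();
    }
}

If spark.executor.memory or spark.cores.max asks for more than the registered
workers can offer, no executors are launched and the scheduler keeps logging
the "Initial job has not accepted any resources" warning shown above.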



