Posted to user@spark.apache.org by Ed Sweeney <ed...@falkonry.com> on 2014/07/25 17:08:42 UTC

Initial job has not accepted any resources (but workers are in UI)

Hi all,

Amazon Linux on AWS, Spark 1.0.1, reading a file.

The UI shows there are workers, and it shows this app context with the 2
tasks waiting.  All the hostnames resolve properly, so I am guessing
the message is accurate and the workers won't accept the job for
memory reasons.

What params do I tweak to know for sure?

Thanks for any help, -Ed

ps. Most of the searches for this error turn out to be about workers not
connecting, but our instance's UI shows the workers.
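
A sketch of the kind of submit command and params I mean (master host,
jar name, and values here are placeholders, not our actual setup):

```shell
# Standalone-mode settings that usually decide whether an app gets
# resources (Spark 1.0.x). Illustrative values only.

# spark.executor.memory must fit inside the memory each worker
# advertises in the master UI (minus what other running apps hold).
# spark.cores.max caps the total cores the app may claim.
spark-submit \
  --master spark://master-host:7077 \
  --conf spark.executor.memory=512m \
  --conf spark.cores.max=6 \
  your-app.jar
```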

46692 [pool-10-thread-1] INFO org.apache.hadoop.mapred.FileInputFormat - Total input paths to process : 1
46709 [pool-10-thread-1] INFO org.apache.spark.SparkContext - Starting job: count at DListRDD.scala:19
46728 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Got job 0 (count at DListRDD.scala:19) with 2 output partitions (allowLocal=false)
46728 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 0(count at DListRDD.scala:19)
46729 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
46753 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
46764 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 0 (FilteredRDD[6] at filter at DListRDD.scala:13), which has no missing parents
46765 [pool-38-thread-1] INFO org.apache.hadoop.mapred.FileInputFormat - Total input paths to process : 1
46772 [pool-38-thread-1] INFO org.apache.spark.SparkContext - Starting job: count at DListRDD.scala:19
46832 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 0 (FilteredRDD[6] at filter at DListRDD.scala:13)
46834 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 0.0 with 2 tasks
46845 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.FairSchedulableBuilder - Added task set TaskSet_0 tasks to pool default
46852 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Got job 1 (count at DListRDD.scala:19) with 2 output partitions (allowLocal=false)
46852 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Final stage: Stage 1(count at DListRDD.scala:19)
46853 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Parents of final stage: List()
46855 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Missing parents: List()
46856 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Submitting Stage 1 (FilteredRDD[6] at filter at DListRDD.scala:13), which has no missing parents
46860 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.DAGScheduler - Submitting 2 missing tasks from Stage 1 (FilteredRDD[6] at filter at DListRDD.scala:13)
46860 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Adding task set 1.0 with 2 tasks
46861 [spark-akka.actor.default-dispatcher-5] INFO org.apache.spark.scheduler.FairSchedulableBuilder - Added task set TaskSet_1 tasks to pool default
2014-07-25 01:25:09,616 [Thread-2] DEBUG falkonry.commons.service.ServiceHandler - Listening...
61847 [Timer-0] WARN org.apache.spark.scheduler.TaskSchedulerImpl - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

Re: Initial job has not accepted any resources (but workers are in UI)

Posted by Navicore <ed...@falkonry.com>.
Solution: opened all ports on the EC2 machine that the driver was running on.
Still need to narrow down which ports Akka wants... but the issue is solved.
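
For anyone hitting the same thing: rather than leaving everything open,
Spark's otherwise-random listening ports can be pinned so the security
group stays narrow. A sketch only — these spark.*.port settings were
added over the course of the 1.x line and may not all exist in 1.0.1,
and the port numbers are arbitrary examples:

```shell
# Pin the driver-side ports that executors connect back to, then allow
# just these ports in the EC2 security group instead of all of them.
spark-submit \
  --master spark://master-host:7077 \
  --conf spark.driver.port=51000 \
  --conf spark.fileserver.port=51001 \
  --conf spark.broadcast.port=51002 \
  --conf spark.blockManager.port=51004 \
  your-app.jar
```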


Re: Initial job has not accepted any resources (but workers are in UI)

Posted by Navicore <ed...@falkonry.com>.
thx for the reply,

the UI says my application has cores and memory assigned:

ID:               app-20140725164107-0001
Name:             SectionsAndSeamsPipeline
Cores:            6
Memory per Node:  512.0 MB
Submitted Time:   2014/07/25 16:41:07
User:             tercel
State:            RUNNING
Duration:         21 s
