Posted to mapreduce-user@hadoop.apache.org by arun k <ar...@gmail.com> on 2011/09/16 05:43:40 UTC
Remote exception and IO exception when running a job with capacity scheduler
Hi!
I have set up hadoop-0.20.203 in local mode as per
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
I have run the wordcount example job, and all daemons are running.
Then, to run this job with the capacity scheduler, I did the following:
1. Added these properties (name = value):
   mapred.jobtracker.taskScheduler = org.apache.hadoop.mapred.CapacityTaskScheduler
   mapred.queue.names = myqueue1,myqueue2
   mapred.capacity-scheduler.queue.myqueue1.capacity = 25
   mapred.capacity-scheduler.queue.myqueue1.capacity = 75
   Already-present property:
   mapred.job.tracker = localhost:54311
2. ${HADOOP_HOME}$ bin/stop-all.sh
3. ${HADOOP_HOME}$ bin/start-all.sh
4. $ jps shows all daemons
5. ${HADOOP_HOME}$ bin/hadoop jar hadoop*examples*.jar wordcount -Dmapred.job.queue.name=myqueue1 /user/hduser/wcinput /user/hduser/wcoutput
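For reference, the properties from step 1 in XML form would look roughly like this (a sketch; I am assuming the scheduler properties live in conf/mapred-site.xml and the capacity entries in conf/capacity-scheduler.xml, and the two myqueue1 capacity entries are reproduced exactly as I have them):

```xml
<!-- conf/mapred-site.xml (assumed location) -->
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
</property>
<property>
  <name>mapred.queue.names</name>
  <value>myqueue1,myqueue2</value>
</property>

<!-- conf/capacity-scheduler.xml (assumed location) -->
<property>
  <name>mapred.capacity-scheduler.queue.myqueue1.capacity</name>
  <value>25</value>
</property>
<property>
  <name>mapred.capacity-scheduler.queue.myqueue1.capacity</name>
  <value>75</value>
</property>
```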
I get the error:
java.io.IOException: Call to localhost/127.0.0.1:54311 failed on local
exception: java.io.IOException: Connection reset by peer
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1065)
at org.apache.hadoop.ipc.Client.call(Client.java:1033)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
...................
When I run:
$ jps
32463 NameNode
32763 SecondaryNameNode
32611 DataNode
931 Jps
The jobtracker log shows:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2011-09-16 00:21:42,012 INFO org.apache.hadoop.mapred.JobTracker: Cleaning
up the system directory
2011-09-16 00:21:42,014 INFO org.apache.hadoop.mapred.JobTracker: problem
cleaning system directory:
hdfs://localhost:54310/app203/hadoop203/tmp/mapred/system
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
/app203/hadoop203/tmp/mapred/system. Name node is in safe mode.
The ratio of reported blocks 1.0000 has reached the threshold 0.9990. Safe
mode will be turned off automatically in 6 seconds.
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1851)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1831)
2011-09-16 00:21:52,321 FATAL org.apache.hadoop.mapred.JobTracker:
java.io.IOException: Queue 'myqueue1' doesn't have configured capacity!
at
org.apache.hadoop.mapred.CapacityTaskScheduler.parseQueues(CapacityTaskScheduler.java:905)
at
org.apache.hadoop.mapred.CapacityTaskScheduler.start(CapacityTaskScheduler.java:822)
at
org.apache.hadoop.mapred.JobTracker.offerService(JobTracker.java:2563)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4957)
2011-09-16 00:21:52,322 INFO org.apache.hadoop.mapred.JobTracker:
SHUTDOWN_MSG:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
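The safe-mode message above looks transient (it says safe mode will turn off in 6 seconds). To check whether safe mode is really the issue, the stock dfsadmin tool in 0.20 can report and wait out safe mode; a sketch of the commands, run from ${HADOOP_HOME}:

```
# Report whether the NameNode is currently in safe mode
bin/hadoop dfsadmin -safemode get

# Block until the NameNode leaves safe mode, then return
bin/hadoop dfsadmin -safemode wait
```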
Even if I submit the job to "myqueue2", I see the same error about "myqueue1":
2011-09-16 00:21:52,321 FATAL org.apache.hadoop.mapred.JobTracker:
java.io.IOException: Queue 'myqueue1' doesn't have configured capacity!
Thanks,
Arun