Posted to common-user@hadoop.apache.org by ursbrbalaji <ur...@gmail.com> on 2011/02/08 11:24:35 UTC

Re: no jobtracker to stop,no namenode to stop

Hi Prabhu,

I am facing exactly the same problem. I too followed the steps in the link
below.

Please let me know which configuration file was modified and what were the
changes.

Thanks,
Balaji


-- 
View this message in context: http://hadoop-common.472056.n3.nabble.com/no-jobtracker-to-stop-no-namenode-to-stop-tp34874p2450308.html
Sent from the Users mailing list archive at Nabble.com.

Re: no jobtracker to stop,no namenode to stop

Posted by NJain <ni...@gmail.com>.
Hey Nikhil,

I just tried what you suggested, and yes, there are files and folders in
c:/Hadoop/name (folders: current, image, previous.checkpoint, in_use.lock).
I also tried with the firewall disabled.

One more thing: on the JobTracker UI, when I click the '0' under the Nodes
column (ref: my last post), I get the following message:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
localhost Hadoop Machine List - Active Task Trackers
There are currently no known active Task Trackers.
==================================================================================================

Does this mean the task trackers are not starting?

I get the following message on start-all.sh:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
$ start-all.sh
starting namenode, logging to
/cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-namenode-XX.out
localhost: starting datanode, logging to
/cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-datanode-XX.out
localhost: starting secondarynamenode, logging to
/cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-secondarynamenode-XX.out
starting jobtracker, logging to
/cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-jobtracker-XX.out
*localhost: starting tasktracker, logging to
/cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-tasktracker-XX.out
*
==================================================================================================


when I
cat /cygdrive/c/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-Nitesh-tasktracker-XX.out,
I get:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ulimit -a for user Nitesh
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 8
stack size              (kbytes, -s) 2023
cpu time               (seconds, -t) unlimited
max user processes              (-u) 256
virtual memory          (kbytes, -v) unlimited
==================================================================================================

but when I run stop-all.sh, I get:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
stopping jobtracker
*localhost: no tasktracker to stop*
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode
==================================================================================================

Do you know how I can verify that the task trackers are starting correctly?
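One way to check is to look at the daemon's .log file rather than the .out file: the .out file only captures stdout (the ulimit dump above), while startup errors go to the corresponding .log written by log4j. A minimal sketch, assuming the Hadoop 1.x log layout; the directory, file name, and the sample FATAL line below are illustrative, not taken from this thread:

```shell
# Hadoop 1.x writes two files per daemon: *.out (stdout, e.g. the ulimit
# dump) and *.log (log4j output, where real startup errors land).
LOG_DIR=./logs   # assumption: point this at your HADOOP_LOG_DIR
mkdir -p "$LOG_DIR"

# Illustrative sample of what a failing tasktracker .log might contain:
cat > "$LOG_DIR/hadoop-demo-tasktracker-demo.log" <<'EOF'
2013-08-30 11:39:26,000 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG: Starting TaskTracker
2013-08-30 11:39:27,000 FATAL org.apache.hadoop.mapred.TaskTracker: java.net.BindException: Address already in use
EOF

# Scan every tasktracker log for fatal errors; `jps` should also list a
# TaskTracker process whenever the daemon is actually up.
grep -E 'FATAL|ERROR' "$LOG_DIR"/hadoop-*-tasktracker-*.log
```

If `jps` shows no TaskTracker and the .log ends in a FATAL line, the daemon died right after start-all.sh launched it, which would also explain the later "no tasktracker to stop".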

Thanks,
Nitesh


On Fri, Aug 30, 2013 at 2:24 PM, Nikhil2405 [via Hadoop Common] <
ml-node+s472056n4024982h78@n3.nabble.com> wrote:

> Hi Nitesh,
>
> I think your localhost IP should be 127.0.0.1 (try this). Also check
> whether, after running ./start-all.sh, it is writing anything into the *c:/Hadoop/name
> folder* or not, and check that you have disabled your firewall; sometimes it
> can cause problems too.
>
>
> Thanks
>
> Nikhil
>





Re: no jobtracker to stop,no namenode to stop

Posted by NJain <ni...@gmail.com>.
Hi Nikhil,

I appreciate your quick response, but the issue persists. I believe I have
covered all the pointers you mentioned. Still, I am pasting the relevant
portions of the files so that you can verify.

1. /etc/hosts: localhost should not be commented out, and the IP address
should be added. The entry looks like this:
# localhost name resolution is handled within DNS itself.
        127.0.0.1       localhost

2. core-site.xml: hdfs://localhost:port
<configuration>
     <property>
         <name>fs.default.name</name>
         <value>hdfs://localhost:9000</value>
     </property>
</configuration>
3. mapred-site.xml: localhost:port, mapred.local.dir
<configuration>
     <property>
         <name>mapred.job.tracker</name>
         <value>localhost:9001</value>
     </property>
</configuration>

4. hdfs-site.xml: the replication factor should be one; include the
dfs.name.dir and dfs.data.dir properties.
<configuration>
     <property>
         <name>dfs.replication</name>
         <value>1</value>
     </property>
     <property>
         <name>dfs.name.dir</name>
         <value>c:/Hadoop/name</value>
     </property>
     <property>
         <name>dfs.data.dir</name>
         <value>c:/Hadoop/data</value>
     </property>
</configuration>


I am getting stuck at:
13/08/30 11:39:26 WARN mapred.JobClient: No job jar file set.  User classes
may not be found. See JobConf(Class) or JobConf#setJar(String).
13/08/30 11:39:26 INFO input.FileInputFormat: Total input paths to process
: 1
13/08/30 11:39:26 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
13/08/30 11:39:26 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/30 11:39:27 INFO mapred.JobClient: Running job: job_201308301135_0002
13/08/30 11:39:28 INFO mapred.JobClient:  map 0% reduce 0%

My Jobtracker UI looks like this:

Cluster Summary (Heap Size is 120.06 MB/888.94 MB)
Running Map Tasks: 0 | Running Reduce Tasks: 0 | Total Submissions: 1 | Nodes: 0
Occupied Map Slots: 0 | Occupied Reduce Slots: 0 | Reserved Map Slots: 0 | Reserved Reduce Slots: 0
Map Task Capacity: 0 | Reduce Task Capacity: 0 | Avg. Tasks/Node: - | Blacklisted Nodes: 0 | Graylisted Nodes: 0 | Excluded Nodes: 0



I have a feeling that the jobtracker is not able to find the task tracker,
as there is a 0 in the Nodes column.

Does this ring any bells for you?

Thanks,
Nitesh Jain



On Thu, Aug 29, 2013 at 5:51 PM, Nikhil2405 [via Hadoop Common] <
ml-node+s472056n4024848h24@n3.nabble.com> wrote:

> Hi Nitesh,
>
> I think your problem may be in your configuration, so check your files as
> follow
>
> 1. /etc/hosts: localhost should not be commented out, and the IP address
> should be added.
> 2. core-site.xml: hdfs://localhost:port
> 3. mapred-site.xml: localhost:port, mapred.local.dir
> 4. hdfs-site.xml: the replication factor should be one; include the
>    dfs.name.dir and dfs.data.dir properties.
>
> Thanks
>
> Nikhil
>





Re: no jobtracker to stop,no namenode to stop

Posted by NJain <ni...@gmail.com>.
Hi,

I am facing an issue where the map job is stuck at map 0% reduce 0%.

I have installed Hadoop 1.2.1 and am trying to run it on my Windows 8
machine using Cygwin in pseudo-distributed mode. I have followed the
instructions at http://hadoop.apache.org/docs/stable/single_node_setup.html
and copied the configuration files from there.

When I run stop-all.sh, I see the output below:
stopping jobtracker
*localhost: no tasktracker to stop*
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode

Can anyone please help or suggest a fix? I have been stuck on this for a
while now.

Thanks,
Nitesh 
   




Re: no jobtracker to stop,no namenode to stop

Posted by Harsh J <ha...@cloudera.com>.
In the spirit of http://xkcd.com/979/, please also let us know what you felt
the original issue was and how you managed to solve it, for the benefit of
other people searching in the future.


On Mon, Jan 21, 2013 at 3:26 PM, Sigehere <pe...@gmail.com> wrote:

> Hey friends, I have solved that error.
> Thanks
>
>
>
>
>



-- 
Harsh J

Re: no jobtracker to stop,no namenode to stop

Posted by Sigehere <pe...@gmail.com>.
Hey friends, I have solved that error.
Thanks
 




no jobtracker to stop,no namenode to stop

Posted by Sigehere <pe...@gmail.com>.
I also have the same problem.

$jps 
20120 Jps

My log info is as follows:
************************************************************/
2013-01-21 12:45:02,004 INFO org.apache.hadoop.mapred.JobTracker:
STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = Sigehere-lp/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
1393290; compiled by 'hortonfo' on Wed Oct  3 05:20:10 UTC 2012
************************************************************/
2013-01-21 12:45:02,090 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
loaded properties from hadoop-metrics2.properties
2013-01-21 12:45:02,123 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2013-01-21 12:45:02,124 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
at 10 second(s).
2013-01-21 12:45:02,124 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics system
started
2013-01-21 12:45:02,190 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
QueueMetrics,q=default registered.
2013-01-21 12:45:02,345 FATAL org.apache.hadoop.mapred.JobTracker:
java.lang.IllegalArgumentException: Does not contain a valid host:port
authority: local
        at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:162)
        at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:128)
        at
org.apache.hadoop.mapred.JobTracker.getAddress(JobTracker.java:2560)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2200)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
        at
org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
        at
org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

2013-01-21 12:45:02,345 INFO org.apache.hadoop.mapred.JobTracker:
SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down JobTracker at Sigehere-lp/127.0.1.1
************************************************************/

AND:-----------------------------
mapred-site.xml

 <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>A URI whose
        scheme and authority determine the FileSystem implementation. The
        uri’s scheme determines the config property (fs.SCHEME.impl) naming
        the FileSystem implementation class. The uri’s authority is used to
        determine the host, port, etc. for a filesystem.
    </description>
    <final>true</final>
  </property>

Can anybody tell me how I can resolve this problem?
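For context, "local" is the built-in default value of mapred.job.tracker, so this FATAL usually means the JobTracker never saw the mapred-site.xml shown above at all, typically because it reads a different conf directory (HADOOP_CONF_DIR) or because the property block sits outside &lt;configuration&gt;. A quick sanity check; the conf path and the file contents written here are illustrative assumptions mirroring the snippet in the post:

```shell
CONF_DIR=./conf   # assumption: must match the dir the daemons actually read
mkdir -p "$CONF_DIR"

# Illustrative mapred-site.xml matching the snippet in the post:
cat > "$CONF_DIR/mapred-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>
EOF

# Confirm the property is present with a host:port value; if the daemon
# still reports "local", it is reading some other conf directory.
grep -A 1 '<name>mapred.job.tracker</name>' "$CONF_DIR/mapred-site.xml" \
  | grep -o '<value>[^<]*</value>'
```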




Re: no jobtracker to stop,no namenode to stop

Posted by ursbrbalaji <ur...@gmail.com>.
Hi Madhu,

Thanks for the response; sorry, I was busy and couldn't check earlier.

My mapred-site.xml is as follows.

Let me know what changes are needed.

Thanks in advance.

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<!-- In: conf/mapred-site.xml -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>

- B R Balaji

Re: no jobtracker to stop,no namenode to stop

Posted by madhu phatak <ph...@gmail.com>.
An IP address will not work. You have to put the hostnames in every
configuration file.
On Wed, Feb 9, 2011 at 9:58 PM, madhu phatak <ph...@gmail.com> wrote:

>
> An IP address will not work. You have to put the hostnames in every
> configuration file.
>
> On Wed, Feb 9, 2011 at 2:01 PM, ursbrbalaji <ur...@gmail.com> wrote:
>
>>
>>
>> Hi Madhu,
>>
>> The jobtracker logs show the following exception.
>>
>> [quoted JobTracker startup log and ConnectException stack trace snipped;
>> the full log appears in the original message in this thread]
>> Please let me know what might be the problem.
>>
>> Thanks,
>> B R Balaji
>>
>
>

Re: no jobtracker to stop,no namenode to stop

Posted by madhu phatak <ph...@gmail.com>.
An IP address will not work. You have to put the hostnames in every
configuration file.
On Wed, Feb 9, 2011 at 2:01 PM, ursbrbalaji <ur...@gmail.com> wrote:

>
>
> Hi Madhu,
>
> The jobtracker logs show the following exception.
>
> [quoted JobTracker startup log and ConnectException stack trace snipped;
> the full log appears in the original message in this thread]
> Please let me know what might be the problem.
>
> Thanks,
> B R Balaji
>

Re: no jobtracker to stop,no namenode to stop

Posted by ursbrbalaji <ur...@gmail.com>.

Hi Madhu,

The jobtracker logs show the following exception.

2011-02-09 16:24:51,244 INFO org.apache.hadoop.mapred.JobTracker:
STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = BRBALAJI-PC/172.17.168.45
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-02-09 16:24:51,357 INFO org.apache.hadoop.mapred.JobTracker: Scheduler
configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2011-02-09 16:24:51,421 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
Initializing RPC Metrics with hostName=JobTracker, port=54311
2011-02-09 16:24:56,538 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2011-02-09 16:24:56,703 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
Opening the listener on 50030
2011-02-09 16:24:56,704 INFO org.apache.hadoop.http.HttpServer:
listener.getLocalPort() returned 50030
webServer.getConnectors()[0].getLocalPort() returned 50030
2011-02-09 16:24:56,704 INFO org.apache.hadoop.http.HttpServer: Jetty bound
to port 50030
2011-02-09 16:24:56,704 INFO org.mortbay.log: jetty-6.1.14
2011-02-09 16:24:57,394 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50030
2011-02-09 16:24:57,395 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=JobTracker, sessionId=
2011-02-09 16:24:57,396 INFO org.apache.hadoop.mapred.JobTracker: JobTracker
up at: 54311
2011-02-09 16:24:57,396 INFO org.apache.hadoop.mapred.JobTracker: JobTracker
webserver: 50030
2011-02-09 16:24:58,710 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
2011-02-09 16:24:59,711 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
2011-02-09 16:25:00,712 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
2011-02-09 16:25:01,713 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
2011-02-09 16:25:02,713 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
2011-02-09 16:25:03,714 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
2011-02-09 16:25:04,715 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
2011-02-09 16:25:05,715 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
2011-02-09 16:25:06,716 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
2011-02-09 16:25:07,717 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 9 time(s).
2011-02-09 16:25:07,722 INFO org.apache.hadoop.mapred.JobTracker: problem
cleaning system directory: null
java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on
connection exception: java.net.ConnectException: Connection refused
	at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
	at org.apache.hadoop.ipc.Client.call(Client.java:743)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
	at $Proxy4.getProtocolVersion(Unknown Source)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
	at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
	at
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
	at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1665)
	at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:183)
	at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:175)
	at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:3702)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
	at
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
	at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
	at org.apache.hadoop.ipc.Client.call(Client.java:720)
	... 16 more
2011-02-09 16:25:08,899 INFO org.apache.hadoop.mapred.JobTracker:
SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down JobTracker at BRBALAJI-PC/172.17.168.45
************************************************************/
Please let me know what might be the problem.
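The repeated "Retrying connect to server: localhost/127.0.0.1:54310" followed by "Connection refused" indicates the JobTracker itself came up fine but nothing was listening on the NameNode RPC port, i.e. the NameNode failed to start (its own log, and whether the name directory was ever formatted with `hadoop namenode -format`, would be the next things to check). A small bash sketch for probing that port; the port number is taken from the log above, and `/dev/tcp` is a bash-specific feature (netstat or `jps` work just as well):

```shell
# Probe whether anything is listening on the NameNode RPC port from the log.
# "closed" here corresponds to the "Connection refused" in the stack trace.
check_port() {
  local host=$1 port=$2
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    exec 3>&-
    echo "open"
  else
    echo "closed"
  fi
}

check_port localhost 54310
```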

Thanks,
B R Balaji

Re: no jobtracker to stop,no namenode to stop

Posted by madhu phatak <ph...@gmail.com>.
Please see the job tracker logs

On Tue, Feb 8, 2011 at 3:54 PM, ursbrbalaji <ur...@gmail.com> wrote:

>
> Hi Prabhu,
>
> I am facing exactly the same problem. I too followed the steps in the below
> link.
>
> Please let me know which configuration file was modified and what were the
> changes.
>
> Thanks,
> Balaji
>
>
>