Posted to common-user@hadoop.apache.org by Xavier Stevens <Xa...@fox.com> on 2008/06/12 20:35:31 UTC
Hadoop not stopping processes
Has anyone else had a problem with Hadoop not stopping processes? I am
running 0.16.4, and when I issue bin/stop-all.sh I get "no <process> to
stop" for every node, but a quick look at the processes on the system
says otherwise. They don't show up when I run "jps" either.
Thanks,
-Xavier
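A likely reason the daemons are invisible to jps: jps only lists JVMs owned by the user who invokes it, so daemons started from a different account (e.g. via su or a boot-time cron job) will not appear, while ps sees every process. A minimal sketch for finding them regardless of owner; the daemon class names are assumed from the 0.16-era org.apache.hadoop.dfs and mapred packages:

```shell
#!/bin/sh
# jps only lists JVMs owned by the invoking user, so daemons started
# under another account are invisible to it; ps sees every process.
# Daemon class names below are an assumption based on Hadoop 0.16.
find_hadoop_daemons() {
    # Bracketing the first letter keeps grep from matching itself
    # when used in a pipeline like:  ps aux | find_hadoop_daemons
    grep -E '[N]ameNode|[D]ataNode|[J]obTracker|[T]askTracker'
}

# On a live cluster you would run:  ps aux | find_hadoop_daemons
# Demo against a canned ps-style line:
echo "hadoop 4242 java org.apache.hadoop.dfs.NameNode" | find_hadoop_daemons
```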
RE: Hadoop not stopping processes
Posted by Xavier Stevens <Xa...@fox.com>.
I was using su from a different account. Now, when I try to start back up, I see this stack trace from the NameNode as it fails to start. Is this a bug?
2008-06-12 12:04:27,948 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=4000
2008-06-12 12:04:27,953 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: <namenode name/ip here>
2008-06-12 12:04:27,957 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2008-06-12 12:04:27,959 INFO org.apache.hadoop.dfs.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2008-06-12 12:04:28,015 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=hadoop,hadoop,wheel
2008-06-12 12:04:28,016 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=hadoop
2008-06-12 12:04:28,016 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true
2008-06-12 12:04:29,365 ERROR org.apache.hadoop.dfs.NameNode: java.lang.NullPointerException
at org.apache.hadoop.dfs.INodeDirectory.getExistingPathINodes(INode.java:408)
at org.apache.hadoop.dfs.INodeDirectory.getNode(INode.java:357)
at org.apache.hadoop.dfs.INodeDirectory.getNode(INode.java:365)
at org.apache.hadoop.dfs.FSDirectory.unprotectedDelete(FSDirectory.java:458)
at org.apache.hadoop.dfs.FSEditLog.loadFSEdits(FSEditLog.java:537)
at org.apache.hadoop.dfs.FSImage.loadFSEdits(FSImage.java:756)
at org.apache.hadoop.dfs.FSImage.loadFSImage(FSImage.java:639)
at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:222)
at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:79)
at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:254)
at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:235)
at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:131)
at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:176)
at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:162)
at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:846)
at org.apache.hadoop.dfs.NameNode.main(NameNode.java:855)
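The trace shows the failure happening while the NameNode replays the on-disk edit log at startup (FSEditLog.loadFSEdits calling FSDirectory.unprotectedDelete), i.e. a delete record in the edits file resolves to a path that no longer exists in the image, so the same NPE will recur on every restart until the metadata is dealt with. Before experimenting with the fsimage/edits pair, it is worth snapshotting the NameNode storage directory. A hedged sketch; the path is an assumption, so substitute your dfs.name.dir value from hadoop-site.xml:

```shell
#!/bin/sh
# Snapshot the NameNode storage directory before attempting any repair,
# so the current fsimage/edits pair is preserved.  The default path here
# is a placeholder -- substitute your dfs.name.dir from hadoop-site.xml.
NAME_DIR="${NAME_DIR:-/tmp/dfs-name-demo}"

# Demo setup only: fake the storage layout so the sketch is runnable.
# On a real cluster the directory already exists -- skip these two lines.
mkdir -p "${NAME_DIR}/current"
touch "${NAME_DIR}/current/fsimage" "${NAME_DIR}/current/edits"

# The actual safeguard: copy the whole directory aside before touching it.
rm -rf "${NAME_DIR}.bak"
cp -a "${NAME_DIR}" "${NAME_DIR}.bak"
ls "${NAME_DIR}.bak/current"
```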
-----Original Message-----
From: milesosb@gmail.com on behalf of Miles Osborne
Sent: Thu 6/12/2008 12:00 PM
To: core-user@hadoop.apache.org
Subject: Re: Hadoop not stopping processes
(For 0.16.4) I've noticed that stop-all.sh sometimes doesn't work when the
corresponding start-all.sh was run as a cron job at system boot. Also, if
your USER differs, it won't work:
e.g. as root, with USER=root, start-all.sh needs a corresponding stop-all.sh
run also as root; if instead you su to root from a non-root account,
stop-all.sh will fail.
Miles
2008/6/12 Xavier Stevens <Xa...@fox.com>:
> Has anyone else had a problem with Hadoop not stopping processes? I am
> running 0.16.4, and when I issue bin/stop-all.sh I get "no <process> to
> stop" for every node, but a quick look at the processes on the system
> says otherwise. They don't show up when I run "jps" either.
>
> Thanks,
>
> -Xavier
>
--
The University of Edinburgh is a charitable body, registered in Scotland,
with registration number SC005336.
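Miles's USER observation comes down to pid files: hadoop-daemon.sh records each daemon's pid at start time in a file whose name embeds the starting user (by default under /tmp), and the stop scripts only look for the file matching the current USER, hence "no <process> to stop". A sketch of the naming convention as I read 0.16's hadoop-daemon.sh; this is an assumption, so check HADOOP_PID_DIR and HADOOP_IDENT_STRING in conf/hadoop-env.sh:

```shell
#!/bin/sh
# stop-all.sh locates daemons through pid files written at start time.
# By default they live in /tmp and embed the starting user's name, so a
# stop run under a different USER looks for files that do not exist.
# The naming below follows my reading of 0.16's hadoop-daemon.sh (an
# assumption -- verify against your conf/hadoop-env.sh settings).
pid_file() {  # usage: pid_file <user> <daemon>
    echo "/tmp/hadoop-$1-$2.pid"
}

pid_file root namenode     # what a root-started NameNode writes
pid_file hadoop namenode   # what a stop run as 'hadoop' looks for

# Manual fallback once the pid files are lost: find the pids with ps
# and kill them directly, e.g.
#   kill $(ps aux | grep '[N]ameNode' | awk '{print $2}')
```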
Re: Hadoop not stopping processes
Posted by Miles Osborne <mi...@inf.ed.ac.uk>.
(For 0.16.4) I've noticed that stop-all.sh sometimes doesn't work when the
corresponding start-all.sh was run as a cron job at system boot. Also, if
your USER differs, it won't work:
e.g. as root, with USER=root, start-all.sh needs a corresponding stop-all.sh
run also as root; if instead you su to root from a non-root account,
stop-all.sh will fail.
Miles
2008/6/12 Xavier Stevens <Xa...@fox.com>:
> Has anyone else had a problem with Hadoop not stopping processes? I am
> running 0.16.4, and when I issue bin/stop-all.sh I get "no <process> to
> stop" for every node, but a quick look at the processes on the system
> says otherwise. They don't show up when I run "jps" either.
>
> Thanks,
>
> -Xavier
>