Posted to hdfs-dev@hadoop.apache.org by "Harsh J (Resolved) (JIRA)" <ji...@apache.org> on 2011/11/20 06:56:51 UTC

[jira] [Resolved] (HDFS-129) NPE in FSNamesystem.checkDecommissionStateInternal

     [ https://issues.apache.org/jira/browse/HDFS-129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J resolved HDFS-129.
--------------------------

    Resolution: Not A Problem

This appears to have been fixed. On the current branch-0.20-security, a registration attempt from an excluded node produces a proper "disallowed" exception rather than an NPE.

We can reopen if we see an NPE again.
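For illustration, here is a minimal, hypothetical sketch of the failure mode and the guarded check that avoids it. The class and method names below are invented for this example and are not the actual FSNamesystem code: the point is that looking up an excluded-but-unregistered datanode yields null, and rejecting it explicitly (instead of dereferencing the null entry) turns the NPE into the expected "denied" response.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical simplification of the namenode-side registration check.
// The real FSNamesystem logic differs; this only models the failure mode.
public class DecommissionCheck {
    static class DatanodeInfo {
        boolean decommissioned = false;
    }

    // Guarded version: an excluded host that is not in the registered map
    // is rejected explicitly instead of having a method called on null.
    static String verifyRegistration(String host,
                                     Set<String> excluded,
                                     Map<String, DatanodeInfo> registered) {
        if (excluded.contains(host)) {
            DatanodeInfo node = registered.get(host);  // may be null
            if (node == null || !node.decommissioned) {
                return "Datanode denied communication with namenode: " + host;
            }
        }
        return "registered";
    }

    public static void main(String[] args) {
        Set<String> excluded = Set.of("dn1.example.com");
        Map<String, DatanodeInfo> registered = new HashMap<>();
        // Excluded but unknown node: guarded code rejects it, no NPE.
        System.out.println(verifyRegistration("dn1.example.com", excluded, registered));
        // A node that is not excluded registers normally.
        System.out.println(verifyRegistration("dn2.example.com", excluded, registered));
    }
}
```

With the null check in place the excluded node gets the rejection message the reporter expected, which matches the behavior observed on branch-0.20-security.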
                
> NPE in FSNamesystem.checkDecommissionStateInternal
> --------------------------------------------------
>
>                 Key: HDFS-129
>                 URL: https://issues.apache.org/jira/browse/HDFS-129
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Koji Noguchi
>            Priority: Minor
>
> When bringing back a decommissioned node, we forgot to take the hostname out of dfs.hosts.exclude and call dfsadmin -refreshNodes.
> Somehow, instead of getting a 'reject' message, the datanode shut down with an NPE. After dfsadmin -refreshNodes, the datanode was able to join back.
> Stack trace, 
> {noformat} 
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = ____.____.com/99.9.99.9
> STARTUP_MSG:   args = []
> ************************************************************/
> 2008-02-26 20:30:56,523 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with
> processName=DataNode, sessionId=null
> 2008-02-26 20:30:57,818 INFO org.apache.hadoop.dfs.DataNode: Opened server at -----
> 2008-02-26 20:30:57,938 INFO org.mortbay.util.Credential: Checking Resource aliases
> 2008-02-26 20:30:57,982 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
> 2008-02-26 20:30:58,000 INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
> 2008-02-26 20:30:58,001 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
> 2008-02-26 20:30:58,360 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@20fa83
> 2008-02-26 20:30:58,462 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
> 2008-02-26 20:30:58,464 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:-----
> 2008-02-26 20:30:58,464 INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@16dc861
> 2008-02-26 20:30:58,464 INFO org.apache.hadoop.dfs.DataNode: Starting to run script to get datanode network location
> 2008-02-26 20:30:58,591 INFO org.mortbay.util.ThreadedServer: Stopping Acceptor
> ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=-----]
> 2008-02-26 20:30:58,593 INFO org.mortbay.http.SocketListener: Stopped SocketListener on 0.0.0.0:-----
> 2008-02-26 20:30:58,642 INFO org.mortbay.util.Container: Stopped HttpContext[/static,/static]
> 2008-02-26 20:30:58,680 INFO org.mortbay.util.Container: Stopped HttpContext[/logs,/logs]
> 2008-02-26 20:30:58,681 INFO org.mortbay.util.Container: Stopped org.mortbay.jetty.servlet.WebApplicationHandler@20fa83
> 2008-02-26 20:30:58,718 INFO org.mortbay.util.Container: Stopped WebApplicationContext[/,/]
> 2008-02-26 20:30:58,719 INFO org.mortbay.util.Container: Stopped org.mortbay.jetty.Server@16dc861
> 2008-02-26 20:30:58,719 ERROR org.apache.hadoop.dfs.DataNode: org.apache.hadoop.ipc.RemoteException:
> java.io.IOException: java.lang.NullPointerException
>         at org.apache.hadoop.dfs.FSNamesystem.checkDecommissionStateInternal(FSNamesystem.java:2918)
>         at org.apache.hadoop.dfs.FSNamesystem.verifyNodeRegistration(FSNamesystem.java:3134)
>         at org.apache.hadoop.dfs.FSNamesystem.registerDatanode(FSNamesystem.java:1679)
>         at org.apache.hadoop.dfs.NameNode.register(NameNode.java:538)
>         at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:379)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:596)
>         at org.apache.hadoop.ipc.Client.call(Client.java:482)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:184)
>         at org.apache.hadoop.dfs.$Proxy0.register(Unknown Source)
>         at org.apache.hadoop.dfs.DataNode.register(DataNode.java:391)
>         at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:287)
>         at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:206)
>         at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:1575)
>         at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1519)
>         at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:1540)
>         at org.apache.hadoop.dfs.DataNode.main(DataNode.java:1711)
> 2008-02-26 20:30:58,720 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at ___.____.com/99.9.99.9
> ************************************************************/
> {noformat} 
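The recovery the reporter describes amounts to the standard recommission steps. A rough sketch (the exclude-file path here is illustrative; the actual path is whatever dfs.hosts.exclude points at in your configuration):

```shell
# Remove the host from the file referenced by dfs.hosts.exclude
# (path below is an assumption for this example), then ask the
# NameNode to re-read the include/exclude lists.
grep -v '^dn1.example.com$' /etc/hadoop/conf/dfs.exclude > /tmp/dfs.exclude \
  && mv /tmp/dfs.exclude /etc/hadoop/conf/dfs.exclude
hadoop dfsadmin -refreshNodes
```

After the refresh, the datanode can register again, as noted in the report.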

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira