Posted to common-dev@hadoop.apache.org by "Robert Chansler (JIRA)" <ji...@apache.org> on 2008/04/14 18:31:05 UTC

[jira] Updated: (HADOOP-2873) Namenode fails to restart after cluster shutdown - DFSClient: Could not obtain blocks even though all datanodes were up & live

     [ https://issues.apache.org/jira/browse/HADOOP-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Chansler updated HADOOP-2873:
------------------------------------

    Hadoop Flags: [Incompatible change]

Noted as incompatible in changes.txt

> Namenode fails to restart after cluster shutdown - DFSClient: Could not obtain blocks even though all datanodes were up & live
> -------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-2873
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2873
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: André Martin
>            Assignee: dhruba borthakur
>             Fix For: 0.17.0
>
>         Attachments: leaseConstruction.patch, leaseConstruction.patch, leaseConstruction.patch, leaseConstruction.patch
>
>
> The namenode fails to restart with the following exception:
>  2008-02-21 14:20:48,831 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG:
>  /************************************************************
>  STARTUP_MSG: Starting NameNode
>  STARTUP_MSG:   host = se09/141.76.xxx.xxx
>  STARTUP_MSG:   args = []
>  STARTUP_MSG:   version = 2008-02-19_11-01-48
>  STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/trunk -r 628999; compiled by 'hudson' on Tue Feb 19 11:09:05 UTC 2008
>  ************************************************************/
>  2008-02-21 14:20:49,367 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing RPC Metrics with serverName=NameNode, port=8000
>  2008-02-21 14:20:49,374 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: se09.inf.tu-dresden.de/141.76.xxx.xxx:8000
>  2008-02-21 14:20:49,378 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
>  2008-02-21 14:20:49,381 INFO org.apache.hadoop.dfs.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
>  2008-02-21 14:20:49,501 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=amartin,students
>  2008-02-21 14:20:49,501 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=supergroup
>  2008-02-21 14:20:49,501 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true
>  2008-02-21 14:20:49,788 INFO org.apache.hadoop.ipc.Server: Stopping server on 8000
>  2008-02-21 14:20:49,790 ERROR org.apache.hadoop.dfs.NameNode: java.io.IOException: Created 13 leases but found 4
>      at org.apache.hadoop.dfs.FSImage.loadFilesUnderConstruction(FSImage.java:935)
>      at org.apache.hadoop.dfs.FSImage.loadFSImage(FSImage.java:749)
>      at org.apache.hadoop.dfs.FSImage.loadFSImage(FSImage.java:634)
>      at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:223)
>      at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:79)
>      at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:261)
>      at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:242)
>      at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:131)
>      at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:176)
>      at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:162)
>      at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:851)
>      at org.apache.hadoop.dfs.NameNode.main(NameNode.java:860)
>  2008-02-21 14:20:49,791 INFO org.apache.hadoop.dfs.NameNode: SHUTDOWN_MSG:
>  /************************************************************
>  SHUTDOWN_MSG: Shutting down NameNode at se09/141.76.xxx.xxx
>  ************************************************************/ 
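
For context, the IOException above comes from a consistency check made while the image loader rebuilds the table of files that were still open for write: the number of leases it re-creates must match the number recorded in the image. Below is a minimal, self-contained Java sketch of that kind of check; it is not the actual org.apache.hadoop.dfs.FSImage code, and the class name LeaseLoadCheck, the simplified signature, and the duplicate-path scenario are all illustrative assumptions:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch only -- not the actual org.apache.hadoop.dfs.FSImage code.
    public class LeaseLoadCheck {
        // simplified lease table: path -> client holding the lease
        private final Map<String, String> leases = new HashMap<String, String>();

        // 'expected' stands in for the lease count persisted in the image;
        // 'paths' for the files-under-construction entries actually read back.
        void loadFilesUnderConstruction(int expected, String[] paths, String client)
                throws IOException {
            for (String path : paths) {
                leases.put(path, client); // duplicate paths collapse into one lease
            }
            if (leases.size() != expected) {
                throw new IOException("Created " + expected
                        + " leases but found " + leases.size());
            }
        }

        public static void main(String[] args) {
            LeaseLoadCheck check = new LeaseLoadCheck();
            try {
                // the image claims 13 leases, but only 4 distinct paths reload:
                check.loadFilesUnderConstruction(13,
                        new String[] {"/a", "/b", "/c", "/d"}, "DFSClient_1");
            } catch (IOException e) {
                System.out.println(e.getMessage()); // Created 13 leases but found 4
            }
        }
    }

Run as-is, the sketch prints "Created 13 leases but found 4", matching the log above. Whether duplicate or stale lease entries were the actual cause in this report is not claimed here; the point is only the shape of the mismatch, fewer leases materializing at load time than the image says were saved.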
> A cluster restart was needed because the DFS client produced the following error message even though all datanodes were up:
>  08/02/21 14:04:35 INFO fs.DFSClient: Could not obtain block blk_-4008950704646490788 from any node:  java.io.IOException: No live nodes contain current block
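
For reference, the client-side behavior behind that message is, roughly, a loop over the datanodes reported for the block, giving up only once every replica has failed. A minimal Java sketch under that assumption, with hypothetical names (BlockReadSketch, DatanodeReader); this is not the actual org.apache.hadoop.dfs.DFSClient API:

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    // Illustrative sketch only -- not the actual org.apache.hadoop.dfs.DFSClient code.
    public class BlockReadSketch {
        // hypothetical transport for reading one block replica from one datanode
        interface DatanodeReader {
            byte[] read(String datanode, long blockId) throws IOException;
        }

        // Try every datanode reported for the block; if all replicas fail,
        // surface the "no live nodes" condition so the caller can re-fetch
        // block locations from the namenode and retry.
        static byte[] readBlock(long blockId, List<String> locations,
                                DatanodeReader reader) throws IOException {
            for (String dn : locations) {
                try {
                    return reader.read(dn, blockId);
                } catch (IOException e) {
                    // dead, slow, or stale replica: fall through to the next one
                }
            }
            throw new IOException("No live nodes contain current block");
        }

        public static void main(String[] args) {
            try {
                // every replica fails -> the "no live nodes" IOException is thrown
                readBlock(-4008950704646490788L,
                        Arrays.asList("dn1:50010", "dn2:50010"),
                        (dn, id) -> { throw new IOException("connection refused"); });
            } catch (IOException e) {
                System.out.println(e.getMessage()); // No live nodes contain current block
            }
        }
    }

When this exception persists even though all datanodes are live, as reported here, it suggests the block locations the client holds no longer match what the datanodes actually serve.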

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.