Posted to mapreduce-user@hadoop.apache.org by Ravi Shetye <ra...@gmail.com> on 2013/09/30 09:33:55 UTC

unable to restart namenode on hadoop 1.0.4

Can someone please help me with how to go about debugging this issue? The NN
log has the following error stack:

2013-09-30 07:28:42,768 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
started
2013-09-30 07:28:42,967 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
registered.
2013-09-30 07:28:42,972 WARN
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already
exists!
2013-09-30 07:28:42,978 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
registered.
2013-09-30 07:28:42,980 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
NameNode registered.
2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: VM type
  = 64-bit
2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
memory = 27.3075 MB
2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: capacity
 = 2^22 = 4194304 entries
2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet:
recommended=4194304, actual=4194304
2013-09-30 07:28:43,084 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
2013-09-30 07:28:43,084 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-09-30 07:28:43,084 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true
2013-09-30 07:28:43,119 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.block.invalidate.limit=100
2013-09-30 07:28:43,119 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
accessTokenLifetime=0 min(s)
2013-09-30 07:28:43,183 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStateMBean and NameNodeMXBean
2013-09-30 07:28:43,207 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
occuring more than 10 times
2013-09-30 07:28:43,221 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files = 528665
2013-09-30 07:28:49,109 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files under construction = 7
2013-09-30 07:28:49,111 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 79872266 loaded in 5 seconds.
2013-09-30 07:28:49,113 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
java.lang.NullPointerException
        at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1099)
        at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1111)
        at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.addNode(FSDirectory.java:1014)
        at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedAddFile(FSDirectory.java:208)
        at
org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:631)
        at
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1021)
        at
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:839)
        at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:377)
        at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

2013-09-30 07:28:49,114 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:


-- 
RAVI SHETYE

Re: unable to restart namenode on hadoop 1.0.4

Posted by Ravi Shetye <ra...@gmail.com>.
I do not think these are the same issue; please correct me if I am wrong.
The SO link is about the SNN being unable to establish communication with the NN.
In my case I am unable to launch the NN itself.

The NPE is thrown at the line marked below, but I am not sure how to go about
resolving it:

  /** Add a node child to the inodes at index pos.
   * Its ancestors are stored at [0, pos-1].
   * QuotaExceededException is thrown if it violates quota limit */
  private <T extends INode> T addChild(INode[] pathComponents, int pos,
      T child, long childDiskspace, boolean inheritPermission,
      boolean checkQuota) throws QuotaExceededException {
    INode.DirCounts counts = new INode.DirCounts();
    child.spaceConsumedInTree(counts);
    if (childDiskspace < 0) {
      childDiskspace = counts.getDsCount();
    }
    updateCount(pathComponents, pos, counts.getNsCount(), childDiskspace,
        checkQuota);
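    // The NPE in the stack trace points at the following statement: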
    T addedNode = ((INodeDirectory)pathComponents[pos-1]).addChild(
        child, inheritPermission);
    if (addedNode == null) {
      updateCount(pathComponents, pos, -counts.getNsCount(),
          -childDiskspace, true);
    }
    return addedNode;
  }
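
If I read the stack trace right, the only thing on the marked statement that can
reasonably be null is pathComponents[pos-1] (or pathComponents itself), i.e. the
parent directory of the file being re-added while FSEditLog.loadFSEdits replays
the edit log; the fsimage itself had already loaded fine just before the error.
A minimal, self-contained sketch of that failure mode (hypothetical code, not
taken from Hadoop) would be:

  // Hypothetical illustration only: mimics the shape of the marked statement.
  // If the slot holding the parent directory is null, the cast-and-call
  // pattern throws the same NullPointerException seen in the NN log.
  public class NullParentDemo {
    interface INode {}
    static class INodeDirectory implements INode {
      INode addChild(INode child) { return child; }
    }

    public static void main(String[] args) {
      // Simulates resolving a path whose parent directory is missing from the
      // in-memory namespace: the parent slot comes back null.
      INode[] pathComponents = { new INodeDirectory(), null, new INodeDirectory() };
      int pos = 2;
      ((INodeDirectory) pathComponents[pos - 1]).addChild(pathComponents[pos]); // NPE here
    }
  }

So the real question seems to be why an edit-log record refers to a path whose
parent directory is not present in the namespace at replay time.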




On Mon, Sep 30, 2013 at 1:31 PM, Manoj Sah <ma...@cloudwick.com> wrote:

> Hi,
> http://stackoverflow.com/questions/5490805/hadoop-nullpointerexcep
>
> try this link
>
> Thanks
> Manoj
>
>
> On Mon, Sep 30, 2013 at 1:03 PM, Ravi Shetye <ra...@gmail.com> wrote:
>
>> Can some one please help me about how I go ahead debugging the issue.The
>> NN log has the following error stack
>>
>> 2013-09-30 07:28:42,768 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>> started
>> 2013-09-30 07:28:42,967 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>> registered.
>> 2013-09-30 07:28:42,972 WARN
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already
>> exists!
>> 2013-09-30 07:28:42,978 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>> registered.
>> 2013-09-30 07:28:42,980 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>> NameNode registered.
>> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>>     = 64-bit
>> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>> memory = 27.3075 MB
>> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: capacity
>>    = 2^22 = 4194304 entries
>> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet:
>> recommended=4194304, actual=4194304
>> 2013-09-30 07:28:43,084 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
>> 2013-09-30 07:28:43,084 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>> 2013-09-30 07:28:43,084 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> isPermissionEnabled=true
>> 2013-09-30 07:28:43,119 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> dfs.block.invalidate.limit=100
>> 2013-09-30 07:28:43,119 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>> accessTokenLifetime=0 min(s)
>> 2013-09-30 07:28:43,183 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>> FSNamesystemStateMBean and NameNodeMXBean
>> 2013-09-30 07:28:43,207 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>> occuring more than 10 times
>> 2013-09-30 07:28:43,221 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Number of files = 528665
>> 2013-09-30 07:28:49,109 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Number of files under
>> construction = 7
>> 2013-09-30 07:28:49,111 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Image file of size 79872266
>> loaded in 5 seconds.
>> 2013-09-30 07:28:49,113 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode:
>> java.lang.NullPointerException
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1099)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1111)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addNode(FSDirectory.java:1014)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedAddFile(FSDirectory.java:208)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:631)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1021)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:839)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:377)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>
>> 2013-09-30 07:28:49,114 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>
>>
>> --
>> RAVI SHETYE
>>
>
>
>
> --
> *
> With Best Regards
> Manoj Kumar Sahu
> Cloudwick Technologies
> Hyderabad-500016.
> 8374232928 /7842496524
> *
> Pl. *Save a tree. Please don't print this e-mail unless you really need
> to...*
>



-- 
RAVI SHETYE

Re: unable to restart namenode on hadoop 1.0.4

Posted by Manoj Sah <ma...@cloudwick.com>.
Hi,

Try this link:
http://stackoverflow.com/questions/5490805/hadoop-nullpointerexcep

Thanks,
Manoj


On Mon, Sep 30, 2013 at 1:03 PM, Ravi Shetye <ra...@gmail.com> wrote:

> Can some one please help me about how I go ahead debugging the issue.The
> NN log has the following error stack
>
> 2013-09-30 07:28:42,768 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
> started
> 2013-09-30 07:28:42,967 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
> registered.
> 2013-09-30 07:28:42,972 WARN
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already
> exists!
> 2013-09-30 07:28:42,978 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
> registered.
> 2013-09-30 07:28:42,980 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> NameNode registered.
> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>   = 64-bit
> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
> memory = 27.3075 MB
> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet: capacity
>    = 2^22 = 4194304 entries
> 2013-09-30 07:28:43,012 INFO org.apache.hadoop.hdfs.util.GSet:
> recommended=4194304, actual=4194304
> 2013-09-30 07:28:43,084 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=ubuntu
> 2013-09-30 07:28:43,084 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2013-09-30 07:28:43,084 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2013-09-30 07:28:43,119 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.block.invalidate.limit=100
> 2013-09-30 07:28:43,119 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
> accessTokenLifetime=0 min(s)
> 2013-09-30 07:28:43,183 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStateMBean and NameNodeMXBean
> 2013-09-30 07:28:43,207 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
> occuring more than 10 times
> 2013-09-30 07:28:43,221 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 528665
> 2013-09-30 07:28:49,109 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 7
> 2013-09-30 07:28:49,111 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 79872266 loaded in 5 seconds.
> 2013-09-30 07:28:49,113 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode:
> java.lang.NullPointerException
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1099)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1111)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addNode(FSDirectory.java:1014)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedAddFile(FSDirectory.java:208)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:631)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1021)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:839)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:377)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>
> 2013-09-30 07:28:49,114 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>
>
> --
> RAVI SHETYE
>



-- 
*
With Best Regards
Manoj Kumar Sahu
Cloudwick Technologies
Hyderabad-500016.
8374232928 /7842496524
*
Pl. *Save a tree. Please don't print this e-mail unless you really need
to...*
