Posted to hdfs-user@hadoop.apache.org by MOHAMMED IRFANULLA S <m....@huawei.com> on 2010/03/13 09:20:18 UTC

Problem formatting namenode

I'm facing issues while formatting the namenode. 
I've set hadoop.tmp.dir to /opt/hdfs.
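For context, hadoop.tmp.dir is typically set in conf/core-site.xml with an entry along these lines (a minimal sketch; only the property name and value come from this post, the description is an editorial note):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hdfs</value>
  <description>Base for Hadoop's local and HDFS storage directories;
  by default the NameNode image and edit log live under it.</description>
</property>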
Below is the log output.

linux-5e47:/usr/local/hadoop # ./bin/hadoop namenode -format
10/03/13 15:52:09 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = linux-5e47/162.2.11.16
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
************************************************************/
10/03/13 15:52:09 INFO namenode.FSNamesystem: fsOwner=root,root
10/03/13 15:52:09 INFO namenode.FSNamesystem: supergroup=supergroup
10/03/13 15:52:09 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/03/13 15:52:09 INFO common.Storage: Image file of size 94 saved in 0 seconds.
10/03/13 15:52:09 ERROR namenode.NameNode: java.io.IOException: No space left on device
        at sun.nio.ch.FileDispatcher.pwrite0(Native Method)
        at sun.nio.ch.FileDispatcher.pwrite(FileDispatcher.java:45)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:100)
        at sun.nio.ch.IOUtil.write(IOUtil.java:60)
        at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:648)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog$EditLogFileOutputStream.preallocate(FSEditLog.java:228)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog$EditLogFileOutputStream.flushAndSync(FSEditLog.java:204)
        at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:89)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog$EditLogFileOutputStream.create(FSEditLog.java:161)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createEditLogFile(FSEditLog.java:342)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1093)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1110)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:856)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:948)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

10/03/13 15:52:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at linux-5e47/162.2.11.16
************************************************************/



df -h gives me this:

linux-5e47:/usr/local/hadoop # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             101G  618M  100G   1% /
udev                  3.9G  160K  3.9G   1% /dev
/dev/sda5             101G   34M  100G   1% /home
/dev/sda7             8.6T  990M  8.6T   1% /opt
/dev/sda6             101G  641M  100G   1% /tmp
/dev/sda3             101G  2.7G   98G   3% /usr
/dev/sda4             101G  118M  100G   1% /var



As is evident from the output, there is more than enough space available on the device. If some other partition (smaller than 1 TB) is specified for hadoop.tmp.dir, this issue does not occur. So what could be the real reason behind the above problem?
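One way to narrow this down, independent of Hadoop, is to check whether an ordinary write of a megabyte or so succeeds in the same directory (the stack trace fails while the edit log is being preallocated) and whether the partition has free inodes. A rough sketch, assuming hadoop.tmp.dir=/opt/hdfs:

# Plain test write where the NameNode storage would live
mkdir -p /opt/hdfs
dd if=/dev/zero of=/opt/hdfs/ddtest bs=1M count=1
rm -f /opt/hdfs/ddtest

# Free inodes and filesystem type of the /opt partition
df -i /opt
df -T /opt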

I would really appreciate any help on this.

Thanks and regards,

Md. Irfanulla S.

 


RE: Problem formatting namenode

Posted by Sagar Shukla <sa...@persistent.co.in>.
Hi,
   Reviewing the situation again, I see that the /opt partition where the filesystem is getting created is 8.6 TB. Is this a single disk partition? Is it a SAN/NAS-mounted partition?

As per the architecture of Hadoop, HDFS will collectively use the storage of all the cluster members' /opt partitions. So you can have, say, 500 GB on each of the node members; i.e., if you have 100 nodes with 500 GB each, then you would have 50 TB available in the HDFS filesystem.
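For reference, once the datanodes are up, that aggregate capacity can be checked from the namenode with the standard dfsadmin report (command only; output not shown here):

# Prints configured and remaining HDFS capacity summed over all live
# datanodes, plus a per-datanode breakdown
./bin/hadoop dfsadmin -report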

If you can provide more details on the filesystem type etc., then I can try to help further.
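A quick sketch of standard Linux commands that would gather that information for the /opt partition:

# Filesystem type and mount options
df -T /opt
mount | grep ' /opt '

# Low-level filesystem status: block size, free blocks and free inodes
stat -f /opt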

Thanks,
Sagar

-----Original Message-----
From: MOHAMMED IRFANULLA S [mailto:m.irfanulla@huawei.com] 
Sent: Wednesday, March 17, 2010 9:21 AM
To: hdfs-user@hadoop.apache.org
Subject: RE: Problem formatting namenode

 


Hi Sagar,

Thanks for your reply.
I'm starting hadoop as user root, and the directory /opt has full [777] permissions recursively. But still, the same problem occurs. Is there any specific reason for the problem?
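A minimal sketch of how the effective user and the directory state can be double-checked from the shell (paths assume hadoop.tmp.dir=/opt/hdfs):

# Which user is actually invoking the format command
id
# Ownership and mode of the target directories
ls -ld /opt /opt/hdfs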

Thanks and regards,
Md. Irfanulla S.




-----Original Message-----
From: Sagar Shukla [mailto:sagar_shukla@persistent.co.in] 
Sent: Sunday, March 14, 2010 8:22 PM
To: hdfs-user@hadoop.apache.org; m.irfanulla@huawei.com
Subject: RE: Problem formatting namenode

Hi,
    Hadoop could be starting as the user 'hadoop' by default. So please check whether the user 'hadoop' has write permission on the directory /opt, where the data directory /opt/hdfs is getting created.

Thanks,
Sagar

RE: Problem formatting namenode

Posted by Sagar Shukla <sa...@persistent.co.in>.
Hi,
    Hadoop could be starting as the user 'hadoop' by default. So please check whether the user 'hadoop' has write permission on the directory /opt, where the data directory /opt/hdfs is getting created.
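A hedged way to test that directly is to attempt a write as that user (this assumes a local user named 'hadoop' exists and that sudo is available; neither is confirmed in the thread):

# Try creating a file in the data directory as the 'hadoop' user
sudo -u hadoop touch /opt/hdfs/.write-test && rm -f /opt/hdfs/.write-test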

Thanks,
Sagar