Posted to mapreduce-user@hadoop.apache.org by Fatih Haltas <fa...@nyu.edu> on 2014/03/16 10:07:33 UTC

I am about to lose all my data please help

Dear All,

I have just restarted the machines of my hadoop cluster. Now, I am trying to
restart the hadoop cluster again, but I am getting an error on namenode
restart. I am afraid of losing my data, as the cluster was running properly
for more than 3 months. Currently, I believe that if I format the namenode,
it will work again; however, the data will be lost. Is there any way to
solve this without losing the data?

I will really appreciate any help.

Thanks.


=====================
Here are the logs:
====================
2014-02-26 16:02:39,698 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
2014-02-26 16:02:40,005 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
loaded properties from hadoop-metrics2.properties
2014-02-26 16:02:40,019 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2014-02-26 16:02:40,021 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2014-02-26 16:02:40,021 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
started
2014-02-26 16:02:40,169 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
registered.
2014-02-26 16:02:40,193 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
registered.
2014-02-26 16:02:40,194 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
NameNode registered.
2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type
  = 64-bit
2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
memory = 17.77875 MB
2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity
 = 2^21 = 2097152 entries
2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
recommended=2097152, actual=2097152
2014-02-26 16:02:40,273 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2014-02-26 16:02:40,273 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2014-02-26 16:02:40,274 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true
2014-02-26 16:02:40,279 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.block.invalidate.limit=100
2014-02-26 16:02:40,279 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
accessTokenLifetime=0 min(s)
2014-02-26 16:02:40,724 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStateMBean and NameNodeMXBean
2014-02-26 16:02:40,749 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
occuring more than 10 times
2014-02-26 16:02:40,780 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
java.io.IOException: NameNode is not formatted.
        at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
        at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2014-02-26 16:02:40,781 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
NameNode is not formatted.
        at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
        at
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

2014-02-26 16:02:40,781 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
************************************************************/

===========================
Here is the core-site.xml
===========================
 <?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
    <name>fs.default.name</name>
    <value>-BLANKED</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/project/hadoop-data</value>
  </property>
</configuration>
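For context: when dfs.name.dir is not set in hdfs-site.xml, the namenode image ends up under ${hadoop.tmp.dir}/dfs/name, i.e. under the hadoop.tmp.dir shown above. A minimal hdfs-site.xml sketch that pins the image to explicit, redundant directories (the paths are illustrative, not taken from this thread):

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <!-- Comma-separated list: the fsimage is replicated into each directory. -->
    <value>/home/hadoop/project/hadoop-data/dfs/name,/mnt/backup/dfs/name</value>
  </property>
</configuration>
```

Keeping at least one copy on a separate disk or mount protects against exactly the kind of loss discussed in this thread.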

Re: I am about to lose all my data please help

Posted by Fatih Haltas <fa...@nyu.edu>.
OK, thanks to you all. I just removed the version information (the VERSION
files) from all the datanodes and the namenode, then restarted, and it is
working fine now.
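The fix described above (clearing stale VERSION files so the namespaceIDs match again) is destructive, so it is worth inspecting first. A sketch for listing them; the storage root is the hadoop.tmp.dir from this thread and the layout is the Hadoop 1.x default, so adjust both for your cluster:

```shell
#!/bin/sh
# Print every VERSION file (which records the namespaceID) under a
# Hadoop 1.x storage root. Mismatched namespaceIDs between the namenode
# and the datanodes are a common cause of startup failures after a restart.
show_versions() {
  # $1: storage root, e.g. the value of hadoop.tmp.dir
  find "$1" -name VERSION -print -exec cat {} \; 2>/dev/null
}

# Path taken from this thread's core-site.xml; adjust for your cluster.
show_versions /home/hadoop/project/hadoop-data || true
```

Compare the namespaceID lines across nodes before removing or editing anything; deleting VERSION files on the namenode side can make recovery harder.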


On Mon, Mar 24, 2014 at 5:52 PM, praveenesh kumar <pr...@gmail.com>wrote:

> Can you also make sure your hostname and IP address are still mapped
> correctly. Because what I am guessing is when you restart your machine,
> your /etc/hosts entries might get restored (it happens in some
> distributions, based on how you installed it). So when you are trying to
> restart your namenode, it might be pointing to some different IP/machine
> (in general localhost).
>
> I can't think of any reason how  it can happen just by restarting the
> machine.
>
>
> On Mon, Mar 24, 2014 at 5:42 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> Can you confirm that your namenode image and fseditlog are still there? If
>> not, then your data IS lost.
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Sun, Mar 23, 2014 at 6:24 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>>
>>> No, of course not; I blanked it out.
>>>
>>>
>>> On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar <pr...@gmail.com>wrote:
>>>
>>>> Is this property correct ?
>>>>
>>>>
>>>> <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>-BLANKED</value>
>>>>   </property>
>>>>
>>>> Regards
>>>> Prav
>>>>
>>>>
>>>> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>>>>
>>>>> Thanks for your help, but I still could not solve my problem.
>>>>>
>>>>>
>>>>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>>
>>>>>> Ah yes, I overlooked this. Then please check whether the files are there
>>>>>> or not: "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>>>>
>>>>>> Regards,
>>>>>> *Stanley Shi,*
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com>wrote:
>>>>>>
>>>>>>> I don't think this is the case, because there is;
>>>>>>>   <property>
>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>   </property>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>>>>
>>>>>>>> One possible reason is that you didn't set the namenode working
>>>>>>>> directory; by default it's in the "/tmp" folder, and the "/tmp" folder might
>>>>>>>> get deleted by the OS without any notification. If this is the case, I am
>>>>>>>> afraid you have lost all your namenode data.
>>>>>>>>
>>>>>>>> *<property>
>>>>>>>>   <name>dfs.name.dir</name>
>>>>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>>>>>       of directories then the name table is replicated in all of the
>>>>>>>>       directories, for redundancy. </description>
>>>>>>>> </property>*
>>>>>>>>
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> *Stanley Shi,*
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <
>>>>>>>> mirko.kaempf@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> What is the location of the NameNode's fsimage and edit logs?
>>>>>>>>> And how much memory does the NameNode have?
>>>>>>>>>
>>>>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>>>>> checkpointing?
>>>>>>>>>
>>>>>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>>>>>
>>>>>>>>> With this information at hand, one might be able to fix your
>>>>>>>>> setup, but do not format the old namenode before
>>>>>>>>> everything is working with a fresh one.
>>>>>>>>>
>>>>>>>>> Grab a copy of the maintenance guide:
>>>>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>>>>> which helps with solving this type of problem as well.
>>>>>>>>>
>>>>>>>>> Best wishes
>>>>>>>>> Mirko
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>>>>>
>>>>>>>>> Dear All,
>>>>>>>>>>
>>>>>>>>>> I have just restarted machines of my hadoop clusters. Now, I am
>>>>>>>>>> trying to restart hadoop clusters again, but getting error on namenode
>>>>>>>>>> restart. I am afraid of loosing my data as it was properly running for more
>>>>>>>>>> than 3 months. Currently, I believe if I do namenode formatting, it will
>>>>>>>>>> work again, however, data will be lost. Is there anyway to solve this
>>>>>>>>>> without losing the data.
>>>>>>>>>>
>>>>>>>>>> I will really appreciate any help.
>>>>>>>>>>
>>>>>>>>>> Thanks.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> =====================
>>>>>>>>>> Here is the logs;
>>>>>>>>>> ====================
>>>>>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>>>>> /************************************************************
>>>>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>>>>> STARTUP_MSG:   args = []
>>>>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>>>>> STARTUP_MSG:   build =
>>>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0-r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>>>>> ************************************************************/
>>>>>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>>>>>> hadoop-metrics2.properties
>>>>>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>>>> MetricsSystem,sub=Stats registered.
>>>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>>>>>> period at 10 second(s).
>>>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>>>>> started
>>>>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>>>>> registered.
>>>>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>>>>> registered.
>>>>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>>>> NameNode registered.
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>>>>> type       = 64-bit
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>>>>> max memory = 17.77875 MB
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>>> recommended=2097152, actual=2097152
>>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>>> isPermissionEnabled=true
>>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>>> dfs.block.invalidate.limit=100
>>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>>>>> occuring more than 10 times
>>>>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>>>>> initialization failed.
>>>>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>>>>> NameNode is not formatted.
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>>>
>>>>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>>>>> /************************************************************
>>>>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>>>>> ************************************************************/
>>>>>>>>>>
>>>>>>>>>> ===========================
>>>>>>>>>> Here is the core-site.xml
>>>>>>>>>> ===========================
>>>>>>>>>>  <?xml version="1.0"?>
>>>>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>>>>
>>>>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>>>>
>>>>>>>>>> <configuration>
>>>>>>>>>> <property>
>>>>>>>>>>     <name>fs.default.name</name>
>>>>>>>>>>     <value>-BLANKED</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>>>>   </property>
>>>>>>>>>> </configuration>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
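Praveenesh's /etc/hosts suggestion above can be checked with a couple of commands; this is a generic sketch, nothing in it is specific to this cluster:

```shell
#!/bin/sh
# Show how the local hostname currently resolves, to spot a namenode
# host that silently fell back to 127.0.0.1 after a reboot rewrote
# /etc/hosts.
echo "hostname: $(hostname)"
# Static mappings currently in effect (comments stripped).
grep -v '^#' /etc/hosts || true
```

If the namenode host now maps to 127.0.0.1 where it previously had a LAN address, restoring the old /etc/hosts entry is safer than reformatting.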

>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>>>>> started
>>>>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>>>>> registered.
>>>>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>>>>> registered.
>>>>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>>>> NameNode registered.
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>>>>> type       = 64-bit
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>>>>> max memory = 17.77875 MB
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>>> recommended=2097152, actual=2097152
>>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>>> isPermissionEnabled=true
>>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>>> dfs.block.invalidate.limit=100
>>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>>>>> occuring more than 10 times
>>>>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>>>>> initialization failed.
>>>>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>>>>> NameNode is not formatted.
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>>>
>>>>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>>>>> /************************************************************
>>>>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>>>>> ************************************************************/
>>>>>>>>>>
>>>>>>>>>> ===========================
>>>>>>>>>> Here is the core-site.xml
>>>>>>>>>> ===========================
>>>>>>>>>>  <?xml version="1.0"?>
>>>>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>>>>
>>>>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>>>>
>>>>>>>>>> <configuration>
>>>>>>>>>> <property>
>>>>>>>>>>     <name>fs.default.name</name>
>>>>>>>>>>     <value>-BLANKED</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>>>>   </property>
>>>>>>>>>> </configuration>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by Fatih Haltas <fa...@nyu.edu>.
OK, thanks to you all. I just removed the version information of all datanodes
and namenodes, then restarted, and it is working fine now.
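For anyone hitting the same symptom: the fix above is consistent with a namespaceID mismatch between the NameNode and DataNode storage directories, which Hadoop 1.x records in each storage directory's `current/VERSION` file. As a hedged sketch (the paths in the comments are illustrative, not the poster's actual layout), you can compare the IDs before deleting anything:

```shell
#!/bin/sh
# Print the namespaceID recorded in a Hadoop 1.x storage directory's
# VERSION file. VERSION is a Java properties file containing lines
# such as "namespaceID=123456". Paths in the usage comments below are
# examples only.
namespace_id() {
    grep '^namespaceID=' "$1" | cut -d= -f2
}

# Example usage against hypothetical storage directories:
#   nn_id=$(namespace_id /home/hadoop/project/hadoop-data/dfs/name/current/VERSION)
#   dn_id=$(namespace_id /home/hadoop/project/hadoop-data/dfs/data/current/VERSION)
#   [ "$nn_id" = "$dn_id" ] || echo "namespaceID mismatch: $nn_id vs $dn_id"
```

If the IDs differ, the safer order is to reconcile them (or back up the directories) before removing any VERSION information, so the DataNode blocks remain usable.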


On Mon, Mar 24, 2014 at 5:52 PM, praveenesh kumar <pr...@gmail.com>wrote:

> Can you also make sure your hostname and IP address are still mapped
> correctly. Because what I am guessing is when you restart your machine,
> your /etc/hosts entries might get restored (it happens in some
> distributions, based on how you installed it). So when you are trying to
> restart your namenode, it might be pointing to some different IP/machine
> (in general localhost).
>
> I can't think of any reason how  it can happen just by restarting the
> machine.
>
>
> On Mon, Mar 24, 2014 at 5:42 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> Can you confirm that you namenode image and fseditlog are still there? if
>> not, then your data IS lost.
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Sun, Mar 23, 2014 at 6:24 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>>
>>> No, not ofcourse I blinded it.
>>>
>>>
>>> On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar <pr...@gmail.com>wrote:
>>>
>>>> Is this property correct ?
>>>>
>>>>
>>>> <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>-BLANKED</value>
>>>>   </property>
>>>>
>>>> Regards
>>>> Prav
>>>>
>>>>
>>>> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>>>>
>>>>> Thanks for you helps, but still could not solve my problem.
>>>>>
>>>>>
>>>>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>>
>>>>>> Ah yes, I overlooked this. Then please check the file are there or
>>>>>> not: "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>>>>
>>>>>> Regards,
>>>>>> *Stanley Shi,*
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com>wrote:
>>>>>>
>>>>>>> I don't think this is the case, because there is;
>>>>>>>   <property>
>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>   </property>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>>>>
>>>>>>>> one possible reason is that you didn't set the namenode working
>>>>>>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>>>>>>> get deleted by the OS without any notification. If this is the case, I am
>>>>>>>> afraid you have lost all your namenode data.
>>>>>>>>
>>>>>>>> *<property>
>>>>>>>>   <name>dfs.name.dir</name>
>>>>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>>>>>       of directories then the name table is replicated in all of the
>>>>>>>>       directories, for redundancy. </description>
>>>>>>>> </property>*
>>>>>>>>
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> *Stanley Shi,*
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <
>>>>>>>> mirko.kaempf@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> what is the location of the namenodes fsimage and editlogs?
>>>>>>>>> And how much memory has the NameNode.
>>>>>>>>>
>>>>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>>>>> checkpointing?
>>>>>>>>>
>>>>>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>>>>>
>>>>>>>>> With this information at hand, one might be able to fix your
>>>>>>>>> setup, but do not format the old namenode before
>>>>>>>>> all is working with a fresh one.
>>>>>>>>>
>>>>>>>>> Grab a copy of the maintainance guide:
>>>>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>>>>> which helps solving such type of problems as well.
>>>>>>>>>
>>>>>>>>> Best wishes
>>>>>>>>> Mirko
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>>>>>
>>>>>>>>> Dear All,
>>>>>>>>>>
>>>>>>>>>> I have just restarted machines of my hadoop clusters. Now, I am
>>>>>>>>>> trying to restart hadoop clusters again, but getting error on namenode
>>>>>>>>>> restart. I am afraid of loosing my data as it was properly running for more
>>>>>>>>>> than 3 months. Currently, I believe if I do namenode formatting, it will
>>>>>>>>>> work again, however, data will be lost. Is there anyway to solve this
>>>>>>>>>> without losing the data.
>>>>>>>>>>
>>>>>>>>>> I will really appreciate any help.
>>>>>>>>>>
>>>>>>>>>> Thanks.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> =====================
>>>>>>>>>> Here is the logs;
>>>>>>>>>> ====================
>>>>>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>>>>> /************************************************************
>>>>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>>>>> STARTUP_MSG:   args = []
>>>>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>>>>> STARTUP_MSG:   build =
>>>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0-r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>>>>> ************************************************************/
>>>>>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>>>>>> hadoop-metrics2.properties
>>>>>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>>>> MetricsSystem,sub=Stats registered.
>>>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>>>>>> period at 10 second(s).
>>>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>>>>> started
>>>>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>>>>> registered.
>>>>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>>>>> registered.
>>>>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>>>> NameNode registered.
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>>>>> type       = 64-bit
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>>>>> max memory = 17.77875 MB
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>>> recommended=2097152, actual=2097152
>>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>>> isPermissionEnabled=true
>>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>>> dfs.block.invalidate.limit=100
>>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>>>>> occuring more than 10 times
>>>>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>>>>> initialization failed.
>>>>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>>>>> NameNode is not formatted.
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>>         at
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>>>
>>>>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>>>>> /************************************************************
>>>>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>>>>> ************************************************************/
>>>>>>>>>>
>>>>>>>>>> ===========================
>>>>>>>>>> Here is the core-site.xml
>>>>>>>>>> ===========================
>>>>>>>>>>  <?xml version="1.0"?>
>>>>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>>>>
>>>>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>>>>
>>>>>>>>>> <configuration>
>>>>>>>>>> <property>
>>>>>>>>>>     <name>fs.default.name</name>
>>>>>>>>>>     <value>-BLANKED</value>
>>>>>>>>>>   </property>
>>>>>>>>>>   <property>
>>>>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>>>>   </property>
>>>>>>>>>> </configuration>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by praveenesh kumar <pr...@gmail.com>.
Can you also make sure your hostname and IP address are still mapped
correctly? What I am guessing is that when you restarted your machine, your
/etc/hosts entries were restored (this happens in some distributions,
depending on how you installed it). So when you try to restart your namenode,
it might be pointing to a different IP/machine (typically localhost).

Other than that, I can't think of any reason this could happen just by
restarting the machine.
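A quick sketch of that check (the hostname and hosts-file path here are placeholders, not the poster's actual values): verify that the name the NameNode binds to still resolves to the address you expect, and that it matches the host in fs.default.name:

```shell
#!/bin/sh
# Look up a hostname in a hosts-format file and print the mapped IP,
# if any. The hostname and file path in the usage comments are
# illustrative placeholders.
resolve_in_hosts() {
    # $1 = hostname, $2 = hosts file
    awk -v h="$1" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == h) print $1 }' "$2"
}

# Example usage on a live system:
#   hostname -f                                   # name the NameNode will bind
#   resolve_in_hosts "$(hostname -f)" /etc/hosts  # what /etc/hosts says
#   getent hosts "$(hostname -f)"                 # what the resolver returns
```

If the mapping has silently changed back to 127.0.0.1 (as in the "ADUAE042-LAP-V/127.0.0.1" line of the startup log), the NameNode may be starting against a different storage location than before.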


On Mon, Mar 24, 2014 at 5:42 AM, Stanley Shi <ss...@gopivotal.com> wrote:

> Can you confirm that you namenode image and fseditlog are still there? if
> not, then your data IS lost.
>
> Regards,
> *Stanley Shi,*
>
>
>
> On Sun, Mar 23, 2014 at 6:24 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>
>> No, not ofcourse I blinded it.
>>
>>
>> On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar <pr...@gmail.com>wrote:
>>
>>> Is this property correct ?
>>>
>>>
>>> <property>
>>>     <name>fs.default.name</name>
>>>     <value>-BLANKED</value>
>>>   </property>
>>>
>>> Regards
>>> Prav
>>>
>>>
>>> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>>>
>>>> Thanks for you helps, but still could not solve my problem.
>>>>
>>>>
>>>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>
>>>>> Ah yes, I overlooked this. Then please check the file are there or
>>>>> not: "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>>>
>>>>> Regards,
>>>>> *Stanley Shi,*
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>>>
>>>>>> I don't think this is the case, because there is;
>>>>>>   <property>
>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>   </property>
>>>>>>
>>>>>>
>>>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>>>
>>>>>>> one possible reason is that you didn't set the namenode working
>>>>>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>>>>>> get deleted by the OS without any notification. If this is the case, I am
>>>>>>> afraid you have lost all your namenode data.
>>>>>>>
>>>>>>> *<property>
>>>>>>>   <name>dfs.name.dir</name>
>>>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>>>>       of directories then the name table is replicated in all of the
>>>>>>>       directories, for redundancy. </description>
>>>>>>> </property>*
>>>>>>>
>>>>>>>
>>>>>>> Regards,
>>>>>>> *Stanley Shi,*
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mirko.kaempf@gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> what is the location of the namenodes fsimage and editlogs?
>>>>>>>> And how much memory has the NameNode.
>>>>>>>>
>>>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>>>> checkpointing?
>>>>>>>>
>>>>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>>>>
>>>>>>>> With this information at hand, one might be able to fix your setup,
>>>>>>>> but do not format the old namenode before
>>>>>>>> all is working with a fresh one.
>>>>>>>>
>>>>>>>> Grab a copy of the maintainance guide:
>>>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>>>> which helps solving such type of problems as well.
>>>>>>>>
>>>>>>>> Best wishes
>>>>>>>> Mirko
>>>>>>>>
>>>>>>>>
>>>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>>>>
>>>>>>>> Dear All,
>>>>>>>>>
>>>>>>>>> I have just restarted machines of my hadoop clusters. Now, I am
>>>>>>>>> trying to restart hadoop clusters again, but getting error on namenode
>>>>>>>>> restart. I am afraid of loosing my data as it was properly running for more
>>>>>>>>> than 3 months. Currently, I believe if I do namenode formatting, it will
>>>>>>>>> work again, however, data will be lost. Is there anyway to solve this
>>>>>>>>> without losing the data.
>>>>>>>>>
>>>>>>>>> I will really appreciate any help.
>>>>>>>>>
>>>>>>>>> Thanks.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> =====================
>>>>>>>>> Here is the logs;
>>>>>>>>> ====================
>>>>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>>>> /************************************************************
>>>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>>>> STARTUP_MSG:   args = []
>>>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>>>> STARTUP_MSG:   build =
>>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0-r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>>>> ************************************************************/
>>>>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>>>>> hadoop-metrics2.properties
>>>>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>>> MetricsSystem,sub=Stats registered.
>>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>>>>> period at 10 second(s).
>>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>>>> started
>>>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>>>> registered.
>>>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>>>> registered.
>>>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>>> NameNode registered.
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>>>> type       = 64-bit
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>>>> max memory = 17.77875 MB
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>> recommended=2097152, actual=2097152
>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>> isPermissionEnabled=true
>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>> dfs.block.invalidate.limit=100
>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>>>> occuring more than 10 times
>>>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>>>> initialization failed.
>>>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>>>> NameNode is not formatted.
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>>
>>>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>>>> /************************************************************
>>>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>>>> ************************************************************/
>>>>>>>>>
>>>>>>>>> ===========================
>>>>>>>>> Here is the core-site.xml
>>>>>>>>> ===========================
>>>>>>>>>  <?xml version="1.0"?>
>>>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>>>
>>>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>>>
>>>>>>>>> <configuration>
>>>>>>>>> <property>
>>>>>>>>>     <name>fs.default.name</name>
>>>>>>>>>     <value>-BLANKED</value>
>>>>>>>>>   </property>
>>>>>>>>>   <property>
>>>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>>>   </property>
>>>>>>>>> </configuration>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by praveenesh kumar <pr...@gmail.com>.
Can you also make sure your hostname and IP address are still mapped
correctly. Because what I am guessing is when you restart your machine,
your /etc/hosts entries might get restored (it happens in some
distributions, based on how you installed it). So when you are trying to
restart your namenode, it might be pointing to some different IP/machine
(in general localhost).

I can't think of any reason how  it can happen just by restarting the
machine.


On Mon, Mar 24, 2014 at 5:42 AM, Stanley Shi <ss...@gopivotal.com> wrote:

> Can you confirm that you namenode image and fseditlog are still there? if
> not, then your data IS lost.
>
> Regards,
> *Stanley Shi,*
>
>
>
> On Sun, Mar 23, 2014 at 6:24 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>
>> No, not ofcourse I blinded it.
>>
>>
>> On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar <pr...@gmail.com>wrote:
>>
>>> Is this property correct ?
>>>
>>>
>>> <property>
>>>     <name>fs.default.name</name>
>>>     <value>-BLANKED</value>
>>>   </property>
>>>
>>> Regards
>>> Prav
>>>
>>>
>>> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>>>
>>>> Thanks for you helps, but still could not solve my problem.
>>>>
>>>>
>>>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>
>>>>> Ah yes, I overlooked this. Then please check the file are there or
>>>>> not: "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>>>
>>>>> Regards,
>>>>> *Stanley Shi,*
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>>>
>>>>>> I don't think this is the case, because there is;
>>>>>>   <property>
>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>   </property>
>>>>>>
>>>>>>
>>>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>>>
>>>>>>> one possible reason is that you didn't set the namenode working
>>>>>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>>>>>> get deleted by the OS without any notification. If this is the case, I am
>>>>>>> afraid you have lost all your namenode data.
>>>>>>>
>>>>>>> *<property>
>>>>>>>   <name>dfs.name.dir</name>
>>>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>>>>       of directories then the name table is replicated in all of the
>>>>>>>       directories, for redundancy. </description>
>>>>>>> </property>*
>>>>>>>
>>>>>>>
>>>>>>> Regards,
>>>>>>> *Stanley Shi,*
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mirko.kaempf@gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> what is the location of the namenode's fsimage and edit logs?
>>>>>>>> And how much memory does the NameNode have?
>>>>>>>>
>>>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>>>> checkpointing?
>>>>>>>>
>>>>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>>>>
>>>>>>>> With this information at hand, one might be able to fix your setup,
>>>>>>>> but do not format the old namenode before
>>>>>>>> all is working with a fresh one.
>>>>>>>>
>>>>>>>> Grab a copy of the maintenance guide:
>>>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>>>> which helps with solving this type of problem as well.
>>>>>>>>
>>>>>>>> Best wishes
>>>>>>>> Mirko
>>>>>>>>
>>>>>>>>
>>>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>>>>
>>>>>>>> Dear All,
>>>>>>>>>
>>>>>>>>> I have just restarted the machines of my hadoop clusters. Now, I am
>>>>>>>>> trying to restart the hadoop clusters again, but I am getting an error
>>>>>>>>> on namenode restart. I am afraid of losing my data, as it was running
>>>>>>>>> properly for more than 3 months. Currently, I believe that if I format
>>>>>>>>> the namenode, it will work again; however, the data will be lost. Is
>>>>>>>>> there any way to solve this without losing the data?
>>>>>>>>>
>>>>>>>>> I will really appreciate any help.
>>>>>>>>>
>>>>>>>>> Thanks.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> =====================
>>>>>>>>> Here is the logs;
>>>>>>>>> ====================
>>>>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>>>> /************************************************************
>>>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>>>> STARTUP_MSG:   args = []
>>>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>>>> STARTUP_MSG:   build =
>>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>>>> ************************************************************/
>>>>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>>>>> hadoop-metrics2.properties
>>>>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>>> MetricsSystem,sub=Stats registered.
>>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>>>>> period at 10 second(s).
>>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>>>> started
>>>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>>>> registered.
>>>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>>>> registered.
>>>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>>> NameNode registered.
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>>>> type       = 64-bit
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>>>> max memory = 17.77875 MB
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>> recommended=2097152, actual=2097152
>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>> isPermissionEnabled=true
>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>> dfs.block.invalidate.limit=100
>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>>>> occuring more than 10 times
>>>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>>>> initialization failed.
>>>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>>>> NameNode is not formatted.
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>>
>>>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>>>> /************************************************************
>>>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>>>> ************************************************************/
>>>>>>>>>
>>>>>>>>> ===========================
>>>>>>>>> Here is the core-site.xml
>>>>>>>>> ===========================
>>>>>>>>>  <?xml version="1.0"?>
>>>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>>>
>>>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>>>
>>>>>>>>> <configuration>
>>>>>>>>> <property>
>>>>>>>>>     <name>fs.default.name</name>
>>>>>>>>>     <value>-BLANKED</value>
>>>>>>>>>   </property>
>>>>>>>>>   <property>
>>>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>>>   </property>
>>>>>>>>> </configuration>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by praveenesh kumar <pr...@gmail.com>.
Can you also make sure your hostname and IP address are still mapped
correctly? What I am guessing is that when you restarted your machine,
your /etc/hosts entries got restored (this happens in some
distributions, depending on how you installed them). So when you try to
restart your namenode, it might be pointing to a different IP/machine
(in general, localhost).

I can't think of any other reason why it could happen just by restarting
the machine.

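This hostname check can be scripted. A minimal sketch, assuming a Linux host with `getent` and `awk` available (the `resolve` helper is illustrative, not from the thread):

```shell
#!/bin/sh
# Print the first address that /etc/hosts (or DNS) maps a hostname to.
# Compare the result against the address configured in fs.default.name:
# if the NameNode's hostname now resolves to a different machine (or to
# localhost), the /etc/hosts mapping was likely changed by the restart.
resolve() {
  getent hosts "$1" | awk '{print $1; exit}'
}

resolve "$(hostname)"
```

If the printed address differs from the one the cluster was configured with, restore the original /etc/hosts entry before formatting anything.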

On Mon, Mar 24, 2014 at 5:42 AM, Stanley Shi <ss...@gopivotal.com> wrote:

> Can you confirm that your namenode image and fseditlog are still there? If
> not, then your data IS lost.
>
> Regards,
> *Stanley Shi,*
>
>
>
> On Sun, Mar 23, 2014 at 6:24 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>
>> No, of course not. I blanked it.
>>
>>
>> On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar <pr...@gmail.com>wrote:
>>
>>> Is this property correct ?
>>>
>>>
>>> <property>
>>>     <name>fs.default.name</name>
>>>     <value>-BLANKED</value>
>>>   </property>
>>>
>>> Regards
>>> Prav
>>>
>>>
>>> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>>>
>>>> Thanks for your help, but I still could not solve my problem.
>>>>
>>>>
>>>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>
>>>>> Ah yes, I overlooked this. Then please check whether the files are
>>>>> there or not: "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>>>
>>>>> Regards,
>>>>> *Stanley Shi,*
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>>>
>>>>>> I don't think this is the case, because there is;
>>>>>>   <property>
>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>   </property>
>>>>>>
>>>>>>
>>>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>>>
>>>>>>> one possible reason is that you didn't set the namenode working
>>>>>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>>>>>> get deleted by the OS without any notification. If this is the case, I am
>>>>>>> afraid you have lost all your namenode data.
>>>>>>>
>>>>>>> *<property>
>>>>>>>   <name>dfs.name.dir</name>
>>>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>>>>       of directories then the name table is replicated in all of the
>>>>>>>       directories, for redundancy. </description>
>>>>>>> </property>*
>>>>>>>
>>>>>>>
>>>>>>> Regards,
>>>>>>> *Stanley Shi,*
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mirko.kaempf@gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> what is the location of the namenode's fsimage and edit logs?
>>>>>>>> And how much memory does the NameNode have?
>>>>>>>>
>>>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>>>> checkpointing?
>>>>>>>>
>>>>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>>>>
>>>>>>>> With this information at hand, one might be able to fix your setup,
>>>>>>>> but do not format the old namenode before
>>>>>>>> all is working with a fresh one.
>>>>>>>>
>>>>>>>> Grab a copy of the maintenance guide:
>>>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>>>> which helps with solving this type of problem as well.
>>>>>>>>
>>>>>>>> Best wishes
>>>>>>>> Mirko
>>>>>>>>
>>>>>>>>
>>>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>>>>
>>>>>>>> Dear All,
>>>>>>>>>
>>>>>>>>> I have just restarted the machines of my hadoop clusters. Now, I am
>>>>>>>>> trying to restart the hadoop clusters again, but I am getting an error
>>>>>>>>> on namenode restart. I am afraid of losing my data, as it was running
>>>>>>>>> properly for more than 3 months. Currently, I believe that if I format
>>>>>>>>> the namenode, it will work again; however, the data will be lost. Is
>>>>>>>>> there any way to solve this without losing the data?
>>>>>>>>>
>>>>>>>>> I will really appreciate any help.
>>>>>>>>>
>>>>>>>>> Thanks.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> =====================
>>>>>>>>> Here is the logs;
>>>>>>>>> ====================
>>>>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>>>> /************************************************************
>>>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>>>> STARTUP_MSG:   args = []
>>>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>>>> STARTUP_MSG:   build =
>>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>>>> ************************************************************/
>>>>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>>>>> hadoop-metrics2.properties
>>>>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>>> MetricsSystem,sub=Stats registered.
>>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>>>>> period at 10 second(s).
>>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>>>> started
>>>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>>>> registered.
>>>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>>>> registered.
>>>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>>> NameNode registered.
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>>>> type       = 64-bit
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>>>> max memory = 17.77875 MB
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>>> recommended=2097152, actual=2097152
>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>> isPermissionEnabled=true
>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>> dfs.block.invalidate.limit=100
>>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>>>> occuring more than 10 times
>>>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>>>> initialization failed.
>>>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>>>> NameNode is not formatted.
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>>
>>>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>>>> /************************************************************
>>>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>>>> ************************************************************/
>>>>>>>>>
>>>>>>>>> ===========================
>>>>>>>>> Here is the core-site.xml
>>>>>>>>> ===========================
>>>>>>>>>  <?xml version="1.0"?>
>>>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>>>
>>>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>>>
>>>>>>>>> <configuration>
>>>>>>>>> <property>
>>>>>>>>>     <name>fs.default.name</name>
>>>>>>>>>     <value>-BLANKED</value>
>>>>>>>>>   </property>
>>>>>>>>>   <property>
>>>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>>>   </property>
>>>>>>>>> </configuration>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by Stanley Shi <ss...@gopivotal.com>.
Can you confirm that your namenode image and fseditlog are still there? If
not, then your data IS lost.

Regards,
*Stanley Shi,*
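The check suggested here can be scripted. A minimal sketch, assuming the Hadoop 1.x on-disk layout and the hadoop.tmp.dir from the quoted core-site.xml (the helper name is illustrative; adjust the path for your cluster):

```shell
#!/bin/sh
# Report whether the Hadoop 1.x NameNode metadata (fsimage + edits) is
# present in a given dfs.name.dir. If both files exist, the namespace is
# recoverable; do NOT run "hadoop namenode -format" in that case.
check_name_dir() {
  dir="$1"
  if [ -f "$dir/current/fsimage" ] && [ -f "$dir/current/edits" ]; then
    echo "metadata present in $dir"
  else
    echo "metadata missing in $dir"
  fi
}

# hadoop.tmp.dir was /home/hadoop/project/hadoop-data in this thread,
# so the default dfs.name.dir resolves to:
check_name_dir /home/hadoop/project/hadoop-data/dfs/name
```

"metadata missing" here means either the directory was never the real dfs.name.dir (check hdfs-site.xml) or the files are gone.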



On Sun, Mar 23, 2014 at 6:24 PM, Fatih Haltas <fa...@nyu.edu> wrote:

> No, of course not. I blanked it.
>
>
> On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar <pr...@gmail.com>wrote:
>
>> Is this property correct ?
>>
>>
>> <property>
>>     <name>fs.default.name</name>
>>     <value>-BLANKED</value>
>>   </property>
>>
>> Regards
>> Prav
>>
>>
>> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>>
>>> Thanks for your help, but I still could not solve my problem.
>>>
>>>
>>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>
>>>> Ah yes, I overlooked this. Then please check whether the files are there
>>>> or not: "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>>
>>>> Regards,
>>>> *Stanley Shi,*
>>>>
>>>>
>>>>
>>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>>
>>>>> I don't think this is the case, because there is;
>>>>>   <property>
>>>>>     <name>hadoop.tmp.dir</name>
>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>   </property>
>>>>>
>>>>>
>>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>>
>>>>>> one possible reason is that you didn't set the namenode working
>>>>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>>>>> get deleted by the OS without any notification. If this is the case, I am
>>>>>> afraid you have lost all your namenode data.
>>>>>>
>>>>>> *<property>
>>>>>>   <name>dfs.name.dir</name>
>>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>>>       of directories then the name table is replicated in all of the
>>>>>>       directories, for redundancy. </description>
>>>>>> </property>*
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> *Stanley Shi,*
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> what is the location of the namenode's fsimage and edit logs?
>>>>>>> And how much memory does the NameNode have?
>>>>>>>
>>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>>> checkpointing?
>>>>>>>
>>>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>>>
>>>>>>> With this information at hand, one might be able to fix your setup,
>>>>>>> but do not format the old namenode before
>>>>>>> all is working with a fresh one.
>>>>>>>
>>>>>>> Grab a copy of the maintenance guide:
>>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>>> which helps with solving this type of problem as well.
>>>>>>>
>>>>>>> Best wishes
>>>>>>> Mirko
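[Editor's note on Mirko's checkpointing question: in Hadoop 1.x a SecondaryNameNode keeps a checkpoint of the fsimage under fs.checkpoint.dir (by default ${hadoop.tmp.dir}/dfs/namesecondary), and `hadoop namenode -importCheckpoint` can rebuild an empty name directory from it. A sketch of the check, with all paths assumed from this thread's config:]

```shell
#!/bin/sh
# Sketch (Hadoop 1.x, paths assumed): look for a surviving
# SecondaryNameNode checkpoint before even considering a format.
find_checkpoint() {
  ckpt="$1/dfs/namesecondary/current"
  if [ -e "$ckpt/fsimage" ]; then
    echo "checkpoint found: $ckpt"
    echo "next: point dfs.name.dir at an empty dir, then run: hadoop namenode -importCheckpoint"
  else
    echo "no checkpoint under $ckpt"
  fi
}

find_checkpoint /home/hadoop/project/hadoop-data
```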
>>>>>>>
>>>>>>>
>>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>>>
>>>>>>> Dear All,
>>>>>>>>
>>>>>>>> I have just restarted machines of my hadoop clusters. Now, I am
>>>>>>>> trying to restart hadoop clusters again, but getting error on namenode
>>>>>>>> restart. I am afraid of losing my data as it was properly running for more
>>>>>>>> than 3 months. Currently, I believe if I do namenode formatting, it will
>>>>>>>> work again; however, the data will be lost. Is there any way to solve this
>>>>>>>> without losing the data?
>>>>>>>>
>>>>>>>> I will really appreciate any help.
>>>>>>>>
>>>>>>>> Thanks.
>>>>>>>>
>>>>>>>>
>>>>>>>> =====================
>>>>>>>> Here is the logs;
>>>>>>>> ====================
>>>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>>> /************************************************************
>>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>>> STARTUP_MSG:   args = []
>>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>>> STARTUP_MSG:   build =
>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0-r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>>> ************************************************************/
>>>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>>>> hadoop-metrics2.properties
>>>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>> MetricsSystem,sub=Stats registered.
>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>>>> period at 10 second(s).
>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>>> started
>>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>>> registered.
>>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>>> registered.
>>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>> NameNode registered.
>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>>> type       = 64-bit
>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>>> max memory = 17.77875 MB
>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>> recommended=2097152, actual=2097152
>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>> isPermissionEnabled=true
>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>> dfs.block.invalidate.limit=100
>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>>> occuring more than 10 times
>>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>>> initialization failed.
>>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>>> NameNode is not formatted.
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>
>>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>>> /************************************************************
>>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>>> ************************************************************/
>>>>>>>>
>>>>>>>> ===========================
>>>>>>>> Here is the core-site.xml
>>>>>>>> ===========================
>>>>>>>>  <?xml version="1.0"?>
>>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>>
>>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>>
>>>>>>>> <configuration>
>>>>>>>> <property>
>>>>>>>>     <name>fs.default.name</name>
>>>>>>>>     <value>-BLANKED</value>
>>>>>>>>   </property>
>>>>>>>>   <property>
>>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>>   </property>
>>>>>>>> </configuration>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by Stanley Shi <ss...@gopivotal.com>.
Can you confirm that your namenode image and fseditlog are still there? If
not, then your data IS lost.

Regards,
*Stanley Shi,*



On Sun, Mar 23, 2014 at 6:24 PM, Fatih Haltas <fa...@nyu.edu> wrote:

> No, of course not; I blanked it out for the post.
>
>
> On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar <pr...@gmail.com>wrote:
>
>> Is this property correct ?
>>
>>
>> <property>
>>     <name>fs.default.name</name>
>>     <value>-BLANKED</value>
>>   </property>
>>
>> Regards
>> Prav
>>
>>
>> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>>
>>> Thanks for your help, but I still could not solve my problem.
>>>
>>>
>>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>
>>>> Ah yes, I overlooked this. Then please check whether the files are there or not:
>>>> "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>>
>>>> Regards,
>>>> *Stanley Shi,*
>>>>
>>>>
>>>>
>>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>>
>>>>> I don't think this is the case, because there is:
>>>>>   <property>
>>>>>     <name>hadoop.tmp.dir</name>
>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>   </property>
>>>>>
>>>>>
>>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>>
>>>>>> one possible reason is that you didn't set the namenode working
>>>>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>>>>> get deleted by the OS without any notification. If this is the case, I am
>>>>>> afraid you have lost all your namenode data.
>>>>>>
>>>>>> *<property>
>>>>>>   <name>dfs.name.dir</name>
>>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>>>       of directories then the name table is replicated in all of the
>>>>>>       directories, for redundancy. </description>
>>>>>> </property>*
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> *Stanley Shi,*
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> what is the location of the NameNode's fsimage and edit logs?
>>>>>>> And how much memory does the NameNode have?
>>>>>>>
>>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>>> checkpointing?
>>>>>>>
>>>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>>>
>>>>>>> With this information at hand, one might be able to fix your setup,
>>>>>>> but do not format the old namenode before
>>>>>>> all is working with a fresh one.
>>>>>>>
>>>>>>> Grab a copy of the maintenance guide:
>>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>>> which helps with solving this type of problem as well.
>>>>>>>
>>>>>>> Best wishes
>>>>>>> Mirko
>>>>>>>
>>>>>>>
>>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>>>
>>>>>>> Dear All,
>>>>>>>>
>>>>>>>> I have just restarted machines of my hadoop clusters. Now, I am
>>>>>>>> trying to restart hadoop clusters again, but getting error on namenode
>>>>>>>> restart. I am afraid of losing my data as it was properly running for more
>>>>>>>> than 3 months. Currently, I believe if I do namenode formatting, it will
>>>>>>>> work again; however, the data will be lost. Is there any way to solve this
>>>>>>>> without losing the data?
>>>>>>>>
>>>>>>>> I will really appreciate any help.
>>>>>>>>
>>>>>>>> Thanks.
>>>>>>>>
>>>>>>>>
>>>>>>>> =====================
>>>>>>>> Here is the logs;
>>>>>>>> ====================
>>>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>>> /************************************************************
>>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>>> STARTUP_MSG:   args = []
>>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>>> STARTUP_MSG:   build =
>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0-r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>>> ************************************************************/
>>>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>>>> hadoop-metrics2.properties
>>>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>> MetricsSystem,sub=Stats registered.
>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>>>> period at 10 second(s).
>>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>>> started
>>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>>> registered.
>>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>>> registered.
>>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>>> NameNode registered.
>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>>> type       = 64-bit
>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>>> max memory = 17.77875 MB
>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>>> recommended=2097152, actual=2097152
>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>> isPermissionEnabled=true
>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>> dfs.block.invalidate.limit=100
>>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>>> occuring more than 10 times
>>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>>> initialization failed.
>>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>>> NameNode is not formatted.
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>
>>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>>> /************************************************************
>>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>>> ************************************************************/
>>>>>>>>
>>>>>>>> ===========================
>>>>>>>> Here is the core-site.xml
>>>>>>>> ===========================
>>>>>>>>  <?xml version="1.0"?>
>>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>>
>>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>>
>>>>>>>> <configuration>
>>>>>>>> <property>
>>>>>>>>     <name>fs.default.name</name>
>>>>>>>>     <value>-BLANKED</value>
>>>>>>>>   </property>
>>>>>>>>   <property>
>>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>>   </property>
>>>>>>>> </configuration>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

>>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>>> occuring more than 10 times
>>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>>> initialization failed.
>>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>>> NameNode is not formatted.
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>>
>>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>>> /************************************************************
>>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>>> ************************************************************/
>>>>>>>>
>>>>>>>> ===========================
>>>>>>>> Here is the core-site.xml
>>>>>>>> ===========================
>>>>>>>>  <?xml version="1.0"?>
>>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>>
>>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>>
>>>>>>>> <configuration>
>>>>>>>> <property>
>>>>>>>>     <name>fs.default.name</name>
>>>>>>>>     <value>-BLANKED</value>
>>>>>>>>   </property>
>>>>>>>>   <property>
>>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>>   </property>
>>>>>>>> </configuration>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
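Given the core-site.xml above, the NameNode metadata should live under `${hadoop.tmp.dir}/dfs/name`. Before formatting anything, it is worth confirming whether those files survived. The sketch below assumes the `hadoop.tmp.dir` value from that config; if `dfs.name.dir` is set explicitly elsewhere, that path should be checked instead.

```shell
#!/bin/sh
# Sketch only: check whether NameNode metadata survived before formatting.
# NAME_DIR is an assumption derived from hadoop.tmp.dir in the core-site.xml
# above; override it if dfs.name.dir points somewhere else.
NAME_DIR="${NAME_DIR:-/home/hadoop/project/hadoop-data/dfs/name}"

if [ -f "$NAME_DIR/current/VERSION" ] && [ -f "$NAME_DIR/current/fsimage" ]; then
    STATUS="present"    # metadata is there; a format would destroy it
else
    STATUS="missing"    # nothing to recover at this path; re-check dfs.name.dir
fi
echo "NameNode metadata $STATUS under $NAME_DIR"
```

If the files are present, a "NameNode is not formatted" error usually means the NameNode is resolving its storage directory to a different (empty) location, so compare this path against the effective `dfs.name.dir` rather than formatting.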

Re: I am about to lose all my data please help

Posted by Fatih Haltas <fa...@nyu.edu>.
No, of course not; I blanked it out.


On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar <pr...@gmail.com>wrote:

> Is this property correct?
>
>
> <property>
>     <name>fs.default.name</name>
>     <value>-BLANKED</value>
>   </property>
>
> Regards
> Prav
>
>
> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>
>> Thanks for your help, but I still could not solve my problem.
>>
>>
>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>>
>>> Ah yes, I overlooked this. Then please check whether the files are there or not:
>>> "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>
>>> Regards,
>>> *Stanley Shi,*
>>>
>>>
>>>
>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>
>>>> I don't think this is the case, because there is:
>>>>   <property>
>>>>     <name>hadoop.tmp.dir</name>
>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>   </property>
>>>>
>>>>
>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>
>>>>> One possible reason is that you didn't set the NameNode working
>>>>> directory; by default it is under the "/tmp" folder, and "/tmp" might
>>>>> get cleaned up by the OS without any notification. If this is the case,
>>>>> I am afraid you have lost all your NameNode data.
>>>>>
>>>>> *<property>
>>>>>   <name>dfs.name.dir</name>
>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>>       of directories then the name table is replicated in all of the
>>>>>       directories, for redundancy. </description>
>>>>> </property>*
>>>>>
>>>>>
>>>>> Regards,
>>>>> *Stanley Shi,*
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> what is the location of the NameNode's fsimage and edit logs?
>>>>>> And how much memory does the NameNode have?
>>>>>>
>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>> checkpointing?
>>>>>>
>>>>>> Where are your HDFS blocks located, and are those still safe?
>>>>>>
>>>>>> With this information at hand, one might be able to fix your setup,
>>>>>> but do not format the old NameNode before
>>>>>> all is working with a fresh one.
>>>>>>
>>>>>> Grab a copy of the maintenance guide:
>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>> which helps with solving these types of problems as well.
>>>>>>
>>>>>> Best wishes
>>>>>> Mirko
>>>>>>
>>>>>>
>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>>
>>>>>> Dear All,
>>>>>>>
>>>>>>> I have just restarted the machines of my Hadoop cluster. Now, I am
>>>>>>> trying to restart the cluster again, but I am getting an error on NameNode
>>>>>>> restart. I am afraid of losing my data, as the cluster was running
>>>>>>> properly for more than 3 months. I believe that if I format the NameNode,
>>>>>>> it will work again, but the data will be lost. Is there any way to solve
>>>>>>> this without losing the data?
>>>>>>>
>>>>>>> I will really appreciate any help.
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>>
>>>>>>> =====================
>>>>>>> Here are the logs:
>>>>>>> ====================
>>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>> /************************************************************
>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>> STARTUP_MSG:   args = []
>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>> STARTUP_MSG:   build =
>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>> ************************************************************/
>>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>>> hadoop-metrics2.properties
>>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>> MetricsSystem,sub=Stats registered.
>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>>> period at 10 second(s).
>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>> started
>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>> registered.
>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>> registered.
>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>> NameNode registered.
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>> type       = 64-bit
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>> max memory = 17.77875 MB
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>> recommended=2097152, actual=2097152
>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> isPermissionEnabled=true
>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> dfs.block.invalidate.limit=100
>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>> occuring more than 10 times
>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>> initialization failed.
>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>> NameNode is not formatted.
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>
>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>> /************************************************************
>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>> ************************************************************/
>>>>>>>
>>>>>>> ===========================
>>>>>>> Here is the core-site.xml
>>>>>>> ===========================
>>>>>>>  <?xml version="1.0"?>
>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>
>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>
>>>>>>> <configuration>
>>>>>>> <property>
>>>>>>>     <name>fs.default.name</name>
>>>>>>>     <value>-BLANKED</value>
>>>>>>>   </property>
>>>>>>>   <property>
>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>   </property>
>>>>>>> </configuration>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
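Following Stanley Shi's point above that `dfs.name.dir` defaults to `${hadoop.tmp.dir}/dfs/name`, the safest setup pins the metadata directory explicitly in hdfs-site.xml so an OS cleanup of `/tmp` can never take it. This is a sketch, not the poster's actual config; the second path in the comma-delimited list is a hypothetical example of the redundancy described in the property's own documentation.

```xml
<!-- hdfs-site.xml (sketch): pin NameNode metadata to durable storage.
     The first path matches hadoop.tmp.dir from the thread's core-site.xml;
     the second is an illustrative redundant copy on another disk. -->
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/project/hadoop-data/dfs/name,/srv/hdfs/name-backup</value>
</property>
```

With a comma-delimited list, the NameNode writes the fsimage and edits to every listed directory, so losing one disk does not lose the namespace.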

Re: I am about to lose all my data please help

Posted by Fatih Haltas <fa...@nyu.edu>.
No, not ofcourse I blinded it.


On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar <pr...@gmail.com>wrote:

> Is this property correct ?
>
>
> <property>
>     <name>fs.default.name</name>
>     <value>-BLANKED</value>
>   </property>
>
> Regards
> Prav
>
>
> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>
>> Thanks for you helps, but still could not solve my problem.
>>
>>
>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>>
>>> Ah yes, I overlooked this. Then please check the file are there or not:
>>> "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>
>>> Regards,
>>> *Stanley Shi,*
>>>
>>>
>>>
>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>
>>>> I don't think this is the case, because there is;
>>>>   <property>
>>>>     <name>hadoop.tmp.dir</name>
>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>   </property>
>>>>
>>>>
>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>
>>>>> one possible reason is that you didn't set the namenode working
>>>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>>>> get deleted by the OS without any notification. If this is the case, I am
>>>>> afraid you have lost all your namenode data.
>>>>>
>>>>> *<property>
>>>>>   <name>dfs.name.dir</name>
>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>>       of directories then the name table is replicated in all of the
>>>>>       directories, for redundancy. </description>
>>>>> </property>*
>>>>>
>>>>>
>>>>> Regards,
>>>>> *Stanley Shi,*
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> what is the location of the namenodes fsimage and editlogs?
>>>>>> And how much memory has the NameNode.
>>>>>>
>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>> checkpointing?
>>>>>>
>>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>>
>>>>>> With this information at hand, one might be able to fix your setup,
>>>>>> but do not format the old namenode before
>>>>>> all is working with a fresh one.
>>>>>>
>>>>>> Grab a copy of the maintainance guide:
>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>> which helps solving such type of problems as well.
>>>>>>
>>>>>> Best wishes
>>>>>> Mirko
>>>>>>
>>>>>>
>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>>
>>>>>> Dear All,
>>>>>>>
>>>>>>> I have just restarted machines of my hadoop clusters. Now, I am
>>>>>>> trying to restart hadoop clusters again, but getting error on namenode
>>>>>>> restart. I am afraid of loosing my data as it was properly running for more
>>>>>>> than 3 months. Currently, I believe if I do namenode formatting, it will
>>>>>>> work again, however, data will be lost. Is there anyway to solve this
>>>>>>> without losing the data.
>>>>>>>
>>>>>>> I will really appreciate any help.
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>>
>>>>>>> =====================
>>>>>>> Here is the logs;
>>>>>>> ====================
>>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>> /************************************************************
>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>> STARTUP_MSG:   args = []
>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>> STARTUP_MSG:   build =
>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0-r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>> ************************************************************/
>>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>>> hadoop-metrics2.properties
>>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>> MetricsSystem,sub=Stats registered.
>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>>> period at 10 second(s).
>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>> started
>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>> registered.
>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>> registered.
>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>> NameNode registered.
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>> type       = 64-bit
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>> max memory = 17.77875 MB
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>> recommended=2097152, actual=2097152
>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> isPermissionEnabled=true
>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> dfs.block.invalidate.limit=100
>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>> occuring more than 10 times
>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>> initialization failed.
>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>> NameNode is not formatted.
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>
>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>> /************************************************************
>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>> ************************************************************/
>>>>>>>
>>>>>>> ===========================
>>>>>>> Here is the core-site.xml
>>>>>>> ===========================
>>>>>>>  <?xml version="1.0"?>
>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>
>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>
>>>>>>> <configuration>
>>>>>>> <property>
>>>>>>>     <name>fs.default.name</name>
>>>>>>>     <value>-BLANKED</value>
>>>>>>>   </property>
>>>>>>>   <property>
>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>   </property>
>>>>>>> </configuration>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by Fatih Haltas <fa...@nyu.edu>.
No, not ofcourse I blinded it.


On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar <pr...@gmail.com>wrote:

> Is this property correct ?
>
>
> <property>
>     <name>fs.default.name</name>
>     <value>-BLANKED</value>
>   </property>
>
> Regards
> Prav
>
>
> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>
>> Thanks for you helps, but still could not solve my problem.
>>
>>
>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>>
>>> Ah yes, I overlooked this. Then please check the file are there or not:
>>> "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>
>>> Regards,
>>> *Stanley Shi,*
>>>
>>>
>>>
>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>
>>>> I don't think this is the case, because there is;
>>>>   <property>
>>>>     <name>hadoop.tmp.dir</name>
>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>   </property>
>>>>
>>>>
>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>
>>>>> one possible reason is that you didn't set the namenode working
>>>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>>>> get deleted by the OS without any notification. If this is the case, I am
>>>>> afraid you have lost all your namenode data.
>>>>>
>>>>> *<property>
>>>>>   <name>dfs.name.dir</name>
>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>>       of directories then the name table is replicated in all of the
>>>>>       directories, for redundancy. </description>
>>>>> </property>*
>>>>>
>>>>>
>>>>> Regards,
>>>>> *Stanley Shi,*
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> what is the location of the namenodes fsimage and editlogs?
>>>>>> And how much memory has the NameNode.
>>>>>>
>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>> checkpointing?
>>>>>>
>>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>>
>>>>>> With this information at hand, one might be able to fix your setup,
>>>>>> but do not format the old namenode before
>>>>>> all is working with a fresh one.
>>>>>>
>>>>>> Grab a copy of the maintainance guide:
>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>> which helps solving such type of problems as well.
>>>>>>
>>>>>> Best wishes
>>>>>> Mirko
>>>>>>
>>>>>>
>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>>
>>>>>> Dear All,
>>>>>>>
>>>>>>> I have just restarted machines of my hadoop clusters. Now, I am
>>>>>>> trying to restart hadoop clusters again, but getting error on namenode
>>>>>>> restart. I am afraid of loosing my data as it was properly running for more
>>>>>>> than 3 months. Currently, I believe if I do namenode formatting, it will
>>>>>>> work again, however, data will be lost. Is there anyway to solve this
>>>>>>> without losing the data.
>>>>>>>
>>>>>>> I will really appreciate any help.
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>>
>>>>>>> =====================
>>>>>>> Here is the logs;
>>>>>>> ====================
>>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>> /************************************************************
>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>> STARTUP_MSG:   args = []
>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>> STARTUP_MSG:   build =
>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0-r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>> ************************************************************/
>>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>>> hadoop-metrics2.properties
>>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>> MetricsSystem,sub=Stats registered.
>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>>> period at 10 second(s).
>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>> started
>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>> registered.
>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>> registered.
>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>> NameNode registered.
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>> type       = 64-bit
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>> max memory = 17.77875 MB
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>> recommended=2097152, actual=2097152
>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> isPermissionEnabled=true
>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> dfs.block.invalidate.limit=100
>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>> occuring more than 10 times
>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>> initialization failed.
>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>> NameNode is not formatted.
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>
>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>> /************************************************************
>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>> ************************************************************/
>>>>>>>
>>>>>>> ===========================
>>>>>>> Here is the core-site.xml
>>>>>>> ===========================
>>>>>>>  <?xml version="1.0"?>
>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>
>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>
>>>>>>> <configuration>
>>>>>>> <property>
>>>>>>>     <name>fs.default.name</name>
>>>>>>>     <value>-BLANKED</value>
>>>>>>>   </property>
>>>>>>>   <property>
>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>   </property>
>>>>>>> </configuration>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by Fatih Haltas <fa...@nyu.edu>.
No, of course not. I blanked it.
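Before formatting anything, it is worth confirming whether the NameNode metadata still exists on disk. A minimal sketch, assuming hadoop.tmp.dir=/home/hadoop/project/hadoop-data as in the quoted core-site.xml (the helper name is illustrative, and the path must be adjusted if dfs.name.dir was overridden):

```shell
# check_name_dir: report whether NameNode metadata exists under a name directory.
# The VERSION file lives in <name dir>/current alongside fsimage and edits.
check_name_dir() {
    local name_dir="$1"
    if [ -f "$name_dir/current/VERSION" ]; then
        echo "metadata present"
    else
        echo "metadata missing"
    fi
}

# Path taken from the quoted core-site.xml (hadoop.tmp.dir + dfs/name default).
check_name_dir /home/hadoop/project/hadoop-data/dfs/name
```

If the VERSION file and fsimage are still there, back the whole directory up (e.g. with tar) before running any further commands; if they are gone, the namespace may only be recoverable from a SecondaryNameNode checkpoint.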


On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar <pr...@gmail.com>wrote:

> Is this property correct?
>
>
> <property>
>     <name>fs.default.name</name>
>     <value>-BLANKED</value>
>   </property>
>
> Regards
> Prav
>
>
> On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu>wrote:
>
>> Thanks for your help, but I still could not solve my problem.
>>
>>
>> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>>
>>> Ah yes, I overlooked this. Then please check whether the files are there or not:
>>> "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>>
>>> Regards,
>>> *Stanley Shi,*
>>>
>>>
>>>
>>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>
>>>> I don't think this is the case, because there is:
>>>>   <property>
>>>>     <name>hadoop.tmp.dir</name>
>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>   </property>
>>>>
>>>>
>>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com>wrote:
>>>>
>>>>> one possible reason is that you didn't set the namenode working
>>>>> directory; by default it is in the "/tmp" folder, and "/tmp" might get
>>>>> deleted by the OS without any notification. If this is the case, I am
>>>>> afraid you have lost all your namenode data.
>>>>>
>>>>> *<property>
>>>>>   <name>dfs.name.dir</name>
>>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>>       of directories then the name table is replicated in all of the
>>>>>       directories, for redundancy. </description>
>>>>> </property>*
>>>>>
>>>>>
>>>>> Regards,
>>>>> *Stanley Shi,*
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> what is the location of the NameNode's fsimage and edit logs?
>>>>>> And how much memory does the NameNode have?
>>>>>>
>>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>>> checkpointing?
>>>>>>
>>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>>
>>>>>> With this information at hand, one might be able to fix your setup,
>>>>>> but do not format the old namenode before everything is working with a
>>>>>> fresh one.
>>>>>>
>>>>>> Grab a copy of the maintenance guide:
>>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>>> which helps with solving this type of problem as well.
>>>>>>
>>>>>> Best wishes
>>>>>> Mirko
>>>>>>
>>>>>>
>>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>>
>>>>>> Dear All,
>>>>>>>
>>>>>>> I have just restarted the machines of my Hadoop cluster. Now, I am
>>>>>>> trying to restart the Hadoop cluster again, but I am getting an error
>>>>>>> on namenode restart. I am afraid of losing my data, as the cluster had
>>>>>>> been running properly for more than 3 months. Currently, I believe
>>>>>>> that if I format the namenode, it will work again; however, the data
>>>>>>> will be lost. Is there any way to solve this without losing the data?
>>>>>>>
>>>>>>> I will really appreciate any help.
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>>
>>>>>>> =====================
>>>>>>> Here is the logs;
>>>>>>> ====================
>>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>>> /************************************************************
>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>>> STARTUP_MSG:   args = []
>>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>>> STARTUP_MSG:   build =
>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0-r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>>> ************************************************************/
>>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>>> hadoop-metrics2.properties
>>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>> MetricsSystem,sub=Stats registered.
>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>>> period at 10 second(s).
>>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>>> started
>>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>>> registered.
>>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>>> registered.
>>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>>> NameNode registered.
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>>> type       = 64-bit
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2%
>>>>>>> max memory = 17.77875 MB
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>>> recommended=2097152, actual=2097152
>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> isPermissionEnabled=true
>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> dfs.block.invalidate.limit=100
>>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>>> accessTokenLifetime=0 min(s)
>>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>>> occuring more than 10 times
>>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>>> initialization failed.
>>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>>> NameNode is not formatted.
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>>         at
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>>
>>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>>> /************************************************************
>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>>> ************************************************************/
>>>>>>>
>>>>>>> ===========================
>>>>>>> Here is the core-site.xml
>>>>>>> ===========================
>>>>>>>  <?xml version="1.0"?>
>>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>>
>>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>>
>>>>>>> <configuration>
>>>>>>> <property>
>>>>>>>     <name>fs.default.name</name>
>>>>>>>     <value>-BLANKED</value>
>>>>>>>   </property>
>>>>>>>   <property>
>>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>>   </property>
>>>>>>> </configuration>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by praveenesh kumar <pr...@gmail.com>.
Is this property correct?

<property>
    <name>fs.default.name</name>
    <value>-BLANKED</value>
  </property>
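For comparison, fs.default.name normally carries the NameNode URI rather than a blank. A typical entry looks like the sketch below; the hostname and port are placeholders, and the real value must match whatever address the cluster was originally formatted and run with:

```xml
<property>
    <name>fs.default.name</name>
    <!-- placeholder address: use the actual NameNode host and RPC port -->
    <value>hdfs://namenode-host:9000</value>
</property>
```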

Regards
Prav


On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu> wrote:

> Thanks for your help, but I still could not solve my problem.
>
>
> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> Ah yes, I overlooked this. Then please check whether the files are there or not:
>> "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>
>>> I don't think this is the case, because there is:
>>>   <property>
>>>     <name>hadoop.tmp.dir</name>
>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>   </property>
>>>
>>>
>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:
>>>
>>>> one possible reason is that you didn't set the namenode working
>>>> directory; by default it is in the "/tmp" folder, and "/tmp" might get
>>>> deleted by the OS without any notification. If this is the case, I am
>>>> afraid you have lost all your namenode data.
>>>>
>>>> *<property>
>>>>   <name>dfs.name.dir</name>
>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>       of directories then the name table is replicated in all of the
>>>>       directories, for redundancy. </description>
>>>> </property>*
>>>>
>>>>
>>>> Regards,
>>>> *Stanley Shi,*
>>>>
>>>>
>>>>
>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> what is the location of the NameNode's fsimage and edit logs?
>>>>> And how much memory does the NameNode have?
>>>>>
>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>> checkpointing?
>>>>>
>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>
>>>>> With this information at hand, one might be able to fix your setup,
>>>>> but do not format the old namenode before everything is working with a
>>>>> fresh one.
>>>>>
>>>>> Grab a copy of the maintenance guide:
>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>> which helps with solving this type of problem as well.
>>>>>
>>>>> Best wishes
>>>>> Mirko
>>>>>
>>>>>
>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>
>>>>> Dear All,
>>>>>>
>>>>>> I have just restarted the machines of my Hadoop cluster. Now, I am
>>>>>> trying to restart the Hadoop cluster again, but I am getting an error
>>>>>> on namenode restart. I am afraid of losing my data, as the cluster had
>>>>>> been running properly for more than 3 months. Currently, I believe
>>>>>> that if I format the namenode, it will work again; however, the data
>>>>>> will be lost. Is there any way to solve this without losing the data?
>>>>>>
>>>>>> I will really appreciate any help.
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>>
>>>>>> =====================
>>>>>> Here is the logs;
>>>>>> ====================
>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>> /************************************************************
>>>>>> STARTUP_MSG: Starting NameNode
>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>> STARTUP_MSG:   args = []
>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>> STARTUP_MSG:   build =
>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0-r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>> ************************************************************/
>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>> hadoop-metrics2.properties
>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>> MetricsSystem,sub=Stats registered.
>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>> period at 10 second(s).
>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>> started
>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>> registered.
>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>> registered.
>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>> NameNode registered.
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>> type       = 64-bit
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>>>>>> memory = 17.77875 MB
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>> recommended=2097152, actual=2097152
>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> isPermissionEnabled=true
>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> dfs.block.invalidate.limit=100
>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>> accessTokenLifetime=0 min(s)
>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>> occuring more than 10 times
>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>> initialization failed.
>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>> NameNode is not formatted.
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>
>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>> /************************************************************
>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>> ************************************************************/
>>>>>>
>>>>>> ===========================
>>>>>> Here is the core-site.xml
>>>>>> ===========================
>>>>>>  <?xml version="1.0"?>
>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>
>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>
>>>>>> <configuration>
>>>>>> <property>
>>>>>>     <name>fs.default.name</name>
>>>>>>     <value>-BLANKED</value>
>>>>>>   </property>
>>>>>>   <property>
>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>   </property>
>>>>>> </configuration>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by praveenesh kumar <pr...@gmail.com>.
Is this property correct ?

<property>
    <name>fs.default.name</name>
    <value>-BLANKED</value>
  </property>

Regards
Prav


On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu> wrote:

> Thanks for you helps, but still could not solve my problem.
>
>
> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> Ah yes, I overlooked this. Then please check the file are there or not:
>> "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>
>>> I don't think this is the case, because there is;
>>>   <property>
>>>     <name>hadoop.tmp.dir</name>
>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>   </property>
>>>
>>>
>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:
>>>
>>>> one possible reason is that you didn't set the namenode working
>>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>>> get deleted by the OS without any notification. If this is the case, I am
>>>> afraid you have lost all your namenode data.
>>>>
>>>> *<property>
>>>>   <name>dfs.name.dir</name>
>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>       of directories then the name table is replicated in all of the
>>>>       directories, for redundancy. </description>
>>>> </property>*
>>>>
>>>>
>>>> Regards,
>>>> *Stanley Shi,*
>>>>
>>>>
>>>>
>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> what is the location of the namenodes fsimage and editlogs?
>>>>> And how much memory has the NameNode.
>>>>>
>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>> checkpointing?
>>>>>
>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>
>>>>> With this information at hand, one might be able to fix your setup,
>>>>> but do not format the old namenode before
>>>>> all is working with a fresh one.
>>>>>
>>>>> Grab a copy of the maintainance guide:
>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>> which helps solving such type of problems as well.
>>>>>
>>>>> Best wishes
>>>>> Mirko
>>>>>
>>>>>
>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>
>>>>> Dear All,
>>>>>>
>>>>>> I have just restarted machines of my hadoop clusters. Now, I am
>>>>>> trying to restart hadoop clusters again, but getting error on namenode
>>>>>> restart. I am afraid of loosing my data as it was properly running for more
>>>>>> than 3 months. Currently, I believe if I do namenode formatting, it will
>>>>>> work again, however, data will be lost. Is there anyway to solve this
>>>>>> without losing the data.
>>>>>>
>>>>>> I will really appreciate any help.
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>>
>>>>>> =====================
>>>>>> Here is the logs;
>>>>>> ====================
>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>> /************************************************************
>>>>>> STARTUP_MSG: Starting NameNode
>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>> STARTUP_MSG:   args = []
>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>> STARTUP_MSG:   build =
>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0-r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>> ************************************************************/
>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>> hadoop-metrics2.properties
>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>> MetricsSystem,sub=Stats registered.
>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>> period at 10 second(s).
>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>> started
>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>> registered.
>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>> registered.
>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>> NameNode registered.
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>> type       = 64-bit
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>>>>>> memory = 17.77875 MB
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>> recommended=2097152, actual=2097152
>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> isPermissionEnabled=true
>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> dfs.block.invalidate.limit=100
>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>> accessTokenLifetime=0 min(s)
>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>> occuring more than 10 times
>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>> initialization failed.
>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>> NameNode is not formatted.
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>
>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>> /************************************************************
>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>> ************************************************************/
>>>>>>
>>>>>> ===========================
>>>>>> Here is the core-site.xml
>>>>>> ===========================
>>>>>>  <?xml version="1.0"?>
>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>
>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>
>>>>>> <configuration>
>>>>>> <property>
>>>>>>     <name>fs.default.name</name>
>>>>>>     <value>-BLANKED</value>
>>>>>>   </property>
>>>>>>   <property>
>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>   </property>
>>>>>> </configuration>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by praveenesh kumar <pr...@gmail.com>.
Is this property correct ?

<property>
    <name>fs.default.name</name>
    <value>-BLANKED</value>
  </property>

Regards
Prav


On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu> wrote:

> Thanks for you helps, but still could not solve my problem.
>
>
> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> Ah yes, I overlooked this. Then please check the file are there or not:
>> "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>
>>> I don't think this is the case, because there is;
>>>   <property>
>>>     <name>hadoop.tmp.dir</name>
>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>   </property>
>>>
>>>
>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:
>>>
>>>> one possible reason is that you didn't set the namenode working
>>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>>> get deleted by the OS without any notification. If this is the case, I am
>>>> afraid you have lost all your namenode data.
>>>>
>>>> *<property>
>>>>   <name>dfs.name.dir</name>
>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>       of directories then the name table is replicated in all of the
>>>>       directories, for redundancy. </description>
>>>> </property>*
>>>>
>>>>
>>>> Regards,
>>>> *Stanley Shi,*
>>>>
>>>>
>>>>
>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> what is the location of the namenodes fsimage and editlogs?
>>>>> And how much memory has the NameNode.
>>>>>
>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>> checkpointing?
>>>>>
>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>
>>>>> With this information at hand, one might be able to fix your setup,
>>>>> but do not format the old namenode before
>>>>> all is working with a fresh one.
>>>>>
>>>>> Grab a copy of the maintainance guide:
>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>> which helps solving such type of problems as well.
>>>>>
>>>>> Best wishes
>>>>> Mirko
>>>>>
>>>>>
>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>
>>>>> Dear All,
>>>>>>
>>>>>> I have just restarted machines of my hadoop clusters. Now, I am
>>>>>> trying to restart hadoop clusters again, but getting error on namenode
>>>>>> restart. I am afraid of loosing my data as it was properly running for more
>>>>>> than 3 months. Currently, I believe if I do namenode formatting, it will
>>>>>> work again, however, data will be lost. Is there anyway to solve this
>>>>>> without losing the data.
>>>>>>
>>>>>> I will really appreciate any help.
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>>
>>>>>> =====================
>>>>>> Here is the logs;
>>>>>> ====================
>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>> /************************************************************
>>>>>> STARTUP_MSG: Starting NameNode
>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>> STARTUP_MSG:   args = []
>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>> STARTUP_MSG:   build =
>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>> ************************************************************/
>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>> hadoop-metrics2.properties
>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>> MetricsSystem,sub=Stats registered.
>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>> period at 10 second(s).
>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>> started
>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>> registered.
>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>> registered.
>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>> NameNode registered.
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>> type       = 64-bit
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>>>>>> memory = 17.77875 MB
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>> recommended=2097152, actual=2097152
>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> isPermissionEnabled=true
>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> dfs.block.invalidate.limit=100
>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>> accessTokenLifetime=0 min(s)
>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>> occuring more than 10 times
>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>> initialization failed.
>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>> NameNode is not formatted.
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>
>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>> /************************************************************
>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>> ************************************************************/
>>>>>>
>>>>>> ===========================
>>>>>> Here is the core-site.xml
>>>>>> ===========================
>>>>>>  <?xml version="1.0"?>
>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>
>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>
>>>>>> <configuration>
>>>>>> <property>
>>>>>>     <name>fs.default.name</name>
>>>>>>     <value>-BLANKED</value>
>>>>>>   </property>
>>>>>>   <property>
>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>   </property>
>>>>>> </configuration>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by praveenesh kumar <pr...@gmail.com>.
Is this property correct?

<property>
    <name>fs.default.name</name>
    <value>-BLANKED</value>
  </property>

Regards
Prav
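
For comparison, a valid fs.default.name value points clients at the NameNode's RPC endpoint rather than being blanked out. A minimal sketch in the same core-site.xml style (the hostname and port here are placeholders, not the cluster's real values):

```xml
<property>
    <name>fs.default.name</name>
    <!-- placeholder address; substitute the actual NameNode host and RPC port -->
    <value>hdfs://namenode-host:9000</value>
</property>
```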


On Wed, Mar 19, 2014 at 12:58 PM, Fatih Haltas <fa...@nyu.edu> wrote:

> Thanks for you helps, but still could not solve my problem.
>
>
> On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> Ah yes, I overlooked this. Then please check the file are there or not:
>> "ls /home/hadoop/project/hadoop-data/dfs/name"?
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>>
>>> I don't think this is the case, because there is;
>>>   <property>
>>>     <name>hadoop.tmp.dir</name>
>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>   </property>
>>>
>>>
>>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:
>>>
>>>> one possible reason is that you didn't set the namenode working
>>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>>> get deleted by the OS without any notification. If this is the case, I am
>>>> afraid you have lost all your namenode data.
>>>>
>>>> *<property>
>>>>   <name>dfs.name.dir</name>
>>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>>   <description>Determines where on the local filesystem the DFS name node
>>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>>       of directories then the name table is replicated in all of the
>>>>       directories, for redundancy. </description>
>>>> </property>*
>>>>
>>>>
>>>> Regards,
>>>> *Stanley Shi,*
>>>>
>>>>
>>>>
>>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> what is the location of the namenodes fsimage and editlogs?
>>>>> And how much memory has the NameNode.
>>>>>
>>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>>> checkpointing?
>>>>>
>>>>> Where are your HDFS blocks located, are those still safe?
>>>>>
>>>>> With this information at hand, one might be able to fix your setup,
>>>>> but do not format the old namenode before
>>>>> all is working with a fresh one.
>>>>>
>>>>> Grab a copy of the maintainance guide:
>>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>>> which helps solving such type of problems as well.
>>>>>
>>>>> Best wishes
>>>>> Mirko
>>>>>
>>>>>
>>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>>
>>>>> Dear All,
>>>>>>
>>>>>> I have just restarted machines of my hadoop clusters. Now, I am
>>>>>> trying to restart hadoop clusters again, but getting error on namenode
>>>>>> restart. I am afraid of loosing my data as it was properly running for more
>>>>>> than 3 months. Currently, I believe if I do namenode formatting, it will
>>>>>> work again, however, data will be lost. Is there anyway to solve this
>>>>>> without losing the data.
>>>>>>
>>>>>> I will really appreciate any help.
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>>
>>>>>> =====================
>>>>>> Here is the logs;
>>>>>> ====================
>>>>>> 2014-02-26 16:02:39,698 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>>> /************************************************************
>>>>>> STARTUP_MSG: Starting NameNode
>>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>>> STARTUP_MSG:   args = []
>>>>>> STARTUP_MSG:   version = 1.0.4
>>>>>> STARTUP_MSG:   build =
>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>>> ************************************************************/
>>>>>> 2014-02-26 16:02:40,005 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>>> hadoop-metrics2.properties
>>>>>> 2014-02-26 16:02:40,019 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>> MetricsSystem,sub=Stats registered.
>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>>> period at 10 second(s).
>>>>>> 2014-02-26 16:02:40,021 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>>> started
>>>>>> 2014-02-26 16:02:40,169 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>>> registered.
>>>>>> 2014-02-26 16:02:40,193 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>>> registered.
>>>>>> 2014-02-26 16:02:40,194 INFO
>>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>>> NameNode registered.
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM
>>>>>> type       = 64-bit
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>>>>>> memory = 17.77875 MB
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>> capacity      = 2^21 = 2097152 entries
>>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>>> recommended=2097152, actual=2097152
>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>>> 2014-02-26 16:02:40,273 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>>> 2014-02-26 16:02:40,274 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> isPermissionEnabled=true
>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> dfs.block.invalidate.limit=100
>>>>>> 2014-02-26 16:02:40,279 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>>> accessTokenLifetime=0 min(s)
>>>>>> 2014-02-26 16:02:40,724 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>>> 2014-02-26 16:02:40,749 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>> occuring more than 10 times
>>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>>> initialization failed.
>>>>>> java.io.IOException: NameNode is not formatted.
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>>> NameNode is not formatted.
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>>         at
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>>
>>>>>> 2014-02-26 16:02:40,781 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>> /************************************************************
>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>>> ************************************************************/
>>>>>>
>>>>>> ===========================
>>>>>> Here is the core-site.xml
>>>>>> ===========================
>>>>>>  <?xml version="1.0"?>
>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>
>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>
>>>>>> <configuration>
>>>>>> <property>
>>>>>>     <name>fs.default.name</name>
>>>>>>     <value>-BLANKED</value>
>>>>>>   </property>
>>>>>>   <property>
>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>>   </property>
>>>>>> </configuration>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by Fatih Haltas <fa...@nyu.edu>.
Thanks for your help, but I still could not solve my problem.
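
Before doing anything destructive, it is worth confirming whether the fsimage still exists on disk and taking a copy of it, as suggested earlier in the thread. A minimal sketch (the path assumes the hadoop.tmp.dir from the quoted core-site.xml; adjust NAME_DIR for your cluster):

```shell
#!/bin/sh
# Path assumed from core-site.xml (hadoop.tmp.dir + dfs/name default); adjust as needed.
NAME_DIR=/home/hadoop/project/hadoop-data/dfs/name

if [ -d "$NAME_DIR/current" ]; then
    # A healthy name directory contains fsimage, edits, and VERSION files.
    ls -l "$NAME_DIR/current"
    # Back up the whole name directory before attempting any recovery or restart.
    tar czf "$HOME/namenode-backup-$(date +%F).tar.gz" -C "$NAME_DIR" .
else
    echo "No name directory found at $NAME_DIR - do NOT format; locate the fsimage first"
fi
```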


On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com> wrote:

> Ah yes, I overlooked this. Then please check the file are there or not:
> "ls /home/hadoop/project/hadoop-data/dfs/name"?
>
> Regards,
> *Stanley Shi,*
>
>
>
> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>
>> I don't think this is the case, because there is;
>>   <property>
>>     <name>hadoop.tmp.dir</name>
>>     <value>/home/hadoop/project/hadoop-data</value>
>>   </property>
>>
>>
>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:
>>
>>> one possible reason is that you didn't set the namenode working
>>> directory, by default it's in "/tmp" folder; and the "/tmp" folder might
>>> get deleted by the OS without any notification. If this is the case, I am
>>> afraid you have lost all your namenode data.
>>>
>>> *<property>
>>>   <name>dfs.name.dir</name>
>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>   <description>Determines where on the local filesystem the DFS name node
>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>       of directories then the name table is replicated in all of the
>>>       directories, for redundancy. </description>
>>> </property>*
>>>
>>>
>>> Regards,
>>> *Stanley Shi,*
>>>
>>>
>>>
>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>
>>>> Hi,
>>>>
>>>> what is the location of the namenodes fsimage and editlogs?
>>>> And how much memory has the NameNode.
>>>>
>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>> checkpointing?
>>>>
>>>> Where are your HDFS blocks located, are those still safe?
>>>>
>>>> With this information at hand, one might be able to fix your setup, but
>>>> do not format the old namenode before
>>>> all is working with a fresh one.
>>>>
>>>> Grab a copy of the maintainance guide:
>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>> which helps solving such type of problems as well.
>>>>
>>>> Best wishes
>>>> Mirko
>>>>
>>>>
>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>
>>>> Dear All,
>>>>>
>>>>> I have just restarted machines of my hadoop clusters. Now, I am trying
>>>>> to restart hadoop clusters again, but getting error on namenode restart. I
>>>>> am afraid of loosing my data as it was properly running for more than 3
>>>>> months. Currently, I believe if I do namenode formatting, it will work
>>>>> again, however, data will be lost. Is there anyway to solve this without
>>>>> losing the data.
>>>>>
>>>>> I will really appreciate any help.
>>>>>
>>>>> Thanks.
>>>>>
>>>>>
>>>>> =====================
>>>>> Here is the logs;
>>>>> ====================
>>>>> 2014-02-26 16:02:39,698 INFO
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>>> /************************************************************
>>>>> STARTUP_MSG: Starting NameNode
>>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>>> STARTUP_MSG:   args = []
>>>>> STARTUP_MSG:   version = 1.0.4
>>>>> STARTUP_MSG:   build =
>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>>>>> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>>> ************************************************************/
>>>>> 2014-02-26 16:02:40,005 INFO
>>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>>> hadoop-metrics2.properties
>>>>> 2014-02-26 16:02:40,019 INFO
>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>> MetricsSystem,sub=Stats registered.
>>>>> 2014-02-26 16:02:40,021 INFO
>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>>> period at 10 second(s).
>>>>> 2014-02-26 16:02:40,021 INFO
>>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>>> started
>>>>> 2014-02-26 16:02:40,169 INFO
>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>>> registered.
>>>>> 2014-02-26 16:02:40,193 INFO
>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>>> registered.
>>>>> 2014-02-26 16:02:40,194 INFO
>>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>>> NameNode registered.
>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>>>>>       = 64-bit
>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>>>>> memory = 17.77875 MB
>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>> capacity      = 2^21 = 2097152 entries
>>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>>> recommended=2097152, actual=2097152
>>>>> 2014-02-26 16:02:40,273 INFO
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>>> 2014-02-26 16:02:40,273 INFO
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>>> 2014-02-26 16:02:40,274 INFO
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>> isPermissionEnabled=true
>>>>> 2014-02-26 16:02:40,279 INFO
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>> dfs.block.invalidate.limit=100
>>>>> 2014-02-26 16:02:40,279 INFO
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>>> accessTokenLifetime=0 min(s)
>>>>> 2014-02-26 16:02:40,724 INFO
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>>> 2014-02-26 16:02:40,749 INFO
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>> occuring more than 10 times
>>>>> 2014-02-26 16:02:40,780 ERROR
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>>> initialization failed.
>>>>> java.io.IOException: NameNode is not formatted.
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>> 2014-02-26 16:02:40,781 ERROR
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>>> NameNode is not formatted.
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>>         at
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>>
>>>>> 2014-02-26 16:02:40,781 INFO
>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>> /************************************************************
>>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>>> ************************************************************/
>>>>>
>>>>> ===========================
>>>>> Here is the core-site.xml
>>>>> ===========================
>>>>>  <?xml version="1.0"?>
>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>
>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>
>>>>> <configuration>
>>>>> <property>
>>>>>     <name>fs.default.name</name>
>>>>>     <value>-BLANKED</value>
>>>>>   </property>
>>>>>   <property>
>>>>>     <name>hadoop.tmp.dir</name>
>>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>>   </property>
>>>>> </configuration>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>


Re: I am about to lose all my data please help

Posted by Fatih Haltas <fa...@nyu.edu>.
Thanks for your help, but I still could not solve my problem.


On Tue, Mar 18, 2014 at 10:13 AM, Stanley Shi <ss...@gopivotal.com> wrote:

> Ah yes, I overlooked this. Then please check whether the files are there:
> "ls /home/hadoop/project/hadoop-data/dfs/name"?
>
> Regards,
> *Stanley Shi,*
>
>
>
> On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:
>
>> I don't think this is the case, because there is:
>>   <property>
>>     <name>hadoop.tmp.dir</name>
>>     <value>/home/hadoop/project/hadoop-data</value>
>>   </property>
>>
>>
>> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:
>>
>>> One possible reason is that you didn't set the namenode working
>>> directory; by default it's in the "/tmp" folder, and "/tmp" might
>>> get cleaned by the OS without any notification. If this is the case, I am
>>> afraid you have lost all your namenode data.
>>>
>>> *<property>
>>>   <name>dfs.name.dir</name>
>>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>>   <description>Determines where on the local filesystem the DFS name node
>>>       should store the name table(fsimage).  If this is a comma-delimited list
>>>       of directories then the name table is replicated in all of the
>>>       directories, for redundancy. </description>
>>> </property>*
>>>
>>>
>>> Regards,
>>> *Stanley Shi,*
>>>
>>>
>>>
>>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>>
>>>> Hi,
>>>>
>>>> What is the location of the NameNode's fsimage and edit logs?
>>>> And how much memory does the NameNode have?
>>>>
>>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>>> checkpointing?
>>>>
>>>> Where are your HDFS blocks located, are those still safe?
>>>>
>>>> With this information at hand, one might be able to fix your setup, but
>>>> do not format the old namenode before
>>>> everything is working with a fresh one.
>>>>
>>>> Grab a copy of the maintenance guide:
>>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>>> which helps with solving this type of problem as well.
>>>>
>>>> Best wishes
>>>> Mirko
>>>>
>>>>
>>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>>
>>>> Dear All,
>>>>>
>>>>> I have just restarted the machines of my hadoop cluster. Now, I am trying
>>>>> to restart the hadoop cluster again, but I am getting an error on namenode
>>>>> restart. I am afraid of losing my data, as the cluster was running properly
>>>>> for more than 3 months. Currently, I believe that if I format the namenode,
>>>>> it will work again; however, the data will be lost. Is there any way to
>>>>> solve this without losing the data?
>>>>>
>>>>> I will really appreciate any help.
>>>>>
>>>>> Thanks.
>>>>>
>>>>>
>>>>> [startup log and core-site.xml identical to those quoted earlier in the
>>>>> thread; snipped]
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>


Re: I am about to lose all my data please help

Posted by Stanley Shi <ss...@gopivotal.com>.
Ah yes, I overlooked this. Then please check whether the files are there:
"ls /home/hadoop/project/hadoop-data/dfs/name"?
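A minimal shell sketch of such a check (the helper name and its messages are illustrative, not a Hadoop tool; it assumes the Hadoop 1.x dfs.name.dir layout, where the image lives in a current/ subdirectory):

```shell
# check_name_dir: report whether a NameNode metadata directory still
# holds its fsimage and VERSION files (Hadoop 1.x keeps them under
# ${dfs.name.dir}/current).
check_name_dir() {
    dir="$1"
    if [ -f "$dir/current/fsimage" ] && [ -f "$dir/current/VERSION" ]; then
        echo "metadata present"
    else
        echo "metadata missing"
    fi
}

# On the cluster from this thread, the directory implied by
# hadoop.tmp.dir would be checked with:
# check_name_dir /home/hadoop/project/hadoop-data/dfs/name
```

If it reports the files as missing, check whether a Secondary NameNode still has a recent checkpoint before formatting anything.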

Regards,
*Stanley Shi,*



On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:

> I don't think this is the case, because there is:
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/hadoop/project/hadoop-data</value>
>   </property>
>
>
> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> One possible reason is that you didn't set the namenode working
>> directory; by default it's in the "/tmp" folder, and "/tmp" might
>> get cleaned by the OS without any notification. If this is the case, I am
>> afraid you have lost all your namenode data.
>>
>> *<property>
>>   <name>dfs.name.dir</name>
>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>   <description>Determines where on the local filesystem the DFS name node
>>       should store the name table(fsimage).  If this is a comma-delimited list
>>       of directories then the name table is replicated in all of the
>>       directories, for redundancy. </description>
>> </property>*
>>
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>
>>> Hi,
>>>
>>> What is the location of the NameNode's fsimage and edit logs?
>>> And how much memory does the NameNode have?
>>>
>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>> checkpointing?
>>>
>>> Where are your HDFS blocks located, are those still safe?
>>>
>>> With this information at hand, one might be able to fix your setup, but
>>> do not format the old namenode before
>>> everything is working with a fresh one.
>>>
>>> Grab a copy of the maintenance guide:
>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>> which helps with solving this type of problem as well.
>>>
>>> Best wishes
>>> Mirko
>>>
>>>
>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>
>>> Dear All,
>>>>
>>>> I have just restarted the machines of my hadoop cluster. Now, I am trying
>>>> to restart the hadoop cluster again, but I am getting an error on namenode
>>>> restart. I am afraid of losing my data, as the cluster was running properly
>>>> for more than 3 months. Currently, I believe that if I format the namenode,
>>>> it will work again; however, the data will be lost. Is there any way to
>>>> solve this without losing the data?
>>>>
>>>> I will really appreciate any help.
>>>>
>>>> Thanks.
>>>>
>>>>
>>>> [startup log and core-site.xml identical to those quoted earlier in the
>>>> thread; snipped]
>>>>
>>>>
>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by Azuryy Yu <az...@gmail.com>.
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/project/hadoop-data</value>
  </property>


On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:

> I don't think this is the case, because there is:
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/hadoop/project/hadoop-data</value>
>   </property>
>
>
> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> One possible reason is that you didn't set the namenode working
>> directory; by default it's in the "/tmp" folder, and "/tmp" might
>> get cleaned by the OS without any notification. If this is the case, I am
>> afraid you have lost all your namenode data.
>>
>> *<property>
>>   <name>dfs.name.dir</name>
>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>   <description>Determines where on the local filesystem the DFS name node
>>       should store the name table(fsimage).  If this is a comma-delimited list
>>       of directories then the name table is replicated in all of the
>>       directories, for redundancy. </description>
>> </property>*
>>
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>
>>> Hi,
>>>
>>> What is the location of the NameNode's fsimage and edit logs?
>>> And how much memory does the NameNode have?
>>>
>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>> checkpointing?
>>>
>>> Where are your HDFS blocks located, are those still safe?
>>>
>>> With this information at hand, one might be able to fix your setup, but
>>> do not format the old namenode before
>>> everything is working with a fresh one.
>>>
>>> Grab a copy of the maintenance guide:
>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>> which helps with solving this type of problem as well.
>>>
>>> Best wishes
>>> Mirko
>>>
>>>
>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>
>>> Dear All,
>>>>
>>>> I have just restarted the machines of my hadoop cluster. Now, I am trying
>>>> to restart the hadoop cluster again, but I am getting an error on namenode
>>>> restart. I am afraid of losing my data, as the cluster was running properly
>>>> for more than 3 months. Currently, I believe that if I format the namenode,
>>>> it will work again; however, the data will be lost. Is there any way to
>>>> solve this without losing the data?
>>>>
>>>> I will really appreciate any help.
>>>>
>>>> Thanks.
>>>>
>>>>
>>>> =====================
>>>> Here is the logs;
>>>> ====================
>>>> 2014-02-26 16:02:39,698 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>> /************************************************************
>>>> STARTUP_MSG: Starting NameNode
>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>> STARTUP_MSG:   args = []
>>>> STARTUP_MSG:   version = 1.0.4
>>>> STARTUP_MSG:   build =
>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>>>> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>> ************************************************************/
>>>> 2014-02-26 16:02:40,005 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>> hadoop-metrics2.properties
>>>> 2014-02-26 16:02:40,019 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>> MetricsSystem,sub=Stats registered.
>>>> 2014-02-26 16:02:40,021 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>> period at 10 second(s).
>>>> 2014-02-26 16:02:40,021 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>> started
>>>> 2014-02-26 16:02:40,169 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>> registered.
>>>> 2014-02-26 16:02:40,193 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>> registered.
>>>> 2014-02-26 16:02:40,194 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>> NameNode registered.
>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>>>>       = 64-bit
>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>>>> memory = 17.77875 MB
>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity
>>>>      = 2^21 = 2097152 entries
>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>> recommended=2097152, actual=2097152
>>>> 2014-02-26 16:02:40,273 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>> 2014-02-26 16:02:40,273 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>> 2014-02-26 16:02:40,274 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>> isPermissionEnabled=true
>>>> 2014-02-26 16:02:40,279 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>> dfs.block.invalidate.limit=100
>>>> 2014-02-26 16:02:40,279 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>> accessTokenLifetime=0 min(s)
>>>> 2014-02-26 16:02:40,724 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>> 2014-02-26 16:02:40,749 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>> occuring more than 10 times
>>>> 2014-02-26 16:02:40,780 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>> initialization failed.
>>>> java.io.IOException: NameNode is not formatted.
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>> 2014-02-26 16:02:40,781 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>> NameNode is not formatted.
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>
>>>> 2014-02-26 16:02:40,781 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>> /************************************************************
>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>> ************************************************************/
>>>>
>>>> ===========================
>>>> Here is the core-site.xml
>>>> ===========================
>>>>  <?xml version="1.0"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>
>>>> <!-- Put site-specific property overrides in this file. -->
>>>>
>>>> <configuration>
>>>> <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>-BLANKED</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hadoop.tmp.dir</name>
>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>   </property>
>>>> </configuration>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by Stanley Shi <ss...@gopivotal.com>.
Ah yes, I overlooked this. Then please check whether the files are still
there: "ls /home/hadoop/project/hadoop-data/dfs/name"
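Before doing anything destructive, it is worth checking whether the fsimage survived on disk. A minimal sketch, assuming the Hadoop 1.x default layout (the image lives under ${dfs.name.dir}/current, derived here from the hadoop.tmp.dir in the quoted core-site.xml; adjust the path if dfs.name.dir was set elsewhere):

```shell
# Hedged sketch: check whether the NameNode image is still on disk before
# considering a format. The directory layout assumed here is the Hadoop 1.x
# default; NAME_DIR below is illustrative and taken from the quoted config.
NAME_DIR=${NAME_DIR:-/home/hadoop/project/hadoop-data/dfs/name}

if [ -f "$NAME_DIR/current/fsimage" ] && [ -f "$NAME_DIR/current/VERSION" ]; then
  STATUS="image-present"
  # The namespaceID in VERSION must match the one on the datanodes.
  cat "$NAME_DIR/current/VERSION"
else
  # Do NOT format yet; look for a SecondaryNameNode checkpoint or backup first.
  STATUS="image-missing"
fi
echo "$STATUS"
```

If the image is present, the namenode is most likely just pointed at the wrong directory; if it is missing, a SecondaryNameNode checkpoint directory is the next place to look.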

Regards,
*Stanley Shi,*



On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:

> I don't think this is the case, because there is;
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/hadoop/project/hadoop-data</value>
>   </property>
>
>
> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> One possible reason is that you didn't set the namenode working
>> directory; by default it's in the "/tmp" folder, and "/tmp" might
>> get cleaned out by the OS without any notification. If this is the case, I am
>> afraid you have lost all your namenode data.
>>
>> *<property>
>>   <name>dfs.name.dir</name>
>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>   <description>Determines where on the local filesystem the DFS name node
>>       should store the name table(fsimage).  If this is a comma-delimited list
>>       of directories then the name table is replicated in all of the
>>       directories, for redundancy. </description>
>> </property>*
>>
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>
>>> Hi,
>>>
>>> what is the location of the namenode's fsimage and edit logs?
>>> And how much memory does the NameNode have?
>>>
>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>> checkpointing?
>>>
>>> Where are your HDFS blocks located, and are those still safe?
>>>
>>> With this information at hand, one might be able to fix your setup, but
>>> do not format the old namenode before everything is working with a
>>> fresh one.
>>>
>>> Grab a copy of the maintenance guide:
>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>> which also helps with solving this type of problem.
>>>
>>> Best wishes
>>> Mirko
>>>
>>>
>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>
>>> Dear All,
>>>>
>>>> I have just restarted the machines of my hadoop cluster. Now, I am trying
>>>> to restart the hadoop cluster again, but I am getting an error on namenode
>>>> restart. I am afraid of losing my data, as the cluster was running properly
>>>> for more than 3 months. Currently, I believe that if I format the namenode
>>>> it will work again, but the data will be lost. Is there any way to solve
>>>> this without losing the data?
>>>>
>>>> I will really appreciate any help.
>>>>
>>>> Thanks.
>>>>
>>>>
>>>
>>
>
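The dfs.name.dir default quoted above is what makes a /tmp-based hadoop.tmp.dir dangerous. To avoid a repeat of this situation, the image location can be pinned explicitly in hdfs-site.xml; a sketch, assuming Hadoop 1.x and with illustrative paths (the second, comma-separated directory gives an on-host redundant copy of the name table):

```xml
<!-- hdfs-site.xml: pin the NameNode image to explicit, durable directories
     instead of relying on the ${hadoop.tmp.dir}/dfs/name default.
     Both paths below are illustrative, not taken from the thread. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/project/hadoop-data/dfs/name,/mnt/backup/dfs/name</value>
  </property>
</configuration>
```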


Re: I am about to lose all my data please help

Posted by Azuryy Yu <az...@gmail.com>.
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/project/hadoop-data</value>
  </property>


On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:

> I don't think this is the case, because there is;
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/hadoop/project/hadoop-data</value>
>   </property>
>
>

>>>> occuring more than 10 times
>>>> 2014-02-26 16:02:40,780 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>> initialization failed.
>>>> java.io.IOException: NameNode is not formatted.
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>> 2014-02-26 16:02:40,781 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>> NameNode is not formatted.
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>
>>>> 2014-02-26 16:02:40,781 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>> /************************************************************
>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>> ************************************************************/
>>>>
>>>> ===========================
>>>> Here is the core-site.xml
>>>> ===========================
>>>>  <?xml version="1.0"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>
>>>> <!-- Put site-specific property overrides in this file. -->
>>>>
>>>> <configuration>
>>>> <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>-BLANKED</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hadoop.tmp.dir</name>
>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>   </property>
>>>> </configuration>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by Azuryy Yu <az...@gmail.com>.
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/project/hadoop-data</value>
  </property>


On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu <az...@gmail.com> wrote:

> I don't think this is the case, because there is:
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/hadoop/project/hadoop-data</value>
>   </property>
>
>
> On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:
>
>> One possible reason is that you didn't set the namenode working
>> directory; by default it is under the "/tmp" folder, and the OS might
>> delete "/tmp" without any notification. If this is the case, I am
>> afraid you have lost all your namenode data.
>>
>> *<property>
>>   <name>dfs.name.dir</name>
>>   <value>${hadoop.tmp.dir}/dfs/name</value>
>>   <description>Determines where on the local filesystem the DFS name node
>>       should store the name table(fsimage).  If this is a comma-delimited list
>>       of directories then the name table is replicated in all of the
>>       directories, for redundancy. </description>
>> </property>*
>>
>>
>> Regards,
>> *Stanley Shi,*
>>
>>
>>
>> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>>
>>> Hi,
>>>
>>> what is the location of the namenodes fsimage and editlogs?
>>> And how much memory does the NameNode have?
>>>
>>> Did you work with a Secondary NameNode or a Standby NameNode for
>>> checkpointing?
>>>
>>> Where are your HDFS blocks located, and are those still safe?
>>>
>>> With this information at hand, one might be able to fix your setup, but
>>> do not format the old namenode before
>>> everything is working with a fresh one.
>>>
>>> Grab a copy of the maintenance guide:
>>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>>> which helps with solving this type of problem as well.
>>>
>>> Best wishes
>>> Mirko
>>>
>>>
>>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>>
>>> Dear All,
>>>>
>>>> I have just restarted the machines of my Hadoop cluster. Now, I am trying
>>>> to restart the Hadoop services again, but I am getting an error on namenode
>>>> restart. I am afraid of losing my data, as the cluster was running properly
>>>> for more than 3 months. Currently, I believe that if I format the namenode,
>>>> it will work again; however, the data will be lost. Is there any way to
>>>> solve this without losing the data?
>>>>
>>>> I will really appreciate any help.
>>>>
>>>> Thanks.
>>>>
>>>>
>>>> =====================
>>>> Here are the logs:
>>>> ====================
>>>> 2014-02-26 16:02:39,698 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>> /************************************************************
>>>> STARTUP_MSG: Starting NameNode
>>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>>> STARTUP_MSG:   args = []
>>>> STARTUP_MSG:   version = 1.0.4
>>>> STARTUP_MSG:   build =
>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>>>> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>>> ************************************************************/
>>>> 2014-02-26 16:02:40,005 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>>> hadoop-metrics2.properties
>>>> 2014-02-26 16:02:40,019 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>> MetricsSystem,sub=Stats registered.
>>>> 2014-02-26 16:02:40,021 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>>> period at 10 second(s).
>>>> 2014-02-26 16:02:40,021 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>>> started
>>>> 2014-02-26 16:02:40,169 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>>> registered.
>>>> 2014-02-26 16:02:40,193 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>>> registered.
>>>> 2014-02-26 16:02:40,194 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>>> NameNode registered.
>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>>>>       = 64-bit
>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>>>> memory = 17.77875 MB
>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity
>>>>      = 2^21 = 2097152 entries
>>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>>> recommended=2097152, actual=2097152
>>>> 2014-02-26 16:02:40,273 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>>> 2014-02-26 16:02:40,273 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>>> 2014-02-26 16:02:40,274 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>> isPermissionEnabled=true
>>>> 2014-02-26 16:02:40,279 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>> dfs.block.invalidate.limit=100
>>>> 2014-02-26 16:02:40,279 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>>> accessTokenLifetime=0 min(s)
>>>> 2014-02-26 16:02:40,724 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>> FSNamesystemStateMBean and NameNodeMXBean
>>>> 2014-02-26 16:02:40,749 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>> occuring more than 10 times
>>>> 2014-02-26 16:02:40,780 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>> initialization failed.
>>>> java.io.IOException: NameNode is not formatted.
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>> 2014-02-26 16:02:40,781 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>> NameNode is not formatted.
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>>
>>>> 2014-02-26 16:02:40,781 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>> /************************************************************
>>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>>> ************************************************************/
>>>>
>>>> ===========================
>>>> Here is the core-site.xml
>>>> ===========================
>>>>  <?xml version="1.0"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>
>>>> <!-- Put site-specific property overrides in this file. -->
>>>>
>>>> <configuration>
>>>> <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>-BLANKED</value>
>>>>   </property>
>>>>   <property>
>>>>     <name>hadoop.tmp.dir</name>
>>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>>   </property>
>>>> </configuration>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by Azuryy Yu <az...@gmail.com>.
I don't think this is the case, because there is:
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/project/hadoop-data</value>
  </property>


On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:

> One possible reason is that you didn't set the namenode working directory;
> by default it is under the "/tmp" folder, and the OS might delete "/tmp"
> without any notification. If this is the case, I am afraid you
> have lost all your namenode data.
>
> *<property>
>   <name>dfs.name.dir</name>
>   <value>${hadoop.tmp.dir}/dfs/name</value>
>   <description>Determines where on the local filesystem the DFS name node
>       should store the name table(fsimage).  If this is a comma-delimited list
>       of directories then the name table is replicated in all of the
>       directories, for redundancy. </description>
> </property>*
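
Building on the default quoted above, a sketch of how dfs.name.dir could be
pinned explicitly in hdfs-site.xml so the metadata never lands under /tmp.
The path below is an example taken from the core-site.xml quoted in this
thread, not a recommendation for any particular layout:

```xml
<!-- hdfs-site.xml: pin the namenode metadata directory explicitly.
     Example path based on this thread's hadoop.tmp.dir; adjust to taste. -->
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/project/hadoop-data/dfs/name</value>
</property>
```

With hadoop.tmp.dir set as in this thread, the default
${hadoop.tmp.dir}/dfs/name already resolves to the same path, so this only
makes the location explicit and survives future config changes.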
>
>
> Regards,
> *Stanley Shi,*
>
>
>
> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>
>> Hi,
>>
>> what is the location of the namenodes fsimage and editlogs?
>> And how much memory does the NameNode have?
>>
>> Did you work with a Secondary NameNode or a Standby NameNode for
>> checkpointing?
>>
>> Where are your HDFS blocks located, and are those still safe?
>>
>> With this information at hand, one might be able to fix your setup, but
>> do not format the old namenode before
>> everything is working with a fresh one.
>>
>> Grab a copy of the maintenance guide:
>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>> which helps with solving this type of problem as well.
>>
>> Best wishes
>> Mirko
>>
>>
>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>
>> Dear All,
>>>
>>> I have just restarted the machines of my Hadoop cluster. Now, I am trying
>>> to restart the Hadoop services again, but I am getting an error on namenode
>>> restart. I am afraid of losing my data, as the cluster was running properly
>>> for more than 3 months. Currently, I believe that if I format the namenode,
>>> it will work again; however, the data will be lost. Is there any way to
>>> solve this without losing the data?
>>>
>>> I will really appreciate any help.
>>>
>>> Thanks.
>>>
>>>
>>> =====================
>>> Here are the logs:
>>> ====================
>>> 2014-02-26 16:02:39,698 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting NameNode
>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 1.0.4
>>> STARTUP_MSG:   build =
>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>>> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>> ************************************************************/
>>> 2014-02-26 16:02:40,005 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>> hadoop-metrics2.properties
>>> 2014-02-26 16:02:40,019 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>> MetricsSystem,sub=Stats registered.
>>> 2014-02-26 16:02:40,021 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>> period at 10 second(s).
>>> 2014-02-26 16:02:40,021 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>> started
>>> 2014-02-26 16:02:40,169 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>> registered.
>>> 2014-02-26 16:02:40,193 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>> registered.
>>> 2014-02-26 16:02:40,194 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>> NameNode registered.
>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>>>     = 64-bit
>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>>> memory = 17.77875 MB
>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity
>>>      = 2^21 = 2097152 entries
>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>> recommended=2097152, actual=2097152
>>> 2014-02-26 16:02:40,273 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>> 2014-02-26 16:02:40,273 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>> 2014-02-26 16:02:40,274 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> isPermissionEnabled=true
>>> 2014-02-26 16:02:40,279 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> dfs.block.invalidate.limit=100
>>> 2014-02-26 16:02:40,279 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>> accessTokenLifetime=0 min(s)
>>> 2014-02-26 16:02:40,724 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>> FSNamesystemStateMBean and NameNodeMXBean
>>> 2014-02-26 16:02:40,749 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>> occuring more than 10 times
>>> 2014-02-26 16:02:40,780 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>> initialization failed.
>>> java.io.IOException: NameNode is not formatted.
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>> 2014-02-26 16:02:40,781 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>> NameNode is not formatted.
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>
>>> 2014-02-26 16:02:40,781 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>> ************************************************************/
>>>
>>> ===========================
>>> Here is the core-site.xml
>>> ===========================
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>> <property>
>>>     <name>fs.default.name</name>
>>>     <value>-BLANKED</value>
>>>   </property>
>>>   <property>
>>>     <name>hadoop.tmp.dir</name>
>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>   </property>
>>> </configuration>
>>>
>>>
>>>
>>>
>>
>

>>> isPermissionEnabled=true
>>> 2014-02-26 16:02:40,279 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> dfs.block.invalidate.limit=100
>>> 2014-02-26 16:02:40,279 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>> accessTokenLifetime=0 min(s)
>>> 2014-02-26 16:02:40,724 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>> FSNamesystemStateMBean and NameNodeMXBean
>>> 2014-02-26 16:02:40,749 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>> occuring more than 10 times
>>> 2014-02-26 16:02:40,780 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>> initialization failed.
>>> java.io.IOException: NameNode is not formatted.
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>> 2014-02-26 16:02:40,781 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>> NameNode is not formatted.
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>
>>> 2014-02-26 16:02:40,781 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>> ************************************************************/
>>>
>>> ===========================
>>> Here is the core-site.xml
>>> ===========================
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>> <property>
>>>     <name>fs.default.name</name>
>>>     <value>-BLANKED</value>
>>>   </property>
>>>   <property>
>>>     <name>hadoop.tmp.dir</name>
>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>   </property>
>>> </configuration>
>>>
>>>
>>>
>>>
>>
>

Re: I am about to lose all my data please help

Posted by Azuryy Yu <az...@gmail.com>.
I don't think this is the case, because the configuration contains:
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/project/hadoop-data</value>
  </property>
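
Before drawing a conclusion either way, it is worth checking whether an
fsimage actually survived on disk. A minimal sketch, assuming dfs.name.dir
was left at its default of ${hadoop.tmp.dir}/dfs/name under the posted
hadoop.tmp.dir (adjust the path if your hdfs-site.xml overrides it):

```shell
# Hypothetical path: hadoop.tmp.dir from the posted core-site.xml plus the
# default dfs.name.dir suffix; substitute your real dfs.name.dir if set.
NAME_DIR=/home/hadoop/project/hadoop-data/dfs/name

if [ -f "$NAME_DIR/current/VERSION" ]; then
    # A current/ directory with VERSION, fsimage and edits files suggests
    # the metadata is intact and the error is a config/permissions problem.
    echo "namenode metadata present under $NAME_DIR"
else
    echo "no fsimage found under $NAME_DIR"
fi
```

If the files are there, the "NameNode is not formatted" error more likely
means the running namenode is looking in a different directory than the one
holding the image.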


On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi <ss...@gopivotal.com> wrote:

> one possible reason is that you didn't set the namenode working directory,
> by default it's in "/tmp" folder; and the "/tmp" folder might get deleted
> by the OS without any notification. If this is the case, I am afraid you
> have lost all your namenode data.
>
> *<property>
>   <name>dfs.name.dir</name>
>   <value>${hadoop.tmp.dir}/dfs/name</value>
>   <description>Determines where on the local filesystem the DFS name node
>       should store the name table(fsimage).  If this is a comma-delimited list
>       of directories then the name table is replicated in all of the
>       directories, for redundancy. </description>
> </property>*
>
>
> Regards,
> *Stanley Shi,*
>
>
>
> On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com>wrote:
>
>> Hi,
>>
>> what is the location of the namenode's fsimage and editlogs?
>> And how much memory does the NameNode have?
>>
>> Did you work with a Secondary NameNode or a Standby NameNode for
>> checkpointing?
>>
>> Where are your HDFS blocks located, are those still safe?
>>
>> With this information at hand, one might be able to fix your setup, but
>> do not format the old namenode before everything is working with a fresh
>> one.
>>
>> Grab a copy of the maintenance guide:
>> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
>> which helps with solving this type of problem as well.
>>
>> Best wishes
>> Mirko
>>
>>
>> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>>
>> Dear All,
>>>
>>> I have just restarted the machines of my hadoop clusters. Now, I am
>>> trying to restart the hadoop clusters again, but I am getting an error on
>>> namenode restart. I am afraid of losing my data, as the cluster was
>>> running properly for more than 3 months. Currently, I believe that if I
>>> format the namenode it will work again; however, the data will be lost.
>>> Is there any way to solve this without losing the data?
>>>
>>> I would really appreciate any help.
>>>
>>> Thanks.
>>>
>>>
>>> =====================
>>> Here are the logs:
>>> ====================
>>> 2014-02-26 16:02:39,698 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting NameNode
>>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 1.0.4
>>> STARTUP_MSG:   build =
>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>>> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>> ************************************************************/
>>> 2014-02-26 16:02:40,005 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>>> hadoop-metrics2.properties
>>> 2014-02-26 16:02:40,019 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>> MetricsSystem,sub=Stats registered.
>>> 2014-02-26 16:02:40,021 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>>> period at 10 second(s).
>>> 2014-02-26 16:02:40,021 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>>> started
>>> 2014-02-26 16:02:40,169 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>>> registered.
>>> 2014-02-26 16:02:40,193 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>>> registered.
>>> 2014-02-26 16:02:40,194 INFO
>>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>>> NameNode registered.
>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>>>     = 64-bit
>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>>> memory = 17.77875 MB
>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity
>>>      = 2^21 = 2097152 entries
>>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>>> recommended=2097152, actual=2097152
>>> 2014-02-26 16:02:40,273 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>>> 2014-02-26 16:02:40,273 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>> 2014-02-26 16:02:40,274 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> isPermissionEnabled=true
>>> 2014-02-26 16:02:40,279 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> dfs.block.invalidate.limit=100
>>> 2014-02-26 16:02:40,279 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>>> accessTokenLifetime=0 min(s)
>>> 2014-02-26 16:02:40,724 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>> FSNamesystemStateMBean and NameNodeMXBean
>>> 2014-02-26 16:02:40,749 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>> occuring more than 10 times
>>> 2014-02-26 16:02:40,780 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>> initialization failed.
>>> java.io.IOException: NameNode is not formatted.
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>> 2014-02-26 16:02:40,781 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>> NameNode is not formatted.
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>>
>>> 2014-02-26 16:02:40,781 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>>> ************************************************************/
>>>
>>> ===========================
>>> Here is the core-site.xml
>>> ===========================
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>> <property>
>>>     <name>fs.default.name</name>
>>>     <value>-BLANKED</value>
>>>   </property>
>>>   <property>
>>>     <name>hadoop.tmp.dir</name>
>>>     <value>/home/hadoop/project/hadoop-data</value>
>>>   </property>
>>> </configuration>
>>>
>>>
>>>
>>>
>>
>


Re: I am about to lose all my data please help

Posted by Stanley Shi <ss...@gopivotal.com>.
One possible reason is that you didn't set the namenode working directory;
by default it is under the "/tmp" folder, and "/tmp" may get deleted by the
OS without any notification. If this is the case, I am afraid you have lost
all your namenode data.

<property>
  <name>dfs.name.dir</name>
  <value>${hadoop.tmp.dir}/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
      should store the name table (fsimage). If this is a comma-delimited list
      of directories then the name table is replicated in all of the
      directories, for redundancy.</description>
</property>
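
As a guard for the future, dfs.name.dir can point at more than one persistent
location; with a comma-delimited value the name table is replicated into each
directory. A sketch for hdfs-site.xml, with illustrative paths only:

```xml
<!-- Example only: substitute real persistent directories,
     ideally on separate disks (or one on NFS), for redundancy. -->
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/dfs/name,/mnt/backup/dfs/name</value>
</property>
```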


Regards,
Stanley Shi



On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com> wrote:

> Hi,
>
> what is the location of the namenode's fsimage and editlogs?
> And how much memory does the NameNode have?
>
> Did you work with a Secondary NameNode or a Standby NameNode for
> checkpointing?
>
> Where are your HDFS blocks located, are those still safe?
>
> With this information at hand, one might be able to fix your setup, but do
> not format the old namenode before everything is working with a fresh one.
>
> Grab a copy of the maintenance guide:
> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
> which helps with solving this type of problem as well.
>
> Best wishes
> Mirko
>
>
> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>
> Dear All,
>>
>> I have just restarted the machines of my hadoop clusters. Now, I am trying
>> to restart the hadoop clusters again, but I am getting an error on namenode
>> restart. I am afraid of losing my data, as the cluster was running properly
>> for more than 3 months. Currently, I believe that if I format the namenode
>> it will work again; however, the data will be lost. Is there any way to
>> solve this without losing the data?
>>
>> I would really appreciate any help.
>>
>> Thanks.
>>
>>
>> =====================
>> Here are the logs:
>> ====================
>> 2014-02-26 16:02:39,698 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting NameNode
>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 1.0.4
>> STARTUP_MSG:   build =
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>> ************************************************************/
>> 2014-02-26 16:02:40,005 INFO
>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>> hadoop-metrics2.properties
>> 2014-02-26 16:02:40,019 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>> MetricsSystem,sub=Stats registered.
>> 2014-02-26 16:02:40,021 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>> period at 10 second(s).
>> 2014-02-26 16:02:40,021 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>> started
>> 2014-02-26 16:02:40,169 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>> registered.
>> 2014-02-26 16:02:40,193 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>> registered.
>> 2014-02-26 16:02:40,194 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>> NameNode registered.
>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>>     = 64-bit
>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>> memory = 17.77875 MB
>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity
>>    = 2^21 = 2097152 entries
>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>> recommended=2097152, actual=2097152
>> 2014-02-26 16:02:40,273 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>> 2014-02-26 16:02:40,273 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>> 2014-02-26 16:02:40,274 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> isPermissionEnabled=true
>> 2014-02-26 16:02:40,279 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> dfs.block.invalidate.limit=100
>> 2014-02-26 16:02:40,279 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>> accessTokenLifetime=0 min(s)
>> 2014-02-26 16:02:40,724 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>> FSNamesystemStateMBean and NameNodeMXBean
>> 2014-02-26 16:02:40,749 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>> occuring more than 10 times
>> 2014-02-26 16:02:40,780 ERROR
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>> initialization failed.
>> java.io.IOException: NameNode is not formatted.
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>> 2014-02-26 16:02:40,781 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>> NameNode is not formatted.
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>
>> 2014-02-26 16:02:40,781 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>> ************************************************************/
>>
>> ===========================
>> Here is the core-site.xml
>> ===========================
>>  <?xml version="1.0"?>
>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>> <!-- Put site-specific property overrides in this file. -->
>>
>> <configuration>
>> <property>
>>     <name>fs.default.name</name>
>>     <value>-BLANKED</value>
>>   </property>
>>   <property>
>>     <name>hadoop.tmp.dir</name>
>>     <value>/home/hadoop/project/hadoop-data</value>
>>   </property>
>> </configuration>
>>
>>
>>
>>
>

>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>> 2014-02-26 16:02:40,781 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>> NameNode is not formatted.
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>
>> 2014-02-26 16:02:40,781 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>> ************************************************************/
>>
>> ===========================
>> Here is the core-site.xml
>> ===========================
>>  <?xml version="1.0"?>
>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>> <!-- Put site-specific property overrides in this file. -->
>>
>> <configuration>
>> <property>
>>     <name>fs.default.name</name>
>>     <value>-BLANKED</value>
>>   </property>
>>   <property>
>>     <name>hadoop.tmp.dir</name>
>>     <value>/home/hadoop/project/hadoop-data</value>
>>   </property>
>> </configuration>
>>
>>
>>
>>
>

Re: I am about to lose all my data please help

Posted by Stanley Shi <ss...@gopivotal.com>.
One possible reason is that you didn't set the NameNode working directory;
by default it lives under the "/tmp" folder, and the OS may delete "/tmp"
without any notification. If this is the case, I am afraid you have lost
all your NameNode metadata.

<property>
  <name>dfs.name.dir</name>
  <value>${hadoop.tmp.dir}/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
      should store the name table (fsimage).  If this is a comma-delimited list
      of directories then the name table is replicated in all of the
      directories, for redundancy.</description>
</property>
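[Editor's note] In the posted core-site.xml, hadoop.tmp.dir is set to
/home/hadoop/project/hadoop-data, so the default name directory would sit
under that path rather than under /tmp. A hedged sketch for checking
whether the metadata survived before doing anything destructive (the
NAME_DIR path below is inferred from that config and is an assumption;
adjust it if dfs.name.dir was set explicitly):

```shell
# Assumed default name dir: ${hadoop.tmp.dir}/dfs/name, with the
# hadoop.tmp.dir value taken from the posted core-site.xml.
NAME_DIR=${NAME_DIR:-/home/hadoop/project/hadoop-data/dfs/name}

if [ -f "$NAME_DIR/current/fsimage" ]; then
    # The metadata is still there: back it up before touching the cluster.
    tar czf "$HOME/namenode-backup.tar.gz" -C "$NAME_DIR" .
    echo "fsimage found and backed up"
else
    echo "no fsimage under $NAME_DIR - metadata may be gone"
fi
```

If the fsimage file is present, the "NameNode is not formatted" error is
more likely a path or permissions problem than actual data loss.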


Regards,
Stanley Shi



On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf <mi...@gmail.com> wrote:

> Hi,
>
> what is the location of the namenodes fsimage and editlogs?
> And how much memory has the NameNode.
>
> Did you work with a Secondary NameNode or a Standby NameNode for
> checkpointing?
>
> Where are your HDFS blocks located, are those still safe?
>
> With this information at hand, one might be able to fix your setup, but do
> not format the old namenode before
> all is working with a fresh one.
>
> Grab a copy of the maintainance guide:
> http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
> which helps solving such type of problems as well.
>
> Best wishes
> Mirko
>
>
> 2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:
>
> Dear All,
>>
>> I have just restarted machines of my hadoop clusters. Now, I am trying to
>> restart hadoop clusters again, but getting error on namenode restart. I am
>> afraid of loosing my data as it was properly running for more than 3
>> months. Currently, I believe if I do namenode formatting, it will work
>> again, however, data will be lost. Is there anyway to solve this without
>> losing the data.
>>
>> I will really appreciate any help.
>>
>> Thanks.
>>
>>
>> =====================
>> Here is the logs;
>> ====================
>> 2014-02-26 16:02:39,698 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting NameNode
>> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 1.0.4
>> STARTUP_MSG:   build =
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>> ************************************************************/
>> 2014-02-26 16:02:40,005 INFO
>> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
>> hadoop-metrics2.properties
>> 2014-02-26 16:02:40,019 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>> MetricsSystem,sub=Stats registered.
>> 2014-02-26 16:02:40,021 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>> period at 10 second(s).
>> 2014-02-26 16:02:40,021 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
>> started
>> 2014-02-26 16:02:40,169 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
>> registered.
>> 2014-02-26 16:02:40,193 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
>> registered.
>> 2014-02-26 16:02:40,194 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>> NameNode registered.
>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>>     = 64-bit
>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
>> memory = 17.77875 MB
>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity
>>    = 2^21 = 2097152 entries
>> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
>> recommended=2097152, actual=2097152
>> 2014-02-26 16:02:40,273 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
>> 2014-02-26 16:02:40,273 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>> 2014-02-26 16:02:40,274 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> isPermissionEnabled=true
>> 2014-02-26 16:02:40,279 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> dfs.block.invalidate.limit=100
>> 2014-02-26 16:02:40,279 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
>> accessTokenLifetime=0 min(s)
>> 2014-02-26 16:02:40,724 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>> FSNamesystemStateMBean and NameNodeMXBean
>> 2014-02-26 16:02:40,749 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>> occuring more than 10 times
>> 2014-02-26 16:02:40,780 ERROR
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>> initialization failed.
>> java.io.IOException: NameNode is not formatted.
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>> 2014-02-26 16:02:40,781 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>> NameNode is not formatted.
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>>
>> 2014-02-26 16:02:40,781 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
>> ************************************************************/
>>
>> ===========================
>> Here is the core-site.xml
>> ===========================
>>  <?xml version="1.0"?>
>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>> <!-- Put site-specific property overrides in this file. -->
>>
>> <configuration>
>> <property>
>>     <name>fs.default.name</name>
>>     <value>-BLANKED</value>
>>   </property>
>>   <property>
>>     <name>hadoop.tmp.dir</name>
>>     <value>/home/hadoop/project/hadoop-data</value>
>>   </property>
>> </configuration>
>>
>>
>>
>>
>


Re: I am about to lose all my data please help

Posted by Mirko Kämpf <mi...@gmail.com>.
Hi,

What is the location of the NameNode's fsimage and edit logs?
And how much memory does the NameNode have?

Did you work with a Secondary NameNode or a Standby NameNode for
checkpointing?

Where are your HDFS blocks located, and are those still safe?

With this information at hand, one might be able to fix your setup, but do
not format the old NameNode before everything is working with a fresh one.

Grab a copy of the maintenance guide:
http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
which helps with solving this type of problem as well.
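
[Editor's note] If a Secondary NameNode had been taking checkpoints, its
checkpoint directory may still hold a recent fsimage. A hedged sketch of
how recovery could proceed on Hadoop 1.x (the CHECKPOINT_DIR path is the
assumed default derived from the posted hadoop.tmp.dir; verify it against
fs.checkpoint.dir before relying on it):

```shell
# Assumed default checkpoint dir: ${hadoop.tmp.dir}/dfs/namesecondary.
CHECKPOINT_DIR=${CHECKPOINT_DIR:-/home/hadoop/project/hadoop-data/dfs/namesecondary}

if [ -d "$CHECKPOINT_DIR" ]; then
    ls -l "$CHECKPOINT_DIR"   # confirm a checkpoint image is actually present
    # With dfs.name.dir pointing at an EMPTY directory, Hadoop 1.x can
    # import the checkpoint instead of formatting:
    #   hadoop namenode -importCheckpoint
else
    echo "no checkpoint dir at $CHECKPOINT_DIR"
fi
```

Note that -importCheckpoint refuses to run if the name directory already
contains a valid image, so it cannot silently overwrite surviving metadata.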

Best wishes
Mirko


2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:

> Dear All,
>
> I have just restarted machines of my hadoop clusters. Now, I am trying to
> restart hadoop clusters again, but getting error on namenode restart. I am
> afraid of loosing my data as it was properly running for more than 3
> months. Currently, I believe if I do namenode formatting, it will work
> again, however, data will be lost. Is there anyway to solve this without
> losing the data.
>
> I will really appreciate any help.
>
> Thanks.
>
>
> =====================
> Here is the logs;
> ====================
> 2014-02-26 16:02:39,698 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 1.0.4
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
> ************************************************************/
> 2014-02-26 16:02:40,005 INFO
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2014-02-26 16:02:40,019 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> MetricsSystem,sub=Stats registered.
> 2014-02-26 16:02:40,021 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 10 second(s).
> 2014-02-26 16:02:40,021 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
> started
> 2014-02-26 16:02:40,169 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
> registered.
> 2014-02-26 16:02:40,193 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
> registered.
> 2014-02-26 16:02:40,194 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> NameNode registered.
> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>   = 64-bit
> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
> memory = 17.77875 MB
> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity
>    = 2^21 = 2097152 entries
> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
> recommended=2097152, actual=2097152
> 2014-02-26 16:02:40,273 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
> 2014-02-26 16:02:40,273 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2014-02-26 16:02:40,274 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2014-02-26 16:02:40,279 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.block.invalidate.limit=100
> 2014-02-26 16:02:40,279 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
> accessTokenLifetime=0 min(s)
> 2014-02-26 16:02:40,724 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStateMBean and NameNodeMXBean
> 2014-02-26 16:02:40,749 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
> occuring more than 10 times
> 2014-02-26 16:02:40,780 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.io.IOException: NameNode is not formatted.
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
> 2014-02-26 16:02:40,781 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> NameNode is not formatted.
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>
> 2014-02-26 16:02:40,781 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
> ************************************************************/
>
> ===========================
> Here is the core-site.xml
> ===========================
>  <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>     <name>fs.default.name</name>
>     <value>-BLANKED</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/hadoop/project/hadoop-data</value>
>   </property>
> </configuration>
>
>
>
>

Re: I am about to lose all my data please help

Posted by Mirko Kämpf <mi...@gmail.com>.
Hi,

Where are the NameNode's fsimage and edit logs located?
And how much memory does the NameNode have?

Did you work with a Secondary NameNode or a Standby NameNode for
checkpointing?

Where are your HDFS blocks located, and are they still safe?
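A quick way to check is to look under the directories derived from hadoop.tmp.dir
in your core-site.xml. This is only a sketch, assuming the Hadoop 1.x default
layout, where dfs.name.dir and dfs.data.dir fall back to ${hadoop.tmp.dir}/dfs/name
and ${hadoop.tmp.dir}/dfs/data; adjust the paths if hdfs-site.xml sets them
explicitly:

```shell
# Assumed paths, derived from hadoop.tmp.dir=/home/hadoop/project/hadoop-data
# in the posted core-site.xml; check hdfs-site.xml for explicit overrides.

# NameNode metadata: a formatted namenode has a current/ directory
# containing fsimage, edits, fstime and VERSION. If this directory is
# empty or missing, you get exactly the "NameNode is not formatted" error.
ls -l /home/hadoop/project/hadoop-data/dfs/name/current/

# DataNode blocks: the blk_* files hold the actual file data. If these
# are still present, the data itself is likely recoverable.
ls /home/hadoop/project/hadoop-data/dfs/data/current/ | head
```

If the name directory is empty but the block files are intact, the problem is
confined to the namespace metadata, which is the easier half to recover.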

With this information at hand, it may be possible to fix your setup, but do not
format the old NameNode before everything is working with a fresh one.

Grab a copy of the maintenance guide:
http://shop.oreilly.com/product/0636920025085.do?sortby=publicationDate
which also helps with solving this type of problem.
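If a SecondaryNameNode was running, its checkpoint directory may still hold a
recent copy of the namespace. A sketch of recovering from it on Hadoop 1.x,
assuming the default fs.checkpoint.dir of ${hadoop.tmp.dir}/dfs/namesecondary,
and only after backing everything up:

```shell
# 1. Back up whatever is left of the namenode and secondary dirs first,
#    so no recovery attempt can make things worse.
tar czf ~/nn-backup.tar.gz /home/hadoop/project/hadoop-data/dfs

# 2. Verify the checkpoint actually contains an fsimage.
ls -l /home/hadoop/project/hadoop-data/dfs/namesecondary/current/

# 3. With an EMPTY dfs.name.dir, import the checkpoint. This loads the
#    image from fs.checkpoint.dir and saves it into dfs.name.dir; it
#    refuses to run if dfs.name.dir already contains an image.
hadoop namenode -importCheckpoint
```

After the import succeeds, start HDFS normally and run `hadoop fsck /` to see
how much of the namespace and block data came back.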

Best wishes
Mirko


2014-03-16 9:07 GMT+00:00 Fatih Haltas <fa...@nyu.edu>:

> Dear All,
>
> I have just restarted machines of my hadoop clusters. Now, I am trying to
> restart hadoop clusters again, but getting error on namenode restart. I am
> afraid of loosing my data as it was properly running for more than 3
> months. Currently, I believe if I do namenode formatting, it will work
> again, however, data will be lost. Is there anyway to solve this without
> losing the data.
>
> I will really appreciate any help.
>
> Thanks.
>
>
> =====================
> Here is the logs;
> ====================
> 2014-02-26 16:02:39,698 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = ADUAE042-LAP-V/127.0.0.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 1.0.4
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
> ************************************************************/
> 2014-02-26 16:02:40,005 INFO
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2014-02-26 16:02:40,019 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> MetricsSystem,sub=Stats registered.
> 2014-02-26 16:02:40,021 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 10 second(s).
> 2014-02-26 16:02:40,021 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
> started
> 2014-02-26 16:02:40,169 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
> registered.
> 2014-02-26 16:02:40,193 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
> registered.
> 2014-02-26 16:02:40,194 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> NameNode registered.
> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>   = 64-bit
> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
> memory = 17.77875 MB
> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet: capacity
>    = 2^21 = 2097152 entries
> 2014-02-26 16:02:40,242 INFO org.apache.hadoop.hdfs.util.GSet:
> recommended=2097152, actual=2097152
> 2014-02-26 16:02:40,273 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
> 2014-02-26 16:02:40,273 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2014-02-26 16:02:40,274 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2014-02-26 16:02:40,279 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.block.invalidate.limit=100
> 2014-02-26 16:02:40,279 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
> accessTokenLifetime=0 min(s)
> 2014-02-26 16:02:40,724 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStateMBean and NameNodeMXBean
> 2014-02-26 16:02:40,749 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
> occuring more than 10 times
> 2014-02-26 16:02:40,780 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.io.IOException: NameNode is not formatted.
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
> 2014-02-26 16:02:40,781 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> NameNode is not formatted.
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>
> 2014-02-26 16:02:40,781 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ADUAE042-LAP-V/127.0.0.1
> ************************************************************/
>
> ===========================
> Here is the core-site.xml
> ===========================
>  <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>     <name>fs.default.name</name>
>     <value>-BLANKED</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/hadoop/project/hadoop-data</value>
>   </property>
> </configuration>
>
>
>
>
