Posted to user@hadoop.apache.org by hadoop hive <ha...@gmail.com> on 2012/11/16 08:45:17 UTC

Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Seems like you haven't formatted your cluster (if it's the first time it was set up).

On Fri, Nov 16, 2012 at 9:58 AM, ac@hsk.hk <ac...@hsk.hk> wrote:

> Hi,
>
> Please help!
>
> I have installed a Hadoop Cluster with a single master (master1) and have
> HBase running on the HDFS.  Now I am setting up the second master
>  (master2) in order to form HA.  When I used JPS to check the cluster, I
> found :
>
> 2782 Jps
> 2126 NameNode
> 2720 SecondaryNameNode
> i.e. The datanode on this server could not be started
>
> In the log file, found:
> 2012-11-16 10:28:44,851 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
> Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID
> = 1356148070; datanode namespaceID = 1151604993
>
>
>
> One possible solution is to stop the cluster, reformat the NameNode, and
> restart the cluster.
> QUESTION: As I already have HBase running on the cluster, if I reformat
> the NameNode, do I need to reinstall HBase entirely? I don't mind losing
> all the data, as I don't have much data in HBase or HDFS; however, I don't
> want to have to reinstall HBase.
>
>
> On the other hand, I have tried another solution: stop the DataNode, edit
> the namespaceID in current/VERSION (i.e. set namespaceID=1151604993), and
> restart the datanode. It doesn't work:
> Warning: $HADOOP_HOME is deprecated.
> starting master2, logging to
> /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
> Exception in thread "main" java.lang.NoClassDefFoundError: master2
> Caused by: java.lang.ClassNotFoundException: master2
> at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> Could not find the main class: master2.  Program will exit.
> QUESTION: Any other solutions?
>
>
>
> Thanks
>
>
>
>
>
>
>
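[Editor's note] The VERSION-file approach described above can work when the daemon is restarted correctly. Below is a hypothetical shell sketch: the 'NoClassDefFoundError: master2' in the quoted output suggests the start script was given a hostname instead of a daemon name, so 'hadoop-daemon.sh start datanode' is assumed to be the intended invocation. The sketch demonstrates the edit on a scratch copy of a VERSION file; the path and the two namespaceIDs are taken from the quoted log and will differ on a real node.

```shell
# Demo of the namespaceID edit on a scratch copy of a datanode VERSION file.
# On a real node, DATA_DIR would be the dfs.data.dir from the log
# (/app/hadoop/tmp/dfs/data), and the daemon must be stopped first:
#   hadoop-daemon.sh stop datanode
DATA_DIR=$(mktemp -d)
mkdir -p "${DATA_DIR}/current"
printf 'namespaceID=1151604993\nstorageType=DATA_NODE\n' > "${DATA_DIR}/current/VERSION"

# Point the datanode at the namenode's namespaceID (from the quoted error).
NN_NAMESPACE_ID=1356148070
sed -i "s/^namespaceID=.*/namespaceID=${NN_NAMESPACE_ID}/" "${DATA_DIR}/current/VERSION"
grep '^namespaceID=' "${DATA_DIR}/current/VERSION"   # -> namespaceID=1356148070

# Afterwards, restart with a daemon name, not a hostname:
#   hadoop-daemon.sh start datanode
```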

RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by "Kartashov, Andy" <An...@mpac.ca>.
Vinay,

Two questions.

1. "Configure the another namenode's configuration."

What exactly needs to be configured?

2. What is zkfc?
From: Vinayakumar B [mailto:vinayakumar.b@huawei.com]
Sent: Friday, November 16, 2012 3:31 AM
To: user@hadoop.apache.org
Subject: RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Hi,

If you are moving from non-HA (single master) to HA, follow the steps below.

1. Add the second namenode's configuration to the running namenode's and all datanodes' configurations, and configure the logical fs.defaultFS.

2. Configure the shared-storage settings.

3. Stop the running NameNode and all datanodes.

4. Run 'hdfs namenode -initializeSharedEdits' from the existing namenode installation to transfer the edits to the shared storage.

5. Format zkfc using 'hdfs zkfc -formatZK', then start it using 'hadoop-daemon.sh start zkfc'.

6. Restart the namenode from the existing installation. If all configurations are correct, the NameNode should start successfully as STANDBY, and zkfc will then make it ACTIVE.

7. Install the NameNode on another machine (master2) with the same configuration, except for 'dfs.ha.namenode.id'.

8. Instead of formatting, copy the name dir contents from the first namenode (master1) to master2's name dir. You have two options:

a. Run 'hdfs namenode -bootstrapStandby' from the master2 installation.

b. Copy the entire contents of master1's name dir to master2's name dir using 'scp'.

9. Start zkfc for the second namenode (no need to format zkfc again), and also start the namenode (master2).
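[Editor's note] For readers asking what exactly to configure in steps 1, 2 and 7, a minimal sketch of the HA-related properties is below. The nameservice name 'mycluster', the hostnames, the port, and the shared-edits path are placeholder assumptions, not values from this thread; the properties belong in core-site.xml and hdfs-site.xml as marked.

```xml
<!-- core-site.xml: logical fs.defaultFS (step 1) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

<!-- hdfs-site.xml: the two namenodes behind the logical name (step 1) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>master1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>master2:8020</value>
</property>

<!-- hdfs-site.xml: shared storage for edits (step 2) -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>file:///mnt/shared/ha-edits</value>
</property>

<!-- hdfs-site.xml: set per machine -- nn1 on master1, nn2 on master2 (step 7) -->
<property>
  <name>dfs.ha.namenode.id</name>
  <value>nn1</value>
</property>
```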

Regards,
Vinay-
From: Uma Maheswara Rao G [mailto:maheswara@huawei.com]
Sent: Friday, November 16, 2012 1:26 PM
To: user@hadoop.apache.org
Subject: RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs


If you format the namenode, you also need to clean up the DataNode's storage directories if they already contain data. The DN stores the namespaceID as well and compares it with the NN's namespaceID; if you format the NN, its namespaceID changes while the DN may still have the older one. So simply cleaning the data on the DN will be fine.
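[Editor's note] The cleanup described above can be sketched as follows. This is a hypothetical illustration on a scratch directory; on a real datanode the directory would be the dfs.data.dir from the quoted log, the daemon must be stopped first, and the stored blocks are destroyed, so only do this when the data is disposable, as discussed in the thread.

```shell
# Demo of wiping a datanode storage dir so the DN re-registers with the
# namenode's new namespaceID. Uses a scratch dir; on a real node DATA_DIR
# would be /app/hadoop/tmp/dfs/data and you would first run:
#   hadoop-daemon.sh stop datanode
DATA_DIR=$(mktemp -d)
mkdir -p "${DATA_DIR}/current"
echo 'namespaceID=1151604993' > "${DATA_DIR}/current/VERSION"

# Remove the stored blocks and the stale VERSION file.
rm -rf "${DATA_DIR:?}/current"

# On restart (hadoop-daemon.sh start datanode) the DN creates a fresh
# storage layout and adopts the NN's current namespaceID.
```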



Regards,

Uma

________________________________
From: hadoop hive [hadoophive@gmail.com]
Sent: Friday, November 16, 2012 1:15 PM
To: user@hadoop.apache.org
Subject: Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs
Seems like you haven't formatted your cluster (if it's the first time it was set up).
On Fri, Nov 16, 2012 at 9:58 AM, ac@hsk.hk <ac...@hsk.hk> wrote:
Hi,

Please help!

I have installed a Hadoop Cluster with a single master (master1) and have HBase running on the HDFS.  Now I am setting up the second master  (master2) in order to form HA.  When I used JPS to check the cluster, I found :

2782 Jps
2126 NameNode
2720 SecondaryNameNode
i.e. The datanode on this server could not be started

In the log file, found:
2012-11-16 10:28:44,851 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1356148070; datanode namespaceID = 1151604993



One possible solution is to stop the cluster, reformat the NameNode, and restart the cluster.
QUESTION: As I already have HBase running on the cluster, if I reformat the NameNode, do I need to reinstall HBase entirely? I don't mind losing all the data, as I don't have much data in HBase or HDFS; however, I don't want to have to reinstall HBase.


On the other hand, I have tried another solution: stop the DataNode, edit the namespaceID in current/VERSION (i.e. set namespaceID=1151604993), and restart the datanode. It doesn't work:
Warning: $HADOOP_HOME is deprecated.
starting master2, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
Exception in thread "main" java.lang.NoClassDefFoundError: master2
Caused by: java.lang.ClassNotFoundException: master2
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: master2.  Program will exit.
QUESTION: Any other solutions?



Thanks







NOTICE: This e-mail message and any attachments are confidential, subject to copyright and may be privileged. Any unauthorized use, copying or disclosure is prohibited. If you are not the intended recipient, please delete and contact the sender immediately. Please consider the environment before printing this e-mail. AVIS : le présent courriel et toute pièce jointe qui l'accompagne sont confidentiels, protégés par le droit d'auteur et peuvent être couverts par le secret professionnel. Toute utilisation, copie ou divulgation non autorisée est interdite. Si vous n'êtes pas le destinataire prévu de ce courriel, supprimez-le et contactez immédiatement l'expéditeur. Veuillez penser à l'environnement avant d'imprimer le présent courriel.


Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Thank you very much, will try. 


On 16 Nov 2012, at 4:31 PM, Vinayakumar B wrote:

> Hi,
>  
> If you are moving from non-HA (single master) to HA, follow the steps below.
> 1. Add the second namenode's configuration to the running namenode's and all datanodes' configurations, and configure the logical fs.defaultFS.
> 2. Configure the shared-storage settings.
> 3. Stop the running NameNode and all datanodes.
> 4. Run 'hdfs namenode -initializeSharedEdits' from the existing namenode installation to transfer the edits to the shared storage.
> 5. Format zkfc using 'hdfs zkfc -formatZK', then start it using 'hadoop-daemon.sh start zkfc'.
> 6. Restart the namenode from the existing installation. If all configurations are correct, the NameNode should start successfully as STANDBY, and zkfc will then make it ACTIVE.
>  
> 7. Install the NameNode on another machine (master2) with the same configuration, except for 'dfs.ha.namenode.id'.
> 8. Instead of formatting, copy the name dir contents from the first namenode (master1) to master2's name dir. You have two options:
> a. Run 'hdfs namenode -bootstrapStandby' from the master2 installation.
> b. Copy the entire contents of master1's name dir to master2's name dir using 'scp'.
> 9. Start zkfc for the second namenode (no need to format zkfc again), and also start the namenode (master2).
>  
> Regards,
> Vinay-
> From: Uma Maheswara Rao G [mailto:maheswara@huawei.com] 
> Sent: Friday, November 16, 2012 1:26 PM
> To: user@hadoop.apache.org
> Subject: RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs
>  
> If you format the namenode, you also need to clean up the DataNode's storage directories if they already contain data. The DN stores the namespaceID as well and compares it with the NN's namespaceID; if you format the NN, its namespaceID changes while the DN may still have the older one. So simply cleaning the data on the DN will be fine.
>  
> Regards,
> Uma
> From: hadoop hive [hadoophive@gmail.com]
> Sent: Friday, November 16, 2012 1:15 PM
> To: user@hadoop.apache.org
> Subject: Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs
> 
> Seems like you haven't formatted your cluster (if it's the first time it was set up).
> 
> On Fri, Nov 16, 2012 at 9:58 AM, ac@hsk.hk <ac...@hsk.hk> wrote:
> Hi,
>  
> Please help!
>  
> I have installed a Hadoop Cluster with a single master (master1) and have HBase running on the HDFS.  Now I am setting up the second master  (master2) in order to form HA.  When I used JPS to check the cluster, I found :
>  
> 2782 Jps
> 2126 NameNode
> 2720 SecondaryNameNode
> i.e. The datanode on this server could not be started
>  
> In the log file, found: 
> 2012-11-16 10:28:44,851 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1356148070; datanode namespaceID = 1151604993
>  
>  
>  
> One possible solution is to stop the cluster, reformat the NameNode, and restart the cluster.
> QUESTION: As I already have HBase running on the cluster, if I reformat the NameNode, do I need to reinstall HBase entirely? I don't mind losing all the data, as I don't have much data in HBase or HDFS; however, I don't want to have to reinstall HBase.
>  
>  
> On the other hand, I have tried another solution: stop the DataNode, edit the namespaceID in current/VERSION (i.e. set namespaceID=1151604993), and restart the datanode. It doesn't work:
> Warning: $HADOOP_HOME is deprecated.
> starting master2, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
> Exception in thread "main" java.lang.NoClassDefFoundError: master2
> Caused by: java.lang.ClassNotFoundException: master2
> at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> Could not find the main class: master2.  Program will exit.
> QUESTION: Any other solutions?
>  
>  
>  
> Thanks
>  
>  
>  
>   
>  
>  
>  


Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Thank you very much, will try. 


On 16 Nov 2012, at 4:31 PM, Vinayakumar B wrote:

> Hi,
>  
> If you are moving from NonHA (single master) to HA, then follow the below steps.
> 1.       Configure the another namenode’s configuration in the running namenode and all datanode’s configurations. And configure logical fs.defaultFS
> 2.       Configure the shared storage related configuration.
> 3.       Stop the running NameNode and all datanodes.
> 4.       Execute ‘hdfs namenode –initializeSharedEdits’ from the existing namenode installation, to transfer the edits to shared storage.
> 5.       Now format zkfc using ‘hdfs zkfc –formatZK’ and start zkfc using ‘hadoop-daemon.sh start zkfc’
> 6.       Now restart the namenode from existing installation. If all configurations are fine, then NameNode should start successfully as STANDBY, then zkfc will make it to ACTIVE.
>  
> 7.       Now install the NameNode in another machine (master2) with same configuration, except ‘dfs.ha.namenode.id’.
> 8.       Now instead of format, you need to copy the name dir contents from another namenode (master1) to master2’s name dir. For this you are having 2 options.
> a.       Execute ‘hdfs namenode -bootStrapStandby’  from the master2 installation.
> b.      Using ‘scp’ copy entire contents of name dir from master1 to master2’s name dir.
> 9.       Now start the zkfc for second namenode ( No need to do zkfc format now). Also start the namenode (master2)
>  
> Regards,
> Vinay-
> From: Uma Maheswara Rao G [mailto:maheswara@huawei.com] 
> Sent: Friday, November 16, 2012 1:26 PM
> To: user@hadoop.apache.org
> Subject: RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs
>  
> If you format namenode, you need to cleanup storage directories of DataNode as well if that is having some data already. DN also will have namespace ID saved and compared with NN namespaceID. if you format NN, then namespaceID will be changed and DN may have still older namespaceID. So, just cleaning the data in DN would be fine.
>  
> Regards,
> Uma
> From: hadoop hive [hadoophive@gmail.com]
> Sent: Friday, November 16, 2012 1:15 PM
> To: user@hadoop.apache.org
> Subject: Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs
> 
> Seems like you havn't format your cluster (if its 1st time made).
> 
> On Fri, Nov 16, 2012 at 9:58 AM, ac@hsk.hk <ac...@hsk.hk> wrote:
> Hi,
>  
> Please help!
>  
> I have installed a Hadoop Cluster with a single master (master1) and have HBase running on the HDFS.  Now I am setting up the second master  (master2) in order to form HA.  When I used JPS to check the cluster, I found :
>  
> 2782 Jps
> 2126 NameNode
> 2720 SecondaryNameNode
> i.e. The datanode on this server could not be started
>  
> In the log file, found: 
> 2012-11-16 10:28:44,851 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1356148070; datanode namespaceID = 1151604993
>  
>  
>  
> One of the possible solutions to fix this issue is to:  stop the cluster, reformat the NameNode, restart the cluster.
> QUESTION: As I already have HBASE running on the cluster, if I reformat the NameNode, do I need to reinstall the entire HBASE? I don't mind to have all data lost as I don't have many data in HBASE and HDFS, however I don't want to re-install HBASE again.
>  
>  
> On the other hand, I have tried another solution: stop the DataNode, edit the namespaceID in current/VERSION (i.e. set namespaceID=1151604993), restart the datanode, it doesn't work:
> Warning: $HADOOP_HOME is deprecated.
> starting master2, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
> Exception in thread "main" java.lang.NoClassDefFoundError: master2
> Caused by: java.lang.ClassNotFoundException: master2
> at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> Could not find the main class: master2.  Program will exit.
> QUESTION: Any other solutions?
>  
>  
>  
> Thanks
>  
>  
>  
>   
>  
>  
>  


RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by "Kartashov, Andy" <An...@mpac.ca>.
Vinay,

Two questions.

1.                   Configure the another namenode's configuration.

What exactly to configure.

2.                   What is zkfs?
From: Vinayakumar B [mailto:vinayakumar.b@huawei.com]
Sent: Friday, November 16, 2012 3:31 AM
To: user@hadoop.apache.org
Subject: RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Hi,

If you are moving from NonHA (single master) to HA, then follow the below steps.

1.       Configure the another namenode's configuration in the running namenode and all datanode's configurations. And configure logical fs.defaultFS

2.       Configure the shared storage related configuration.

3.       Stop the running NameNode and all datanodes.

4.       Execute 'hdfs namenode -initializeSharedEdits' from the existing namenode installation, to transfer the edits to shared storage.

5.       Now format zkfc using 'hdfs zkfc -formatZK' and start zkfc using 'hadoop-daemon.sh start zkfc'

6.       Now restart the namenode from existing installation. If all configurations are fine, then NameNode should start successfully as STANDBY, then zkfc will make it to ACTIVE.



7.       Now install the NameNode in another machine (master2) with same configuration, except 'dfs.ha.namenode.id'.

8.       Now instead of format, you need to copy the name dir contents from another namenode (master1) to master2's name dir. For this you are having 2 options.

a.       Execute 'hdfs namenode -bootStrapStandby'  from the master2 installation.

b.      Using 'scp' copy entire contents of name dir from master1 to master2's name dir.

9.       Now start the zkfc for second namenode ( No need to do zkfc format now). Also start the namenode (master2)

Regards,
Vinay-
From: Uma Maheswara Rao G [mailto:maheswara@huawei.com]
Sent: Friday, November 16, 2012 1:26 PM
To: user@hadoop.apache.org
Subject: RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs


If you format namenode, you need to cleanup storage directories of DataNode as well if that is having some data already. DN also will have namespace ID saved and compared with NN namespaceID. if you format NN, then namespaceID will be changed and DN may have still older namespaceID. So, just cleaning the data in DN would be fine.



Regards,

Uma

________________________________
From: hadoop hive [hadoophive@gmail.com]
Sent: Friday, November 16, 2012 1:15 PM
To: user@hadoop.apache.org
Subject: Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs
Seems like you havn't format your cluster (if its 1st time made).
On Fri, Nov 16, 2012 at 9:58 AM, ac@hsk.hk<ma...@hsk.hk> <ac...@hsk.hk>> wrote:
Hi,

Please help!

I have installed a Hadoop Cluster with a single master (master1) and have HBase running on the HDFS.  Now I am setting up the second master  (master2) in order to form HA.  When I used JPS to check the cluster, I found :

2782 Jps
2126 NameNode
2720 SecondaryNameNode
i.e. The datanode on this server could not be started

In the log file, found:
2012-11-16 10:28:44,851 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1356148070; datanode namespaceID = 1151604993



One of the possible solutions to fix this issue is to:  stop the cluster, reformat the NameNode, restart the cluster.
QUESTION: As I already have HBASE running on the cluster, if I reformat the NameNode, do I need to reinstall the entire HBASE? I don't mind to have all data lost as I don't have many data in HBASE and HDFS, however I don't want to re-install HBASE again.


On the other hand, I have tried another solution: stop the DataNode, edit the namespaceID in current/VERSION (i.e. set namespaceID=1151604993), restart the datanode, it doesn't work:
Warning: $HADOOP_HOME is deprecated.
starting master2, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
Exception in thread "main" java.lang.NoClassDefFoundError: master2
Caused by: java.lang.ClassNotFoundException: master2
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: master2.  Program will exit.
QUESTION: Any other solutions?



Thanks







NOTICE: This e-mail message and any attachments are confidential, subject to copyright and may be privileged. Any unauthorized use, copying or disclosure is prohibited. If you are not the intended recipient, please delete and contact the sender immediately. Please consider the environment before printing this e-mail. AVIS : le pr?sent courriel et toute pi?ce jointe qui l'accompagne sont confidentiels, prot?g?s par le droit d'auteur et peuvent ?tre couverts par le secret professionnel. Toute utilisation, copie ou divulgation non autoris?e est interdite. Si vous n'?tes pas le destinataire pr?vu de ce courriel, supprimez-le et contactez imm?diatement l'exp?diteur. Veuillez penser ? l'environnement avant d'imprimer le pr?sent courriel

RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by "Kartashov, Andy" <An...@mpac.ca>.
Vinay,

Two questions.

1.                   Configure the another namenode's configuration.

What exactly to configure.

2.                   What is zkfs?
From: Vinayakumar B [mailto:vinayakumar.b@huawei.com]
Sent: Friday, November 16, 2012 3:31 AM
To: user@hadoop.apache.org
Subject: RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Hi,

If you are moving from NonHA (single master) to HA, then follow the below steps.

1.       Configure the another namenode's configuration in the running namenode and all datanode's configurations. And configure logical fs.defaultFS

2.       Configure the shared storage related configuration.

3.       Stop the running NameNode and all datanodes.

4.       Execute 'hdfs namenode -initializeSharedEdits' from the existing namenode installation, to transfer the edits to shared storage.

5.       Now format zkfc using 'hdfs zkfc -formatZK' and start zkfc using 'hadoop-daemon.sh start zkfc'

6.       Now restart the namenode from existing installation. If all configurations are fine, then NameNode should start successfully as STANDBY, then zkfc will make it to ACTIVE.



7.       Now install the NameNode in another machine (master2) with same configuration, except 'dfs.ha.namenode.id'.

8.       Now instead of format, you need to copy the name dir contents from another namenode (master1) to master2's name dir. For this you are having 2 options.

a.       Execute 'hdfs namenode -bootStrapStandby'  from the master2 installation.

b.      Using 'scp' copy entire contents of name dir from master1 to master2's name dir.

9.       Now start the zkfc for second namenode ( No need to do zkfc format now). Also start the namenode (master2)

Regards,
Vinay-
From: Uma Maheswara Rao G [mailto:maheswara@huawei.com]
Sent: Friday, November 16, 2012 1:26 PM
To: user@hadoop.apache.org
Subject: RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs


If you format namenode, you need to cleanup storage directories of DataNode as well if that is having some data already. DN also will have namespace ID saved and compared with NN namespaceID. if you format NN, then namespaceID will be changed and DN may have still older namespaceID. So, just cleaning the data in DN would be fine.



Regards,

Uma

________________________________
From: hadoop hive [hadoophive@gmail.com]
Sent: Friday, November 16, 2012 1:15 PM
To: user@hadoop.apache.org
Subject: Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs
Seems like you havn't format your cluster (if its 1st time made).
On Fri, Nov 16, 2012 at 9:58 AM, ac@hsk.hk<ma...@hsk.hk> <ac...@hsk.hk>> wrote:
Hi,

Please help!

I have installed a Hadoop Cluster with a single master (master1) and have HBase running on the HDFS.  Now I am setting up the second master  (master2) in order to form HA.  When I used JPS to check the cluster, I found :

2782 Jps
2126 NameNode
2720 SecondaryNameNode
i.e. The datanode on this server could not be started

In the log file, found:
2012-11-16 10:28:44,851 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1356148070; datanode namespaceID = 1151604993



One of the possible solutions to fix this issue is to:  stop the cluster, reformat the NameNode, restart the cluster.
QUESTION: As I already have HBASE running on the cluster, if I reformat the NameNode, do I need to reinstall the entire HBASE? I don't mind to have all data lost as I don't have many data in HBASE and HDFS, however I don't want to re-install HBASE again.


On the other hand, I have tried another solution: stop the DataNode, edit the namespaceID in current/VERSION (i.e. set namespaceID=1151604993), restart the datanode, it doesn't work:
Warning: $HADOOP_HOME is deprecated.
starting master2, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
Exception in thread "main" java.lang.NoClassDefFoundError: master2
Caused by: java.lang.ClassNotFoundException: master2
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: master2.  Program will exit.
QUESTION: Any other solutions?
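
One side note on the startup failure quoted above: "NoClassDefFoundError: master2" suggests the daemon script was passed the hostname "master2" instead of a daemon name, so the hadoop launcher tried to run a Java class called "master2". A hedged dry-run sketch of the guard (daemon names as in the stock Hadoop 1.x scripts):

```shell
# hadoop-daemon.sh expects a daemon name (namenode, datanode, secondarynamenode,
# jobtracker, tasktracker), not a hostname; passing "master2" makes the hadoop
# launcher look for a Java class named "master2", matching the trace above.
DAEMON=datanode                 # not "master2"
case "$DAEMON" in
  namenode|datanode|secondarynamenode|jobtracker|tasktracker) ;;
  *) echo "unknown daemon: $DAEMON" >&2; exit 1 ;;
esac
echo "would run: hadoop-daemon.sh start $DAEMON"
```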



Thanks







NOTICE: This e-mail message and any attachments are confidential, subject to copyright and may be privileged. Any unauthorized use, copying or disclosure is prohibited. If you are not the intended recipient, please delete and contact the sender immediately. Please consider the environment before printing this e-mail.

Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by "ac@hsk.hk" <ac...@hsk.hk>.
Thank you very much, will try. 


On 16 Nov 2012, at 4:31 PM, Vinayakumar B wrote:

> Hi,
>  
> If you are moving from non-HA (single master) to HA, then follow the steps below.
> 1.       Configure the second namenode's settings in the running namenode's and all datanodes' configurations. Also configure the logical fs.defaultFS.
> 2.       Configure the shared-storage-related settings.
> 3.       Stop the running NameNode and all datanodes.
> 4.       Execute 'hdfs namenode -initializeSharedEdits' from the existing namenode installation, to transfer the edits to shared storage.
> 5.       Format zkfc using 'hdfs zkfc -formatZK' and start it using 'hadoop-daemon.sh start zkfc'.
> 6.       Restart the namenode from the existing installation. If all configurations are fine, the NameNode should start successfully as STANDBY, and zkfc will then make it ACTIVE.
>  
> 7.       Install the NameNode on the other machine (master2) with the same configuration, except 'dfs.ha.namenode.id'.
> 8.       Instead of formatting, copy the name dir contents from the first namenode (master1) to master2's name dir. You have 2 options:
> a.       Execute 'hdfs namenode -bootstrapStandby' from the master2 installation.
> b.      Using 'scp', copy the entire contents of the name dir from master1 to master2's name dir.
> 9.       Start zkfc for the second namenode (no need to format zkfc again). Also start the namenode (master2).
>  
> Regards,
> Vinay-
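
As a sketch of the configuration in steps 1-2, the HA-related properties look roughly like this. The nameservice name "mycluster", namenode ids "nn1"/"nn2", ports, and the shared-edits path are all placeholder assumptions; check the HDFS HA documentation for your exact Hadoop version:

```xml
<!-- hdfs-site.xml (sketch; names, hosts, and paths are placeholders) -->
<property><name>dfs.nameservices</name><value>mycluster</value></property>
<property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>master1:8020</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>master2:8020</value></property>
<!-- shared edits: an NFS mount here; a qjournal:// URI on versions with QJM -->
<property><name>dfs.namenode.shared.edits.dir</name><value>file:///mnt/shared/edits</value></property>
<property><name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>

<!-- core-site.xml: the logical fs.defaultFS from step 1 -->
<property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
```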



RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by Vinayakumar B <vi...@huawei.com>.
Hi,

 

If you are moving from non-HA (single master) to HA, then follow the steps below.

1.       Configure the second namenode's settings in the running namenode's and all datanodes' configurations. Also configure the logical fs.defaultFS.

2.       Configure the shared-storage-related settings.

3.       Stop the running NameNode and all datanodes.

4.       Execute 'hdfs namenode -initializeSharedEdits' from the existing namenode installation, to transfer the edits to shared storage.

5.       Format zkfc using 'hdfs zkfc -formatZK' and start it using 'hadoop-daemon.sh start zkfc'.

6.       Restart the namenode from the existing installation. If all configurations are fine, the NameNode should start successfully as STANDBY, and zkfc will then make it ACTIVE.

7.       Install the NameNode on the other machine (master2) with the same configuration, except 'dfs.ha.namenode.id'.

8.       Instead of formatting, copy the name dir contents from the first namenode (master1) to master2's name dir. You have 2 options:

a.       Execute 'hdfs namenode -bootstrapStandby' from the master2 installation.

b.      Using 'scp', copy the entire contents of the name dir from master1 to master2's name dir.

9.       Start zkfc for the second namenode (no need to format zkfc again). Also start the namenode (master2).

 

Regards,

Vinay-
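
The command sequence in steps 3-9 above can be sketched as a dry-run shell script. Script names are the ones from the stock Hadoop distribution; the last three commands would be run on master2, and DRY_RUN keeps everything as echoes until you flip it on a real cluster:

```shell
# Dry-run sketch of steps 3-9; set DRY_RUN=0 on a real cluster.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run stop-dfs.sh                              # 3: stop NameNode and DataNodes
run hdfs namenode -initializeSharedEdits     # 4: push edits to shared storage
run hdfs zkfc -formatZK                      # 5: format zkfc, then start it
run hadoop-daemon.sh start zkfc
run hadoop-daemon.sh start namenode          # 6: comes up STANDBY, zkfc -> ACTIVE
# --- on master2 ---
run hdfs namenode -bootstrapStandby          # 8a: copy name dir instead of format
run hadoop-daemon.sh start zkfc              # 9: no -formatZK the second time
run hadoop-daemon.sh start namenode
```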





Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by Suresh Srinivas <su...@hortonworks.com>.
Vinay, if the Hadoop docs are not clear in this regard, can you please
create a jira to add these details?




-- 
http://hortonworks.com/download/

Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by Suresh Srinivas <su...@hortonworks.com>.
Vinay, if the Hadoop docs are not clear in this regard, can you please
create a jira to add these details?

On Fri, Nov 16, 2012 at 12:31 AM, Vinayakumar B <vi...@huawei.com>wrote:

> Hi,
>
> If you are moving from non-HA (single master) to HA, then follow the steps
> below.
>
> 1. Add the other NameNode's configuration to the running NameNode's and
>    all DataNodes' configurations, and configure a logical *fs.defaultFS*.
> 2. Configure the shared-storage related settings.
> 3. Stop the running NameNode and all DataNodes.
> 4. Execute 'hdfs namenode -initializeSharedEdits' from the existing
>    NameNode installation, to transfer the edits to shared storage.
> 5. Format zkfc using 'hdfs zkfc -formatZK' and start it using
>    'hadoop-daemon.sh start zkfc'.
> 6. Restart the NameNode from the existing installation. If all the
>    configurations are fine, the NameNode should start successfully as
>    STANDBY, and zkfc will then make it ACTIVE.
>
> 7. Install the NameNode on the other machine (master2) with the same
>    configuration, except for 'dfs.ha.namenode.id'.
> 8. Instead of formatting, you need to copy the name dir contents from the
>    first namenode (master1) to master2's name dir. For this you have two
>    options:
>    a. Execute 'hdfs namenode -bootstrapStandby' from the master2
>       installation.
>    b. Using 'scp', copy the entire contents of the name dir from master1
>       to master2's name dir.
> 9. Start the zkfc for the second NameNode (no need to run the zkfc format
>    now). Also start the NameNode (master2).
>
> Regards,
>
> Vinay
>
> *From:* Uma Maheswara Rao G [mailto:maheswara@huawei.com]
> *Sent:* Friday, November 16, 2012 1:26 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: High Availability - second namenode (master2) issue:
> Incompatible namespaceIDs
>
> If you format the NameNode, you need to clean up the storage directories
> of the DataNode as well if it already has some data. The DN also has the
> namespace ID saved and compares it with the NN namespaceID. If you format
> the NN, the namespaceID will change while the DN may still have the older
> one, so just cleaning the data on the DN would be fine.
>
> Regards,
>
> Uma
> ------------------------------
>
> *From:* hadoop hive [hadoophive@gmail.com]
> *Sent:* Friday, November 16, 2012 1:15 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: High Availability - second namenode (master2) issue:
> Incompatible namespaceIDs
>
> Seems like you haven't formatted your cluster (if it was just created).
>
> On Fri, Nov 16, 2012 at 9:58 AM, ac@hsk.hk <ac...@hsk.hk> wrote:
>
> Hi,
>
> Please help!
>
> I have installed a Hadoop cluster with a single master (master1) and have
> HBase running on the HDFS. Now I am setting up the second master (master2)
> in order to form HA. When I used jps to check the cluster, I found:
>
> 2782 Jps
> 2126 NameNode
> 2720 SecondaryNameNode
>
> i.e. the DataNode on this server could not be started.
>
> In the log file I found:
>
> 2012-11-16 10:28:44,851 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
> Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID
> = 1356148070; datanode namespaceID = 1151604993
>
> One possible fix for this issue is to stop the cluster, reformat the
> NameNode, and restart the cluster.
>
> QUESTION: As I already have HBase running on the cluster, if I reformat
> the NameNode, do I need to reinstall the entire HBase? I don't mind losing
> all the data, as I don't have much in HBase or HDFS, but I don't want to
> reinstall HBase.
>
> I have also tried another solution: stop the DataNode, edit the
> namespaceID in current/VERSION (i.e. set namespaceID=1151604993), and
> restart the DataNode. It doesn't work:
>
> Warning: $HADOOP_HOME is deprecated.
> starting master2, logging to
> /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
> Exception in thread "main" java.lang.NoClassDefFoundError: master2
> Caused by: java.lang.ClassNotFoundException: master2
> at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> Could not find the main class: master2. Program will exit.
>
> QUESTION: Any other solutions?
>
> Thanks



-- 
http://hortonworks.com/download/
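Vinay's steps above boil down to a short command sequence. The following is a sketch only: it assumes a Hadoop 2.x deployment with shared edits and ZKFC support, and the exact flag spellings should be checked against your release's `hdfs` help before running anything.

```shell
# Sketch of the non-HA to HA migration, following the numbered steps above.
# On master1, after updating core-site.xml/hdfs-site.xml (steps 1-2):
stop-dfs.sh                            # step 3: stop the NN and all DNs
hdfs namenode -initializeSharedEdits   # step 4: move edits to shared storage
hdfs zkfc -formatZK                    # step 5: format the failover state in ZK
hadoop-daemon.sh start zkfc            #         and start the zkfc daemon
hadoop-daemon.sh start namenode        # step 6: NN starts STANDBY, zkfc activates it

# On master2, with the same configs except dfs.ha.namenode.id (steps 7-8):
hdfs namenode -bootstrapStandby        # step 8a: pull the name dir from master1
hadoop-daemon.sh start zkfc            # step 9: no second -formatZK needed
hadoop-daemon.sh start namenode        #         start the standby NN
```

These commands only make sense against a live cluster with the HA properties already in place; `-bootstrapStandby` (step 8a) is generally preferred over the manual `scp` of step 8b because it validates the configuration as it copies.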



RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by Uma Maheswara Rao G <ma...@huawei.com>.
If you format the NameNode, you need to clean up the storage directories of the DataNode as well if it already has some data. The DN also has the namespace ID saved and compares it with the NN namespaceID. If you format the NN, the namespaceID will change while the DN may still have the older one, so just cleaning the data on the DN would be fine.



Regards,

Uma

________________________________
From: hadoop hive [hadoophive@gmail.com]
Sent: Friday, November 16, 2012 1:15 PM
To: user@hadoop.apache.org
Subject: Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Seems like you haven't formatted your cluster (if it was just created).

On Fri, Nov 16, 2012 at 9:58 AM, ac@hsk.hk <ac...@hsk.hk> wrote:
Hi,

Please help!

I have installed a Hadoop cluster with a single master (master1) and have HBase running on the HDFS. Now I am setting up the second master (master2) in order to form HA. When I used jps to check the cluster, I found:

2782 Jps
2126 NameNode
2720 SecondaryNameNode
i.e. the DataNode on this server could not be started.

In the log file I found:
2012-11-16 10:28:44,851 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1356148070; datanode namespaceID = 1151604993

One possible fix for this issue is to stop the cluster, reformat the NameNode, and restart the cluster.
QUESTION: As I already have HBase running on the cluster, if I reformat the NameNode, do I need to reinstall the entire HBase? I don't mind losing all the data, as I don't have much in HBase or HDFS, but I don't want to reinstall HBase.

I have also tried another solution: stop the DataNode, edit the namespaceID in current/VERSION (i.e. set namespaceID=1151604993), and restart the DataNode. It doesn't work:
Warning: $HADOOP_HOME is deprecated.
starting master2, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
Exception in thread "main" java.lang.NoClassDefFoundError: master2
Caused by: java.lang.ClassNotFoundException: master2
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: master2. Program will exit.
QUESTION: Any other solutions?



Thanks
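Uma's advice above, clearing the DataNode's storage directory after a NameNode reformat so the DN re-registers with the new namespaceID, can be sketched in shell. The paths below are throwaway stand-ins for demonstration; on a real node use the dfs.data.dir value from your hdfs-site.xml (here, /app/hadoop/tmp/dfs/data) and stop the daemons first.

```shell
# Demo of the DN cleanup. DATA_DIR stands in for the real dfs.data.dir;
# the mkdir/echo lines fabricate a stale storage layout to act on.
DATA_DIR=/tmp/dfs_data_demo
mkdir -p "$DATA_DIR/current"
echo 'namespaceID=1151604993' > "$DATA_DIR/current/VERSION"   # stale ID

# stop-all.sh                  # on a real cluster: stop the DataNode first
rm -rf "$DATA_DIR"/*           # wipe the blocks and the stale VERSION file
# start-all.sh                 # restart; the DN adopts the NN's namespaceID

ls -A "$DATA_DIR"              # prints nothing: the directory is now empty
```

Note that this deletes all block data on that DataNode, which is acceptable here only because the poster said data loss is fine.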








RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by "Kartashov, Andy" <An...@mpac.ca>.
Agreed. Whenever you have a namespaceID disagreement between the NN and a DN, simply delete all the entries in your dfs/data directory and restart the DN. No need to reformat the NN.

Rgds,
AK47

From: shashwat shriparv [mailto:dwivedishashwat@gmail.com]
Sent: Friday, November 16, 2012 2:53 AM
To: user@hadoop.apache.org
Subject: Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Delete the VERSION for the datanode before format.



∞
Shashwat Shriparv



On Fri, Nov 16, 2012 at 1:15 PM, hadoop hive <ha...@gmail.com> wrote:
Seems like you haven't formatted your cluster (if it was just created).

On Fri, Nov 16, 2012 at 9:58 AM, ac@hsk.hk <ac...@hsk.hk> wrote:
Hi,

Please help!

I have installed a Hadoop cluster with a single master (master1) and have HBase running on the HDFS. Now I am setting up the second master (master2) in order to form HA. When I used jps to check the cluster, I found:

2782 Jps
2126 NameNode
2720 SecondaryNameNode
i.e. the DataNode on this server could not be started.

In the log file I found:
2012-11-16 10:28:44,851 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1356148070; datanode namespaceID = 1151604993

One possible fix for this issue is to stop the cluster, reformat the NameNode, and restart the cluster.
QUESTION: As I already have HBase running on the cluster, if I reformat the NameNode, do I need to reinstall the entire HBase? I don't mind losing all the data, as I don't have much in HBase or HDFS, but I don't want to reinstall HBase.

I have also tried another solution: stop the DataNode, edit the namespaceID in current/VERSION (i.e. set namespaceID=1151604993), and restart the DataNode. It doesn't work:
Warning: $HADOOP_HOME is deprecated.
starting master2, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
Exception in thread "main" java.lang.NoClassDefFoundError: master2
Caused by: java.lang.ClassNotFoundException: master2
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: master2. Program will exit.
QUESTION: Any other solutions?



Thanks









Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by shashwat shriparv <dw...@gmail.com>.
It seems you have formatted the NameNode twice. In this case the new
namespace ID is not propagated to the DataNodes.

Stop all the Hadoop daemons and then try the following:

1. vi $PATH_TO_HADOOP_DATASTORE/dfs/name/current/VERSION and copy the
namespaceID value.
2. Now open a terminal on every machine running a DataNode and do the
following:
vi $PATH_TO_HADOOP_DATASTORE/dfs/data/current/VERSION
Delete the entry corresponding to namespaceID and paste the value copied in
step 1.
Save and exit.

Restart the Hadoop daemons without formatting the NameNode.



∞
Shashwat Shriparv




On Fri, Nov 16, 2012 at 1:22 PM, shashwat shriparv <
dwivedishashwat@gmail.com> wrote:

> Delete the VERSION for the datanode before format.
>
>
>
> ∞
> Shashwat Shriparv
>
>
>
>
> On Fri, Nov 16, 2012 at 1:15 PM, hadoop hive <ha...@gmail.com> wrote:
>
>> Seems like you haven't formatted your cluster (if it was just created).
>>
>> On Fri, Nov 16, 2012 at 9:58 AM, ac@hsk.hk <ac...@hsk.hk> wrote:
>>
>>> Hi,
>>>
>>> Please help!
>>>
>>> I have installed a Hadoop cluster with a single master (master1) and
>>> have HBase running on the HDFS. Now I am setting up the second master
>>> (master2) in order to form HA. When I used jps to check the cluster, I
>>> found:
>>>
>>> 2782 Jps
>>> 2126 NameNode
>>> 2720 SecondaryNameNode
>>> i.e. the DataNode on this server could not be started.
>>>
>>> In the log file I found:
>>> 2012-11-16 10:28:44,851 ERROR
>>> org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
>>> Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID
>>> = 1356148070; datanode namespaceID = 1151604993
>>>
>>> One possible fix for this issue is to stop the cluster, reformat the
>>> NameNode, and restart the cluster.
>>> QUESTION: As I already have HBase running on the cluster, if I reformat
>>> the NameNode, do I need to reinstall the entire HBase? I don't mind losing
>>> all the data, as I don't have much in HBase or HDFS, but I don't want to
>>> reinstall HBase.
>>>
>>> I have also tried another solution: stop the DataNode, edit the
>>> namespaceID in current/VERSION (i.e. set namespaceID=1151604993), and
>>> restart the DataNode. It doesn't work:
>>> Warning: $HADOOP_HOME is deprecated.
>>> starting master2, logging to
>>> /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
>>> Exception in thread "main" java.lang.NoClassDefFoundError: master2
>>> Caused by: java.lang.ClassNotFoundException: master2
>>> at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>>> Could not find the main class: master2. Program will exit.
>>> QUESTION: Any other solutions?
>>>
>>>
>>>
>>> Thanks
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>
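The VERSION edit described in the post above can be scripted instead of done by hand in vi. A minimal sketch follows, using throwaway files under /tmp to stand in for the real dfs/name/current/VERSION and dfs/data/current/VERSION files; on a real cluster, stop all daemons first and operate on the files under your dfs.name.dir and dfs.data.dir.

```shell
# Sync a DataNode's namespaceID with the NameNode's, per the steps above.
# The /tmp paths and printf lines fabricate demo VERSION files to act on.
NN_VERSION=/tmp/nn_version_demo
DN_VERSION=/tmp/dn_version_demo
printf 'namespaceID=1356148070\nstorageType=NAME_NODE\n' > "$NN_VERSION"
printf 'namespaceID=1151604993\nstorageType=DATA_NODE\n' > "$DN_VERSION"

# Step 1: read the NameNode's namespaceID.
NSID=$(grep '^namespaceID=' "$NN_VERSION" | cut -d= -f2)

# Step 2: write it into the DataNode's VERSION file
# (GNU sed shown; on BSD/macOS use: sed -i '' ...).
sed -i "s/^namespaceID=.*/namespaceID=${NSID}/" "$DN_VERSION"

grep '^namespaceID=' "$DN_VERSION"   # now matches the NameNode: 1356148070
```

Run on every DataNode; unlike wiping dfs/data, this keeps the existing block data.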


RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by "Kartashov, Andy" <An...@mpac.ca>.
Agreed here. Whenever you have id disagreement between NN and DN. Simply, delete all the entries in your df/data directory and restart DN. No need to reformat NN.

Rgds,
AK47

From: shashwat shriparv [mailto:dwivedishashwat@gmail.com]
Sent: Friday, November 16, 2012 2:53 AM
To: user@hadoop.apache.org
Subject: Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Delete the VERSION for the datanode before format.



∞
Shashwat Shriparv



On Fri, Nov 16, 2012 at 1:15 PM, hadoop hive <ha...@gmail.com>> wrote:
Seems like you havn't format your cluster (if its 1st time made).

On Fri, Nov 16, 2012 at 9:58 AM, ac@hsk.hk<ma...@hsk.hk> <ac...@hsk.hk>> wrote:
Hi,

Please help!

I have installed a Hadoop Cluster with a single master (master1) and have HBase running on the HDFS.  Now I am setting up the second master  (master2) in order to form HA.  When I used JPS to check the cluster, I found :

2782 Jps
2126 NameNode
2720 SecondaryNameNode
i.e. The datanode on this server could not be started

In the log file, found:
2012-11-16 10:28:44,851 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1356148070<tel:1356148070>; datanode namespaceID = 1151604993<tel:1151604993>



One of the possible solutions to fix this issue is to:  stop the cluster, reformat the NameNode, restart the cluster.
QUESTION: As I already have HBASE running on the cluster, if I reformat the NameNode, do I need to reinstall the entire HBASE? I don't mind to have all data lost as I don't have many data in HBASE and HDFS, however I don't want to re-install HBASE again.


On the other hand, I have tried another solution: stop the DataNode, edit the namespaceID in current/VERSION (i.e. set namespaceID=1151604993<tel:1151604993>), restart the datanode, it doesn't work:
Warning: $HADOOP_HOME is deprecated.
starting master2, logging to /usr/local/hadoop-1.0.4/libexec/../logs/hadoop-hduser-master2-master2.out
Exception in thread "main" java.lang.NoClassDefFoundError: master2
Caused by: java.lang.ClassNotFoundException: master2
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: master2.  Program will exit.
QUESTION: Any other solutions?



Thanks








NOTICE: This e-mail message and any attachments are confidential, subject to copyright and may be privileged. Any unauthorized use, copying or disclosure is prohibited. If you are not the intended recipient, please delete and contact the sender immediately. Please consider the environment before printing this e-mail.

RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by "Kartashov, Andy" <An...@mpac.ca>.
Agreed. Whenever you have a namespaceID disagreement between the NN and DN, simply delete all the entries in your dfs/data directory and restart the DN. No need to reformat the NN.

Rgds,
AK47
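A minimal sketch of that fix, assuming the Hadoop 1.x layout from the original post. The data directory path and daemon commands below are examples to adapt to your dfs.data.dir setting, not a definitive procedure:

```shell
# wipe_dn_storage: remove stale DataNode block storage (including the
# VERSION file carrying the old namespaceID) so the DN re-registers
# with the NameNode's current namespaceID on restart.
wipe_dn_storage() {
  local data_dir="$1"        # e.g. /app/hadoop/tmp/dfs/data
  rm -rf "${data_dir:?}"/*   # :? guards against an empty/unset path
}

# Typical sequence on the affected host (Hadoop 1.x commands):
#   bin/hadoop-daemon.sh stop datanode
#   wipe_dn_storage /app/hadoop/tmp/dfs/data
#   bin/hadoop-daemon.sh start datanode
```

Note this destroys the blocks stored on that DataNode; with replication they are re-replicated from other nodes, but on a single-node setup the data is gone.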

From: shashwat shriparv [mailto:dwivedishashwat@gmail.com]
Sent: Friday, November 16, 2012 2:53 AM
To: user@hadoop.apache.org
Subject: Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Delete the VERSION file for the DataNode before formatting.



∞
Shashwat Shriparv




Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by shashwat shriparv <dw...@gmail.com>.
It seems you have formatted the NameNode twice. In this case the new namespaceID
is not propagated to the DataNodes.

Stop all the Hadoop daemons and then try the following:

1. Open $PATH_TO_HADOOP_DATASTORE/dfs/name/current/VERSION and copy the
namespaceID value.
2. On every machine running a DataNode, open
$PATH_TO_HADOOP_DATASTORE/dfs/data/current/VERSION, replace the value of the
namespaceID entry with the value copied in step 1, then save and exit.

Restart the Hadoop daemons without formatting the NameNode.
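The two steps above can be sketched as a small script. The VERSION paths follow the $PATH_TO_HADOOP_DATASTORE layout assumed in the steps, and GNU sed's -i flag is assumed:

```shell
# sync_namespace_id: copy the namespaceID recorded by the NameNode into
# a DataNode's VERSION file, keeping the DataNode's block data intact.
sync_namespace_id() {
  local nn_version="$1"   # e.g. $PATH_TO_HADOOP_DATASTORE/dfs/name/current/VERSION
  local dn_version="$2"   # e.g. $PATH_TO_HADOOP_DATASTORE/dfs/data/current/VERSION
  local id
  # Step 1: read the namespaceID from the NameNode's VERSION file.
  id=$(grep '^namespaceID=' "$nn_version" | cut -d= -f2)
  # Step 2: rewrite the namespaceID line in the DataNode's VERSION file.
  sed -i "s/^namespaceID=.*/namespaceID=$id/" "$dn_version"
}
```

Run it on each DataNode host while the daemons are stopped, then restart without formatting.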



∞
Shashwat Shriparv







Re: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by shashwat shriparv <dw...@gmail.com>.
Delete the VERSION file for the DataNode before formatting.



∞
Shashwat Shriparv






RE: High Availability - second namenode (master2) issue: Incompatible namespaceIDs

Posted by Uma Maheswara Rao G <ma...@huawei.com>.
If you format the NameNode, you need to clean up the DataNode's storage directories as well if they already contain data. The DN also saves the namespaceID and compares it with the NN's namespaceID; when you format the NN, its namespaceID changes while the DN may still have the older one. So just cleaning the data on the DN would be fine.
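One way to confirm that disagreement before cleaning anything is to print the namespaceID recorded on each side. A sketch; the VERSION file paths are examples, so point them at your actual dfs.name.dir and dfs.data.dir locations:

```shell
# check_namespace_ids: show the namespaceID stored by the NameNode and
# by a DataNode so a mismatch can be confirmed before wiping data.
check_namespace_ids() {
  local nn_version="$1"   # e.g. /app/hadoop/tmp/dfs/name/current/VERSION
  local dn_version="$2"   # e.g. /app/hadoop/tmp/dfs/data/current/VERSION
  echo "NN: $(grep '^namespaceID=' "$nn_version")"
  echo "DN: $(grep '^namespaceID=' "$dn_version")"
}
```

If the two lines print different IDs, you are hitting exactly the error from the DataNode log above.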



Regards,

Uma

