Posted to mapreduce-user@hadoop.apache.org by Manickam P <ma...@outlook.com> on 2013/09/23 15:39:45 UTC

Error while configuring HDFS federation

Guys,

I'm trying to configure HDFS federation with the 2.1.0-beta release. I have three machines, and I want to run two NameNodes and one DataNode.

I have already set up passwordless SSH and the host entries. When I start the cluster, I get the errors below.

On the first node I'm getting this error:
java.net.BindException: Port in use: lab-hadoop.eng.com:50070

On the other node I'm getting this error:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

My core-site.xml has the following:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://10.101.89.68:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/lab/hadoop-2.1.0-beta/tmp</value>
  </property>
</configuration>
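As an aside, fs.default.name is deprecated in Hadoop 2.x in favor of fs.defaultFS. A hedged sketch of the replacement entry, reusing the address from the config above:

```xml
<!-- fs.default.name is deprecated in Hadoop 2.x; fs.defaultFS is the
     current name. The value below reuses the address already posted. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://10.101.89.68:9000</value>
</property>
```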

My hdfs-site.xml has the following:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.federation.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>10.101.89.68:9001</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1</name>
    <value>10.101.89.68:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address.ns1</name>
    <value>10.101.89.68:50090</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>10.101.89.69:9001</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns2</name>
    <value>10.101.89.69:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address.ns2</name>
    <value>10.101.89.69:50090</value>
  </property>
</configuration>
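Note that with no explicit storage directory configured, the NameNode keeps its image under ${hadoop.tmp.dir}/dfs/name, which is exactly the path the second exception complains about. A hedged sketch of an explicit storage-directory entry for hdfs-site.xml (the path shown is an example, not taken from the original post):

```xml
<!-- Hypothetical addition for each NameNode host: pin the NameNode
     storage directory explicitly instead of relying on hadoop.tmp.dir.
     Adjust the path to a directory the HDFS user owns. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/lab/hadoop-2.1.0-beta/dfs/name</value>
</property>
```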

Please help me to fix this error. 


Thanks,
Manickam P

RE: Error while configuring HDFS federation

Posted by Manickam P <ma...@outlook.com>.
Hi,

I followed your steps. The bind error is resolved, but I'm still getting the second exception. The complete stack trace is below.

2013-09-23 10:26:01,887 INFO org.mortbay.log: Stopped SelectChannelConnector@lab2-hadoop2-vm1.eng.com:50070
2013-09-23 10:26:01,988 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-09-23 10:26:01,989 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-09-23 10:26:01,990 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-09-23 10:26:01,991 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:200)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:777)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:418)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:466)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:659)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1221)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1287)
2013-09-23 10:26:02,001 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-09-23 10:26:02,018 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
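A quick sanity check that the directory named in the exception exists and is traversable by the user starting the NameNode (path taken from the log above) can be sketched as:

```shell
# Check that a NameNode storage directory exists and is readable and
# traversable by the current user; the path comes from the exception above.
check_storage_dir() {
  [ -d "$1" ] && [ -r "$1" ] && [ -x "$1" ]
}

DIR=/home/lab/hadoop-2.1.0-beta/tmp/dfs/name
if check_storage_dir "$DIR"; then
  echo "storage dir accessible: $DIR"
else
  echo "storage dir missing or inaccessible: $DIR"
fi
```

If the directory is missing, formatting the NameNode (as suggested below in the thread) recreates it.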

Thanks,
Manickam P

From: eladi@mellanox.com
To: user@hadoop.apache.org
Subject: RE: Error while configuring HDFS federation
Date: Mon, 23 Sep 2013 14:05:47 +0000

A port in use may be held by a live process or by a stale "ghost" process that never exited. The second error is usually caused by inconsistent permissions across nodes, and/or by a DFS that still needs to be formatted.
 
I suggest the following:

1. sbin/stop-dfs.sh && sbin/stop-yarn.sh
2. sudo killall java  (on all nodes)
3. sudo chmod -R 755 /home/lab/hadoop-2.1.0-beta/tmp/dfs  (on all nodes)
4. sudo rm -rf /home/lab/hadoop-2.1.0-beta/tmp/dfs/*  (on all nodes)
5. bin/hdfs namenode -format -force
6. sbin/start-dfs.sh && sbin/start-yarn.sh

Then see if you get that error again.
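Before restarting, one way to confirm the old listener is really gone is to probe the port from the failing host (a bash-only sketch; the port comes from the BindException above):

```shell
# Probe a local TCP port using bash's /dev/tcp redirection (bash-only,
# not POSIX sh). Returns success (0) if something is listening.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 50070; then
  echo "port 50070 still in use; an old NameNode process may be lingering"
else
  echo "port 50070 is free"
fi
```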



RE: Error while configuring HDFS federation

Posted by Manickam P <ma...@outlook.com>.
Hi, 

Thanks for your inputs. I fixed the issue. 


Thanks,
Manickam P

From: eladi@mellanox.com
To: user@hadoop.apache.org
Subject: RE: Error while configuring HDFS fedration
Date: Mon, 23 Sep 2013 14:05:47 +0000









Ports in use may result from actual processes using them, or just ghost processes. The second error may be caused by inconsistent permissions on different nodes,
 and/or a format is needed on DFS.
 
I suggest the following:
 
1.      
sbin/stop-dfs.sh && sbin/stop-yarn.sh
2.      
sudo killall java
(on all nodes)
3.      
sudo chmod –R 755 /home/lab/hadoop-2.1.0-beta/tmp/dfs
(on all nodes)
4.      
sudo rm –rf /home/lab/hadoop-2.1.0-beta/tmp/dfs/*
(on all nodes)
5.      
bin/hdfs namenode –format –force

6.      
sbin/start-dfs.sh && sbin/start-yarn.sh
 
Then see if you get that error again.
 


From: Manickam P [mailto:manickam.p@outlook.com]


Sent: Monday, September 23, 2013 4:44 PM

To: user@hadoop.apache.org

Subject: Error while configuring HDFS fedration


 

Guys,



I'm trying to configure HDFS federation with 2.1.0 beta version. I am having 3 machines in that i want to have two name nodes and one data node.




I have done the other thing like password less ssh and host entries properly. when i start the cluster i'm getting the below error.




In node one i'm getting this error. 

java.net.BindException: Port in use: lab-hadoop.eng.com:50070



In another node i'm getting this error.


org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.



My core-site xml has the below. 

<configuration>

  <property>

    <name>fs.default.name</name>

    <value>hdfs://10.101.89.68:9000</value>

  </property>

  <property>

    <name>hadoop.tmp.dir</name>

    <value>/home/lab/hadoop-2.1.0-beta/tmp</value>

  </property>

</configuration>



My hdfs-site xml has the below.

<configuration>

   <property>

     <name>dfs.replication</name>

     <value>2</value>

   </property>

   <property>

     <name>dfs.permissions</name>

     <value>false</value>

   </property>

   <property>

        <name>dfs.federation.nameservices</name>

        <value>ns1,ns2</value>

    </property>

    <property>

        <name>dfs.namenode.rpc-address.ns1</name>

        <value>10.101.89.68:9001</value>

    </property>

   <property>

    <name>dfs.namenode.http-address.ns1</name>

    <value>10.101.89.68:50070</value>

   </property>

   <property>

        <name>dfs.namenode.secondary.http-address.ns1</name>

        <value>10.101.89.68:50090</value>

    </property>

    <property>

        <name>dfs.namenode.rpc-address.ns2</name>

        <value>10.101.89.69:9001</value>

    </property>

   <property>

    <name>dfs.namenode.http-address.ns2</name>

    <value>10.101.89.69:50070</value>

   </property>

   <property>

        <name>dfs.namenode.secondary.http-address.ns2</name>

        <value>10.101.89.69:50090</value>

    </property>

 </configuration>



Please help me to fix this error. 





Thanks,

Manickam P





 		 	   		  

RE: Error while configuring HDFS fedration

Posted by Manickam P <ma...@outlook.com>.
Hi, 

Thanks for your inputs. I fixed the issue. 


Thanks,
Manickam P

From: eladi@mellanox.com
To: user@hadoop.apache.org
Subject: RE: Error while configuring HDFS fedration
Date: Mon, 23 Sep 2013 14:05:47 +0000









Ports in use may result from actual processes using them, or just ghost processes. The second error may be caused by inconsistent permissions on different nodes,
 and/or a format is needed on DFS.
 
I suggest the following:
 
1.      
sbin/stop-dfs.sh && sbin/stop-yarn.sh
2.      
sudo killall java
(on all nodes)
3.      
sudo chmod –R 755 /home/lab/hadoop-2.1.0-beta/tmp/dfs
(on all nodes)
4.      
sudo rm –rf /home/lab/hadoop-2.1.0-beta/tmp/dfs/*
(on all nodes)
5.      
bin/hdfs namenode –format –force

6.      
sbin/start-dfs.sh && sbin/start-yarn.sh
 
Then see if you get that error again.
 


From: Manickam P [mailto:manickam.p@outlook.com]


Sent: Monday, September 23, 2013 4:44 PM

To: user@hadoop.apache.org

Subject: Error while configuring HDFS fedration


 

Guys,



I'm trying to configure HDFS federation with 2.1.0 beta version. I am having 3 machines in that i want to have two name nodes and one data node.




I have done the other thing like password less ssh and host entries properly. when i start the cluster i'm getting the below error.




In node one i'm getting this error. 

java.net.BindException: Port in use: lab-hadoop.eng.com:50070



In another node i'm getting this error.


org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.



My core-site xml has the below. 

<configuration>

  <property>

    <name>fs.default.name</name>

    <value>hdfs://10.101.89.68:9000</value>

  </property>

  <property>

    <name>hadoop.tmp.dir</name>

    <value>/home/lab/hadoop-2.1.0-beta/tmp</value>

  </property>

</configuration>



My hdfs-site xml has the below.

<configuration>

   <property>

     <name>dfs.replication</name>

     <value>2</value>

   </property>

   <property>

     <name>dfs.permissions</name>

     <value>false</value>

   </property>

   <property>

        <name>dfs.federation.nameservices</name>

        <value>ns1,ns2</value>

    </property>

    <property>

        <name>dfs.namenode.rpc-address.ns1</name>

        <value>10.101.89.68:9001</value>

    </property>

   <property>

    <name>dfs.namenode.http-address.ns1</name>

    <value>10.101.89.68:50070</value>

   </property>

   <property>

        <name>dfs.namenode.secondary.http-address.ns1</name>

        <value>10.101.89.68:50090</value>

    </property>

    <property>

        <name>dfs.namenode.rpc-address.ns2</name>

        <value>10.101.89.69:9001</value>

    </property>

   <property>

    <name>dfs.namenode.http-address.ns2</name>

    <value>10.101.89.69:50070</value>

   </property>

   <property>

        <name>dfs.namenode.secondary.http-address.ns2</name>

        <value>10.101.89.69:50090</value>

    </property>

 </configuration>



Please help me to fix this error. 





Thanks,

Manickam P





 		 	   		  

RE: Error while configuring HDFS fedration

Posted by Manickam P <ma...@outlook.com>.
Hi,

I followed your steps. That bind error got resolved but still i'm getting the second exception. I've given the complete stack below. 

2013-09-23 10:26:01,887 INFO org.mortbay.log: Stopped SelectChannelConnector@lab2-hadoop2-vm1.eng.com:50070
2013-09-23 10:26:01,988 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-09-23 10:26:01,989 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-09-23 10:26:01,990 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-09-23 10:26:01,991 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:200)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:777)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:418)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:466)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:659)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1221)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1287)
2013-09-23 10:26:02,001 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-09-23 10:26:02,018 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 

Thanks,
Manickam P

From: eladi@mellanox.com
To: user@hadoop.apache.org
Subject: RE: Error while configuring HDFS fedration
Date: Mon, 23 Sep 2013 14:05:47 +0000









Ports in use may result from actual processes using them, or just ghost processes. The second error may be caused by inconsistent permissions on different nodes,
 and/or a format is needed on DFS.
 
I suggest the following:
 
1.      
sbin/stop-dfs.sh && sbin/stop-yarn.sh
2.      
sudo killall java
(on all nodes)
3.      
sudo chmod –R 755 /home/lab/hadoop-2.1.0-beta/tmp/dfs
(on all nodes)
4.      
sudo rm –rf /home/lab/hadoop-2.1.0-beta/tmp/dfs/*
(on all nodes)
5.      
bin/hdfs namenode –format –force

6.      
sbin/start-dfs.sh && sbin/start-yarn.sh
 
Then see if you get that error again.
 


From: Manickam P [mailto:manickam.p@outlook.com]


Sent: Monday, September 23, 2013 4:44 PM

To: user@hadoop.apache.org

Subject: Error while configuring HDFS fedration


 

Guys,



I'm trying to configure HDFS federation with 2.1.0 beta version. I am having 3 machines in that i want to have two name nodes and one data node.




I have done the other thing like password less ssh and host entries properly. when i start the cluster i'm getting the below error.




In node one i'm getting this error. 

java.net.BindException: Port in use: lab-hadoop.eng.com:50070



In another node i'm getting this error.


org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.



My core-site xml has the below. 

<configuration>

  <property>

    <name>fs.default.name</name>

    <value>hdfs://10.101.89.68:9000</value>

  </property>

  <property>

    <name>hadoop.tmp.dir</name>

    <value>/home/lab/hadoop-2.1.0-beta/tmp</value>

  </property>

</configuration>



My hdfs-site xml has the below.

<configuration>

   <property>

     <name>dfs.replication</name>

     <value>2</value>

   </property>

   <property>

     <name>dfs.permissions</name>

     <value>false</value>

   </property>

   <property>

        <name>dfs.federation.nameservices</name>

        <value>ns1,ns2</value>

    </property>

    <property>

        <name>dfs.namenode.rpc-address.ns1</name>

        <value>10.101.89.68:9001</value>

    </property>

   <property>

    <name>dfs.namenode.http-address.ns1</name>

    <value>10.101.89.68:50070</value>

   </property>

   <property>

        <name>dfs.namenode.secondary.http-address.ns1</name>

        <value>10.101.89.68:50090</value>

    </property>

    <property>

        <name>dfs.namenode.rpc-address.ns2</name>

        <value>10.101.89.69:9001</value>

    </property>

   <property>

    <name>dfs.namenode.http-address.ns2</name>

    <value>10.101.89.69:50070</value>

   </property>

   <property>

        <name>dfs.namenode.secondary.http-address.ns2</name>

        <value>10.101.89.69:50090</value>

    </property>

 </configuration>



Please help me to fix this error. 





Thanks,

Manickam P





 		 	   		  

RE: Error while configuring HDFS fedration

Posted by Manickam P <ma...@outlook.com>.
Hi,

I followed your steps. That bind error got resolved but still i'm getting the second exception. I've given the complete stack below. 

2013-09-23 10:26:01,887 INFO org.mortbay.log: Stopped SelectChannelConnector@lab2-hadoop2-vm1.eng.com:50070
2013-09-23 10:26:01,988 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-09-23 10:26:01,989 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-09-23 10:26:01,990 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-09-23 10:26:01,991 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:200)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:777)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:418)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:466)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:659)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1221)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1287)
2013-09-23 10:26:02,001 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-09-23 10:26:02,018 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 

Thanks,
Manickam P

From: eladi@mellanox.com
To: user@hadoop.apache.org
Subject: RE: Error while configuring HDFS fedration
Date: Mon, 23 Sep 2013 14:05:47 +0000









Ports in use may result from actual processes using them, or just ghost processes. The second error may be caused by inconsistent permissions on different nodes,
 and/or a format is needed on DFS.
 
I suggest the following:
 
1.      
sbin/stop-dfs.sh && sbin/stop-yarn.sh
2.      
sudo killall java
(on all nodes)
3.      
sudo chmod –R 755 /home/lab/hadoop-2.1.0-beta/tmp/dfs
(on all nodes)
4.      
sudo rm –rf /home/lab/hadoop-2.1.0-beta/tmp/dfs/*
(on all nodes)
5.      
bin/hdfs namenode –format –force

6.      
sbin/start-dfs.sh && sbin/start-yarn.sh
 
Then see if you get that error again.
 


From: Manickam P [mailto:manickam.p@outlook.com]


Sent: Monday, September 23, 2013 4:44 PM

To: user@hadoop.apache.org

Subject: Error while configuring HDFS fedration


 

Guys,



I'm trying to configure HDFS federation with 2.1.0 beta version. I am having 3 machines in that i want to have two name nodes and one data node.




I have done the other thing like password less ssh and host entries properly. when i start the cluster i'm getting the below error.




In node one i'm getting this error. 

java.net.BindException: Port in use: lab-hadoop.eng.com:50070



In another node i'm getting this error.


org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.



My core-site xml has the below. 

<configuration>

  <property>

    <name>fs.default.name</name>

    <value>hdfs://10.101.89.68:9000</value>

  </property>

  <property>

    <name>hadoop.tmp.dir</name>

    <value>/home/lab/hadoop-2.1.0-beta/tmp</value>

  </property>

</configuration>



My hdfs-site xml has the below.

<configuration>

   <property>

     <name>dfs.replication</name>

     <value>2</value>

   </property>

   <property>

     <name>dfs.permissions</name>

     <value>false</value>

   </property>

   <property>

        <name>dfs.federation.nameservices</name>

        <value>ns1,ns2</value>

    </property>

    <property>

        <name>dfs.namenode.rpc-address.ns1</name>

        <value>10.101.89.68:9001</value>

    </property>

   <property>

    <name>dfs.namenode.http-address.ns1</name>

    <value>10.101.89.68:50070</value>

   </property>

   <property>

        <name>dfs.namenode.secondary.http-address.ns1</name>

        <value>10.101.89.68:50090</value>

    </property>

    <property>

        <name>dfs.namenode.rpc-address.ns2</name>

        <value>10.101.89.69:9001</value>

    </property>

   <property>

    <name>dfs.namenode.http-address.ns2</name>

    <value>10.101.89.69:50070</value>

   </property>

   <property>

        <name>dfs.namenode.secondary.http-address.ns2</name>

        <value>10.101.89.69:50090</value>

    </property>

 </configuration>



Please help me to fix this error. 





Thanks,

Manickam P





 		 	   		  

RE: Error while configuring HDFS fedration

Posted by Manickam P <ma...@outlook.com>.
Hi, 

Thanks for your inputs. I fixed the issue. 


Thanks,
Manickam P

From: eladi@mellanox.com
To: user@hadoop.apache.org
Subject: RE: Error while configuring HDFS fedration
Date: Mon, 23 Sep 2013 14:05:47 +0000









Ports in use may result from actual processes using them, or just ghost processes. The second error may be caused by inconsistent permissions on different nodes,
 and/or a format is needed on DFS.
 
I suggest the following:
 
1.      
sbin/stop-dfs.sh && sbin/stop-yarn.sh
2.      
sudo killall java
(on all nodes)
3.      
sudo chmod –R 755 /home/lab/hadoop-2.1.0-beta/tmp/dfs
(on all nodes)
4.      
sudo rm –rf /home/lab/hadoop-2.1.0-beta/tmp/dfs/*
(on all nodes)
5.      
bin/hdfs namenode –format –force

6.      
sbin/start-dfs.sh && sbin/start-yarn.sh
 
Then see if you get that error again.
 


From: Manickam P [mailto:manickam.p@outlook.com]


Sent: Monday, September 23, 2013 4:44 PM

To: user@hadoop.apache.org

Subject: Error while configuring HDFS fedration


 

Guys,



I'm trying to configure HDFS federation with 2.1.0 beta version. I am having 3 machines in that i want to have two name nodes and one data node.




I have done the other thing like password less ssh and host entries properly. when i start the cluster i'm getting the below error.




In node one i'm getting this error. 

java.net.BindException: Port in use: lab-hadoop.eng.com:50070



In another node i'm getting this error.


org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.



My core-site xml has the below. 

<configuration>

  <property>

    <name>fs.default.name</name>

    <value>hdfs://10.101.89.68:9000</value>

  </property>

  <property>

    <name>hadoop.tmp.dir</name>

    <value>/home/lab/hadoop-2.1.0-beta/tmp</value>

  </property>

</configuration>



My hdfs-site xml has the below.

<configuration>

   <property>

     <name>dfs.replication</name>

     <value>2</value>

   </property>

   <property>

     <name>dfs.permissions</name>

     <value>false</value>

   </property>

   <property>

        <name>dfs.federation.nameservices</name>

        <value>ns1,ns2</value>

    </property>

    <property>

        <name>dfs.namenode.rpc-address.ns1</name>

        <value>10.101.89.68:9001</value>

    </property>

   <property>

    <name>dfs.namenode.http-address.ns1</name>

    <value>10.101.89.68:50070</value>

   </property>

   <property>

        <name>dfs.namenode.secondary.http-address.ns1</name>

        <value>10.101.89.68:50090</value>

    </property>

    <property>

        <name>dfs.namenode.rpc-address.ns2</name>

        <value>10.101.89.69:9001</value>

    </property>

   <property>

    <name>dfs.namenode.http-address.ns2</name>

    <value>10.101.89.69:50070</value>

   </property>

   <property>

        <name>dfs.namenode.secondary.http-address.ns2</name>

        <value>10.101.89.69:50090</value>

    </property>

 </configuration>



Please help me to fix this error. 





Thanks,

Manickam P





 		 	   		  

RE: Error while configuring HDFS fedration

Posted by Manickam P <ma...@outlook.com>.
Hi,

I followed your steps. That bind error got resolved but still i'm getting the second exception. I've given the complete stack below. 

2013-09-23 10:26:01,887 INFO org.mortbay.log: Stopped SelectChannelConnector@lab2-hadoop2-vm1.eng.com:50070
2013-09-23 10:26:01,988 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-09-23 10:26:01,989 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-09-23 10:26:01,990 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-09-23 10:26:01,991 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:200)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:777)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:418)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:466)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:659)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1221)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1287)
2013-09-23 10:26:02,001 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-09-23 10:26:02,018 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 

Thanks,
Manickam P

From: eladi@mellanox.com
To: user@hadoop.apache.org
Subject: RE: Error while configuring HDFS fedration
Date: Mon, 23 Sep 2013 14:05:47 +0000









Ports in use may result from actual processes using them, or just ghost processes. The second error may be caused by inconsistent permissions on different nodes,
 and/or a format is needed on DFS.
 
I suggest the following:
 
1.      
sbin/stop-dfs.sh && sbin/stop-yarn.sh
2.      
sudo killall java
(on all nodes)
3.      
sudo chmod –R 755 /home/lab/hadoop-2.1.0-beta/tmp/dfs
(on all nodes)
4.      
sudo rm –rf /home/lab/hadoop-2.1.0-beta/tmp/dfs/*
(on all nodes)
5.      
bin/hdfs namenode –format –force

6.      
sbin/start-dfs.sh && sbin/start-yarn.sh
 
Then see if you get that error again.
 


From: Manickam P [mailto:manickam.p@outlook.com]
Sent: Monday, September 23, 2013 4:44 PM
To: user@hadoop.apache.org
Subject: Error while configuring HDFS federation

Guys,

I'm trying to configure HDFS federation with the 2.1.0-beta version. I have 3 machines; I want two name nodes and one data node.

I have set up the other things, like passwordless SSH and host entries, properly. When I start the cluster I get the errors below.

On node one I get this error:
java.net.BindException: Port in use: lab-hadoop.eng.com:50070

On the other node I get this error:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

My core-site.xml has the below:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://10.101.89.68:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/lab/hadoop-2.1.0-beta/tmp</value>
  </property>
</configuration>

My hdfs-site.xml has the below:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.federation.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>10.101.89.68:9001</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1</name>
    <value>10.101.89.68:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address.ns1</name>
    <value>10.101.89.68:50090</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>10.101.89.69:9001</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns2</name>
    <value>10.101.89.69:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address.ns2</name>
    <value>10.101.89.69:50090</value>
  </property>
</configuration>

Please help me to fix this error.

Thanks,
Manickam P

RE: Error while configuring HDFS federation

Posted by Manickam P <ma...@outlook.com>.
Hi,

Thanks for your inputs. I fixed the issue.

Thanks,
Manickam P

From: eladi@mellanox.com
To: user@hadoop.apache.org
Subject: RE: Error while configuring HDFS federation
Date: Mon, 23 Sep 2013 14:05:47 +0000

Ports in use may result from actual processes using them, or just ghost processes. The second error may be caused by inconsistent permissions on different nodes, and/or a format is needed on DFS.

I suggest the following:

1.      sbin/stop-dfs.sh && sbin/stop-yarn.sh
2.      sudo killall java (on all nodes)
3.      sudo chmod -R 755 /home/lab/hadoop-2.1.0-beta/tmp/dfs (on all nodes)
4.      sudo rm -rf /home/lab/hadoop-2.1.0-beta/tmp/dfs/* (on all nodes)
5.      bin/hdfs namenode -format -force
6.      sbin/start-dfs.sh && sbin/start-yarn.sh

Then see if you get that error again.
 



RE: Error while configuring HDFS federation

Posted by Elad Itzhakian <el...@mellanox.com>.
Ports in use may result from actual processes using them, or just ghost processes. The second error may be caused by inconsistent permissions on different nodes, and/or a format is needed on DFS.

I suggest the following:


1.       sbin/stop-dfs.sh && sbin/stop-yarn.sh

2.       sudo killall java (on all nodes)

3.       sudo chmod -R 755 /home/lab/hadoop-2.1.0-beta/tmp/dfs (on all nodes)

4.       sudo rm -rf /home/lab/hadoop-2.1.0-beta/tmp/dfs/* (on all nodes)

5.       bin/hdfs namenode -format -force

6.       sbin/start-dfs.sh && sbin/start-yarn.sh

Then see if you get that error again.
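Steps 2-4 have to be repeated on every node; here is a minimal sketch of doing that in one pass (my addition, not from the original reply — the host names are hypothetical placeholders, and `DRY_RUN=1` only prints each node's commands instead of running them over ssh):

```shell
#!/bin/sh
# Hypothetical node list; replace with your actual hosts.
HOSTS="nn1 nn2 dn1"
# The per-node cleanup from steps 2-4 above.
CLEANUP="killall java; chmod -R 755 /home/lab/hadoop-2.1.0-beta/tmp/dfs; rm -rf /home/lab/hadoop-2.1.0-beta/tmp/dfs/*"
DRY_RUN=1   # set to 0 to really execute over ssh

for h in $HOSTS; do
    if [ "$DRY_RUN" = "1" ]; then
        echo "$h: $CLEANUP"            # show what would run on this node
    else
        ssh "$h" "sudo sh -c '$CLEANUP'"
    fi
done
```

This relies on the passwordless SSH already set up between the nodes; keep `DRY_RUN=1` for a first pass to review the commands before destroying any data.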

From: Manickam P [mailto:manickam.p@outlook.com]
Sent: Monday, September 23, 2013 4:44 PM
To: user@hadoop.apache.org
Subject: Error while configuring HDFS federation

Guys,

I'm trying to configure HDFS federation with the 2.1.0-beta version. I have 3 machines; I want two name nodes and one data node.

I have set up the other things, like passwordless SSH and host entries, properly. When I start the cluster I get the errors below.

On node one I get this error:
java.net.BindException: Port in use: lab-hadoop.eng.com:50070

On the other node I get this error:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/lab/hadoop-2.1.0-beta/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

My core-site.xml has the below:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://10.101.89.68:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/lab/hadoop-2.1.0-beta/tmp</value>
  </property>
</configuration>

My hdfs-site xml has the below.
<configuration>
   <property>
     <name>dfs.replication</name>
     <value>2</value>
   </property>
   <property>
     <name>dfs.permissions</name>
     <value>false</value>
   </property>
   <property>
        <name>dfs.federation.nameservices</name>
        <value>ns1,ns2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1</name>
        <value>10.101.89.68:9001</value>
    </property>
   <property>
    <name>dfs.namenode.http-address.ns1</name>
    <value>10.101.89.68:50070</value>
   </property>
   <property>
        <name>dfs.namenode.secondary.http-address.ns1</name>
        <value>10.101.89.68:50090</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns2</name>
        <value>10.101.89.69:9001</value>
    </property>
   <property>
    <name>dfs.namenode.http-address.ns2</name>
    <value>10.101.89.69:50070</value>
   </property>
   <property>
        <name>dfs.namenode.secondary.http-address.ns2</name>
        <value>10.101.89.69:50090</value>
    </property>
 </configuration>

Please help me to fix this error.


Thanks,
Manickam P
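One more thing worth checking with a federated setup like the one above (my addition, based on the Hadoop HDFS federation documentation rather than this thread): when formatting the two NameNodes, both must be given the same cluster ID via `-clusterId`, otherwise they come up as two unrelated clusters and storage directories end up inconsistent. The snippet below only prints the two commands; `CID-mycluster` is a placeholder ID:

```shell
# Print the format commands for a two-NameNode federation; run each on its own node.
cat <<'EOF'
# on 10.101.89.68 (ns1):
bin/hdfs namenode -format -clusterId CID-mycluster
# on 10.101.89.69 (ns2):
bin/hdfs namenode -format -clusterId CID-mycluster
EOF
```

If `-clusterId` is omitted on the first NameNode, Hadoop generates one; that generated ID must then be reused when formatting the second NameNode.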

