Posted to common-user@hadoop.apache.org by rk...@charter.net on 2013/04/29 23:15:53 UTC

Incompatible clusterIDs

I am trying to start up a cluster and in the datanode log on the 
NameNode server I get the error:

2013-04-29 15:50:20,988 INFO 
org.apache.hadoop.hdfs.server.common.Storage: Lock on 
/data/hadoop/dfs/data/in_use.lock acquired by nodename 1406@devUbuntu05
2013-04-29 15:50:20,990 FATAL 
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed 
for block pool Block pool BP-1306349046-172.16.26.68-1367256199559 
(storage id DS-403514403-172.16.26.68-50010-1366406077018) service to 
devUbuntu05/172.16.26.68:9000
java.io.IOException: Incompatible clusterIDs in /data/hadoop/dfs/data: 
namenode clusterID = CID-23b9f9c7-2c25-411f-8bd2-4d5c9d7c25a1; datanode 
clusterID = CID-e3f6b811-c1b4-4778-a31e-14dea8b2cca8
         at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
         at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
         at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)

How do I get around this error? What does the error mean?

Thank you.

Kevin
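
One quick way to confirm a mismatch like this is to compare the VERSION files
on the two nodes. The paths below are assumptions taken from the log above and
from common dfs.name.dir/dfs.data.dir settings, so substitute your configured
values:

# On the NameNode (assumes dfs.name.dir = /data/hadoop/dfs/name)
grep clusterID /data/hadoop/dfs/name/current/VERSION
# On the DataNode (path taken from the log above)
grep clusterID /data/hadoop/dfs/data/current/VERSION

If the two clusterID values differ, the datanode refuses to register with the
namenode, which is the IOException shown above.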

Re: Incompatible clusterIDs

Posted by Kevin Burton <rk...@charter.net>.
"It" is '/'?

On Apr 29, 2013, at 5:09 PM, Mohammad Tariq <do...@gmail.com> wrote:

> make it 755.
> 
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
> 
> 
> On Tue, Apr 30, 2013 at 3:30 AM, Kevin Burton <rk...@charter.net> wrote:
>> Thank you, the HDFS system seems to be up. Now I am having a problem getting the JobTracker and TaskTracker up. According to the logs on the JobTracker, mapred doesn't have write permission to /. I am not clear on what the permissions should be.
>> 
>> Anyway, thank you.
>> 
>> On Apr 29, 2013, at 4:30 PM, Mohammad Tariq <do...@gmail.com> wrote:
>> 
>>> Hello Kevin,
>>> 
>>> Have you reformatted the NN (unsuccessfully)? Was your NN serving some other cluster earlier, or were your DNs part of some other cluster? Datanodes bind themselves to the namenode through the namespaceID, and in your case the IDs of the DNs and NN seem to be different. As a workaround you could do this:
>>> 
>>> 1- Stop all the daemons.
>>> 2- Go to the directory which you have specified as the value of "dfs.name.dir" property in your hdfs-site.xml file.
>>> 3- You'll find a directory called "current" inside this directory where a file named "VERSION" will be present. Open this file and copy the value of "namespaceID" from here.
>>> 4- Now go to the directory which you have specified as the value of "dfs.data.dir" property in your hdfs-site.xml file.
>>> 5- Move inside the "current" directory and open the "VERSION" file here as well. Now replace the value of "namespaceID" present here with the one you had copied earlier.
>>> 6- Restart all the daemons.
>>> 
>>> Note: If you have not created dfs.name.dir and dfs.data.dir separately, you could find all this inside your temp directory.
>>> 
>>> HTH
>>> 
>>> Warm Regards,
>>> Tariq
>>> https://mtariq.jux.com/
>>> cloudfront.blogspot.com
>>> 
>>> 
>>> On Tue, Apr 30, 2013 at 2:45 AM,  <rk...@charter.net> wrote:
>>>> I am trying to start up a cluster and in the datanode log on the NameNode server I get the error:
>>>> 
>>>> 2013-04-29 15:50:20,988 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data/hadoop/dfs/data/in_use.lock acquired by nodename 1406@devUbuntu05
>>>> 2013-04-29 15:50:20,990 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-1306349046-172.16.26.68-1367256199559 (storage id DS-403514403-172.16.26.68-50010-1366406077018) service to devUbuntu05/172.16.26.68:9000
>>>> java.io.IOException: Incompatible clusterIDs in /data/hadoop/dfs/data: namenode clusterID = CID-23b9f9c7-2c25-411f-8bd2-4d5c9d7c25a1; datanode clusterID = CID-e3f6b811-c1b4-4778-a31e-14dea8b2cca8
>>>>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
>>>>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
>>>>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
>>>> 
>>>> How do I get around this error? What does the error mean?
>>>> 
>>>> Thank you.
>>>> 
>>>> Kevin
> 

Re: Incompatible clusterIDs

Posted by Mohammad Tariq <do...@gmail.com>.
make it 755.

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com


On Tue, Apr 30, 2013 at 3:30 AM, Kevin Burton <rk...@charter.net> wrote:

> Thank you, the HDFS system seems to be up. Now I am having a problem
> getting the JobTracker and TaskTracker up. According to the logs on the
> JobTracker, mapred doesn't have write permission to /. I am not clear on
> what the permissions should be.
>
> Anyway, thank you.
>
> On Apr 29, 2013, at 4:30 PM, Mohammad Tariq <do...@gmail.com> wrote:
>
> Hello Kevin,
>
>           Have you reformatted the NN (unsuccessfully)? Was your NN serving
> some other cluster earlier, or were your DNs part of some other cluster?
> Datanodes bind themselves to the namenode through the namespaceID, and in
> your case the IDs of the DNs and NN seem to be different. As a workaround
> you could do this:
>
> 1- Stop all the daemons.
> 2- Go to the directory which you have specified as the value of
> "dfs.name.dir" property in your hdfs-site.xml file.
> 3- You'll find a directory called "current" inside this directory where a
> file named "VERSION" will be present. Open this file and copy the value of
> "namespaceID" from here.
> 4- Now go to the directory which you have specified as the value of
> "dfs.data.dir" property in your hdfs-site.xml file.
> 5- Move inside the "current" directory and open the "VERSION" file here as
> well. Now replace the value of "namespaceID" present here with the one you
> had copied earlier.
> 6- Restart all the daemons.
>
> Note: If you have not created dfs.name.dir and dfs.data.dir separately,
> you could find all this inside your temp directory.
>
> HTH
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Tue, Apr 30, 2013 at 2:45 AM, <rk...@charter.net> wrote:
>
>> I am trying to start up a cluster and in the datanode log on the NameNode
>> server I get the error:
>>
>> 2013-04-29 15:50:20,988 INFO
>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> /data/hadoop/dfs/data/in_use.lock acquired by nodename 1406@devUbuntu05
>> 2013-04-29 15:50:20,990 FATAL
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
>> block pool Block pool BP-1306349046-172.16.26.68-1367256199559 (storage id
>> DS-403514403-172.16.26.68-50010-1366406077018) service to devUbuntu05/
>> 172.16.26.68:9000
>> java.io.IOException: Incompatible clusterIDs in /data/hadoop/dfs/data:
>> namenode clusterID = CID-23b9f9c7-2c25-411f-8bd2-4d5c9d7c25a1; datanode
>> clusterID = CID-e3f6b811-c1b4-4778-a31e-14dea8b2cca8
>>         at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
>>         at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
>>         at
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
>>
>> How do I get around this error? What does the error mean?
>>
>> Thank you.
>>
>> Kevin
>>
>
>
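
For the permission problem quoted above, "it" is the HDFS path the JobTracker
needs to write to. A minimal sketch of the suggested fix, assuming a mapred
user and a /mapred system directory (both names are assumptions; check your
mapred.system.dir setting and run as the HDFS superuser):

# Literal reading of the advice: open up the HDFS root.
hadoop fs -chmod 755 /
# A narrower alternative: give mapred a directory of its own (assumption).
hadoop fs -mkdir /mapred
hadoop fs -chown mapred /mapred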

Re: Incompatible clusterIDs

Posted by Kevin Burton <rk...@charter.net>.
Thank you, the HDFS system seems to be up. Now I am having a problem getting the JobTracker and TaskTracker up. According to the logs on the JobTracker, mapred doesn't have write permission to /. I am not clear on what the permissions should be.

Anyway, thank you.

On Apr 29, 2013, at 4:30 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Kevin,
> 
>           Have you reformatted the NN (unsuccessfully)? Was your NN serving some other cluster earlier, or were your DNs part of some other cluster? Datanodes bind themselves to the namenode through the namespaceID, and in your case the IDs of the DNs and NN seem to be different. As a workaround you could do this:
> 
> 1- Stop all the daemons.
> 2- Go to the directory which you have specified as the value of "dfs.name.dir" property in your hdfs-site.xml file.
> 3- You'll find a directory called "current" inside this directory where a file named "VERSION" will be present. Open this file and copy the value of "namespaceID" from here.
> 4- Now go to the directory which you have specified as the value of "dfs.data.dir" property in your hdfs-site.xml file.
> 5- Move inside the "current" directory and open the "VERSION" file here as well. Now replace the value of "namespaceID" present here with the one you had copied earlier.
> 6- Restart all the daemons.
> 
> Note: If you have not created dfs.name.dir and dfs.data.dir separately, you could find all this inside your temp directory.
> 
> HTH
> 
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
> 
> 
> On Tue, Apr 30, 2013 at 2:45 AM, <rk...@charter.net> wrote:
>> I am trying to start up a cluster and in the datanode log on the NameNode server I get the error:
>> 
>> 2013-04-29 15:50:20,988 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data/hadoop/dfs/data/in_use.lock acquired by nodename 1406@devUbuntu05
>> 2013-04-29 15:50:20,990 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-1306349046-172.16.26.68-1367256199559 (storage id DS-403514403-172.16.26.68-50010-1366406077018) service to devUbuntu05/172.16.26.68:9000
>> java.io.IOException: Incompatible clusterIDs in /data/hadoop/dfs/data: namenode clusterID = CID-23b9f9c7-2c25-411f-8bd2-4d5c9d7c25a1; datanode clusterID = CID-e3f6b811-c1b4-4778-a31e-14dea8b2cca8
>>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
>>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
>>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
>> 
>> How do I get around this error? What does the error mean?
>> 
>> Thank you.
>> 
>> Kevin
> 

Re: Incompatible clusterIDs

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Kevin,

          Have you reformatted the NN (unsuccessfully)? Was your NN serving
some other cluster earlier, or were your DNs part of some other cluster?
Datanodes bind themselves to the namenode through the namespaceID, and in
your case the IDs of the DNs and NN seem to be different. As a workaround
you could do this:

1- Stop all the daemons.
2- Go to the directory which you have specified as the value of
"dfs.name.dir" property in your hdfs-site.xml file.
3- You'll find a directory called "current" inside this directory where a
file named "VERSION" will be present. Open this file and copy the value of
"namespaceID" from here.
4- Now go to the directory which you have specified as the value of
"dfs.data.dir" property in your hdfs-site.xml file.
5- Move inside the "current" directory and open the "VERSION" file here as
well. Now replace the value of "namespaceID" present here with the one you
had copied earlier.
6- Restart all the daemons.

Note: If you have not created dfs.name.dir and dfs.data.dir separately,
you could find all this inside your temp directory. A shell sketch of these
steps follows below.
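
A sketch of steps 1-6, assuming dfs.name.dir = /data/hadoop/dfs/name and
dfs.data.dir = /data/hadoop/dfs/data (assumptions; use the values from your
own hdfs-site.xml). The exception above reports a clusterID mismatch, so on a
2.x storage layout the field to copy is clusterID; substitute namespaceID if
your layout matches the steps as written:

# 1. Stop the daemons.
stop-dfs.sh
# 2-3. Read the ID recorded by the namenode.
ID=$(grep clusterID /data/hadoop/dfs/name/current/VERSION | cut -d= -f2)
# 4-5. Write that value into the datanode's VERSION file.
sed -i "s/^clusterID=.*/clusterID=$ID/" /data/hadoop/dfs/data/current/VERSION
# 6. Restart the daemons.
start-dfs.sh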

HTH

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com


On Tue, Apr 30, 2013 at 2:45 AM, <rk...@charter.net> wrote:

> I am trying to start up a cluster and in the datanode log on the NameNode
> server I get the error:
>
> 2013-04-29 15:50:20,988 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data/hadoop/dfs/data/in_use.lock acquired by nodename
> 1406@devUbuntu05
> 2013-04-29 15:50:20,990 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool BP-1306349046-172.16.26.68-1367256199559 (storage id
> DS-403514403-172.16.26.68-50010-1366406077018) service to devUbuntu05/
> 172.16.26.68:9000
> java.io.IOException: Incompatible clusterIDs in /data/hadoop/dfs/data:
> namenode clusterID = CID-23b9f9c7-2c25-411f-8bd2-4d5c9d7c25a1; datanode
> clusterID = CID-e3f6b811-c1b4-4778-a31e-14dea8b2cca8
>         at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
>
> How do I get around this error? What does the error mean?
>
> Thank you.
>
> Kevin
>
